| Title | Content | Category | Role | Whitepaper |
|---|---|---|---|---|
10_Considerations_for_a_Cloud_Procurement
|
Archived10 Considerations for a Cloud Procurement March 2017 This version has been archived For the most recent version of this paper see: https://docsawsamazoncom/whitepapers/latest/considerationsfor cloudprocurement/considerationsforcloudprocurementhtmlArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 1 Contents Purpose 2 Ten Procurement Considerations 2 1 Understand Why Cloud Computing is Different 2 2 Plan Early To Extract the Full Benefit of the Cloud 3 3 Avoid Overly Prescriptive Requirements 3 4 Separate Cloud Infrastructure (Unmanaged Services) from Managed Services 4 5 Incorporate a Utility Pricing Model 4 6 Leverage ThirdParty Accreditations for Security Privacy and Auditing 5 7 Understand That Security is a Shared Responsibility 6 8 Design and Implement Cloud Data Governance 6 9 Specify Commercial Item Terms 6 10 Define Cloud Evaluation Criteria 7 Conclusion 7 ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 2 Purpose Amazon Web Services (AWS) offers scalable costefficient cloud services that public sector customers can use to meet mandates reduce costs drive efficiencies and accelerate innovation The procurement of an infrastructure as a service (IaaS) cloud is unlike traditional technology purchasing Traditional public sector procurement and contracting approaches that are designed to purchase products such as hardware and related software can be inconsistent with cloud services (like IaaS) A failure to modernize contracting and procurement approaches can reduce the pool of competitors and inhibit customer ability to adopt and leverage cloud technology Ten Procurement Considerations Cloud procurement presents an opportunity to reevaluate existing procurement strategies so you can create a flexible acquisition process that enables your public sector organization to extract the full benefits of the cloud The following procurement considerations are key components that can form the basis of a broader public sector cloud procurement strategy 1 Understand Why Cloud Computing is Different Hyperscale Cloud Service Providers (CSPs) offer commercial cloud services at massive scale and in the same way to all customers Customers tap into standardized commercial services on demand They pay only for what they use The standardized commercial delivery model of cloud computing is fundamentally different from the traditional model for onpremises IT purchases (which has a high degree of customization and might not be a commercial item) Understanding this difference can help you structure a more effective procurement model IaaS cloud services eliminate the customer ’s need to own 
physical assets There is an ongoing shift away from physical asset ownership toward ondemand utilitystyle infrastructure services Public sector entities should understand how standardized utilitystyle services are budgeted for procured and used and then build a cloud procurement strategy that is ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 3 intentionally different from traditional IT —designed to harness the benefits of the cloud delivery model 2 Plan Early To Extract the Full Benefit of the Cloud A key element of a successful cloud strategy is the involvement of all key stakeholders (procurement legal budget/finance security IT and business leadership) at an early stage This involvement ensures that the stakeholders can understand how cloud adoption will influence existing practices It provides an opportunity to reset expectations for budgeting for IT risk management security controls and compliance Promoting a culture of innovation and educating staff on the benefits of the cloud and how to use cloud technology helps those with institutional knowledge understand the cloud It also helps to accelerate buyin during the cloud adoption journey 3 Avoid Overly Prescriptive Requirements Public sector stakeholders involved in cloud procurements should ask the right questions in order to solicit the best solutions I n a cloud model physical assets are not purchased so traditional data center procurement requirements are no longer relevant Continuing to recycle data center questions will inevitably lead to data center solutions which might result in CSPs being unable to bid or worse lead to poorly designed contracts that hinder public sector customers from leveraging the capabilities and benefits of the cloud Successful cloud procurement strategies focus on applicationlevel performancebased requirements that prioritize workloads and outcomes rather than dictating the underlying methods infrastructure or hardware used to achieve performance requirements Customers can leverage a CSP’s established best practices for data center operations because the CSP has the depth of expertise and experience in offering secure hyperscale Iaa S cloud services It is not necessary to dictate customized specifications for equipment operations and procedures (eg racks server types and distances between data centers) By leveraging commercial cloud industry standards and best practices (including industryrecognized accreditations and certifications) customers avoid placing unnecessary restrictions on the services they can use and ensure access to innovative and costeffective cloud solutions ArchivedAmazon Web Services – 10 Considerations for a Cloud Procurement Page 4 4 Separate Cloud Infrastructure (Unmanaged Services) from Managed Services There is a difference between procuring cloud infrastructure (IaaS) and procuring labor to utilize cloud infrastructure or managed services such as Software as a Service (SaaS) cloud Successful cloud procurements separate cloud infrastructure from “hands on keyboard” services and labor or other managed services purchases Cloud infrastructure and services such as labor for planning developing executing and maintaining cloud migrations and workloads can be provided by CSP partners (or other third parties) as one comprehensive solution However cloud infrastructure should be regarded as a separate “service” with distinct roles and responsibilities service level agreements (SLAs) and terms and conditions 5 Incorporate a Utility Pricing Model To realize the 
benefits of cloud computing, you need to think beyond the commonly accepted approach of fixed-price contracting. To contract for the cloud in a manner that accounts for fluctuating demand, you need a contract that lets you pay for services as they are consumed. CSP pricing should be:
• Offered using a pay-as-you-go utility model, where at the end of each month customers simply pay for their usage
• Allowed the flexibility to fluctuate based on market pricing, so that customers can take advantage of the dynamic and competitive nature of cloud pricing
Allowing CSPs to offer pay-as-you-go or flexible pay-per-use pricing gives customers the opportunity to evaluate what the cost of their usage will be, instead of having to guess their future needs and over-procure. CSPs should provide publicly available, up-to-date pricing and tools that allow customers to evaluate their pricing, such as the AWS Simple Monthly Calculator: http://aws.amazon.com/calculator. Additionally, CSPs should provide customers with the tools to generate detailed and customizable billing reports to meet business and compliance needs. CSPs should also provide features that enable customers to analyze cloud usage and spending, so that customers can build in alerts that notify them when they approach their usage thresholds and projected/budgeted spend. Such alerts enable organizations to decide whether to reduce usage to avoid overages or to prepare additional funding to cover costs that exceed the projected budget. (A minimal sketch of such a spend alert appears below.)
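The usage and spend alerts described above are something customers can set up directly against billing metrics. The sketch below is not part of the original paper; it shows one possible approach using Python and boto3: a CloudWatch alarm on the EstimatedCharges metric that notifies an SNS topic. The account ID, topic name, and threshold are placeholders, and it assumes billing alerts are enabled for the account (billing metrics are published in us-east-1).

```python
"""
Minimal sketch: a CloudWatch billing alarm that notifies an SNS topic when
estimated month-to-date charges cross a budgeted threshold. Assumes billing
alerts are enabled for the account and the SNS topic already exists.
"""
import boto3

BUDGET_THRESHOLD_USD = 5000.0
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:budget-alerts"  # placeholder

# Billing metrics are published only in the us-east-1 Region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-budget",
    AlarmDescription="Estimated charges exceeded the projected monthly budget",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=6 * 60 * 60,          # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=BUDGET_THRESHOLD_USD,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],  # e.g., notify finance/procurement staff
)
print("Billing alarm created for threshold $%.2f" % BUDGET_THRESHOLD_USD)
```

AWS Budgets offers a comparable managed alternative; the CloudWatch route is shown here only because it is a compact illustration of the alerting capability the paper asks procurement teams to look for.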
6. Leverage Third-Party Accreditations for Security, Privacy, and Auditing
Leveraging industry best practices for security, privacy, and auditing provides assurance that effective physical and logical security controls are in place. This prevents overly burdensome processes and duplicative approval workflows that are often unjustified by real risk and compliance needs. There are many security frameworks, best practices, audit standards, and standardized controls that cloud solicitations can cite, such as the following: Federal Risk and Authorization Management Program (FedRAMP); Service Organization Controls (SOC) 1/American Institute of Certified Public Accountants (AICPA) AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] No. 16)/International Standard on Assurance Engagements (ISAE) 3402 (formerly Statement on Auditing Standards [SAS] No. 70); SOC 2; SOC 3; Payment Card Industry Data Security Standard (PCI DSS); International Organization for Standardization (ISO) 27001; ISO 27017; ISO 27018; ISO 9001; Department of Defense (DoD) Security Requirements Guide (SRG); Federal Information Security Management Act (FISMA); International Traffic in Arms Regulations (ITAR); Family Educational Rights and Privacy Act (FERPA); Information Security Registered Assessors Program (IRAP) (Australia); IT-Grundschutz (Germany); and Federal Information Processing Standard (FIPS) 140-2.

7. Understand That Security Is a Shared Responsibility
Because cloud computing customers build systems on a cloud infrastructure, security and compliance responsibilities are shared between service providers and cloud consumers. In an IaaS model, customers control both how they architect and secure their applications and the data they put on the infrastructure. CSPs are responsible for providing services through a highly secure and controlled infrastructure and for providing a wide array of additional security features. The respective responsibilities of the CSP and the customer depend on the cloud deployment model that is used: IaaS, SaaS, or Platform as a Service (PaaS). Customers should clearly understand their security responsibilities in each cloud model.

8. Design and Implement Cloud Data Governance
Organizations should retain full control and ownership over their data and have the ability to choose the geographic locations in which to store their data, with CSP identity and access controls available to restrict access to customer infrastructure and data. Customers should clearly understand their responsibilities regarding how they store, manage, protect, and encrypt their data. (A minimal sketch of such customer-side controls appears at the end of this entry.) A major benefit of cloud computing compared to traditional IT infrastructure is that customers have the flexibility to avoid traditional vendor lock-in. Cloud customers are not buying physical assets, and CSPs provide the ability to move up and down the IT stack as needed, with greater portability and interoperability than the old IT paradigm. Public sector entities should require that CSPs: 1) provide access to cloud portability tools and services that enable customers to move data on and off the CSP's cloud infrastructure as needed, and 2) have no required minimum commitments or long-term contracts.

9. Specify Commercial Item Terms
Cloud computing should be purchased as a commercial item, and organizations should consider which terms and conditions are appropriate (and not appropriate) in this context. A commercial item is recognized as an item of a type that has been sold, leased, licensed, or otherwise offered for sale to the general public and that generally performs the same for all users and customers, both commercial and government. IaaS CSP terms and conditions are designed to reflect how a cloud services model functions (i.e., physical assets are not being purchased, and CSPs operate at massive scale to offer standardized commercial services). It is critical that a CSP's terms and conditions are incorporated and utilized to the fullest extent.

10. Define Cloud Evaluation Criteria
Cloud evaluation criteria should focus on system performance requirements. Select the appropriate CSP from an established resource pool to take advantage of the cloud's elasticity, cost efficiencies, and rapid scalability. This approach ensures that you get the best cloud services to meet your needs, the best value in those services, and the ability to take advantage of market-driven innovation. The National Institute of Standards and Technology (NIST) definitions of cloud benefits are an excellent starting point for determining cloud evaluation criteria: http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-146.pdf

Conclusion
Thousands of public sector customers use AWS to quickly launch services using an efficient, cloud-centric procurement process. Keeping these ten considerations in mind will help organizations deliver even greater citizen-, student-, and mission-focused outcomes.
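Sections 7 and 8 above place encryption, access control, and data-location choices on the customer's side of the shared responsibility model. As an illustration only (not part of the original paper), the following boto3 sketch pins data to a chosen Region, enables default encryption with a customer-managed KMS key, and blocks public access; the bucket name, Region, and key alias are hypothetical placeholders.

```python
"""
Minimal sketch: customer-side data governance controls. Creates a bucket in a
chosen geographic Region, turns on default encryption with a customer-managed
KMS key, and blocks public access. Names and key alias are placeholders
(bucket names must be globally unique; the KMS key must already exist).
"""
import boto3

REGION = "eu-west-2"                    # chosen data-residency location
BUCKET = "example-agency-records"       # hypothetical bucket name
KMS_KEY_ID = "alias/agency-data-key"    # hypothetical customer-managed key

s3 = boto3.client("s3", region_name=REGION)

# Pin the data to a specific geographic location.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Encrypt all new objects by default with the customer-managed key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ID,
                }
            }
        ]
    },
)

# Keep the data private regardless of future bucket policies or ACLs.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```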
|
General
|
consultant
|
Best Practices
|
5_Ways_the_Cloud_Can_Drive_Economic_Development
|
Archived5 Ways the Cloud Can Drive Economic Development August 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents Amazon Web Services’s (“AWS”) current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this docu ment and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances fro m AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Sharing More Data and Information 1 Increasing Productivity 3 Preparing Citizens for the Workforce & Building Skills 5 Driving Local Development 6 Allocating Resources More Effectively 8 Key Takeaway 9 Contributors 9 Archived Abstract Government agencies often look to promote new technology for cost savings and efficiency but it does not stop there The second and third tier effects of technology can be long lasting for citizens businesses and economies When public institutions adop t the cloud they experience an internal transformation Inside an organization cloud usage drives greater accessibility of data and information sharing increases worker productivity and improves resource allocation The external benefit of the cloud is recognized through a government ’s ability to put reclaim ed time and resource s toward serving citizens This includes provision ing public services such as occupational skills training quicker and more effective service delivery a pathway to a more productive workforce and ultimately a boost to local development This whitepaper examines the enterprise level benefits of the cloud as well as the residual impact on economic development The US Economic Development Administration defines economic development as “[creating] the conditions for economic growth and improved quality of life by expanding the capacity of individuals firms and communities to maximize the use of their talents and skills to support innovation lower transaction costs and responsibly produce and trade valuable goods and services” We explore this concept through the lens of the cloud ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 1 Introduction Technology empowers governments to improve how and when they reach citizens It improves the quality and accessibility of public service s ultimately creat ing a more productive environment where citizens can thrive Leveraging the cloud is one way governments can accelerate this shift with benefits occurring first inside the institution Sharing More Data and Information One enterprise level benefit of the cloud is its emphasis on data and information sharing The cloud ’s data sharing tools encourage staff to store information in a central location adding visib ility inside the workplace A more collaborative environment can lead to increased communication and idea sharing among agencies and teams that might 
otherwise op erate in siloes This is true for federal regional and local governments as well as for businesses and entrepreneurs The result is n ear real time access to critical information across an array of industries Examples include data on job creation by location and level retention statistics payroll by industry classification – or North American Industry Classification System code s in the US – in addition to information on health services trade and commerce weather patterns and more Data and IoT solutions can help address development challenges Nexleaf Analytics is one organization harnessing the power of data to tackle global development issues From climate change to public health and food insecurity its mission is to preserve hu man life and protect the planet through sensor technologies and data analytics and by advocating for data driven solutions The organizatio n developed Internet ofThings (IoT) platforms ColdTrace and StoveTrace to help governments ensure the potency of life saving vaccines at the ‘last mile’ and to facilitate the adoption of cleaner cookstoves respectively ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 2 “Data is at the core of creating sustainable change By getting meaningful real time data flowing from the bottom up people have the tools and insights they need to take responsive actions” according to Mar tin Lukac Nexleaf’s CTO and cofounder Nexleaf’s solution powered by A mazon Web Services Inc (AWS) aggregates crucial data that can lead to responsive interventions By collaborating with governments and NGOs in 10 countries across Asia and Africa the organization ensures its solutions adhere to local country laws and preferences and identifies the right tools and analytics to benefit constituents Engaging people on the ground empowers a data driven approach to improving the effic iency of their systems advocating for better resources and tap ping into potential avenues for economic and social development Data drives c ommunity collaboration and innovation The cloud encourages partnerships and collaboration within communities It can lead local governments to facilitate relationships with small and medium sized enterprises (SMEs) which according to an Organisation for Economic Co operati on and Development (OECD ) report “account for over 95% of firms and 60% 70% of employment and generate a large share of new jobs in OECD economies” In Boston Massachusetts the Mayor's Office of New Urban Mechanics took an innovative approach to proble msolving through crowdsourcing Teaming with a technology firm the government sought creative ideas from across Boston to help improve Street Bump its app to collect roadside maintenance and plan long term investments for the city The use of big data and community engagement helped the agency find a creative solution to a public issue Street Bump’s website now reports that te ns of thousand s of bumps have been detected through the app The public private partnership brought automation and speed to an otherwise manual city improvement process and also gave local startups a platform to voice and implement innovative ideas that otherwise may n ot have been discovered Newport Wales is another example of a city optimizing public data in this case to assess environmental conditions It began using IoT sensors to collect ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 3 data such as pollution levels augmenting earlier process es of collecting air 
samples in glass vials across 85 different location s Together with Pinacl Solutions and Davra Networks Newport is working toward a solution for improving air quality flood control and waste management gleaning timely insights from sensor data via solutions hosted on AWS The effort aimed to boost citizens’ safety and quality of life as part of a vision to improve Newport’s economy The Humanitarian OpenStreetMap Team (HOT) is yet another global organization applying the pri nciples of open source and open data shar ing to humanitarian response and economic development Known for its ability to rapidly coordinate volunteers to map sites impacted by disaster HOT relies on a collaboration with Digi talGlobe Inc for critical satellite imagery data accessible through its Open Data Program and imagery license If not for this partnership HOT would not exist as it is today according to HOT’s Director of Technology Cristiano Giovando Additionally through the AWS Public Datasets Program anyone can analyze data and build comple mentary services using a broad range of compute and data analytics tools The cloud combines fragmented data from a variety of sources improving users’ access and enabling more time for analysis This can facilitate innovation and the possibility of new discover ies Increasing Productivity Consistent r eliability and a lack of physical infrastructure can d rive productivity gains inside and out of a cloud using organization Workforce productivity can improve up to 50% following a large scale AWS migration according to AWS migration experts In addition AWS’s more than 90 solutions offers organizations faster access to services they would otherwise have to build and maintain themselves Government organizations around the world including a road and traffic agency in Belgium and Italy’s public finance regulator have realized increased productivity from the cloud – both for the benefit of their operation s and their citizens ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 4 Productivity gains help institutions better deliver on their mission The Agentschap Wegen & Verkeer (AWV) deploy s new maintenance capabilities up to eight times faster thanks to the automation of services and databases through the AWS Cloud according to Bert Weyne planning & coordination lead at AWV The agency manages 6970 kilometers of roads and 7668 kilometers of cycle lanes in Belgium with its team of 250 road i nspectors having a direct impact on citizen safety In the event of a pothole for example the team uses an app to log information about the issue and prioritize repairs “When we wer e running on in house servers our road inspectors complained about the app’s reliability At times they were unable to access the app and would have to use paper and p en instead It was embarrassing ” says Weyne In addition to bett er performance Weyne’s team has used the cloud to reduce costs speed development and cut infrastructure management time He adds “… by using managed services we’ve slashed system admin time by 67 percent which has improved our agility We can now dev elop and test features three times faster” The cloud has also enabled Italy’s auditing and oversight authority for public accounts and bu dgets to operate more effectively as a remote team Prior to working with AWS Corte dei conti (Cdc) felt constrained by physical IT infrastructure “We wanted to change the way our 3000 plus employees worked enabling them to access applications from anywh ere on any device But we 
had to ensure that this flexibility for staff didn’ t jeopardize the safety of data ” said C dc’s IT officer Leandro Gelasi This was attainable through a hybrid architecture migration approach and through collaboration with AWS Advanced Consulting Partner XPeppers Srl “As a result [employees are] much more productive Decisions get made faster and the whole system works better It’s a brilliant result fo r our entire organization” said Gelasi As Gelasi and his team prove their ability to fulfill duties securely from any location it may lend an opportunity to employ more workers in small towns and rural locations ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 5 Preparing Citizens for the Workforce & Building Skills Skilldevelopment and education programs offer meaningful contributions to economic development In line with the United Nation s’ 2030 Sustainable Development Goals which includes training and skill building for youth cl oud technology provisions the scaling of educational content and innovative teaching formats to reach learners wherever they are Quality inclusive and relevant education is a key factor in breaking cycles of poverty and reduc ing gender inequalities worldwide By expanding learning beyond the confines of a physical classroom technology helps increase access to courses and level s the playing field for learners of diverse geographical and socio economic backgrounds For schools and educators the cloud offers not only cost savings and agility but also the opportunity to develop breakthroughs in educational models and student engagement Reaching diverse job seekers where ver they are Digital Divide Data (DDD) is a nonprofit social enterprise that uses AWS to support regional workforce development Its goal is to create sustainable tech jobs for youth through Impact Sourcing a model that provides economically marginalized youth with training and jobs in next generation technologies such as cloud computing machine learning cyber security and data analytics In col laboration with Intel AWS worked with DDD to launch the first ofits kind AWS Cloud Academy in Kenya to train certify and employ underserved youth in cloud computing as a stepping stone to more advanced IT careers The program's first cohort included 30 hi gh school graduates from Kibera Nairobi with the second cohort compris ed of 70% women The social enterprise plan s to train five cohorts annually graduating 150200 clo ud engineers p er year – all of whom have the option to work for DDD as cloud computing engineers or to pursue cloud opportunities in the growing local tech sector In terms of workforce benefits DDD and AWS graduates earn five times more than their peers While i nformal workers in Kenya earn an average of $116 USD ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 6 per month AWS graduates earn an average of $575 USD per month The combination of training and w ork experience propels DDD graduates to earn higher income gain economic security and ultimately create better futures for themselves and their families In the US the Loui siana Department of Public Safety and Corrections manages nine state correctional facilities that house 19000 adult prisoners The state run agency offers educational and vocational programs with the goal of helping inmates earn degrees gain job training secure employment and avoid re incarceration The agency sought to implement a new IT environment that would support a better and more reliable 
online learning solution It also needed effective system security to prevent inmates from accessing the inte rnet amid concerns about victims’ safety and other criminal activity After opting for Amazon WorkSpaces – a managed secure desktop computing service on AWS – the agency along with partner ATLO Software succeeded in launching educational training labs at four Louisiana correctional facilities With the addition of an Amazon Virtual Private Cloud they were operating on a secure network Thanks to onsite labs inmates now have better access to vocational training have the opportunity to earn college credits or degrees and can potentially participate in the labor market Driving Local Development Retaining Local Talent Retaining local talent can be a challenge for cities Moreover a concentration of intellectual capital and innovative businesses and startups can be a strong indicator of economic development Cloud technology can help give new businesses a boost in their forecasting demand generation and innovation when bringing their products or services to market AWS accelerate s this process through AWS Activate a program designed to provide startups with resourc es and credits to get started with the cloud; through access to tools like Amazon LightSail which provides technology like virtual private servers to enterprises of all sizes for the cost of a cup of ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 7 coffee ; and by encouraging public private partnerships and small business linkages namely through the strength of the AWS Partner Network (APN) Additionally AWS CloudStart formed to encourage the growth of SMEs and economic development organizations by providing resources to educate train and help these entities embrace the cost effectiveness of the AWS Cloud “As small businesses leverage a broader portfolio of digital solutions they can see an increase in agility while simultaneously lowering costs and reducing time to innovation” according to Zandile Keebine found er of participating organization GirlCode a nonprofit that aims to empower girls through technology In the US Kansas City Missouri is one example of a city that is successfully using smart technology to attract talent to an emerging business center Along the two mile corridor of the Kansas City Streetcar a $15 million public private partnership supports the deployment of 328 Wi Fi access points and 178 smart streetlights that can detect traffic patterns and open parking spaces It has also funded 25 video kiosks pavement sensors video cameras and other devices all connected by the city’s nearly ubiquitous fiber optic data network The successful use of smart city technology has been a key component in bringi ng people back to Kansas City’s core “Ten years ago we had fewer than 5000 people living downtown” said Bob Bennett Kansans City’s chief innovation officer “We have seen a 520 percent growth in the number of residents in downtown and a 400 percent gr owth in development investment I believe our smart city project has played a prominent role in getting people excited about living here” Entrepreneur ship and p ublicprivate partnerships Cloud technology provides governments with the means to educate and train citizens boosting workforce participation and eligibility Driving local entrepreneurship is an important outgrowth of this investment “A vibrant entrepreneurial sector is essential to small firm development” according to the OECD It adds that regions with “pockets of high 
entrepreneurial activity” and public private partnerships can lead to more job opportunities and innovation ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 8 A municipality in Sweden is feeling the effects of a strategic partnership aimed at helping small bu sinesses adapt and thrive Consultant CAG Malardalen in Västerås Sweden uses the cloud to help constituents make more data driven decisions deploy resources more efficiently and help shape the economic conditions essential for attract ing new economic activity “[We are] striving to bring the region the latest in cloud technology Our ambition is to always deliver the most relevant IT solutions to our customers Through working with AWS CloudStart our customers benefit from the foundational knowledge we have gathered and we are already seeing a lot of new possibilities for us as a service provider across Sweden” says Tomas Täuber CEO of CAG Malardalen Allocating Resources More Effectively Cloud technology allows governments to rethink critical processes It builds new efficiencies across procurement security compliance and data protection Additionally the cost effectiveness of the cloud enables agencies to redirect resources toward advancing their mission freeing up capacity to create more innovative public services Increased access to new and better citizen services ushers in a higher standard of living offering the potential to draw new inhabitants to a city or region The cloud can act as a catalyst for this type of development driving organizations tow ard increased operational efficiencies and enabling a greater focus on the mission In the Middle East the Kingdom of Bahrain underwent a shift in how it procures resources in its plan to digitize its economy Using the cloud to efficiently deliver ser vices to constituents The Kingdom of Bahrain Information & eGovernment Authority (iGA) is accountable for moving all of its government services online It is responsible for information and communications technolog y (ICT) governance and procurement for the entire Bahraini government The iGA launched a cloud first policy to support its economic development plans ArchivedAmazon Web Services Inc – 5 Ways t he Cloud Can Drive Economi c Development Page 9 Bahrain’s adoption of a cloud first policy boosted efficiency across the public sector and trimmed IT e xpenditures by up to 90% in 2017 according to the Economic Development Board annual report “Through adopting a cloud first policy we have helped reduce the government procurement process for new technology from months to less than two weeks” said Mohammed Ali Al Qaed CEO of Bahrain iGA With cloud based technology as the focus for public ICT procurement the Bahraini government can exercise minimal upfront investment by paying only for the services it needs With tools for cost alloc ation and service provisioning the AWS Cloud offers built in resource discipline enabling governments to shift their focus toward advancing development goals Key Takeaway Technology driven innovation is one way public institutions can drive economic development With the right technology governments nonprof its economic development organizations and other entities can improve their internal operations become more productive and ultimately focus more acutely on serving citizens This can create co nditions in which citizens enjoy improved quality of life and where businesses flourish As organizations increasingly embrace cloud based solutions long lasting effects can be realized in 
the form of community-wide collaboration, partnerships with local businesses, and increased innovation. This can help these institutions wield greater influence on economic development.

Contributors
The following individuals and organizations contributed to this document:
• Carina Veksler, Public Sector Solutions, AWS Public Sector
• Randi Larson, Public Sector Content, AWS Public Sector
• John Brennan, International Expansion, AWS Public Sector
• Mike Grella, Economic Development, AWS Public Policy
|
General
|
consultant
|
Best Practices
|
A_Platform_for_Computing_at_the_Mobile_Edge_Joint_Solution_with_HPE_Saguna_and_AWS
|
ArchivedA Platform for Computing at the Mobile Edge: Joint Solution with HPE Saguna and AWS February 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 The Business Case for Multi Access Edge Computing 1 MEC Addresses the Need for Localized Cloud Services 2 MEC Leverages the Capabilities Inherent in Mobile Networks 2 MEC Provides a S tandards Based Solution that Enables an Ecosystem of Edge Applications 2 Mobile Edge Solution Overview 4 Example Reference Architectures for Edge Applications 6 Smart City Surveillance 7 AR/VR Edge Applications 10 Connected Vehicle (V2X) 13 Conclusion 15 Contributors 15 Appendix 15 Infrastructure Layer 16 Application Enablement Layer 22 ArchivedAbstract This whitepaper is written for communication service providers with network infrastructure as well as for application developers and technology suppliers who are exploring applications that can benefit from edge computing In this paper we esta blish the value of a standards based computing platform at the mobile network edge describe use cases that are well suited for this platform and present a reference architecture base d on the solutions offered by AWS Saguna and HPE A subset of use cases are reviewed in detail that illustrat e how the reference architecture can be adapted as a platform to serve use case specific requirementsArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 1 Introduction Imagine a world where cars can alert drivers about dangerous road conditions to help them take action to avoid collision and where devices can help fleets of cars drive autonomously and predict traffic patterns Consider a new Industrial Revolution where Internet of Things (IoT) devices or sensors report data collected in real time from large and small machines allowing for intelligent automation and orchestration in industries such as manufacturing agriculture healthcare and logistics Envision city and public services that provide intelligent parking congestion management pollution detection and mitigation emergency response and security While this is happening internet users access bandwidth of 10 times the current maximums and latencies at 1/100 th of current averages using a seamless combination of mobile WiFi and fixed access Fifth generation mobile network (5G) applications are enabling these scenarios by providing 10 ti mes the current bandwidth maximum and 1 This new generation of applications is fueling technological developments and creating new business opportunities for mobile operators One such technological 
and business development which is key to enabling many new generation of applications is “edge computing ” Edge computing addresses the latency requirements of specialized 5G applications helps manage the potentially exorbitant access cost and network load due to fast growing data demand and support s data localization where necessary By providing a cloud enabled platform for edge computing mobile operators are well positioned to take a leading role in the 5G ecosystem while opening up completely new business cases and revenue streams This whitepaper present s a solution that allow s you to leverage the infrastructure of your existing mobile networks and establis h a platform to enable new revenue generating applications and 5G use case s The Business Case for Multi Access Edge Computing Multi Access Edge Computing (MEC) is a cloud based IT service environment at the edge infrastructure of networks that serves multiple channels of telecommunications access for example mobile wide area networks Wi Fi or LTE based local area networks and wireline ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 2 In this section we discuss the many benefits of a MEC platform that sits at the edge of the cellular mobile network MEC Addresses t he Need for L ocalized Cloud Services Agility scalability ela sticity and cost efficiencies of c loud computing have made it the platform of choice for application development and delivery IoT applications need local cloud services that operat e close to connected devices to improve the economics of telemetry data processing to minimize latency for time critical applications and to ensure that sensitive information is protected locally MEC L everages the Capabilities Inherent in Mobile Network s Mobile networks have expanded to the point where they offer coverage in most countries around the world These networks combine wireless access broadband capacity and security MEC Provides a Standards Based Solution that Enables an Ecos ystem of Edge Applications MEC transforms mobile communication networks into distributed cloud computing platforms that operate at the mobile access network Strategically located in proximity to end users and connected devices MEC enables mobile operators to open their networks to new differentiated services while providing application developers and content providers access to Edge Cloud benefits The ETSI MEC Industry Specification Group (ISG) has defined the first set of standardized APIs and services for MEC The standard is supported by a wide range of industry participants including leading mobile operators and industry vendors Both HP E and Saguna are active members in the ETSI ISG In the following sections we outline the key benefits provided by MEC ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 3 Extremely Low Latency Traditional internet based cloud environments have physical limitations that prohibit you from hosting applications that require extremely low latency Alternatively MEC provides a lowlatency cloud computing env ironment for edge app lications by operating close to end users and connected IoT devices Broadband Delivery Video content is typically delivered using TCP streams When network latency is compounded by congestion users experience annoying delays due to the drop in bitrate The MEC environment provides low latency and minimal jitter which creates a broadband highway for streaming at high bitrates Economical and Scalable In massive IoT uses cases many devices 
such as sensors or cameras send vast amounts of data upstream, which current backhaul networks cannot support. MEC provides a cloud computing environment at the network edge where IoT data can be aggregated and processed locally, significantly reducing upstream data (a minimal sketch of this aggregation pattern appears later in this entry). MEC infrastructure can scale as you grow, by expanding local capacity or by deploying additional edge clouds in new locations.
Privacy and Security
By deploying the MEC Edge Cloud locally, you can ensure that your private data stays on premises. However, unlike server-based on-premises installations, MEC is a fully automated edge cloud environment with centralized management.
Role of MEC in 5G
MEC enables the ultra-low-latency use cases specified as part of the 5G network goals. MEC also enables fast delivery of data and the connection of billions of devices, while allowing for cost economization related to transporting enormous volumes of data from user devices and IoT over the backhaul network. It is important to note that MEC is already deployed in 4G networks. By deploying this standards-based technology in existing networks, communication service providers can benefit from MEC today while creating an evolutionary path to their next-generation 5G network.
Mobile Edge Solution Overview
Saguna has developed a MEC virtualized radio access network (vRAN) solution that runs on Hewlett Packard Enterprise (HPE) edge infrastructure. This solution lets application developers create mobile edge applications using AWS services, while allowing mobile operators to effectively deploy MEC and operate edge applications within their mobile network.
Figure 1: End-to-end MEC solution architecture
The proposed mobile edge solution consists of three main layers, as illustrated in Figure 1:
• Edge Infrastructure Layer – Based on the powerful x86 compute platform, this layer provides compute, storage, and networking resources at edge locations. It supports a wide range of deployment options, from RAN base station sites to backhaul aggregation sites and regional branch offices.
• MEC Layer – This layer lets you place an application within a mobile access network and provides a number of services, including mobile traffic breakout and steering, registration and certification services for applications deployed at the edge, and radio network information services. It also provides optional integration points with mobile core network services such as charging and lawful intercept.
• Application Enablement Layer – This layer provides tools and frameworks to build, deploy, and maintain edge-assisted applications. It allows you to place certain application modules locally at the edge (e.g., latency-critical or bandwidth-hungry components) while keeping other application functions in the cloud.
The flexible design inherent in the MEC solution architecture allows you to scale the edge component to fit the needs of concrete use cases. You can deploy the edge component at the deepest edge of the mobile network (e.g., colocated with eNodeB equipment at a RAN site), which lets you deploy low-latency and bandwidth-demanding application components in close proximity to end devices. You can also deploy an edge component at any traffic aggregation point between a base station and the mobile core, which allows you to serve traffic from multiple base stations.
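The "Economical and Scalable" point above is, in practice, an aggregation pattern: reduce raw IoT data at the edge and send only summaries upstream. The following minimal sketch (not part of the original paper) illustrates that pattern in Python; the broker endpoint, topic, and sensor-reading helper are hypothetical placeholders, and paho-mqtt simply stands in for whatever edge-to-cloud brokerage the platform provides.

```python
"""
Minimal sketch: local aggregation at an edge site. Raw sensor readings are
reduced to a compact summary before anything crosses the backhaul.
Broker endpoint, topic, and sensor source are placeholders.
"""
import json
import statistics
import time

import paho.mqtt.client as mqtt

CLOUD_BROKER = "broker.example.com"        # placeholder cloud-side endpoint
SUMMARY_TOPIC = "edge/site-01/telemetry"   # placeholder topic
WINDOW_SECONDS = 60


def read_local_sensors():
    # Placeholder for locally attached sensors/cameras at the edge site;
    # returns the numeric readings collected since the last poll.
    return []


def summarize(readings):
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings) if readings else None,
        "max": max(readings, default=None),
    }


def main():
    client = mqtt.Client()  # on paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION2
    client.connect(CLOUD_BROKER, 1883)
    client.loop_start()
    window, deadline = [], time.time() + WINDOW_SECONDS
    while True:
        window.extend(read_local_sensors())
        if time.time() >= deadline:
            # Only the aggregate leaves the edge site, not the raw samples.
            client.publish(SUMMARY_TOPIC, json.dumps(summarize(window)), qos=1)
            window, deadline = [], time.time() + WINDOW_SECONDS
        time.sleep(1)


if __name__ == "__main__":
    main()
```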
The proposed mobile edge platform provides a variety of tools to build, deploy, and manage edge-assisted applications, such as:
• Development libraries and frameworks spanning edge to cloud, including function-as-a-service at the edge and cloud, AI frameworks for creating and training models in the cloud with seamless deployment and inference at the edge, and communication brokerage between edge application services and the cloud. These development libraries and frameworks expose well-defined APIs and have been widely adopted in the developer community, shortening the learning curve and accelerating time to market for edge-assisted applications and use cases.
• Tools to automate deployment and life cycle management of edge application components throughout massively distributed edge infrastructure.
• Infrastructure services, such as virtual infrastructure services at the edge, traffic steering policies at the edge, DNS services, radio awareness services, and integration of the edge platform into the mobile operator's overall network function virtualization (NFV) framework.
• Diverse compute resources fitted to the particular needs of edge applications, such as CPUs, GPUs for acceleration of graphics-intensive or AI workloads, FPGA accelerators, and cryptographic and data compression accelerators.
This unique combination of functionalities lets you quickly develop edge applications, deploy and manage edge infrastructure and applications at scale, and achieve a fast time to market with edge-enabled use cases.

Example Reference Architectures for Edge Applications
A mobile edge platform enables new application behaviors. By adding the ability to run certain components and application logic at the mobile network edge, in close proximity to user devices/clients, the mobile edge platform allows you to re-engineer the functional split between client and application servers and enables a new generation of application experiences. The following list provides examples of possible mobile edge computing applications in the industrial, automotive, public, and consumer domains:
• Industrial
o Next-generation augmented reality (AR) wearables (e.g., smart glasses)
o IoT for automation and predictive maintenance
o Asset tracking
• Automotive
o Driverless cars
o Connected vehicle-to-vehicle or vehicle-to-infrastructure (V2X)
• Smart Cities
o Surveillance cameras
o Smart parking
o Emergency response management
• Consumer Enhanced Mobile Broadband
o Next-generation Augmented Reality/Virtual Reality (AR/VR) and video analytics
o Social media high-bandwidth media sharing
o Live event streaming
o Gaming
In the following sections, we provide examples of how the mobile edge solution can be implemented for smart city surveillance, AR/VR edge applications, and Connected V2X.

Smart City Surveillance
Cities can take advantage of IoT technologies to increase the safety, security, and overall quality of life for residents and keep operational costs down. For example, video recognition technology enables real-time situational analysis (also called "video as a sensor"), which allows you to detect a variety of objects in a video feed (e.g., people, vehicles, personal items), recognize the overall situation (e.g., a traffic jam, a fight, trespassing, or abandoned objects), and classify recognized objects (e.g., faces, license plates). The mobile edge solution enables new abilities in building robust and cost-efficient smart city surveillance systems:
• Efficient video processing at the edge –
Computer vision systems in general require high-quality video input (especially for extracting advanced attributes) and hardware acceleration of inference models. The mobile edge solution lets you host a computing environment at the network edge. This lets you offload backhaul networks and cloud connectivity from bandwidth-hungry, high-resolution video feeds and allows low-latency actions based on recognition results (e.g., opening gates for recognized vehicles or people, or controlling traffic with adaptive traffic lights). The mobile edge platform provides industry-standard GPU resources to accelerate video recognition and any other artificial intelligence (AI) models deployed at the edge.
• Flexible access network – End-to-end smart city surveillance systems might leverage different means of generating video input, such as existing fixed surveillance cameras, mobile wearable cameras (e.g., for law enforcement services or first responders), and drone-mounted mobile surveillance. The diversity of endpoints generating video input requires a high degree of flexibility from the access network – leveraging fixed video networks and mobile cellular networks with native mobility support for wearable or unmanned aerial vehicle (UAV)-mounted cameras. Additionally, automated drone-mounted systems require low-latency access to control the flight of the drone, which might require end-to-end latencies on a millisecond scale. The mobile edge platform provides a means to use robust, low-latency cellular access with native mobility support for the latter cases and incorporates existing fixed video networks.
• Flexible video recognition models – Robust video recognition AI models usually require extensive training on sample sets of objects and events, as well as periodic tuning (or the development of models for extracting new attributes). These compute-intensive tasks use highly scalable, lower-cost cloud compute resources. However, seamless deployment of the trained models to the edge for execution, and managing the life cycle of the deployed models, is a complex operational task. The mobile edge platform provides a seamless development and operational experience, starting from creating, training, and tuning an AI model in the cloud to deploying it at edge locations and managing the life cycle of the deployed models.
The following diagram shows an example architecture of a smart city surveillance edge application:
Figure 2: Edge-assisted smart city surveillance application
A smart city surveillance solution has three main domains:
• Field domain – A diverse ecosystem of video-producing devices, e.g., body-worn cameras from first responder units, drones, fixed video surveillance systems, and wireless fixed cameras. Video feeds are ingested into the mobile edge platform via cellular connectivity or through existing video networks.
• Edge sites – Located in close proximity to the video-generating devices; host latency-sensitive services (e.g., UAV flight control, local alert processing), bandwidth-hungry and compute-intensive applications (edge inference), and gateway functionalities for video infrastructure control (camera management). Video services extract target attributes from the video streams and share metadata with local alerting services and cloud services. Video services at the edge can also produce a low-resolution video proxy or sampled video so that only the video of interest is transferred to the cloud.
• Cloud domain – Hosts centralized, non-latency-critical functions such as device and service management, AAA and policies, and command and control center functions, as well as compute-intensive, non-latency-critical tasks such as AI model training.
You can augment a MEC smart city surveillance application with machine learning (ML) and inference models via:
• Model training (for surveillance patterns of interest, e.g., facial recognition, person counts, dwell time analysis, heat maps, activity detection) using deep learning AMIs on the AWS Cloud
• Deployment of trained models to the MEC platform's application container using AWS Greengrass and Amazon SageMaker
• Application of inference logic (e.g., alerts or alarms based on selected pattern detection) using AWS Greengrass ML inference, as illustrated in the sketch after this list
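The Greengrass ML inference step named above can be pictured as a small, long-running function at the edge site that keeps raw video local and publishes only alert metadata upstream. The sketch below is illustrative rather than taken from the solution: it assumes the AWS IoT Greengrass v1 Core SDK (greengrasssdk) is available in the application container, and the model-loading and detection helpers, topic, and path are hypothetical placeholders for framework-specific code.

```python
"""
Illustrative sketch of edge inference inside the MEC application container.
Raw frames never leave the edge site; only compact alert metadata is
published upstream. Helpers, topic, and model path are placeholders.
"""
import json
import time

import greengrasssdk  # provided by the Greengrass v1 Core runtime

iot = greengrasssdk.client("iot-data")

ALERT_TOPIC = "city/surveillance/alerts"      # hypothetical topic
MODEL_PATH = "/ml_model/person-detector"      # hypothetical local ML resource path


def load_model(path):
    # Framework-specific loading of a model trained in the cloud and shipped
    # to this edge site goes here; returning None keeps the sketch neutral.
    return None


def detect(model, frame):
    # Framework-specific inference goes here; returns a list of
    # {"label": ..., "confidence": ...} detections.
    return []


def process_stream(camera):
    model = load_model(MODEL_PATH)
    while True:
        frame = camera.read()  # raw video stays at the edge site
        hits = [d for d in detect(model, frame) if d["confidence"] >= 0.8]
        if hits:
            # Only compact metadata crosses the backhaul, not the video feed.
            iot.publish(
                topic=ALERT_TOPIC,
                payload=json.dumps({"timestamp": time.time(), "detections": hits}),
            )
```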
Figure 3: Detailed view of the solution for the smart city surveillance application
This design approach, based on the mobile edge platform, is a cost-efficient way of building and operating a smart city surveillance system, with edge processing for bandwidth-hungry and latency-sensitive services.

AR/VR Edge Applications
AR/VR is one of the use cases that benefits most from a mobile edge platform. AR/VR edge applications can benefit from the mobile edge platform in the following ways:
• Next-generation AR wearables – Current immersive AR experiences require heavy processing on the client side (e.g., calculating head and eye position and motion information from tracking sensors, rendering high-quality 3D graphics for the AR experience, and running video recognition models). The requirement to run heavy computations on AR devices (e.g., head-mounted displays, smart glasses, smartphones) has influenced the characteristics of these devices: cost, size, weight, battery life, and overall aesthetic appeal.
Figure 4: Next-generation AR devices
You can avoid bulkiness, cost, weight, ergonomic, and aesthetic limitations on the devices by offloading the heaviest computational tasks from the devices to a remote server or cloud. However, a truly immersive AR experience requires keeping coherence between AR content and the surrounding physical world, with an end-to-end latency below 10 ms, which is unachievable by offloading to a traditional centralized cloud. The mobile edge platform provides compute power at the network edge, which allows you to offload latency-critical functions from the AR device to the network and enables the next generation of lightweight, compact devices with longer battery life and native mobility.
• Mission-critical operations – AR experiences have been valuable in workforce enablement applications, such as remote collaboration and AR-assisted maintenance in the industrial space. In many cases, those AR experiences have become an important part of mission-critical operations, for example, AR-assisted maintenance of equipment in hazardous conditions (e.g., oil extraction sites, refineries, and mines) and AR-assisted healthcare. Those use cases require high reliability from the AR application even when global connectivity from the client to the server side is degraded or broken. The mobile edge platform provides the capability to re-engineer an AR application so that the solution can operate offline, with critical components deployed both locally, in close proximity to devices, and globally in the cloud as a fallback option.
• Localized data processing – In many cases, AR
devices combine data from different local sources ( eg adding live sensor readings from a local piece of equipment to an AR maintenance application) In many cases ingesting data into th e cloud requires high bandwidth and is governed by data security or privacy frameworks A true AR experience requires localized data processing and ingest The m obile edge platform allows you to ingest data from any local source into the AR application as well as execute commands from the AR application to the local data sources (eg perform equipment maintenance tasks) The following diagram shows an example archit ecture for an AR edge application ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 12 Figure 5: Edge assisted AR application The edge assisted AR application has three main domains: • Ultra thin client (eg head mounted display) – G enerates sensor readings of head and eye position location and other relevant data such as live video feed from embedded cameras • Edge services – Part of an AR backend hosted in close proximity to the client on network side These services execute latency critical functions (computing positioning and tracking from AR sensor readings AR graphics rendering) bandwidth hungry functions (eg computer vision models for video recognition) and local data (processing of IoT sensor readings from localized equipment) • Cloud services – Part of AR backend hosted in a traditional centralized cloud These services execute functions centralized in nature (eg authentication and policies command and control center and AR model repository) resource hungry non latency critical functions (computer vision model training) and horizontal cross enterprise functions (eg data lakes integration points with other enterprise systems etc ) This design approach allows client s to offload heavy computations which makes client devices cost efficient lightweight and battery efficient This design also allows local data to be ingested from external sources and contro ls actions to local systems enables offline operation saves costs of WAN connectivity and secures compliance with potential data localization guidelines By working as an integrated part of the mobile network this use case natively supports global mobility telco grade reliability and security ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 13 Connected Vehicle (V2X) Connectivity between vehicles pedestrians roadside infrastructure and other elements in the environment is enabling a tectonic shift in transportation T he full promise of V2X solu tions can only be realized with a new generation of mobile edge applications : • Transportation safety – V2X promises the ability to coordinate actions between vehicles sharing the road (T his ability is sometimes called “Cooperative Cruise Control ”) Informa tion exchange between connected vehicles about intention to change speed or trajectory can significantly improve the safety and robustness of automated or autonomous driving through cooperative maneurvering However due to the very dynamic nature of car traffic these decisions must be made in near real time (with end toend latencies on a millisecond time scale) The m assively distributed nature of road infrastructure near realtime decision making and the requirements for hi ghspeed mobility make the mobile edge platform perfect for host ing the distributed logic of cooperative driving • Transportation efficiency – Cooperative driving promises not only increase d safety o n the road but also a 
significant boost in transportation efficiency With coordinated vehicle maneuvers the overall capacity of road infrastructure can increase without significant investment in road reconstruction The promise of higher transportation efficiency is further supported by v ehicle toinfrastructure solutions Vehicles can communicate with roadside equipment for speed guidance to coordinate traffic light changes and to reserve parking lots While some information requires only short range communication (eg from a vehicle to a r oadside unit) the coordinated actions of a distributed infrastructure (eg coordinating traffic light changes between multiple intersections) req uires the mobile edge platform to host the logic • Transportation experience – With autonomous driving technologies car infotainment system s are becoming more widespread The mobile edge platform enables the unique possibility of massively distributed content caching with high localization and context awareness as well as the ability to enable location and context based inter actions with vehicle passengers (eg guidance about local ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 14 attractions for travelers time and location limited promotions from local vendors etc) The following diagram shows an example architecture of a V2X edge application Figure 6: Edge assisted connected vehible (V2X) application The V2X solution has three main domains: • Field domain – V ehicle s that generat e data about intended driving maneuvers (eg braking lane change s turn s acceleration) and receive notifications from surroun ding vehicles Road infrastructure that includes all sensors and actuators that are relevant to the driving experience ( eg wind and temperature sensors street lighting connected traffic lights that are controlled via gateway devices such as Road Side Unit) • Edge sites – L ocated in close proximity to the road (eg respective RAN eNodeB sites) and host latency sensitive or highly localized V2X application services Examples of those services include processing and relaying driving maneuver notification s for vehicle coordination processing local sensor readings from road infrastructure dynamic generation of control commands to road infrastructure (eg coordinated traffic lights across several intersections) and caching highly localized infotainment content • Cloud domain – Host s centralized and non latency critical functions such as AAA and policy control historical data collection and ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 15 processing command and control center functions and centralized infotainment content origin With this design approach you can realize low latency and a coordinated exchange of data and control commands between vehicles and surrounding infrastructure This provides a highly specific context for every interaction Conclusion Many technological and market developments are converging to create an opportunity for new applications that take advantage of modern mobile networks and the edge access infrastructure This paper emphasizes the need for an application enablement ecosystem approach and presents a platform to serve multiple edge use cases Contributors The following individuals and organizations contributed to this document: • Shoma Chakravarty WW Technical Leader Telecom Amazon Web Services • Tim Mattison Partner Solution s Architect Amazon Web Services • Alex Rez nik Enterprise Solution Architect and ETSI MEC Chair HPE • Rodion Naurzalin Lead 
Architect Edge Solutions HPE • Tally Netzer Marketing Leader Saguna • Danny Frydman CTO Saguna Appendix This Appendix gives a more detailed overview of the functional components of the proposed m obile edge platform solution as well as technical characteristics of each component Figure 7 illustrates a functional diagram of the mobile edge platform: ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 16 Figure 7: Mobile edge platform functional diagram Infrastructure Layer The physical infrastructure for a MEC node is based on an edge optimized converged HPE Edgeline EL4000 platform (Figure 8 ) Figure 8: HPE Edgel ine EL4000 chassis and four m710x cartridges The end toend MEC solution gives you the ability to place workloads within any segment of your mobile access network for example at a RAN site backhaul aggregation hub or CRAN hub T he HPE Edge line EL4000 has been optimized for the MEC solution as follows : Compute Density The Edge line EL4000 hosts up to four hot swap SoC cartridges in 1U chassis providing up to 64 Xeon D cores with optimized price/core and watt/core characteristics That design provides 2x – 3x higher compute density compared to a typical traditional data center pl atform while keeping power consumption low These characteristics allow an operator to place a MEC node based on Edge line EL4000 at the deepest edge of access network down to a RAN site ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 17 where space and power constrain ts make other general purpose compute platforms inefficient Workload Specific Compute The diversity of MEC use cases requires that the underlying infrastructure be able to provide different types of compute resources The Edge line EL4000 platform provides diverse compute and hardware acceler ation capabilities which allows you to co locate workloads with different compute needs: • x86 processors that serve general workloads Typical workload example s include a Virtual Network Function virtualized edge application enablement platform and applications that provide fast control actions at the edge for low latency use cases • Builtin GPU that accelerat es graphics processing Typical workload example s are video transcoding at the edge for MEC assisted content distribution and 3 D graphics rendering at the edge for AR/VR streaming application • Plug in dedicated GPU cards that accelerat e deep learning algorithms Enabled by strategic partnership with NVIDIA the Edge line platform can be used for deep learning hardware acceleration at the edge Ty pical workload example s include video analytics and computer vision at the edge and ML inference at the edge for anomaly detection and predictive maintenance • Builtin acceleration of cryptographic operations with QuckAssist Technology (eg accelerating cryptographic or data compression workloads) • Support of up to four PCI E extension slots in a single chassis which provides options for specialized plug in units such as dedicated FPGA boards neuromorphic chips etc Such specialized hardware accelerati on is being evaluated for many network function workloads (such as RAN baseband processing) and applications (efficient deep learning inference) Physical and Operational Characteristics A MEC node should be ready to operate at physical sites and is traditionally used for hosting telco purpose built appliances that are optimized for the physical site environment (eg radio base station equipment at RAN sites ArchivedAmazon Web Services – A Platform for 
Computing at the Mobile Edge Page 18 access routers at traffic hubs etc ) The operational environment of the MEC node sites may be very different from the traditional data center with limited physical space for equipment hosting consumer grade climate control and limited physical accessibility The Edge line EL4000 is optimized to operate in such environments with operational characteristics comparable to the telco purpose built appliances: Parameter RAN Baseband Appliance Typical Data C enter Platform Edge line EL4000 Operating Temperature (oC) +0 …+50 +10 … +35 0 … +55 NonDestructive Shock Tolerance (G) 30 2 30 Expected Mean Time Between Failures ( MTBF ) (years) 3035 1015 >35 On top of enhanced operational characteristics the Edge line EL4000 exposes open iLO interface for the management of highly distributed infrastructure of MEC nodes The iLO interface is compliant with RedFish industry standard It exposes infrastructure management functions via simple RESTful service Saguna OpenRAN C omponents Overview The MEC p latform layer is based on the Saguna OpenRAN solution and consists of the following functions: • Saguna vE dge function located within MEC n ode • Saguna vGate function (optional) located at the core network site • Saguna OMA function (optional) located within a MEC node or at the aggregation point of several MEC n odes Saguna vEdge resides in the MEC node and enables services and applications to operate inside the mobile RAN by providing MEC services such as registration and certification Traffic Offload Function (TOF) real time Radio Network Information Services (RNIS) and optional DNS services The virtualized software node is deployed in the RAN on a server at a RAN site or aggregation point of mobile backhaul traffic It may serve single or multiple ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 19 eNodeB base stations and small cells It can easily be extended to support WiFi and other communications standards in heterogeneous network (HetNet) deployments Saguna vEdge taps the S1 interface (GTP U and S1 AP protocols) and steers the traffic to the appropriate local or remote endpoint based on configured policies Saguna vEdge implements local LTE traffic steering in number of modes (inline steering breakout tap) It has a communication link that connects it to the optional Saguna vGate node using Saguna’s OPTP (Open RAN Transport Protocol) It exposes open REST APIs for managing the platform and providing platform services to the MEC assisted applications Saguna vGate is an optional component that resides in the core network It is responsible for preserving core functionality for RAN generated traffic: l awful interception (LI) charging and policy control The Saguna vGate also enables mobility support for session generated by an MEC assisted application Operating in a v irtual machine Saguna vGate is adjacent to the enhanced packet core (EPC) It has a communication link that connects it to the Saguna vEdge nodes using Saguna’s OPTP (Open RAN Transport Protocol) and m obile network integrations for LI and charging functions Saguna OMA (Open Management and Automation) is an optional subsystem that resid es in the MEC n ode or at the aggregation p oint of several MEC n odes It provides a management layer for the MEC nodes and integrates into the cloud Network Function Virtualization ( NFV ) environment which includes the NFV Orchestrator the Virtual Infrastructure Manager (VIM) and Operations Support Systems (OSS) Saguna OMA provides two management 
modules: • Virtualized Network Function Manager (VNFM) Provides Life Cycle Management and monitoring for MEC Platform (Saguna vEdge) and MEC assisted applications This is a standard layer of management required within NFV environments It resides at the edge to manage the local MEC environment ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 20 • Mobile Edge Platform Manager (MEPM) – Provides an additional layer of management required for operating and prioritizing MEC applications It is re sponsible for managing the rules and requirements presented by each MEC application rules and resolving conflicts between different MEC assisted applications The Saguna OMA node operates on a virtual machine and manages on boarded MEC assisted application s via its workflow engine using Saguna and third party plugins The Saguna OMA is managed via REST API Saguna OpenRAN Services As a MEC p latform layer Saguna OpenRAN provides the following services: Mobile Network Integration Services • Mobility with Internal Handover support for mobility events between cells connected to the same MEC n ode and External Handover support between two or more MEC n odes and between cells connected to a MEC node and unconnected cells • Lawful Interception (LI) for RAN based generated data It supports X1 (Admin) X2 (IRI) and X3 (CC) interfaces and is pre integrated with Utimaco and Verint LI systems • Charging support using CDR generation for application based charging (based on 3GPP TDF CDR) and charging triggering based on time session and data Supported charging methods are File based (ASN1) and GTP’ • Management vEdge REST API for MEC services discovery and registration MEPM and VNFM let you efficiently operate a MEC solution and integrate it into your existing NFV en vironment Edge Services • Registration for MEC assisted applications The MEC Registration service provides dynamic registration and certification of MEC applications and registration to other MEC services provided by the MEC Platform setting the MEC appli cation type • Traffic Offload Function routes specific traffic flows to the relevant applications as configured by the user The TOF also handles tunneling ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 21 protocols such as GPRS Tunneling Protocol (GTP) for Long Term Evolution (LTE) network Standard A10/A 11 interfaces for 3GPP2 CDMA Network and handles plain IP traffic for WiFi/DSL Network • DNS provides DNS caching service by storing recent DNS addresses locally to accelerate the mobile i nternet and DNS server functionality preconfiguring specific DNS responses for specific domains This lets the User Equipment ( UE) connect to a local application for specific TCP sessions • Radio Network Information Service provided per Cell and per Radio Access Bearer (RAB) The service is vendor independent and can support eNodeBs from multiple RAN vendors simultaneously It supports standard ETSI queries (eg cell info) and notification mechanism (eg RAB establishment events) Additional information based on Saguna proprietary model provides real time feedback on cell congestion level and RAB available throughput using statistical analysis • Instant Messaging with Short Message Service (SMS) provided as a REST API request It offers smart messaging capabilities for example sending SMS to UEs on a specific area ( eg sports stadium) or sending SMS to UE when entering or exiting a specific area (eg shop) Mobile Edge Applications • Throughput guidance application 
uses the internal RNIS algorithm to deliver throughput guidance for specific IP addresses on the server side or according to domain names of the servers The application can be configured with the period of such Throughput Guidance update per target • DDoS Mitigation application monitors traffic originating from the connected device for specific DDoS attacks on different layers (IP layer for ICMP flooding IP scanning Ping of death; TCP/UDP layer for TCP sync attacks UDP message flooding; Application layer) Devices that are detected as generating DDoS traffic are reported to the network management and traffic from these devices can be locally stopped or the device can be remotely disabled by the network core ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 22 Application Enablement L ayer The Application Enablement layer consists of AWS Greengrass hosted on the MEC node side AWS Greengrass is designed to support IoT solutions that connect different types of devices with the cloud and each other It also runs local functions and parts of applications at the network edge Devices that run Linux and support ARM or x86 architectures can host the AWS Greengr ass Core The AWS Greengrass Core enables the local execution of AWS Lambda code messaging data caching and security Devices running the AWS Greengrass Core act as a hub that can communicate with other devices that have the AWS IoT Device SDK installed such as micro controller based devices or large appliances These AWS Greengrass Core devices and the AWS IoT Device SDK enabled devices can be configured to communicate with one another in a Greengrass Group If the AWS Greengrass Core device loses connection to the cloud devices in the Greengrass Group can continue to communicate with each other over the local network A Greengrass Group represents localized assembly of devices For example it may represent one floor of a building one truck or one home AWS Greengrass builds on AWS IoT and AWS Lambda and it can also access other AWS services It is built for offline operation and greatly simplifies the implementation of local processing Code running in the field can collect filter and aggregate fr eshly collected data and then push it up to the cloud for long term storage and further aggregation Further code running in the field can also take action very quickly even in cases where connectivity to the cloud is temporarily unavailable AWS Greengr ass has two constituent parts : the AWS Greengrass Core and the IoT Device SDK Both of these components run on onpremises hardware out in the field The AWS Greengrass Core is designed to run on devices that have at least 128 MB of memory and an x86 or ARM CPU running at 1 GHz or better and can take advantage of additional resources if available It runs Lambda functions locally interacts with the AWS Cloud manages security and authentication and communicates with the other devices under its purview ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 23 The IoT Device SDK is used to build the applications on devices connected to the AWS Greengrass Core device (generally via a LAN or other local connection) These applications capture data from sensors subscribe to MQTT topics and use AWS IoT device shadows to store and retrieve state information AWS Greengrass features include : • Local support for AWS Lambda – AWS Greengrass includes support for AWS Lambda and AWS IoT d evice shadows With AWS Greengrass you can run AWS Lambda functions right on the device 
to execute code quickly • Local support for AWS IoT d evice shadows – AWS Greengrass also includes the functionality of AWS IoT d evice shadows The d evice shadow caches the state of your device like a vi rtual version or “shadow” and tracks the device’s current versus desired state • Local messaging and protocol adapters – AWS Greengrass enables messaging between devices on a local network so they can communicate with each other even when there is no connection to AWS With AWS Greengrass devices can process messages and deliver them to other device s or to AWS IoT based on business rules that the user defines Devices that communicate via the popular industrial protocol OPC UA are supported by the AWS Gr eengrass protocol adapter framework and the out ofthebox OPC UA protocol module Additionally AWS Greengrass provides protocol adapter framework to implement support for custom legacy and proprietary protocols • Local resource access – AWS Lambda functions deployed on an AWS Greengrass Core can access local resources that are attached to the device This allows you to use serial ports USB peripherals such as add on security devices sensors and actuators on board GPUs or the local file system to quickly access and process local data • Local machine learning i nference – A llows you to locally run a n MLmodel that’s built and trained in the cloud With hardware acceleration available in the MEC infrastructure layer this feature provides a powerful mec hanism for solving any machine learning task at the local edge eg discovering patterns in data building computer vision systems and running anomaly detection and predictive maintenance algorithms ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 24 AWS Greengrass has a growing list of features Curren t features are shown in Figure 9 Figure 9: AWS Greengrass features AWS Greengrass on the MEC node acts as a pivot point It integrates the MEC platform with the AWS I oT solution and other AWS services providing a powerful application enablement environment for developing deploying and managing MEC assisted applications at scale The figure below illustrates the current portfolio of AWS services that enable a seamless IoT pipeline —from endpoints connecting via Amazon FreeRTOS or the IoT SDK through MQTT or OPC UA to edge gateways that host AWS Greengrass and Lambda functions providing data processing capabilities at the edge up to cloud hosted AWS IoT Core AWS Device Management AWS Device Defender and AWS IoT Analytics services as well as enterprise applications ArchivedAmazon Web Services – A Platform for Computing at the Mobile Edge Page 25 Figure 10: AWS services that enable a seamless IoT pipeline 1 In a telecommunications network the backhaul portion of the network comprises the intermediate links between the core network or backbone network an d the small subnetworks at the "edge"
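To make the local inference pattern described above more concrete, the following minimal sketch shows one way a long-lived ("pinned") Lambda function deployed to an AWS Greengrass Core on a MEC node might score frames from a local camera feed and publish detections over local MQTT. The Greengrass Core SDK publish call is standard for Greengrass V1; the model loader, camera capture, topic name, and model path are placeholder assumptions rather than parts of the solution described in this paper, and would be replaced by the actual surveillance or AR application logic.

import json
import threading
import time

import greengrasssdk  # provided by the AWS Greengrass Core runtime (Greengrass V1)

# Publishes via the local Core; messages for AWS IoT are forwarded when backhaul connectivity allows.
iot_client = greengrasssdk.client("iot-data")
ALERT_TOPIC = "mec/surveillance/alerts"  # assumed topic name, not defined by this paper


class StubModel:
    """Stand-in for a model deployed to the Core as a local ML resource."""

    def predict(self, frame):
        # Replace with real inference, e.g., person counting or dwell-time analysis.
        return []


def load_model(path):
    # Replace with your framework's loader for the locally deployed model artifact.
    return StubModel()


def read_frame():
    # Replace with capture from a locally attached camera or RTSP stream.
    return b""


def inference_loop():
    model = load_model("/greengrass-machine-learning/model")  # assumed local path
    while True:
        detections = model.predict(read_frame())
        if detections:
            iot_client.publish(
                topic=ALERT_TOPIC,
                payload=json.dumps({"ts": time.time(), "detections": detections}),
            )
        time.sleep(0.1)  # pace the loop


# Long-lived ("pinned") Greengrass Lambda functions do their work outside the handler.
threading.Thread(target=inference_loop, daemon=True).start()


def handler(event, context):
    return

Because publishing goes through the local Core, co-located MEC applications can react to detections immediately, while copies of the messages destined for AWS IoT Core are delivered whenever backhaul connectivity is available.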
Archived A Practical Gui de to Cl oud Migration Migratin g Service s to AWS December 2015 This paper has been archived For the latest technical content see: https://docsawsamazoncom/prescriptiveguidance/latest/mrpsolution/mrpsolutionpdfArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 2 of 13 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice C ustomers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document do es not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this docum ent is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 3 of 13 Contents Abstract 3 Introduction 4 AWS Cloud Adoption Framework 4 Manageable Areas of Focus 4 Successful Migrations 5 Breaking Down the Economics 6 Understand OnPremises Costs 6 Migration Cost Considerations 8 Migration Options 10 Conclusion 12 Further Reading 13 Contributors 13 Abstract To achieve full benefits of moving applications to the Amazon Web Services (AWS) platform it is critical to design a cloud migration model that delivers optimal cost efficiency This includes establishing a compelling business case acquiring new skills within the IT organization implemen ting new business processes and defining the application migration methodology to transform your business model from a traditional on premises computing platform to a cloud infrastructure ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 4 of 13 Perspective Areas of Focus Introduction Cloudbased computing introduces a radical shift in how technology is obtained used and managed as well as how organizations budget and pay for technology services With the AWS cloud platform project teams can easily configure the virtual network using t heir AWS account to launch new computing environments in a matter of minutes Organizations can optimize spending with the ability to quickly reconfigure the computing environment to adapt to changing business requirements Capacity can be automatically sc aled —up or down —to meet fluctuating usage patterns Services can be temporarily taken offline or shut down permanently as business demands dictate In addition with pay peruse billing AWS services become an operational expense rather than a capital expense AWS Cloud Adoption Framework Each organization will experience a unique cloud adoption journey but benefit from a structured framework that guides them through the process of transforming their people processes and technology The AWS Cloud Adopt ion Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud comput ing across your organization throughout your IT lifecycle Manageable Areas of Focus The AWS CAF 
breaks down the complicated planning process into manageable areas of focus Perspectives represent top level areas of focus spanning people process and te chnology Components identify specific aspects within each Perspective that require attention while Activities provide prescriptive guidance to help build actionable plans The AWS Cloud Adoption Framework is flexible and adaptable allowing organizations to use Perspectives Components and Activities as building blocks for their unique journey Business Perspective Focuses on identifying measuring and creating business value using technology services The Components and Activities within the Business Perspective can help you develop a business case for cloud align ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 5 of 13 business and technology strategy and support stakeholder engagement Platform Perspective Focuses on describing the structure and relationship of technology elements and services in complex IT environments Components and Activities within the Perspective can help you develop conceptual and functional models of your IT environment Maturity Perspective Focuses on defining the target state of an organization's capabilities measuring maturity and optimizing resources Components within Maturity Perspective can help assess the organization's maturity level develop a heat map to prioritize initiatives and sequence initiatives to develop the roadm ap for execution People Perspective Focuses on organizational capacity capability and change management functions required to implement change throughout the organization Components and Activities in the Perspective assist with defining capability and skill requirements assessing current organizational state acquiring necessary skills and organizational re alignment Process Perspective Focuses on managing portfolios programs and proj ects to deliver expected business outcome on time and within budget while keeping risks at acceptable levels Operations Perspective Focuses on enabling the ongoing operation of IT environments Components and Activities guide operating procedures service management change management and recovery Security Perspective Focuse s on helping organizations achieve risk management and compliance goals with guidance enabling rigorous methods to describe structure of security and compliance processes systems and personnel Components and Activities assist with assessment control selection and compliance validation with DevSecOps principles and automation Successful Migrations The path to the cloud is a journey to business results AWS has helped hundreds of customers achieve their business goals at every stage of their journey While every organization’s path will be unique there are common patterns approaches and best pract ices that can be implemented to streamline the process 1 Define your approach to cloud computing from business case to strategy to change management to technology 2 Build a solid foundation for your enterprise workloads on AWS by assessing and validating yo ur application portfolio and integrating your unique IT environment with solutions based on AWS cloud services Perspective Areas of Focus ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 6 of 13 3 Design and optimize your business applications to be cloud aware taking direct advantage of the benefits of AWS services 4 Meet your internal and external compliance requirements by developing and implementing automated security policies 
and controls based on proven validated designs Early planning communication and buy in are essential Understanding the forcing function (tim e cost availability etc) is key and will be different for each organization When defining the migration model organizations must have a clear strategy map out a realistic project timeline and limit the number of variables and dependencies for trans itioning on premises applications to the cloud Throughout the project build momentum with key constituents with regular meetings and reporting to review progress and status of the migration project to keep people enthused while also setting realistic ex pectations about the availability timeframe Breaking Down the Economics Understand On Premises Costs Having a clear understanding of your current costs is an important first step of your journey This provides the baseline for defining the migration model that delivers optimal cost efficiency Onpremises data centers have costs associated with the servers storage networking power cooling physical space and IT labor required to support applications and services running in the production environment Although many of these costs will be eliminated or reduced after applications and infrastructure are moved to the AWS platform knowing your current run rate will help determine which applications are good candidates to move to AWS which applications need to be rewrit ten to benefit from cloud efficiencies and which applications should be retired The following questions should be evaluated when calculating the cost of on premises computing: Understanding Costs To build a migration model for optimal efficiency it is important to accurately understand the current costs of running onpremises applications as well as the interim costs incurred during the transition ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 7 of 13 “Georgetown’s modernization strategy is not just about upgrading old systems; it is about changing the way we do business building new partnerships with the community and working to embrace innovation Cloud has been an important component of this Although we thought the primary driver would be cost savings we have found that agility innovation and the opportuni ty to change paths is where the true value of the cloud has impacted our environment “Traditional IT models with heavy customization and sunk costs in capital infrastructures —where 90% of spend is just to keep the trains running —does not give you the opp ortunity to keep up and grow” Beth Ann Bergsmark Interim Deputy CIO and AVP Chief Enterprise Architect Georgetown University Labor How much do you spend on maintaining your environment (broken disks patching hosts servers going offline etc)? Network How much bandwidth do you need? What is your bandwidth peak to average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack? Capacity What is the cost of over provisioning for peak capacity? How do you plan for capacity? How much buffer capacity are you planning on carrying? If small what is your plan if you need to add more? What if you need less capacity? What is your plan to be abl e to scale down costs? How many servers have you added in the past year? Anticipating next year? Availability / Power Do you have a disaster recovery (DR) facility? What was your power utility bill for your data center(s) last year? Have you budgeted for both average and peak power requirements? Do you have separate costs for cooling/ HVAC? 
Are you accounting for 2N power? If not what happens when you have a power issue to your rack? Servers What is your average server utilization? How much do you overpr ovision for peak load? What is the cost of over provisioning? Space Will you run out of data center space? When is your lease up? ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 8 of 13 Migration Cost Considerations To achieve the maximum benefits of adopting the AWS cloud platform new work pract ices that drive efficiency and agility will need to be implemented: IT staff will need to acquire new skills New business processes will need to be defined Existing business processes will need to be modified Migration Bubble AWS uses the term “migration bubble” to describe the time and cost of moving applications and infrastructure from on premises data centers to the AWS platform Although the cloud can provide significant savings costs may increase as you move into the migration bubble It i s important to plan the migration to coincide with hardware retirement license and maintenance expiration and other opportunities to reduce cost The savings and cost avoidance associated with a full all in migration to AWS will allow you to fund the mig ration bubble and even shorten the duration by applying more resources when appropriate Time Figure 1: Migration Bubble Level of Effort The cost of migration has many levers that can be pulled in order to speed up or slow down the process including labor process tooling consulting and technology Each of these has a corresponding cost associated with it based on the level of effort required to move the application to the AWS platform Migration Bubble Planning • • • • • • Planning and Assessment Duplicate Environments Staff Training Migration Consulting 3rd Party Tooling Lease Penalties Operation and Optimization Cost of Migration $ ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 9 of 13 To calculate a realistic total cost of ownership (TCO) you need to understand what these costs are and plan for them Cost considerations include items such as: Labor During the transition existing staff will need to continue to maintain the production environment learn new skills and decommission the old infrastructure once the migration is complete Additional labor costs in the migration bubble include: Staff time to plan and assess project scope and project plan to migrate applications and infrastructure Retaining consulting partners with the expertise to streamline migration of applications and infrastructure as well as training staff with new skills Due to the general lack of cloud experience for most organization s it is necessary to bring in outside consulting support to help guide the process Process Penalty fees associated with early termination of contracts may be incurred (facilities software licenses etc) once applications or infrastructure are decommissioned The cost of tooling to automate the migration of data and virtual machines from on premises to AWS Technology Duplicate environments will be required to keep production applications/infrastructure available while transitioning to the AWS platform Cost considerations include: Cost to maintain production environment during migration Cost of AWS platform comp onents to run new cloud based applications Licensing of automated migration tools license to accelerate the migration process ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 10 of 13 
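The cost categories above can be rolled up into a rough payback estimate. The sketch below is illustrative only: the field names and figures are assumptions for a hypothetical migration wave, not AWS pricing or a prescribed model, but it shows how the migration bubble (one-time costs plus the period of duplicate environments) relates to the monthly run-rate savings.

from dataclasses import dataclass


@dataclass
class MigrationEstimate:
    """Illustrative roll-up of the cost categories above; every figure is hypothetical."""

    onprem_monthly: float      # current run rate: servers, storage, space, power, labor
    aws_monthly: float         # projected AWS run rate after migration
    planning_and_labor: float  # one-time: assessment, project planning, staff time, training
    consulting: float          # one-time: partner or consulting support
    tooling: float             # one-time: migration tooling licenses
    duplicate_env_months: int  # months of running on-premises and AWS side by side
    penalties: float = 0.0     # early-termination fees for leases or licenses

    def bubble_cost(self) -> float:
        duplicate_env = self.aws_monthly * self.duplicate_env_months
        return (self.planning_and_labor + self.consulting + self.tooling
                + duplicate_env + self.penalties)

    def payback_months(self) -> float:
        monthly_savings = self.onprem_monthly - self.aws_monthly
        if monthly_savings <= 0:
            return float("inf")  # run-rate savings alone never repay the bubble
        return self.bubble_cost() / monthly_savings


# Example with made-up numbers for a single migration wave:
estimate = MigrationEstimate(
    onprem_monthly=120_000, aws_monthly=70_000,
    planning_and_labor=150_000, consulting=200_000,
    tooling=40_000, duplicate_env_months=4, penalties=25_000,
)
print(f"Migration bubble: ${estimate.bubble_cost():,.0f}")
print(f"Payback: {estimate.payback_months():.1f} months")

In practice the same roll-up is usually maintained per application or per migration wave, so that realized savings from early waves can help fund the bubble costs of later ones.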
“I wanted to move to a model where we can deliver more to our citizens and r educe the cost of delivering those services to them I wanted a product line that has the ability to scale and grow with my department AWS was an easy fit for us and the way we do business” Chris Chiancone CIO City of McKinney City of McKinney City of McKinney Texas Turns to AWS to Deliver More Advanced Services for Less Money The City of McKinney Texas about 15 miles north of Dallas and home to 155000 people was ranked the No 1 Best Place to live in 2014 by Money Magazine The city’s IT department is going all in on AWS and uses the platform to run a wide range of services and applications such as its land management and records management systems By using AWS the city’s IT department can focus on delivering new and better services for its fast growing population and city employees instead of spending resources buying and maintaining IT infrastructure City of McKinney chose AWS for our ability to scale and grow with the needs of the city’s IT department AWS provides an easy fit for the way the city does business Without having to own the infrastructure the C ity of McKinney has the ability to use cloud resources to address business needs By moving from a CapEx to an OpEx model they can now return funds to critical city projects Migration Options Once y ou understand the current costs of an on premises production system the next step is to identify applications that will benefit from cloud cost and efficiencies Applications are either critical or strategic If they do not fit into either category they should be taken off the priority list Instead categorize these as legacy applications and determine if they need to be replaced or in some cases eliminated Figure 2 illustrates decision points that should be considered in ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 11 of 13 “A university is really a small city with departments running about 1000 diverse small services across at the university We made the decision to go down the cloud journey and have been working with AWS for the past 4 years In building our business case we wanted the ability to give our customers flexible IT services th at were cost neutral “We embraced a cloud first strategy with all new services a built in the cloud In parallel we are migrating legacy services to the AWS platform with the goal of moving 80% of these applications by the end of 2017” Mike Chapple P hD Senior Director IT Services Delivery University of Notre Dame selecting applications to move to the AWS platform focusing on the “6 Rs” — retire retain re host re platform re purchase and re factor Decommission Refactor for AWS Rebuild Application Architecture AWS VM Import Org/Ops Change Do Not Move Move the App Infrastructure Design Build AWS Lift and Shift (Minimal Change) Determine Migration 3rd Party Tools Impact Analysis Management Plan Identify Environment Process Manually Move App and Data Ops Changes Migration and UAT Testing Signoff Operate Discover Assess (Enterprise Architecture and Determine Migration Path Application Lift and Shift Determine Migration Process Plan Migration and Sequencing 3rd Party Migration Tool Tuning Cutover Applications) Vendor S/PaaS (if available) Move the Application Refactor for AWS Recode App Components Manually Move App and Data Architect AWS Environment Replatform (typically legacy applications) Rearchitect Application Recode Application and Deploy App Migrate Data Figure 2: Migration Options Applications 
that deliver increased ROI through reduced operation costs or deliver increased business results should be at the top of the priority list Then you can determine the best migration path for each workload to optimize cost in the migration process ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 12 of 13 Conclusion Many organizations are extending or moving their business applications to AWS to simplify infrastructure management deploy quicker provide greater availability increase agility allow for faster innovation and lower cost Having a clear understanding of existing infrastructure costs the components of your migration bubble and their corresponding costs and projected savings will help you calculate payback time and projected ROI With a long history in enabling enterprises to successfully adopt cloud computing Amazon Web Services delivers a mature set of services specifically designed for the unique security compliance privacy and governance requirements of large organizations With a technology platform that is both broad and deep Professional Services and Support organizations robust training programs and an ecosystem tens ofthousands strong AWS can help you move faster and do more With AWS you can: Take advantage of more services storage options and security controls than any other cloud platform Deliver on stringent standards with the broadest set of certifications accreditations and controls in the industry Get deep assistance with our global cloud focused enterprise professional services support and training teams ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 13 of 13 Further Reading For additional help please consult the following sources: The AWS Cloud Adoption Framework http://d0awsstaticcom/whitepapers/aws_cloud_adoption_frameworkp df Contributors The following individuals and organizations contributed to this document: Blake Chism Practice Manager AWS Public Sector Sales Var Carina Veksler Public Sector Solutions AWS Public Sector Sales Var
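As a companion to the migration decision flow in Figure 2 above, the following sketch shows one simplified way to tag an application inventory with a candidate disposition from the "6 Rs". The rules and field names are assumptions made for illustration; a real portfolio assessment weighs many more factors, such as dependencies, licensing, data gravity, and compliance.

def suggest_disposition(app: dict) -> str:
    """Return a candidate '6 Rs' disposition for one application record.

    These rules are deliberately simplified assumptions and only echo the
    decision points of Figure 2 at a high level.
    """
    if not (app.get("critical") or app.get("strategic")):
        return "retire"            # legacy: replace or eliminate
    if app.get("compliance_hold"):
        return "retain"            # keep on premises for now
    if app.get("saas_alternative"):
        return "repurchase"        # move to a vendor SaaS/PaaS offering
    if app.get("legacy_architecture"):
        return "refactor"          # rearchitect and recode for cloud services
    if app.get("needs_minor_changes"):
        return "replatform"        # modest changes, e.g., move to a managed database
    return "rehost"                # lift and shift with minimal change


portfolio = [
    {"name": "records-mgmt", "critical": True, "needs_minor_changes": True},
    {"name": "legacy-reporting", "critical": False, "strategic": False},
    {"name": "land-mgmt", "strategic": True, "legacy_architecture": True},
]

for app in portfolio:
    print(f"{app['name']}: {suggest_disposition(app)}")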
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlActive Di rectory Domain Services on AWS Design and Planning Guide November 20 2020 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlContents Importance of Active Directory in the cloud 1 Terminology and definitions 1 Shared responsibility model 3 Direct ory services options in AWS 4 AD Connector 4 AWS Managed Microsoft Active Directory 5 Active Directory on EC2 7 Comparison of Active Directory Services on AWS 7 Core infrastructure design on AWS for Windows Workloads and Directory Services 9 Planning AWS accounts and Organization 9 Network design considerations for AWS Managed Microsoft AD 9 Design consideration for AWS Managed Micro soft Active Directory 12 Single account AWS Region and VPC 12 Multiple accounts and VPCs in one AWS Region 13 Multiple AWS Regions deploymen t 14 Enable Multi Factor Authentication for AWS Managed Microsoft AD 16 Active Directory permissions delegation 17 Design considerations for running Active Directory on EC2 instances 18 Single Region deployment 18 Multi region/global deployment of self managed AD 20 Designing Active Directory sites and services topology 21 Security considerations 22 Trust relationships with on premises Active Directory 22 Multi factor authentication 24 AWS account security 24 Domain controller security 24 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlOther considerations 25 Conclusion 26 Contributors 26 Further Reading 27 Document Revisions 27 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAbstract Cloud is now the center of most enterprise IT strategies Many enterprises find that a wellplanned move to the cloud results in an immediate business payoff Active Directory is a foundation of the IT infrastructure for many large enterprises This whitepaper covers best practices for designing Active Directory Domain Services (AD DS) architecture in Amazon Web Services (AWS) including AWS Managed Microsoft AD Active Directo ry on Amazon Elastic Compute Cloud (Amazon EC2) instances and hybrid scenarios This version 
has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 1 Importance of Active Directory in the cloud Microsoft Active Directory was introduced in 1999 and became de facto standard technology for centralized management of Microsoft Windows computers and user authentications Active Directory serves as a distributed hierarchical data storage for information about corporate IT infrastructure including Domain Name System (DNS) zones and records devices and users user credentials and access rights based on groups membership Currently 95% of enterprises use Active Directory for authentication Successful adoption of cloud technology requires considering existing IT infr astructure and applications deployed on premises Reliable and secure Active Directory architecture is a critical IT infrastructure foundation for companies running Windows workloads Terminology and definitions AWS Managed Microsoft Active Directory AWS Directory Service for Microsoft Active Directory also known as AWS Managed Microsoft AD is Microsoft Windows Server Active Directory Domain Services (AD DS) deployed and managed by AWS for you The service runs on actual Windows Server for the highest po ssible fidelity and provides the most complete implementation of AD DS functionality of cloud managed AD DS services available today Active Directory Connector (AD Connector) is a directory gateway (proxy) that redirects directory requests from AWS applic ations and services to existing Microsoft Active Directory without caching any information in the cloud It does not require any trusts or synchronization of user accounts Active Directory Trust A trust relationship (also called a trust) is a logical rel ationship established between domains to allow authentication and authorization to shared resources The authentication process verifies the identity of the user The authorization process determines what the user is permitted to do on a computer system or network Active Directory Sites and Services In Active Directory a site represents a physical or logical entity that is defined on the domain controller Each site is associated with an Active Directory domain Each site also has IP definitions for what IP addresses and ranges belong to that site Domain controllers use site information to inform Active Directory clients about domain controllers present within the closest site to the client This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 2 Amazon V irtual Private Cloud ( Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including the selection of your own private IP address ranges creation of subnets and configuration of route tables and network gateways You can also create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC to leverage the AWS Cloud as an extension of your corporate data ce nter AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS Using AWS Direct Connect you 
can establish private connectivity between AWS and your data center office or colocation environment AWS Single Sign On (AWS SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications With AWS SSO you can easily manage SSO access and user permissions to all of your accounts in AWS Organi zations centrally AWS Transit Gateway is a service that enables customers to connect their VPCs and their on premises networks to a single gateway Domain controller (DC) – an Active Directory server that responds to authentication requests and store a re plica of Active Directory database Flexible Single Master Operation (FSMO) roles In Active Directory some critical updates are performed by a designated domain controller with a specific role and then replicated to all other DCs Active Directory uses r oles that are assigned to DCs for these special tasks Refer to the Microsoft documentation web site for more information on FSMO roles Global Catalog A glob al catalog server is a domain controller that stores partial copies of all Active Directory objects in the forest It stores a complete copy of all objects in the directory of your domain and a partial copy of all objects of all other forest domains Read Only Domain Controller (RODC ) Read only domain controllers (RODCs) hold a copy of the AD DS database and respond to authentication requests but applications or other servers cannot write to them RODCs are typically deployed in locations where physical s ecurity cannot be provided VPC Peering A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 3 addresses Instances in either VPC can communicate with each other as if they are within the same network Shared responsibility model When operating in the AWS Cloud Security and Compliance is a shared responsibility between AWS and the custome r AWS is responsible for security “of” the cloud whereas customers are responsible for security “in” the cloud Figure 1 Shared Responsibility Model when operating in AWS Cloud AWS is responsible for securing its software hardware and the facilities where AWS services are located including securing its computing storage networking and database services In addition A WS is responsible for the security configuration of AWS Managed Services like Amazon DynamoDB Amazon Relational Database Service (Amazon RDS) Amazon Redshift Amazon EMR Amazon WorkSpaces and so on Customers are responsible for implementing appropria te access control policies using AWS Identity and Access Management ( IAM) configuring AWS Security Groups (Firewall) to prevent unauthorized access to ports and enabling AWS CloudTrail Customers are also responsible for enforcing appropriate data loss p revention policies to ensure compliance with internal and external policies as well as detecting and This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 4 remediating threats arising from stolen account credentials or malicious or accidental misuse of AWS If you decide to run 
your own Active Directory on Am azon EC2 instances you have full administrative control of the operating system and the A ctive Directory environment You can set up custom configurations and create a complex hybrid deployment topology However you must operate and support it in the sam e manner as you do with onpremises Active Directory If you use AWS Managed Microsoft AD AWS provides instance deployment in one or multiple regions operational management of your directory monitoring backup patching and recovery services You confi gure the service and perform administrative management of users groups computers and policies AWS Managed Microsoft AD has been audited and approved for use in deployments that require Federal Risk and Authorization Management (FedRAMP) Payment Card Industry Data Security Standard (PCI DSS) US Health Insurance Portability and Accountability Act (HIPAA) or Service Organizational Control (SOC) compliance When used with compliance requirements it is your responsibility to configure the directory password policies and ensure that the entire application and infrastructure deployment meets your compliance requirements For more information see Manag e Compliance for AWS Managed Microsoft AD Directory services options in AWS AWS provides a comprehensive set of services and tools for deploying Microsoft Windows workloads on its rel iable and secure cloud infrastructure AWS Active Directory Connector (AD Connector) and AWS Managed Microsoft AD are fully managed services that allow you to connect AWS applications to an existing Active Directory or host a new Active Directory in the cl oud Together with the ability to deploy selfmanaged Active Directory in Amazon EC2 instances these services cover all cloud and hybrid scenarios for enterprise identity services AD Connector AD Connector can be used in the following scenarios: • Sign in to AWS applications such as Amazon Chime Amazon WorkDocs Amazon WorkMail or Amazon WorkSpaces using corporate credentials (See the list of compatible applications on the AWS Documentation site) This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ activedirectorydomainservices/activedirectory domainserviceshtmlAmazon Web Services Active Directory Domain Services on AWS 5 • Enable Access to the AWS Management Console with AD Crede ntials For large enterprises AWS recommends us ing AWS Single Sign On • Enable multi factor authentication by integrating with your existing RADIUS based MFA infrastructure • Join Windows EC2 instances to your on premises Active Directory Note: Amazon RDS for SQL Server and Amazon FSx for Windows File Server are not compatible with AD Connector Amazon RDS for SQL Server compatible with AWS Managed Microsoft AD only Amazon FSx for Windows File Server can be deployed with AWS Managed Microsoft AD or self managed Active Directory AWS Managed Microsoft Active Directory AWS Directory Service lets you run Microsoft Active Directory as a managed service By default each AWS Managed Microsoft AD has a minimum of two domain controllers each deployed in a separate Availability Zone (AZ) for resiliency and fault tolerance All domain controllers are exclusively yours with nothing shared with any oth er AWS customer AWS provides operational management to monitor update backup and recover domain controller instances You administer users groups computer and group policies using standard Active Directory tools from a Windows computer joined to the AWS Managed Microsoft 
AWS Managed Microsoft Active Directory

AWS Directory Service lets you run Microsoft Active Directory as a managed service. By default, each AWS Managed Microsoft AD directory has a minimum of two domain controllers, each deployed in a separate Availability Zone (AZ) for resiliency and fault tolerance. All domain controllers are exclusively yours, with nothing shared with any other AWS customer. AWS provides operational management to monitor, update, back up, and recover domain controller instances. You administer users, groups, computers, and group policies using standard Active Directory tools from a Windows computer joined to the AWS Managed Microsoft AD domain.

AWS Managed Microsoft AD preserves the Windows single sign-on (SSO) experience for users who access AD DS-integrated applications in a hybrid IT environment. With AD DS trust support, your users can sign in once on premises and access Windows workloads running on premises and in the cloud. You can optionally expand the scale of the directory by adding domain controllers, thereby enabling you to distribute requests to meet your performance requirements. You can also share the directory with any account and VPC.

Multi-Region replication can be used to automatically replicate your AWS Managed Microsoft AD directory data across multiple Regions, so you can improve performance for users and applications in dispersed geographic locations. AWS Managed Microsoft AD uses native AD replication to replicate your directory's data securely to the new Region. Multi-Region replication is only supported for the Enterprise Edition of AWS Managed Microsoft AD.

AWS Managed Microsoft AD enables you to forward each domain controller's Windows Security event log to Amazon CloudWatch, giving you the ability to monitor your use of the directory and any administrative intervention performed in the course of AWS operating the service. It is also approved for applications in the AWS Cloud that are subject to compliance with the US Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI DSS), Federal Risk and Authorization Management Program (FedRAMP), or Service Organization Control (SOC) when you enable compliance for your directory.

You can also tailor security with features that enable you to manage password policies and enable secure LDAP communications through Secure Sockets Layer (SSL)/Transport Layer Security (TLS). You can also enable multi-factor authentication (MFA) for AWS Managed Microsoft AD. This authentication provides an additional layer of security when users access AWS applications from the internet, such as Amazon WorkSpaces or Amazon QuickSight.

AWS Managed Microsoft AD enables you to extend your schema and perform LDAP write operations. These features, combined with advanced security features such as Kerberos constrained delegation and Group Managed Service Accounts, provide the greatest degree of compatibility for Active Directory-aware applications like Microsoft SharePoint, Microsoft SQL Server Always On Availability Groups, and many .NET applications. Because Active Directory is an LDAP directory, you can also use AWS Managed Microsoft AD for Linux Secure Shell (SSH) authentication and other LDAP-enabled applications. The full list of supported AWS applications is available on the AWS Documentation site.

AWS Managed Microsoft AD runs actual Windows Server 2012 R2 Active Directory Domain Services and operates at the 2012 R2 functional level. AWS Managed Microsoft AD is available in two editions: Standard and Enterprise. These editions have different storage capacity; Enterprise Edition also has Multi-Region features.

Edition | Storage capacity | Approximate number of objects that can be stored* | Approximate number of users in domain*
Standard | 1 GB | ~30,000 | Up to ~5,000 users
Enterprise | 17 GB | ~500,000 | Over 5,000 users

* The number of objects varies based on the type of objects, schema extensions, number of attributes, and data stored in attributes.
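The directory itself, and the Security event log forwarding described above, can both be provisioned with boto3. The sketch below is illustrative only; the domain name, VPC, subnet IDs, and log group name are placeholder values, and the log group may additionally need a CloudWatch Logs resource policy that allows Directory Service to publish to it.

```python
import boto3

ds = boto3.client("ds")
logs = boto3.client("logs")

# Create an Enterprise Edition AWS Managed Microsoft AD directory in two subnets
# that sit in different Availability Zones (placeholder identifiers).
directory = ds.create_microsoft_ad(
    Name="corp.example.com",
    ShortName="CORP",
    Password="initial-Admin-password",       # password for the delegated Admin account
    Description="Managed AD for Windows workloads",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
    Edition="Enterprise",
)
directory_id = directory["DirectoryId"]

# Forward the domain controllers' Windows Security event logs to CloudWatch Logs.
# (The log group must permit delivery from Directory Service via a resource policy.)
log_group = "/aws/directoryservice/corp.example.com"
logs.create_log_group(logGroupName=log_group)
ds.create_log_subscription(DirectoryId=directory_id, LogGroupName=log_group)
```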
Note: AWS Domain Administrators have full administrative access to all domains hosted on AWS. See your agreement with AWS and the AWS Data Privacy FAQ for more information about how AWS handles content that you store on AWS systems, including directory information. You do not have Domain or Enterprise Admin permissions and rely on delegated groups for administration.

AWS Managed Microsoft AD can be used for the following scenarios: managing access to the AWS Management Console and cloud services, joining EC2 Windows instances to Active Directory, deploying Amazon RDS databases with Windows authentication, using Amazon FSx for Windows File Server, and signing in to productivity tools like Amazon Chime and Amazon WorkSpaces. For more information on this solution, see Design considerations for AWS Managed Microsoft Active Directory in this document.

Active Directory on EC2

If you prefer to extend your Active Directory to AWS and manage it yourself, for flexibility or other reasons, you have the option of running Active Directory on EC2. For more information, see Design considerations for running Active Directory on EC2 instances in this document.

Comparison of Active Directory Services on AWS

The following table compares the features and functions of the directory service options available on AWS. Many features are not directly applicable to AD Connector because it acts only as a proxy to an existing Active Directory domain.

Function | AWS AD Connector | AWS Managed Microsoft AD | Active Directory on EC2
Managed service | yes | yes | no
Multi-Region deployment | n/a | yes (Enterprise Edition) | yes
Share directory with multiple accounts | no | yes | no
Supported by AWS applications (Amazon Chime, Amazon WorkSpaces, AWS Single Sign-On, etc.) | yes | yes | yes (through federation or AD Connector)
Supported by RDS (SQL Server, Oracle, MySQL, PostgreSQL, and MariaDB) | n/a | yes | no
Supported by FSx for Windows File Server | n/a | yes | yes
Creating users and groups | yes | yes | yes
Joining computers to the domain | yes | yes | yes
Create trusts with existing Active Directory domains and forests | n/a | yes | yes
Seamless domain join for Windows and Linux EC2 instances | yes | yes | yes (with AWS AD Connector)
Schema extensions | n/a | yes | yes
Add domain controllers | n/a | yes | yes
Group Managed Service Accounts | n/a | yes | depends on the Windows Server version
Kerberos constrained delegation | n/a | yes | yes
Support Microsoft Enterprise CA | n/a | yes | yes
Multi-factor authentication | yes (through RADIUS) | yes (through RADIUS) | yes (with AD Connector)
Group Policy | n/a | yes | yes
Active Directory Recycle Bin | n/a | yes | yes
PowerShell support | n/a | yes | yes

Core infrastructure design on AWS for Windows workloads and Directory Services

Planning AWS accounts and Organizations

AWS Organizations helps you centrally manage your AWS accounts, identity services, and access policies for your workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. For more information, refer to the AWS Organizations User Guide.

With AWS Organizations, you can centrally define critical resources and make them available to accounts across your organization. For example, you can authenticate against your central identity store and enable applications deployed in other accounts to access it. If your users need to manage AWS services and access AWS applications with their Active Directory credentials, we recommend integrating your identity service with the management account in AWS Organizations:

• Deploy AWS Managed Microsoft AD in the management account with a trust to your on-premises Active Directory to allow users from any trusted domain to access AWS applications, and share the AWS Managed Microsoft AD directory with other accounts across your organization (a minimal trust-creation sketch follows this list).
• Deploy AWS Single Sign-On in the management account to centrally manage access to multiple AWS accounts and business applications, and to provide users with single sign-on access to all of their assigned accounts and applications from one place. AWS SSO also includes built-in integrations to many business applications, such as Salesforce, Box, and Microsoft Office 365.
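As a companion to the first bullet, the following boto3 sketch creates a one-way forest trust from an AWS Managed Microsoft AD directory to an on-premises domain. The directory ID, domain name, and DNS addresses are placeholders, and the trust password is assumed to match the one configured on the on-premises side of the trust.

```python
import boto3

ds = boto3.client("ds")

# Placeholder values; the same trust password must be set on the on-premises forest.
ds.create_trust(
    DirectoryId="d-1234567890",
    RemoteDomainName="onprem.example.com",
    TrustPassword="shared-trust-password",
    TrustDirection="One-Way: Outgoing",      # AWS Managed AD trusts the on-premises forest
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.1.0.10", "10.1.1.10"],   # on-premises DNS servers
)
```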
Network design considerations for AWS Managed Microsoft AD

Network design for Microsoft workloads and directory services consists of network connectivity and DNS name resolution. To plan the network topology for your organization, refer to the whitepaper Building a Scalable and Secure Multi-VPC AWS Network Infrastructure and consider the following recommendations:

• Plan your IP networks for Microsoft workloads without overlapping address spaces. Microsoft does not recommend using Active Directory over NAT.
• Place directory services into a centralized VPC that is reachable from any other VPC with workloads that depend on Active Directory.
• By default, instances that you launch into a VPC cannot communicate with your on-premises network. To extend your existing AD DS into the AWS Cloud, you must connect your on-premises network to the VPC in one of two ways: by using Virtual Private Network (VPN) tunnels or by using AWS Direct Connect.
• To connect multiple VPCs in AWS, you can use VPC peering or AWS Transit Gateway.

Network port requirements and security groups

Active Directory requires certain network ports to be open to allow traffic for LDAP, AD DS replication, user authentication, the Windows Time service, Distributed File System (DFS), and more. When you deploy Active Directory on EC2 instances using the AWS Quick Start, or when you use AWS Managed Microsoft AD, a security group with all required port rules is created automatically. If you deploy your Active Directory manually, you need to create a security group and configure rules for all required network protocols; a minimal sketch follows this paragraph. For a complete list of ports, see Active Directory and Active Directory Domain Services Port Requirements in the Microsoft TechNet Library.
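The following boto3 sketch creates such a security group and opens a representative subset of the Active Directory ports toward a client CIDR range. It is illustrative only: the VPC ID and CIDR are placeholders, and the rule list is intentionally incomplete, so consult the Microsoft port reference above for the full set (including the dynamic RPC range).

```python
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="ad-domain-controllers",
    Description="Inbound rules for self-managed Active Directory domain controllers",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC
)
sg_id = sg["GroupId"]

client_cidr = "10.0.0.0/16"                    # placeholder: subnets that host AD clients
rules = [
    ("tcp", 53, 53),      # DNS
    ("udp", 53, 53),      # DNS
    ("tcp", 88, 88),      # Kerberos
    ("udp", 88, 88),      # Kerberos
    ("udp", 123, 123),    # Windows Time (NTP)
    ("tcp", 135, 135),    # RPC endpoint mapper
    ("tcp", 389, 389),    # LDAP
    ("udp", 389, 389),    # LDAP
    ("tcp", 445, 445),    # SMB / DFS
    ("tcp", 636, 636),    # LDAPS
    ("tcp", 3268, 3269),  # Global catalog
]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {
            "IpProtocol": proto,
            "FromPort": start,
            "ToPort": end,
            "IpRanges": [{"CidrIp": client_cidr}],
        }
        for proto, start, end in rules
    ],
)
```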
DNS name resolution

Active Directory relies heavily on DNS and hosts its own DNS service on domain controllers. To establish seamless name resolution across all of your VPCs and your on-premises network, create a Route 53 Resolver, deploy inbound and outbound endpoints in your VPC, and configure conditional forwarders in the Route 53 Resolver for all of your Active Directory domains (including AWS Managed Microsoft AD and on-premises Active Directory). Share the centralized Route 53 Resolver endpoints across all VPCs in your organization. On your on-premises DNS servers, create conditional forwarders for all Route 53 DNS zones and the DNS zones on AWS Managed Microsoft AD, and point them to the Route 53 Resolver endpoints.

Figure 2: Route 53 Resolver configuration for a hybrid network

Consider the following points when designing DNS resolution:

• Make all Active Directory DNS domains resolvable for all clients, because clients use DNS to locate Active Directory services and to register their own DNS names using dynamic updates.
• Try to keep DNS name resolution local to the AWS Region to reduce latency.
• Use the Amazon DNS server (the .2 resolver) as a forwarder for all DNS domains that are not authoritative on the DNS servers running on your Active Directory domain controllers. This setup allows your DCs to recursively resolve records in Amazon Route 53 private zones and to use Route 53 Resolver conditional forwarders.
• Use Route 53 Resolver endpoints to create a DNS resolution hub and manage DNS traffic by creating conditional forwarders.

For more information on designing a DNS name resolution strategy in a hybrid scenario, see the Amazon Route 53 Resolver for Hybrid Clouds blog post.

Note: Amazon EC2 limits the number of packets that can be sent to the Amazon-provided DNS server to a maximum of 1,024 packets per second per network interface. This limit cannot be increased. If you run into this performance limit, set up conditional forwarding for Amazon Route 53 private zones to use the Amazon DNS server (the .2 resolver) and use root hints for internet name resolution. This setup reduces the chances of exceeding the 1,024 packets-per-second limit on the AWS DNS resolver.
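A minimal boto3 sketch of the outbound-forwarding part of this setup is shown below: it creates an outbound Route 53 Resolver endpoint, a forwarding rule for the Active Directory domain, and associates the rule with a workload VPC. Subnet, security group, and VPC IDs, the domain name, and the target DNS addresses are all placeholders.

```python
import boto3

r53r = boto3.client("route53resolver")

# Outbound endpoint in two subnets (placeholder IDs); DNS queries leave the VPC from here.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="ad-outbound-endpoint-1",
    Name="ad-outbound",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    Direction="OUTBOUND",
    IpAddresses=[
        {"SubnetId": "subnet-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222"},
    ],
)
endpoint_id = endpoint["ResolverEndpoint"]["Id"]

# Conditional forwarder: send queries for the AD domain to its DNS servers.
rule = r53r.create_resolver_rule(
    CreatorRequestId="ad-forward-rule-1",
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.0.0.10", "Port": 53}, {"Ip": "10.0.1.10", "Port": 53}],
    ResolverEndpointId=endpoint_id,
)

# Associate the rule with a workload VPC (repeat, or share via AWS RAM, for other VPCs).
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    Name="workload-vpc-association",
    VPCId="vpc-0fedcba9876543210",
)
```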
Design considerations for AWS Managed Microsoft Active Directory

Your Active Directory design depends on your network and account design, so choose your network and organizational design before you select an Active Directory topology. Although there is no one-size-fits-all answer for how many AWS accounts a particular customer should have, most companies create more than one AWS account, because multiple accounts provide the highest level of resource and billing isolation in the following cases:

• The business requires strong fiscal and budgetary billing isolation between specific workloads, business units, or cost centers.
• The business requires administrative isolation between workloads.
• The business requires a particular workload to operate within specific AWS service limits and not impact the limits of another workload.
• The business's workloads depend on specific instance reservations to support high availability (HA) or disaster recovery (DR) capacity requirements.

Single account, AWS Region, and VPC

The simplest case is when you need to deploy a new solution in the cloud from scratch. You can deploy AWS Managed Microsoft AD in minutes and use it for most of the services and applications that require Active Directory. This solution is ideal for scenarios with no additional requirements for logical isolation between application tiers or administrators.

Figure 3: Managed Active Directory architecture deployed by the Quick Start

Multiple accounts and VPCs in one AWS Region

Large organizations use multiple AWS accounts for administrative delegation and billing purposes. You can share a single AWS Managed Microsoft AD directory with multiple AWS accounts within one AWS Region. This capability makes it easier and more cost effective to manage directory-aware workloads from a single directory across accounts and VPCs, and it also allows you to seamlessly join your Amazon EC2 Windows instances to AWS Managed Microsoft AD.

Figure 4: Sharing a single AWS Managed Microsoft AD directory with another account

AWS recommends that you create a separate account for identity services like Active Directory and allow only a very limited group of administrators to have access to this account. Generally, you should treat Active Directory in the cloud in the same manner as on-premises Active Directory: just as you would limit access to a physical data center, make sure to limit administrative access to the AWS account. Create additional AWS accounts as necessary in your organization and share the AWS Managed Microsoft AD directory with them. After you have shared the directory and configured routing, users in those accounts can use Active Directory to join EC2 Windows instances, but you maintain control of all administrative tasks.

Deploy AWS Managed Microsoft AD in the management account of your AWS Organization. This allows you to use Managed AD for authentication with AWS Identity and Access Management (IAM), so that users can access the AWS Management Console and other AWS applications using their Active Directory credentials.
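Sharing the directory from the identity (or management) account and accepting it in a workload account can be scripted as shown below. This is a sketch with placeholder directory and account IDs; with ShareMethod set to HANDSHAKE the target account must accept the invitation, which the second client illustrates (a real script would typically assume a role in that account rather than use a named profile).

```python
import boto3

# In the account that owns the AWS Managed Microsoft AD directory.
ds_owner = boto3.client("ds")
shared = ds_owner.share_directory(
    DirectoryId="d-1234567890",               # placeholder directory ID
    ShareNotes="Shared for EC2 domain join in the workload account",
    ShareTarget={"Id": "111122223333", "Type": "ACCOUNT"},   # placeholder account ID
    ShareMethod="HANDSHAKE",
)
shared_directory_id = shared["SharedDirectoryId"]

# In the target (workload) account, accept the shared directory.
ds_target = boto3.Session(profile_name="workload-account").client("ds")
ds_target.accept_shared_directory(SharedDirectoryId=shared_directory_id)
```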
Multiple AWS Regions deployment

AWS Managed Microsoft AD Enterprise Edition supports Multi-Region deployment. You can use automated Multi-Region replication in all Regions where AWS Managed Microsoft AD is available. AWS services such as Amazon RDS for SQL Server and Amazon FSx connect to the local instances of the global directory. This allows your users to sign in once to AD-aware applications running in AWS, as well as to AWS services like Amazon RDS for SQL Server, in any AWS Region, using credentials from AWS Managed Microsoft AD or a trusted AD domain or forest. Refer to the AWS Directory Service documentation for the current list of AWS services supporting the Multi-Region replication feature.

With Multi-Region replication in AWS Managed Microsoft AD, AD-aware applications such as SharePoint and SQL Server Always On, and AWS services such as Amazon RDS for SQL Server and Amazon FSx for Windows File Server, use the directory locally for high performance and span multiple Regions for high resiliency. Additional benefits of Multi-Region replication include:

• It enables you to deploy a single AWS Managed Microsoft AD instance globally and quickly, eliminating the heavy lifting of self-managing a global AD infrastructure.
• Optimal performance for workloads deployed in multiple Regions.
• Multi-Region resiliency: AWS Managed Microsoft AD handles automated software updates, monitoring, recovery, and the security of the underlying AD infrastructure across all Regions.
• Disaster recovery: in the event that all domain controllers in one Region are down, AWS Managed Microsoft AD recovers the domain controllers and replicates the directory data automatically, while domain controllers in other Regions remain up and running.

To deploy AWS Managed Microsoft AD across multiple Regions, you create it in a primary Region and then add one or more replicated Regions. Consider the following factors in your Active Directory design:

• When you add a new Region, AWS Managed Microsoft AD creates two domain controllers in the selected VPC in that Region. You can add more domain controllers later for scalability.
• AWS Managed Microsoft AD uses a backend network for replication and communication between domain controllers.
• AWS Managed Microsoft AD creates a new Active Directory site and names it after the Region (for example, us-east-1). You can rename it later using the Active Directory Sites and Services tool.
• AWS Managed Microsoft AD is configured to use change notifications for inter-site replication to eliminate replication delays.

After you add the new Region, you can do any of the following tasks:

• Add more domain controllers to the new Region for horizontal scalability.
• Share your directory with more AWS accounts per Region. Directory sharing configurations are not replicated from the primary Region, so you can have different sharing configurations in different Regions based on your security requirements.
• Enable log forwarding to retrieve your directory's security logs using Amazon CloudWatch Logs from the new Region. When you enable log forwarding, you must provide a log group name in each Region where you replicated your directory.
• Enable Amazon Simple Notification Service (Amazon SNS) monitoring for the new Region to track your directory health status per Region.

Enable Multi-Factor Authentication for AWS Managed Microsoft AD

You can enable multi-factor authentication (MFA) for your AWS Managed Microsoft AD directory to increase security when your users specify their Active Directory credentials to access supported Amazon enterprise applications. When you enable MFA, your users enter their user name and password (first factor) and then enter an authentication code (second factor) that they obtain from your virtual or hardware MFA solution. Together, these factors provide additional security by preventing access to your Amazon enterprise applications unless users supply valid user credentials and a valid MFA code.

To enable MFA, you must have an MFA solution that is a Remote Authentication Dial-In User Service (RADIUS) server, or you must have an MFA plugin to a RADIUS server already implemented in your on-premises infrastructure. Your MFA solution should implement one-time passcodes (OTP) that users obtain from a hardware device or from software running on a device such as a mobile phone.

Figure 6: Using AWS Managed Microsoft Active Directory with MFA for access to Amazon WorkSpaces

A more detailed description of this solution is available on the AWS Security Blog.
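Once a RADIUS-based MFA solution is reachable from the directory's VPC, it can be attached with a single boto3 call. The sketch below uses placeholder values for the directory ID, RADIUS server addresses, and shared secret; the authentication protocol must match whatever your RADIUS implementation expects.

```python
import boto3

ds = boto3.client("ds")

ds.enable_radius(
    DirectoryId="d-1234567890",                  # placeholder directory ID
    RadiusSettings={
        "RadiusServers": ["10.0.0.5", "10.0.1.5"],   # placeholder RADIUS/MFA servers
        "RadiusPort": 1812,
        "RadiusTimeout": 10,
        "RadiusRetries": 4,
        "SharedSecret": "radius-shared-secret",
        "AuthenticationProtocol": "PAP",         # must match the RADIUS server configuration
        "DisplayLabel": "MFA",
        "UseSameUsername": True,
    },
)
```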
Active Directory permissions delegation

When you use AWS Managed Microsoft AD, AWS assumes responsibility for some of the service-level tasks so that you can focus on other business-critical tasks. The following service-level tasks are performed automatically by AWS:

• Taking snapshots of the directory and providing the ability to recover data
• Creating trusts by administrator request
• Extending the Active Directory schema by administrator request
• Managing Active Directory forest configuration
• Managing, monitoring, and updating domain controllers
• Managing and monitoring the DNS service for Active Directory
• Managing and monitoring Active Directory replication
• Managing Active Directory sites and network configuration

With AWS Managed Microsoft AD, you can also delegate administrative permissions to groups in your organization. These permissions include managing user accounts, joining computers to the domain, managing group policies and password policies, and managing DNS, DHCP, DFS, RAS, CA, and other services. The full list of permissions that can be delegated is described in the AWS Directory Service Administration Guide. Work with all teams that use Active Directory services in your organization and create a list of all the permissions that must be delegated. Plan security groups for different administrative roles and use the AWS Managed Microsoft AD delegated groups to assign permissions. Check the AWS Directory Service Administration Guide to make sure that it is possible to delegate all of the required permissions.

Design considerations for running Active Directory on EC2 instances

If you cannot use AWS Managed Microsoft AD and you have Windows workloads that you want to deploy on AWS, you can still run Active Directory on EC2 instances. Depending on the number of Regions where you are deploying your solution, your Active Directory design may differ slightly. The following sections provide deployment guidance and recommendations for running Active Directory on EC2 instances in AWS.

Single Region deployment

This deployment scenario is applicable if you are operating in a single Region or you do not need Active Directory to be in more than a single Region. The deployment options and architecture patterns are not significantly different whether you are operating in a single VPC or multiple VPCs. If you are using multiple VPCs, you must ensure that network connectivity between the VPCs is available through VPC peering, VPN, or AWS Transit Gateway. The following diagrams depict how Active Directory can be deployed in a single Region, in a single VPC or in multiple VPCs.
Figure 7: Deploying Active Directory on EC2 instances in a single Region, single VPC

Figure 8: Deploying Active Directory on EC2 instances in a single Region, multiple VPCs

Consider the following points when deploying Active Directory in this architecture:

• We recommend deploying at least two domain controllers (DCs) in a Region, placed in different AZs for availability (a launch sketch follows this list).
• DCs and other non-internet-facing servers should be placed in private subnets.
• If you require additional DCs for performance, you can add more DCs to existing AZs or deploy into another available AZ.
• Configure the VPCs in a Region as a single Active Directory site and define Active Directory subnets accordingly. This configuration ensures that all of your clients correctly select the closest available DC.
• If you have multiple VPCs, you can centralize the Active Directory services in one of the existing VPCs or create a shared services VPC to centralize the domain controllers.
• Ensure that you have highly available network connectivity between VPCs, such as VPC peering. If you are connecting the VPCs using VPNs or other methods, ensure that the connectivity is highly available.
• If you want to use your self-managed Active Directory credentials to access AWS services or third-party services, you can integrate your self-managed AD with AWS IAM and AWS Single Sign-On using AWS AD Connector, or with AWS Managed Microsoft AD through a trust relationship. In these cases, AD Connector or AWS Managed Microsoft AD must be deployed in the management account of your organization.
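A minimal sketch of the first two bullets, launching one domain controller instance into each of two private subnets in different AZs, is shown below. The AMI, instance type, key pair, subnet, and security group identifiers are placeholders; promoting the instances to domain controllers is still done inside Windows (for example, with the AD DS role and Install-ADDSDomainController), which this sketch does not attempt.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values: a Windows Server AMI, two private subnets in different AZs,
# and the security group created earlier with the Active Directory port rules.
WINDOWS_AMI = "ami-0123456789abcdef0"
PRIVATE_SUBNETS = ["subnet-aaaa1111", "subnet-bbbb2222"]
DC_SECURITY_GROUP = "sg-0123456789abcdef0"

for index, subnet_id in enumerate(PRIVATE_SUBNETS, start=1):
    ec2.run_instances(
        ImageId=WINDOWS_AMI,
        InstanceType="m5.large",
        KeyName="example-keypair",
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,
        SecurityGroupIds=[DC_SECURITY_GROUP],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": f"dc-{index:02d}"}],
        }],
    )
```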
Multi-Region/global deployment of self-managed AD

If you are operating in more than one Region and require Active Directory to be available in those Regions, use the multi-Region/global deployment scenario. Within each Region, use the guidelines for single Region deployment, as all of the single Region best practices still apply. The following diagram depicts how Active Directory can be deployed in multiple Regions. In this example, Active Directory is deployed in three Regions that are interconnected using cross-Region VPC peering. In addition, these Regions are also connected to the corporate network using AWS Direct Connect and VPN.

Figure 9: Deploying Active Directory on EC2 instances in multiple Regions with multiple VPCs

Consider the following recommendations when deploying Active Directory in this architecture:

• Deploy at least two domain controllers in each Region, placed in different AZs for availability.
• Configure the VPCs in a Region as a single Active Directory site and define Active Directory subnets accordingly. This configuration ensures that all of your clients correctly select the closest available domain controller.
• Ensure that robust inter-Region connectivity exists between all of the Regions. Within AWS, you can leverage cross-Region VPC peering to achieve highly available private connectivity between the Regions (a peering sketch follows this list). You can also leverage the Transit VPC solution to interconnect multiple Regions.
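The cross-Region peering mentioned in the last bullet can be requested and accepted with boto3 as sketched below. VPC IDs, Regions, CIDRs, and route table IDs are placeholders; a complete setup would also add return routes in the peer Region and open the relevant security groups, and in practice you may need to wait briefly for the peering request to propagate before accepting it.

```python
import boto3

# Request a peering connection from a VPC in us-east-1 to a VPC in eu-west-1 (placeholders).
ec2_east = boto3.client("ec2", region_name="us-east-1")
peering = ec2_east.create_vpc_peering_connection(
    VpcId="vpc-0123456789abcdef0",        # requester VPC in us-east-1
    PeerVpcId="vpc-0fedcba9876543210",    # accepter VPC in eu-west-1
    PeerRegion="eu-west-1",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request from the peer Region.
ec2_west = boto3.client("ec2", region_name="eu-west-1")
ec2_west.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Add a route toward the peer VPC CIDR in one of the requester's route tables (placeholder).
ec2_east.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.2.0.0/16",    # CIDR of the eu-west-1 VPC
    VpcPeeringConnectionId=pcx_id,
)
```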
Designing Active Directory sites and services topology

It's important to define Active Directory sites and subnets correctly to prevent clients from using domain controllers that are located far away, which would increase latency. See How Domain Controllers are Located in Windows. Follow these best practices when configuring sites and services:

• Configure one Active Directory site per AWS Region. If you are operating in multiple AWS Regions, we recommend configuring one Active Directory site for each of these Regions.
• Define the entire VPC as a subnet and assign it to the Active Directory site defined for that Region.
• If you have multiple VPCs in the same Region, define each of these VPCs as a separate subnet and assign it to the single Active Directory site set up for that Region. This setup allows you to use domain controllers in that Region to service all clients in that Region.
• If you have enabled IPv6 in your Amazon VPC, create the necessary IPv6 subnet definitions and assign them to the Active Directory site.
• Define all IP address ranges. If clients exist in undefined IP address ranges, those clients might not be associated with the correct Active Directory site.
• If you have reliable, high-speed connectivity between all of the sites, you can use a single site link for all of your AD sites and maintain a single replication configuration.
• Use consistent site names in all AD forests connected by trusts.

Security considerations

Trust relationships with on-premises Active Directory

Whether you are deploying Active Directory on EC2 instances or using AWS Managed Microsoft AD, these are the three common deployment patterns seen on AWS:

1. Deploy a standalone forest/domain on AWS with no trust. In this model, you set up a new forest and domain on AWS, separate from the Active Directory that is running on premises. In this deployment, both accounts (user credentials, service accounts) and resources (computer objects) reside in the Active Directory running on AWS, and most or all of the member servers run on AWS in one or multiple Regions. There is no network connectivity requirement between on premises and AWS for the purposes of Active Directory, because nothing is shared between the two Active Directory forests.

2. Deploy a new forest/domain on AWS with a one-way trust. If you plan to use credentials from an on-premises Active Directory on AWS member servers, you must establish at least a one-way trust to the Active Directory running on AWS. In this model, the AWS domain becomes the resource domain, where computer objects are located, and the on-premises domain becomes the account domain.

Note: You must have robust connectivity between your data center and AWS. A connectivity issue can break authentication and make the whole solution inaccessible for users. Consider extending your Active Directory domains to AWS to eliminate the dependency on connectivity with on-premises infrastructure, or deploy a multi-path AWS Direct Connect or VPN connection.

3. Extend your existing domain to AWS. In this model, you extend your existing Active Directory deployment from on premises to AWS by adding domain controllers (running on Amazon EC2) to your existing domain and placing them in multiple AZs within your Amazon VPC. If you are operating in multiple Regions, add domain controllers in each of these Regions. This deployment is easy and flexible, and it provides the following advantages:
  o You are not required to set up additional trusts.
  o DCs in AWS handle both accounts and resources.
  o It is more resilient to network connectivity issues.
  o You can seamlessly set up and use the AWS Cloud in a hybrid scenario with the least impact to your applications. (Note that network connectivity is required between your data center and AWS for initial and ongoing replication of data between the domain controllers.)

When you use cross-forest trust relationships in Active Directory, use consistent Active Directory site names in both forests for optimal performance. Refer to the article Domain Locator Across a Forest Trust for more information. See How Domain and Forest Trusts Work on the Microsoft Documentation website for more information.

Multi-factor authentication

Multi-factor authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when users sign in to the AWS Management Console, they are prompted for their user name and password (the first factor, what they "know") and then prompted for an authentication response from their AWS MFA device (the second factor, what they "have"). Taken together, these multiple factors provide increased security for your AWS account settings and resources. We recommend enabling MFA on all of your privileged accounts, regardless of whether you are using IAM users or federating through SSO.

AWS account security

Because you are running your domain controllers on Amazon EC2, securing your AWS account is an important part of securing your Active Directory domain. Follow these recommendations to make sure your AWS account is secure (a small audit sketch follows this list):

• Enable MFA on the AWS root user and then lock away the root user credentials.
• Use IAM groups to manage permissions if you are using IAM users.
• Grant least privilege to all of your users within AWS.
• Enable MFA for all privileged users.
• Use EC2 instance roles for applications that run on EC2 instances.
• Do not share access keys.
• Rotate credentials regularly.
• Turn on and analyze log files from AWS CloudTrail, VPC Flow Logs, and Amazon S3 bucket logs.
• Turn on encryption for data at rest and in transit where necessary.
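The following is a small, read-only audit sketch (boto3) that checks a few of the items above: whether the root user has MFA, which IAM users lack an MFA device, and which access keys appear unused or stale. It is illustrative only, assumes credentials with IAM read permissions, uses an arbitrary example threshold, and omits pagination for brevity.

```python
import boto3
from datetime import datetime, timezone, timedelta

iam = boto3.client("iam")
STALE_AFTER = timedelta(days=90)   # example threshold for unused access keys

# Root user MFA status for the account.
summary = iam.get_account_summary()["SummaryMap"]
print("Root MFA enabled:", bool(summary.get("AccountMFAEnabled")))

# Users without an MFA device, and access keys that have not been used recently.
for user in iam.list_users()["Users"]:
    name = user["UserName"]
    if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
        print(f"{name}: no MFA device")
    for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
        last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        used_date = last_used["AccessKeyLastUsed"].get("LastUsedDate")
        if used_date is None or datetime.now(timezone.utc) - used_date > STALE_AFTER:
            print(f"{name}: access key {key['AccessKeyId']} unused or stale")
```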
Domain controller security

Domain controllers provide the physical storage for the AD DS database, in addition to providing the services and data that allow enterprises to effectively manage their servers, workstations, users, and applications. If a malicious user obtains privileged access to a domain controller, that user can modify, corrupt, or destroy the AD DS database and, by extension, all of the systems and accounts that are managed by Active Directory. Make sure your domain controllers are secure to avoid compromising your Active Directory data. The following are some best practices for securing domain controllers running on AWS:

• Secure the AWS account where the domain controllers are running by following least privilege and role-based access control.
• Ensure that unauthorized users don't have access in your AWS account to create or access Amazon Elastic Block Store (Amazon EBS) snapshots, launch or terminate EC2 instances, or create or copy EBS volumes.
• Deploy your domain controllers in private subnets without internet access. Ensure that the subnets where domain controllers are running don't have a route to a NAT gateway or other device that would provide outbound internet access.
• Keep the security patches on your domain controllers up to date. We recommend that you first test security patches in a non-production environment.
• Restrict the ports and protocols that are allowed into the domain controllers by using security groups. Allow remote management such as Remote Desktop Protocol (RDP) only from trusted networks.
• Leverage the Amazon EBS encryption feature to encrypt the root and additional volumes of your domain controllers, and use AWS Key Management Service (AWS KMS) for key management (a minimal sketch follows this list).
• Follow Microsoft's recommended security configuration baselines and Best Practices for Securing Active Directory.
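For the EBS encryption bullet, one low-effort option is to turn on EBS encryption by default in each Region where domain controllers run, backed by a customer managed KMS key, so that every new volume (including DC root volumes) is encrypted at creation. A hedged boto3 sketch follows; the key alias is a placeholder.

```python
import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# Customer managed key used for EBS volumes of the domain controllers (placeholder alias).
key = kms.create_key(Description="EBS encryption key for domain controllers")
key_id = key["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/dc-ebs", TargetKeyId=key_id)

# Make encryption the default for all new EBS volumes in this Region,
# and point the default at the customer managed key.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId="alias/dc-ebs")

# Note: this affects newly created volumes only; existing unencrypted volumes must be
# re-created from encrypted snapshots or copied with encryption enabled.
```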
Other considerations

FSMO roles

You can follow the same recommendations you would follow for your on-premises deployment to determine FSMO role placement on DCs; see also the best practices from Microsoft. In the case of AWS Managed Microsoft AD, all domain controllers and FSMO role assignments are managed by AWS and do not require you to manage or change them.

Global Catalog

Unless you have slow connections or an extremely large Active Directory database, we recommend adding the global catalog role to all of your domain controllers in multi-domain forests (except the domain controller with the Infrastructure Master role). If you are hosting Microsoft Exchange in the AWS Cloud, at least one global catalog server is required in a site with Exchange servers. For more information about the global catalog, see the Microsoft documentation. Because there is only one domain in the forest for AWS Managed Microsoft AD, all of its domain controllers are configured as global catalogs and have full information about all objects.

Read-Only Domain Controllers (RODC)

It's possible to deploy RODCs on AWS if you are running Active Directory on EC2 instances and require them, and there are no special considerations for doing so. AWS Managed Microsoft AD does not support RODCs; all of the domain controllers that are deployed as part of AWS Managed Microsoft AD are writable domain controllers.

Conclusion

AWS provides several options for deploying and managing Active Directory Domain Services in cloud and hybrid environments. You can leverage AWS Managed Microsoft AD if you no longer want to focus on heavy lifting like managing the availability of the domain controllers, patching, backups, and so on. Or you can run Active Directory on EC2 instances if you need full administrative control of your Active Directory. In this whitepaper, we have discussed these two main approaches to deploying Active Directory on AWS and have provided guidance and considerations for each design. Depending on your deployment pattern, scale requirements, and SLA, you may select one of these options to support your Windows workloads on AWS.

Contributors

Contributors to this document include:

• Vladimir Provorov, Senior Solutions Architect, Amazon Web Services
• Vinod Madabushi, Enterprise Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:

• AWS Whitepapers
• AWS Directory Service
• Microsoft Workloads on AWS
• Active Directory Domain Services on the AWS Cloud: Quick Start Reference Deployment
• AWS Documentation

Document Revisions

Date | Description
November 2020 | AWS Managed Microsoft AD Multi-Region feature update
August 2020 | Numerous updates throughout
December 2018 | First publication
Amazon_Aurora_Migration_Handbook
This paper has been archived. For the latest Amazon Aurora migration content, refer to: https://d1.awsstatic.com/whitepapers/RDS/Migrating your databases to Amazon Aurora.pdf

Amazon Aurora Migration Handbook

July 2020

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
Database Migration Considerations
Migration Phases
Features and Compatibility
Performance
Cost
Availability and Durability
Planning and Testing a Database Migration
Homogeneous Migrations
Summary of Available Migration Methods
Migrating Large Databases to Amazon Aurora
Partition and Shard Consolidation on Amazon Aurora
MySQL and MySQL-Compatible Migration Options at a Glance
Migrating from Amazon RDS for MySQL
Migrating from MySQL-Compatible Databases
Heterogeneous Migrations
Schema Migration
Data Migration
Example Migration Scenarios
Self-Managed Homogeneous Migrations
Multi-Threaded Migration Using mydumper and myloader
Heterogeneous Migrations
Testing and Cutover
Migration Testing
Cutover
Troubleshooting
Troubleshooting MySQL-Specific Issues
Conclusion
Contributors
Further Reading
Document Revisions

Abstract

This paper outlines the best practices for planning, executing, and troubleshooting database migrations from MySQL-compatible and non-MySQL-compatible database products to Amazon Aurora. It also teaches Amazon Aurora database administrators how to diagnose and troubleshoot common migration and replication errors.
Introduction

For decades, traditional relational databases have been the primary choice for data storage and persistence. These database systems continue to rely on monolithic architectures and were not designed to take advantage of cloud infrastructure. These monolithic architectures present many challenges, particularly in areas such as cost, flexibility, and availability. In order to address these challenges, AWS redesigned the relational database for the cloud infrastructure and introduced Amazon Aurora.

Amazon Aurora is a MySQL-compatible relational database engine that combines the speed, availability, and security of high-end commercial databases with the simplicity and cost effectiveness of open-source databases. Aurora provides up to five times better performance than MySQL and performance comparable to high-end commercial databases. Amazon Aurora is priced at one tenth the cost of commercial engines.

Amazon Aurora is available through the Amazon Relational Database Service (Amazon RDS) platform. Like other Amazon RDS databases, Aurora is a fully managed database service. With the Amazon RDS platform, most database management tasks, such as hardware provisioning, software patching, setup, configuration, monitoring, and backup, are completely automated.

Amazon Aurora is built for mission-critical workloads and is highly available by default. An Aurora database cluster spans multiple Availability Zones (AZs) in a Region, providing out-of-the-box durability and fault tolerance for your data across physical data centers. An Availability Zone is composed of one or more highly available data centers operated by Amazon. AZs are isolated from each other and are connected through low-latency links. Each segment of your database volume is replicated six times across these AZs.
Aurora cluster volumes automatically grow as the amount of data in your database increases, with no performance or availability impact, so there is no need to estimate and provision large amounts of database storage ahead of time. An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB). You are only charged for the space that you use in an Aurora cluster volume.

Aurora's automated backup capability supports point-in-time recovery of your data, enabling you to restore your database to any second during your retention period, up to the last five minutes. Automated backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous, and they have no impact on database performance.

For applications that need read-only replicas, you can create up to 15 Aurora Replicas per Aurora database with very low replica lag. These replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes.

Amazon Aurora is highly secure and allows you to encrypt your databases using keys that you create and control through AWS Key Management Service (AWS KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster. Amazon Aurora uses SSL (AES-256) to secure data in transit. For a complete list of Aurora features, see Amazon Aurora. Given the rich feature set and cost effectiveness of Amazon Aurora, it is increasingly viewed as the go-to database for mission-critical applications.

Database Migration Considerations

A database represents a critical component in the architecture of most applications. Migrating the database to a new platform is a significant event in an application's lifecycle and may have an impact on application functionality, performance, and reliability. You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora.

Migrations are among the most time-consuming and critical tasks handled by database administrators. Although the task has become easier with the advent of managed migration services such as AWS Database Migration Service, large-scale database migrations still require adequate planning and execution to meet strict compatibility and performance requirements.

Migration Phases

Because database migrations tend to be complex, we advocate taking a phased, iterative approach.

Figure 1: Migration phases

This paper examines the following major contributors to the success of every database migration project:

• Factors that justify the migration to Amazon Aurora, such as compatibility, performance, cost, and high availability and durability
• Best practices for choosing the optimal migration method
• Best practices for planning and executing a migration
• Migration troubleshooting hints

This section discusses important considerations that apply to most database migration projects. For an extended discussion of related topics, see the Amazon Web Services (AWS) whitepaper Migrating Your Databases to Amazon Aurora.

Features and Compatibility

Although most applications can be architected to work with many relational database engines, you should make sure that your application works with Amazon Aurora. Amazon Aurora is designed to be wire-compatible with MySQL 5.5, 5.6, 5.7, and 8.0. Therefore, most of the code, applications, drivers, and tools that are used today with MySQL databases can be used with Aurora with little or no change. However, certain MySQL features, like the MyISAM storage engine, are not available with Amazon Aurora. Also, due to the managed nature of the Aurora service, SSH access to database nodes is restricted, which may affect your ability to install third-party tools or plugins on the database host. For more details, see Aurora on Amazon RDS in the Amazon Relational Database Service (Amazon RDS) User Guide.
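A quick way to spot one of the compatibility gaps mentioned above is to look for MyISAM tables on the source before migrating. The following is a small illustrative check using Python and the PyMySQL driver (an assumption; any MySQL client library works), with placeholder host and credentials.

```python
import pymysql

# Placeholder connection details for the source MySQL-compatible database.
connection = pymysql.connect(
    host="source-db.example.com",
    user="admin",
    password="example-password",
)

try:
    with connection.cursor() as cursor:
        # List user tables that still use the MyISAM engine (not supported by Aurora MySQL).
        cursor.execute(
            """
            SELECT table_schema, table_name
            FROM information_schema.tables
            WHERE engine = 'MyISAM'
              AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
            """
        )
        for schema, table in cursor.fetchall():
            print(f"MyISAM table found: {schema}.{table} -- convert to InnoDB before migrating")
finally:
    connection.close()
```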
database is MySQL migrate to Amazon Aurora with downtime and performance test your database with a test or staging version of your application or by replaying the production workload • If you are on a non MySQL compliant engine you can selectively copy the busiest tables to Amazon Aurora and test your queries for t hose tables This gives you a good starting point Of course testing after complete data migration will provide a full picture of real world performance of your application on the new platform Amazon Aurora delivers comparable performance with commercia l engines and significant improvement over MySQL performance It does this by tightly integrating the database engine with an SSD based virtualized storage layer designed for database workloads This reduces writes to the storage system minimizes lock con tention and This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 9 eliminates delays created by database process threads Our tests with SysBench on r38xlarge instances show that Amazon Aurora delivers over 585000 reads per second and 107000 writes per second five times higher than MySQL running the same benchmark on the same hardware One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads In order to maximize your workload’s throughput on Amazon Aurora we recommend architecting your applications to driv e a large number of concurrent queries Cost Amazon Aurora provides consistent high performance together with the security availability and reliability of a commercial database at one tenth the cost Owning and running databases come with associated cost s Before planning a database migration an analysis of the total cost of ownership (TCO ) of the new database platform is imperative Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features If you are running an open source database engine (MySQL Postgres) your costs are largely related to hardware server management and database management activities However if you are running a commercial database engine (Oracle SQL Server DB2 etc) a significant portion of your cost is database licensing Amazon Aurora can even be more cost efficient than open source databases because its high scalability helps you reduce the number of database instances that are required to handle the same workload For more details see the Amazon RDS for Aurora Pricing page Availability and Durability High availability and disaster recovery are important considerations for databases Your application may already have very strict recovery time objective (RTO) and recovery point objective (RPO) requirements Amazon Aurora can help you meet or exceed your availability goals by having the following components: This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 10 1 Read replicas : Increase read throughput to support high volume application requests by creating up to 15 database Aurora replicas Amazon Aurora Replicas share the same underlying storage as the source inst ance lowering costs and avoiding the need to perform writes at the replica nodes This frees up more processing power to serve 
read requests and reduces the replica lag time often down to single digit milliseconds Aurora provides a reader endpoint so th e application can connect without having to keep track of replicas as they are added and removed Aurora also supports auto scaling where it automatically adds and removes replicas in response to changes in performance metrics that you specify Aurora sup ports cross region read replicas Cross region replicas provide fast local reads to your users and each region can have an additional 15 Aurora replicas to further scale local reads 2 Global Database : You can choose between Global Database which provides the best replication performance and traditional binlog based replication You can also set up your own binlog replication with external MySQL databases Amazon Aurora Global Database is de signed for globally distributed applications allowing a single Amazon Aurora database to span multiple AWS regions It replicates your data with no impact on database performance enables fast local reads with low latency in each region and provides disa ster recovery from region wide outages 3 Multi AZ: Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region regardless of whether the instances in the DB cluster span multiple Availability Zones For more i nformation on Aurora see Managing an Amazon Aurora DB Cluster When data is written to the primary DB instance Aurora synchronously replicates the data across Availability Zones to six storage nodes associated with your cluster volume Doing so provides data redundancy eliminates I/O freezes and minimizes latency spikes during system backups Running a DB instance with high availability can enhance availability during planned system maintenance and help protect your databases against failure and Availability Zone disruption For more information about durability and availability features in Amazon Aurora see Aurora on Amazon RDS in the Amazon RDS User Guide This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 11 Planning and Testing a Database Migration After you determine that Amazon Aurora is the right fit for your application the next step is to decide on a migration approach and create a database migration plan Here are the suggested high level steps: 1 Review the available migration techniques described in this document and choose one that satisfies your requirements 2 Prepare a migration plan in the form of a step bystep checklist A checklist ensures that all migration steps are executed in the correct order and that the migration process flow can be controlled (eg suspended or resumed) without the risk of important steps be ing missed 3 Prepare a shadow checklist with rollback procedures Ideally you should be able to roll the migration back to a known consistent state from any point in the migration checklist 4 Use the checklist to perform a test migration and take note of the time required to complete each step If any missing steps are identified add them to the checklist If any issues are identified during the test migration address them and rerun the test migration 5 Test all rollback procedures If any rollback proced ure has not been tested successfully assume that it will not work 6 After you complete the test migration and become fully comfortable with the migration plan execute the migration Homogeneous 
Homogeneous Migrations
Amazon Aurora was designed as a drop-in replacement for MySQL 5.6. It offers a wide range of options for homogeneous migrations (e.g., migrations from MySQL and MySQL-compatible databases).

Summary of Available Migration Methods
This section lists common migration sources and the migration methods available to them, in order of preference. Detailed descriptions, step-by-step instructions, and tips for advanced migration scenarios are available in subsequent sections. A commonly adopted method is to build an Aurora Read Replica that is asynchronously replicated from the source master (an Amazon RDS or self-managed MySQL database).

Figure 1: Common migration sources and migration methods for Amazon Aurora

Amazon RDS Snapshot Migration
Compatible sources:
• Amazon RDS for MySQL 5.6
• Amazon RDS for MySQL 5.1 and 5.5 (after upgrading to RDS for MySQL 5.6)
Feature highlights:
• Managed, point-and-click service available through the AWS Management Console
• Best migration speed and ease of use of all migration methods
• Can be used with binary log replication for near-zero migration downtime
For details, see Migrating Data from a MySQL DB Instance to an Amazon Aurora DB Cluster in the Amazon RDS User Guide. (An AWS CLI sketch of this method appears at the end of this summary.)

Percona XtraBackup
Compatible sources and limitations:
• On-premises or self-managed MySQL 5.6 databases (including databases running on Amazon EC2) can be migrated
• You can't restore into an existing RDS instance using this method
• The total size is limited to 6 TB
• User accounts, functions, and stored procedures are not imported automatically
Feature highlights:
• Managed backup ingestion from Percona XtraBackup files stored in an Amazon Simple Storage Service (Amazon S3) bucket
• High performance
• Can be used with binary log replication for near-zero migration downtime
For details, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import
Compatible sources:
• MySQL and MySQL-compatible databases such as MySQL, MariaDB, or Percona Server, including managed servers such as Amazon RDS for MySQL or MariaDB
• Non-MySQL-compatible databases

DMS Migration
Compatible sources:
• MySQL-compatible and non-MySQL-compatible databases
Feature highlights:
• Supports heterogeneous and homogeneous migrations
• Managed, point-and-click data migration service available through the AWS Management Console
• Schemas must be migrated separately
• Supports CDC replication for near-zero migration downtime
For details, see What Is AWS Database Migration Service? in the AWS DMS User Guide.
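The snapshot migration workflow summarized above is console-driven, but it can also be scripted. The following is a minimal sketch using the AWS CLI; the identifiers (my-mysql56-snapshot, my-aurora-cluster, my-aurora-instance) are hypothetical placeholders, and you should confirm the exact parameters against the current AWS CLI reference before relying on them:

# Migrate an RDS MySQL 5.6 DB snapshot into a new Aurora DB cluster
# (all identifiers below are placeholders)
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier my-aurora-cluster \
    --snapshot-identifier my-mysql56-snapshot \
    --engine aurora

# Add a writer instance to the new cluster
aws rds create-db-instance \
    --db-cluster-identifier my-aurora-cluster \
    --db-instance-identifier my-aurora-instance \
    --db-instance-class db.r3.large \
    --engine aurora

As with the console workflow, the cluster becomes usable once the restore completes and the instance enters the Available state.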
For a heterogeneous migration, where you are migrating from a database engine other than MySQL to a MySQL database, AWS DMS is almost always the best migration tool to use. But for a homogeneous migration, where you are migrating from a MySQL database to a MySQL database, native tools can be more effective.

Using Any MySQL-Compatible Database as a Source for AWS DMS
Before you begin to work with a MySQL database as a source for AWS DMS, make sure that you meet the following prerequisites. These prerequisites apply to either self-managed or Amazon-managed sources. You must have an account for AWS DMS that has the Replication Administrator role. The role needs the following privileges:
• Replication Client: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• Replication Slave: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• Super: This privilege is required only in MySQL versions before 5.6.6.
DMS highlights for non-MySQL-compatible sources:
• Requires manual schema conversion from the source database format into a MySQL-compatible format
• Data migration can be performed manually using a universal data format such as comma-separated values (CSV)
• Change data capture (CDC) replication might be possible with third-party tools for near-zero migration downtime

Migrating Large Databases to Amazon Aurora
Migration of large datasets presents unique challenges in every database migration project. Many successful large database migration projects use a combination of the following strategies:
• Migration with continuous replication: Large databases typically have extended downtime requirements while moving data from source to target. To reduce the downtime, you can first load baseline data from source to target and then enable replication (using MySQL native tools, AWS DMS, or third-party tools) for changes to catch up.
• Copy static tables first: If your database relies on large static tables with reference data, you may migrate these large tables to the target database before migrating your active dataset. You can leverage AWS DMS to copy tables selectively (an example table-mapping sketch appears after this list) or export and import these tables manually.
• Multiphase migration: Migration of a large database with thousands of tables can be broken down into multiple phases. For example, you may move a set of tables with no cross-join queries every weekend until the source database is fully migrated to the target database. Note that in order to achieve this, you need to make changes in your application to connect to two databases simultaneously while your dataset is on two distinct nodes. Although this is not a common migration pattern, it is an option nonetheless.
• Database clean-up: Many large databases contain data and tables that remain unused. In many cases, developers and DBAs keep backup copies of tables in the same database, or they simply forget to drop unused tables. Whatever the reason, a database migration project provides an opportunity to clean up the existing database before the migration. If some tables are not being used, you might either drop them or archive them to another database. You might also delete old data from large tables or archive that data to flat files.
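To copy tables selectively with AWS DMS, you attach a table-mapping document to the migration task. The following is a minimal sketch of such a document; the schema name (myschema) and the table-name pattern (ref_%) are hypothetical placeholders for your own reference tables, so validate the rules against the AWS DMS documentation for your version:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-static-reference-tables",
      "object-locator": {
        "schema-name": "myschema",
        "table-name": "ref_%"
      },
      "rule-action": "include"
    }
  ]
}

A document like this can be supplied in the AWS DMS console or through the --table-mappings parameter when creating a replication task with the AWS CLI.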
Partition and Shard Consolidation on Amazon Aurora
If you are running multiple shards or functional partitions of your database to achieve high performance, you have an opportunity to consolidate these partitions or shards on a single Aurora database. A single Amazon Aurora instance can scale up to 64 TB, supports thousands of tables, and supports a significantly higher number of reads and writes than a standard MySQL database. Consolidating these partitions on a single Aurora instance not only reduces the total cost of ownership and simplifies database management, but also significantly improves the performance of cross-partition queries.
• Functional partitions: Functional partitioning means dedicating different nodes to different tasks. For example, in an e-commerce application, you might have one database node serving product catalog data and another database node capturing and processing orders. As a result, these partitions usually have distinct, nonoverlapping schemas.
  o Consolidation strategy: Migrate each functional partition as a distinct schema to your target Aurora instance. If your source database is MySQL compatible, use native MySQL tools to migrate the schema, and then use AWS DMS to migrate the data, either one time or continuously using replication. If your source database is not MySQL compatible, use the AWS Schema Conversion Tool to migrate the schemas to Aurora, and use AWS DMS for a one-time load or continuous replication.
• Data shards: If you have the same schema with distinct sets of data across multiple nodes, you are leveraging database sharding. For example, a high-traffic blogging service may shard user activity and data across multiple database shards while keeping the same table schema.
  o Consolidation strategy: Since all shards share the same database schema, you only need to create the target schema once. If you are using a MySQL-compatible database, use native tools to migrate the database schema to Aurora. If you are using a non-MySQL database, use the AWS Schema Conversion Tool to migrate the database schema to Aurora. Once the database schema has been migrated, it is best to stop writes to the database shards and use native tools or an AWS DMS one-time data load to migrate an individual shard to Aurora. If writes to the application cannot be stopped for an extended period, you might still use AWS DMS with replication, but only after proper planning and testing.
MySQL and MySQL-Compatible Migration Options at a Glance

Source database type: Amazon RDS MySQL
• Migration with downtime:
  o Option 1: RDS snapshot migration
  o Option 2: Manual migration using native tools*
  o Option 3: Schema migration using native tools and data load using AWS DMS
• Near-zero downtime migration:
  o Option 1: Migration using native tools + binlog replication
  o Option 2: RDS snapshot migration + binlog replication
  o Option 3: Schema migration using native tools + AWS DMS for data movement

Source database type: MySQL on Amazon EC2 or on premises
• Migration with downtime:
  o Option 1: Schema migration with native tools + AWS DMS for data load
• Near-zero downtime migration:
  o Option 1: Schema migration using native tools + AWS DMS to move data

Source database type: Oracle/SQL Server
• Migration with downtime:
  o Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
  o Option 2: Manual or third-party tool for schema conversion + manual or third-party data load in target
• Near-zero downtime migration:
  o Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
  o Option 2: Manual or third-party tool for schema conversion

Migrating from Amazon RDS for MySQL
If you are migrating from an RDS MySQL 5.6 database (DB) instance, the recommended approach is to use the snapshot migration feature. Snapshot migration is a fully managed, point-and-click feature that is available through the AWS Management Console. You can use it to migrate an RDS MySQL 5.6 DB instance snapshot into a new Aurora DB cluster. It is the fastest and easiest to use of all the migration methods described in this document. For more information about the snapshot migration feature, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide. This section provides ideas for projects that use the snapshot migration feature. The list-style layout in our example instructions can help you prepare your own migration checklist.

Estimating Space Requirements for Snapshot Migration
When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. There are some cases where additional space is needed to format the data for migration. The two features that can potentially cause space issues during migration are MyISAM tables and use of the ROW_FORMAT=COMPRESSED option. If you are not using either of these features in your source database, then you can skip this section because you should not have space issues. During migration, MyISAM tables are converted to InnoDB and any compressed tables are uncompressed. Consequently, there must be adequate room for the additional copies of any such tables. The size of the migration volume is based on the allocated size of the source MySQL database that the snapshot was made from. Therefore, if you have MyISAM or compressed tables that make up a small percentage of the overall database size and there is available space in the original database, then migration should succeed without encountering any space issues. However, if the original database would not have enough room to store a copy of the converted MyISAM tables as well as another (uncompressed) copy of compressed tables, then the migration volume will not be big enough. In this situation, you would need to modify the source Amazon RDS MySQL database to increase the database size allocation to make room for the additional copies of these tables, take a new snapshot of the database, and then migrate the new snapshot.
When migrating data into your DB cluster, observe the following guidelines and limitations:
• Although Amazon Aurora supports up to 64 TB of storage, the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot, and therefore is limited to a maximum size of 6 TB. Non-MyISAM tables in the source database can be up to 6 TB in size. However, due to additional space requirements during conversion, make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceed 3 TB in size. For more information, see Migrating Data from an Amazon RDS MySQL DB Instance to an Amazon Aurora MySQL DB Cluster.
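Before you provision extra space or modify the schema, you may want to confirm whether the source instance contains any MyISAM or compressed tables at all. A minimal sketch of such a check, run with the mysql client against the source instance (it relies only on the standard information_schema.tables view; the sizes it reports are approximate):

mysql> SELECT table_schema, table_name, engine, row_format,
    ->        ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS approx_size_gb
    ->   FROM information_schema.tables
    ->  WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
    ->    AND (engine = 'MyISAM' OR row_format = 'Compressed');

If the query returns no rows, the space considerations above do not apply and you can proceed directly with snapshot migration.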
You might want to modify your database schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED) prior to migrating it into Amazon Aurora. This can be helpful in the following cases:
• You want to speed up the migration process
• You are unsure of how much space you need to provision
• You have attempted to migrate your data and the migration has failed due to a lack of provisioned space
Make sure that you are not making these changes in your production Amazon RDS MySQL database, but rather on a database instance that was restored from your production snapshot. For more details on doing this, see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon RDS User Guide.
The naming conventions used in this section are as follows:
• Source RDS DB instance refers to the RDS MySQL 5.6 DB instance that you are migrating from
• Target Aurora DB cluster refers to the Aurora DB cluster that you are migrating to

Migrating with Downtime
When migration downtime is acceptable, you can use the following high-level procedure to migrate an RDS MySQL 5.6 DB instance to Amazon Aurora:
1. Stop all write activity against the source RDS DB instance. Database downtime begins here.
2. Take a snapshot of the source RDS DB instance.
3. Wait until the snapshot shows as Available in the AWS Management Console.
4. Use the AWS Management Console to migrate the snapshot to a new Aurora DB cluster. For instructions, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.
5. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state. The time to migrate a snapshot primarily depends on the size of the database. You can determine it ahead of the production migration by running a test migration.
6. Configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
7. Resume write activity against the target Aurora DB cluster. Database downtime ends here.

Migrating with Near Zero Downtime
If prolonged migration downtime is not acceptable, you can perform a near-zero downtime migration through a combination of snapshot migration and binary log replication. Perform the high-level procedure as follows:
1. On the source RDS DB instance, ensure that automated backups are enabled.
2. Create a Read Replica of the source RDS DB instance.
3. After you create the Read Replica, manually stop replication and obtain the binary log coordinates.
4. Take a snapshot of the Read Replica.
5. Use the AWS Management Console to migrate the Read Replica snapshot to a new Aurora DB cluster.
6. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state.
7. On the target Aurora DB cluster, configure binary log replication from the source RDS DB instance using the binary log coordinates that you obtained in step 3.
8. Wait for the replication to catch up, that is, for the replication lag to reach zero.
9. Begin cutover by stopping all write activity against the source RDS DB instance. Application downtime begins here.
10. Verify that there is no outstanding replication lag, and then configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
11. Complete cutover by resuming write activity. Application downtime ends here.
12. Terminate replication between the source RDS DB instance and the target Aurora DB cluster.
For a detailed description of this procedure, see Replication Between Aurora and MySQL or Between Aurora and Another Aurora DB Cluster in the Amazon RDS User Guide.
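Steps 3 and 7 are typically performed with the stored procedures that Amazon RDS and Aurora provide for managing replication. The following is a minimal sketch rather than the authoritative procedure: the endpoint, user, password, and binary log coordinates are hypothetical placeholders, and you should verify the procedure names and argument order in the Amazon RDS documentation for your engine version.

On the Read Replica, stop replication and note the coordinates reported by SHOW SLAVE STATUS (Relay_Master_Log_File and Exec_Master_Log_Pos):

mysql> CALL mysql.rds_stop_replication;
mysql> SHOW SLAVE STATUS\G

On the target Aurora DB cluster, point replication at the source RDS DB instance using those coordinates, and start it (all values shown are placeholders):

mysql> CALL mysql.rds_set_external_master(
    ->     'source-instance.xxxxx.rds.amazonaws.com', 3306,
    ->     'repl_user', 'repl_password',
    ->     'mysql-bin-changelog.000123', 120, 0);
mysql> CALL mysql.rds_start_replication;

After cutover (step 12), mysql.rds_reset_external_master removes the replication configuration from the Aurora cluster.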
If you don't want to set up replication manually, you can also create an Aurora Read Replica from a source RDS MySQL 5.6 DB instance by using the RDS Management Console. The RDS automation does the following:
1. Creates a snapshot of the source RDS DB instance.
2. Migrates the snapshot to a new Aurora DB cluster.
3. Establishes binary log replication between the source RDS DB instance and the target Aurora DB cluster.
After replication is established, you can complete the cutover steps as described previously.

Migrating from Amazon RDS for MySQL Engine Versions Other than 5.6
Direct snapshot migration is only supported for RDS MySQL 5.6 DB instance snapshots. You can migrate RDS MySQL DB instances that are running other engine versions by using the following procedures.

RDS for MySQL 5.1 and 5.5
Follow these steps to migrate RDS MySQL 5.1 or 5.5 DB instances to Amazon Aurora:
1. Upgrade the RDS MySQL 5.1 or 5.5 DB instance to MySQL 5.6.
• You can upgrade RDS MySQL 5.5 DB instances directly to MySQL 5.6.
• You must upgrade RDS MySQL 5.1 DB instances to MySQL 5.5 first, and then to MySQL 5.6.
2. After you upgrade the instance to MySQL 5.6, test your applications against the upgraded database and address any compatibility or performance concerns.
3. After your application passes the compatibility and performance tests against MySQL 5.6, migrate the RDS MySQL 5.6 DB instance to Amazon Aurora. Depending on your requirements, choose the Migrating with Downtime or Migrating with Near Zero Downtime procedure described earlier.
For more information about upgrading RDS MySQL engine versions, see Upgrading the MySQL DB Engine in the Amazon RDS User Guide.

RDS for MySQL 5.7
For migrations from RDS MySQL 5.7 DB instances, the snapshot migration approach is not supported because the database engine version can't be downgraded to MySQL 5.6. In this case, we recommend the manual dump-and-import procedure for migrating MySQL-compatible databases described later in this whitepaper. Such a procedure may be slower than snapshot migration, but you can still perform it with near-zero downtime using binary log replication.

Migrating from MySQL-Compatible Databases
Moving to Amazon Aurora is still a relatively simple process if you are migrating from an RDS MariaDB instance, an RDS MySQL 5.7 DB instance, or a self-managed MySQL-compatible database such as MySQL, MariaDB, or Percona Server running
on Amazon Elastic Compute Cloud (Amazon EC2) or on premises. There are many techniques you can use to migrate your MySQL-compatible database workload to Amazon Aurora. This section describes various migration options to help you choose the most optimal solution for your use case.

Percona XtraBackup
Amazon Aurora supports migration from Percona XtraBackup files that are stored in an Amazon S3 bucket. Migrating from binary backup files can be significantly faster than migrating from logical schema and data dumps using tools like mysqldump. Logical imports work by executing SQL commands to re-create the schema and data from your source database, which involves considerable processing overhead. By comparison, you can use a more efficient binary ingestion method to ingest Percona XtraBackup files. This migration method is compatible with source servers using MySQL versions 5.5 and 5.6. Migrating from Percona XtraBackup files involves three steps:
1. Use the innobackupex tool to create a backup of the source database.
2. Upload the backup files to an Amazon S3 bucket.
3. Restore the backup files through the AWS Management Console.
For details and step-by-step instructions, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import
You can use a variety of export/import tools to migrate your data and schema to Amazon Aurora. The tools can be described as "MySQL native" because they are either part of a MySQL project or were designed specifically for MySQL-compatible databases. Examples of native migration tools include the following:
1. MySQL utilities, such as mysqldump, mysqlimport, and the mysql command-line client.
2. Third-party utilities, such as mydumper and myloader. For details, see the mydumper project page.
3. Built-in MySQL commands, such as SELECT INTO OUTFILE and LOAD DATA INFILE.
Native tools are a great option for power users or database administrators who want to maintain full control over the migration process. Self-managed migrations involve more steps and are typically slower than RDS snapshot or Percona XtraBackup migrations, but they offer the best compatibility and flexibility. For an in-depth discussion of the best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora. You can execute a self-managed migration with downtime (without replication) or with near-zero downtime (with binary log replication).

Self-Managed Migration with Downtime
The high-level procedure for migrating to Amazon Aurora from a MySQL-compatible database is as follows:
1. Stop all write activity against the source database. Application downtime begins here.
2. Perform a schema and data dump from the source database.
3. Import the dump into the target Aurora DB cluster.
4. Configure applications to connect to the newly created target Aurora DB cluster instead of the source database.
5. Resume write activity. Application downtime ends here.
For an in-depth discussion of performance best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.
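Step 1 of this procedure ("stop all write activity") is often implemented by putting a self-managed source into read-only mode for the duration of the dump. The following is a minimal sketch rather than a complete recipe; it assumes an account with the SUPER privilege, and note that SUPER users can still write while read_only is set:

mysql> SET GLOBAL read_only = ON;
(perform the schema and data dump)
mysql> SET GLOBAL read_only = OFF;

On Amazon RDS sources, the read_only parameter is managed through the DB parameter group rather than with SET GLOBAL.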
Self-Managed Migration with Near Zero Downtime
The following is the high-level procedure for a near-zero downtime migration into Amazon Aurora from a MySQL-compatible database:
1. On the source database, enable binary logging and ensure that binary log files are retained for at least the amount of time that is required to complete the remaining migration steps.
2. Perform a schema and data export from the source database. Make sure that the export metadata contains the binary log coordinates that are required to establish replication at a later time.
3. Import the dump into the target Aurora DB cluster.
4. On the target Aurora DB cluster, configure binary log replication from the source database using the binary log coordinates that you obtained in step 2.
5. Wait for the replication to catch up, that is, for the replication lag to reach zero.
6. Stop all write activity against the source database instance. Application downtime begins here.
7. Double-check that there is no outstanding replication lag. Then configure applications to connect to the newly created target Aurora DB cluster instead of the source database.
8. Resume write activity. Application downtime ends here.
9. Terminate replication between the source database and the target Aurora DB cluster.
For an in-depth discussion of performance best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

AWS Database Migration Service
AWS Database Migration Service (AWS DMS) is a managed database migration service that is available through the AWS Management Console. It can perform a range of tasks, from simple migrations with downtime to near-zero downtime migrations using CDC replication. AWS Database Migration Service may be the preferred option if your source database can't be migrated using the methods described previously, such as RDS MySQL 5.6 DB snapshot migration, Percona XtraBackup migration, or native export/import tools. AWS Database Migration Service might also be advantageous if your migration project requires advanced data transformations, such as the following:
• Remapping schema or table names
• Advanced data filtering
• Migrating and replicating multiple database servers into a single Aurora DB cluster
Compared to the migration methods described previously, AWS DMS carries certain limitations:
• It does not migrate secondary schema objects, such as indexes, foreign key definitions, triggers, or stored procedures. Such objects must be migrated or created manually prior to data migration.
• The DMS CDC replication uses plain SQL statements from the binlog to apply data changes in the target database. Therefore, it might be slower and more resource intensive than the native master/slave binary log replication in MySQL.
For step-by-step instructions on how to migrate your database using AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Heterogeneous Migrations
If you are migrating a non-MySQL-compatible database to Amazon Aurora, several options can help you complete the project quickly and easily. A heterogeneous migration project can be split into two phases:
1. Schema migration to review and convert the source schema objects (e.g., tables, procedures, and triggers) into a MySQL-compatible representation.
2. Data migration to populate the newly created schema
with data contained in the source database. Optionally, you can use CDC replication for a near-zero downtime migration.

Schema Migration
You must convert database objects such as tables, views, functions, and stored procedures to a MySQL 5.6-compatible format before you can use them with Amazon Aurora. This section describes two main options for converting schema objects. Whichever migration method you choose, always make sure that the converted objects are not only compatible with Aurora but also follow MySQL's best practices for schema design.

AWS Schema Conversion Tool
The AWS Schema Conversion Tool (AWS SCT) can greatly reduce the engineering effort associated with migrations from Oracle, Microsoft SQL Server, Sybase, DB2, Azure SQL Database, Teradata, Greenplum, Vertica, Cassandra, PostgreSQL, and other engines. AWS SCT can automatically convert the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with Amazon Aurora. Any code that can't be automatically converted is clearly marked so that it can be processed manually. For more information, see the AWS Schema Conversion Tool User Guide. For step-by-step instructions on how to convert a non-MySQL-compatible schema using the AWS Schema Conversion Tool, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Manual Schema Migration
If your source database is not in the scope of SCT-compatible databases, you can either manually rewrite your database object definitions or use available third-party tools to migrate the schema to a format compatible with Amazon Aurora. Many applications use data access layers that abstract schema design from business application code. In such cases, you can consider redesigning your schema objects specifically for Amazon Aurora and adapting the data access layer to the new schema. This might require a greater upfront engineering effort, but it allows the new schema to incorporate all the best practices for performance and scalability.

Data Migration
After the database objects are successfully converted and migrated to Amazon Aurora, it's time to migrate the data itself. The task of moving data from a non-MySQL-compatible database to Amazon Aurora is best done using AWS DMS. AWS DMS supports initial data migration as well as CDC replication. After the migration task starts, AWS DMS manages all the complexities of the process, including data type transformations, compression, and parallel data transfer. The CDC functionality automatically replicates any changes that are made to the source database during the migration process. For more information, see the AWS Database Migration Service User Guide. For step-by-step instructions on how to migrate data from a non-MySQL-compatible database into an Amazon Aurora cluster using AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Example Migration Scenarios
There are several approaches for performing both self-managed homogeneous migrations and heterogeneous migrations.

Self-Managed Homogeneous Migrations
This section provides examples of migration scenarios from self-managed
MySQL-compatible databases to Amazon Aurora. For an in-depth discussion of homogeneous migration best practices, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.
Note: If you are migrating from an Amazon RDS MySQL DB instance, you can use the RDS snapshot migration feature instead of doing a self-managed migration. See the Migrating from Amazon RDS for MySQL section for more details.

Migrating Using Percona XtraBackup
One option for migrating data from MySQL to Amazon Aurora is to use the Percona XtraBackup utility. For more information about using the Percona XtraBackup utility, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide.
Approach
This scenario uses the Percona XtraBackup utility to take a binary backup of the source MySQL database. The backup files are then uploaded to an Amazon S3 bucket and restored into a new Amazon Aurora DB cluster.
When to Use
You can adopt this approach for small- to large-scale migrations when the following conditions are met:
• The source database is a MySQL 5.5 or 5.6 database
• You have administrative, system-level access to the source database
• You are migrating database servers in a 1-to-1 fashion: one source MySQL server becomes one new Aurora DB cluster
When to Consider Other Options
This approach is not currently supported in the following scenarios:
• Migrating into existing Aurora DB clusters
• Migrating multiple source MySQL servers into a single Aurora DB cluster
Examples
For a step-by-step example, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide.

One-Step Migration Using mysqldump
Another migration option uses the mysqldump utility to migrate data from MySQL to Amazon Aurora.
Approach
This scenario uses the mysqldump utility to export schema and data definitions from the source server and import them into the target Aurora DB cluster in a single step, without creating any intermediate dump files.
When to Use
You can adopt this approach for many small-scale migrations when the following conditions are met:
• The data set is very small (up to 1-2 GB)
• The network connection between the source and target databases is fast and stable
• Migration performance is not critically important, and the cost of retrying the migration is very low
• There is no need to do any intermediate schema or data transformations
When to Consider Other Options
This approach might not be an optimal choice if any of the following conditions are true:
• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.5 or 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. For more details, see the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections.
• It is impossible to establish a network connection from a single client instance to the source and target databases due to network architecture or security considerations.
• The network connection between the source and target databases is unstable or very slow.
• The data set is larger than 10 GB.
• Migration performance is
critically important.
• An intermediate dump file is required in order to perform schema or data manipulations before you can import the schema/data.
Notes
For the sake of simplicity, this scenario assumes the following:
1. Migration commands are executed from a client instance running a Linux operating system.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on premises) that is configured to allow connections from the client instance.
3. The target Aurora DB cluster already exists and is configured to allow connections from the client instance. If you don't yet have an Aurora DB cluster, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Export from the source database is performed using a privileged, super-user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
5. Import into Amazon Aurora is performed using the Aurora master user account, that is, the account whose name and password were specified during the cluster launch process.
Examples
The following command, when filled with the source and target server and user information, migrates data and all objects in the named schema(s) between the source and target servers:

mysqldump --host=<source_server_address> \
    --user=<source_user> \
    --password=<source_user_password> \
    --databases <schema(s)> \
    --single-transaction \
    --compress | mysql --host=<target_cluster_endpoint> \
    --user=<target_user> \
    --password=<target_user_password>

Descriptions of the options and option values for the mysqldump command are as follows:
• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• <target_cluster_endpoint>: Cluster DNS endpoint of the target Aurora cluster
• <target_user>: Aurora master user name
• <target_user_password>: Aurora master user password
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --compress: Enables network data compression.
See the mysqldump documentation for more details.

Example:

mysqldump --host=source-mysql.example.com \
    --user=mysql_admin_user \
    --password=mysql_user_password \
    --databases schema1 \
    --single-transaction \
    --compress | mysql --host=aurora-cluster.xxxxx.amazonaws.com \
    --user=aurora_master_user \
    --password=aurora_user_password

Note: This migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near Zero Downtime section for more details.
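If you do extend this scenario with binary log replication, the source's binary log coordinates are usually captured at dump time with mysqldump's --master-data option. A minimal sketch, assuming binary logging is already enabled on the source (the file name and position shown in the output are placeholders):

# --master-data=2 writes the coordinates as a comment near the top of the dump
mysqldump --host=<source_server_address> --user=<source_user> \
    --password=<source_user_password> --databases <schema(s)> \
    --single-transaction --master-data=2 > myschema_dump_with_coords.sql

grep "CHANGE MASTER TO" myschema_dump_with_coords.sql
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=120;

Because the statement is commented out, it is not executed automatically during import; you use the recorded values when configuring replication on the Aurora side.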
Flat-File Migration Using Files in CSV Format
This scenario demonstrates a schema and data migration using flat-file dumps, that is, dumps that do not encapsulate data in SQL statements. Many database administrators prefer to use flat files over SQL-format files for the following reasons:
• Lack of SQL encapsulation results in smaller dump files and reduces processing overhead during import.
• Flat-file dumps are easier to process using OS-level tools; they are also easier to manage (e.g., split or combine).
• Flat-file formats are compatible with a wide range of database engines, both SQL and NoSQL.
Approach
The scenario uses a hybrid migration approach:
• Use the mysqldump utility to create a schema-only dump in SQL format. The dump describes the structure of schema objects (e.g., tables, views, and functions) but does not contain data.
• Use SELECT INTO OUTFILE SQL commands to create data-only dumps in CSV format. The dumps are created in a one-file-per-table fashion and contain table data only (no schema definitions).
The import phase can be executed in two ways:
• Traditional approach: Transfer all dump files to an Amazon EC2 instance located in the same AWS Region and Availability Zone as the target Aurora DB cluster. After transferring the dump files, you can import them into Amazon Aurora using the mysql command-line client and LOAD DATA LOCAL INFILE SQL commands for the SQL-format schema dumps and the flat-file data dumps, respectively. This is the approach that is demonstrated later in this section.
• Alternative approach: Transfer the SQL-format schema dumps to an Amazon EC2 client instance and import them using the mysql command-line client. You can transfer the flat-file data dumps to an Amazon S3 bucket and then import them into Amazon Aurora using LOAD DATA FROM S3 SQL commands. For more information, including an example of loading data from Amazon S3, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.
When to Use
You can adopt this approach for most migration projects where performance and flexibility are important:
• You can dump small data sets and import them one table at a time. You can also run multiple SELECT INTO OUTFILE and LOAD DATA INFILE operations in parallel for best performance.
• Data that is stored in flat-file dumps is not encapsulated in database-specific SQL statements. Therefore, it can be handled and processed easily by the systems participating in the data exchange.
When to Consider Other Options
You might choose not to use this approach if any of the following conditions are true:
• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• The data set is very small and does not require a high-performance migration approach.
• You want the migration process to be as simple as possible, and you don't require any of the performance and flexibility benefits listed earlier.
Notes
To simplify the demonstration, this scenario assumes the following:
1. Migration commands are executed
from client instances running a Linux operating system:
  o Client instance A is located in the source server's network.
  o Client instance B is located in the same Amazon VPC, Availability Zone, and Subnet as the target Aurora DB cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. Export from the source database is performed using a privileged, super-user MySQL account. For simplicity, this scenario assumes that the user holds all permissions available in MySQL.
6. Import into Amazon Aurora is performed using the master user account, that is, the account whose name and password were specified during the cluster launch process.
Note that this migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near Zero Downtime section for more details.
Examples
In this scenario, you migrate a MySQL schema named myschema. The first step of the migration is to create a schema-only dump of all objects:

mysqldump --host=<source_server_address> \
    --user=<source_user> \
    --password=<source_user_password> \
    --databases <schema(s)> \
    --single-transaction \
    --no-data > myschema_dump.sql

Descriptions of the options and option values for the mysqldump command are as follows:
• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <schema(s)>: One or more schema names
• <target_cluster_endpoint>: Cluster DNS endpoint of the target Aurora cluster
• <target_user>: Aurora master user name
• <target_user_password>: Aurora master user password
• --single-transaction: Enforces a consistent dump from the source database. Can be skipped if the source database is not receiving any write traffic.
• --no-data: Creates a schema-only dump without row data.
For more details, see mysqldump in the MySQL 5.6 Reference Manual.

Example:

admin@clientA:~$ mysqldump --host=11.22.33.44 --user=root \
    --password=pAssw0rd --databases myschema \
    --single-transaction --no-data > myschema_dump_schema_only.sql

After you complete the schema-only dump, you can obtain data dumps for each table. After logging in to the source MySQL server, use the SELECT INTO OUTFILE statement to dump each table's data into a separate CSV file:

admin@clientA:~$ mysql --host=11.22.33.44 --user=root --password=pAssw0rd

mysql> show tables from myschema;
+--------------------+
| Tables_in_myschema |
+--------------------+
| t1                 |
| t2                 |
| t3                 |
| t4                 |
+--------------------+
4 rows in set (0.00 sec)

mysql> SELECT * INTO OUTFILE '/home/admin/dump/myschema_dump_t1.csv'
    ->     FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    ->     LINES TERMINATED BY '\n'
    ->     FROM myschema.t1;
Query OK, 4194304 rows affected (2.35 sec)

(repeat for all remaining tables)

For more information about SELECT INTO statement syntax, see SELECT INTO Syntax in the MySQL 5.6 Reference Manual. After you complete all dump operations, the /home/admin/dump directory contains five files: one schema-only dump and four data dumps, one per table:

admin@clientA:~/dump$ ls -sh1
total 685M
 40K myschema_dump_schema_only.sql
172M myschema_dump_t1.csv
172M myschema_dump_t2.csv
172M myschema_dump_t3.csv
172M myschema_dump_t4.csv

Next, you compress and transfer the files to client instance B, located in the same AWS Region and Availability Zone as the target Aurora DB cluster. You can use any file transfer method available to you (e.g., FTP or Amazon S3). This example uses SCP with SSH private key authentication:

admin@clientA:~/dump$ gzip myschema_dump_*.csv
admin@clientA:~/dump$ scp -i ssh-key.pem myschema_dump_* \
    <clientB_ssh_user>@<clientB_address>:/home/ec2-user/

After transferring all the files, you can decompress them and import the schema and data. Import the schema dump first, because all relevant tables must exist before any data can be inserted into them:

admin@clientB:~/dump$ gunzip myschema_dump_*.csv.gz
admin@clientB:~$ mysql --host=<cluster_endpoint> --user=master \
    --password=pAssw0rd < myschema_dump_schema_only.sql

With the schema objects created, the next step is to connect to the Aurora DB cluster endpoint and import the data files. Note the following:
• The mysql client invocation includes a --local-infile parameter, which is required to enable support for LOAD DATA LOCAL INFILE commands.
• Before importing data from dump files, use a SET command to disable foreign key constraint checks for the duration of the database session. Disabling foreign key checks not only improves import performance, but it also lets you import data files in arbitrary order.

admin@clientB:~$ mysql --local-infile --host=<cluster_endpoint> \
    --user=master --password=pAssw0rd

mysql> SET foreign_key_checks = 0;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA LOCAL INFILE '/home/ec2-user/myschema_dump_t1.csv'
    -> INTO TABLE myschema.t1
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n';
Query OK, 4194304 rows affected (1 min 26.6 sec)
Records: 4194304  Deleted: 0  Skipped: 0  Warnings: 0

(repeat for all remaining CSV files)

mysql> SET foreign_key_checks = 1;
Query OK, 0 rows affected (0.00 sec)

That's it, you have imported the schema and data dumps into the Aurora DB cluster. You can find more tips and best practices for self-managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Multi-Threaded Migration Using mydumper and myloader
mydumper and myloader are popular open-source MySQL export/import tools designed to address performance issues associated with the legacy mysqldump program. They operate on SQL-format dumps and offer advanced features such as the following:
• Dumping and loading data using multiple parallel threads
• Creating dump files in a file-per-table fashion
• Creating chunked dumps in a multiple-files-per-table fashion
• Dumping data and metadata into separate files for easier parsing and management
• Configurable transaction size during import
• Ability to schedule dumps in regular intervals
For more details, see the MySQL Data Dumper project page.
Approach
The scenario uses the mydumper and myloader tools to perform a multi-threaded schema and data migration without the need to manually invoke any SQL commands or design custom migration scripts. The migration is performed in two steps:
1. Use the mydumper tool to create a schema and data dump, using multiple parallel threads.
2. Use the myloader tool to process the dump files and import them into an Aurora DB cluster, also in multi-threaded fashion.
Note that mydumper and myloader might not be readily available in the package repository of your Linux/Unix distribution. For your convenience, the scenario also shows how to build the tools from source code.
When to Use
You can adopt this approach in most migration projects:
• The utilities are easy to use and enable database users to perform multi-threaded dumps and imports without the need to develop custom migration scripts.
• Both tools are highly flexible and have reasonable configuration defaults. You can adjust the default configuration to satisfy the requirements of both small- and large-scale migrations.
When to Consider Other Options
You might decide not to use this approach if any of the following conditions are true:
• You are migrating from an RDS MySQL DB instance or a self-managed MySQL 5.5 or 5.6 database. In that case, you might get better results with snapshot migration or Percona XtraBackup, respectively. See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details.
• You can't use third-party software because of operating system limitations.
• Your data transformation processes require intermediate dump files in a flat-file format and not an SQL format.
Notes
To simplify the demonstration, this scenario assumes the following:
1. You execute the migration commands from client instances running a Linux operating system:
  a. Client instance A is located in the source server's network.
  b. Client instance B is located in the same Amazon VPC, Availability Zone, and Subnet as the target Aurora cluster.
2. The source server is a self-managed MySQL database (e.g., running on Amazon EC2 or on premises) configured to allow connections from client instance A.
3. The target Aurora DB cluster already exists and is configured to allow connections from client instance B. If you don't have an Aurora DB cluster yet, review the step-by-step cluster launch instructions in the Amazon RDS User Guide.
4. Communication is allowed between both client instances.
5. You perform the export from the source database using a privileged, super-user MySQL account. For simplicity, the example assumes that the user holds all permissions available in MySQL.
6. You perform the import into Amazon Aurora using the master user account, that is, the account whose name and password were specified during the cluster launch process.
7. The Amazon Linux 2016.03.3 operating system is used to
demonstrate the configuration and compilation steps for mydumper and myloader.
Note: This migration approach requires application downtime while the dump and import are in progress. You can avoid application downtime by extending the scenario with MySQL binary log replication. See the Self-Managed Migration with Near Zero Downtime section for more details.
Examples (Preparing Tools)
The first step is to obtain and build the mydumper and myloader tools. See the MySQL Data Dumper project page for up-to-date download links, and ensure that the tools are prepared on both client instances. The utilities depend on several packages that you should install first:

[ec2-user@clientA ~]$ sudo yum install glib2-devel mysql56 \
    mysql56-devel zlib-devel pcre-devel openssl-devel gcc gcc-c++ cmake

The next steps involve creating a directory to hold the program sources and then fetching and unpacking the source archive:

[ec2-user@clientA ~]$ mkdir mydumper
[ec2-user@clientA ~]$ cd mydumper/
[ec2-user@clientA mydumper]$ wget https://launchpad.net/mydumper/0.9/0.9.1/+download/mydumper-0.9.1.tar.gz
2016-06-29 21:39:03 (153 KB/s) - 'mydumper-0.9.1.tar.gz' saved [44463/44463]
[ec2-user@clientA mydumper]$ tar zxf mydumper-0.9.1.tar.gz
[ec2-user@clientA mydumper]$ cd mydumper-0.9.1

Next, you build the binary executables:

[ec2-user@clientA mydumper-0.9.1]$ cmake .
(…)
[ec2-user@clientA mydumper-0.9.1]$ make
Scanning dependencies of target mydumper
[ 25%] Building C object CMakeFiles/mydumper.dir/mydumper.c.o
[ 50%] Building C object CMakeFiles/mydumper.dir/server_detect.c.o
[ 75%] Building C object CMakeFiles/mydumper.dir/g_unix_signal.c.o
Linking C executable mydumper
[ 75%] Built target mydumper
Scanning dependencies of target myloader
[100%] Building C object CMakeFiles/myloader.dir/myloader.c.o
Linking C executable myloader
[100%] Built target myloader

Optionally, you can move the binaries to a location defined in the operating system $PATH so that they can be executed more conveniently:

[ec2-user@clientA mydumper-0.9.1]$ sudo mv mydumper /usr/local/bin/mydumper
[ec2-user@clientA mydumper-0.9.1]$ sudo mv myloader /usr/local/bin/myloader

As a final step, confirm that both utilities are available in the system:

[ec2-user@clientA ~]$ mydumper -V
mydumper 0.9.1, built against MySQL 5.6.31
[ec2-user@clientA ~]$ myloader -V
myloader 0.9.1, built against MySQL 5.6.31

Examples (Migration)
After completing the preparation steps, you can perform the migration. The mydumper command uses the following basic syntax:

mydumper -h <source_server_address> -u <source_user> \
    -p <source_user_password> -B <source_schema> \
    -t <thread_count> -o <output_directory>

Descriptions of the parameter values are as follows:
• <source_server_address>: DNS name or IP address of the source server
• <source_user>: MySQL user account name on the source server
• <source_user_password>: MySQL user account password on the source server
• <source_schema>: Name of the schema to dump
• <thread_count>: Number of parallel threads used to dump the data
• <output_directory>: Name of the directory where dump files should be placed
Note: mydumper is a highly customizable data dumping tool. For a complete list of supported parameters and their default values, use the built-in help: mydumper --help
The example dump is executed as follows:

[ec2-user@clientA ~]$ mydumper -h 11.22.33.44 -u root \
    -p pAssw0rd -B myschema -t 4 -o myschema_dump/

The operation results in the following files being created in the dump directory:

[ec2-user@clientA ~]$ ls -sh1 myschema_dump/
total 733M
 40K metadata
 40K myschema-schema-create.sql
 40K myschema.t1-schema.sql
184M myschema.t1.sql
 40K myschema.t2-schema.sql
184M myschema.t2.sql
 40K myschema.t3-schema.sql
184M myschema.t3.sql
 40K myschema.t4-schema.sql
184M myschema.t4.sql

The directory contains a collection of metadata files in addition to schema and data dumps. You don't have to manipulate these files directly. It's enough that the directory structure is understood by the myloader tool. Compress the entire directory and transfer it to client instance B:

[ec2-user@clientA ~]$ tar czf myschema_dump.tar.gz myschema_dump
[ec2-user@clientA ~]$ scp -i ssh-key.pem myschema_dump.tar.gz \
    <clientB_ssh_user>@<clientB_address>:/home/ec2-user/

When the transfer is complete, connect to client instance B and verify that the myloader utility is available:

[ec2-user@clientB ~]$ myloader -V
myloader 0.9.1, built against MySQL 5.6.31

Now you can unpack the dump and import it. The syntax used for the myloader command is very similar to what you already used for mydumper. The only difference is the -d (source directory) parameter replacing the -o (target directory) parameter:

[ec2-user@clientB ~]$ tar zxf myschema_dump.tar.gz
[ec2-user@clientB ~]$ myloader -h <cluster_dns_endpoint> \
    -u master -p pAssw0rd -B myschema -t 4 -d myschema_dump/

Useful Tips
• The concurrency level (thread count) does not have to be the same for export and import operations. A good rule of thumb is to use one thread per server CPU core (for dumps) and one thread per two CPU cores (for imports).
• The schema and data dumps produced by mydumper use an SQL format and are compatible with MySQL 5.6. Although you will typically use the pair of mydumper and myloader tools together for best results, technically you can import the dump files by using any other MySQL-compatible client tool.
You can find more tips and best practices for self-managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Heterogeneous Migrations
For detailed step-by-step instructions on how to migrate schema and data from a non-MySQL-compatible database into an Aurora DB cluster using AWS SCT and AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora. Before running the migration, we suggest that you review Proof of Concept with Aurora and use a data volume and workload that are representative of your production environment as a blueprint.
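Such migrations are usually driven through the AWS DMS console, but the task can also be scripted. The following is a minimal, hypothetical AWS CLI sketch of a full-load-plus-CDC replication task; the ARNs and the table-mappings file are placeholders, and the source endpoint, target endpoint, and replication instance must already exist as described in the AWS DMS User Guide:

# All ARNs and file names below are placeholders
aws dms create-replication-task \
    --replication-task-identifier source-to-aurora-task \
    --source-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE \
    --target-endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:TARGET \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:INSTANCE \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json

The schema itself is still converted separately with AWS SCT, as described in the Schema Migration section.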
Heterogeneous Migrations

For detailed step-by-step instructions on how to migrate schema and data from a non-MySQL-compatible database into an Aurora DB cluster using AWS SCT and AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora. Prior to running the migration, we suggest that you review Proof of Concept with Aurora to understand the volume of data involved and to build a blueprint that is representative of your production environment.

Testing and Cutover

Once the schema and data have been successfully migrated from the source database to Amazon Aurora, you are ready to perform end-to-end testing of your migration process. The testing approach should be refined after each test migration, and the final migration plan should include a test plan that ensures adequate testing of the migrated database.

Migration Testing

• Basic acceptance tests: These pre-cutover tests should be automatically executed upon completion of the data migration process. Their primary purpose is to verify whether the data migration was successful. Following are some common outputs from these tests:
  o Total number of items processed
  o Total number of items imported
  o Total number of items skipped
  o Total number of warnings
  o Total number of errors
  If any of these totals reported by the tests deviate from the expected values, the migration was not successful, and the issues need to be resolved before moving to the next step in the process or the next round of testing.
• Functional tests: These post-cutover tests exercise the functionality of the application(s) using Aurora for data storage. They include a combination of automated and manual tests. The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora.
• Nonfunctional tests: These post-cutover tests assess the nonfunctional characteristics of the application, such as performance under varying levels of load.
• User acceptance tests: These post-cutover tests should be executed by the end users of the application once the final data migration and cutover is complete. The purpose of these tests is for the end users to decide if the application is sufficiently usable to meet its primary function in the organization.

Cutover

Once you have completed the final migration and testing, it is time to point your application to the Amazon Aurora database. This phase of migration is known as cutover. If the planning and testing phases have been executed properly, cutover should not lead to unexpected issues.

Precutover Actions

• Choose a cutover window: Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business. Normally you would select a low-activity period for the database (typically nights and/or weekends).
• Make sure changes are caught up: If a near-zero downtime migration approach was used to replicate database changes from the source to the target database, make sure that all database changes are caught up and your target database is not significantly lagging behind the source database.
• Prepare scripts to make the application configuration changes: In order to accomplish the cutover, you need to modify database connection details in your application configuration files. Large and complex applications may require updates to connection details in multiple places. Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably (see the sketch after this list).
• Stop the application: Stop the application processes on the source database and put the source database in read-only mode so that no further writes can be made to the source database. If the source database changes aren't fully caught up with the target database, wait for some time while these changes are fully propagated to the target database.
• Execute pre-cutover tests: Run automated pre-cutover tests to make sure that the data migration was successful.
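As an illustration of the "prepare scripts" and "stop the application" items above, the following is a minimal sketch. The configuration file path, property name, and endpoints are hypothetical placeholders, and the exact read-only procedure depends on your source environment (for example, SET GLOBAL read_only applies to a self-managed MySQL source).

[ec2-user@appserver ~]$ sudo sed -i.bak \
    's/^db\.host=.*/db.host=<cluster_dns_endpoint>/' /etc/myapp/db.properties

[ec2-user@clientA ~]$ mysql -h <source_server_address> -u root -ppAssw0rd \
    -e "SET GLOBAL read_only = 1;"

Note that accounts holding the SUPER privilege can still write while read_only is enabled, so the application processes should be stopped first, as described above.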
Cutover

• Execute cutover: If pre-cutover checks were completed successfully, you can now point your application to Amazon Aurora. Execute the scripts created in the pre-cutover phase to change the application configuration to point to the new Aurora database.
• Start your application: At this point, you may start your application. If you have the ability to stop users from accessing the application while it is running, exercise that option until you have executed your post-cutover checks.

Post-cutover Checks

• Execute post-cutover tests: Execute predefined automated or manual test cases to make sure your application works as expected with the new database. It's a good strategy to start by testing read-only functionality of the database before executing tests that write to the database.
• Enable user access and closely monitor: If your test cases were executed successfully, you may give users access to the application to complete the migration process. Both the application and the database should be closely monitored at this time.
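A simple connectivity smoke test against the new cluster endpoint can be run before the full post-cutover test suite. The following one-liner is a minimal sketch that reuses the hypothetical credentials from the earlier examples; @@aurora_version confirms you are connected to an Aurora MySQL instance, and @@innodb_read_only should be 0 (read/write) when connected through the cluster endpoint.

[ec2-user@clientB ~]$ mysql -h <cluster_dns_endpoint> -u master -ppAssw0rd \
    -e "SELECT @@aurora_version, @@innodb_read_only, NOW();"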
Troubleshooting

The following sections provide examples of common issues and error messages to help you troubleshoot heterogeneous DMS migrations.

Troubleshooting MySQL-Specific Issues

The following issues are specific to using AWS DMS with MySQL databases.

Topics
• CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Disabled
• Connections to a target MySQL instance are disconnected during a task
• Adding Autocommit to a MySQL-compatible Endpoint
• Disable Foreign Keys on a Target MySQL-compatible Endpoint
• Characters Replaced with Question Mark
• "Bad event" Log Entries
• Change Data Capture with MySQL 5.5
• Increasing Binary Log Retention for Amazon RDS DB Instances
• Log Message: Some changes from the source database had no impact when applied to the target database
• Error: Identifier too long
• Error: Unsupported Character Set Causes Field Data Conversion to Fail
• Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Disabled

This issue occurs with Amazon RDS DB instances because automated backups are disabled. Enable automatic backups by setting the backup retention period to a non-zero value.

Connections to a target MySQL instance are disconnected during a task

If you have a task with LOBs that is getting disconnected from a MySQL target with the following type of errors in the task log, you might need to adjust some of your task settings:

[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 08S01 NativeError: 2013
Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.7.16-log]Lost connection
to MySQL server during query [122502] ODBC general error

To solve the issue where a task is being disconnected from a MySQL target, do the following:

• Check that you have your database variable max_allowed_packet set large enough to hold your largest LOB.
• Check that you have the following variables set to a large timeout value. We suggest you use a value of at least 5 minutes for each of these variables:
  o net_read_timeout
  o net_write_timeout
  o wait_timeout
  o interactive_timeout

Adding Autocommit to a MySQL-compatible Endpoint

To add autocommit to a target MySQL-compatible endpoint, use the following procedure:

1. Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL-compatible target endpoint that you want to add autocommit to.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:

Initstmt=SET AUTOCOMMIT=1

6. Choose Modify.

Disable Foreign Keys on a Target MySQL-compatible Endpoint

You can disable foreign key checks on MySQL by adding the following to the Extra connection attributes in the Advanced section of the target MySQL, Amazon Aurora with MySQL compatibility, or MariaDB endpoint.

To disable foreign keys on a target MySQL-compatible endpoint, use the following procedure:

1. Sign in to the AWS Management Console and select DMS.
2. Select Endpoints.
3. Select the MySQL, Aurora MySQL, or MariaDB target endpoint for which you want to disable foreign keys.
4. Select Modify.
5. Select Advanced, and then add the following code to the Extra connection attributes text box:

Initstmt=SET FOREIGN_KEY_CHECKS=0

6. Choose Modify.

Characters Replaced with Question Mark

The most common cause of this issue is that the source endpoint characters have been encoded with a character set that AWS DMS doesn't support. For example, AWS DMS engine versions prior to version 3.1.1 don't support the UTF8MB4 character set.

"Bad event" Log Entries

"Bad event" entries in the migration logs usually indicate that an unsupported DDL operation was attempted on the source database endpoint. Unsupported DDL operations cause an event that the replication instance cannot skip, so a bad event is logged. To fix this issue, restart the task from the beginning, which reloads the tables and starts capturing changes at a point after the unsupported DDL operation was issued.

Change Data Capture with MySQL 5.5

AWS DMS change data capture (CDC) for Amazon RDS MySQL-compatible databases requires full image row-based binary logging, which is not supported in MySQL version 5.5 or lower. To use AWS DMS CDC, you must upgrade your Amazon RDS DB instance to MySQL version 5.6.

Increasing Binary Log Retention for Amazon RDS DB Instances

AWS DMS requires the retention of binary log files for change data capture. To increase log retention on an Amazon RDS DB instance, use the following procedure. The following example increases the binary log retention to 24 hours:

call mysql.rds_set_configuration('binlog retention hours', 24);
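To confirm the current retention setting after changing it, you can call the companion stored procedure on the RDS or Aurora instance. This is a minimal sketch; the endpoint and user names are placeholders.

[ec2-user@clientA ~]$ mysql -h <source_rds_endpoint> -u <admin_user> -p \
    -e "call mysql.rds_show_configuration;"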
Log Message: Some changes from the source database had no impact when applied to the target database

When AWS DMS updates a MySQL database column's value to its existing value, a message of "zero rows affected" is returned from MySQL. This behavior is unlike other database engines, such as Oracle and SQL Server, which perform an update of one row even when the replacing value is the same as the current one.

Error: Identifier too long

The following error occurs when an identifier is too long:

TARGET_LOAD E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1059
Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.6.10]Identifier name
'<name>' is too long [122502] ODBC general error (ar_odbc_stmt.c: 4054)

When AWS DMS is set to create the tables and primary keys in the target database, it currently does not use the same names for the primary keys that were used in the source database. Instead, AWS DMS creates the primary key name based on the table name. When the table name is long, the auto-generated identifier can be longer than the allowed limits for MySQL. To solve this issue, currently, pre-create the tables and primary keys in the target database, and use a task with the task setting Target table preparation mode set to Do nothing or Truncate to populate the target tables.

Error: Unsupported Character Set Causes Field Data Conversion to Fail

The following error occurs when an unsupported character set causes a field data conversion to fail:

[SOURCE_CAPTURE ]E: Column '<column name>' uses an unsupported
character set [120112] A field data conversion failed
(mysql_endpoint_capture.c: 2154)

This error often occurs because of tables or databases using UTF8MB4 encoding. AWS DMS engine versions prior to 3.1.1 don't support the UTF8MB4 character set. In addition, check your database's parameters related to connections. The following command can be used to see these parameters:

SHOW VARIABLES LIKE '%char%';

Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

The following error can occur during a migration if you have non-codepage-1252 characters in the source MySQL database:

[SOURCE_CAPTURE ]E: Error converting column 'column_xyz' in table
'table_xyz' with codepage 1252 to UTF8 [120112] A field data
conversion failed (mysql_endpoint_capture.c: 2248)

As a workaround, you can use the CharsetMapping extra connection attribute with your source MySQL endpoint to specify character set mapping. You might need to restart the AWS DMS migration task from the beginning if you add this extra connection attribute.

For example, the following extra connection attributes could be used for a MySQL source endpoint where the source character set is utf8 or latin1. 65001 is the UTF8 code page identifier:

CharsetMapping=utf8,65001
CharsetMapping=latin1,65001
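Before starting a DMS task on an older engine version, it can help to check whether the source schema uses character sets that the engine version doesn't support (such as utf8mb4 on engines prior to 3.1.1). The following query against INFORMATION_SCHEMA is a minimal sketch; the schema name is a placeholder taken from the earlier examples.

[ec2-user@clientA ~]$ mysql -h <source_server_address> -u root -p -e "
    SELECT table_name, column_name, character_set_name
    FROM information_schema.columns
    WHERE table_schema = 'myschema'
      AND character_set_name = 'utf8mb4';"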
Conclusion

Amazon Aurora is a high-performance, highly available, and enterprise-grade database built for the cloud. Leveraging Amazon Aurora can result in better performance and greater availability than other open-source databases, and lower costs than most commercial-grade databases. This paper proposes strategies for identifying the best method to migrate databases to Amazon Aurora and details the procedures for planning and executing those migrations. In particular, AWS Database Migration Service (AWS DMS) as well as the AWS Schema Conversion Tool are the recommended tools for heterogeneous migration scenarios. These powerful tools can greatly reduce the cost and complexity of database migrations.

Multiple factors contribute to a successful database migration:

• The choice of the database product
• A migration approach (e.g., methods, tools) that meets performance and uptime requirements
• Well-defined migration procedures that enable database administrators to prepare, test, and complete all migration steps with confidence
• The ability to identify, diagnose, and deal with issues with little or no interruption to the migration process

We hope that the guidance provided in this document will help you introduce meaningful improvements in all of these areas, and that it will ultimately contribute to creating a better overall experience for your database migrations into Amazon Aurora.

Contributors

Contributors to this document include:

• Bala Mugunthan, Sr. Partner Solution Architect, Amazon Web Services
• Ashar Abbas, Database Specialty Architect
• Sijie Han, SA Manager, Amazon Web Services
• Szymon Komendera, Database Engineer, Amazon Web Services

Further Reading

For additional information, see:

• Aurora on Amazon RDS User Guide
• Migrating Your Databases to Amazon Aurora (AWS whitepaper)
• Best Practices for Migrating MySQL Databases to Amazon Aurora (AWS whitepaper)

Document Revisions

Date            Description
July 2020       Added information on large database migrations to Amazon Aurora; functional partition and data shard consolidation strategies are discussed in the homogeneous migration sections. Multi-threaded migration using the mydumper and myloader open-source tools is introduced. Basic acceptance testing, functional tests, nonfunctional tests, and user acceptance tests are explained in the testing phase, and pre-cutover and post-cutover phase scenarios are further explained.
September 2019  First publication
|
General
|
consultant
|
Best Practices
|
Amazon_Aurora_MySQL_Database_Administrators_Handbook_Connection_Management
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Aurora MySQL Database Administrato r’s Handbook Connection Management First Published January 2018 Updated October 20 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlContents Introduction 1 DNS endpoints 2 Connection handling in Aurora MySQL and MySQL 2 Common misconceptions 4 Best practices 5 Using smart drivers 5 DNS caching 7 Connection management and pooling 7 Connection scaling 9 Transaction management and autocommit 10 Connection handshakes 12 Load balancing with the reader endpoint 12 Designing for fault tolerance and quick recovery 13 Server configuration 14 Conclusion 16 Contributors 16 Further reading 16 Document revisions 17 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAbstract This paper outlines the best practices for managing database connections setting server connection parameters and configuring client programs drivers and connectors It’s a recommended read for Amazon Aurora MySQL Database Administrators (DBAs) and application developers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 1 Introduction Amazon Aurora MySQL (Aurora MySQL) is a managed relational database engine wirecompatible with MySQL 56 and 57 Most of the drivers connectors and tools that you currently use with MySQL can be used with Aurora MySQL with little or no change Aurora MySQL database (DB) clusters provide advanced fe atures such as: • One primary instance that supports read/write operations and up to 15 Aurora Replicas that support read only operations Each of the Replicas can be automatically promoted to the primary role if the current primary instance fails • A cluster endpoint that automatically follows the primary instance in case of failover • A reader endpoint that includes all Aurora Replicas and is automatically updated when Aurora Replicas are added or removed • Ability to create custom DNS endpoints contain ing a user configured group of 
database instances within a single cluster • Internal server connection pooling and thread multiplexing for improved scalability • Near instantaneous database restarts and crash recovery • Access to near realtime cluster metada ta that enables application developers to build smart drivers connecting directly to individual instances based on their read/write or read only role Client side components (applications drivers connectors and proxies) that use sub optimal configurati on might not be able to react to recovery actions and DB cluster topology changes or the reaction might be delayed This can contribute to unexpected downtime and performance issues To prevent that and make the most of Aurora MySQL features AWS encourag es Database Administrators (DBAs) and application developers to implement the best practices outlined in this whitepaper This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 2 DNS endpoints An Aurora DB cluster consists of one or more instances and a cluster volume that manages the data for those instances There are two types of instances: • Primary instance – Supports read and write statements Currently there can be one primary instance per DB cluster • Aurora Replica – Supports read only statements A DB cluster can have up to 15 Aurora Replicas The Auror a Replicas can be used for read scaling and are automatically used as failover targets in case of a primary instance failure Amazon Aurora supports the following types of Domain Name System (DNS) endpoints: • Cluster endpoint – Connects you to the primary instance and automatically follows the primary instance in case of failover that is when the current primary instance is demoted and one of the Aurora Replicas is promoted in its place • Reader endpoint – Includes all Aurora Replicas in the DB cluster und er a single DNS CNAME You can use the reader endpoint to implement DNS round robin load balancing for read only connections • Instance endpoint – Each instance in the DB cluster has its own individual endpoint You can use this endpoint to connect directly to a specific instance • Custom endpoints – User defined DNS endpoints containing a selected group of instances from a given cluster For more information refer to the Overview of Amazon Aurora page Connection handling in Aurora MySQL and MySQL MySQL Community Edition manages connections in a one thread perconnection fashion This means that each individual user connection receives a dedicated operating system thread in the mysqld process Issues with this type of connection handling include: • Relatively high memory use when there is a large number of user connections even if the connections are completely idle • Higher internal server contention and context switching overhead when working with thousands of user connections This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 3 Aurora MySQL supports a thread pool approach that addresses these issues You can characterize the thread pool approach as follows: • It uses thread multiplexing where a number of worker threads can switch between user sessions (connections) A worker thread is not 
fixe d or dedicated to a single user session Whenever a connection is not active (for example is idle waiting for user input waiting for I/O and so on) the worker thread can switch to another connection and do useful work You can think of worker threads as CPU cores in a multi core system Even though you only have a few cores you can easily run hundreds of programs simultaneously because they're not all active at the same time This highly efficient approach means that Aurora MySQL can handle thousands of concurrent clients with just a handful of worker threads • The thread pool automatically scales itself The Aurora MySQL database process continuously monitors its thread pool state and launches new workers or destroys existing ones as needed This is tr ansparent to the user and doesn’t need any manual configuration Server thread pooling reduces the server side cost of maintaining connections However it doesn’t eliminate the cost of setting up these connections in the first place Opening and closing c onnections isn't as simple as sending a single TCP packet For busy workloads with short lived connections (for example keyvalue or online transaction processing (OLTP) ) consider using an application side connection pool The following is a network pack et trace for a MySQL connection handshake taking place between a client and a MySQL compatible server located in the same Availability Zone: 04:23:29547316 IP client32918 > servermysql: tcp 0 04:23:29547478 IP servermysql > client32918: tcp 0 04:23:29547496 IP client32918 > servermysql: tcp 0 04:23:29547823 IP servermysql > client32918: tcp 78 04:23:29547839 IP client32918 > servermysql: tcp 0 04:23:29547865 IP client32918 > servermysql: tcp 191 04:23:29547993 IP servermysql > client329 18: tcp 0 04:23:29548047 IP servermysql > client32918: tcp 11 04:23:29548091 IP client32918 > servermysql: tcp 37 04:23:29548361 IP servermysql > client32918: tcp 99 04:23:29587272 IP client32918 > servermysql: tcp 0 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 4 This is a packet trace for closing the connection: 04:23:37117523 IP client32918 > servermysql: tcp 13 04:23:37117818 IP servermysql > client32918: tcp 56 04:23:37117842 IP client32918 > servermysql: tcp 0 As you can see even the simple act of opening and closing a single connection involves an exchange of several network packets The connection overhead becomes more pronounced when you consider SQL statements issued by drivers as part of connection setup (for example SET variable_name = value commands used to set session level configuration) Server side thread pooling doesn’t eliminate this type of overhead Common misconceptions The following are common misconceptions for database connection management • If the server uses connection pooling you don’t need a pool on the application side As explained previously this isn’t true for workloads where connections are opened and torn down very frequently and clients run relatively few statements per connectio n You might not need a connection pool if your connections are long lived This means that connection activity time is much longer than the time required to open and close the connection You can run a packet trace with tcpdump and see how many packets yo u need to open or close connections versus how many packets you need to run your 
queries within those connections Even if the connections are long lived you can still benefit from using a connection pool to protect the database against connection surges that is large bursts of new connection attempts • Idle connections don’t use memory This isn’t true because the operating system and the database process both allocate an in memory descriptor for each user connection What is typically true is that Auror a MySQL uses less memory than MySQL Community Edition to maintain the same number of connections However memory usage for idle connections is still not zero even with Aurora MySQL The general best practice is to avoid opening significantly more connect ions than you need This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 5 • Downtime depends entirely on database stability and database features This isn’t true because the application design and configuration play an important role in determining how fast user traffic can recover following a database event For more details refer to the Best practices section of this whitepaper Best practices The following are best practices for managing database connections and configuring connection drivers and pools Using smart drivers The cluster and reader endpoints abstract the role changes (primary instance promotion and demotion) and topology changes (addition and removal of instances) occurring in the DB cluster However DNS updates are not instantaneous In addition they can sometimes contribute to a slightly longer delay between the time a database event occurs and the time it’s noticed and handled by the application Aurora MySQL exposes near realtime metadata about DB instances in the INFORMATION_SCHEMAREPLICA_HOST_STATUS table Here is an example of a query against the metadata table: mysql> select server_id if(session_id = 'MASTER_SESSION_ID' 'writer' 'reader' ) as role replica_lag_in_milliseconds from information_schemareplica_host_status; + + + + | server_id | role | replica_lag_in_milliseconds | + + + + | aurora nodeusw2a | writer | 0 | | aurora nodeusw2b | reader | 19253999710083008 | + + + + 2 rows in set (000 sec) Notice that the table contains cluster wide metadata You can query the table on any instance in the DB cluster For the purpose of this whitepaper a smart driver is a database driver or connector with the ability to read DB cluster topology from the metadata table It can rou te new connections to individual instance endpoints without relying on high level cluster This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 6 endpoints A smart driver is also typically capable of load balancing read only connections across the available Aurora Replicas in a round robin fashion The MariaDB Connector/J is an example of a third party Java Database Connectivity (JDBC) smart driver with native support for Aurora MySQL DB clusters Application developers can draw inspiration from the MariaDB driver to build drivers and connectors for languages other than Java Refer to the MariaDB Connector/J page for details The AWS JDBC Driver for MySQL (preview) is a client driver designed for the high availability 
of Aurora MySQL The AWS JDBC Driver for MySQL is drop in compatible with the MySQL Connector/J driver The AWS JDBC Driver for MySQL takes full advantage of the failover capabilities of Aurora MySQL The AWS JDBC Driver for MySQL fully maintains a cache of the DB cluster topology and each DB in stance's role either primary DB instance or Aurora Replica It uses this topology to bypass the delays caused by DNS resolution so that a connection to the new primary DB instance is established as fast as possible Refer to the AWS JDBC Driver for MySQL GitHub repository for details If you’re using a smart driver the recommendations listed in the following sections still apply A smart driver can automate and abstract certain layers of database connectivity However it doesn’t automatically configure itself with optimal settings or automatically make the application resilient to failures For example when using a smart driver you still need to ensure that the connection val idation and recycling functions are configured correctly there’s no excessive DNS caching in the underlying system and network layers transactions are managed correctly and so on It’s a good idea to evaluate the use of smart drivers in your setup Note that if a third party driver contains Aurora MySQL –specific functionality it doesn’t mean that it has been officially tested validated or certified by AWS Also note that due to the advanced builtin features and higher overall complexity smart driver s are likely to receive updates and bug fixes more frequently than traditional (bare bones) drivers You should regularly review the driver’s release notes and use the latest available version whenever possible This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 7 DNS caching Unless you use a smart databas e driver you depend on DNS record updates and DNS propagation for failovers instance scaling and load balancing across Aurora Replicas Currently Aurora DNS zones use a short Time ToLive (TTL) of five seconds Ensure that your network and client confi gurations don’t further increase the DNS cache TTL Remember that DNS caching can occur anywhere from your network layer through the operating system to the application container For example Java virtual machines (JVMs) are notorious for caching DNS in definitely unless configured otherwise Here are some examples of issues that can occur if you don’t follow DNS caching best practices: • After a new primary instance is promoted during a failover applications continue to send write traffic to the old insta nce Data modifying statements will fail because that instance is no longer the primary instance • After a DB instance is scaled up or down applications are unable to connect to it Due to DNS caching applications continue to use the old IP address of tha t instance which is no longer valid • Aurora Replicas can experience unequal utilization for example one DB instance receiving significantly more traffic than the others Connection management and pooling Always close database connections explicitly inst ead of relying on the development framework or language destructors to do it There are situations especially in container based or code asaservice scenarios when the underlying code container isn’t immediately destroyed after the code completes In su ch cases you might experience 
database connection leaks where connections are left open and continue to hold resources (for example memory and locks) If you can’t rely on client applications (or interactive clients) to close idle connections use the server’s wait_timeout and interactive_timeout parameters to configure idle connection timeout The default timeout value is fairly high at 28800 seconds ( 8 hours) You should tune it down to a value that’s acceptable in your environment Refer to the MySQL Reference Manual for details Consider using connection pooling to protect the database against connection surges Also consider connection pooling if the appli cation opens large numbers of connections (for example thousands or more per second) and the connections are short lived that is the time required for connection setup and teardown is significant compared to the This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 8 total connection lifetime If your develo pment framework or language doesn’t support connection pooling you can use a connection proxy instead Amazon RDS Proxy is a fully managed highly available database proxy for Amazon Relational Database Service (Amazon RDS) that makes applications more scalable more resilient to database failures and more secure ProxySQL MaxScale and ScaleArc are examples of third party proxies compatible with the MySQL protocol Refer to the Connection scaling section of this document for more notes on connection pools versus proxies By using Amazon RDS Proxy you can allow your applications to pool and share database connections to improve their ability to scale Amazon RDS Proxy make s applications more resilient to database failures by automatically connecting to a standby DB instance while preserving application connections AWS recommend s the following for configuring connection pools and proxies: • Check and validate connection healt h when the connection is borrowed from the pool The validation query can be as simple as SELECT 1 However in Amazon Aurora you can also use connection checks that return a different value depending on whether the instance is a primary instance (read/wri te) or an Aurora Replica (read only) For example you can use the @@innodb_read_only variable to determine the instance role If the variable value is TRUE you're on an Aurora Replica • Check and validate connections periodically even when they're not borrowed It helps detect and clean up broken or unhealthy connections before an application thread attempts to use them • Don't let connections remain in the pool indefinitely Recycle connections by closing and reopening them periodically (for example ev ery 15 minutes) which frees the resources associated with these connections It also helps prevent dangerous situations such as runaway queries or zombie connections that clients have abandoned This recommendation applies to all connections not just idl e ones This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 9 Connection scaling The most common technique for scaling web service capacity is to add or remove application servers (instances) in response to changes in user 
traffic Each application server can use a database connection pool This approach ca uses the total number of database connections to grow proportionally with the number of application instances For example 20 application servers configured with 200 database connections each would require a total of 4000 database connections If the app lication pool scales up to 200 instances (for example during peak hours) the total connection count will reach 40000 Under a typical web application workload most of these connections are likely idle In extreme cases this can limit database scalabil ity: idle connections do take server resources and you’re opening significantly more of them than you need Also the total number of connections is not easy to control because it’s not something you configure directly but rather depends on the number of application servers You have two options in this situation: • Tune the connection pools on application instances Reduce the number of connections in the pool to the acceptable minimum This can be a stop gap solution but it might not be a long term solut ion as your application server fleet continues to grow • Introduce a connection proxy between the database and the application On one side the proxy connects to the database with a fixed number of connections On the other side the proxy accepts applicat ion connections and can provide additional features such as query caching connection buffering query rewriting/routing and load balancing Connection proxies • Amazon RDS Proxy is a fully managed highly available database proxy for Amazon RDS that makes applications more scalable more resilient to database failures and more secure Amazon RDS Proxy reduces the memory and CPU overhead for connection management on the database • Using Amazon RDS Proxy you can handle unpredictable surges in database traffic that otherwise might cause issues due to oversubscribing connections or creating new connections at a fast rate To protect the database against oversubscription you can control the number of database connections that are created This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 10 • Each RDS proxy performs connection pooling for the writer instance of its associated Amazon RDS or Aurora database Connection pooling is an optimization that reduces the overhead associated with opening and closing connections and with keeping many connections ope n simultaneously This overhead includes memory needed to handle each new connection It also involves CPU overhead to close each connection and open a new one such as Transport Layer Security/Secure Sockets Layer (TLS/SSL) handshaking authentication ne gotiating capabilities and so on Connection pooling simplifies your application logic You don't need to write application code to minimize the number of simultaneous open connections Connection pooling also cuts down on the amount of time a user must w ait to establish a connection to the database • To perform load balancing for read intensive workloads you can create a read only endpoint for RDS proxy That endpoint passes connections to the reader endpoint of the cluster That way your proxy connectio ns can take advantage of Aurora read scalability • ProxySQL MaxScale and ScaleArc are examples of third party proxies compatible with the MySQL protocol For even 
greater scalability and availability you can use multiple proxy instances behind a single D NS endpoint Transaction management and autocommit With autocommit enabled each SQL statement runs within its own transaction When the statement ends the transaction ends as well Between statements the client connection is not in transaction If you need a transaction to remain open for more than one statement you explicitly begin the transaction run the statements and then commit or roll back the transaction With autocommit disabled the connection is always in transaction You can commit or roll back the current transaction at which point the se rver immediately opens a new one Refer to the MySQL Reference Manual for details Running with autocommit disabled is not recommended because it encourages long running transactions where they’re not needed Open transactions block a server’s internal garbage collection mechanisms which are essential to maintaini ng optimal performance In extreme cases garbage collection backlog leads to excessive storage consumption elevated CPU utilization and query slowness This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 11 Recommendations : • Always run with autocommit mode enabled Set the autocommit parameter to 1 on the database side (which is the default) and on the application side (which might not be the default) • Always double check the autocommit settings on the application side For example Python drivers such as MySQLdb and PyMySQL disable autocommit by default • Manage transactions explicitly by using BEGIN/START TRANSACTION and COMMIT/ROLLBACK statements You should start transactions when you need them and commit as soon as the transactional work is done Note that these recommendations are not specific to Aurora MySQL They apply to MySQL and other databases that use the InnoDB storage engine Long transactions and garbage collection backlog are easy to monitor: • You can obtain the metadata of currently running transactions from the INFORMATION_SCHEMAINNODB_TRX table The TRX_STARTED column contains the transaction start time and you can use it to calculate transaction age A transaction is worth investigating if it has been running for several minutes or more Refer to the MySQL Reference Manua l for details about the table • You can read the size of the garbage collection backlog from the InnoDB’s trx_rseg_history_len counter in the INFORMATION_SCHEMAINNODB_METRICS table Refer to the MySQL Reference Manual for details about the table The larger the counter value is the more severe the impact might be in terms of query performance CPU usage and storage consumption Values in the range of tens of thousands indicate that the garbage collection is somewhat delayed Values in the range of millions or tens of millions might be dangerous and should be investigated Note – In Amazon Aurora all DB instances use the same storage volume which means that the garbage collection is cluster wide and not specific to each instance Consequently a runaway transaction on one instance can impact all instances Therefore you sho uld monitor long transactions on all DB instances This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ 
amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 12 Connection handshakes A lot of work can happen behind the scenes when an application connector or a graphical user interface (GUI) tool opens a new database session Drivers and client tools commonly run series of statements to set up session configuration (for example SET SESSION variable = value ) This increases the cost of creating new connections and delays when your application can start issuing queries The cost of connection handshakes becomes even more important if your applications are very sensitive to latency OLTP or keyvalue workloads that expect single digit millisecond latency can be visibly impacted if each connection is expensive to open For example if the driver runs six statements to set up a connection and each statement takes just one millisecond to run your application will be delayed by six milliseconds before it issues its first query Recommendations : • Use the Aurora MySQL Advanced Au dit the General Query Log or network level packet traces (for example with tcpdump ) to obtain a record of statements run during a connection handshake Whether or not you’re experiencing connection or latency issues you should be familiar with the inte rnal operations of your database driver • For each handshake statement you should be able to explain its purpose and describe its impact on queries you'll subsequently run on that connection • Each handshake statement requires at least one network roundtrip and will contribute to higher overall se ssion latency If the number of handshake statements appears to be significant relative to the number of statements doing actual work determine if you can disable any of the handshake statements Consider using connection pooling to reduce the number of c onnection handshakes Load balancing with the reader endpoint Because the reader endpoint contains all Aurora Replicas it can provide DNS based round robin load balancing for new connections Every time you resolve the reader endpoint you'll get an inst ance IP that you can connect to chosen in round robin fashion DNS load balancing works at the connection level (not the individual query level) You must keep resolving the endpoint without caching DNS to get a different instance IP on each resolution I f you only resolve the endpoint once and then keep the connection in This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 13 your pool every query on that connection goes to the same instance If you cache DNS you receive the same instance IP each time you resolve the endpoint You can use Amazon RDS Proxy to create additional read only endpoints for an Aurora cluster These endpoints perform the same kind of load balancing as the Aurora reader endpoint Applications can reconnect more quickly to the proxy endpoints than the Aurora reader endpoint if reader in stances become unavailable If you don’t follow best practices these are examples of issues that can occur: • Unequal use of Aurora Replicas for example one of the Aurora Replicas is receiving most or all of the traffic while the other Aurora Replicas sit idle • After you add or scale an Aurora Replica it doesn’t receive traffic or it begins to receive traffic after an unexpectedly long delay • After you remove an 
Aurora Replica applications continue to send traffic to that instance For more information refer to the DNS endpoints and DNS caching sections of this document Designing for fault tolerance and quick recovery In large scale database operations you’re statistically more likely to experience issues such as connection interruptions or hardware failures You must also take operational actions more frequently such as scaling adding or removing DB instances and performing software upgrades The only scalable way of addressi ng this challenge is to assume that issues and changes will occur and design your applications accordingly Examples : • If Aurora MySQL detects that the primary instance has failed it can promote a new primary instance and fail over to it which typically h appens within 30 seconds Your application should be designed to recognize the change quickly and without manual intervention • If you create additional Aurora Replicas in an Aurora DB cluster your application should automatically recognize the new Aurora Replicas and send traffic to them • If you remove instances from a DB cluster your application should not try to connect to them This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 14 Test your applications extensively and prepare a list of assumptions about how the application should react to database events Then experimentally validate the assumptions If you don’t follow best practices database events (for example failovers scaling and software upgrades) might result in longer than expected downtime For example you might notice that a failover took 30 seconds (per the DB cluster’s event notifications) but the application remained down for much longer Server configuration There are two major server configuration variables worth mentioning in the context of this whitepaper : max_connections and max_connect_errors Configuration variable max_connections The configuration variable max_connections limits the number of database connections per Aurora DB instance The best practice is to set it slightly higher than the maximum number of connections you expect to open on each instance If you also enabled performance_schema be extra careful with the setting The Performance Schema memory structures are sized automatically based on server configuration variables including max_connections The higher you set the variable the more memory Performance Schema uses In extreme cases this can lead to out of memory issues on smaller instance types Note for T2 and T3 instance families Using Performance Schema on T2 and T3 Aurora DB instances with less than 8 GB of memory isn’t recommended To reduce the risk of out ofmemory issues on T2 and T3 instances: • Don’t enable Performance Schema • If you must use Performance Schema leave max_connections at the default value • Disable Performance Schema if you plan to increase max_connections to a value significantly greater than the default value Refer to the MySQL Reference Manual for details about the max_connections variable This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 15 Configuration variable max_connect_errors 
The configuration variable max_connect_errors determines how many successive interrupted connection requests are permitted from a given client host If the client host exceeds the number of successive failed connection attempts the server blocks it Further connection attempts from that client yield an error: Host 'host_name' is blocked because of many connection errors Unblock with 'mysqladmin flush hosts' A com mon (but incorrect) practice is to set the parameter to a very high value to avoid client connectivity issues This practice isn’t recommended because it: • Allows application owners to tolerate connection problems rather than identify and resolve the underl ying cause Connection issues can impact your application health so they should be resolved rather than ignored • Can hide real threats for example someone actively trying to break into the server If you experience “host is blocked” errors increasing t he value of the max_connect_errors variable isn’t the correct response Instead investigate the server’s diagnostic counters in the aborted_connects status variable and the host_cache table Then use the information to identify and fix clients that run in to connection issues Also note that this parameter has no effect if skip_name_resolve is set to 1 (default) Refer to the MySQL Reference Manual for details on the following: • Max_connect_errors variable • “Host is blocked ” error • Aborted_connects status variable • Host_cache table This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 16 Conclusion Understanding and implementing connection management best practices is critical to achieve scalability reduce downtime and ensure smooth integration between the application and database layers You can apply most of the recommendations provided in this whitepaper with little to no engineering effort The guidance provided in this whitepaper should help you introduce improvements in your current and future application deployments using Aurora MySQL DB clusters Contributor s Contributors to this document include: • Szymon Komendera Database Engineer Amazon Aurora • Samuel Selvan Database Specialist Solutions Architect Amazon Web Services Further reading For additional information refer to : • Aurora on Amazon RDS User Guide • Communication Errors and Aborted Connections in MySQL Reference Manual This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amazonauroramysqldbadminhandbook/ amazonauroramysqldbadminhandbookhtmlAmazon Web Services Amazon Aurora MySQL Database Administrator’s Handbook Page 17 Document revisions Date Description October 20 2021 Minor content updates to follow new style guide and hyperlinks July 2021 Minor content updates to the following topics: Smart Drivers Connection Management and Pooling and Connection Scaling March 2019 Minor content updates to the following topics: Introduction DNS Endpoints and Server Configuration January 2018 First publication
|
General
|
consultant
|
Best Practices
|
Amazon_EC2_Reserved_Instances_and_Other_Reservation_Models
|
Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon EC2 Reserved Instances and Other AWS Reservation Models: AWS Whitepaper Copyright © Amazon Web Services Inc and/or its affiliates All rights reserved Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon All other trademarks not owned by Amazon are the property of their respective owners who may or may not be affiliated with connected to or sponsored by AmazonAmazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Table of Contents Abstract1 Abstract1 Introduction2 Amazon EC2 Reserved Instances3 Reserved Instances payment options3 Standard vs Convertible offering classes3 Regional and zonal Reserved Instances4 Differences between regional and zonal Reserved Instances4 Limitations for instance size flexibility5 Maximizing Utilization with Size Flexibility in Regional Reserved Instances5 Normalization factor for dedicated EC2 instances7 Normalization factor for bare metal instances7 Savings Plans9 Reservation models for other AWS services10 Amazon RDS reserved DB instances10 Amazon ElastiCache reserved nodes10 Amazon Elasticsearch Service Reserved Instances10 Amazon Redshift reserved nodes11 Amazon DynamoDB reservations11 Reserved Instances billing12 Usage billing 12 Consolidated billing 13 Reserved Instances: Capacity reservations13 Blended rates 14 How discounts are applied14 Maximizing the value of reservations15 Measure success15 Maximize discounts by standardizing instance type15 Reservation management techniques16 Reserved Instance Marketplace16 AWS Cost Explorer16 AWS Cost and Usage Report17 Reserved Instances on your cost and usage report17 AWS Trusted Advisor18 Conclusion 19 Contributors 20 Document revisions21 Notices22 iiiAmazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Abstract Amazon EC2 Reserved Instances and Other AWS Reservation Models Publication date: March 29 2021 (Document revisions (p 21)) Abstract This document is part of a series of AWS whitepapers designed to support your cloud journey and discusses Amazon EC2 Reserved Instances and reservation models for other AWS services Its aim is to empower you to maximize the value of your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and continuously measure your optimization status 1Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Introduction The cloud is well suited for variable workloads and rapid deployment yet many cloudbased workloads follow a more predictable pattern For such applications your organization can achieve significant cost savings by using Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances Amazon EC2 Reserved Instances enable your organization to commit to usage parameters at the time of purchase to achieve a lower hourly rate Reservation models are also available for Amazon Relational Database Service (Amazon RDS) Amazon ElastiCache Amazon Elasticsearch Service (Amazon ES) Amazon Redshift and Amazon DynamoDB This whitepaper discusses Amazon EC2 Reserved Instances and the reservation models for these other AWS services 2Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Reserved Instances 
payment options Amazon EC2 Reserved Instances When you purchase Reserved Instances you make a oneyear or threeyear commitment and receive a billing discount of up to 72 percent in return When used for the appropriate workloads Reserved Instances can save you a lot of money Note that a Reserved Instance is not an instance dedicated to your organization It is a billing discount applied to the use of OnDemand Instances in your account These OnDemand Instances must match certain attributes of the Reserved Instances you purchased to benefit from the billing discount You pay for the entire term of a Reserved Instance regardless of actual usage so your cost savings are closely tied to use Therefore it is important to plan and monitor your usage to make the most of your investment When you purchase a Reserved Instance in a specific Availability Zone it provides a capacity reservation This improves the likelihood that the compute capacity you need is available in a specific Availability Zone when you need it A Reserved Instance purchased for an AWS Region does not provide capacity reservation Reserved Instances payment options You can purchase Reserved Instances through the AWS Management Console The following payment options are available for most Reserved Instances: •No Upfront – No upfront payment is required You are billed a discounted hourly rate for every hour within the term regardless of whether the Reserved Instance is being used No Upfront Reserved Instances are based on a contractual obligation to pay monthly for the entire term of the reservation A successful billing history is required before you can purchase No Upfront Reserved Instances •Partial Upfront – A portion of the cost must be paid up front and the remaining hours in the term are billed at a discounted hourly rate regardless of whether you’re using the Reserved Instance •All Upfront – Full payment is made at the start of the term with no other costs or additional hourly charges incurred for the remainder of the term regardless of hours used Reserved Instances with a higher upfront payment provide greater discounts You can also find Reserved Instances offered by thirdparty sellers at lower prices and shorter terms on the Reserved Instance Marketplace As you purchase more Reserved Instances volume discounts begin to apply that let you save even more For more information see Amazon EC2 Reserved Instance Pricing Standard vs Convertible offering classes When you purchase a Reserved Instance you can choose between a Standard or Convertible offering class Table 1 – Comparison of standard and Convertible Reserved Instances Standard Reserved Instance Convertible Reserved Instance Oneyear to threeyear term Oneyear to threeyear term 3Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Regional and zonal Reserved Instances Standard Reserved Instance Convertible Reserved Instance Enables you to modify Availability Zone scope networking type and instance size (within the same instance type) of your Reserved Instance For more information see Modifying Reserved InstancesEnables you to exchange one or more Convertible Reserved Instances for another Convertible Reserved Instance with a different configuration including instance family operating system and tenancy There are no limits to how many times you perform an exchange as long as the target Convertible Reserved Instance is of an equal or higher value than the Convertible Reserved Instances that you are exchanging For more information see Exchanging Convertible 
Reserved Instances Can be sold in the Reserved Instance MarketplaceCannot be sold in the Reserved Instance Marketplace Standard Reserved Instances typically provide the highest discount levels Oneyear Standard Reserved Instances provide a similar discount to threeyear Convertible Reserved Instances If you want to purchase capacity reservations see OnDemand Capacity Reservations Convertible Reserved Instances are useful when: •Purchasing Reserved Instances in the payer account instead of a subaccount You can more easily modify Convertible Reserved Instances to meet changing needs across your organization •Workloads are likely to change In this case a Convertible Reserved Instance enables you to adapt as needs evolve while still obtaining discounts and capacity reservations •You want to hedge against possible future price drops •You can’t or don’t want to ask teams to do capacity planning or forecasting •You expect compute usage to remain at the committed amount over the commitment period Regional and zonal Reserved Instances When you purchase a Reserved Instance you determine the scope of the Reserved Instance The scope is either regional or zonal •Regional: When you purchase a Reserved Instance for a Region it's referred to as a regional Reserved Instance •Zonal : When you purchase a Reserved Instance for a specific Availability Zone it's referred to as a zonal Reserved Instance Differences between regional and zonal Reserved Instances The following table highlights some key differences between regional Reserved Instances and zonal Reserved Instances: Table 2 – Comparison of regional and zonal Reserved Instances 4Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Limitations for instance size flexibility Regional Reserved InstancesZonal Reserved Instances Availability Zone flexibilityThe Reserved Instance discount applies to instance usage in any Availability Zone in the specified RegionNo Availability Zone flexibility— the Reserved Instance discount applies to instance usage in the specified Availability Zone only Capacity reservationNo capacity reservation—a regional Reserved Instance does not provide a capacity reservationA zonal Reserved Instance provides a capacity reservation in the specified Availability Zone Instance size flexibilityThe Reserved Instance discount applies to instance usage within the instance family regardless of size Only supported on Amazon Linux/Unix Reserved Instances with default tenancy For more information see Instance size flexibility determined by normalization factorNo instance size flexibility— the Reserved Instance discount applies to instance usage for the specified instance type and size only Limitations for instance size flexibility Instance size flexibility does not apply to the following Reserved Instances: •Reserved Instances that are purchased for a specific Availability Zone (zonal Reserved Instances) •Reserved Instances with dedicated tenancy •Reserved Instances for Windows Server Windows Server with SQL Standard Windows Server with SQL Server Enterprise Windows Server with SQL Server Web RHEL and SUSE Linux Enterprise Server •Reserved Instances for G4 instances Maximizing Utilization with Size Flexibility in Regional Reserved Instances For additional flexibility all Regional Linux Reserved Instances with shared tenancy apply to all sizes of instances within an instance family and an AWS Region even if you are using them across multiple accounts via Consolidated Billing The only attributes that must be matched are the 
instance type (for example, m4), tenancy (must be default), and platform (must be Linux). All new and existing Reserved Instances are sized according to a normalization factor based on instance size, as follows.
Table 3 – Regional Reserved Instance sizes and normalization factors
Instance size / normalization factor: nano = 0.25; micro = 0.5; small = 1; medium = 2; large = 4; xlarge = 8; 2xlarge = 16; 4xlarge = 32; 8xlarge = 64; 9xlarge = 72; 10xlarge = 80; 12xlarge = 96; 16xlarge = 128; 24xlarge = 192; 32xlarge = 256
For example, if you have a Reserved Instance for a c4.8xlarge, it applies to any usage of a Linux c4 instance with shared tenancy in the AWS Region, such as:
•One c4.8xlarge instance
•Two c4.4xlarge instances
•Four c4.2xlarge instances
•Sixteen c4.large instances
It also includes combinations of instances. For example, a t2.medium instance has a normalization factor of 2. If you purchase a t2.medium default tenancy Amazon Linux/Unix Reserved Instance in the US East (N. Virginia) Region and you have two running t2.small instances in your account in that Region, the billing benefit is applied in full to both instances.
Figure 1 – Two t2.medium instances running in a Region
Or, if you have one t2.large instance running in your account in the US East (N. Virginia) Region, the billing benefit is applied to 50% of the usage of the instance.
Figure 2 – One t2.large instance running in a Region
The normalization factor is also applied when modifying Reserved Instances.
Normalization factor for dedicated EC2 instances
For size-inflexible Reserved Instances, the normalization factor is always 1. The normalization factor has no practical effect for EC2 instances that lack size flexibility: its sole purpose is to match EC2 instances to one another within a family so that one size can be exchanged for another. Because this use case is not supported for size-inflexible instances, the normalization factor is not used; it is assigned a value of 1 only to keep the data model uniform across the different EC2 use cases.
Normalization factor for bare metal instances
Instance size flexibility also applies to bare metal instances within the instance family. If you have regional Amazon Linux/Unix Reserved Instances with shared tenancy on bare metal instances, you can benefit from the Reserved Instance savings within the same instance family. The opposite is also true: if you have regional Amazon Linux/Unix Reserved Instances with shared tenancy on instances in the same family as a bare metal instance, you can benefit from the Reserved Instance savings on the bare metal instance.
A bare metal instance is the same size as the largest instance within the same instance family. For example, an i3.metal is the same size as an i3.16xlarge, so they have the same normalization factor. The metal instance sizes do not have a single normalization factor; they vary based on the specific instance family. For the most up-to-date list, see Amazon EC2 Instance Types.
Table 4 – Bare metal instance sizes and normalization factors
Instance size / normalization factor: a1.metal = 32; c5.metal = 192; c5d.metal = 192; c5n.metal = 144; c6g.metal = 128; c6gd.metal = 128; g4dn.metal = 192; i3.metal = 128; i3en.metal = 192; m5.metal = 192; m5d.metal = 192; m5dn.metal = 192; m5n.metal = 192; m5zn.metal = 96; m6g.metal = 128; m6gd.metal = 128; r5.metal = 192; r5b.metal = 192; r5d.metal = 192; r5dn.metal = 192; r5n.metal = 192; r6g.metal = 128; r6gd.metal = 128; x2gd.metal = 128; z1d.metal = 96
For example, an i3.metal instance has a normalization factor of 128. If you purchase an i3.metal default tenancy Amazon Linux/Unix Reserved Instance in the US East (N. Virginia) Region, the billing benefit can apply as follows:
•If you have one running i3.16xlarge in your account in that Region, the billing benefit is applied in full to the i3.16xlarge instance (i3.16xlarge normalization factor = 128).
•Or, if you have two running i3.8xlarge instances in your account in that Region, the billing benefit is applied in full to both i3.8xlarge instances (i3.8xlarge normalization factor = 64).
•Or, if you have four running i3.4xlarge instances in your account in that Region, the billing benefit is applied in full to all four i3.4xlarge instances (i3.4xlarge normalization factor = 32).
The opposite is also true. For example, if you purchase two i3.8xlarge default tenancy Amazon Linux/Unix Reserved Instances in the US East (N. Virginia) Region and you have one running i3.metal instance in that Region, the billing benefit is applied in full to the i3.metal instance.
Savings Plans
Savings Plans is another flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instance usage regardless of instance family, size, OS, tenancy, or AWS Region, and also applies to AWS Fargate and AWS Lambda usage. Savings Plans offer significant savings over On-Demand Instances, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a one- or three-year term and easily manage your plans by taking advantage of recommendations, performance reporting, and budget alerts in AWS Cost Explorer.
AWS offers two types of Savings Plans:
•Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66% (just like Convertible RIs). These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, Region, operating system, or tenancy, and also apply to Fargate and Lambda usage. For example, with Compute Savings Plans you can change from C4 to M5 instances, shift a workload from EU (Ireland) to EU (London), or move a workload from Amazon EC2 to Fargate or Lambda at any time and automatically continue to pay the Savings Plans price.
•EC2 Instance Savings Plans provide the lowest prices, offering savings of up to 72% (just like Standard RIs) in exchange for a commitment to usage of individual instance families in a Region (for example, M5 usage in N. Virginia). This automatically reduces your cost on the selected instance family in that Region regardless of AZ, size, operating system, or tenancy. EC2 Instance Savings Plans give you the flexibility to change your usage between instances within a family in that Region. For example, you can move from c5.xlarge running Windows to c5.2xlarge running Linux and automatically benefit from the Savings Plans prices.
Note that Savings Plans do not provide a capacity reservation. You can, however, reserve capacity with On-Demand Capacity Reservations and pay lower prices on them with Savings Plans. You can
continue purchasing RIs to maintain compatibility with your existing cost management processes and your RIs will work alongside Savings Plans to reduce your overall bill However as your RIs expire we encourage you to sign up for Savings Plans as they offer the same savings as RIs but with additional flexibility 9Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon RDS reserved DB instances Reservation models for other AWS services In addition to Amazon EC2 reservation models are available for Amazon RDS Amazon ElastiCache Amazon ES Amazon Redshift and Amazon DynamoDB Topics •Amazon RDS reserved DB instances (p 10) •Amazon ElastiCache reserved nodes (p 10) •Amazon Elasticsearch Service Reserved Instances (p 10) •Amazon Redshift reserved nodes (p 11) •Amazon DynamoDB reservations (p 11) Amazon RDS reserved DB instances Similar to Amazon EC2 Reserved Instances there are three payment options for Amazon RDS reserved DB instances: No Upfront Partial Upfront and All Upfront All reserved DB instance types are available for Aurora MySQL MariaDB PostgreSQL Oracle and SQL Server database engines Sizeflexible reserved DB instances are available for Amazon Aurora MariaDB MySQL PostgreSQL and the “Bring Your Own License” (BYOL) edition of the Oracle database engine For more information about Amazon RDS reserved DB instances see the following: •Amazon RDS Reserved Instances •Working with Reserved DB Instances •Amazon DynamoDB Pricing Amazon ElastiCache reserved nodes Amazon ElastiCache reserved nodes give you the option to make a low onetime payment for each cache node you want to reserve In turn you receive a significant discount on the hourly charge for that node Amazon ElastiCache provides three reserved cache node types (Light Utilization Medium Utilization and Heavy Utilization) that enable you to balance the amount you pay up front with your effective hourly price Based on your application workload and the amount of time you plan to run them Amazon ElastiCache Reserved Nodes might provide substantial savings over running ondemand Nodes Reserved Cache Nodes are available for both Redis and Memcached For more information see Amazon ElastiCache Reserved Nodes Amazon Elasticsearch Service Reserved Instances Amazon Elasticsearch Service (Amazon ES) Reserved Instances (RIs) offer significant discounts compared to standard OnDemand Instances The instances themselves are identical—RIs are just a billing discount 10Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon Redshift reserved nodes applied to OnDemand Instances in your account For longlived applications with predictable usage RIs can provide considerable savings over time Amazon ES RIs require one or threeyear terms and have three payment options that affect the discount rate For more information see Amazon Elasticsearch Service Reserved Instances Amazon Redshift reserved nodes In AWS the charges that you accrue for using Amazon Redshift are based on compute nodes Each compute node is billed at an hourly rate The hourly rate varies depending on factors such as AWS Region node type and whether the node receives ondemand node pricing or reserved node pricing If you intend to keep an Amazon Redshift cluster running continuously for a prolonged period you should consider purchasing reservednode offerings These offerings provide significant savings over on demand pricing However they require you to reserve compute nodes and commit to paying for those nodes for either a oneyear or a 
threeyear duration For more information about Amazon Redshift reserved node pricing see Reserved Instance Pricing and Purchasing Amazon Redshift Reserved Nodes Amazon DynamoDB reservations If you can predict your need for Amazon DynamoDB readandwrite throughput reserved capacity offers significant savings over the normal price of DynamoDB provisioned throughput capacity You pay a onetime upfront fee and commit to paying for a minimum usage level at specific hourly rates for the duration of the reserved capacity term Any throughput you provision in excess of your reserved capacity is billed at standard rates for provisioned throughput Provisioned capacity mode might be best if you •Have predictable application traffic •Run applications whose traffic is consistent or ramps gradually •Can forecast capacity requirements to control costs For more information see Pricing for Provisioned Capacity 11Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Usage billing Reserved Instances billing All Reserved Instances provide you with a discount compared to OnDemand Instance pricing With Reserved Instances you pay for the entire term regardless of actual use You can choose to pay for your Reserved Instance upfront partially upfront or monthly depending on the payment option specified for the Reserved Instance When Reserved Instances expire you are charged OnDemand Instance rates You can queue a Reserved Instance for purchase up to three years in advance This can help you ensure that you have uninterrupted coverage For more information see Queuing your purchase You can set up a billing alert to warn you when your bill exceeds a threshold that you define For more information see Monitoring Charges with Alerts and Notifications Usage billing Except for DynamoDB reservations which are billed based on throughput reservations are billed for every clockhour during the term you select regardless of whether an instance is running or not A clock hour is defined as the standard 24hour clock that runs from midnight to midnight and is divided into 24 hours (for example 1:00:00 to 1:59:59 is one clockhour) A Reserved Instance billing benefit can be applied to a running instance on a persecond basis Per second billing is available for instances using an opensource Linux distribution such as Amazon Linux and Ubuntu Perhour billing is used for commercial Linux distributions such as Red Hat Enterprise Linux and SUSE Linux Enterprise Server A Reserved Instance billing benefit can apply to a maximum of 3600 seconds (one hour) of instance usage per clockhour You can run multiple instances concurrently but can only receive the benefit of the Reserved Instance discount for a total of 3600 seconds per clockhour Instance usage that exceeds 3600 seconds in a clockhour is billed at the OnDemand Instance rate For example if you purchase one m4xlarge Reserved Instance and run four m4xlarge instances concurrently for one hour one instance is charged at one hour of Reserved Instance usage and the other three instances are charged at three hours of OnDemand Instance usage However if you purchase one m4xlarge Reserved Instance and run four m4xlarge instances for 15 minutes (900 seconds) each within the same hour the total running time for the instances is one hour which results in one hour of Reserved Instance usage and 0 hours of OnDemand Instance usage 12Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Consolidated billing Figure 3 – Running four instances for 15 minutes each in 
the same hour If multiple eligible instances are running concurrently the Reserved Instance billing benefit is applied to all the instances at the same time up to a maximum of 3600 seconds in a clockhour Thereafter the On Demand Instance rates apply Figure 4 – Running four instances concurrently over the hour You can find out about the charges and fees to your account by viewing the AWS Billing and Cost Management console You can also examine your utilization and coverage and receive reservation purchase recommendations via AWS Cost Explorer You can dive deeper into your reservations and Reserved Instance discount allocation via the AWS Cost and Usage Report For more information on Reserved Instance usage billing see Usage Billing Consolidated billing AWS Organizations is an account management service that lets you consolidate multiple AWS accounts into an organization that you create and centrally manage AWS Organizations includes consolidated billing and account management capabilities that enable you to better meet the budgetary security and compliance needs of your business For more information see What Is AWS Organizations? For more information on consolidated bills and how they are calculated see Understanding Consolidated Bills The pricing benefits of Reserved Instances are shared when the purchasing account is billed under a consolidated billing payer account The instance usage across all member accounts is aggregated in the payer account every month This is useful for companies that have different functional teams or groups then the normal Reserved Instance logic is applied to calculate the bill Reserved Instances: Capacity reservations AWS also offers discounted hourly rates in exchange for an upfront fee and term contract Services such as Amazon EC2 and Amazon RDS use this approach to sell reserved capacity for hourly use of Reserved Instances For more information see Reserved Instances in the Amazon EC2 User Guide for Linux Instances and Working with Reserved DB Instances in the Amazon Relational Database Service User Guide 13Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Blended rates When you reserve capacity with Reserved Instances your hourly usage is calculated at a discounted rate for instances of the same usage type in the same Availability Zone (AZ) When you launch additional instances of the same instance type in the same Availability Zone and exceed the number of instances in your reservation AWS averages the rates of the Reserved Instances and the OnDemand Instances to give you a blended rate Blended rates A line item for the blended rate of that instance is displayed on the bill of any member account that is running an instance that matches the specifications of a reservation in the organization The payer account of an organization can turn off Reserved Instance sharing for member accounts in that organization via the AWS Billing Preferences This means that Reserved Instances are not shared between that member account and other member accounts Each estimated bill is computed using the most recent set of preferences For information on how to configure sharing see Turning Off Reserved Instance Sharing How discounts are applied The application of Amazon EC2 Reserved Instances is based on instance attributes including the following: •Instance type – Instance types comprise varying combinations of CPU memory storage and networking capacity (for example m4xlarge) This gives you the flexibility to choose the appropriate mix of resources for 
your applications such as computeoptimized storageoptimized and so on Each instance type includes one or more instance sizes enabling you to scale your resources to the requirements of your target workload •Platform – You can purchase Reserved Instances for Amazon EC2 instances running Linux Unix SUSE Linux Red Hat Enterprise Linux Windows Server and Microsoft SQL Server platforms •Tenancy – Reserved Instances can be default tenancy or dedicated tenancy •Regional or zonal – See Regional and zonal Reserved Instances (p 4) If you purchase a Reserved Instance and you already have a running instance that matches the attributes of the Reserved Instance the billing benefit is immediately applied You don’t have to restart your instances If you do not have an eligible running instance launch an instance and ensure that you match the same criteria that you specified for your Reserved Instance For more information see Using Your Reserved Instances 14Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Measure success Maximizing the value of reservations This section discusses how you can maximize the value of your reservations Topics •Measure success (p 15) •Maximize discounts by standardizing instance type (p 15) •Reservation management techniques (p 16) •Reserved Instance Marketplace (p 16) •AWS Cost Explorer (p 16) •AWS Cost and Usage Report (p 17) •AWS Trusted Advisor (p 18) Measure success Making the most of reservations means measuring your reservation coverage (portion of instances enjoying reservation discount benefits) and reservation utilization (degree to which purchased Reserved Instances are used) Establish a standardized review cadence in which you focus on the following questions: •Do you need to modify any of our existing reservations to increase utilization? •Are any currently utilized reservations expiring? •Do you need to purchase any reservations to increase your coverage? 
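One way to put numbers behind these review questions is to pull utilization and coverage from the AWS Cost Explorer API. The following is a minimal boto3 sketch: the review window is a placeholder, and the response fields accessed below (UtilizationPercentage and CoverageHoursPercentage) reflect the GetReservationUtilization and GetReservationCoverage operations as commonly documented, so verify them against your SDK version before relying on them.

```python
import boto3

# Cost Explorer is a global service; its API endpoint lives in us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

# Placeholder review window: Start is inclusive, End is exclusive.
period = {"Start": "2021-02-01", "End": "2021-03-01"}

# Reservation utilization: the degree to which purchased Reserved Instances are used.
utilization = ce.get_reservation_utilization(TimePeriod=period, Granularity="MONTHLY")
for bucket in utilization["UtilizationsByTime"]:
    print(bucket["TimePeriod"]["Start"],
          "RI utilization %:", bucket["Total"]["UtilizationPercentage"])

# Reservation coverage: the portion of eligible instance hours covered by reservations.
coverage = ce.get_reservation_coverage(TimePeriod=period, Granularity="MONTHLY")
for bucket in coverage["CoveragesByTime"]:
    print(bucket["TimePeriod"]["Start"],
          "RI coverage %:", bucket["Total"]["CoverageHours"]["CoverageHoursPercentage"])
```

Low utilization points to reservations that may need to be modified or exchanged, while low coverage points to On-Demand usage that may justify additional reservations or Savings Plans commitments.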
A standardized review cadence ensures that issues are surfaced and addressed in a timely manner As your RIs expire we encourage you to sign up for Savings Plans as they offer the same savings as RIs but with additional flexibility Maximize discounts by standardizing instance type By standardizing the instance types that your organization uses you can ensure that deployments match the characteristics of your reservations to maximize your discounts Standardization maximizes utilization and minimizes the level of effort associated with management of reservations Three services that can help you standardize your instances are: •AWS Config – Enables you to assess audit and evaluate the configurations of your AWS resources AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations •AWS Service Catalog – Lets you create and manage catalogs of IT services that are approved for use on AWS These IT services can include everything from virtual machine (VM) images servers software and databases to complete multitier application architecture •AWS Compute Optimizer Recommends optimal AWS compute resources for your workloads to reduce costs and improve performance by using Machine Learning algorithms to analyze historical utilization metrics The Compute Optimizer focuses on the configuration and resource utilization of your workload to identify dozens of defining characteristics such as whether a workload is CPU intensive exhibits a daily pattern or accesses local storage frequently The service processes these characteristics and identifies the hardware resource headroom required by the workload It also infers 15Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Reservation management techniques how the workload would have performed on various hardware platforms (for example Amazon EC2 instances types) and offers recommendations Reservation management techniques You can manage reservations either by using a central IT operations or management team or by using a specific team or business unit The following table summarizes the different reservation management techniques Table 5 – Comparison of different reservation management techniques Central reservation management Team/Business Unit reservation management Maximizes reservation coverage by covering aggregate usage across a businessIncreases likelihood of high reservation utilization (for example using alreadypurchased reservations) because a single team should understand its capacity commitment of RIs Simplifies overall reservation management especially when combining central management and Convertible Reserved InstancesReduces interfacing or planning between the business unit and the central team Reduces the requirement for an individual team to understand reservationsStreamlines decisions about purchases purchase process and reservation account location Reserved Instance Marketplace Reserved Instance Marketplace supports the sale of thirdparty and AWS customers' unused Standard Reserved Instances which vary in term lengths and pricing options For example you might want to sell Reserved Instances after moving instances to a new AWS Region changing to a new instance type ending projects before the term expiration when your business needs change or if you have unneeded capacity If you want to sell your unused Reserved Instances on the Reserved Instance Marketplace you must meet certain eligibility criteria For more 
information see Reserved Instance Marketplace AWS Cost Explorer AWS Cost Explorer lets you visualize understand and manage your AWS costs and usage over time You can analyze your cost and usage data at a high level (for example total costs and usage across all accounts in your organization) or for highly specific requests (for example m22xlarge costs within account Y that are tagged project: secretProject ) You can dive deeper into your reservations using the Reserved Instance utilization and coverage reports Using these reports you can set custom Reserved Instance utilization and coverage targets and visualize progress toward your goals From there you can refine the underlying data using the available filtering dimensions (for example account instance type scope and more) AWS Cost Explorer provides the following prebuilt reports: •EC2 RI Utilization % offers relevant data to identify and act on opportunities to increase your Reserved Instance usage efficiency It’s calculated by dividing Reserved Instance hours used by the total Reserved Instance purchased hours 16Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper AWS Cost and Usage Report •EC2 RI Coverage % shows how much of your overall instance usage is covered by Reserved Instances This lets you make informed decisions about when to purchase or modify a Reserved Instance to ensure maximum coverage It’s calculated by dividing Reserved Instance hours used by the total EC2 OnDemand and Reserved Instance hours Also AWS Cost Explorer provides Reserved Instance purchase recommendations for zonal and sizeflexible Reserved Instances to help payer accounts achieve greater cost efficiencies For more information see AWS Cost Explorer AWS Cost and Usage Report The AWS Cost and Usage Report contains the most comprehensive set of data about your AWS costs and usage including additional information regarding AWS services pricing and reservations By using the AWS Cost and Usage report you can gain a wealth of reservationrelated insights about the Amazon Resource Name (ARN) for a reservation the number of reservations the number of units per reservation and more It can help you do the following: •Calculate savings – Each hourly line item of usage contains the discounted rate that was charged in addition to the public OnDemand Instance rate for that usage type at that time You can quantify your savings by calculating the difference between the public OnDemand Instance rates and the rates you were charged •Track the allocation of Reserved Instance discounts – Each line item of usage that receives a discount contains information about where the discount came from This makes it easier to trace which instances are benefitting from specific reservations These reports update up to three times per day Reserved Instances on your cost and usage report The Fee line item is added to your bill when you purchase an All Upfront or Partial Upfront Reserved Instance as shown Figure 5 – Fee line item from AWS Cost and Usage Report The RI Fee line item describes the recurring monthly charges that are associated with Partial Upfront and No Upfront Reserved Instances The RI Fee is calculated by multiplying your discounted hourly rate by the number of hours in the month as shown 17Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper AWS Trusted Advisor Figure 6 – RI Fee line item from AWS Cost and Usage Report The Discounted Usage line item describes the instance usage that received a matching Reserved Instance discount 
benefit It’s added to your bill when you have usage that matches one of your Reserved Instances as shown Figure 7 – Discounted Usage line item from AWS Cost and Usage Report AWS Trusted Advisor AWS Trusted Advisor is an online resource to help you reduce cost increase performance and improve security by optimizing your AWS environment AWS Trusted Advisor provides realtime guidance to help you provision your resources following AWS best practices To help you maximize utilization of Reserved Instances AWS Trusted Advisor checks your Amazon EC2 computingconsumption history and calculates an optimal number of Partial Upfront Reserved Instances Recommendations are based on the previous calendar month's hourbyhour usage aggregated across all consolidated billing accounts Note that Trusted Advisor does not provide sizeflexible Reserved Instance recommendations For more information about how the recommendation is calculated see "Reserved Instance Optimization Check Questions" in the Trusted Advisor FAQs 18Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Conclusion Effectively planned and managed reservations can help you achieve significant discounts for AWS workloads that run on a predictable schedule It’s important to analyze your current AWS usage to select the right reservation attributes from the start and to devise a longerterm strategy for monitoring and managing your Reserved Instances Using tools such as the AWS Compute Optimizer AWS Cost and Usage report and the Reserved Instance Utilization and Coverage reports in AWS Cost Explorer you can examine your overall usage and discover opportunities for greater cost efficiencies 19Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Contributors Contributors to this document include: •Pritam Pal Senior Specialist Solution Architect EC2 Spot Amazon Web Services 20Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Document revisions To be notified about updates to this whitepaper subscribe to the RSS feed updatehistorychange updatehistorydescription updatehistorydate Updated bare metal instance types and normalization factors Removed link to Scheduled Instances (p 21)Minor update March 29 2021 Updated Reserved Instances billing information and normalization factors Savings Plan section added (p 21)Whitepaper updated August 31 2020 Initial publication (p 21) Whitepaper published March 1 2018 21Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved 22
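As a worked illustration of the savings calculation described for the AWS Cost and Usage Report, the sketch below compares the public On-Demand cost with the effective cost of each discounted line item in a report exported as CSV. The file name is a placeholder, and the column names used (lineItem/LineItemType, pricing/publicOnDemandCost, reservation/EffectiveCost, reservation/ReservationARN) assume a report created with resource and reservation details; confirm them against your own report definition.

```python
import pandas as pd

# Placeholder path to a Cost and Usage Report exported as CSV.
cur = pd.read_csv("cost-and-usage-report.csv", low_memory=False)

# Keep only the usage lines that received a Reserved Instance discount.
discounted = cur[cur["lineItem/LineItemType"] == "DiscountedUsage"].copy()

# Savings per line item: public On-Demand cost minus the effective (discounted) cost.
on_demand = pd.to_numeric(discounted["pricing/publicOnDemandCost"], errors="coerce")
effective = pd.to_numeric(discounted["reservation/EffectiveCost"], errors="coerce")
discounted["savings"] = on_demand - effective

print("Total RI savings in this report:", round(discounted["savings"].sum(), 2))

# Trace discount allocation: which reservation ARNs are producing the savings.
by_reservation = discounted.groupby("reservation/ReservationARN")["savings"].sum()
print(by_reservation.sort_values(ascending=False).head())
```

Grouping by reservation ARN mirrors the discount-allocation tracing described above: it shows which reservations are actually doing the work and which are going unused.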
|
General
|
consultant
|
Best Practices
|
Amazon_Elastic_File_System_Choosing_Between_Different_Throughput_and_Performance_Mode
|
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System Choosing Between the D ifferent Throughput & Performance Modes July 2018 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representat ions contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify a ny agreement between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Performance Modes 1 General Purp ose 1 Max I/O 1 Selecting the right performance mode 2 Throughput Modes 3 Bursting Throughput 3 Provisione d Throughput 4 Selecting the right throughput mode 5 Conclusion 6 Contributors 6 Further Reading 7 Document Revisions 7 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Storage types can generally be divided in to three different categories : block file and object Each storage type has made its way into the enterprise and a large majority of data reside s on file storage Network shared file systems have become a critical storage platform for businesses of any size These systems are accessed by a single client or multiple (tens hundreds or thousands) concurrently so they can access and use a common data set Amazon Elastic File System (Amazon EFS) satisfies these demands and gives custom ers the flexibility to choose different performance and throughput modes that best suits their needs This paper outlines the best practices for running network shared file systems on the AWS cloud platform and offers guidance to select the right Amazon EF S performance and throughput modes for your workload This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Page 1 Introduction Amazon Elastic File System (Amazon EFS)1 provides simple scalable elastic file storage for use with AWS Cloud services and on premises resources The file systems you create using Amazon EFS are elastic growing and shrinking automatically as you add and remove data They can grow to petabytes in size distributing data across an unconstrained number of storage se rvers in multiple Availability Zones Amazon EFS supports Network File System version 4 (NFSv40 & 41) provides POSIX file system semantics and guarantees open after close semantics Amazon EFS is a regional service built on a foundation of high availability and high durability and is designed to satisfy the performance and throughput demands of a wide spectrum 
of use cases and workloads including web serving and content management enterprise applications media and entertainment processing workflows home directories database backups developer tools container storage and big data analytics EFS file systems provide customizable performance and throughput options so you can tune your file system to match the needs of your application Performance Modes Amazon EFS offers two performance modes: General Purpose and Max I /O You can select one when creating your file system There is no price difference between the modes so your file system is billed and metered the same The performance m ode can’t be changed after the file system has been created General Purpose General Purpose is the default performance mode and is recommended for the majority of uses cases and workloads It is the most commonly used performance mode and is ideal for latency sensitive applications like web serving content management systems and general file serving These file systems experience the lowest latency per file system operation and can achieve this for random or sequential IO patterns There is a limit of 7000 file system operation per second aggregated across all clients for General Purpose performance mode file systems Max I /O File systems created in Max I /O performance mode can scale to higher levels of aggregate throughput and operations per second when compared to General Purpose file systems These file systems are designed for highly parallelized applications like big data a nalytics video transcoding a nd processing and genomic analytics which can scale out to tens hundreds or thousands of Amazon EC2 instances Max I /O file systems do not have a 7000 file system operation per second limit but latency per file system operation is slightly higher when compared to General Purpose performance mode file systems This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 2 Selecting the right performance mode We recommend creating the file system in the default General Purpose performance mode and testing your workload for a period of time to test its performance We pr ovide eight Amazon CloudWatch metrics per file system to help you understand how your workload is driving the file system One of these metrics Percen tIOLimit is specific to General Purpose performance mode file systems and indicates as a percent how close you are to the 7000 file system operations per second limit If the PercentIOLimit value returned is at or near 100 percent for a significant amount of time during your test (see figure 1) we recommend you use a Max I /O performance mode file system To move to a different performance mode you migrate the data to a different file system that was created in the other performance mode You can use Amazon EFS File Sync to migrate the data For more information on Amazon EFS File Sync please refer to the Amazon EFS File Sync section of the Amazon EFS User Guide 2 There are some workloads that need to scale out to the higher I/O levels provided by Max I /O performance mode but are also latency sensitive and require the lower latency provided by General Purpose performance mode In situations like this and if the work load and Figure 1 Figure 2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon 
Elastic File System : Choosing between the different Performance & Throughput Modes Page 3 applications support it we recommend creating multiple General Purpose performance mode file systems and spread the application workload across all these file systems This would allow you to create a logical file system and shard data across multiple EFS file systems Each file system would be mounted as a subdirectory and the application can access th ese subdirectories in parallel (s ee figure 2 ) This allows latency sensitive workload s to scale to higher levels of file system operatio ns per second aggregated across multiple file systems and at the same time take advantage of the lower latencies offered by General Purpose performance mode file systems Throughput Modes The throughput mode of the file system helps determine the overal l throughput a file system is able to achieve You can select the throughput mode at any time (subject to daily limits) Changing the throughput mode is a nondisruptive operation and can be run while clients continuously access the file system You can c hoose between two throughput modes Bursting or Provisioned There are price and throughput level differences between the two modes so understand ing each one their differences and when to select one throughput mode over the other is valuable Bursting Throughput Bursting Throughput is the default mode and is recommended for a majority of uses cases and workloads Throughput sc ales as your file system grows and you are billed only for the amount of data stored on the file system in GB Month Because file based workloads are typically spiky – driving high levels of throughput for short periods of time and low levels of throughput the rest of the time – file systems using Bursting Throughput mode allow for high throug hput levels for a limited period of time All Bursting T hroughput mode file systems regardless of size can burst up to 100 MiB/s Throughput also scales as the file s ystem grows and will scale at the bursting throughput rate of 100 MiB/s per TiB of data stored subject to regional default file system throughput limits These bursting throughput numbers can be achieved when the file system has a positive burst credit balance You can monitor and alert on your file system’s burst credit balance using the BurstCreditBalance file system metric in Am azon CloudWatch File systems earn burst credits at the baseline throughput rate of 50 MiB/s per TiB of data stored and can accumulate burst credits up to the maximum size of 21 TiB per TiB of data stored This allows larger file systems to accumulate and store more burst credits which allows them to burst for longer periods of time If the file system’s burst credit balance is ever depleted the permitted throughput becomes the baseline throughput Permitted throughput is the maximum amount of throughput a file system is allowed and this value is available as an Amazon CloudWatch metric This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 4 Provisioned Throughp ut Provisioned T hroughput is available for applications that require a higher throughput to storage ratio than those allowed by Bursting Throughput mod e In this mode you can provision the file system’s throughput independent of the amount of data stored in the file system This allows you to optimize your file system’s throughput 
performance to match your application’s needs and your application can dr ive up to the provisioned throughput continuously This concept of provisioned performance is similar to features offered by other AWS services like provisioned IOPS for Amazon Elastic Block Store PIOPS (io1) volumes and provisioned throughput with read a nd write capacity units for Amazon DynamoDB As with these services you are billed separately for the performance or throughput you provision and the storage you use eg two billing dimensions When file systems are running in Provisioned Throughput mod e you are billed for the storage you use in GB Month and for the throughput provisioned in M iB/sMonth The storage charge for both Bursting and Provisioned Throughput modes includes the baseline throughput of the file system in the price of storage Thi s means the price of storage includes 1 MiB/s of throughput per 20 GiB of data stored so you will be billed for the throughput you provision above this limit For more information on pricing see the Amaz on EFS pricing page 3 You can increase Provisioned T hroughput as often as you need You can decrease Provisioned Throughput or switch throughput modes as long as it’s been more than 24 hours since the last decrease or throughput mode change File systems continuously earn burst credits up to the maximum burst credit balance allowed for the file system The maximum burst credit balance is 21 TiB for file systems smaller than 1 TiB or 21 TiB per TiB stored for file systems larger than 1 TiB File systems running in Provisioned Throughput mode still earn burst credits They earn at the higher of the two rates either the P rovisioned Throughput rate or the baseline Bursting Throughput rate of 50 MiB/s per TiB of storage You could find yourself in t he situation where your file system is running in Provisioned Throughput mode and over time the size of it grows so that its provisioned throughput is less than the baseline throughput it is entitled to had the file system been in Bursting Throughput mode In a case like this you will be entitled to the higher throughput of the two modes including the burst throughput of Bursting Throughput mode and you will not be billed for throughput above the storage price For example you set the provisioned throug hput of your 1 TiB file system to 200 MiB/s Over time the file system grows to 5 TiB A file system in Bursting Throughput mode would be entitled to a baseline throughput of 50 MiB/s per TiB of data stored and a burst throughput of 100 MiB/s per TiB of da ta stored Though your file system is still running in Provisioned Throughput mode its entitled to a baseline throughput of 250 MiB/s and a bu rst throughput of 500 MiB/s and will only incur a storage charge for a 5 TiB file system For information on maxi mum provisioned throughput limits please refer to the Amazon EFS Limits section of the Amazon EFS User Guide 4 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 5 Selecting the right throughput mode We recommend running file systems in Bursting Throughput mode because it offers a simple and scalable experience that provides the right ratio of throughput to storage capacity for most workloads There are times when a file system needs a higher throughput to storage capacity ratio than what is offered by Bursting Throughput mode Knowing the throughput demands 
of your application or monitoring key indicators are two important ways in determining when you’ll need these higher levels of throughput We recommend using Amazon CloudWatch to monitor how your file system is performing One of these metrics BurstCreditBalance is a key performance indicator that will help determine if your file system is better suited for Provisioned Throughput mode If this value is zero or steadily decreasing over a period of normal operations (see figure 3 ) your file system is consuming more burst credits than it is earn ing This means your workload requires a throughput to storage capacity ratio greater than what is allowed by Bursting Throughput mode If this occurs we recommend provisioning throughput for your file system This can be done by modifying the file syste m to change the throughput mode using the AWS Management Console AWS CLIs AWS SDKs or EFS APIs When choosing to run in Provisioned Throughput mode you must also indicate the amount of throughput you want to provision for your file system To help dete rmine how much throughput to provision we recommend monitoring another key performance indicator available from Amazon CloudWatch TotalIOBytes This metric gives you throughput in terms of the total numbers of bytes (data read data write and metadata) for each file system operation during a selected period To calculate the average throughput in MiB/s for a period convert the Sum statistic to MiB (Sum of TotalIOBytes x 1048576) and divide by the number of seconds in the period Use Metric Math expres sions in Amazon CloudWatch to make it even easier to see throughput in MiB/s For more information on using Metric Math see Using Metric Math with Amazon EFS in the Amaz on EFS User Guide 5 Calculate this during the same period when your BurstCreditBalance metric was continuously decreasing This will give you the average throughput you were Figure 3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 6 achieving during this period and is a good starting point when choosing the amount of throughput to provision If your file system is running in Provisioned Throughput mode and you experience no performance issues while your BurstCreditBalance continuously increases for long periods of normal operations then consider decreasing the amount of provisioned throughput to reduce costs To help determine how much throughput to provision we also recommend monitoring the Amazon CloudWatch metric TotalIOBytes Calculate this during the same period when your BurstCredi tBalance metric was continuously increasing This will give you the average throughput you were achieving during this period and is a good starting point when choosing the amount of throughput to provision Remember you can increase the amount of provisio ned throughput as often as you need but you can only decrease the amount of provisioned throughput or switch thro ughput modes as long as it’s been more than 24 hours since the last decrease or throughput mode change If you’re planning on migrating large a mounts of data into your file system you may also want to consider switching to Provisioned Throughput mode and provision a higher throughput beyond your allotted burst capability to accelerate loading data Following the migration you may decide to lowe r the amount of provisioned throughput or switch to Bursting Throughput mode for 
normal operations Monitor the average total throughput of the file system using the TotalIOBytes metric in Amazon CloudWatch Use Metric Math expressions in Amazon CloudWatch to make it even easier to see throughput in MiB/s Compare the average throughput you’re driving the file system to the PermittedThroughput metric If the calculated average throughput you’re driving the file system is less than the permitted throughput then consider making a throughput change to lower costs If the calculated average throughput during normal operations is at or below the baseline throughput to storage capacity ratio of Bursting Throughput mode (50 MiB/s per TiB of data stored) then cons ider switching to Bursting Throughput mode If the calculated average throughput during normal operations is above this ratio then consider lowering the amount of provisioned throughput to some level in between your current provisioned throughput and the calculated average throughput during normal operations Remember you can switch throughput modes or decrease the amou nt of provisioned throughput as long as it’s been more than 24 hours since the last decrease or throughput mode change Conclusion Amazon EFS gives you the flexibility to choose different performance and throughput modes to customize your file system to meet the needs for a wide spectrum of workloads Knowing the performance and throughput demands of your appl ication and monitoring key performance indicators will help you select the right performance and throughput mode to satisfy your file system’s needs Contributors The following individuals and organizations contributed to this document: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Elastic File System : Choosing between the different Performance & Throughput Modes Page 7 Darryl S Osborne solutions architect Amazon File Services Further Reading For additional information see the following : Amazon EFS User Guide6 Document Revisions Date Description July 2018 First publication 1 https://awsamazoncom/efs/ 2 https://docsawsamazoncom/efs/latest/ug/get started filesynchtml 3 https://awsamazon com/efs/pricing/ 4 https://docsawsamazoncom/efs/latest/ug/limitshtml 5 https://docsawsamazon com/efs/latest/ug/monitoring metric mathhtml 6 https://docsawsamazoncom/efs/latest/ug/whatisefshtml
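To make the throughput comparison above concrete: TotalIOBytes is reported in bytes, so the average throughput in MiB/s for a period is the Sum statistic divided by 1,048,576 and by the number of seconds in that period. The boto3 sketch below performs that calculation against CloudWatch; the file system ID, Region, and one-day window are placeholders, and it assumes the TotalIOBytes and PermittedThroughput metrics in the AWS/EFS namespace.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

file_system_id = "fs-12345678"  # placeholder file system ID
period_seconds = 3600           # one-hour buckets
end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

def efs_metric(metric_name, statistic):
    """Fetch one EFS metric for the file system over the chosen window."""
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EFS",
        MetricName=metric_name,
        Dimensions=[{"Name": "FileSystemId", "Value": file_system_id}],
        StartTime=start,
        EndTime=end,
        Period=period_seconds,
        Statistics=[statistic],
    )
    return sorted(response["Datapoints"], key=lambda point: point["Timestamp"])

# Average driven throughput in MiB/s: Sum of TotalIOBytes / 1,048,576 / seconds in the period.
for point in efs_metric("TotalIOBytes", "Sum"):
    driven_mibps = point["Sum"] / 1048576 / period_seconds
    print(point["Timestamp"], f"average driven throughput: {driven_mibps:.2f} MiB/s")

# PermittedThroughput is reported in bytes per second; convert it to MiB/s for comparison.
for point in efs_metric("PermittedThroughput", "Average"):
    permitted_mibps = point["Average"] / 1048576
    print(point["Timestamp"], f"permitted throughput: {permitted_mibps:.2f} MiB/s")
```

If the calculated average stays well below the permitted throughput during normal operations, that supports lowering the amount of provisioned throughput or switching to Bursting Throughput mode, as discussed above.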
|
General
|
consultant
|
Best Practices
|
Amazon_Virtual_Private_Cloud_Network_Connectivity_Options
|
Amazon Virtual Private Cloud Connectivity Options January 2018 © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the info rmation in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditio ns or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its custom ers Contents Introduction 1 Network toAmazon VPC Connectivity Options 2 AWS Managed VPN 4 AWS Direct Connect 6 AWS Direct Connect + VPN 8 AWS VPN CloudHub 10 Software VPN 11 Transit VPC 13 Amazon VPC toAmazon VPC Connectivity Options 14 VPC Peering 16 Software VPN 17 Software toAWS Managed VPN 19 AWS Managed VPN 20 AWS Direct Connect 22 AWS PrivateLink 25 Internal User toAmazon VPC Connectivity Options 26 Software Remote Acce ss VPN 27 Conclusion 29 Appendix A: High Level HA Architecture for Software VPN Instances 30 VPN Monitoring 31 Contributors 31 Document Revisions 32 Abstract Amazon Virtual Private Cloud (Amazon VPC) lets customers provision a private isolated section of the Amazon Web Services (AWS) Cloud where they can launch AWS resources in a virtual network using customer defined IP address ranges Amazon VPC provides customers with several options for connecting their AWS virtual networks with other remote networks This document describes several common network connectivity op tions available to our customers These include connectivity options for integrating remote customer networks with Amazon VPC and connecting multiple Amazon VPCs into a contiguous virtual network This whitepaper is intended for corporate network architect s and engineers or Amazon VPC administrators who would like to review the available connectivity options It provides an overview of the various options to facilitate network connectivity discussions as well as pointers to additional documentation and reso urces with more detailed information or examples Amazon Web Services – Amazon VPC Connectivity Options Page 1 Introduction Amazon VPC provides multiple network connectivity options for you to leverage depending on your current network designs and requirements These connectivity options inc lude leveraging either the internet or an AWS Direct Connect connection as the network backbone and terminating the connection into either AWS or user managed network endpoints Additionally with AWS you can choose how network routing is delivered betwee n Amazon VPC and your networks leveraging either AWS or user managed network equipment and routes This whitepaper considers the following options with an overview and a high level comparison of each: User Network –to–Amazon VPC Connectivity Options • AWS Managed VPN – Describes establishing a VPN connection from your network equipment on a remote network to AWS managed network equipment attached to your Amazon VPC • AWS Direct Connect – Describes establishing a private logical connection from your remote network to Amazon VPC leveraging AWS Direct Connect • AWS 
Direct Connect + VPN – Describes establishing a private encrypted connection from your remote network to Amazon VPC leveraging AWS Direct Connect • AWS VPN CloudHub – Describes establishing a hub andspoke model for connecting remote branch offices • Software VPN – Describes establishing a VPN connection from your equipment on a remote network to a user managed software VPN appliance running inside an Amazon VPC • Transit VPC – Describes establishing a global transit network on AWS using Software VPN in conjunction with AWS managed VPN Amazon VPC –to–Amazon VPC Connectivity Options • VPC Peering – Describes the AWS recommended approach for connecting multiple Amazon VPCs within and across regions using the Amazon VPC peering feature Amazon Web Services – Amazon VPC Connectivity Options Page 2 • Software VPN – Describes connecting multiple Amazon VPCs using VPN connections established between user managed software VPN appliances running inside of each Amazon VPC • Software toAWS Managed VPN – Describes connecting multiple Amazon VPCs with a VPN connection established between a user managed software VPN appliance in one Amazon VPC and AWS managed network equipment attached to the other Amazon VPC • AWS Managed VPN – Describes connecting multiple Amazon VPCs leveraging multiple VPN connections between your remote network and each of your Amazon VPCs • AWS Direct Connect – Describes connecting multiple Amazon VPCs leveraging logical connections on customer managed AWS Direct Connect routers • AWS PrivateLink – Describes connecting multiple Amazon VPCs leveraging VPC interface endpoints and VPC endpoint services Internal User toAmazon VPC Connectivity Options • Software Remote Access VPN – In addition to customer network –to– Amazon VPC connectivity options for connecting remote users to VPC resources this section describes leveraging a remote access solution for providing end user VP N access into an Amazon VPC Network toAmazon VPC Connectivity Options This section provides design patterns for you to connect remote networks with your Amazon VPC environment These options are useful for integrating AWS resources with your existing on site services (for example monitoring authentication security data or other systems) by extending your internal networks into the AWS Cloud This network extension also allows your internal users to seamlessly connect to resources hosted on AWS just like any other internally facing resource VPC connectivity to remote customer networks is best achieved when using nonoverlapping IP ranges for each network being connected For example if Amazon Web Services – Amazon VPC Connectivity Options Page 3 you’d like to connect one or more VPCs to your home network make sure they are configured with unique Classless Inter Domain Routing (CIDR) ranges We advise allocating a single contiguous non overlapping CIDR block to be used by each VPC For additional information about Amazon VPC routing and constraints see the Amazon VPC Frequently Asked Questions 1 Option Use Case Advantages Limitations AWS Managed VPN AWS managed IPsec VPN connection over the internet Reuse existing VPN equipment and processes Reuse existing internet connections AWS managed endpoint includes multi data center redundancy and automated failover Supports static routes or dynamic Border Gateway Protocol (BGP) peering and routing policies Network latency variability and availability are dependent on internet conditions Customer managed endpoint is responsible for implementing redundancy and failover (if required) 
Customer device must support single hop BGP (when leveraging BGP for dynamic routing) AWS Direct Connect Dedicated network connection over private lines More predictable network performance Reduced bandwidth costs 1 or 10 Gbps provisioned connections Supports BGP peering and routing policies May require additional telecom and hosting provider relationships or new network circuits to be provisioned AWS Direct Connect + VPN IPsec VPN connection over private lines Same as the previous option with the addition of a secure IPsec VPN connection Same as the previous option with a little additional VPN complexity Amazon Web Services – Amazon VPC Connectivity Options Page 4 Option Use Case Advantages Limitations AWS VPN CloudHub Connect remote branch offices in a hubandspoke model for primary or backup connectivity Reuse existing internet connections and AWS VPN connections (for example use AWS VPN CloudHub as backup connectivity to a third party MPLS network) AWS managed virtual private gateway includes multi data center redundancy and automated failover Supports BGP for exchanging routes and routing priorities (for example prefer MPLS connections over backup AWS VPN connections) Network latency variability and availability are dependent on the internet User managed branch office endpoints are responsible for implementing redundancy and failover (if required) Software VPN Software appliance based VPN connection over the internet Supports a wider array of VPN vendors products and protocols Fully customer managed solution Customer is responsible for implementing HA (high availability) solutions for all VPN endpoints (if required) Transit VPC Software appliance based VPN connection with hub VPC AWS managed IPsec VPN connection for spoke VPC connection Same as the previous option with the addition of AWS managed VPN connection between hub and spoke VPCs Same as the previous section AWS Managed VPN Amazon VPC provides the option of creating an IPsec VPN connection between remote customer networks and their Amazon VPC over the internet as shown in the following figure Consider taking this approach when you want to take advantage of an AWS managed VPN endpoint that includes automated multi – data center redundancy and failover built into the AWS side of the VPN connection Although not shown the Amazon virtual private gateway represents two distinct VPN endpoints physically located in separate data centers to increase the availability of your VPN connection Amazon Web Services – Amazon VPC Connectivity Options Page 5 AWS managed VPN The virtual private gateway also supports and encourages multiple user gateway connections so you can implement redundancy and failover on your side of the VPN connection as shown in the following figure Both dynamic and static routing options are provided to give you flexibility in your routing configuration Dynamic routing uses BGP peering to exchange routing information between AWS and these remote endpoints With dynamic routing you can also specify routing priorities policies and weights (metrics) in your BGP advertisements and influence the network path between your networks and AWS It is important to note that when you use BGP both the IPSec and the BGP connections must be terminated on the same user gateway device so it must be capable of terminating both IPSec and BGP connections Amazon Web Services – Amazon VPC Connectivity Options Page 6 Redundant AWS managed VPN connections Additional Resources • Adding a Virtual Private Gateway to Your VPC2 • Customer Gateway 
device minimum requirements3 • Customer Gateway devices known to work with Amazon VPC4 AWS Direct Connect AWS Direct Connect makes it easy to establish a dedicated connection from an onpremises network to Amazon VPC Using AWS Direct Connect you can establish private connectivity between AWS and your data center office or colocation environment This private connection can reduce network costs increa se bandwidth throughput and provide a more consistent network experience than internet based connections AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations It uses industry standard VLANs to access Amazon Web Services – Amazon VPC Connectivity Options Page 7 Amazon Elastic Compute Cloud (Amazon EC2) instances running within an Amazon VPC using private IP addresses You can choose from an ecosystem of WAN service providers fo r integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks The following figure illustrates this pattern You can also work with your provider to create sub 1G connection or use link aggregation group (LAG) to aggregate multiple 1 gigabit or 10 gigabit connections at a single AWS Direct Connect endpoint allowing you to treat them as a single managed connection AWS Direct Connect AWS Direct Connect allows you to connect your AWS Direct Connect connection to one or more VPCs in your account that are located in the same or different regions You can use Direct Connect gateway to achieve this A Direct Connect gateway is a globally available resource You can create the Direct Connect gateway in any public r egion and access it from all other public regions This feature also allows you to connect to any of the participating VPCs from any Direct Connect location further reducing your costs for using AWS services on a cross region basis The following figure illustrates this pattern Amazon Web Services – Amazon VPC Connectivity Options Page 8 AWS Direct Connect Gateway Additional Resources • AWS Direct Connect product page5 • AWS Direct Connect locations6 • AWS Direct Connect FAQs • AWS Direct Connect LAGs • AWS Direct Connect Gateway s7 • Getting Started with AWS Direct Connect8 AWS Direct Connect + VPN With AWS Direct Connect + VPN you can combine one or more AWS Direct Connect dedicated network connections with the Amazon VPC VPN This combination provides an IPsec encrypted private connection that also reduces network costs increases bandwidth throughput and provides a more consistent network experience than internet based VPN connections You can use AWS Direct Connect to establish a dedicated network connection between your network create a logical connection to public AWS resources such as an Amazon virt ual private gateway IPsec endpoint This solution combines the AWS managed benefits of the VPN solution with low latency increased Amazon Web Services – Amazon VPC Connectivity Options Page 9 bandwidth more consistent benefits of the AWS Direct Connect solution and an end toend secure IPsec conne ction The following figure illustrates this option AWS Direct Connect and VPN Additional Resources • AWS Direct Connect product page9 • AWS Direct Connect FAQs10 • Adding a Virtual Private Gateway to Your VPC11 EC2 Instances AWS Public Direct Connect Customer WAN Availability Zone AWS Direct Virtual VPN Private EC2 Instances VPC Subnet 2 Remote Servers Amazon VPC Amazon Web Services – Amazon VPC Connectivity Options 
Page 10 AWS VPN CloudHub Building on the AWS managed VPN and AWS Direct Connect options described previously you can securely communicate from one site to another using the AWS VPN CloudHub The AWS VPN CloudHub operates on a simple hub and spoke mo del that you can use with or without a VPC Use this design if you have multiple branch offices and existing internet connections and would like to implement a convenient potentially low cost hub andspoke model for primary or backup connectivity between these remote offices The following figure depicts the AWS VPN CloudHub architecture with blue dashed lines indicating network traffic between remote sites being routed over their AWS VPN connections AWS VPN CloudHub AWS VPN CloudHub leverages an Amazon VPC virtual private gateway with multiple gateways each using unique BGP autonomous system numbers (ASNs) Your gateways advertise the appropriate routes (BGP prefixes) over their VPN connections These routing advertisements are received and Customer Customer Network EC2 Instances Customer Availability Zone Virtual Customer Network EC2 Instances VPC Subnet 2 Customer Customer Network Amazon Web Services – Amazon VPC Connectivity Options Page 11 readvertised to each BGP peer so that each site can send dat a to and receive data from the other sites The remote network prefixes for each spoke must have unique ASNs and the sites must not have overlapping IP ranges Each site can also send and receive data from the VPC as if they were using a standard VPN conn ection This option can be combined with AWS Direct Connect or other VPN options (for example multiple gateways per site for redundancy or backbone routing that you provide) depending on your requirements Additional Resources • AWS VPN CloudHub12 • Amazon VPC VPN Guide13 • Customer Gateway device minimum requirements14 • Customer Gateway device s known to work with Amazon VPC15 • AWS Direct Connect product page16 Software VPN Amazon VPC offers you the flexibility to fully manage both sides of your Amazon VPC connectivity by creating a VPN connection between your remote network and a software VPN appliance running in your Amazon VPC network This option is recommended if you must manage both ends of the VPN connection either for compliance purposes or for leveraging gateway devices that are not currently supported by Amazon VPC’s VPN solution The following figure shows this option Amazon Web Services – Amazon VPC Connectivity Options Page 12 Software VPN You can choose from an ecosystem of multiple partners and open source communities that have produced software VPN appliances that run on Amazon EC2 These include products from well known security companies like Check Point Astaro OpenVPN Technologies and Microsoft as well as popular open source tools like OpenVPN Openswan and IPsec Tools Along with this choice comes the responsib ility for you to manage the software appliance including configuration patches and upgrades Note that this design introduces a potential single point of failure into the network design because the software VPN appliance runs on a single Amazon EC2 ins tance For additional information see Appendix A: High Level HA Architecture for Software VPN Instances Additional Resources • VPN Appliances from the AWS Marketplace17 • Tech Brief Connecting Cisco ASA to VPC EC2 Instance (IPSec)18 Clients Clients Software VPN Appliance Availability Zone internet VPC Router VPN Customer VPN internet EC2 Instances Remote VPC Subnet 2 Servers Amazon Web Services – Amazon VPC Connectivity Options Page 
13 • Tech Brief Connecting Multiple VPCs with EC2 Instances (IPSec)19 • Tech Brief Connecting Multiple VPCs with EC2 Instances (SSL)20 Transit VPC Building on the Software VPN design mentioned above you can create a global tran sit network on AWS A transit VPC is a common strategy for connecting multiple geographically disperse VPCs and remote networks in order to create a global network transit center A transit VPC simplifies network management and minimizes the number of con nections required to connect multiple VPCs and remote networks The following figure illustrates this design Software VPN and Transit VPC Amazon Web Services – Amazon VPC Connectivity Options Page 14 Along with providing direct network routing between VPCs and on premises networks this design also enables the transit VPC to implement more complex routing rules such as network address translation between overlapping network ranges or to add additional network level packet filtering or inspection The transit VPC design can be used to su pport important use cases like private networking shared connectivity and cross account AWS usage Additional Resources • Tech Brief Global Transit Network • Solution Transit VPC Amazon VPC toAmazon VPC Connectivity Options Use these design patterns when you want to integrate multiple Amazon VPCs into a larger virtual network This is useful if you require multiple VPCs due to security billing presence in multiple regions or internal charge back requirements to more easily integrate AWS resources between Amazon VPCs You can also combine these patterns with the Network –to–Amazon VPC Connectivity Options for creating a corporate network that spans remote networks and multiple VPCs VPC connectivity between VPCs is best achieved when using non overlapping IP ranges for each VPC being connected For example if you’d like to connect multiple VPCs make sure each VPC is configured with unique Classless Inter Domain Routing (CIDR) ranges Therefore we advise you to allocate a single contiguous non overlapping CIDR block to be used by each VPC For additional information about Amazon VPC routing and constraints see the Amazon VPC Frequently Asked Questions 21 Amazon Web Services – Amazon VPC Connectivity Options Page 15 Option Use Case Advantages Limitations VPC Peering AWS provided network connectivity between two VPCs Leverages AWS networking infrastructure Does not rely on VPN instances or a separate piece of physical hardware No single point of failure No bandwidth bottleneck VPC peering does not support transitive peering relationships Software VPN Software appliance based VPN connections between VPCs Leverages AWS networking equipment in region and internet pipes between regions Supports a wider array of VPN vendors products and protocols Managed entirely by you You are responsible for implementing HA solutions for all VPN endpoints (if required) VPN instances could become a network bottleneck Software to AWS managed VPN Software appliance to VPN connection between VPCs Leverages AWS networking equipment in region and internet pipes between regions AWS managed endpoint includes multi data center redundancy and automated failover You are responsible for implementing HA solutions for the software appliance VPN endpoints (if required) VPN instances could become a network bottleneck AWS managed VPN VPCtoVPC routing managed by you over IPsec VPN connections using your equipment and the internet Reuse existing Amazon VPC VPN connections AWS managed endpoint includes multi data center redundancy and 
automated failover Supports static routes and dynamic BGP peering and routing policies Network latency variability and availability depend on internet conditions The endpoint you manage is responsible for implementing red undancy and failover (if required) AWS Direct Connect VPCtoVPC routing managed by you using your equipment in an AWS Direct Connect location and private lines Consistent network performance Reduced bandwidth costs 1 or 10 Gbps provisioned connections Supports static routes and BGP peering and routing policies May require additional telecom and hosting provider relationships Amazon Web Services – Amazon VPC Connectivity Options Page 16 Option Use Case Advantages Limitations AWS PrivateLink AWS provided network connectivity between two VPCs using interface endpoints Leverages AWS networking infrastructure No single point of failure VPC Endpoint services only available in AWS region in which they are created VPC Peering A VPC peering connection is a networking connection between two VPCs that enables routing using each VPC’s private IP addresses as if they were in the same network This is the AWS recommended method for connecting VPCs VPC peering connections can be created between your own VPCs or with a VPC in another AWS account VPC peering also supports inter region peering Traffic using inter region VPC Peering always stays o n the global AWS backbone and never traverses the public internet thereby reducing threat vectors such as common exploits and DDoS attacks VPC toVPC peering Amazon Web Services – Amazon VPC Connectivity Options Page 17 AWS uses the existing infrastructure of a VPC to create VPC peering connections These connections are neither a gateway nor a VPN connection and do not rely on a separate piece of physical hardware Therefore they do not introduce a potential single point of failure or network bandwidth bottleneck between VPCs Additiona lly VPC routing tables security groups and network access control lists can be leveraged to control which subnets or instances are able to utilize the VPC peering connection A VPC peering connection can help you to facilitate the transfer of data betw een VPCs You can use them to connect VPCs when you have more than one AWS account to connect a management or shared services VPC to application or customer specific VPCs or to connect seamlessly with a partner’s VPC For more examples of scenarios in w hich you can use a VPC peering connection see the Amazon VPC Peering Guide22 Additional Resources • Amazon VPC User Guide23 • Amazon VPC Peering Guide24 Software VPN Amazon VPC provides network routing flexibility This includes the ability to create secure VPN tunnels between two or more software VPN appliances to connect multiple VPCs into a larger virtual private network so that instances in each VPC can seamlessly connect to each other using private IP addresses This option is recommended when you want to connect VPCs across multiple AWS Regions and manage both ends of the VPN connection using your preferred VPN software provider This option uses an internet gatew ay attached to each VPC to facilitate communication between the software VPN appliances Amazon Web Services – Amazon VPC Connectivity Options Page 18 Inter region VPC toVPC routing You can choose from an ecosystem of multiple partners and open source communities that have produced software VPN appliances that run on Amazon EC2 These include products from well known security companies like Check Point Sophos OpenVPN Technologies and Microsoft as well as popular 
open source tools like OpenVPN Openswan and IPsec Tools Along wit h this choice comes the responsibility for you to manage the software appliance including configuration patches and upgrades Note that this design introduces a potential single point of failure into the network design as the software VPN appliance runs on a single Amazon EC2 instance For additional information see Appendix A: High Level HA Architecture for Software VPN Instances Additional Resources • VPN Appliances from the AWS Marketplace25 • Tech Brief Connecting Multiple VPCs with EC2 Instances (IPSec)26 • Tech Brief Connecting Multiple VPCs with EC2 Instances (SSL)27 Amazon Web Services – Amazon VPC Connectivity Options Page 19 Software toAWS Managed VPN Amazon VPC provides the flexibility to combine the AWS managed VPN and software VPN options to connect multiple VPCs With this design you can create secure VPN tunnels between a software VPN appliance and a virtual private gateway to connect multiple VPCs into a larger virtual private network allowing instances in each VPC to seamlessly connect to each other using private IP addresses This option is recommended when you want to connect VPCs across multiple AWS regions and would like to take advantage of the AWS managed VPN endpoint including automated multi data center redundancy and failover built into the virtual private gateway side of the VPN connection This option uses a virtual private gateway in one Amazon VPC and a combination of an internet gateway and software VPN appliance in anot her Amazon VPC as shown in the following figure Intra region VPC toVPC routing Amazon Web Services – Amazon VPC Connectivity Options Page 20 Note that this design introduces a potential single point of failure into the network design as the software VPN appliance runs on a single Amazon EC2 instance For additional information see Appendix A: HighLevel HA Architecture for Software VPN Instances Additional Resources • Tech Brief Connecting Multiple VPCs with Sophos Security Gateway28 • Configuring Windows Server 2008 R2 as a Customer Gateway for Amazon Virtua l Private Cloud29 AWS Managed VPN Amazon VPC provides the option of creating an IPsec VPN to connect your remote networks with your Amazon VPCs over the internet You can take advantage of multiple VPN connections to route traffic between your Amazon VPCs as shown in the following figure Amazon Web Services – Amazon VPC Connectivity Options Page 21 Routing traffic between VPCs Amazon Web Services – Amazon VPC Connectivity Options Page 22 We recommend this approach when you want to take advantage of AWS managed VPN endpoints including the automated multi data center redundancy and failover built into the AWS side of each VPN connection Although not shown the Amazon virtual private gateway represents two distinct VPN endpoints physically located in sepa rate data centers to increase the availability of each VPN connection Amazon virtual private gateway also supports multiple customer gateway connections (as described in the Customer Network –to–Amazon VPC Options and AWS man aged VPN sections and shown in the figure Redundant AWS managed VPN connections) allowing you to implement redundancy and failover on your side of the VPN connection This solution can also leverage BGP peering to exchange routing information between AWS and these remote endpoints You can specify routing priorities policies and weights (metrics) in your BGP advertisements to influence the network path traffic will take to and from your networks and AWS This 
approach is suboptimal from a routing perspe ctive since the traffic must traverse the internet to get to and from your network but it gives you a lot of flexibility for controlling and managing routing on your local and remote networks and the potential ability to reuse VPN connections Additiona l Resources • Amazon VPC Users Guide30 • Customer Gateway device minimum requirements31 • Customer Gateway devices known to work with Amazon VPC32 • Tech Brief Connecting a Single Router to Multiple VPCs33 AWS Direct Connect AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to your Amazon VPC or among Amazon VPCs This option can potentially reduce network costs increase bandwidth throughput and provide a more consistent network experience than the other VPC toVPC connectivity options Amazon Web Services – Amazon VPC Connectivity Options Page 23 You can divide a physical AWS Direct Connect connection into multiple logical connections one for each VPC You can then use these logical connections for routing traffic between VPCs as shown in the following figure In addition to intra region routing you can connect AWS Direct Connect locations in other regions using your existing WAN providers and leverage AWS Direct C onnect to route traffic between regions over your WAN backbone network Amazon Web Services – Amazon VPC Connectivity Options Page 24 Intra region VPC toVPC routing with AWS Direct Connect Amazon Web Services – Amazon VPC Connectivity Options Page 25 We recommend this approach if you’re already an AWS Direct Connect customer or would like to take advantage of AWS Direct Connect’s reduced network costs increased bandwidth throughput and more consistent network experience AWS Direct Connect can provide very efficient routing since traffic can take advantage of 1 G bps or 10 G bps fiber connecti ons physically attached to the AWS network in each region Additionally this service gives you the most flexibility for controlling and managing routing on your local and remote networks as well as the potential ability to reuse AWS Direct Connect connec tions Additional Resources • AWS Direct Connect product page34 • AWS Direct Connect locations35 • AWS Direct Connect FAQs36 • Get Started with AWS Direct Connect37 AWS PrivateLink An interface VPC endpoint (AWS PrivateLink) enables you to connect to services powered by AWS PrivateLink These services include some AWS services services hosted by other AWS accounts (referred to as endpoint services ) and supported AWS Marketplace par tner services The interface endpoints are created directly inside of your VPC using elastic network interfaces and IP addresses in your VPC’s subnets The service is now in your VPC enabling connectivity to AWS services or AWS PrivateLink powered servic e via private IP addresses That means that VPC Security Groups can be used to manage access to the endpoints Also interface endpoint s can be accessed from your premises via AWS Direct Connect In the following diagram the account owner of VPC B is a s ervice provider and account owner of VPC A is service consumer Amazon Web Services – Amazon VPC Connectivity Options Page 26 VPC toVPC routing with AWS PrivateLink We recommend this approach if you want to use service s offered by another VPC securely over private connection You can create an interface endpoint to keep all traffic within AWS network Additional Resources • Interf ace VPC Endpoints • VPC Endpoint Services Internal User toAmazon VPC Connectivity Options Internal user access 
to Amazon VPC resources is typically accomplished either through your network –toAmazon VPC options or the use of software remote access VPNs to connect internal users to VPC resources With the former option you can reuse your existing on premises and remote access solutions for managing end user access while still providing a seamless experience connecting to AWS hosted resources Describing on premises internal and remote access solutions in any more detail than what has been described in Amazon Web Services – Amazon VPC Connectivity Options Page 27 Customer Network –to–Amazon VPC Options is beyond the scope of this document With software remote access VPN you can leverage low cost ela stic and secure AWS services to implement remote access solutions while also providing a seamless experience connecting to AWS hosted resources In addition you can combine software remote access VPNs with your network toAmazon VPC options to provide re mote access to internal networks if desired This option is typically preferred by smaller companies with less extensive remote networks or who have not already built and deployed remote access solutions for their employees The following table outlines the advantages and limitations of these options Option Use Case Advantages Limitations Network to Amazon VPC Connectivity Options Virtual extension of your data center into AWS Leverages existing end user internal and remote access policies and technologies Requires existing end user internal and remote access implementations Software Remote Access VPN Cloud based remote access solution to Amazon VPC and/or internal networks Leverages low cost elastic and secure web services provided by AWS for implementing a remote access solution Could be redundant if internal and remote access implementations already exist Software Remote Access VPN You can choose from an ecosystem of multiple partners and open source communities that have produced remote access solutions that run on Am azon EC2 These include products from well known security companies like Check Point Sophos OpenVPN Technologies and Microsoft The following figure shows a simple remote access solution leveraging an internal remote user database Amazon Web Services – Amazon VPC Connectivity Options Page 28 Remote access solution Remote access solutions range in complexity support multiple client authentication options (including multifactor authentication) and can be integrated with either Amazon VPC or remotely hosted identity and access management soluti ons (leveraging one of the network toAmazon VPC options) like Microsoft Active Directory or other LDAP/multifactor authentication solutions The following figure shows this combination allowing the remote access server to leverage internal access manage ment solutions if desired Amazon Web Services – Amazon VPC Connectivity Options Page 29 Combination remote access solution As with the software VPN options the customer is responsible for managing the remote access software including user management configuration patches and upgrades Additionally consider that this design introduces a potential single point of failure into the network design as the remote access server runs on a single Amazon EC2 instance For additional information see Appendix A: High Level HA Architecture for Software VPN Instances Additional Resources • VPN Appliances from the AWS Marketplace38 • OpenVPN Access Server Quick Start Guide39 Conclusion AWS provides a number of efficient secure connectivity options to help you get the most out 
of AWS when integrating your remote networks with Amazon VPC Amazon Web Services – Amazon VPC Connectivity Options Page 30 The options provided in this whitepaper highlight several of the connectivity options and patterns that customers have used to successfully integrate their remote networks or multiple Amazon VPC networks You can use the information provided here to determine the most appropriate mechanism for connecting the infrastructure required to run your business regardless of where it is physically located or hosted Appendix A: High Level HA Architect ure for Software VPN Instances Creating a fully resilient VPC connection for software VPN instances requires the setup and configuration of multiple VPN instances and a monitoring instance to monitor the health of the VPN connections High level HA design We recommend configuring your VPC route tables to leverage all VPN instances simultaneously by directing traffic from all of the subnets in one Availability Amazon Web Services – Amazon VPC Connectivity Options Page 31 Zone through its respective VPN instances in the same Availabi lity Zone Each VPN instance then provides VPN connectivity for instances that share the same Availability Zone VPN Monitoring To monitor Software based VPN appliance you can create a VPN Monitor The VPN monitor is a custom instance that you will need t o run the VPN monitoring scripts This instance is intended to run and monitor the state of VPN connection and VPN instances If a VPN instance or connection goes down the monitor needs to stop terminate or restart the VPN instance while also rerouting traffic from the affected subnets to the working VPN instance until both connections are functional again Since customer requirements vary AWS does not currently provide prescriptive guidance for setting up this monitoring instance However an example s cript for enabling HA between NAT instances could be used as a starting point for creating an HA solution for Software VPN instances We recommend that you think through the necessary busin ess logic to provide notification or attempt to automatically repair network connectivity in the event of a VPN connection failure Additionally you can monitor the AWS Managed VPN tunnels using Amazon CloudWatch metrics which collects data points from the VPN service into readable near real time metrics Each VPN connection collects and publishes a variety of tunnel metrics to Amazon CloudWatch These metrics will allow you to monitor tunnel health activity and create automated actions Contributors The following individuals contributed to this document: • Garvit Singh Solutions Builder AWS Solution Architecture • Steve Morad Senior Manager Solution Builders AWS Solution Architecture • Sohaib Tahir Solutions Architect AWS Solution Architecture Amazon Web Services – Amazon VPC Connectivity Options Page 32 Document Revisions Date Description January 2018 Updated information throughout Focus on the following designs/features: transit VPC direct connect gateway and private link July 2014 First publication Notes 1 http://awsamazoncom/vpc/faqs/ 2 http://docsamazonwebservicescom/AmazonVPC/latest/UserGuide/VPC_VP Nhtml 3 https://docsawsamazoncom/vpc/latest/adminguide/Introduction html#CGRe quirements 4 https://docsawsamazoncom/vpc/latest/adminguide/Introductionhtml#Device sTested 5 http://awsamazoncom/directconnect/ 6 http://a wsamazoncom/directconnect/#details 7 http://awsamazoncom/directconnect/faqs/ 8 http://docsamazonwebservicescom/DirectConnect/latest/GettingStartedGui de/Welcomehtml 
9 http://awsamazoncom/directconnect/ 10 http://awsamazoncom/directconnect/faqs/ 11 http://docsamazonwebservicescom/AmazonVPC/latest/UserGuide/VPC_VP Nhtml 12 http://docsamazonwebservicescom/AmazonVPC/latest/UserGuide/VPN_Cl oudHubhtml 13 http://docsamazonwebservicescom/AmazonVPC/latest/UserGuide/VPC_VP Amazon Web Services – Amazon VPC Connectivity Options Page 33 Nhtml Amazon Web Services – Amazon VPC Connectivity Options Page 34 14 http://awsamazoncom/vpc/faqs/#C8 15 http://awsamazoncom/vpc/faqs/#C9 16 http://awsamazoncom/directconnect/ 17 https://awsamazoncom/marketplace/search/results/ref%3Dbrs_navgno_se arch_box?searchTerms=vpn 18 http://awsamazoncom/articles/8800869755706543 19 http://awsamazoncom/articles/5472675506466066 Although these guid es specifically address connecting multiple Amazon VPCs they are easily adaptable to support this network configuration by substituting one of the VPCs with an on premises VPN device connecting to an IPsec or SSL software VPN appliance running in an Amazo n VPC 20 https://awsamazoncom/articles/0639686206802544 21 http://awsamazoncom/vpc/faqs/ 22 http://docsawsamazoncom/AmazonVPC/latest/PeeringGuide/ 23 http://docsawsamazoncom/AmazonVPC/latest/UserGuide/vpc peeringhtml 24 http://docsawsamazoncom/AmazonVPC/latest/PeeringGuide/ 25 https:/ /awsamazoncom/marketplace/search/results/ref%3Dbrs_navgno_se arch_box?searchTerms=vpn 26 http://awsamazoncom/articles/5472675506466066 27 http://awsamazoncom/articles/0639686206802544 28 http://awsamazoncom/articles/1909971399457482 29 http://docsamazonwebservicescom/AmazonVPC/late st/UserGuide/Custom erGateway Windowshtml 30 http://d ocsamazonwebservicescom/AmazonVPC/latest/UserGuide/VPC_VP Nhtml 31 https://docsawsamazoncom/vpc/latest/adminguide/Introductionhtml#CGRe quirements Amazon Web Services – Amazon VPC Connectivity Options Page 35 32 https://docsawsamazoncom/vpc/latest/adminguide/Introductionhtml#Device sTested Amazon Web Services – Amazon VPC Connectivity Options Page 36 33 http://awsamazoncom/vpc/faqs/#C9 34 http://awsamazoncom/directconnect/ 35 http://awsamazoncom/directconnect/#details 36 http://awsamazoncom/directconnect/faqs/ 37 http://docsamazonwebservicescom/DirectConnect/latest/GettingStartedGui de/Welcomehtml 38 https://awsamazoncom/marketplace/search/results/ref%3Dbrs_navgno_se arch_box?searchTerms=vpn 39 http://doc sopenvpnnet/how totutorialsguides/virtual platforms/amazon ec2appliance amiquick start guide/
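To make the VPC peering option described in this whitepaper more concrete, here is a minimal sketch using boto3, the AWS SDK for Python, that requests a peering connection between two VPCs, accepts it, and adds a route in each VPC's route table. The VPC IDs, route table IDs, and CIDR blocks are placeholders, and the sketch assumes both VPCs are in the same account and Region; cross-account or inter-region peering uses the same calls, with the acceptance performed from the peer account or Region.

import boto3

# Placeholder identifiers -- replace with your own VPCs, route tables, and CIDRs.
REQUESTER_VPC_ID = "vpc-11111111"
ACCEPTER_VPC_ID = "vpc-22222222"
REQUESTER_ROUTE_TABLE_ID = "rtb-aaaaaaaa"
ACCEPTER_ROUTE_TABLE_ID = "rtb-bbbbbbbb"
REQUESTER_CIDR = "10.0.0.0/16"   # CIDR of the requester VPC
ACCEPTER_CIDR = "10.1.0.0/16"    # CIDR of the accepter VPC (must not overlap)

ec2 = boto3.client("ec2")

# 1. Request the peering connection (same account and Region in this sketch).
peering = ec2.create_vpc_peering_connection(
    VpcId=REQUESTER_VPC_ID,
    PeerVpcId=ACCEPTER_VPC_ID,
)["VpcPeeringConnection"]
pcx_id = peering["VpcPeeringConnectionId"]

# 2. Accept it. In a cross-account or inter-region setup, this call is made by
#    the owner of the accepter VPC from the peer account or Region.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3. Add a route in each VPC's route table pointing at the peering connection,
#    so instances can reach the other VPC's private IP range.
ec2.create_route(
    RouteTableId=REQUESTER_ROUTE_TABLE_ID,
    DestinationCidrBlock=ACCEPTER_CIDR,
    VpcPeeringConnectionId=pcx_id,
)
ec2.create_route(
    RouteTableId=ACCEPTER_ROUTE_TABLE_ID,
    DestinationCidrBlock=REQUESTER_CIDR,
    VpcPeeringConnectionId=pcx_id,
)

print(f"Peering connection {pcx_id} created and routed in both directions")

As the whitepaper notes, security groups and network ACLs still determine which subnets or instances can actually communicate over the peering connection.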
|
General
|
consultant
|
Best Practices
|
Amdocs_Optima_Digital_Customer_Management_and_Commerce_Platform_in_the_AWS_Cloud
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmdocs Digital Brand Experience Platform in AWS Cloud First Published February 2018 Updated November 18 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlContents Introduction 1 BSS applications are mission critical workloads 2 Amdocs BSS portfolio 3 Amdocs Digital Brand Experience Suite overview 3 Functional capabilities 4 Functional architecture 8 Data management 11 Digital Brand Experience Suite deployment architecture 13 Technical architecture 13 Digital Brand Experience Suite SaaS model 19 AWS Well Architected Framework 21 Conclusion 24 Contributors 24 Further reading 25 Document revisions 25 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAbstract Amdocs Digital Brand Experience Suite is a digital customer management and commerce platform designed to rapidly and securely monetize any product or service Serving innovative communications operators utilities and other subscription based service providers Digital Brand Experience Suite ’s open platform has been available onpremises but is now also available on the AWS Cloud This whitepaper provides an architectural overview of how the Digital Brand Experience Suite business support systems (BSS) solution operates on the AWS Cloud The document is written for executive s architect s and development teams that want to deploy a business support solution for their consumer or enterprise business on the AWS Cloud This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 1 Introduction Amdocs provides the Amdocs Digital Brand Experience Suite: a digital customer management commerce and monetization software as a service ( SaaS ) solution designed specifically for the needs of digital brands and other small service providers who need to provide digital experience to their customers while being agile innovative and with rapid time to market The Amdocs solution helps these commun 
ications service provider s (CSPs) to focus on their business by simplifying their business support through prebuilt packages of business and technical processes spanning the full customer lifecycle: care commerce ordering and monetization Provided as a service the solution is ready to support simple models with minimal time to market including integrations to key external partners and an extensive set of application programming interface s (APIs ) More complex business models can be configured in the s ystem and integrations within bespoke ecosystems are supported through the open API architecture The enterprise market in particular involves unique challenges that require an industry proven solution Service providers focusing on the enterprise and sma ll and medium sized enterprise (SME) business segments can deliver a significant increase in revenue and market share However when trying to perform an enterprise business strategy many operators find they lack the required capability to support the continuous demand for their corporate services They find that their BSS platforms lack business flexibility and operational efficiency and are not cost effective Key challenges include : underperforming systems the high cost of managing legacy operation s and maintaining regulatory compliance Many companies need to adopt a pan Regional architecture to onboard additional countries Regions customer verticals and products This situation demands a significant change in both revenue and customer manageme nt systems as well as in the IT environment This whitepaper provides an overview of the Amdocs Digital Brand Experience platform and a reference architecture for deploying Amdocs on AWS This whitepaper also discusses the benefits of running the platform on AWS and various use cases By running Amdocs Digital Brand Experience on the AWS Cloud and especially delivered as SaaS the Amdocs platform can deliver significant required improvements to the operations and capabilities of customers in every indust ry while enabling future growth and expansion to new domains Customers can also benefit from the compliance and security credentials of the AWS Cloud instead of incurring an ongoing cost of audits related to storing customer data This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 2 BSS applications are mi ssion critical workloads BSS are the backbone of a service provider’s customer facing strategy BSS encompasses the spectrum from marketing shopping ordering charging taxation invoicing payments collection dunning and ultimately financial reporting There are four primary domains : product management order management revenue management and customer management Product management Product management supports the sellable entities or catalog of a provider From conception to sale to revenue recognition this is the toolset for managing services products pricing discounts and many other attributes of the product lifecycle Order management Order management is an extension of the sales process and encompasses four areas: order decomposition order orchestration order fallout and order status management Ordering may be synchronous where service is enabled in real time Or the actual service delivery may take days with complex installation processes It is incumbent on the BSS to accurat ely and 
efficiently process ing orders avoiding fallout s while providing status both to the service provider and the customer Revenue management Revenue management focuses on the financial aspects of the business both from the customer and service provi der perspective It includes pricing charging and discounting those feeds into the invoicing process and taxing The invoice in turn feeds the accounts receivable processes —payment collection and dunning —and becomes the foundation for revenue recognition reporting ( general ledger) C onsumer billing for consumer enterprise and wholesale services as well as prepaid and postpaid models are supported in the system Revenue management also include s fraud management and revenue assurance Customer management The relationship of the service provider to their customers is of critical importance From the initial contact through self care and mobile applications shopping online and to customer care i t is important to provide the multi channel exposure of a single customer view Complex customer models are supported through robust mechanisms of customer groups Enterprises are modeled through a combination of accounts This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 3 hierarchies groups and organiza tions —providing support for real world charg ing billing and reporting responsibilities Amdocs BSS portfolio Amdocs is a software and services vendor with nearly 40 years of expertise specifically focused on the communications and media industry It’s a trusted partner to the world’s leading communications and media companies serving more than 350 service providers in more than 85 countries Amdocs’ product lines encompass digital customer experience monetization network and service automation and mor e supporting more than 17 billion digital customer journeys every day Amdocs C ES21 is a 5G native integrated BSS operations support system (OSS ) suite It is a cloud native open and modular suite that supports many of the world’s top CSPs on their dig ital and 5G journeys The Amdocs Digital Brand Experience Suite is a SaaS solution that’s specifically built for the needs of digital brands and other small service providers It is a pre integrated suite with an extensive set of built in processes and con figuration templates to simplify commerce care ordering and monetization and empowering business users through “shift left” to a truly digital experience for the BSS itself As SaaS it provides unparalleled time to market and scalability while benefi tting from Amdocs ’ robust operations and a “pay as you grow” business model Amdocs Digital Brand Experience Suite overview Amdocs Digital Brand Experience Suite provides flexibility while implementing a high level of complexity It enables customers to capitalize on digital era opportunities by growing customer’s business with an open system that seamlessly interacts with ancillary app lication s It offers the freedom to address a div erse set of product and service markets as well as a range of end customer types Encompassing a set of established and progressive BSS products Amdocs Digital Brand Experience Suite represents proven functionality under a preconfigured industry standar d integration layer This version has been archived For the latest version of this document visit: 
https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 4 Configurability smart interoperability and consistent experience • Swift onboarding of the service provider onto the platform With the SaaS solution onboarding can be done immediately Complex business models and dedicated instances of Digital Brand Experience Suite for larger service providers take slightly longer • Timetomarket for new products services and bundles occurs in minutes instead of months • Simple table driven configuration doesn’t require codin g The data model is highly flexible without requiring software changes • Provides s upport for multiple lines of business Within a single instance or tenant Amdocs Digital Brand Experience Suite supports any number of li nes of business (mobile fixed line broadband cable finance and utilities) and uses a flexible catalog to offer converged services to a sophisticated market Flexible deployment • Multi tenancy capabilities allow for a “define once utilize many” strateg y as different tenants are hosted on a single hardware and software platform that is operated in one location CSPs can deploy Amdocs Digital Brand Experience Suite on AWS as a service or as a dedicated instance Support options • Amdocs offers support for subscription usage based and “billing as a service” models over multiple networks and protocols of any kind and across borders In addition Amdocs supports any service product and payment method as well as multiple currencies and languages Open and secure integration model • More than 500 o penstandard partner friendly pre integrated microservices use RESTful service methods • Secur ity and compliance is provided by both AWS Cloud and the Digital Brand Experience Suite architecture Functional capa bilities The Digital Brand Experience Suite comes with the following capabilities: This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 5 Digital channels • Responsive with multi modal web presentation layer – Multimodal user interfaces provide users with different ways of interacting with applications This has advantages both in providing interaction solutions with additional robustness in environments • Bespoke native mobile application – The goal of bespoke software or mobile apps is to create operational efficiency reduce cost improve retention and drive up revenue • Selfcare – Web interface enables customers to use the selfservice capability • Customer service representative (CSR) interfaces – The customer service interface includes tools and information for supporting the system admin users customers and transactions Business process foundation • Identity management – Authentication roles user management and single sign on • Security usage throttling service level agreements ( SLAs ) – Authorization metrics and SLA enforcement around exposed northbound APIs • Microservice based REST APIs – API framework to deliver business services through a standardized REST API model • Configurable service logic – Orchestration of underlying APIs to deliver business oriented functions enhanced flexibility and extensibility • Data mapping – Management of the Digital Brand Experience Suite data model and virtualization of 
external third party applications • Commerce catalog – Rules matching products and services to customers Rules can be based on account segment hierarchy geography equipment serviceability or any number of other factors and defined business processes serving both B2B and B2C customers With optional intelligence capabilities the rules can be extended to support marketing campaigns such as Next Best Offer /Next Best Action ( NBO/ NBA) • Shopping cart – Product browsing and search cart item management (including product options and features) and pricing This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 6 • Quotation service – A view into what a bill would look like for a given order including prices discounts and taxation • Messaging – Asynchronous message queuing technology with persistence for internal event notification and synchronization and routing to the relevant professional (system admin istrator CSR and so on ) Customer management layer capabilities • Customer management – Definition of customer profiles customer interactions and customer hierarchies supporting simple to extremely complex B2B hierarch ies and B2C scenarios • Case management – Customer interaction mechanism which can initiate actions in the system and queue up issues for service provider personnel Configurable rules determine actions and routing for a particular case • Inventory – Manag es serialized logical inventory for association to billing products Inventory can be categorized by type or line with corequisite rules defined in the catalog • Resource management – Manages dynamic lifecycle policy for all resources Revenue management • Billing rules – Configurable management of rules related to the billing operation This is the foundation for how charges are derived from a combination of price and customer service attributes • Event and order fulfillment – A workflow driven process to pro vision and activate billing orders in the system This involves instantiation of the relevant products to their respective customer databases • Usage and file processing – Integrity checks on the input event usage files before passing to rating • Rating engine – Offline and online rating engine including filebased offline rating typically for prepaid and postpaid subscribers The rating engine can use multiple factors related to the subscriber account and service to calculate the price for the usage o Offline rating engine – Filebased offline rating typically for postpaid subscribers This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 7 o Online rating engine – Real time rating and promotional calculations based on network events • Rated usage management – Persistence and indexing of billed unbille d non billable usage and usage details • Bill preparer – The billing processor (BIP) identifies accounts within a particular bill cycle gathers data for bill processing calculates billable charges and generates processed information for bill formatting • Billtime discount – Calculates bill time discounts based on total usage for the period total charges and applicable discount tiers • Billtime 
taxation – Calculates appropriate taxes given the geography account information info and installed tax packages • Invoice generator (IGEN) – Combines the processed bill information from the BIP with invoice formats from the invoice designer to produce formatted bills The IGEN supports conditional logic in the templates and multi language pres entation formats • Accounts receivable (AR) balance management – Applies bill charges to an account’s AR balances Thresholds defined against the balance may trigger notifications and/or lifecycle state changes • Payments – Requests for payment payment hist ory and payment profiles • Adjustments and refunds – Allow for charges to be disputed adjusted or fully refunded A manager approval mechanism with workflow ensures that all adjustments have been reviewed and authorized • Journal ( general ledger) feeds – Reporting function that maps all financially significant activities in the system to operator defined general ledger codes Journaling generates feed files on a regular basis with the charges organized based on the specified codes and categories These f iles are then imported into the operator’s account systems • Collections – Driven process through which past due bills launch various external notification and collection activities ultimately leading to debt resolution or write off Interfaces are provide d to restore account state upon successful collection action • Recharge – Balance allotments and related promotions launch ed by recharge actions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ amdocsdigitalbrandexperienceplatform/ amdocsdigitalbrandexperienceplatformhtmlAmazon Web Services Amdocs Digital Brand Experience Platform in AWS Cloud 8 • Balance management – Full lifecycle of cyclical authorization balances updated in real time • Online promotions – Realtime bonus awards and discounts applied immediately to balances • Notification s – Threshold based external notification s (for example invoked in response to a low balance ) Order management • Order management – Processing of ordered servi ces and their elements prior to order fulfillment Typically initiated at the end of the shopping experience t his can include editing or cancelling pending orders or forcing pending orders to immediately activate workflow driven processes configured to m eet business needs • Order fulfillment – A workflow driven process to provision and activate orders in the system Configurable m ilestones define the workflow model for each service and may involve many steps a route to service activation on thirdparty sy stems • Provisioning – Runs the provisioning processes of all ordered services on various network s including : Home Location Registers unified communication platforms electrical grids media servers Home Subscriber Servers and others • Network protocol integration – Supports authentication authorization and accounting functionality for all types of online and offline charging as well as major network protocols Formats are provided for common event record types Interfaces to online charging system (OCS) support all the protocols involved in voice and data charging especially 5G Functional architecture Digital Brand Experience Suite architecture includes three layers: user experience integration and application The following diagram illus trates the high level architecture This version has been archived For the latest version of this document visit: 
https://docs.aws.amazon.com/whitepapers/latest/amdocs-digital-brand-experience-platform/amdocs-digital-brand-experience-platform.html

Digital Brand Experience Suite functional architecture

This whitepaper focuses primarily on the integration and application layers because these features are deployed in AWS. While the UI applications are downloaded from AWS, the actual UI runtime occurs client side. The APIs of the integration layer support the Digital Brand Experience Suite user interfaces (UIs) as well as other third-party client integrations. These APIs expose the capabilities of the application layer and orchestrate the different applications to form higher-level business services. Integration layer capabilities are marked in the green box and application layer capabilities are marked in the blue box. Additional detailed capabilities can be reviewed in the following diagram.

Digital Brand Experience Suite functional capabilities

Note that the OCS domain in the preceding diagram depicts a reference implementation; integration with an OCS (as well as the specific OCS used) is an optional aspect of the Digital Brand Experience Suite solution.

Integration layer capabilities
• Throttling and SLAs – Metrics and SLA reporting around the exposed northbound APIs
• Identity management – Centralized authentication and authorization
• Business logic and integration – Service-oriented APIs and their supporting capabilities
• Commerce catalog – Definition and management of products related to the shopping experience. Includes eligibility aspects, references to marketing collateral, bundling constructions, and so forth
• Commerce engine – Technical APIs to manage shopping carts and catalog browsing
• Extensible business logic – Business rules which extend the core logic of the APIs. This also includes business process management to model flow-based scenarios such as case handling and post-checkout approval
• Dynamic data storage – Persistence for objects that are required for Digital Brand Experience Suite capabilities but not part of the existing and native application models. This includes things like consents, contacts, metadata for order-supporting documentation, and assigned and applied product instances

Application layer capabilities
• Billing catalog – Definition and management of products related to the billing operation. Products and their elements include rate plans, discount plans, recurring and non-recurring charges, and associated configuration. Product lifecycle allows for advance sales windows, sunsetting, and so forth. For other billing application capabilities, refer to the Revenue management section of this document.

Data management
The following diagram shows the main entities managed by Digital Brand Experience Suite, with the functional domains that are primarily responsible for each.

Digital Brand Experience Suite functional domains (diagram): maps entities such as shopping carts, applied products, catalog collateral, pricing, discount, eligibility, compatibility, and cart validation rules, orders and service orders, payments, adjustments and refunds, balances, invoice details and formats, bill-time rates, discounts and taxes, rated usage, logical inventory, collections workflows, privacy and consent records, contacts, contracts and signatures, business process and case workflow definitions and instances, interactions and notes, and users, roles, and permissions to the Optima web UI, business logic and integration, commerce engine, dynamic data, business process/case, BSS application, and business access layer domains, persisted in Amazon Aurora and Couchbase.

Benefits of deploying Digital Brand Experience Suite on AWS
With the growth of the subscriber base and the high demands of 5G, cost reduction becomes an essential factor in building a successful business model. CSPs that run Digital Brand Experience Suite on AWS pay only for the resources they use. With the pay-as-you-go model, customers can also spin up, experiment with, and iterate BSS environments (testing, dev, and so forth) and pay based on consumption. An on-premises environment usually provides a limited set of environments to work with—provisioning additional environments can take a long time or might not be possible. With AWS, CSPs can create virtually any number of new environments in minutes, as required. In addition, CSPs can create a logical separation between projects, environments, and loosely decoupled applications, thereby enabling each of their teams to work independently with the resources they need. Teams can subsequently converge in a common integration environment when they are ready. At the conclusion of a project, customers can shut down the environment and cease payment.

Customers often over-size on-premises environments for the initial phases of a project but subsequently cannot cope with growth in later phases. With AWS, customers can scale their compute resources up or down at any time. Customers pay only for the individual services they need, for as long as they use them. In addition, customers can change instance sizes in minutes through the AWS Management Console, the AWS API, or the AWS Command Line Interface (AWS CLI); see the sketch after this section.

Because of the exponential growth of data worldwide, and specifically in the telecom world, designing and deploying backup solutions has become more complicated. With AWS, customers have multiple options to set up a disaster recovery strategy, depending on the recovery point objective (RPO) and recovery time objective (RTO), using the expansive AWS Global Cloud Infrastructure.

The Amdocs Digital Brand Experience Suite platform offers rich product and service management capabilities, which can be integrated with AWS Cloud analytics services for use cases such as subscriber, customer, and usage analytics. Digital Brand Experience Suite capabilities can also be empowered by machine learning and artificial intelligence capabilities through AWS services.
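The following is a minimal sketch of the instance-resize flow mentioned above, using the AWS SDK for Python (Boto3). The instance ID, Region, and target instance type are placeholders, not values from an actual Amdocs deployment.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder ID for a BSS worker node
TARGET_TYPE = "r5.2xlarge"            # example target size

# Stop the instance, change its type, and start it again.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": TARGET_TYPE},
)

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
print(f"{INSTANCE_ID} is now running as {TARGET_TYPE}")

The same change can be made from the AWS Management Console or the AWS CLI; because a resize requires a stop/start cycle, it is typically applied to stateless nodes or during a maintenance window.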
Digital Brand Experience Suite deployment architecture

Although there are multiple options for deploying the Digital Brand Experience Suite into an AWS environment, the diagrams in this section primarily focus on deploying into a multi-tenant SaaS architecture. Where possible, common aspects of the architecture for non-SaaS deployments will be highlighted.

Technical architecture

Common deployment architecture
The following diagram depicts the main resources deployed for the Digital Brand Experience Suite. The application uses the same AWS services regardless of the nature of the cloud deployment (for example, SaaS vs. non-SaaS).

Digital Brand Experience Suite common cloud resources detail (diagram): a VPC with customers, BSS, and database subnets containing Application Load Balancers for customers and for the Amdocs platform, AWS PrivateLink endpoints for customers and for the Amdocs platform, AWS Lambda functions for the web UI backend and payment gateway, Amazon API Gateway, Amazon EKS-managed EC2 node groups (BAL, BIL, ESB, BSS, and BP batch), Amazon EFS, Amazon Aurora bill and BSS databases, Couchbase on Amazon EC2, and interface endpoints to Amazon S3, Amazon CloudWatch, Amazon ECR, and AWS Systems Manager, each protected by security groups. The web UI is hosted in Amazon S3.

The Digital Brand Experience Suite uses an Amazon Virtual Private Cloud (Amazon VPC) that is divided into three subnets, which organize the access, compute, and storage resources needed for the Digital Brand Experience Suite. All of these subnets are private—access is handled by a demilitarized zone (DMZ), such as the inbound services VPC of the SaaS offering.

Customers subnet
The customers subnet provides access and load balancing capabilities into the VPC. This is the entry point from the DMZ (for example, the inbound services VPC through AWS PrivateLink for the customers interface). As such, access here is focused on the services that the end users need for their Digital Brand Experience.

BSS subnet
The BSS subnet holds the primary computing resources. These comprise different Auto Scaling groups managed by Amazon Elastic Kubernetes Service (Amazon EKS).
• Business Access Layer (BAL) nodes – Used for API access, path-based routing, metrics, and throttling to support the Digital Brand Experience Suite APIs. These capabilities are provided by the APIMAN package. These nodes support inherent SLAs and enable customers to set throttling rules based on the number of requests per second for each API method.
• Enterprise Service Bus (ESB) nodes – Implement the Digital Brand Experience Suite SaaS APIs, which are organized into microservices based on functional areas (for example, account management, shopping cart, and invoicing). These APIs and their integration logic translate between the high-level, service-oriented requests received by the Digital Brand Experience Suite APIs and the low-level technical APIs needed to fulfill the requests across the various Digital Brand Experience Suite resources.
• Bill Processing (BP) batch nodes – Run the billing applications, which perform bill calculation, invoice generation, collections, and journal processing. These applications are task-based, meaning that they are initiated on a schedule and on a particular set of input data. For example, bill processing for cycle 15 will run on the determined day (for example, the fifteenth day of the month) for the subset of accounts who have selected the fifteenth day as their bill cycle date. By using native auto scaling, BP batch nodes dynamically scale Amazon Elastic Compute Cloud (Amazon EC2) instances based on configurable parameters (such as the number of customers, services, and products), which is one of the major benefits of running the application on AWS. With AWS Auto Scaling, BP batch applications always have the right resources at the right time.
• BSS nodes – Host the low-level service APIs, which expose the billing capabilities to the integration layer—for example, fetching the invoice details from processed bills or inquiring about a particular collections scenario.
• Business Integration Layer (BIL) nodes – Contain applications that support the middleware: the shopping cart application, Red Hat Decision Manager (RHDM), which is used to extend the BIL API business logic, and Red Hat Process Automation Manager (RHPAM), which is used for case handling and post-cart processing (for example, credit review).

The use of each of these different node groups depends highly on the traffic profiles of the specific operator; as a result, deploying these node groups into separate Auto Scaling groups allows for greater platform efficiency by scaling the specific node group accordingly.

AWS Fargate is used for BP batch, which comprises scheduled and task-based applications like the billing processor and invoice generator. Rather than port these applications, Fargate is used to containerize them while maintaining their established technology stack. An Amazon Elastic File System (Amazon EFS) instance is deployed within this subnet and is used by the various processes of the billing application (for example, usage files that are shared between the different usage file rating processes).

As part of the overall migration of the Digital Brand Experience Suite solution to be more AWS native, several processes have already moved to serverless computing resources. For example, the payment gateway and web UI backend are implemented through AWS Lambda functions for event-based handling (a minimal handler sketch appears at the end of this subsection). Serverless computing on AWS—such as AWS Lambda—includes automatic scaling, built-in high availability, and a pay-for-value billing model. AWS Lambda is an event-driven compute service that enables customers to run code in response to events from over 200 natively integrated AWS and SaaS sources—all without managing any servers.

Internal Amdocs operations and support users access the BSS subnet from the management VPC through PrivateLink for Amdocs interfaces. PrivateLink provides private connectivity between VPCs, AWS services, and customers' on-premises networks without exposing their traffic to the public internet.
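As an illustration of the event-driven pattern mentioned above, the following is a minimal AWS Lambda handler sketch in Python. The event fields are hypothetical; the actual contract between the Amdocs payment gateway and its callers is not documented here.

import json

def handler(event, context):
    # Hypothetical payment-callback event; illustrates the pattern only.
    body = json.loads(event.get("body", "{}"))
    account_id = body.get("accountId")
    token = body.get("paymentToken")   # tokenized card reference, never the raw card number
    amount = body.get("amount")

    if not (account_id and token and amount):
        return {"statusCode": 400, "body": json.dumps({"error": "missing fields"})}

    # In a real deployment this is where the payment would be recorded against
    # the account's AR balance, for example through an internal billing API.
    return {
        "statusCode": 200,
        "body": json.dumps({"accountId": account_id, "status": "ACCEPTED"}),
    }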
Database subnet
The database subnet holds the resources for the Digital Brand Experience Suite persistence layer (such as multiple database technologies) that are used across the Digital Brand Experience Suite SaaS solution. The BIL database and BSS database use Amazon Aurora databases for commerce (shopping cart) and billing, respectively. Database resources are only accessible from the BSS subnet. Not only does this secure the actual persisted data, but it also decouples the storage technology from the external services and hides storage details, like database schemas, from the end users. This allows the solution to evolve over time and to introduce and update storage technology while minimizing the impact on the rest of the solution and its users.

External services integration
Interface VPC endpoints are used to securely access various AWS services such as Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), Amazon Elastic Container Registry (Amazon ECR), and AWS Systems Manager. VPC endpoints allow communication between instances and databases in customer VPCs and management services such as CloudWatch and Systems Manager without imposing availability risks and bandwidth constraints on network traffic.

High availability
The following diagram depicts how Digital Brand Experience Suite can be deployed in a multiple Availability Zones (AZs) configuration to promote high availability.

Digital Brand Experience Suite high availability in AWS

Digital Brand Experience Suite architecture on AWS is highly available. The solution is built across a minimum of two Availability Zones. All Availability Zones in an AWS Region are interconnected with high-bandwidth, low-latency networking. Availability Zones are physically separated by a meaningful distance, although all are within 100 km (60 miles) of each other. If one of the Availability Zones becomes unavailable, the application continues to stay available because the architecture is highly available in all layers—databases use a multi-AZ setup, and Kubernetes spreads the pods in a deployment across nodes and multiple Availability Zones—so the impact of an Availability Zone failure is mitigated.

Digital Brand Experience Suite architecture on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, and it adjusts the size of the Amazon EKS cluster by adding or removing worker nodes in multiple Availability Zones. In addition, application components are stateless and based on containers, with Elastic Load Balancing natively aware of failure boundaries like Availability Zones, keeping your applications available across a Region without requiring Global Server Load Balancing. A sketch that verifies the Availability Zone spread of the node groups follows this section.
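As a quick way to confirm the multi-AZ layout described above, the following Boto3 sketch lists the Availability Zones behind each node group's Auto Scaling group. The group names and Region are placeholders; Amazon EKS-managed node groups generate their own group names.

import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Placeholder names for the node-group Auto Scaling groups.
NODE_GROUP_ASGS = ["bal-nodes-asg", "esb-nodes-asg", "bss-nodes-asg"]

response = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=NODE_GROUP_ASGS
)

for group in response["AutoScalingGroups"]:
    azs = group["AvailabilityZones"]
    status = "OK" if len(azs) >= 2 else "SINGLE-AZ"
    print(f"{group['AutoScalingGroupName']}: {len(azs)} AZs ({', '.join(azs)}) -> {status}")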
Scalability
The solution is fully scalable using Auto Scaling groups of various container types. This allows for more fine-grained scalability as the various compute needs change over time. Auto Scaling groups can be configured with different scaling models, scaling either up or down based on events, system measurements, or a preset schedule. Digital Brand Experience Suite architecture uses Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora scales in many ways, including storage, instance, and read scaling. The application also uses Couchbase on Amazon EC2, with Couchbase set up in a way that makes it scalable.

Security

Access management
Access follows role-based access control through AWS Identity and Access Management (IAM). The solution has defined roles based on who needs access to what. As a best practice, customers could assign permissions at the IAM group or role level to access applications in the specific VPCs, and never grant privileges beyond the minimum required for a user or group to fulfill their job requirements. The list of roles and groups changes with each project.

Secure data at rest
Data at rest is encrypted at the storage volume level (using AWS built-in capabilities) as well as at the database level (on configurable PII fields). Digital Brand Experience Suite architecture uses AWS Key Management Service (AWS KMS) to create and control the encryption keys, making it easy for customers to create and manage cryptographic keys and control their use across a wide range of AWS services and applications. Encryption is applied by solution components and AWS services; decryption is applied by each data consumer.

Secure data in transit
Web UI access is encrypted with SSL encryption (HTTPS), and the solution's API layer access is encrypted with SSL encryption (HTTPS). Additionally, the encryption keys are stored in AWS KMS, and the system credentials are securely stored in AWS Secrets Manager. Automated clearing house and credit card data are tokenized by the purchaser's payment gateway system, and the solution stores the credit card token only.
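The following Boto3 sketch illustrates the two controls described above: reading system credentials from AWS Secrets Manager at runtime instead of hard-coding them, and encrypting a configurable PII field with AWS KMS before it is persisted. The secret name and key alias are placeholders, not the identifiers used by the Amdocs SaaS.

import json
import boto3

secrets = boto3.client("secretsmanager", region_name="eu-west-1")
kms = boto3.client("kms", region_name="eu-west-1")

DB_SECRET_ID = "bss/aurora/app-user"   # placeholder secret name
PII_KEY_ALIAS = "alias/bss-pii"        # placeholder KMS key alias

# Retrieve database credentials at runtime.
secret = json.loads(secrets.get_secret_value(SecretId=DB_SECRET_ID)["SecretString"])
db_user, db_password = secret["username"], secret["password"]

# Encrypt a configurable PII field before it is written to the database.
ciphertext = kms.encrypt(
    KeyId=PII_KEY_ALIAS,
    Plaintext="jane.doe@example.com".encode("utf-8"),
)["CiphertextBlob"]

Decryption would be performed by each data consumer with a matching kms.decrypt call, mirroring the division of responsibility described in the Secure data at rest section.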
Digital Brand Experience Suite SaaS model
The following diagram provides a high-level network layout view identifying the three major VPCs configured.

Digital Brand Experience Suite SaaS overall view (diagram): shows users, the Amdocs data center, AWS Direct Connect, an Amazon Route 53 public hosted zone, AWS Shield Advanced, AWS WAF, an Amazon CloudFront download distribution for the S3-hosted web UI, AWS PrivateLink for customers and for the Amdocs platform, and the inbound services, SaaS, and management VPCs within the Region.

This diagram also addresses the two primary means of accessing the solution: end customer and user access through the inbound services VPC, and Amdocs operations access through the management VPC. Both methods can then access the common resources in the Digital Brand Experience Suite SaaS VPC. End customer and user access is secured by AWS Shield Advanced, which provides managed distributed denial of service (DDoS) protection, and AWS Web Application Firewall (AWS WAF), which protects the application from common web exploits. In addition, Amazon CloudFront is deployed in front of the Amazon S3 buckets used to host the web UI application client for download. This improves initial application download performance by placing the application closer to the user. This layout is more tailored to SaaS offerings because it provides two main access channels: individual tenant and global operations. Non-SaaS cloud offerings employ a different network architecture.

Inbound services VPC (SaaS offering)
The following diagram provides more detail on the inbound services VPC.

Digital Brand Experience Suite SaaS inbound services VPC detail (diagram): a public DMZ subnet behind an internet gateway, with Amazon Route 53 public and private hosted zones, AWS Shield Advanced, AWS WAF, an Amazon CloudFront download distribution for the S3-hosted web UI, a Network Load Balancer for the Amdocs platform, security groups, elastic network interfaces, and AWS PrivateLink for customers.

The public DMZ subnet is the approachable point for all users—it primarily provides authentication services so that further secured services can be accessed. To protect the solution from malicious attacks such as DDoS, AWS WAF and AWS Shield are deployed.

Management VPC (SaaS offering)
The following diagram provides more detail on the management VPC.

Digital Brand Experience Suite SaaS management VPC detail (diagram): a private management subnet with Windows bastion instances, security groups, elastic network interfaces, interface endpoints, an Amazon Route 53 private hosted zone, access to the S3-hosted web UI, and AWS PrivateLink for the Amdocs platform, behind an internet gateway.

The resources within the private management subnet provide access to Digital Brand Experience Suite SaaS for the operations engineers. Microsoft Windows instances in Amazon EC2 run as bastion instances in the private management VPC. Operations engineers can use the Remote Desktop Protocol to administer and access the compute resources inside the VPC remotely. PrivateLink is also used to connect services across accounts and VPCs without exposing the traffic to the public internet, as sketched below.
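The following Boto3 sketch shows the PrivateLink pattern referenced above by creating an interface VPC endpoint for AWS Systems Manager inside a management VPC, so that management traffic stays off the public internet. All resource IDs and the Region are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Placeholder resource IDs for the management VPC.
VPC_ID = "vpc-0abc1234"
SUBNET_IDS = ["subnet-0abc1234"]
SECURITY_GROUP_IDS = ["sg-0abc1234"]

# Create an interface endpoint so that Systems Manager traffic uses AWS PrivateLink.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.eu-west-1.ssm",
    SubnetIds=SUBNET_IDS,
    SecurityGroupIds=SECURITY_GROUP_IDS,
    PrivateDnsEnabled=True,
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])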
AWS Well-Architected Framework
The AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. The AWS Well-Architected Framework is based on five pillars:
• Operational excellence
• Security
• Reliability
• Performance efficiency
• Cost optimization

AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures and implement designs that can scale over time. The AWS Well-Architected Framework helped Amdocs adopt best practices and achieve an optimized architecture for the Digital Brand Experience Suite on AWS. The following is an overview of the five pillars of the AWS Well-Architected Framework with reference to the Digital Brand Experience Suite architecture on AWS.

Operational excellence
This pillar focuses on the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. Digital Brand Experience Suite architecture on AWS has the ability to support development and run workloads effectively. The application gains insights into the operational aspects by using CloudWatch to collect metrics, send alarms, monitor Amazon Aurora metrics, and use CloudWatch Container Insights from an Amazon EKS cluster. The application uses AWS Lambda to respond to operational events, automate changes, and continuously manage and improve processes to deliver business value. Customers can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.

Security
This pillar focuses on the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. Digital Brand Experience Suite architecture on AWS takes advantage of inherent prevention features such as:
• Amazon VPCs to logically isolate environments per customer requirements
• Subnets to logically isolate multiple layers in the VPC and control the communication between them
• Network access control lists and security groups to control incoming and outgoing traffic
Digital Brand Experience Suite uses AWS KMS for security of data at rest, SSL encryption for data in transit, Secrets Manager for system credential management, and role-based access control through IAM for access management. Customers can find prescriptive guidance on implementation in the Security Pillar whitepaper.

Reliability
This pillar focuses on the ability of a system to recover from infrastructure or service failures, to dynamically acquire computing resources to meet demand, and to mitigate disruptions such as misconfigurations or transient network issues. Digital Brand Experience Suite quickly recovers from database failure by using Amazon Aurora, which spans multiple Availability Zones in an AWS Region; each Availability Zone contains a copy of the cluster volume data. This means that the database cluster can tolerate the failure of an Availability Zone without any loss of data (the sketch after this section shows one way to inspect the cluster topology). Digital Brand Experience Suite on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, handling the scalability and reliability of the application. Changes are made through automation using AWS CloudFormation. The architecture of Digital Brand Experience Suite on AWS encompasses the ability to perform its intended function correctly and consistently when it's expected to. This includes the ability to operate and test the workload through its total lifecycle. Customers can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
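As one way to inspect the reliability posture described above, the following Boto3 sketch lists an Aurora cluster's members and the Availability Zone each one runs in. The cluster identifier and Region are placeholders.

import boto3

rds = boto3.client("rds", region_name="eu-west-1")

CLUSTER_ID = "bss-aurora-cluster"   # placeholder cluster identifier

cluster = rds.describe_db_clusters(DBClusterIdentifier=CLUSTER_ID)["DBClusters"][0]

for member in cluster["DBClusterMembers"]:
    instance = rds.describe_db_instances(
        DBInstanceIdentifier=member["DBInstanceIdentifier"]
    )["DBInstances"][0]
    role = "writer" if member["IsClusterWriter"] else "reader"
    print(f"{instance['DBInstanceIdentifier']}: {role} in {instance['AvailabilityZone']}")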
Performance efficiency
This pillar deals with the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. The architecture of Digital Brand Experience Suite on AWS ensures efficient usage of the compute, storage, and database resources to meet system requirements and to maintain them as demand changes and technologies evolve. Customers can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

Cost optimization
This pillar deals with the ability to avoid or eliminate unneeded cost or suboptimal resources. Digital Brand Experience Suite on AWS uses Amazon Aurora PostgreSQL, which considerably reduces database costs. Amazon Aurora PostgreSQL is three times faster than standard PostgreSQL databases, and it provides the security, availability, and reliability of commercial databases at one-tenth the cost. Additionally, Digital Brand Experience Suite on AWS supports Cluster Autoscaling as well as Horizontal Pod Autoscaling, contributing to considerable cost reduction. The architecture of Digital Brand Experience Suite on AWS has the ability to run systems to deliver business value at the lowest price point. Customers can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.

Conclusion
Amdocs Digital Brand Experience Suite is a pre-integrated, complete digital customer management and commerce platform designed to rapidly and securely monetize any product or service. The richness of Amdocs Digital Brand Experience Suite's capabilities and flexibility—a strong BSS engine enabled by modern, digital, open source components such as JBoss Fuse, REST APIs, React, Node.js, and other advanced technologies—enables customers to enjoy the superior performance of a well-proven solution. Amdocs Digital Brand Experience Suite combines the effectiveness of a lean architecture and future readiness to provide customers the ability to step into the digital economy.

By deploying Amdocs Digital Brand Experience Suite in the AWS Cloud, customers can increase deployment velocity, reduce infrastructure cost significantly, and integrate with IoT, analytics, and machine learning services. Customers can further use the compliance benefits of the AWS Cloud for sensitive customer data. AWS is the cost-effective, secure, scalable, high-performing, and flexible option for deploying Amdocs Digital Brand Experience Suite BSS.

Contributors
Contributors to this document include:
• David Sell, Lead Software Architect, Amdocs Digital Brand Experience, Amdocs
• Shahar Dumai, Head of Marketing for Amdocs Digital Brand Experience, Amdocs
• Efrat Nir-Berger, Sr. Partner Solutions Architect, OSS/BSS, Amazon Web Services
• Visu Sontam, Sr. Partner Solutions Architect, OSS/BSS, Amazon Web Services
• Mounir Chennana, Solutions Architect, Amazon Web Services

Further reading
For additional information, see:
• 5G Network Evolution with AWS whitepaper
• Continuous Integration and Continuous Delivery for 5G Networks on AWS whitepaper
• Next-Generation Mobile Private Networks Powered by AWS whitepaper
• AWS Well-Architected Framework whitepaper
• Next-Generation OSS with AWS whitepaper

Document revisions
Date – Description
November 18, 2021 – Updated for technical accuracy
February 2018 – First publication
An Introduction to High Performance Computing on AWS
High Performance Computing (HPC) has been key to solving the most complex problems in every industry and changing the way we work and live From weather modeling to genome mapping to the search for extraterrestrial intelligence HPC is helping to push the boundaries of what’s possible with advanced computing technologies Once confined to government labs large enterprises and select academic organizations today it is found across a wide range of industries In this paper we will discuss how cloud services put the world’s most advanced computing capabilities within reach for more organizations helping them to innovate faster and gain a competitive edge We will discuss the advantages of running HPC workloads on Amazon Web Services (AWS) with Intel® Xeon® technology compared to traditional onpremises architectures We will also illustrate these benefits in actual deployments across a variety of industries High Performance Computing on AWS Redefines What is Possible In 2017 the market for cloud HPC solutions grew by 44% compared to 2016i https://awsamazoncom/hpc 2HPC FUNDAMENTALS Although HPC applications share some common building blocks they are not all similar HPC applications are often based on complex algorithms that rely on high performing infrastructure for efficient execution These applications need hardware that includes high performance processors memory and communication subsystems For many applications and workloads the performance of compute elements must be complemented by comparably high performance storage and networking elements Some may demand high levels of parallel processing but not necessarily fast storage or high performance interconnect Other applications are interconnectsensitive requiring low latency and high throughput networking Similarly there are many I/Osensitive applications that without a very fast I/O subsystem will run slowly because of storage bottlenecks And still other applications such as game streaming video encoding and 3D application streaming need performance acceleration using GPUs Today many large enterprises and research institutions procure and maintain their own HPC infrastructure This HPC infrastructure is shared across many applications and groups within the organization to maximize utilization of this significant capital investment Cloudbased services have opened up a new frontier for HPC Moving HPC workloads to the cloud can provide near instant access to virtually unlimited computing resources for a wider community of users and can support completely new types of applications Today organizations of all sizes are looking to the cloud to support their most advanced computing applications For smaller enterprises cloud is a great starting point enabling fast agile deployment without the need for heavy capital expenditure For large enterprises cloud provides an easier way to tailor HPC infrastructure to changing business needs and to gain access to the latest technologies without having to worry about upfront investments in new infrastructure or ongoing operational expenses When compared to traditional onpremises HPC infrastructures cloud offers significant advantages in terms of scalability flexibility and costONPREMISES HPC HAS ITS LIMITS Today onpremises HPC infrastructure handles most of the HPC workloads that enterprises and research institutions employ Most HPC system administrators maintain and operate this infrastructure at varying levels of utilization However business is always competitive so efficiency needs to be coupled with the 
flexibility and opportunity to innovate continuously Some of the challenges with onpremises HPC are well known These include long procurement cycles high initial capital investment and the need for midcycle technology refreshes For most organizations planning for and procuring an HPC system is a long and arduous process that involves detailed capacity forecasting and system evaluation cycles Often the significant upfront capital investment required is a limiting factor for the amount of capacity that can be procured Maintaining the infrastructure over its lifecycle is an expensive proposition as well Previously technology refreshes every three years was enough to stay current with the compute technology and incremental demands from HPC workloads However to take advantage of the faster pace of innovation HPC customers are needing to refresh their infrastructure more often than before And it is worth the effort IDC reports that for every $1 spent on HPC businesses see $463 in incremental revenues and $44 in incremental profit so delaying incremental investments in HPC – and thus delaying the innovations it brings – has large downstream effects on the businesshttps://awsamazoncom/hpc 3Stifled Innovation: Often the constraints of onpremises infrastructure mean that use cases or applications that did not meet the capabilities of the hardware were not considered When engineers and researchers are forced to limit their imagination to what can be tried out with limited access to infrastructure the opportunity to think outside the box and tinker with new ideas gets lost Reduced Productivity: Onpremises systems often have long queues and wait times that decrease productivity They are managed to maximize utilization – often resulting in very intricate scheduling policies for jobs However even if a job requires only a couple of hours to run it may be stuck in a prioritized queue for weeks or months – decreasing overall productivity and limiting innovation In contrast with virtually unlimited capacity the cloud can free users to get the same job done but much faster without having to stand in line behind others who are just as eager to make progressLimited Scalability and Flexibility: HPC workloads and their demands are constantly changing and legacy HPC architectures cannot always keep pace with evolving requirements For example infrastructure elements like GPUs containers and serverless technologies are not readily available in an onpremises environment Integrating new OS or container capabilities – or even upgrading libraries and applications – is a major systemwide undertaking And when an onpremises HPC system is designed for a specific application or workload it’s difficult and expensive to take on new HPC applications as well as forecast and scale for future (frequently unknown) requirements Lost Opportunities: Onpremises HPC can sometimes limit an organization’s opportunities to take full advantage of the latest technologies For example as organizations adopt leadingedge technologies like artificial intelligence/ machine learning technologies (AI/ML) and visualization the complexity and volume of data is pushing on premises infrastructure to its limits Furthermore most AI/ML algorithms are cloudnative These algorithms will deliver superior performance on large data sets when running in the cloud especially with workloads that involve transient data that does not need to be stored long term There are other limitations of onpremises HPC infrastructure that are less visible and so are often 
overlooked leading to misplaced optimization efforts https://awsamazoncom/hpc 4CLOUD IS A BETTER WAY TO HPC To move beyond the limits of onpremises HPC many organizations are leveraging cloud services to support their most advanced computing applications Flexible and agile the cloud offers strong advantages compared to traditional onpremises HPC approaches HPC on AWS with Intel® Xeon® processors deliver significant leaps in compute performance memory capacity and bandwidth and I/O scalability The highly customizable computing platform and robust partner community enable your staff to imagine new approaches so they can fail forward faster delivering more answers to more questions without the need for costly onpremises upgrades In short AWS frees you to rethink your approach to every HPC and big data analysis initiative and invites your team to ask questions and seek answers as often as possible Innovate Faster with a Highly Scalable Infrastructure Moving HPC workloads to the cloud can bring down barriers to innovation by opening up access to virtually unlimited capacity and scale And one of the best features of working in a cloud environment is that when you solve a problem it stays solved You’re not revisiting it every time you do a major systemwide software upgrade or a biannual hardware refresh Limits on scale and capacity with onpremises infrastructure usually led to organizations being reluctant to consider new use cases or applications that exceeded their capabilities Running HPC in the cloud enables asking the business critical questions they couldn’t address before and that means a fresh look at project ideas that were shelved due to infrastructure constraints Migrating HPC applications to AWS eliminates the need for tradeoffs between experimentation and production AWS and Intel bring the most costeffective scalable solutions to run the most computationallyintensive applications ondemand Now research development and analytics teams can test every theory and process every data set without straining onpremises systems or stalling other critical work streams Flexible configuration and virtually unlimited scalability allow engineers to grow and shrink the infrastructure as workloads dictate not the other way around Additionally with easy access to a broad range of cloudbased services and a trusted partner network researchers and engineers can quickly adopt tested and verified HPC applications so that they can innovate faster without having to reinvent what already exists Increase Collaboration with Secure Access to Clusters Worldwide Running HPC workloads on the cloud enables a new way for globally distributed teams to collaborate securely With globallyaccessible shared data engineers and researchers can work together or in parallel to get results faster For example the use of the cloud for collaboration and visualization allows a remote design team to view and interact with a simulation model in near real time without the need to duplicate and proliferate sensitive design data Using the cloud as a collaboration platform also makes it easier to ensure compliance with everchanging industry regulations The AWS cloud is compliant with the latest revisions of GDPR HIPAA FISMA FedRAMP PCI ISO 27001 SOC 1 and other regulations Encryption and granular permission features guard sensitive data without interfering with the ability to share data across approved users and detailed audit trails for virtually every API call or cloud orchestration action means environments can be designed to address 
specific governance needs and submit to continuous monitoring and surveillance With a broad global presence and the wide availability of Intel® Xeon® technologypowered Amazon EC2 instances HPC on AWS enables engineers and researchers to share and collaborate efficiently with team members across the globe without compromising on securityhttps://awsamazoncom/hpc 5 Optimize Cost with Flexible Resource Selection Running HPC in the cloud enables organizations to select and deploy an optimal set of services for their unique applications and to pay only for what they use Individuals and teams can rapidly scale up or scale down resources as needed commissioning or decommissioning HPC clusters in minutes instead of days or weeks With HPC in the cloud scientists researchers and commercial HPC users can gain rapid access to resources they need without a burdensome procurement process Running HPC in the cloud also minimizes the need for job queues Traditional HPC systems require researchers and analysts to submit their projects to open source or commercial cluster and job management tools which can be time consuming and vulnerable to submission errors Moving HPC workloads to the cloud can help increase productivity by matching the infrastructure configuration to the job With onpremises infrastructure engineers were constrained to running their job on the available configuration With HPC in the cloud every job (or set of related jobs) can run on its own ondemand cluster customized for its specific requirements The result is more efficient HPC spending and fewer wasted resources AWS HPC solutions remove the traditional challenges associated with onpremises clusters: fixed infrastructure capacity technology obsolescence and high capital expenditures AWS gives you access to virtually unlimited HPC capacity built from the latest technologies You can quickly migrate to newer more powerful Intel® Xeon® processorbased EC2 instances as soon as they are made available on AWS This removes the risk of onpremises CPU clusters becoming obsolete or poorly utilized as your needs change over time As a result your teams can trust that their workloads are running optimally at every stage Data Management & Data Transfer Running HPC applications in the cloud starts with moving the required data into the cloud AWS Snowball and AWS Snowmobile are data transport solutions that use devices designed to be secure to transfer large amounts of data into and out of the AWS Cloud Using Snowball addresses common challenges with large scale data transfers including high network costs long transfer times and security concerns AWS DataSync is a data transfer service that makes it easy for you to automate moving data between onpremises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS) DataSync automatically handles many of the tasks related to data transfers that can slow down migrations or burden your IT operations including running your own instances handling encryption managing scripts network optimization and data integrity validation AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS Using AWS Direct Connect you can establish private connectivity between AWS and your datacenter office or colocation environment which in many cases can reduce your network costs increase bandwidth throughput and provide a more consistent network experience than Internetbased connectionsAWS AND INTEL® DELIVER A COMPLETE HPC SOLUTION AWS HPC solutions 
with Intel® Xeon® technology-powered compute instances put the full power of HPC in reach for organizations of every size and industry. AWS provides a comprehensive set of components required to power today's most advanced HPC applications, giving you the ability to choose the most appropriate mix of resources for your specific workload. Key products and services that make up the HPC on AWS solution include:

Compute
The AWS HPC solution lets you choose from a variety of compute instance types that can be configured to suit your needs, including the latest Intel® Xeon® processor-powered CPU instances, GPU-based instances, and field programmable gate array (FPGA)-powered instances. The latest Intel-powered Amazon EC2 instances include the C5n, C5d, and z1d instances. C5n instances feature the Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all-core Turbo CPU clock speed of up to 3.5 GHz. C5n instances provide up to 100 Gbps of network bandwidth and up to 14 Gbps of dedicated bandwidth to Amazon EBS. C5n instances also feature a 33% higher memory footprint compared to C5 instances. For workloads that require access to high-speed, ultra-low latency local storage, AWS offers C5d instances equipped with local NVMe-based SSDs. Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint; high frequency z1d instances deliver a sustained all-core frequency of up to 4.0 GHz, the fastest of any cloud instance. For HPC codes that can benefit from GPU acceleration, the Amazon EC2 P3dn instances feature 100 Gbps network bandwidth (up to 4x the bandwidth of previous P3 instances), local NVMe storage, the latest NVIDIA V100 Tensor Core GPUs with 32 GB of GPU memory, NVIDIA NVLink for faster GPU-to-GPU communication, and AWS-custom Intel® Xeon® Scalable (Skylake) processors running at 3.1 GHz sustained all-core Turbo.

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes.

Networking
Amazon EC2 instances support enhanced networking, which allows EC2 instances to achieve higher bandwidth and lower inter-instance latency compared to traditional virtualization methods. Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables you to run HPC applications requiring high levels of inter-node communications at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling HPC applications. AWS also offers placement groups for tightly coupled HPC applications that require low latency networking. Amazon Virtual Private Cloud (VPC) provides IP connectivity between compute instances and storage components.

Storage
Storage options and storage costs are critical factors when considering an HPC solution. AWS offers flexible object, block, or file storage for your transient and permanent storage requirements. Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2; Provisioned IOPS allows you to allocate storage volumes of the size you need and to attach these virtual volumes to your EC2 instances. Amazon Simple Storage Service (S3) is designed to store and access any type of data over the Internet and can be used to store the HPC input and output data long term, without ever having to do a data migration project again. A minimal staging sketch follows.
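The following Boto3 sketch shows the staging pattern described above: uploading input data to Amazon S3 before a run and retrieving results afterwards. The bucket name, object keys, and file names are placeholders.

import boto3

s3 = boto3.client("s3")

BUCKET = "my-hpc-data-bucket"   # placeholder bucket name

# Stage simulation input data before launching the cluster...
s3.upload_file("mesh/aircraft_case01.cgns", BUCKET, "inputs/case01/aircraft.cgns")

# ...and pull results back after the job completes.
s3.download_file(BUCKET, "results/case01/forces.csv", "results/forces.csv")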
Amazon FSx for Lustre is a high performance file storage service designed for demanding HPC workloads and can be used with Amazon EC2 in the AWS cloud. Amazon FSx for Lustre works natively with Amazon S3, making it easy for you to process cloud data sets with high performance file systems. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write results back to S3. You can also use FSx for Lustre as a standalone high-performance file system to burst your workloads from on-premises to the cloud. By copying on-premises data to an FSx for Lustre file system, you can make that data available for fast processing by compute instances running on AWS. Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud.

Automation and Orchestration
Automating the job submission process and scheduling submitted jobs according to predetermined policies and priorities are essential for efficient use of the underlying HPC infrastructure. AWS Batch lets you run hundreds to thousands of batch computing jobs by dynamically provisioning the right type and quantity of compute resources based on the job requirements. AWS ParallelCluster is a fully supported and maintained open source cluster management tool that makes it easy for scientists, researchers, and IT administrators to deploy and manage High Performance Computing (HPC) clusters in the AWS Cloud. NICE EnginFrame is a web portal designed to provide efficient access to HPC-enabled infrastructure using a standard browser. EnginFrame provides a user-friendly HPC job submission, job control, and job monitoring environment.

Operations & Management
Monitoring the infrastructure and avoiding cost overruns are two of the most important capabilities that help HPC system administrators efficiently manage your organization's HPC needs. Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

Visualization Tools
The ability to visualize results of engineering simulations without having to move massive amounts of data to and from the cloud is an important aspect of the HPC stack. Remote visualization helps accelerate turnaround times for engineering design significantly. NICE Desktop Cloud Visualization enables you to remotely access 2D/3D interactive applications over a standard network. In addition, Amazon AppStream 2.0 is a fully managed application streaming service that can securely deliver application sessions to a browser on any computer or workstation.

Security and Compliance
Security management and regulatory compliance are other important aspects of running HPC in the cloud. AWS offers multiple security-related services and quick-launch templates to simplify the process of creating an HPC cluster and implementing best practices in data security and regulatory compliance. The AWS infrastructure puts strong safeguards in place to help protect customer privacy. All data is stored in highly secure AWS data centers. AWS
Identity and Access Management (IAM) provides a robust solution for managing users roles and groups that have rights to access specific data sources Organizations can issue users and systems individual identities and credentials or provision them with temporary access credentials using the Amazon Security Token Service (Amazon STS) AWS manages dozens of compliance programs in its infrastructure This means that segments of your compliance have already been completed AWS infrastructure is compliant with many relevant industry regulations such as HIPAA FISMA FedRAMP PCI ISO 27001 SOC 1 and others https://awsamazoncom/hpc 7https://awsamazoncom/hpc 8Flexible Pricing and Business Models With AWS capacity planning worries become a thing of the past AWS offers ondemand pricing for shortterm projects contract pricing for longterm predictable needs and spot pricing for experimental work or research groups with tight budgets AWS customers enjoy the flexibility to choose from any combination of payasyougo options procuring only the capacity they need for the duration that it’s needed and AWS Trusted Advisor will alert you first to any costsaving actions you can take to minimize your bill This simplified flexible pricing structure and approach allows research institutions to break free from the time and budget constraining CapExintensive data center model With HPC on AWS organizations can flexibly tune and scale their infrastructure as workloads dictate instead of the other way around AWS Partners and Marketplace For organizations looking to build highly specific solutions AWS Marketplace is an online store for applications and services that build on top of AWS AWS partner solutions and AWS Marketplace lets organizations immediately take advantage of partners’ builtin optimizations and best practices leveraging what they’ve learned from building complex services on AWS A variety of open source HPC applications are also available on the AWS Marketplace HPC ON AWS DELIVERS ADVANTAGES FOR A RANGE OF HPC WORKLOADS AWS cloud provides a broad range of scalable flexible infrastructure solutions that organizations can select to match their workloads and tasks This gives HPC users the ability to choose the most appropriate mix of resources for their specific applications Let us take a brief look at the advantages that HPC on AWS delivers for these workload types Tightly Coupled HPC: A typical tightly coupled HPC application often spans across large numbers of CPU cores in order to accomplish demanding computational workloads To study the aerodynamics of a new commercial jet liner design engineers often run computational fluid dynamics simulations using thousands of CPU cores Global climate modeling applications are also executed at a similar scale AWS cloud provides scalable computing resources to execute such applications These applications can be deployed on the cloud at any scale Organizations can set a maximum number of cores per job dependent on the application requirements aligning it to criteria like model size frequency of jobs cost per computation and urgency of the job completion A significant benefit of running such workloads on AWS is the ability to scale out to experiment with more tunable parameters For example an engineer performing electromagnetic simulations can run larger numbers of parametric sweeps in his Design of Experiment (DoE) study using very large numbers of Amazon EC2 OnDemand instances and using AWS Auto Scaling to launch independent and parallel simulation jobs Such DoE jobs would 
often not be possible because of the hardware limits of onpremises infrastructure A further benefit for such an engineer is to use Amazon Simple Storage Service (S3) NICE DCV and other AWS solutions like AI/ML services to aggregate analyze and visualize the results as part of a workflow pipeline any element of which can be spun up (or down) independently to meet needs Amazon EC2 features that help with applications in this category also include EC2 placement groups and enhanced networking for reduced nodetonode latencies and consistent network performance Loosely Coupled Grid Computing: The cloud provides support for a variety of loosely coupled grid computing applications that are designed for faulttolerance enabling individual nodes to be added or removed during the course of job execution This category of applications includes Monte Carlo simulations for financial risk analysis material science study for proteomics and more A typical job distributes independent computational workloads across large numbers of CPU cores or nodes in a grid without high demand for high performance nodetonode interconnect or on highperformance storage The cloud lets organizations deliver the faulttolerance https://awsamazoncom/hpc 9these applications require and choose the instance types they require for specific compute tasks that they plan to execute Such applications are ideally suited to Amazon EC2 Spot instances which are EC2 instances that opportunistically take advantage of Amazon EC2’s spare computing capacity Coupled with Amazon EC2 Auto Scaling and jobs can be scaled up when excess spare capacity makes Spot instances cheaper than normal AWS Batch brings all these capabilities together in a single batchoriented service that is easy to use containerfocused for maximum portability and integrates with a range of commercial and open source workflow engines to make job orchestration easy High Volume Data Analytics and Interpretation: When grid and cluster HPC workloads handle large amounts of data their applications require fast reliable access to many types of data storage AWS services and features that help HPC users optimize for data intensive computing include Amazon S3 Amazon Elastic Block Store (EBS) and Amazon EC2 instance types that are optimized for high I/O performance (including those configured with solidstate drive (SSD) storage) Solutions also exist for creating high performance virtual network attached storage (NAS) and network file systems (NFS) in the cloud allowing applications running in Amazon EC2 to access high performance scalable cloudbased shared storage resources Example applications in this category include genomics highresolution image processing and seismic data processing Visualization: Using the cloud for collaboration and visualization makes it much easier for members in global organizations to share their digital data instantly from any part of the world For example it lets subcontractors or remote design teams view and interact with a simulation model in near real time from any location They can securely collaborate on data from anywhere without the need to duplicate and share it AWS services that enable these types of workloads include graphics optimized instances remote visualization services like NICE DCV and managed services like Amazon Workspaces and Amazon AppStream 20Accelerated Computing: There are many HPC workloads that can benefit from offloading computationintensive tasks to specialized hardware coprocessors such as GPUs or FPGAs Many tightlycoupled and 
visualization workloads are apt for accelerated computing AWS HPC solutions offer the flexibility to choose from many available CPU GPU or FPGAbased instances to deploy optimized infrastructure to meet the needs of specific applications Machine Learning and Artificial Intelligence: Machine learning requires a broad set of computing resource options ranging from GPUs for computeintensive deep learning FPGAs for specialized hardware acceleration to highmemory instances for inference study With HPC on AWS organizations can select instance types and services to fit their machine learning needs They can choose from a variety of CPU GPU FPGA memory storage and networking options and tailor instances to their specific requirements whether they are training models or running inference on trained models AWS uses the latest Intel® Xeon®Scalable CPUs which are optimized for machine learning and AI workloads at scale The Intel® Xeon®Scalable processors incorporated in AWS EC2 C5 instances along with optimized deep learning functions in the Intel MKLDNN library provide sufficient compute for deep learning training workloads (in addition to inference classical machine learning and other AI algorithms) In addition CPU and GPU optimized frameworks such as TensorFlow MxNet and PyTorch are available in Amazon Machine Image (AMI) format for customers to deploy their AI workloads on optimized software and hardware stacks Recent advances in distributed algorithms have also enabled the use of hundreds of servers to reduce the time to train from weeks to minutes Data scientists can get excellent deep learning training performance using Amazon EC2 and further reduce the timetotrain by using multiple CPU nodes scaling near linearly to hundreds of nodeshttps://awsamazoncom/hpc 10Life Sciences and Healthcare Running HPC workloads on AWS lets healthcare and life sciences professionals easily and securely scale genomic analysis and precision medicine applications For AWS users the scalability is builtin bolstered by an ecosystem of partners for tools and datasets designed for sensitive data and workloads They can efficiently dynamically store and compute their data collaborate with peers and integrate findings into clinical practice—while conforming with security and compliance requirements For example BristolMyers Squibb (BMS) a global biopharmaceutical company used AWS to build a secure selfprovisioning portal for hosting research The solution lets scientists run clinical trial simulations ondemand and enables BMS to set up rules that keep compute costs low Computeintensive clinical trial simulations that previously took 60 hours are finished in only 12 hours on the AWS Cloud Running simulations 98% faster has led to more efficient less costly clinical trials—and better conditions for patients DRIVING INNOVATION ACROSS INDUSTRIES Every industry tackles a different set of challenges AWS HPC solutions available with the power of the latest Intel technologies help companies of all sizes in nearly every industry achieve their HPC results with flexible configuration options that simplify operations save money and get results to market faster These workloads span the traditional HPC applications like genomics life sciences research financial risk analysis computeraided design and seismic imaging to the emerging applications like machine learning deep learning and autonomous vehicles “The time and money savings are obvious but probably what is most important factor is we are using fewer subjects in these trials we are 
optimizing dosage levels, we have higher drug tolerance and safety, and at the end of the day, for these kids, it's fewer blood samples." – Sr. Solutions Specialist, Bristol-Myers Squibb

Financial Services
Insurers and capital markets have long been utilizing grid computing to power actuarial calculations, determine capital requirements, model risk scenarios, price products, and handle other key tasks. Taking these compute-intensive workloads out of the data center and moving them to AWS helps them boost speed, scale better, and save money. For example, MAPFRE, the largest insurance company in Spain, needed fast, flexible environments in which to develop sales management and insurance policy applications. The firm was looking for a cost-effective technology platform that could deliver rapid analysis and enable quick deployment of development environments at remote installation sites. Its on-premises infrastructure simply could not support these needs. The company turned to AWS for high performance computing, risk analysis of customer data, and to create test and development environments for its commercial application.

"The on-premises hardware investment for three years cost approximately €1.5 million, whereas the AWS infrastructure cost the company €180,000 for the same period, a savings of 88 percent." – MAPFRE

KEEPING PACE WITH CHANGING FINANCIAL REGULATIONS
AWS customers in financial services are preparing for new Fundamental Review of the Trading Book (FRTB) regulations that will come into effect between 2019 and 2021. As part of the proposed regulations, these financial services institutions will need to perform compute-intensive "value at risk" calculations in the four hours after trading ends in New York and begins in Tokyo. The periodic nature of the calculation, along with the amount of processing power and storage needed to run it within four hours, made it a great fit for an environment where a vast amount of cost-effective compute power is available on an on-demand basis. To help its financial services customers meet these new regulations, AWS worked with TIBCO (an on-premises, market-leading infrastructure platform for grid and elastic computing) to run a proof-of-concept grid in the AWS Cloud. The grid grew to 61,299 Spot instances with 1.3 million vCPUs and cost approximately $30,000 an hour to run. This proof of concept is a strong example of the potential for AWS to deliver a vast amount of cost-effective compute power on an on-demand basis.

Design and Engineering
Using simulations on AWS HPC infrastructure lets manufacturers and designers reduce costs by replacing expensive development of physical models with virtual ones during product development. The result?
Improved product quality shorter time to market and reduced product development costs TLG Aerospace in Seattle Washington put these capabilities to work to perform aerodynamic simulations on aircraft and predict the pressure and temperature surrounding airframes Its existing cloud provider was expensive and could not scale to handle more performanceintensive applications TLG turned to Amazon EC2 Spot instances which provide a way to use unused EC2 computing capacity at a discounted price The solution dramatically decreased simulation costs and can scale easily to take on new jobs as needed Energy and Geo Sciences Reducing runtimes for computeintensive applications like seismic analysis and reservoir simulation is just one of the many ways the energy and geosciences industry has been utilizing HPC applications in the cloud By moving HPC applications to the cloud organizations reduce job submission time track runtime and efficiently manage the large datasets associated with daily workloads For example using AWS ondemand computing resources Zenotech a simulation service provider can power simulations that help energy companies support advanced reservoir models“We saw a 75% reduction in the cost per CFD simulation as soon as we started using Amazon EC2 Spot instances We are able to pass those savings along to our customers–and be more competitive” TLG Aerospace Using the resources available within a typical small company it would take several years to complete a sophisticated reservoir simulation Zenotech completed it at a computing cost for AWS resources of only $750 over a 12day periodhttps://awsamazoncom/hpc 13Media and Entertainment The movie and entertainment industries are shifting content production and post production to cloudbased HPC to take advantage of highly scalable elastic and secure cloud services to accelerate content production and reduce capital infrastructure investment Content production and postproduction companies are leveraging the cloud to accelerate and streamline production editing and rendering workloads with highly scalable cloud computing and storage One design and visual effects (VFX) company Fin Design + Effects needed the ability to access vast amounts of compute capacity when big deadlines came around Its onpremises render servers had a finite capacity and were difficult and expensive to scale Fin started by using AWS Direct Connect to scale its rendering capabilities by establishing a dedicated Gigabit network connection from the Fin data center to AWS Fin is also taking advantage of Amazon EC2 Spot instances Fin now has the agility to add compute resources on the fly to meet lastminute project demands AI/ML and Autonomous Vehicles The AI revolution which started with the rapid increase in accuracy brought by deep learning methods has the potential to revolutionize a variety of industries Autonomous driving is a particularly popular use case for AI/ML Developing and deploying autonomous vehicles requires the ability to collect store and manage massive amounts of data high performance computing capacity and advanced deep learning frameworks along with the capability to do realtime processing of local rules and events in the vehicle AWS’s virtually unlimited storage and compute capacity and support for popular deep learning frameworks help accelerate algorithm training and testing and drive faster time to market“We are reducing our operational costs by 50 percent by using Amazon EC2 Spot instances” Fin Design Developing and deploying autonomous vehicles 
requires the ability to collect, store, and manage massive amounts of data; high performance computing capacity; and advanced deep learning frameworks.

SUMMARY AND RECOMMENDATION
Technology continues to change rapidly, and it is clear that HPC has a critical role to play in enabling organizations to innovate faster and adopt other leading-edge technologies like AI/ML and IoT. AWS puts the advanced capabilities of High Performance Computing in reach for more people and organizations while simplifying management, deployment, and scaling. Accessible, flexible, and cost effective, it frees the creativity of engineers, analysts, and researchers from the limitations of on-premises infrastructures. Unlike traditional on-premises HPC systems, AWS offers virtually unlimited capacity to scale out HPC infrastructure. It also provides the flexibility for organizations to adapt their HPC infrastructure to changing business priorities. With flexible deployment and pricing models, it lets organizations of all sizes and industries take advantage of the most advanced computing capabilities available. HPC on AWS lets you take a fresh approach to innovation to solve the world's most complex problems. Learn more about running your HPC workloads on AWS at http://aws.amazon.com/hpc

i "HPC Market Update, ISC18," Intersect360 Research, 2018
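As a closing illustration of the loosely coupled, batch-oriented scale-out pattern described in the workload sections above, the following is a minimal sketch that submits a parametric sweep as an AWS Batch array job using the AWS SDK for Python (boto3). The Region, queue name, job definition name, and S3 prefix are hypothetical placeholders; the sketch assumes a Batch compute environment, job queue, and container-based job definition already exist.

```python
# Minimal sketch: submit a parametric sweep as an AWS Batch array job.
# "hpc-sweep-queue", "cfd-sweep", and the S3 prefix are placeholders.
import boto3

batch = boto3.client("batch", region_name="us-east-1")  # example Region

response = batch.submit_job(
    jobName="doe-parametric-sweep",
    jobQueue="hpc-sweep-queue",
    jobDefinition="cfd-sweep",
    arrayProperties={"size": 500},  # one child job per design point
    containerOverrides={
        "environment": [
            # Each child job reads AWS_BATCH_JOB_ARRAY_INDEX at runtime
            # and maps it to its own set of sweep parameters.
            {"name": "SWEEP_CONFIG_PREFIX", "value": "s3://example-bucket/sweeps/"}
        ]
    },
)
print("Submitted array job:", response["jobId"])
```

Each child job receives its index in the AWS_BATCH_JOB_ARRAY_INDEX environment variable and maps it to one design point, so the same pattern scales from a handful of runs to very large sweeps, typically on Spot-backed compute environments to keep costs down.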
An Overview of AWS Cloud Data Migration Services
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/overviewawscloud datamigrationservices/overviewawsclouddatamigrationserviceshtmlAn Overview of AWS Cloud Data Migration Services Published May 1 2016 Updated June 13 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Cloud Data Migration Challenges 2 Security and Data Sensitivity 2 Cloud Data Migration Tools 5 Time and Performance 6 Choosing a Mig ration Method 7 Selfmanaged Migration Methods 8 AWS Managed Migration Tools 9 Cloud Data Migration Use Cases 18 Use Case 1: One Time Massive Data Migration 18 Use Case 2: Continuous On premises Data Migration 21 Use Case 3: Continuous Streaming Data In gestion 25 Conclusion 26 Contributors 26 Further Reading 26 Document revisions 27 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract One of the most challenging steps required to deploy an application infrastructure in the cloud is moving data into and out of the cloud Amazon Web Services (AWS) provides multiple services for moving data and each solution offers various levels of speed security cost and performance This white paper outlines the different AWS services that can help seamlessly transfer data to and from the AWS Cloud This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Mi gration Services 1 Introduction As you plan your data migration strategy you will need to determine the best approach to use based on the specifics of your environment There are many different ways to lift andshift data to the cloud such as onetime large batches constant device streams intermittent updates or even hybrid data storage combining the AWS Cloud and on prem ises data stores These methods can be used individually or together to help streamline the realities of cloud data migration projects This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 2 Cloud Data Migration Challenges When planning a data migration you need to determine how much data is being moved and the bandwidth available for the transfer of data This will determine how long the transfer will take AWS offers several 
methods to transfer data into your account including the AWS Snow Family of storage devi ces AWS Direct Connect and AWS SitetoSite VPN over your existing internet connectio n The network bandwidth that is consume d for data migration will not be available for your organization’s typical application traffic In addition your organization might be concerned with moving sensitive business information from your internal network to a secure AWS environment Determining the security level for your organization helps you select the appropriate AWS services for your data migration Security and Data Sensitivity When customers migrate data ensuring the security of data both in transit and at rest is critical AWS takes security very seriously and build s security features into all data migration services Every service uses AWS Identity and Access Management (IAM) to control programmatic and AWS Console access to resources The following table lists these featur es Table 1 – AWS Services Security Features AWS Service Security Feature s AWS Direct Connect • Provides a dedicated physical connection with no data transfer over the Internet • Integrates with AWS CloudTrail to capture API calls made by or on behalf of a customer account AWS Snow Family • Integrates with the AWS Key Manage ment Service (AWS KMS) to encrypt data atrest that is stored on AWS Snow cone Snowball or Snowmobile • Uses an industry standard Trusted Platform Module (TPM) that has a dedicated processor designed to detect any unauthorized modifications to the hardware firmware or software to physically secure the AWS Snowcone or Snowball device This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 3 AWS Service Security Feature s AWS Transfer Family • SFTP use s SSH while FTPS use s TLS to transfer data through a secure and encrypted channel • AWS Transfer Family is PCI DSS and GDPR compliant and HIPAA eligible The service is also SOC 1 2 and 3 compliant Learn more about services in scope grouped by compliance programs • The service supports three modes of authentication: Service Managed where you store user identities within the service Microsoft Active Directory and Custom (BYO) which enables you to i ntegrate an identity provider of your choice Service Managed authentication is supported for server endpoints that are enabled for SFTP only • You can use Amazon CloudWatch to monitor your end users’ activ ity and use AWS CloudTrail to access a record of all S3 API operations invoked by your server to service your end users’ data requests This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 4 AWS Service Security Feature s AWS DataSync • All data transferred between the source and destination is encrypted via Transport Layer Security (TLS) w hich replaced Secure Sockets Layer (SSL) Data is never persisted in AWS DataSync itself The service supports using default encryption for S3 buckets Amazon EFS file system encryption of data at rest and Amazon FSx For Window s File Server encryption at rest and in transit • When copying data to or from your premises there is no need to setup a VPN/tunnel or allow inbound connections Your AWS DataSync agent can be configured to route through a firewall using standard network p orts • Your AWS 
DataSync agent connects to DataSync service endpoints within your chosen AWS Region. You can choose to have the agent connect to public internet-facing endpoints, Federal Information Processing Standards (FIPS) validated endpoints, or endpoints within one of your VPCs.

AWS Storage Gateway
• Encrypts all data in transit to and from AWS by using SSL/TLS.
• All data in AWS Storage Gateway is encrypted at rest using AES-256, while data transfers are encrypted with AES-128 GCM or AES-128 CCM.
• Authentication between your gateway and iSCSI initiators can be secured by using Challenge Handshake Authentication Protocol (CHAP).

Amazon S3 Transfer Acceleration
• Access to Amazon S3 can be restricted by granting other AWS accounts and users permission to perform the resource operations by writing an access policy.
• Encrypt data at rest by performing server-side encryption using Amazon S3 Managed Keys (SSE-S3), AWS Key Management Service (KMS) Managed Keys (SSE-KMS), or Customer-Provided Keys (SSE-C), or by performing client-side encryption using an AWS KMS–Managed Customer Master Key (CMK) or a Client-Side Master Key.
• Data in transit can be secured by using SSL/TLS or client-side encryption.
• Enable Multi-Factor Authentication (MFA) Delete for an Amazon S3 bucket.

AWS Kinesis Data Firehose
• Data in transit can be secured by using SSL/TLS.
• If you send data to your delivery stream using PutRecord or PutRecordBatch, or if you send the data using AWS IoT, Amazon CloudWatch Logs, or CloudWatch Events, you can turn on server-side encryption by using the StartDeliveryStreamEncryption operation.
• You can also enable SSE when you create the delivery stream.

Cloud Data Migration Tools
This section discusses managed and self-managed migration tools, with a brief description of how each solution works. You can select AWS-managed or self-managed migration methods and make your choice based on your specific use case.

Time and Performance
When you migrate data from your on-premises storage to AWS storage services, you want to take the least amount of time to move data over your internet connection with minimal disruption to the existing systems. To calculate the number of days required to migrate a given amount of data, you can use the following formula:

Number of Days = (TOTAL_BYTES * 8 bits per Byte) / (CIRCUIT bits per second * NETWORK_UTILIZATION percent * 3600 seconds per hour * AVAILABLE_HOURS per day)

For example, if you have a Gigabit Ethernet connection (1 Gbps) to the Internet and 100 TB of data to move to AWS, theoretically the minimum time it would take over the network connection at 80 percent utilization is approximately 28 days:

(100,000,000,000,000 Bytes * 8 bits per Byte) / (1,000,000,000 bps * 80 percent * 3600 seconds per hour * 10 hours per day) = 27.77 days
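As a quick sanity check, the estimate above can be scripted. The following minimal sketch simply encodes the formula; the figures are the ones used in the example (100 TB over a 1 Gbps circuit at 80 percent utilization, with 10 available hours per day).

```python
# Transfer-time estimate from the formula above (decimal TB and Gbps).
def migration_days(terabytes, circuit_gbps, utilization, hours_per_day):
    total_bits = terabytes * 1_000_000_000_000 * 8
    bits_per_day = circuit_gbps * 1_000_000_000 * utilization * 3600 * hours_per_day
    return total_bits / bits_per_day

print(f"{migration_days(100, 1, 0.80, 10):.2f} days")  # about 27.78 days, roughly 28
```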
If this amount of time is not practical for you, there are many ways to reduce migration time for large amounts of data. You can use AWS-managed migration tools that automate data transfers and optimize your internet connection to the AWS Cloud. Alternatively, you may develop or purchase your own tools and create your own transfer processes that utilize the native HTTP interfaces to Amazon Simple Storage Service (Amazon S3). For moving small amounts of data from your on-site location to the AWS Cloud, you may use ad hoc methods that get the job done quickly with minimal use of the automation methods discussed in the AWS migration tools section. For the best results, we suggest the following:

Table 2 – Recommended migration methods
Connection & Data Scale | Method | Duration
Less than 10 Mbps & Less than 100 GB | Self-managed | ~3 days
Less than 10 Mbps & Between 100 GB – 1 TB | AWS Managed | ~30 days
Less than 10 Mbps & Greater than 1 TB | AWS Snow Family | ~weeks
Less than 1 Gbps & Between 100 GB – 1 TB | Self-managed | ~days
Less than 1 Gbps & Greater than 1 TB | AWS Managed / Snow Family | ~weeks

Choosing a Migration Method
There are several factors to consider when choosing the appropriate migration method and tool. As discussed in the previous section, the time allocated to perform data transfers, the volume of data, and network speeds influence the decision between different data migration methods. You should also consider, for each data store, server, or application stack, the number of repetitive steps required to transfer data from source to target. Then evaluate the variance of these steps as they are repeated. In other words, are there unique requirements per data store that require non-trivial changes to the data migration procedures? Then evaluate the level of existing investments in custom tooling and automation in your organization. You will need to determine if it is more worthwhile to use existing self-managed tooling and automation or sunset them in favor of managed services and tools. You can use the following decision tree as a framework to choose a suitable migration method and tool:

Figure 1 – Migration Method Decision Tree

Self-managed Migration Methods
Small, one-time data transfers on limited-bandwidth connections may be accomplished using these very simple tools.

Amazon S3 AWS Command Line Interface
For migrating small amounts of data, you can use the Amazon S3 AWS Command Line Interface to write commands that move data into an Amazon S3 bucket. You can upload objects up to 5 GB in size in a single operation. If your object is greater than 5 GB, you can use multipart upload. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts. Once complete, you can access the object just as you would any other object in your bucket.
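For reference, here is a minimal sketch of that three-step multipart upload using the AWS SDK for Python (boto3); the bucket, key, and file names are hypothetical placeholders. In practice, higher-level helpers such as boto3's upload_file or the AWS CLI's aws s3 cp perform these steps automatically for large objects.

```python
# Minimal sketch of the three-step multipart upload described above.
import boto3

s3 = boto3.client("s3")
bucket, key, path = "example-migration-bucket", "backups/archive.tar", "archive.tar"
part_size = 100 * 1024 * 1024  # 100 MiB parts (minimum part size is 5 MiB)

# Step 1: initiate the upload.
upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []

# Step 2: upload the object in parts.
with open(path, "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(part_size)
        if not chunk:
            break
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload["UploadId"],
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

# Step 3: complete the upload; S3 assembles the object from the parts.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload["UploadId"],
    MultipartUpload={"Parts": parts},
)
```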
Amazon Glacier AWS Command Line Interface
For migrating small amounts of data, you can write commands using the Amazon Glacier AWS Command Line Interface to move data into Amazon Glacier. In a single operation, you can upload archives from 1 byte up to 4 GB in size. However, for archives greater than 100 MB in size, we recommend using multipart upload. Using the multipart upload API, you can upload large archives of up to about 40,000 GB (10,000 parts * 4 GB).

Storage Partner Solutions
Multiple Storage Partner solutions work seamlessly to access storage across on-premises and AWS Cloud environments. Partner hardware and software solutions can help customers do tasks such as backup, create primary file storage/cloud NAS, archive, perform disaster recovery, and transfer files.

AWS Managed Migration Tools
AWS has designed several sophisticated services to help with cloud data migration.

AWS Direct Connect
AWS Direct Connect lets you establish a dedicated network connection between your corporate network and an AWS Direct Connect location. Using this connection, you can create virtual interfaces directly to AWS services. This bypasses Internet service providers (ISPs) in your network path to your target AWS Region. By setting up private connectivity over AWS Direct Connect, you could reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than with Internet-based connections.

Using AWS Direct Connect, you can easily establish a dedicated network connection from your premises to AWS at speeds starting at 50 Mbps and up to 100 Gbps. You can use the connection to access Amazon Virtual Private Cloud (Amazon VPC) as well as AWS public services such as Amazon S3. AWS Direct Connect in itself is not a data transfer service. Rather, AWS Direct Connect provides a high-bandwidth connection that can be used to transfer data between your corporate network and AWS with more consistent performance and without ever having the data routed over the Internet. Encryption methods, such as AWS Site-to-Site VPN, may be applied to secure the data transfers over AWS Direct Connect.

AWS APN Partners can help you set up a new connection between an AWS Direct Connect location and your corporate data center, office, or colocation facility. Additionally, many of our partners offer AWS Direct Connect Bundles that provide a set of advanced hybrid architectures that can reduce complexity and provide peak performance. You can extend your on-premises networking, security, storage, and compute technologies to the AWS Cloud using managed hybrid architecture, compliance infrastructure, managed security, and converged infrastructure. With 108 Direct Connect locations worldwide and more than 50 Direct Connect delivery partners, you can establish links between your on-premises network and AWS Direct Connect locations.

With AWS Direct Connect, you only pay for what you use, and there is no minimum fee associated with using the service. AWS Direct Connect has two pricing components: a port-hour rate (based on port speed) and data transfer out (per GB per month). Additionally, if you are using an APN partner to facilitate an AWS Direct Connect connection, contact the partner to discuss any fees they may charge. For information about pricing, see AWS Direct Connect pricing.

AWS Snow Family
The AWS Snow Family accelerates moving large amounts of data into and out of AWS using AWS-managed hardware and software. The Snow Family, comprised of AWS Snowcone, AWS Snowball, and AWS Snowmobile, are various physical devices, each with different form factors and
capacities They are purpose built for efficient data storage and transfer and have built in compute capabilities The AWS Snowcone device is a lightweight handheld storage device that accommodates field environments where access to power may be limited and WiFi is necessary to make the connection An AWS Snowball Edge device is rugged enough to withstand a 70 G shock and at 497 pounds (2254 kg) it is light enough for one person to carry It is entirely self contained with This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 11 110240 VAC power ships with country specific power cables as well as an E Ink display and control panel on the front Each AWS Snowball Edge appliance is weather resistant and serves as its own shipping con tainer With AWS Snowball you have the choice of two devices as of the date of this writing Snowball Edge Compute Optimized with more computing capabilities suited for higher performance workloads or Snowball Edge Storage Optimized with more storage which is suited for large scale data migrations and capacity oriented workloads Snowball Edge Compute Optimized provides powerful computing resources for use cases such as machine learning full motion video analysis analytics and local computing stacks These capabilities include 52 vCPUs 208 GiB of memory and an optional NVIDIA Tesla V100 GPU For storage the device provides 42 TB usable HDD capacity for S3 compatible object storage or EBS compatible block volumes as well as 768 TB of usable NVMe SSD capacity for EBS compatible block volumes Snowball Edge Compute Optimized devices run Amazon EC2 sbe c and sbe g instances which are equivalent to C5 M5a G3 and P3 instances Snowball Edge Storage Optimized devices are well suited for large scale data migrations and recurring transfer workflows as well as local computing with higher capacity needs Snowball Edge Storage Optimized provides 80 TB of HDD capacity for block volumes and Amazon S3 compatible object storage and 1 TB of SATA SSD for block volumes For computing resources the device provides 40 vCPUs and 80 GiB of memory to support Amazon EC2 sbe1 instances (equivalent to C5) AWS transfers your data directly onto Snowball Edge device using on premises high speed connections ships the device to AWS facilities and transfers data off of AWS Snowball Edge devices using Amazon’s high speed internal network The data transfer process bypass es the corporate Internet connection and mitigates the requirement for an AWS Direct Connect services For datasets of significant size AWS Snowball is often faster than transferring data via the Internet and more cost effective than upgrading your data center’s Internet connection AWS Snowball supports importing data into and exporting data from Amazon S3 buckets From there the data can be copied or moved to other AWS services such as Amazon Elastic Block Store ( Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon FSx File Gateway and Amazon Glacier AWS Snowball is ideal for transferring large amounts of data up to many petabytes in and out of the AWS cloud securely This approach is effective especially in cases where you don’t want to make expensive upgrades to your network infrastructure ; if you frequently experience large backlogs of data ; if you are in a physically isolated This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides 
page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 12 environment ; or if you are in an area where high speed Internet connections are not available or cost prohibitive In general if loading your data over the Internet would take a week or more you should consider using AWS Snow Family Common use cases include cloud migration disaster recovery d ata center decommission and content distribution When you d ecommission a data center many steps are involved to make sure valuable data is not lost and the AWS Snow Family can help ensure data is securely and cost effectively transferred to AWS In a content distribution scenario you might u se Snowball Edge devices if you regularly receive or need to share large amounts of data with clients customers or business partners Snowball appliances can be sent directly from AWS to client or customer locations If you need to move massive amounts of data AWS Snowmobile is an Ex abyte scale data transfer service Each Snowmobile is a 45 foot long ruggedized shipping container hauled by a trailer truck with up to 100 PB data storage capacity Snowmobile also handles all of the logistics AWS personnel transport and configure the Sn owmobile They will also work with your team to connect a temporary high speed network switch to your local network The local high speed network facilitates rapid transfer of data from within your datacenter to the Snowmobile Once you’ve loaded all your data the Snowmobile drives back to AWS where the data is imported into Amazon S3 Moving data at this massive scale requires additional preparation precautions and security Snowmobile uses GPS tracking round the clock video surveillance and dedicated security personnel Snowmobile offers an optional security escort vehicle while your data is in transit to AWS Management of and access to the shipping container and data stored within is limited to AWS personnel using hardware secur e access control meth ods AWS Snow Family might not be the ideal solution if your data can be transferred over the Internet in less than one week or if your applications cannot tolerate the offline transfer time With AWS Snow Family as with most other AWS services you pay only for what you use Snowball has three pricing components: a service fee (per job) extra day charges as required and data transfer out The first 5 days of Snowcone usage and the first 10 days of onsite Snowball includes 10 days of device use For the destination storage the standard Amazon S3 storage pricing applies For pricing information see AWS Snowball pricing Snowmobile pricing is based on the amount of data stored on the truck p er month For more information about AWS Regions and availability see AWS Regional Services This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 13 AWS Storage Gateway AWS Storage Gateway makes backing up to the cloud extremely simple It connects an onpremises software appliance with cloud based storage to provide seamless and secure integration between an organization’s on premises IT environment and the AWS storage infrastructure The service enables you to securely store data in the AWS Cloud for scalable and cost effective storage AWS Storage Gateway supports three types of storage interfaces used in on premises environment including file volume and tap e It uses industry standard network storage 
protocols such as Network File System (NFS) and Server Message Block (SMB) that work with your existing applications enabling data storage using S3 File Gateway function to store data in Amazon S3 It provides lowlatency performance by maintaining an on premises cache of frequently accessed data while securely storing all of your data encrypted in Amazon S3 Once data is stored in Amazon S3 it can be archived in Amazon S3 Glacier For disaster re covery scenarios AWS Storage Gateway together with Amazon Elastic Compute Cloud (Amazon EC2) can serve as a cloud hosted solution that mirrors your entire production environment You can download the AWS Storage Gateway software appliance as a virtual machine (VM) image that you install on a host in your data center or as an EC2 instance After you’ve installed your gateway and associated it with your AWS account through the AWS activation process you can use the AWS Management Console to create gatewa ycached volumes gateway stored volumes or a gateway –virtual tape library (VTL) each of which can be mounted as an iSCSI device by your on premises applications Volume Gateway supports iSCSI connections that enable storing of volume data in S3 With caching enabled you can use Amazon S3 to hold your complete set of data while caching some portion of it locally for onpremises frequently accessed data Gateway cached volumes minimize the need to scale your on premises storage infrastructure while sti ll providing your applications with low latency access to frequently accessed data You can create storage volumes up to 32 T iB in size and mount them as iSCSI devices from your on premises application servers Each gateway configured for gateway cached vo lumes can support up to 32 volumes and total volume storage per gateway of 1024 Ti B Data written to these volumes is stored in Amazon S3 with only a cache of recently written and recently read data stored locally on your on premises storage hardware Gateway stored volumes store your locally sourced data in cache while asynchronously backing up data to AWS These volumes provide your on premises applications with lowlatency access to their entire datasets while providing durable off site backups This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 14 You can create storage volumes up to 1 6 TiB in size and mount them as iSCSI devices from your on premises application servers Each gateway configured for gateway stored volumes can support up to 32 volumes with a total volume storage of 512 TiB Data written to your gateway stored volumes is stored on your on premises storage hardware and asynchronously backed up to Amazon S3 in the form of Amazon EBS snapshots A gateway VTL allows you to perform offline data archiving by presenting your existing backup a pplication with an iSCSI based VTL consisting of a virtual media changer and virtual tape drives You can create virtual tapes in your VTL by using the AWS Management Console and you can size each virtual tape from 100 G iB to 5 T iB A VTL can hold up to 1 500 virtual tapes with a maximum aggregate capacity of 1 PiB After the virtual tapes are created your backup application can discover them using its standard media inventory procedure Once created tapes are available for immediate access and are stor ed in Amazon S3 Virtual tapes you need to access frequently should be stored in a VTL Data that you don't 
need to retrieve frequently can be archived to your virtual tape shelf (VTS) which is stored in Amazon Glacier further reducing your storage costs Organizations are using AWS Storage Gateway to support a number of use cases These use cases include corporate file sharing enabling existing on premises backup applications to store primary backups on Amazon S3 disaster recovery and mir roring data to cloud based compute resources and then later archiving the data to Amazon Glacier With AWS Storage Gateway you pay only for what you use AWS Storage Gateway has the following pricing components: gateway usage (per gateway appliance per month) and data transfer out (per GB per month) Based on type of gateway appliance you use there are snapshot storage usage (per GB per month) and volume storage usage (per GB per month) for gateway cached volumes/gateway stored v olumes and virtual tape shelf storage (per GB per month) virtual tape library storage (per GB per month) and retrieval from virtual tape shelf (per GB) for Gateway Virtual Tape Library For information about pricing see AWS Storage Gateway pricing Amazon S3 Transfer Acceleration (S3 TA) Amazon S3 Transfer Acceleration (S3 TA) enables fast easy and secure transfers of files over long distances between your client and your Amazon S3 bucket Transfer Acceleration lever ages Amazon CloudFront globally distributed AWS edge locations As This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 15 data arrives at an AWS edge location data is routed to your Amazon S3 bucket over an optimized network path Transfer Acceleration helps you fully utilize your bandwidth minimize the effe ct of distance on throughput and ensure consistently fast data transfer to Amazon S3 regardless of your client’s location Acceleration primarily depends on your available bandwidth the distance between the source and destination and packet loss rates o n the network path Generally you will see more acceleration when the source is farther from the destination when there is more available bandwidth and/or when the object size is bigger You can use the online speed comparison tool to get the preview of the performance benefit from uploading data from your location to Amazon S3 buckets in different AWS Regions using Transfer Acceleration Organ izations are using Transfer Acceleration on a bucket for a variety of reasons For example they have customers that upload to a centralized bucket from all over the world transferring gigabytes to terabytes of data on a regular basis across continents or having underutilize d the available bandwidth over the Internet when uploading to Amazon S3 The best part about using Transfer Acceleration on a bucket is that the feature can be enabled by a single click of a button in the Amazon S3 console; this makes the accelerate endpoint available to use in place of the regular Amazon S3 endpoint With Tra nsfer Acceleration you pay only for what you use and for transferring data over the accelerated endpoint Transfer Acceleration has the following pricing components: data transfer in (per GB) data transfer out (per GB) and data transfer between Amazon S3 and another AWS Region (per GB) Transfer acceleration pricing is in addition to data transfer (per GB per month) pricing for Amazon S3 For information about pricing see Amazon S3 pricing AWS Kinesis Data Firehose Amazon Kinesis Data Firehose is 
the easiest way to load streaming data into AWS The service can capture and automatically load st reaming data into Amazon S3 Amazon Redshift Amazon Elasticsearch Service or Splunk Amazon Kinesis Data Firehose is a fully managed service making it easier to capture and load massive volumes of streaming data from hundreds of thousands of sources The service can automatically scale to match the throughput of your data and requires n o ongoing administration Additionally Amazon Kinesis Data Firehose c an also batch compress transform and encrypt data before loading it This process minimiz es the amount of storage used at the destination and increas es security This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 16 You can use Data Firehose by creating a delivery stream and sending the data to it The streaming data originators are called data producers A producer can be as simple as a PutRecord() or PutRecordBatch() API call or you can build your producers using Kinesis Agent You can send a record (before base64 encoding) as large as 1000 KiB Additionally Firehose buffers incoming streaming data to a certain size called a Buffer Size (1 MiB to 12 8 MiB) or for a certain period of time called a Buffer Interval (60 to 900 seconds) before delivering to destinations With Amazon Kinesis Data Firehose you pay only for the volume of data you transmit through the service Amazon Kinesis Data Firehose has a single pricing component: data ingested (per G iB) which is calculated as the number of data records you send to the service times the size of each record rounded up to the nearest 5 KiB There may be charges associated with PUT requests a nd storage on Amazon S3 and Amazon Redshift and Amazon Elasticsearch instance hours based on the destination you select for loading data For information about pricing see Amazon Kinesis Da ta Firehose pricing AWS Transfer Family If you are looking to modernize your file transfer workflows for business processes that are heavily dependent on FTP SFTP and FTPS ; the AWS Transfer Family service provides fully managed file transfers in and out of Amazon S3 buckets and Amazon EFS shares The AWS Transfer Family uses a highly available multi AZ architecture that automatically scales to add capacity based on your file transfer demand This means no more FTP SFTP and FTPS servers to manage The AWS Transfer Family allows the authentication of users through multiple methods including self managed AWS Directory Service on premises Active Directory systems through AWS Managed Microsoft AD connectors or custom identity providers Custom identity pr oviders may be configured through the Amazon API Gateway enabling custom configurations DNS entries used by existing users partners and applications are maintained using Route 53 for minimal disruption and seamless migration With your data residing in Amazon S3 or Amazon EFS you can use other AWS services for analytics and data processing workflows There are many use cases that require a standards based file transfer protocol like FTP SFTP or FTPS AWS Transfer Family is a good fit for secure file sharing between an organization and third parties Examples of data that are shared between organizations are l arge files such as audio/video media files technical documents research data and EDI data such as purchase orders and invoices Another u se case is providing a central location 
where users can download and globally access your data This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Mi gration Services 17 securely A third use case is to facilitate data ingestion for a data lake Organizations and third parties can FTP SFTP or FTPS research analytics or busine ss data into an Amazon S3 bucket which can then be further processed and analyzed With the AWS Transfer Family you only pay for the protocols you have enabled for access to your endpoint and the amount of data transferred over each of the protocols There are no upfront costs and no resources to manage yourself You select the protocols identity provider and endpoint configuration to enable transfers over the chosen protocols You are billed on an hourly basis for each of the protocols enabled to acce ss your endpoint until the time you delete it You are also billed based on the amount of data (Gigabytes) uploaded and downloaded over each of the protocols For more details on pricing per region see AWS Transfer Family pricing Third Party Connectors Many of the most popular third party backup software packages such as CommVault Simpana and Veritas NetBackup include Amazon S3 connectors This allows the backup software to point direc tly to the cloud as a target while still keeping the backup job catalog complete Existing backup jobs can simply be rerouted to an Amazon S3 target bucket and the incremental daily changes are passed over the Internet Lifecycle management policies can m ove data from Amazon S3 into lower cost storage tiers for archival status or deletion Eventually and invisibly local tape and disk copies can be aged out of circulation and tape and tape automation costs can be entirely removed These connectors can be used alone or they can be used with a gateway provided by AWS Storage Gateway to back up to the cloud without affecting or re architecting existing on premises processes Backup administrators will appreciate the integration into their d aily console activities and cloud architects will appreciate the behind the scenes job migration into Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 18 Cloud Data Migration Use Cases Use Case 1: One Time Massive Data Migration Figure 2 Onetime massive data migra tion In use case 1 a customer goes through the process of decommissioning a data center and moving the entire workload to the cloud First all the current corporate data needs to be migrated To complete this migration AWS Snowball appliances are used to move the data from the customer’s existing data center to an Amazon S3 bucket in the AWS Cloud 1 Customer creates a new data transfer job in the AWS Snowball Management Console by providing the following information a Choose Import into Amazon S3 to start c reating the import job b Enter the shipping address of the corporate data center and shipping speed (one or two day) c Enter job details such as name of the job destination AWS Region destination Amazon S3 bucket to receive the imported data and Snowba ll Edge device type This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 19 d 
Enter security settings, indicating the IAM role Snowball assumes to import the data and the AWS KMS master key used to encrypt the data within Snowball.
e. Set Amazon Simple Notification Service (SNS) notification options and provide a list of comma-separated email addresses to receive email notifications for this job. Choose which job status values trigger notifications.
f. Download AWS OpsHub for Snow Family to manage your devices and their local AWS services. With AWS OpsHub, you can unlock and configure single or clustered devices, transfer files, and launch and manage instances running on Snow Family devices.
2. After the job is created, AWS ships the Snowball appliances to the customer data center. In this example, the customer is importing 200 TB of data into Amazon S3, so they will need to create three import jobs, each using an 80 TB Snowball Edge Storage Optimized device.
3. After receiving the Snowball appliance, the customer performs the following tasks:
a. The customer connects the powered-off appliance to their internal network and uses the supplied power cables to connect to a power outlet.
b. After the Snowball is ready, the customer uses the E Ink display to choose the network settings and assign an IP address to the appliance.
4. The customer transfers the data to the Snowball appliance using the following steps:
a. Download the credentials, consisting of a manifest file and an unlock code, for a specific Snowball job from the AWS Snow Family Management Console.
b. Install the Snowball client on an on-premises machine to manage the flow of data from the on-premises data source to the Snowball.
c. Access the Snowball client using the terminal or command prompt on the workstation and type the following command:
snowballEdge unlock-device --endpoint https://[Snowball IP address] --manifest-file [/path/to/manifest/file] --unlock-code [29-character unlock code]
d. Begin transferring data onto the Snowball using the following tools:
i. Version 1.16.14 or earlier of the AWS CLI, using the s3 cp or s3 sync commands. Detailed installation and command syntax are found here.
ii. AWS OpsHub, which was installed in step 1f. Detailed commands and instructions on managing S3 storage can be found here.
5. After the data transfer is complete, disconnect the Snowball from your network and seal the Snowball. After being properly sealed, the return shipping label appears on the E Ink display. Arrange UPS pickup of the appliance for shipment back to AWS.
6. UPS automatically reports back a tracking number for the job to the AWS Snowball Management Console. The customer can access that tracking number, and a link to the UPS tracking website, by viewing the job's status details in the console.
7. After the appliance is received at the AWS Region, the job status changes from "In transit to AWS" to "At AWS". On average, it takes a day for the data import into Amazon S3 to begin. When the import starts, the status of the job changes to "Importing". From this point on, it takes an average of two business days for your import to reach "Completed" status. You can track status changes through the AWS Snowball Management Console or by Amazon SNS notifications.
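The same import job can also be created programmatically instead of through the console. The sketch below uses the AWS SDK for Python (boto3) and mirrors the 80 TB Snowball Edge Storage Optimized example above; the Region, every ARN, the address ID, and the bucket name are hypothetical placeholders that would come from your own account.

```python
# Minimal sketch of step 1 (creating a Snowball import job) via the SDK.
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")  # example Region

job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE_S",                 # Snowball Edge Storage Optimized
    SnowballCapacityPreference="T80",      # 80 TB device, as in the example
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-import-bucket"}]},
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    KmsKeyARN="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    AddressId="ADID-example-address-id",
    ShippingOption="SECOND_DAY",
    Notification={"SnsTopicARN": "arn:aws:sns:us-east-1:111122223333:snowball-jobs",
                  "NotifyAll": True},
    Description="Data center decommission - import job 1 of 3",
)

# The JobState values returned here correspond to the console statuses in step 7.
state = snowball.describe_job(JobId=job["JobId"])["JobMetadata"]["JobState"]
print(job["JobId"], state)
```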
premises Data Migration Figure 3 Ongoing data migration from onpremises storage solution In use case 2 a customer has a hybrid cloud deployment with data being used by both an on premises environment and systems deployed in AWS Additionally the customer wants a dedicated connection to AWS that provides consisten t network performance As part of the on going data migration AWS Direct Connect acts as the backbone providing a dedicated connection that bypasses the Internet to connect to AWS cloud Additionally the customer deploys AWS Storage Gateway with Gateway Cached Volume s in the data center which sends data to an Amazon S3 bucket in their target AWS region The following steps describe the required steps to build this solution: e The customer creates an AWS Direct Connect connection between their corporate data center and the AWS Cloud This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 22 a To set up the connection using the Connection Wizard ordering type the customer provides the following information using the AWS Direct Connect Console : i Choose a resiliency level 1 Maximum Resiliency (for critical workloads) : You can achieve maximum resiliency for critical workloads by using separat e connections that terminate on separate devices in more than one location This topology provides resiliency against device connectivity and complete location failures 2 High Resiliency (for critical workloads): You can achieve high resiliency for critic al workloads by using two independent connections to multiple locations This topology provides resiliency against connectivity failures caused by a fiber cut or a device failure It also helps prevent a complete location failure 3 Development and Test (non critical or test/dev workloads): You can achieve development and test resiliency for non critical workloads by using separate connections that terminate on separate devices in one location This topology provides resiliency agains t device failure but does not provide resiliency against location failure ii Enter connection settings: 1 Bandwidth – choose from 1Gbps to 100Gbps 2 First location – the first physical location for your first Direct Connect connection 3 First location service provider 4 Second location – the second physical location for your second Direct Connect connection 5 Second location service provider iii Review and create menu : confirm your selections and click create b After the customer creates a connection using the AWS Direct Connect console AWS will send an email within 72 hours The email will include a This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 23 Letter of Authorization and Connecting Facility Assignment (LOA CFA) After receiv ing the LOA CFA the customer will forward it to their network provider so they can order a cross connect for the customer The customer is not able to order a cross connect for themselves in the AWS Direct Connect location if the customer does not already have equipment there The network provider will have to do this for the custome r c After the physical connection is set up the customer create s the virtual interface s within AWS Direct Connect to connect to AWS public services such Amazon S3 d After creating virtual interface s 
the customer runs the AWS Direct Connect failover test to make sure that traffic routes to alternate online virtual interfaces 2 After the AWS Direct Connect connection is setup the customer create s an Amazon S3 bucket into which the on premises data can be backed up 3 The customer deploys the AWS Storage Gateway in their existing data center using following steps : a Deploy a new gateway using AWS Storage Gateway console b Select Volume Gateway Cached volumes for the type of gateway c Download the gateway virtual machine (VM) image and deploy on the on premis es virtualization environment d Provision two local disks to be attached to the VM e After the gateway VM is powered on record the IP address of the machine and then enter the IP address in the AWS Storage Gateway console to activate the gateway 4 After the gateway is activated the customer can configure the volume gateway in the AWS Storage Gateway console: a Configure the local storage by selecting one of the two local disks attached to the storage gateway VM to be used as the upload buffer and cache storage This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Migration Services 24 b Create volumes on the Amazon S3 bucket 5 The customer connects the Amazon S3 gateway volume as an iSCSI connection through the storage gateway IP address on a client machine 6 After setup is completed and the customer applications write data to t he storage volumes in AWS the gateway at first stores the data on the on premises disks (referred to as cache storage ) before uploading the data to Amazon S3 The cache storage acts as the on premises durable store for data that is waiting to upload to Am azon S3 from the upload buffer The cache storage also lets the gateway store the customer application's recently accessed data on premises for lowlatency access If an application requests data the gateway first checks the cache storage for the data bef ore checking Amazon S3 To prepare for upload to Amazon S3 the gateway also stores incoming data in a staging area referred to as an upload buffer Storage G ateway uploads this buffer data over an encrypted Secure Sockets Layer (SSL) connection to AWS w here it is stored encrypted in Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services An Overview of AWS Cloud Data Mi gration Services 25 Use Case 3: Continuous Streaming Data Ingestion Figure 4 Continuous streaming data ingestion In use case 3 the customer wants to ingest a social media feed continuously in Amazon S3 As part of the continuous data migration the customer uses Amazon Kinesis Data Firehose to ingest data without having to provision a dedicated set of servers 1 The c ustomer creates an Amazon Kinesis Data Firehose Delivery Stream using the following steps in the Amazon Kinesis Data Firehose console : a Choose the Delivery Stream name b Choose the Amazon S3 bucket; c hoose the IAM role that grants Firehose access to Amazon S3 bucket c Firehose buffers incoming records before delivering the data to Amazon S3 The customer chooses Buffer Size (1 128 MBs) or Buffer Interval (60 900 seconds) Whichever condition is satisfie d first triggers the data delivery to Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers
d The customer chooses from three compression formats (GZIP, ZIP, or SNAPPY) or no data compression.
e The customer chooses whether or not to encrypt the data with a key from the list of AWS Key Management Service (AWS KMS) keys that they own.
2 The customer sends the streaming data to the Amazon Kinesis Data Firehose delivery stream by writing the appropriate code using an AWS SDK.
Conclusion
This whitepaper walked you through the different AWS managed and self-managed storage migration options. Additionally, the paper covered different use cases showing how multiple storage services can be used together to solve different migration needs.
Contributors
Contributors to this document include:
• Shruti Worlikar, Solutions Architect, Amazon Web Services
• Kevin Fernandez, Sr. Solutions Architect, Amazon Web Services
• Scott Wainner, Sr. Solutions Architect, Amazon Web Services
Further Reading
For additional information, see:
• AWS Direct Connect
• AWS Snow Family
• AWS Storage Gateway
• Amazon Kinesis Data Firehose
• Storage Partner Solutions
Document revisions
Date: July 13, 2021. Description: Repaired broken links; updated time/performance characteristics; added a decision tree; added AWS Transfer Family; updated with new AWS Snow Family services; updated procedures in use cases.
Date: May 2016. Description: First publication.
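To make step 2 of use case 3 concrete, the following is a minimal sketch of writing records to the Kinesis Data Firehose delivery stream with the AWS SDK for Python (boto3). The delivery stream name, Region, and JSON record shape are illustrative assumptions, not values taken from the use case.

```python
"""Minimal sketch (not the whitepaper's own code): push social media events
into the Kinesis Data Firehose delivery stream described in use case 3.
Stream name, Region, and record shape are assumed examples."""
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # assumed Region

def send_event(event: dict, stream: str = "social-media-feed") -> None:
    # Firehose accepts an opaque data blob; a trailing newline keeps records
    # separable once Firehose delivers them to the destination S3 bucket.
    firehose.put_record(
        DeliveryStreamName=stream,  # hypothetical delivery stream name
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

if __name__ == "__main__":
    send_event({"user": "example_user", "text": "hello", "ts": "2021-07-13T12:00:00Z"})
```

Firehose then buffers these records according to the buffer size (1 to 128 MB) or buffer interval (60 to 900 seconds) chosen in step 1c, whichever threshold is reached first, before writing a compressed and optionally KMS-encrypted object to the destination Amazon S3 bucket.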
|
General
|
consultant
|
Best Practices
|
An_Overview_of_the_AWS_Cloud_Adoption_Framework
|
AWS Cloud Adoption Framework (CAF) 3.0 Translated Whitepapers
Language / AWS Whitepaper Link:
• Arabic (عربي): Whitepaper Link
• Brazilian Portuguese (Português): Whitepaper Link
• Chinese Simplified (中文 (简体)): Whitepaper Link
• Chinese Traditional (中文 (繁體)): Whitepaper Link
• English: Whitepaper Link
• Finnish (Suomalainen): Whitepaper Link
• French Canadian (Français Canadien): Whitepaper Link
• French France (Français): Whitepaper Link
• German (Deutsch): Whitepaper Link
• Hebrew (עברית): Whitepaper Link
• Indonesian (Bahasa Indonesia): Whitepaper Link
• Italian (Italiano): Whitepaper Link
• Japanese (日本語): Whitepaper Link
• Korean (한국어): Whitepaper Link
• Russian (Русский): Whitepaper Link
• Spanish (Español): Whitepaper Link
• Swedish (Svenska): Whitepaper Link
• Thai (ไทย): Whitepaper Link
• Turkish (Türkçe): Whitepaper Link
• Vietnamese (Tiếng Việt): Whitepaper Link
|
General
|
consultant
|
Best Practices
|
Architecting_for_Genomic_Data_Security_and_Compliance_in_AWS
|
ArchivedArchitecting for Genomic Data Security and Compliance in AWS Working with ControlledAccess Datasets from dbGaP GWAS and other IndividualLevel Genomic Research Repositories Angel Pizarro Chris Whalley December 2014 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 2 of 17 Table of Contents Overview 3 Scope 3 Considerations for Genomic Data Privacy and Security in Human Research 3 AWS Approach to Shared Security Responsibilities 4 Architecting for Compliance with dbGaP Security Best Practices in AWS 5 Deployment Model 6 Data Location 6 Physical Server Access 7 Portable Storage Media 7 User Accounts Passwords and Access Control Lists 8 Internet Networking and Data Transfers 9 Data Encryption 11 File Systems and Storage Volumes 13 Operating Systems and Applications 14 Auditing Logging and Monitoring 15 Authorizing Access to Data 16 Cleaning Up Data and Retaining Results 17 Conclusion 17 ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 3 of 17 Overview Researchers who plan to work with genomic sequence data on Amazon Web Services (AWS) often have questions about security and compliance; specifically about how to meet guidelines and best practices set by government and grant funding agencies such as the National Institutes of Health In this whitepaper we review the current set of guidelines and discuss which services from AWS you can use to meet particular requirements and how to go about evaluating those services Scope This whitepaper focuses on common issues raised by Amazon Web Services (AWS) customers about security best practices for human genomic data and controlled access datasets such as those from National Institutes of Health (NIH) repositories like Database of Genotypes and Phenotypes (dbGaP) and genomewide association studies (GWAS) Our intention is to provide you with helpful guidance that you can use to address common privacy and security requirements However we caution you not to rely on this whitepaper as legal advice for your specific use of AWS We strongly encourage you to obtain appropriate compliance advice about your specific data privacy and security requirements as well as applicable laws relevant to your human research projects and datasets Considerations for Genomic Data Privacy and Security in Human Research Research involving individuallevel genotype and phenotype data and deidentified controlled access datasets continues to increase The data has grown so fast in volume and utility that the availability of adequate data processing storage and security technologies has become a critical constraint on genomic research T he global research community is recognizing the practical benefits of the AWS cloud and scientific investigators institutional signing officials IT directors ethics committees and data access committees must answer privacy and security questions as they evaluate the use of AWS in connection with individuallevel genomic data and other controlled access datasets Some common questions include: Are data protected on secure servers? Where are data located? How is access to data controlled? Are data protections appropriate for the Data Use Certification? 
These considerations are not new and are not cloudspecific Whether data reside in an investigator lab an institution al network an agencyhosted data repository or within the AWS cloud the essential considerations for human genomic data are the same You must correctly implement data protection and security controls in the system by first defining the system requirements and then architecting the system security controls to meet those requirements particularly the shared responsibilities amongst the parties who use and maintain the system ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 4 of 17 AWS Approach to Shared Security Responsibilities AWS delivers a robust web services platform with features that enable research teams around the world to create and control their own private area in the AWS cloud so they can quickly build install and use their data analysis applications and data stores without having to purchase or maintain the necessary hardware and facilities As a researcher you can create your private AWS environment yourself using a selfservice signup process that establishes a unique AWS account ID creates a root user account and account ID and provides you with access to the AWS Management Console and Application Programming Interfaces (APIs) allowing control and management of the private AWS environment Because AWS does not access or manage your private AWS environment or the data in it you retain responsibility and accountability for the configuration and security controls you implement in your AWS account This customer accountability for your private AWS environment is fundamental to understanding the respective roles of AWS and our customers in the context of data protections and security practices for human genomic data Figure 1 depicts the AWS Shared Responsibility Model Figure 1 Shared Responsibility Model In order to deliver and maintain the features available within every customer ’s private AWS environment AWS works vigorously to enhance the security features of the platform and ensure that the feature delivery operations are secure and of high quality AWS defines quality and security as confidentiality integrity and availability of our services and AWS seeks to provide researchers with visibility and assurance of our quality and security practices in four important ways ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 5 of 17 First AWS infrastructure is designed and managed in alignment with a set of internationally recognized security and quality accreditations standards and bestpractices including industry standards ISO 27001 ISO 9001 AT 801 and 101 (formerly SSAE 16) as well as government standards NIST FISMA and FedRAMP Independent third parties perform accreditation assessments of AWS These third parties are auditing experts in cloud computing environments and each brings a unique perspective from their compliance backgrounds in a wide range of industries including healthcare life sciences financial services government and defense and others Because each accreditation carries a unique audit schedule including continuous monitoring AWS security and quality controls are constantly audited and improved for the benefit of all AWS customers including those with dbGaP HIPAA and other health data protection requirements Second AWS provides transparency by making these ISO SOC FedRAMP and other compliance reports available to customers upon request 
Customers can use these reports to evaluate AWS for their particular needs You can request AWS compliance reports at https://awsamazoncom/compliance/contact and you can find more information on AWS compliance certifications customer case studies and alignment with best practices and standards at the AWS compliance website http://awsamazoncom/compliance/ Third as a controlled US subsidiary of Amazoncom Inc Amazon Web Services Inc participates in the Safe Harbor program developed by the US Department of Commerce the European Union and Switzerland respectively Amazoncom and its controlled US subsidiaries have certified that they adhere to the Safe Harbor Privacy Principles agreed upon by the US the EU and Switzerland respectively You can view the Safe Harbor certification for Amazoncom and its control led US subsidiaries on the US Department of Commerce’s Safe Harbor website The Safe Harbor Principles require Amazon and its controlled US subsidiaries to take reasonable precautions to protect the personal information that our customers give us in order to create their account This certification is an illustration of our dedication to security privacy and customer trust Lastly AWS respects the rights of our customers to have a choice in their use of the AWS platform The AWS Account Management Console and Customer Agreement are designed to ensure that every customer can stop using the AWS platform and export all their data at any time and for any reason This not only helps customers maintain control of their private AWS environment from creation to deletion but it also ensures that AWS must continuously work to earn and keep the trust of our customers Architecting for Compliance with dbGaP Security Best Practices in AWS A primary principle of the dbGaP security best practices is that researchers should download data to a secure computer or server and not to unsecured network drives or servers1 The remainder of the dbGaP security best practices can be broken into a set of three IT security control domains that you must address to ensure that you meet the primary principle: 1 http://wwwncbinlmnihgov/projects/gap/pdf/dbgap_2b_security_procedurespdf ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 6 of 17 Physical Security refers to both physical access to resources whether they are located in a data center or in your desk drawer and to remote administrative access to the underlying computational resources Electronic Security refers to configuration and use of networks servers operating systems and applicationlevel resources that hold and analyze dbGaP data Data Access Security refers to managing user authentication and authorization of access to the data how copies of the data are tracked and managed and having policies and processes in place to manage the data lifecycle Within each of these control domains are a number of control areas which are summarized in Table 1 Table 1 Summary of dbGaP Security Best Practices Control Domain Control Areas Physical Security Deployment Model Data Location Physical Server Access Portable Storage Media Electronic Security User Accounts Passwords and Access Control Lists Internet Networking and Data Transfers Data Encryption File Systems and Storage Volumes Operating Systems and Applications Auditing Logging And Monitoring Data Access Security Authorizing Access to Data Cleaning Up Data and Retaining Results The remainder of this paper focuses on the control areas involved in architecting for security 
and compliance in AWS Deployment Model A basic architectural consideration for dbGaP compliance in AWS is determining whether the system will run entirely on AWS or as a hybrid deployment with a mix of AWS and nonAWS resources This paper focus es on the control areas for the AWS resources If you are architecting for hybrid deployments you must also account for your nonAWS resources such as the local workstations you might download data to and from your AWS environment any institutional or external networks you connect to your AWS environment or any thirdparty applications you purchase and install in your AWS environment Data Location The AWS cloud is a globally available platform in which you can choose the geographic region in which your data is located AWS data centers are built in clusters in various global regions AWS calls these data center clusters Availability zones (AZs) As of December 2014 AWS ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 7 of 17 maintains 28 AZs organized into 11 regions globally As an AWS customer you can choose to use one region all regions or any combination of regions using builtin features available within the AWS Management Console AWS regions and Availability Zones ensure that if you have locationspecific requirements or regional data privacy policies you can establish and maintain your private AWS environment in the appropriate location You can choose to replicate and back up content in more than one region but you can be assured that AWS does not move customer data outside the region(s) you configure Physical Server Access Unlike traditional laboratory or institutional server systems where researchers install and control their applications and data directly on a specific physical server the applications and data in a private AWS account are decoupled from a specific physical server This decoupling occurs through the builtin features of the AWS Foundation Services layer (see Figure 1 Shared Responsibility Model ) and is a key attribute that differentiates the AWS cloud from traditional server systems or even traditional server virtualization Practically this means that every resource (virtual servers firewalls databases genomic data etc) within your private AWS environment is reduced to a single set of software files that are orchestrated by the Foundational Services layer across multiple physical servers Even if a physical server fails your private AWS resources and data maintain confidentiality integrity and availability This attribute of the AWS cloud also adds a significant measure of security because even if someone were to gain access to a single physical server they would not have access to all the files needed to recreate the genomic data within the your private AWS account AWS owns and operates its physical servers and network hardware in highlysecure state of theart data centers that are included in the scope of independent thirdparty security assessments of AWS for ISO 27001 Service Organization Controls 2 (SOC 2) NIST’s federal information system security standards and other security accreditations Physical access to AWS data centers and hardware is based on the least privilege principle and access is authorized only for essential personnel who have experience in cloud computing operating environments and who are required to maintain the physical environment When individuals are authorized to access a data center they are not given logical access to the servers within the data 
center When anyone with data center access no longer has a legitimate need for it access is immediately revoked even if they remain an employee of Amazon or Amazon Web Services Physical entry into AWS data centers is controlled at the building perimeter and ingress points by professional security staff who use video surveillance intrusion detection systems and other electronic means Authorized staff must pass twofactor authentication a minimum of two times to enter data center floors and all physical access to AWS data centers is logged monitored and audited routinely Portable Storage Media The decision to run entirely on AWS or in a hybrid deployment model has an impact on your system security plans for portable storage media Whenever data are downloaded to a portable device such as a laptop or smartphone the data should be encrypted and hardcopy printouts controlled When genomic data are stored or processed i n AWS customers can encrypt their ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 8 of 17 data but there is no portable storage media to consider because all AWS customer data resides on controlled storage media covered under AWS’s accredited security practices When controlled storage media reach the end of their useful life AWS procedures include a decommissioning and media sanitization process that is designed to prevent customer data from being exposed to unauthorized individuals AWS uses the techniques detailed in DoD 522022 M (“National Industrial Security Program Operating Manual” ) or NIST 800 88 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industrystandard practices For more information see Overview of Security Processes 2 User Accounts Passwords and Access Control Lists Managing user access under dbGaP requirements relies on a principle of least privilege to ensure that individuals and/or processes are granted only the rights and permissions to perform their assigned tasks and functions but no more3 When you use AWS there are two types of user accounts that you must address : Accounts with direct access to AWS resources and Accounts at the operating system or application level Managing user accounts with direct access to AWS resources is centralized in a service called AWS Identity and Access Management (IAM) After you establish your root AWS account using the selfservice signup process you can use IAM to create and manage additional users and groups within your private AWS environment In adherence to the least privilege principle new users and groups have no permissions by default until you associate them with an IAM policy IAM policies allow access to AWS resources and support finegrained permissions allowing operationspecific access to AWS resources For example you can define an IAM policy that restricts an Amazon S3 bucket to readonly access by specific IAM users coming from specific IP addresses In addition to the users you define within your private AWS environment you can define IAM roles to grant temporary credentials for use by externally authenticated users or applications running on Amazon EC2 servers Within IAM you can assign users individual credentials such as passwords or access keys Multifactor authentication (MFA) provides an extra level of user account security by prompting users to enter an additional authentication code each time they log in to 
AWS dbGaP also requires that users not share their passwords and recommends that researchers communicate a written password policy to any users with permissions to controlled access data Additionally dbGaP recommends certain password complexity rules for file access IAM provides robust features to manage password complexity reuse and reset rules How you manage user accounts at the operating system or application level depends largely on which operating systems and applications you choose For example applications developed specifically for the AWS cloud might leverage IAM users and groups whereas you'll need to assess and plan the compatibility of thirdparty applications and operating systems with IAM on a case bycase basis You should always configure passwordenabled screen savers on any 2 http://mediaamazonwebservicescom/pdf/AWS_Security_Whitepaperpdf 3 http://wwwncbinlmnihgov/projects/gap/pdf/dbgap_2b_security_procedurespdf ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 9 of 17 local workstations that you use to access your private AWS environment and configure virtual server instances within the AWS cloud environment with OSlevel passwordenabled screen savers to provide an additional layer of protection More information on IAM is available in the IAM documentation and IAM Best Practices guide as well as on the MultiFactor Authentication page Internet Networking and Data Transfers The AWS cloud is a set of web services delivered over the Internet but data within each customer’s private AWS account is not exposed directly to the Internet unless you specifically configure your security features to all ow it This is a critical element of compliance with dbGaP security best practices and the AWS cloud has a number of builtin features that prevent direct Internet exposure of genomic data Processing genomic data in AWS typically involves the Amazon Elastic Compute Cloud (Amazon EC2) Amazon EC2 is a service you can use to create virtual server instances that run operating systems like Linux and Microsoft Windows When you create new Amazon EC2 instances for downloading and processing genomic data by default those instances are accessible only by authorized users within the private AWS account The instances are not discoverable or directly accessible on the Internet unless you configure them otherwise Additionally genomic data within an Amazon EC2 instance resides in the operating system ’s file directory which requires that you set OSspecific configurations before any data can be accessible outside of the instance When you need clusters of Amazon EC2 instances to process large volumes of data a Hadoop framework service called Amazon Elastic MapReduce (Amazon EMR) allows you to create multiple identical Amazon EC2 instances that follow the same basic rule of least privilege unless you change the configuration otherwise Storing genomic data in AWS typically involves object stores and file systems like Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Block Store (Amazon EBS) as well as database stores like Amazon Relational Database Service (Amazon RDS) Amazon Redshift Amazon DynamoDB and Amazon ElastiCache Like Amazon EC2 all of these storage and databases services default to least privilege access and are not discoverable or directly accessible from the Internet unless you configure them to be so Individual compute instances and storage volumes are the basic building blocks that researchers use to architect and build 
genomic data processing systems in AWS Individually these building blocks are private by default and networking them together within the AWS environment can provide additional layers of security and data protections Using Amazon Virtual Private Cloud (Amazon VPC) you can create private isolated networks within the AWS cloud where you retain complete control over the virtual network environment including definition of the IP address range creation of subnets and configuration of network route tables and network gateways Amazon VPC also offers stateless firewall capabilities through the use of Network Access Control Lists (NACLs) that control the source and destination network traffic endpoints and ports giving you robust security controls that are independent of the computational resources launched within Amazon VPC subnets In addition to the stateless firewalling capabilities of Amazon VPC NACLs Amazon EC2 instances and some services are launched within the context of AWS Security Groups Security groups define networklevel stateful firewall rules to protect computational resources at the Amazon EC2 instance or service ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 10 of 17 layer level Using security groups you can lock down compute storage or application services to strict subsets of resources running within an Amazon VPC subnet adhering to the principal of least privilege Figure 2 Protecting data from direct Internet access using Amazon VPC In addition to networking and securing the virtual infrastructure within the AWS cloud Amazon VPC provides several options for connecting to your AWS resources The first and simplest option is providing secure public endpoints to access resources such as SSH bastion servers A second option is to create a secure Virtual Private Network (VPN) connection that uses Internet Protocol Security (IPSec) by defining a virtual private gateway into the Amazon VPC You can use the connection to establish encrypted network connectivity over the Internet between an Amazon VPC and your institutional network Lastly research institutions can establish a dedicated and private network connection to AWS using AWS Direct Connect AWS Direct Connect lets you establish a dedicated highbandwidth (1 Gbps to 10 Gbps) network connection between your network and one of the AWS Direct Connect locations Using industry standard 8021q VLANs this dedicated connection can be partitioned into multiple virtual interfaces allowing you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (Amazon VPC) using private IP space while maintaining network separation between the public and private environments You can reconfigure virtual interfaces at any time to meet your changing needs 1 1 2 2 3 3 dbGaP data in Amazon S3 bucket; accessible only by Amazon EC2 instance within VPC security group Amazon EC2 instance hosts Aspera Connect download software running within VPC security group Amazon VPC network configured with private subnet requiring SSH client VPN gateway or other encrypted connection Amazon S3 bucket w/ dbGaP data EC2 instance w / Aspera Connect ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 11 of 17 Using a combination of hosted and selfmanaged services you can take advantage of secure robust 
networking services within a VPC and secure connectivity with another trusted network To learn more about the finer details see our Amazon VPC whitepaper the Amazon VPC documentation and the Amazon VPC Connectivity Options Whitepaper Data Encrypti on Encrypting data intransit and at rest is one of the most common methods of securing controlled access datasets As an Internetbased service provider AWS understands that many institutional IT security policies consider the Internet to be an insecure communications medium and consequently AWS has invested considerable effort in the security and encryption features you need in order to use the AWS cloud platform for highly sensitive data including protected health information under HIPAA and controlled access genomic datasets from the National Institutes of Health (NIH) AWS uses encryption in three areas: Service management traffic Data within AWS services Hardware security modules As an AWS customer you use the AWS Management Console to manage and configure your private environment Each time you use the AWS Management Console an SSL/TLS4 connection is made between your web browser and the console endpoints Service management traffic is encrypted data integrity is authenticated and the client browser authenticates the identity of the console service endpoint using an X509 certificate After this encrypted connection is established all subsequent HTTP traffic including data in transit over the Internet is protected within the SSL/TLS session Each AWS service is also enabled with application programming interfaces (APIs) that you can use to manage services either directly from applications or thirdparty tools or via Software Development Kits (SDK) or via AWS command line tools AWS APIs are web services over HTTPS and protect commands within an SSL/TLS encrypted session Within AWS there are several options for encrypting genomic data ranging from completely automated AWS encryption solutions (serverside) to manual clientside options Your decision to use a particular encryption model may be based on a variety of factors including the AWS service(s) being used your institutional policies your technical capability specific requirements of the data use certificate and other factors A s you architect your systems for controlled access datasets it’s important to identify each AWS service and encryption model you will use with the genomic data There are three different models for how you and/or AWS provide the encryption method and work with the key management infrastructure (KMI) as illustrated in Figure 3 4 Secure Sockets Layer (SSL)/Transport Layer Security (TLS) ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 12 of 17 Customer Managed AWS Managed Model A Researcher manages the encryption method and entire KMI Model B Researcher manages the encryption method; AWS provides storage component of KMI while researcher provides management layer of KMI Model C AWS manages the encryption method and the entire KMI Figure 2 Encryption Models in AWS In addition to the clientside and serverside encryption features builtin to many AWS services another common way to protect keys in a KMI is to use a dedicated storage and data processing device that performs cryptographic operations using keys on the devices These devices called hardware security modules (HSMs) typically provide tamper evidence or resistance to protect keys from unauthorized use For researchers who choose to use AWS encryption capabilities 
for your controlled access datasets the AWS CloudHSM service is another encryption option within your AWS environment giving you use of HSMs that are designed and validated to government standards (NIST FIPS 140 2) for secure key management If you want to manage the keys that control encryption of data in Amazon S3 and Amazon EBS volumes but don’t want to manage the needed KMI resources either within or external to AWS you can leverage the AWS Key Management Service (AWS KMS) AWS Key Management Service is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data and uses HSMs to protect the security of your keys AWS Key Management Service is integrated with other AWS services including Amazon EBS Amazon S3 and Amazon Redshift AWS Key Management Service is also integrated with AWS CloudTrail discussed later to provide you with logs of all key usage to help meet your regulatory and compliance needs AWS KMS also allows you to implement key creation rotation and usage policies AWS KMS is designed so that no one has access to your master keys The service is built on systems that are designed to protect your master keys with extensive hardening techniques such as never storing plaintext master keys on disk not persisting them in memory and limiting which systems can connect to the device All access to update software on the service is controlled by a multilevel approval process that is audited and reviewed by an independent group within Amazon KMI Encryption Method KMI Encryption Method KMI Encryption Method Key Storage Key Management Key Storage Key Management Key Storage Key Management ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 13 of 17 As mentioned in the Internet Network and Data Transfer section of this paper you can protect data transfers to and from your AWS environment to an external network with a number of encryptionready security features such as VPN For more information about encryption options within the AWS environment see Securing Data at Rest with Encryption as well as the AWS CloudHSM product details page To learn more about how AWS KMS works you can read the AWS Key Management Service whitepaper5 File Systems and Storage Volumes Analyzing and securing large datasets like whole genome sequences requires a variety of storage capabilities that allow you to make use of that data Within your private AWS account you can configure your storage services and security features to limit access to authorized users Additionally when research collaborators are authorized to access the data you can configure your access controls to safely share data between your private AWS account and your collaborator’s private AWS account When saving and securing data within your private AWS account you have several options Amazon Web Services offers two flexible and powerful storage options The first is Amazon Simple Storage Service (Amazon S3) a highly scalable webbased object store Amazon S3 provides HTTP/HTTPS REST endpoints to upload and download data objects in an Amazon S3 bucket Individual Amazon S3 objects can range from 1 byte to 5 terabytes Amazon S3 is designed for 9999% availability and 99999999999% object durability thus Amazon S3 provides a highly durable storage infrastructure designed for missioncritical and primary data storage The service redundantly stores data in multiple data centers within the Region you designate and Amazon S3 calculates checksums on all network 
traffic to detect corruption of data packets when storing or retrieving data Unlike traditional systems which can require laborious data verification and manual repair Amazon S3 performs regular systematic data integrity checks and is built to be automatically selfhealing Amazon S3 provides a base level of security whereby defaultonly bucket and object owners have access to the Amazon S3 resources they create In addition you can write security policies to further restrict access to Amazon S3 objects For example dbGaP recommendations call for all data to be encrypted while the data are in flight With an Amazon S3 bucket policy you can restrict an Amazon S3 bucket so that it only accepts requests using the secure HTTPS protocol which fulfills this requirement Amazon S3 bucket policies are best utilized to define broad permissions across sets of objects within a single bucket The previous examples for restricting the allowed protocols or source IP ranges are indicative of best practices For data that need more variable permissions based on whom is trying to access data IAM user policies are more appropriate As discussed previously IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account With IAM user policies you can grant these IAM users finegrained control to your Amazon S3 bucket or data objects contained within Amazon S3 is a great tool for genomics analysis and is well suited for analytical applications that are purposebuilt for the cloud However many legacy genomic algorithms and applications cannot work directly with files stored in a HTTPbased object store like Amazon S3 but rather need a traditional file system In contrast to the Amazon S3 objectbased storage approach 5 https://d0awsstaticcom/whitepapers/KMSCryptographicDetailspdf ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 14 of 17 Amazon Elastic Block Store (Amazon EBS) provides networkattached storage volumes that can be formatted with traditional file systems This means that a legacy application running in an Amaz on EC2 instance can access genomic data in an Amazon EBS volume as if that data were stored locally in the Amazon EC2 instance Additionally Amazon EBS offers wholevolume encryption without the need for you to build maintain and secure your own key management infrastructure When you create an encrypted Amazon EBS volume and attach it to a supported instance type data stored at rest on the volume disk I/O and snapshots created from the volume are all encrypted The encryption occurs on the servers that host Amazon EC2 instances providing encryption of data intransit from Amazon EC2 instances to Amazon EBS storage Amazon EBS encryption uses AWS Key Management Service (AWS KMS) Customer Master Keys (CMKs) when creating encrypted volumes and any snapshots created from your encrypted volumes The first time you create an encrypted Amazon EBS volume in a region a default CMK is created for you automatically This key is used for Amazon EBS encryption unless you select a CMK that you created separately using AWS Key Management Service Creating your own CMK gives you more flexibility including the ability to create rotate disable define access controls and audit the encryption keys used to protect your data For more information see the AWS Key Management Service Developer Guide There are three options for Amazon EBS volumes: Magnetic volumes are backed by magnetic drives and are ideal for workloads where data 
are accessed infrequently and scenarios where the lowest storage cost is important General Purpose (SSD) volumes are backed by SolidState Drives (SSDs) and are suitable for a broad range of workloads including small to mediumsized databases development and test environments and boot volumes Provisioned IOPS (SSD) volumes are also backed by SSDs and are designed for applications with I/Ointensive workloads such as databases Provisioned IOPs offer storage with consistent and lowlatency performance and support up to 30 IOPS per GB which enables you to provision 4000 IOPS on a volume as small as 134 GB You can also achieve up to 128MBps of throughput per volume with as little as 500 provisioned IOPS Additionally you can stripe multiple volumes together to achieve up to 48000 IOPS or 800MBps when attached to larger Amazon EC2 instances While generalpurpose Amazon EBS volumes represent a great value in terms of performance and cost and can support a diverse set of genomics applications you should choose which Amazon EBS volume type to use based on the particular algorithm you're going to run A benefit of scalable ondemand infrastructure is that you can provision a diverse set of resources each tuned to a particular workload For more information on the security features available in Amazon S3 see the Access Control and Using Data Encryption topics in the Amazon S3 Developer Guide For an overview on security on AWS including Amazon S3 see Amazon Web Services: Overview of Security Processes For more information about Amazon EBS security features see Amazon EBS Encryption and Amazon Elastic Block Store (Amazon EBS) Operating Systems and Applications Recipients of controlledaccess data need their operating systems and applications to follow predefined configuration standards Operating systems should align with standards such as ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 15 of 17 NIST 80053 dbGaP Security Best Practices Appendix A or other regionally accepted criteria Software should also be configured according to applicationspecific best practices and OS and software patches should be kept up todate When you run operating systems and applications in AWS you are responsible for configuring and maintaining your operating systems and applications as well as the feature configurations in the associated AWS services such as Amazon EC2 and Amazon S3 As a concrete example imagine that a security vulnerability in the standard SSL/TLS shared library is discovered In this scenario AWS will review and remediate the vulnerability in the foundation services (see Figure 1) and you will review and remediate the operating systems and applications as well as any service configuration updates needed for hybrid deployments You must also take care to properly configure the OS and applications to restrict remote access to the instances and applications Examples include locking down security groups to only allow SSH or RDP from certain IP ranges ensuring strong password or other authentication policies and restricting user administrative rights on OS and applications Auditing Logging and Monitoring Researchers who manage controlled access data are required to report any inadvertent data release in accordance with the terms in the Data Use Certification breach of data security or other data management incidents contrary to the terms of data access The dbGaP security recommendations recommend use of security auditing and intrusion detection software that 
regularly scans and detects potential data intrusions Within the AWS ecosystem you have the option to use builtin monitoring tools such as Amazon CloudWatch as well as a rich partner ecosystem of security and monitoring software specifically built for AWS cloud services The AWS Partner Network lists a variety of system integrators and software vendors that can help you meet security and compliance requirements For more information see the AWS Life Science Partner webpage6 Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS You can use Amazon CloudWatch to collect and track metrics collect and monitor log files and set alarms Amazon CloudWatch provides performance metrics on the individual resource level such as Amazon EC2 instance CPU load and network IO and sets up thresholds on these metrics to raise alarms when the threshold is passed For example you can set an alarm to detect unusual spikes in network traffic from an Amazon EC2 instance that may be an indication of a compromised system CloudWatch alarms can integrate with other AWS services to send the alerts simultaneous ly to multiple destinations Example methods and destinations might include a message queue in Amazon Simple Queuing Service (Amazon SQS) which is continuously monitored by watchdog processes that will automatically quarantine a system; a mobile text message to security and operations staff that need to react to immediate threats; an email to security and compliance teams who audit the event and take action as needed Within Amazon CloudWatch you can also define custom metrics and populate these with whatever information is useful even outside of a security and compliance requirement For instance an Amazon CloudWatch metric can monitor the size of a data ingest queue to trigger 6 http://awsamazoncom/partners/competencies/lifesciences/ ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 16 of 17 the scaling up (or down) of computational resources that process data to handle variable rates of data acquisition AWS CloudTrail and AWS Config are two services that enable you to monitor and audit all of the operations against th e AWS product API’s AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you The recorded information includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service With AWS CloudTrail you can get a history of AWS API calls for your account including API calls made via the AWS Management Console AWS SDKs command line tools and hig herlevel AWS services (such as AWS CloudFormation) The AWS API call history produced by AWS CloudTrail enables security analysis resource change tracking and compliance auditing AWS Config builds upon the functionality of AWS CloudTrail and provides you with an AWS resource inventory configuration history and configuration change notifications to enable security and governance With AWS Config you can discover existing AWS resources export a complete inventory of your AWS resources with all configuration details and determine how a resource was configured at any point in time These capabilities enable compliance auditing security analysis resource change tracking and troubleshooting Lastly AWS has implemented various methods of external communication to support all customers in the event of security or 
operational issues that may impact our customers Mechanisms are in place to allow the customer support team to be notified of operational and security issues that impact each customer’s account The AWS incident management team employs industrystandard diagnostic procedures to drive resolution during businessimpacting events within the AWS cloud platform The operational systems that support the platform are extensively instrumented to monitor key operational metrics and alarms are configured to automatically notify operations and management personnel when early warning thresholds are cross ed on those key metrics Staff operators provide 24 x 7 x 365 coverage to detect incidents and to manage their impact and resolution An oncall schedule is used so that personnel are always available to respond to operational issues Authorizing Access to Data Researchers using AWS in connection with controlled access datasets must only allow authorized users to access the data Authorization is typically obtained either by approval from the Data Access Committee (DAC) or within the terms of the researcher’s existing Data Use Certification ( DUC) Once access is authorized you can grant that access in one or more ways depending on where the data reside and where the collaborator requiring access is located The scenarios below cover the situations that typically arise: Provide the collaborator access within an AWS account via an IAM user (see User Accounts Passwords and Access Control Lists ) Provide the collaborator access to their own AWS accounts (see File Systems Storage Volumes and Databases ) Open access to the AWS environment to an external network (see Internet Networking and Data Transfers ) ArchivedAmazon Web Services – Architecting for Genomic Data Security and Compliance in AWS December 2 014 Page 17 of 17 Cleaning U p Data and Retaining Results Controlledaccess datasets for closed research projects should be deleted upon project close out and only encrypted copies of the minimum data needed to comply with institutional policies should be retained In AWS deletion and retention operations on data are under the complete control of a researcher You might opt to replicate archived data to one or more AWS regions for disaster recovery or highavailability purposes but you are in complete control of that process As it is for onpremises infrastructure data provenance7 is the sole responsibility of the researcher Through a combination of data encryption and other standard operating procedures such as resource monitoring and security audits you can comply with dbGaP security recommendations in AWS With respect to AWS storage services after Amazon S3 data objects or Amazon EBS volumes are deleted removal of the mapping from the public name to the object starts immediately and is generally processed across the distributed system within several seconds After the mapping is removed there is no remote access to the deleted object The underlying storage area is then reclaimed for use by the system Conclusion The AWS cloud platform provides a number of important benefits and advantages to genomic researchers and enables them to satisfy the NIH security best practices for controlled access datasets While AWS delivers these benefits and advantages through our services and features researchers are still responsible for properly building using and maintaining the private AWS environment to help ensure the confidentiality integrity and availability of the controlled access datasets they manage Using the practices in this 
whitepaper we encourage you to build a set of security policies and processes for your organization so you can deploy applications using controlled access data quickly and securely Notices © 2014 Amazon Web Services Inc or its affiliates All rights reserved This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS it s affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers 7 The process of tracing and recording the origins of data and its movement between databases
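As a concrete sketch of the in-flight encryption control discussed above (an Amazon S3 bucket policy that only accepts requests made over the secure HTTPS protocol), the following example applies such a policy with the AWS SDK for Python (boto3). The bucket name is a hypothetical placeholder, not a value from the paper.

```python
"""Minimal sketch, assuming a hypothetical bucket that holds controlled-access
data: deny any Amazon S3 request that does not arrive over TLS (HTTPS)."""
import json
import boto3

BUCKET = "example-dbgap-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # aws:SecureTransport evaluates to "false" for plain-HTTP requests,
            # so this statement rejects them; only SSL/TLS requests succeed.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Because the Deny statement applies to every principal and every S3 action whenever a request is not made over TLS, it enforces encryption in transit regardless of what the IAM user policies granted to individual researchers otherwise allow.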
|
General
|
consultant
|
Best Practices
|
Architecting_for_HIPAASecurity_and_Compliance_on_AWS
|
ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper This version has been archived For the latest version of this document refer to https://docsawsamazoncom/whitepapers/latest/ architectinghi paasecurityandcomplianceonaws/ welcomehtmlArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Architecting for HIPAA Security and Compliance on Amazon Web Services: AWS Whitepaper Copyright © Amazon Web Services Inc and/or its affiliates All rights reserved Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon All other trademarks not owned by Amazon are the property of their respective owners who may or may not be affiliated with connected to or sponsored by AmazonArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Table of Contents Abstract 1 Introduction 2 Encryption and protection of PHI in AWS 3 Alexa for Business 6 Amazon API Gateway 6 Amazon AppFlow 7 Amazon AppStream 20 7 Amazon Athena 7 Amazon Aurora 8 Amazon Aurora PostgreSQL 8 Amazon CloudFront 8 Lambda@Edge 8 Amazon CloudWatch 9 Amazon CloudWatch Events 9 Amazon CloudWatch Logs 9 Amazon Comprehend 9 Amazon Comprehend Medical 9 Amazon Connect 9 Amazon DocumentDB (with MongoDB compatibility) 10 Amazon DynamoDB 10 Amazon Elastic Block Store 10 Amazon EC2 11 Amazon Elastic Container Registry 11 Amazon ECS 11 Amazon EFS 12 Amazon EKS 12 Amazon ElastiCache for Redis 12 Encryption at Rest 13 Transport Encryption 13 Authentication 13 Applying ElastiCache Service Updates 14 Amazon OpenSearch Service 14 Amazon EMR 14 Amazon EventBridge 14 Amazon Forecast 15 Amazon FSx 15 Amazon GuardDuty 16 Amazon HealthLake 16 Amazon Inspector 16 Amazon Kinesis Data Analytics 16 Amazon Kinesis Data Firehose 17 Amazon Kinesis Streams 17 Amazon Kinesis Video Streams 17 Amazon Lex 17 Amazon Managed Streaming for Apache Kafka (Amazon MSK) 18 Amazon MQ 18 Amazon Neptune 19 AWS Network Firewall 19 Amazon Pinpoint 19 Amazon Polly 20 Amazon Quantum Ledger Database (Amazon QLDB) 20 Amazon QuickSight 21 Amazon RDS for MariaDB 21 Amazon RDS for MySQL 21 iiiArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon RDS for Oracle 22 Amazon RDS for PostgreSQL 22 Amazon RDS for SQL Server 22 Encryption at Rest 23 Transport Encryption 23 Auditing 23 Amazon Redshift 23 Amazon Rekognition 23 Amazon Route 53 24 Amazon S3 Glacier 24 Amazon S3 Transfer Acceleration 24 Amazon SageMaker 24 Amazon SNS 25 Amazon Simple Email Service (Amazon SES) 25 Amazon SQS 25 Amazon S3 26 Amazon Simple Workflow Service 26 Amazon Textract 26 Amazon Transcribe 27 Amazon Translate 27 Amazon Virtual Private Cloud 27 Amazon WorkDocs 27 Amazon WorkSpaces 28 AWS App Mesh 28 AWS Auto Scaling 28 AWS Backup 29 AWS Batch 29 AWS Certificate Manager 30 AWS Cloud Map 30 AWS CloudFormation 30 AWS CloudHSM 30 AWS CloudTrail 30 AWS CodeBuild 31 AWS CodeDeploy 31 AWS CodeCommit 31 AWS CodePipeline 31 AWS Config 32 AWS Data Exchange 32 AWS Database Migration Service 32 AWS DataSync 33 AWS Directory Service 33 AWS Directory Service for Microsoft AD 33 Amazon Cloud Directory 33 AWS Elastic Beanstalk 33 AWS Fargate 34 AWS Firewall Manager 34 AWS Global Accelerator 34 AWS Glue 35 AWS Glue DataBrew 35 AWS IoT Core and AWS IoT Device Management 35 AWS IoT Greengrass 35 AWS Lambda 35 AWS 
Managed Services 36 AWS Mobile Hub 36 AWS OpsWorks for Chef Automate 36 AWS OpsWorks for Puppet Enterprise 36 AWS OpsWorks Stack 37 ivArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper AWS Organizations 37 AWS RoboMaker 37 AWS SDK Metrics 37 AWS Secrets Manager 38 AWS Security Hub 38 AWS Server Migration Service 38 AWS Serverless Application Repository 39 AWS Service Catalog 39 AWS Shield 39 AWS Snowball 39 AWS Snowball Edge 40 AWS Snowmobile 40 AWS Step Functions 40 AWS Storage Gateway 40 File Gateway 41 Volume Gateway 41 Tape Gateway 41 AWS Systems Manager 41 AWS Transfer for SFTP 41 AWS WAF – Web Application Firewall 42 AWS XRay 42 Elastic Load Balancing 42 FreeRTOS 42 Using AWS KMS for Encryption of PHI 43 VM Import/Export 43 Auditing backups and disaster recovery 44 Document revisions 45 Notices 48 vArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Architecting for HIPAA Security and Compliance on Amazon Web Services Publication date: September 9 2021 (Document revisions (p 45)) This paper briefly outlines how customers can use Amazon Web Services (AWS) to run sensitive workloads regulated under the US Health Insurance Portability and Accountability Act (HIPAA) We will focus on the HIPAA Privacy and Security Rules for protecting Protected Health Information (PHI) how to use AWS to encrypt data in transit and atrest and how AWS features can be used to run workloads containing PHI 1ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Introduction The Health Insurance Portability and Accountability Act of 1996 (HIPAA) applies to “covered entities” and “business associates” HIPAA was expanded in 2009 by the Health Information Technology for Economic and Clinical Health (HITECH) Act HIPAA and HITECH establish a set of federal standards intended to protect the security and privacy of PHI HIPAA and HITECH impose requirements related to the use and disclosure of protected health information (PHI) appropriate safeguards to protect PHI individual rights and administrative responsibilities For more information on HIPAA and HITECH go to the Health Information Privacy Home Covered entities and their business associates can use the secure scalable lowcost IT components provided by Amazon Web Services (AWS) to architect applications in alignment with HIPAA and HITECH compliance requirements AWS offers a commercialofftheshelf infrastructure platform with industry recognized certifications and audits such as ISO 27001 FedRAMP and the Service Organization Control Reports (SOC1 SOC2 and SOC3) AWS services and data centers have multiple layers of operational and physical security to help ensure the integrity and safety of customer data With no minimum fees no termbased contracts required and payasyouuse pricing AWS is a reliable and effective solution for growing healthcare industry applications AWS enables covered entities and their business associates subject to HIPAA to securely process store and transmit PHI Additionally as of July 2013 AWS offers a standardized Business Associate Addendum (BAA) for such customers Customers who execute an AWS BAA may use any AWS service in an account designated as a HIPAA Account but they may only process store and transmit PHI using the HIPAA eligible services defined in the AWS BAA For a complete list of these services see the HIPAA Eligible Services Reference page AWS maintains a standardsbased risk management program to ensure that the 
HIPAAeligible services specifically support HIPAA administrative technical and physical safeguards Using these services to store process and transmit PHI helps our customers and AWS to address the HIPAA requirements applicable to the AWS utilitybased operating model AWS’s BAA requires customers to encrypt PHI stored in or transmitted using HIPAAeligible services in accordance with guidance from the Secretary of Health and Human Services (HHS): Guidance to Render Unsecured Protected Health Information Unusable Unreadable or Indecipherable to Unauthorized Individuals (“Guidance”) Please refer to this site because it may be updated and may be made available on a successor (or related) site designated by HHS AWS offers a comprehensive set of features and services to make key management and encryption of PHI easy to manage and simpler to audit including the AWS Key Management Service (AWS KMS) Customers with HIPAA compliance requirements have a great deal of flexibility in how they meet encryption requirements for PHI When determining how to implement encryption customers can evaluate and take advantage of the encryption features native to the HIPAAeligible services Or customers can satisfy the encryption requirements through other means consistent with the guidance from HHS 2ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Encryption and protection of PHI in AWS The HIPAA Security Rule includes addressable implementation specifications for the encryption of PHI in transmission (“in transit”) and in storage (“at rest”) Although this is an addressable implementation specification in HIPAA AWS requires customers to encrypt PHI stored in or transmitted using HIPAA eligible services in accordance with guidance from the Secretary of Health and Human Services (HHS): Guidance to Render Unsecured Protected Health Information Unusable Unreadable or Indecipherable to Unauthorized Individuals (“Guidance”) Please refer to this site because it may be updated and may be made available on a successor (or related site) designated by HHS AWS offers a comprehensive set of features and services to make key management and encryption of PHI easy to manage and simpler to audit including the AWS Key Management Service (AWS KMS) Customers with HIPAA compliance requirements have a great deal of flexibility in how they meet encryption requirements for PHI When determining how to implement encryption customers may evaluate and take advantage of the encryption features native to the HIPAAeligible services or they can satisfy the encryption requirements through other means consistent with the guidance from HHS The following sections provide high level details about using available encryption features in each of the HIPAAeligible services and other patterns for encrypting PHI and how AWS KMS can be used to encrypt the keys used for encryption of PHI on AWS Topics •Alexa for Business (p 6) •Amazon API Gateway (p 6) •Amazon AppFlow (p 7) •Amazon AppStream 20 (p 7) •Amazon Athena (p 7) •Amazon Aurora (p 8) •Amazon Aurora PostgreSQL (p 8) •Amazon CloudFront (p 8) •Amazon CloudWatch (p 9) •Amazon CloudWatch Events (p 9) •Amazon CloudWatch Logs (p 9) •Amazon Comprehend (p 9) •Amazon Comprehend Medical (p 9) •Amazon Connect (p 9) •Amazon DocumentDB (with MongoDB compatibility) (p 10) •Amazon DynamoDB (p 10) •Amazon Elastic Block Store (p 10) •Amazon Elastic Compute Cloud (p 11) •Amazon Elastic Container Registry (p 11) •Amazon Elastic Container Service (p 11) •Amazon Elastic File System 
(Amazon EFS) (p 12) 3ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper •Amazon Elastic Kubernetes Service (Amazon EKS) (p 12) •Amazon ElastiCache for Redis (p 12) •Amazon OpenSearch Service (p 14) •Amazon EMR (p 14) •Amazon EventBridge (p 14) •Amazon Forecast (p 15) •Amazon FSx (p 15) •Amazon GuardDuty (p 16) •Amazon HealthLake (p 16) •Amazon Inspector (p 16) •Amazon Kinesis Data Analytics (p 16) •Amazon Kinesis Data Firehose (p 17) •Amazon Kinesis Streams (p 17) •Amazon Kinesis Video Streams (p 17) •Amazon Lex (p 17) •Amazon Managed Streaming for Apache Kafka (Amazon MSK) (p 18) •Amazon MQ (p 18) •Amazon Neptune (p 19) •AWS Network Firewall (p 19) •Amazon Pinpoint (p 19) •Amazon Polly (p 20) •Amazon Quantum Ledger Database (Amazon QLDB) (p 20) •Amazon QuickSight (p 21) •Amazon RDS for MariaDB (p 21) •Amazon RDS for MySQL (p 21) •Amazon RDS for Oracle (p 22) •Amazon RDS for PostgreSQL (p 22) •Amazon RDS for SQL Server (p 22) •Amazon Redshift (p 23) •Amazon Rekognition (p 23) •Amazon Route 53 (p 24) •Amazon S3 Glacier (p 24) •Amazon S3 Transfer Acceleration (p 24) •Amazon SageMaker (p 24) •Amazon Simple Notification Service (Amazon SNS) (p 25) •Amazon Simple Email Service (Amazon SES) (p 25) •Amazon Simple Queue Service (Amazon SQS) (p 25) •Amazon Simple Storage Service (Amazon S3) (p 26) •Amazon Simple Workflow Service (p 26) •Amazon Textract (p 26) •Amazon Transcribe (p 27) •Amazon Translate (p 27) •Amazon Virtual Private Cloud (p 27) •Amazon WorkDocs (p 27) 4ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper •Amazon WorkSpaces (p 28) •AWS App Mesh (p 28) •AWS Auto Scaling (p 28) •AWS Backup (p 29) •AWS Batch (p 29) •AWS Certificate Manager (p 30) •AWS Cloud Map (p 30) •AWS CloudFormation (p 30) •AWS CloudHSM (p 30) •AWS CloudTrail (p 30) •AWS CodeBuild (p 31) •AWS CodeDeploy (p 31) •AWS CodeCommit (p 31) •AWS CodePipeline (p 31) •AWS Config (p 32) •AWS Data Exchange (p 32) •AWS Database Migration Service (p 32) •AWS DataSync (p 33) •AWS Directory Service (p 33) •AWS Elastic Beanstalk (p 33) •AWS Fargate (p 34) •AWS Firewall Manager (p 34) •AWS Global Accelerator (p 34) •AWS Glue (p 35) •AWS Glue DataBrew (p 35) •AWS IoT Core and AWS IoT Device Management (p 35) •AWS IoT Greengrass (p 35) •AWS Lambda (p 35) •AWS Managed Services (p 36) •AWS Mobile Hub (p 36) •AWS OpsWorks for Chef Automate (p 36) •AWS OpsWorks for Puppet Enterprise (p 36) •AWS OpsWorks Stack (p 37) •AWS Organizations (p 37) •AWS RoboMaker (p 37) •AWS SDK Metrics (p 37) •AWS Secrets Manager (p 38) •AWS Security Hub (p 38) •AWS Server Migration Service (p 38) •AWS Serverless Application Repository (p 39) •AWS Service Catalog (p 39) •AWS Shield (p 39) •AWS Snowball (p 39) •AWS Snowball Edge (p 40) •AWS Snowmobile (p 40) 5ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Alexa for Business •AWS Step Functions (p 40) •AWS Storage Gateway (p 40) •AWS Systems Manager (p 41) •AWS Transfer for SFTP (p 41) •AWS WAF – Web Application Firewall (p 42) •AWS XRay (p 42) •Elastic Load Balancing (p 42) •FreeRTOS (p 42) •Using AWS KMS for Encryption of PHI (p 43) •VM Import/Export (p 43) Alexa for Business Alexa for Business makes it easy to configure install and manage fleets of Alexaenabled devices in the enterprise Alexa for Business allows the enterprise to control which skills (Alexa apps) are available to its users and which corporate resources (email calendar directories etc) designated Alexa skills have 
access to. Through this access, it extends Alexa’s capabilities with new enterprise-specific skills, such as starting meetings and checking whether conference rooms are booked.

The Alexa for Business system consists of two components. The first is the Alexa for Business management console, an AWS service that configures and monitors the Alexa-enabled hardware and allows configuration of the system. It also provides the hooks so that designated Alexa skills can access corporate resources. The second is the Alexa system, which processes end-user queries and commands, takes action, and provides responses. The Alexa system is not an AWS service.

The Alexa for Business management console does not process or store any PHI. Therefore, Alexa for Business can be used in conjunction with Alexa skills that do not process PHI, such as starting meetings, checking on conference rooms, or using any other Alexa skill that does not process PHI. If customers want to process PHI with Alexa and Alexa for Business, they must use a HIPAA-eligible Alexa skill and sign a BAA with the Alexa organization. Customers can find out more about building HIPAA-eligible Alexa skills at Alexa Healthcare Skills.

Amazon API Gateway

Customers can use Amazon API Gateway to process and transmit protected health information (PHI). While Amazon API Gateway automatically uses HTTPS endpoints for encryption in flight, customers can also choose to encrypt payloads client-side. API Gateway passes all non-cached data through memory and does not write it to disk. Customers can use AWS Signature Version 4 for authorization with API Gateway. For more information, see the following:

• Amazon API Gateway FAQs: Security and Authorization
• Controlling and managing access to a REST API in API Gateway

Customers can integrate with any service that is connected to API Gateway, provided that, when PHI is involved, the service is configured consistent with the Guidance and the BAA. For information on integrating API Gateway with backend services, see Set up REST API methods in API Gateway. Customers can use AWS CloudTrail and Amazon CloudWatch to enable logging that is consistent with their logging requirements. Ensure that any PHI sent through API Gateway (such as in headers, URLs, and request/response bodies) is captured only by HIPAA-eligible services that have been configured to be consistent with the Guidance. For more information on logging with API Gateway, see How do I enable CloudWatch Logs for troubleshooting my API Gateway REST API or WebSocket API?
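When IAM authorization is enabled on an API, requests to the HTTPS endpoint must be signed with AWS Signature Version 4. The following is a minimal sketch of signing and sending such a request with botocore and the Python standard library; the endpoint URL, region, and payload are hypothetical placeholders, and the snippet assumes credentials are available from the environment.

# Minimal SigV4-signed request to a hypothetical API Gateway HTTPS endpoint.
import json
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

REGION = "us-east-1"  # assumption: region where the API is deployed
ENDPOINT = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/records"  # hypothetical URL

credentials = boto3.Session().get_credentials().get_frozen_credentials()

# Keep identifiers opaque in the URL and headers; send sensitive fields only in the request body.
payload = json.dumps({"recordId": "example-001"})
request = AWSRequest(method="POST", url=ENDPOINT, data=payload,
                     headers={"Content-Type": "application/json"})
SigV4Auth(credentials, "execute-api", REGION).add_auth(request)  # sign for the execute-api service

req = urllib.request.Request(ENDPOINT, data=payload.encode("utf-8"),
                             headers=dict(request.headers), method="POST")
with urllib.request.urlopen(req) as resp:  # HTTPS only; TLS protects the payload in transit
    print(resp.status, resp.read().decode("utf-8"))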
Amazon AppFlow Amazon AppFlow is a fully managed integration service that enables customers to securely transfer data between SoftwareasaService (SaaS) applications such as Salesforce Marketo Slack and ServiceNow and AWS services such as Amazon S3 and Amazon Redshift AppFlow can run data flows at a frequency the customer chooses on a schedule in response to a business event or on demand Customers can also configure data transformation capabilities like filtering and validation to generate rich readytouse data as part of the flow itself without additional steps Amazon AppFlow can be used to process and transfer data containing PHI Encryption of data while in transit between AppFlow and the configured source/destination is provided by default using TLS 12 or later Data stored atrest in S3 is automatically encrypted using an AWS KMS customer master key (CMK) that is specified by the customer For PHI data transferred to non S3 destinations customers must ensure the atrest storage for the chosen destination meets their security needs AppFlow enables application monitoring by integrating with AWS CloudTrail to log API calls and Amazon EventBridge to emit flow execution events Amazon AppStream 20 Amazon AppStream 20 is a fully managed application streaming service Customers own their data and must configure the necessary Windows applications in a manner that meets their regulatory requirements Customers are able to configure persistent storage via Home Folders Files and folders are encrypted in transit using Amazon S3's SSL endpoints Files and folders are encrypted atrest using Amazon S3managed encryption keys For more information see Enable and Administer Persistent Storage for Your AppStream 20 Users If customers choose to use a thirdparty storage solution they are responsible for ensuring the configuration of that solution is consistent with the guidance All public API communication with Amazon AppStream 20 is encrypted using TLS For more information please see Amazon AppStream 20 Documentation Amazon AppStream 20 is integrated with AWS CloudTrail a service that logs API calls made by or on behalf of Amazon AppStream 20 in customer’s AWS account and delivers the log files to the specified Amazon S3 bucket CloudTrail captures API calls made from the Amazon AppStream 20 console or from the Amazon AppStream 20 API Customers can also use Amazon CloudWatch to log resource usage metrics For more information see Monitoring Amazon AppStream 20 Resources and Logging AppStream 20 API Calls with AWS CloudTrail Amazon Athena Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL Athena helps customers analyze unstructured semistructured and structured data stored in Amazon S3 Examples include CSV JSON or columnar data formats such as Apache Parquet and Apache ORC Customers can use Athena to run ad hoc queries using ANSI SQL without the need to aggregate or load the data into Athena Amazon Athena can now be used to process data containing PHI Encryption of data while in transit between Amazon Athena and S3 is provided by default using SSL/TLS Encryption of PHI while atrest on S3 should be performed according to the guidance provided in the S3 section Encryption of query results from and within Amazon Athena including staged results should be enabled using serverside encryption with Amazon S3 managed keys (SSES3) AWS KMSmanaged keys (SSEKMS) or clientside 7ArchivedArchitecting for HIPAA Security and Compliance 
on Amazon Web Services AWS Whitepaper Amazon Aurora encryption with AWS KMSmanaged keys (CSEKMS) Amazon Athena uses AWS CloudTrail to log all API calls Amazon Aurora Amazon Aurora allows customers to encrypt Aurora database clusters and snapshots at rest using keys that they manage through AWS KMS On a database instance running with Amazon Aurora encryption data stored atrest in the underlying storage is encrypted as are automated backups read replicas and snapshots Because the Guidance might be updated customers should continue to evaluate and determine whether Amazon Aurora encryption satisfies their compliance and regulatory requirements For more information on encryption atrest using Amazon Aurora see Protecting data using encryption Connections to DB clusters running Aurora MySQL must use transport encryption utilizing Secure Socket Layer (SSL) or Transport Layer Security (TLS) For more information on implementing SSL/TLS see Using SSL/TLS with Aurora MySQL DB clusters Amazon Aurora PostgreSQL Amazon Aurora allows customers to encrypt Aurora database clusters and snapshots at rest using keys that they manage through AWS KMS On a database instance running with Amazon Aurora encryption data stored atrest in the underlying storage is encrypted as are automated backups read replicas and snapshots Because the Guidance might be updated customers should continue to evaluate and determine whether Amazon Aurora encryption satisfies their compliance and regulatory requirements For more information on encryption atrest using Amazon Aurora see Protecting data using encryption Connections to DB clusters running Aurora PostgreSQL must use transport encryption utilizing Secure Socket Layer (SSL) or Transport Layer Security (TLS) For more information on implementing SSL/TLS see Securing Aurora PostgreSQL data with SSL Amazon CloudFront Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of customer websites APIs video content or other web assets It integrates with other Amazon Web Services products to give developers and businesses an easy way to accelerate content to end users with no minimum usage commitments To ensure encryption of PHI while in transit with CloudFront customers must configure CloudFront to use HTTPS endtoend from the origin to the viewer This includes traffic between CloudFront and the viewer CloudFront redistributing from a custom origin and CloudFront distributing from an Amazon S3 origin Customers should also ensure that the data is encrypted at the origin to ensure it remains encrypted atrest while cached in CloudFront If using Amazon S3 as an origin customers can make use of S3 serverside encryption features If customers distribute from a custom origin they must ensure that the data is encrypted at the origin Lambda@Edge Lambda@Edge is a compute service that allows for the execution of Lambda functions at AWS edge locations Lambda@Edge can be used to customize content delivered through CloudFront When using Lambda@Edge with PHI customers should follow the Guidance for the use of CloudFront All connections into and out of Lambda@Edge should be encrypted using HTTPS or SSL/TLS 8ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon CloudWatch Amazon CloudWatch Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications that customers run on AWS Customers can use Amazon CloudWatch to collect and track metrics collect and monitor log files and set alarms Amazon 
CloudWatch itself does not produce store or transmit PHI Customers can monitor CloudWatch API calls with AWS CloudTrail For more information see Logging Amazon CloudWatch API Calls with AWS CloudTrail For more details on configuration requirements see the Amazon CloudWatch Logs section Amazon CloudWatch Events Amazon CloudWatch Events delivers a nearrealtime stream of system events that describe changes in AWS resources Customers should ensure that PHI does not flow into CloudWatch Events and any AWS resource emitting a CloudWatch event that is storing processing or transmitting PHI is configured in accordance with the Guidance Customers can configure Amazon CloudWatch Events to register as an AWS API call in CloudTrail For more information see Creating a CloudWatch Events Rule That Triggers on an AWS API Call Using AWS CloudTrail Amazon CloudWatch Logs Customers can use Amazon CloudWatch Logs to monitor store and access their log files from Amazon Elastic Compute Cloud (Amazon EC2) instances AWS CloudTrail Amazon Route 53 and other sources They can then retrieve the associated log data from CloudWatch Logs Log data is encrypted while in transit and while it is atrest As a result it is not necessary to reencrypt PHI emitted by any other service and delivered to CloudWatch Logs Amazon Comprehend Amazon Comprehend uses natural language processing to extract insights about the content of documents Amazon Comprehend processes any text file in UTF8 format It develops insights by recognizing the entities key phrases language sentiments and other common elements in a document Amazon Comprehend can be used with data containing PHI Amazon Comprehend does not retain or store any data and all calls to the API are encrypted with SSL/TLS Amazon Comprehend uses CloudTrail to log all API calls Amazon Comprehend Medical For guidance see the previous Amazon Comprehend (p 9) section Amazon Connect Amazon Connect is a selfservice cloudbased contact center service that enables dynamic personal and natural customer engagement at any scale Customers should not include any PHI in any fields associated with managing users security profiles and contact flows within Amazon Connect 9ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon DocumentDB (with MongoDB compatibility) Amazon Connect Customer Profiles a feature of Amazon Connect equips contact center agents with a more unified view of a customer’s profile with the most up to date information to provide more personalized customer service Customer Profiles is designed to automatically bring together customer information from multiple applications into a unified customer profile delivering the profile directly to the agent as soon as the support call or interaction begins Customers should refrain from naming domains or object keys with PHI data The contents of Domains and Objects are encrypted and protected but the key identifiers are not Amazon DocumentDB (with MongoDB compatibility) Amazon DocumentDB (with MongoDB compatibility) (Amazon DocumentDB) offers encryption at rest during cluster creation via AWS KMS which allows customers to encrypt databases using AWS or customermanaged keys On a database instance running with encryption enabled data stored atrest is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper as are automated backups read replicas and snapshots Because the Guidance might be updated customers should continue to evaluate and determine whether Amazon 
DocumentDB encryption satisfies their compliance and regulatory requirements For more information on encryption atrest using Amazon DocumentDB see Encrypting Amazon DocumentDB Data at Rest Connections to Amazon DocumentDB containing PHI must use endpoints that accept encrypted transport (HTTPS) By default a newly created Amazon DocumentDB cluster only accepts secure connections using Transport Layer Security (TLS) For more information see Encrypting Data in Transit Amazon DocumentDB uses AWS CloudTrail to log all API calls For more information see Logging and Monitoring in Amazon DocumentDB For certain management features Amazon DocumentDB uses operational technology that is shared with Amazon RDS Amazon DocumentDB console AWS CLI and API calls are logged as calls made to the Amazon RDS API Amazon DynamoDB Connections to Amazon DynamoDB containing PHI must use endpoints that accept encrypted transport (HTTPS) For a list of regional endpoints see AWS service endpoints Amazon DynamoDB offers DynamoDB encryption which allows customers to encrypt databases using keys that customers manage through AWS KMS On a database instance running with Amazon DynamoDB encryption data stored atrest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper as are automated backups read replicas and snapshots Because the Guidance might be updated customers should continue to evaluate and determine whether Amazon DynamoDB encryption satisfies their compliance and regulatory requirements For more information on encryption atrest using Amazon DynamoDB see DynamoDB Encryption at Rest Amazon Elastic Block Store Amazon EBS encryption atrest is consistent with the Guidance that is in effect at the time of publication of this whitepaper Because the Guidance might be updated customers should continue to evaluate and determine whether Amazon EBS encryption satisfies their compliance and regulatory requirements With Amazon EBS encryption a unique volume encryption key is generated for each EBS volume Customers 10ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon EC2 have the flexibility to choose which master key from the AWS Key Management Service is used to encrypt each volume key For more information see Amazon EBS encryption Amazon Elastic Compute Cloud Amazon EC2 is a scalable userconfigurable compute service that supports multiple methods for encrypting data at rest For example customers might elect to perform application or fieldlevel encryption of PHI as it is processed within an application or database platform hosted in an Amazon EC2 instance Approaches range from encrypting data using standard libraries in an application framework such as Java or NET; leveraging Transparent Data Encryption features in Microsoft SQL or Oracle; or by integrating other thirdparty and software as a service (SaaS)based solutions into their applications Customers can choose to integrate their applications running in Amazon EC2 with AWS KMS SDKs simplifying the process of key management and storage Customers can also implement encryption of data at rest using filelevel or full disk encryption (FDE) by using thirdparty software from AWS Marketplace Partners or native file system encryption tools (such as dmcrypt LUKS etc) Network traffic containing PHI must encrypt data in transit For traffic between external sources (such as the internet or a traditional IT environment) and Amazon EC2 customers should use open standard 
transport encryption mechanisms such as Transport Layer Security (TLS) or IPsec virtual private networks (VPNs) consistent with the Guidance Internal to an Amazon Virtual Private Cloud (VPC) for data traveling between Amazon EC2 instances network traffic containing PHI must also be encrypted; most applications support TLS or other protocols providing in transit encryption that can be configured to be consistent with the Guidance For applications and protocols that do not support encryption sessions transmitting PHI can be sent through encrypted tunnels using IPsec or similar implementations between instances Amazon Elastic Container Registry Amazon Elastic Container Registry (Amazon ECR) is integrated with Amazon Elastic Container Service (Amazon ECS) and allows customers to easily store run and manage container images for applications running on Amazon ECS After customers specify the Amazon ECR repository in their Task Definition Amazon ECS will retrieve the appropriate images for their applications No special steps are required to use Amazon ECR with container images that contain PHI Container images are encrypted while in transit and stored encrypted while atrest using Amazon S3 serverside encryption (SSES3) Amazon Elastic Container Service Amazon Elastic Container Service (Amazon ECS) is a highly scalable highperformance container management service that supports Docker containers and allows customers to easily run applications on a managed cluster of Amazon EC2 instances Amazon ECS eliminates the need for customers to install operate and scale their own cluster management infrastructure With simple API calls customers can launch and stop Dockerenabled applications query the complete state of their cluster and access many familiar features like security groups Elastic Load Balancing EBS volumes and IAM roles Customers can use Amazon ECS to schedule the placement of containers across their cluster based on their resource needs and availability requirements Using ECS with workloads that process PHI requires no additional configuration ECS acts as an orchestration service that coordinates the launch of containers (images for which are stored in S3) on EC2 and it does not operate with or upon data within the workload being orchestrated Consistent with 11ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon EFS HIPAA regulations and the AWS Business Associate Addendum PHI should be encrypted in transit and atrest when accessed by containers launched with ECS Various mechanisms for encrypting atrest are available with each AWS storage option (for example S3 EBS and KMS) Ensuring complete encryption of PHI sent between containers may also lead customers to deploy an overlay network (such as VNS3 Weave Net or similar) in order to provide a redundant layer of encryption Nevertheless complete logging should also be enabled (for example through CloudTrail) and all container instance logs should be directed to CloudWatch Amazon Elastic File System (Amazon EFS) Amazon Elastic File System (Amazon EFS) provides simple scalable elastic file storage for use with AWS Cloud services and onpremises resources It is easy to use and offers a simple interface that allows customers to create and configure file systems quickly and easily Amazon EFS is built to elastically scale on demand without disrupting applications growing and shrinking automatically as customers add and remove files To satisfy the requirement that PHI be encrypted atrest two paths are available 
on EFS. EFS supports encryption at rest when a new file system is created. During creation, the option “Enable encryption of data at rest” should be selected. Selecting this option ensures that all data placed on the EFS file system will be encrypted using AES-256 encryption and AWS KMS-managed keys. Customers may alternatively choose to encrypt data before it is placed on EFS, but they are then responsible for managing the encryption process and key management. PHI should not be used as all or part of any file name or folder name.

Encryption of PHI while in transit for Amazon EFS is provided by Transport Layer Security (TLS) between the EFS service and the instance mounting the file system. EFS offers a mount helper to facilitate connecting to a file system using TLS. By default, TLS is not used and must be enabled when mounting the file system with the EFS mount helper; ensure that the mount command contains the “-o tls” option to enable TLS encryption. Alternatively, customers who choose not to use the EFS mount helper can follow the instructions in the EFS documentation to configure their NFS clients to connect through a TLS tunnel.

Amazon Elastic Kubernetes Service (Amazon EKS)

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for customers to run Kubernetes on AWS without needing to stand up or maintain their own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Using Amazon EKS with workloads that process PHI data requires no additional configuration. Amazon EKS operates as an orchestration service, coordinating the launch of containers (the images for which are stored in S3) on EC2, and does not directly operate with or upon data within the workload being orchestrated. Amazon EKS uses AWS CloudTrail to log all API calls.

Amazon ElastiCache for Redis

Amazon ElastiCache for Redis is a Redis-compatible in-memory data structure service that can be used as a data store or cache. In order to store PHI, customers must ensure that they are running the latest HIPAA-eligible ElastiCache for Redis engine version and current-generation node types. Amazon ElastiCache for Redis supports storing PHI for the following node types and Redis engine versions:

• Node types: current generation only (for example, as of the time of publication of this whitepaper, M4, M5, R4, R5, T2, T3)
• ElastiCache for Redis engine version: 3.2.6 and 4.0.10 onwards

For more information about choosing current-generation nodes, see Amazon ElastiCache pricing. For more information about choosing an ElastiCache for Redis engine, see What Is Amazon ElastiCache for Redis?
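The at-rest encryption, in-transit encryption, and Redis AUTH settings that the following subsections describe are all specified when the replication group is created. The sketch below shows one way to do this with boto3; the replication group name, node type, engine version, AUTH token, and KMS key ID are hypothetical placeholders and should be replaced with values appropriate for your environment.

# Minimal sketch of creating an ElastiCache for Redis replication group with
# encryption at rest, in-transit TLS, and a Redis AUTH token (hypothetical values).
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # assumption: region

response = elasticache.create_replication_group(
    ReplicationGroupId="phi-cache",                          # hypothetical identifier
    ReplicationGroupDescription="Redis replication group for a PHI workload",
    Engine="redis",
    EngineVersion="6.2",                                     # assumption: a HIPAA-eligible engine version
    CacheNodeType="cache.r5.large",                          # current-generation node type
    NumCacheClusters=2,
    AtRestEncryptionEnabled=True,                            # encrypt data on disk and backups
    TransitEncryptionEnabled=True,                           # require TLS between clients and nodes
    AuthToken="replace-with-a-strong-16-to-128-character-token",  # Redis AUTH token
    KmsKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",         # hypothetical customer-managed KMS key ID
)
print(response["ReplicationGroup"]["Status"])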
Customers must also ensure that the cluster and the nodes within the cluster are configured to encrypt data at rest, enable transport encryption, and enable authentication of Redis commands. In addition, customers must ensure that their Redis clusters are updated with the latest ‘Security’ type service updates on or before the ‘Recommended Apply by Date’ (the date by which it is recommended the update be applied) at all times. For more information, see the sections below.

Topics
• Encryption at Rest (p 13)
• Transport Encryption (p 13)
• Authentication (p 13)
• Applying ElastiCache Service Updates (p 14)

Encryption at Rest

Amazon ElastiCache for Redis provides data encryption for its cluster to help protect the data at rest. When customers enable encryption at rest for a cluster at the time of creation, Amazon ElastiCache for Redis encrypts data on disk and automated Redis backups. Customer data on disk is encrypted using hardware-accelerated Advanced Encryption Standard (AES)-512 symmetric keys. Redis backups are encrypted through Amazon S3-managed encryption keys (SSE-S3). An S3 bucket with server-side encryption enabled will encrypt the data using hardware-accelerated Advanced Encryption Standard (AES)-256 symmetric keys before saving it in the bucket. For more details on Amazon S3-managed encryption keys (SSE-S3), see Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3).

On an ElastiCache Redis cluster (single or multi-node) running with encryption, data stored at rest is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper. This includes data on disk and automated backups in the S3 bucket. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon ElastiCache for Redis encryption satisfies their compliance and regulatory requirements. For more information about encryption at rest using Amazon ElastiCache for Redis, see What Is Amazon ElastiCache for Redis?
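As a client-side companion to the transport encryption and authentication subsections that follow, this is a minimal sketch of connecting to a TLS-enabled replication group endpoint with the Redis AUTH token using the redis-py library; the endpoint hostname and token are hypothetical placeholders.

# Minimal client connection sketch for a TLS- and AUTH-enabled Redis endpoint.
import redis  # redis-py

client = redis.Redis(
    host="phi-cache.xxxxxx.use1.cache.amazonaws.com",   # hypothetical primary endpoint
    port=6379,
    ssl=True,                                           # required when in-transit encryption is enabled
    password="replace-with-the-redis-auth-token",       # the Redis AUTH token set on the cluster
)

# Keep key names free of PHI; use opaque identifiers instead.
client.set("session:0001", "opaque-token")
print(client.get("session:0001"))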
Transport Encryption Amazon ElastiCache for Redis uses TLS to encrypt the data in transit Connections to ElastiCache for Redis containing PHI must use transport encryption and evaluate the configuration for consistency with the Guidance For more information see CreateReplicationGroup For more information on enabling transport encryption see ElastiCache for Redis InTransit Encryption (TLS) Authentication Amazon ElastiCache for Redis clusters (single/multi node) that contain PHI must provide a Redis AUTH token to enable authentication of Redis commands Redis AUTH is available when both encryption at rest and encryptionin transit are enabled Customers should provide a strong token for Redis AUTH with following constraints: • Must be only printable ASCII characters • Must be at least 16 characters and no more than 128 characters in length • Cannot contain any of the following characters: '/' '"' or "@" 13ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Applying ElastiCache Service Updates This token must be set from within the Request Parameter at the time of Redis replication group (single/ multi node) creation and can be updated later with a new value AWS encrypts this token using AWS Key Management Service (AWS KMS) For more information on Redis AUTH see ElastiCache for Redis In Transit Encryption (TLS) Applying ElastiCache Service Updates Amazon ElastiCache for Redis clusters (single/multi node) that contain PHI must be updated with the latest ‘Security’ type service updates on or before the ‘Recommended Apply by Date’ ElastiCache offers this as a selfservice feature that customers can use to apply the updates anytime on demand and in real time Each service update comes with a ‘Severity’ and ‘Recommended Apply by Date’ and is available only for the applicable Redis replication groups The ‘SLA Met’ field in the service update feature will state whether the update was applied on or before the ‘Recommended Apply by Date’ If customers choose to not apply the updates to the applicable Redis replication groups by the ‘Recommended Apply by Date’ ElastiCache will not take any action to apply them Customers can use the service updates history dashboard to review the application of updates to their Redis replication groups over time For more information on how to use this feature see Self Service Updates in Amazon ElastiCache Amazon OpenSearch Service Amazon OpenSearch Service (OpenSearch Service) enables customers to run a managed OpenSearch cluster in a dedicated Amazon Virtual Private Cloud (Amazon VPC) When using OpenSearch Service with PHI customers should use OpenSearch 60 or later Customers should ensure PHI is encrypted at rest and intransit within Amazon OpenSearch Service Customers may use AWS KMS key encryption to encrypt data at rest in their OpenSearch Service domains which is only available for OpenSearch 51 or later For more information about how to encrypt data at rest see Encryption of Data at Rest for Amazon OpenSearch Service Each OpenSearch Service domain runs in its own VPC Customers should enable nodetonode encryption which is available in OpenSearch 60 or later If customers send data to OpenSearch Service over HTTPS nodetonode encryption helps ensure that customer’s data remains encrypted as OpenSearch distributes (and redistributes) it throughout the cluster If data arrives unencrypted over HTTP OpenSearch Service encrypts the data after it reaches the cluster Therefore any PHI that enters an Amazon OpenSearch Service cluster should 
be sent over HTTPS For more information see Nodeto node Encryption for Amazon OpenSearch Service Logs from the OpenSearch Service configuration API can be captured in AWS CloudTrail For more information see Managing Amazon OpenSearch Service Domains Amazon EMR Amazon EMR deploys and manages a cluster of Amazon EC2 instances into a customer’s account For information on encryption with Amazon EMR see Encryption Options Amazon EventBridge Amazon EventBridge (formerly Amazon CloudWatch Events) is a serverless event bus that enables you to create scalable eventdriven applications EventBridge delivers a stream of real time data from event sources such as Zendesk Datadog or Pagerduty and routes that data to targets like AWS Lambda 14ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon Forecast By default EventBridge encrypts data using 256bit Advanced Encryption Standard (AES256) under an AWS owned CMK which helps secure customer data from unauthorized access Customers should ensure that any AWS resource emitting an event that is storing processing or transmitting PHI is configured in accordance with best practices Amazon EventBridge is integrated with AWS CloudTrail and customers can view the most recent events in the CloudTrail console in Event history For more information see EventBridge Information in CloudTrail Amazon Forecast Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts Based on the same machine learning forecasting technology used by Amazoncom Every interaction customers have with Amazon Forecast is protected by encryption Any content processed by Amazon Forecast is encrypted with customer keys through Amazon Key Management Service and encrypted atrest in the AWS Region where customers are using the service Amazon Forecast is integrated with AWS CloudTrail a service that provides a record of actions taken by a user role or an AWS service in Amazon Forecast CloudTrail captures all API calls for Amazon Forecast as events The calls captured include calls from the Amazon Forecast console and code calls to the Amazon Forecast API operations If customers create a trail customers can enable continuous delivery of CloudTrail events to an Amazon S3 bucket including events for Amazon Forecast For more information see Logging Forecast API Calls with AWS CloudTrail By default the log files delivered by CloudTrail to their bucket are encrypted by Amazon serverside encryption with Amazon S3managed encryption keys (SSES3) To provide a security layer that is directly manageable customers can instead use serverside encryption with AWS KMS–managed keys (SSEKMS) for their CloudTrail log files Enabling serverside encryption encrypts the log files but not the digest files with SSEKMS Digest files are encrypted with Amazon S3managed encryption keys (SSES3) AWS Forecast imports and exports data to/from S3 buckets When importing and exporting data from Amazon S3 customers should ensure S3 buckets are configured in a manner consistent with the guidance For more information see Getting Started Amazon FSx Amazon FSx is a fullymanaged service providing featurerich and highlyperformant file systems Amazon FSx for Windows File Server provides highly reliable and scalable file storage and is accessible over the Server Message Block (SMB) protocol Amazon FSx for Lustre provides highperformance storage for compute workloads and is powered by Lustre the world's most popular highperformance file system Amazon FSx 
supports two forms of encryption for file systems encryption of data in transit and encryption at rest Amazon FSx for Windows File Server also supports logging of all API calls using AWS CloudTrail Encryption of data in transit is supported by Amazon FSx for Windows File Server on compute instances supporting SMB protocol 30 or newer and by Amazon FSx for Lustre on Amazon EC2 instances that support encryption in transit Alternatively customers may encrypt data before storing on Amazon FSx but are then responsible for the encryption process and key management Encryption of data at rest is automatically enabled when creating an Amazon FSx file system using AES256 encryption algorithm and AWS KMSmanaged keys Data and metadata are automatically encrypted before being written to the file system and automatically decrypted before being presented to the application PHI should not be used in any file or folder name 15ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon GuardDuty Amazon GuardDuty Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help customers protect their AWS accounts and workloads It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise Amazon GuardDuty also detects potentially compromised instances or reconnaissance by attackers Amazon GuardDuty continuously monitors and analyzes the following data sources: VPC Flow Logs AWS CloudTrail event logs and DNS logs It uses threat intelligence feeds such as lists of malicious IPs and domains and machine learning to identify unexpected and potentially unauthorized and malicious activity within an AWS environment As such Amazon GuardDuty should not encounter any PHI as this data is not to be stored in any of the AWS based data sources listed above Amazon HealthLake Amazon HealthLake enables customers in the healthcare and life sciences industries to store transform query and analyze health data at petabyte scale Customers can use Amazon HealthLake to transmit process and store PHI Amazon HealthLake encrypts data at rest in customer’s data stores by default All service data and metadata is encrypted with a service owned KMS key Per Fast Healthcare Interoperability Resources (FHIR) specifications if a customer deletes FHIR resource it will only be hidden from retrieval and will be retained by the service for versioning When customers use StartFHIRImportJob API Amazon HealthLake will enforce requirement to export data to an encrypted Amazon S3 bucket Amazon HealthLake also encrypts data in transit It uses Transport Layer Security (TLS) 12 to encrypt data in transit through the public endpoint and through backend services Clients must support TLS 10 or later although AWS recommends TLS 12 or later Clients must also support cipher suites with perfect forward secrecy (PFS) such as Ephemeral DiffieHellman (DHE) or Elliptic Curve Ephemeral Diffie Hellman (ECDHE) Additionally requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal Alternatively customers can use the AWS Security Token Service (AWS STS) to generate temporary security credentials to sign requests Amazon HealthLake is integrated with AWS CloudTrail CloudTrail captures all API calls to Amazon HealthLake as events including calls made as result of interaction with AWS Management Console commandline interface (CLI) and 
programmatically using software development kit (SDK) Amazon Inspector Amazon Inspector is an automated security assessment service for customers seeking to improve their security and compliance of applications deployed on AWS Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices After performing an assessment Amazon Inspector produces a detailed list of security findings prioritized by level of severity Customers may run Amazon Inspector on EC2 instances that contain PHI Amazon Inspector encrypts all data transmitted over the network as well as all telemetry data stored atrest Amazon Kinesis Data Analytics Amazon Kinesis Data Analytics enables customers to quickly author SQL code that continuously reads processes and stores data in near real time Using standard SQL queries on the streaming data customers can construct applications that transform and provide insights into their data Kinesis Data Analytics supports inputs from Kinesis Data Streams and Kinesis Data Firehose delivery streams as 16ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon Kinesis Data Firehose sources for analytics application If the stream is encrypted Kinesis Data Analytics accesses the data in the encrypted stream seamlessly with no further configuration needed Kinesis Data Analytics does not store unencrypted data read from Kinesis Data Streams For more information see Configuring Application Input Kinesis Data Analytics integrates with both AWS CloudTrail and Amazon CloudWatch Logs for application monitoring For more information see Monitoring Tools and Working with Amazon CloudWatch Logs Amazon Kinesis Data Firehose When customers send data from their data producers to their Kinesis data stream Amazon Kinesis Data Streams encrypts data using an AWS KMS key before storing it atrest When the Kinesis Data Firehose delivery stream reads data from the Kinesis stream Kinesis Data Streams first decrypts the data and then sends it to Kinesis Data Firehose Kinesis Data Firehose buffers the data in memory based on the buffering hints specified by the customer It then delivers the data to the destinations without storing the unencrypted data at rest For more information about encryption with Kinesis Data Firehose see Data Protection in Amazon Kinesis Data Firehose AWS provides various tools that customers can use to monitor Amazon Kinesis Data Firehose including Amazon CloudWatch metrics Amazon CloudWatch Logs Kinesis Agent and API logging and history For more information see Monitoring Amazon Kinesis Data Firehose Amazon Kinesis Streams Amazon Kinesis Streams enables customers to build custom applications that process or analyze streaming data for specialized needs The serverside encryption feature allows customers to encrypt data at rest When serverside encryption is enabled Kinesis Streams will use an AWS KMS key to encrypt the data before storing it on disks For more information see Data Protection in Amazon Kinesis Data Streams Connections to Amazon S3 containing PHI must use endpoints that accept encrypted transport (that is HTTPS) For a list of regional endpoints see AWS service endpoints Amazon Kinesis Video Streams Amazon Kinesis Video Streams is a fully managed AWS service that customers can use to stream live video from devices to the AWS Cloud or build applications for realtime video processing or batch oriented video analytics Serverside encryption is a feature in Kinesis Video Streams that automatically 
encrypts data at rest by using an AWS KMS customer master key (CMK) that is specified by the customer Data is encrypted before it is written to the Kinesis Video Streams stream storage layer and it is decrypted after it is retrieved from storage The Amazon Kinesis Video Streams SDK can be used to transmit streaming video data containing PHI By default the SDK uses TLS to encrypt frames and fragments generated by the hardware device on which it is installed The SDK does not manage or affect data stored atrest Amazon Kinesis Video Streams uses AWS CloudTrail to log all API calls Amazon Lex Amazon Lex is an AWS service for building conversational interfaces for applications using voice and text With Amazon Lex the same conversational engine that powers Amazon Alexa is now available to 17ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Amazon Managed Streaming for Apache Kafka (Amazon MSK) any developer enabling customers to build sophisticated natural language chatbots into their new and existing applications Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) so customers can build highly engaging user experiences with lifelike conversational interactions and create new categories of products Lex uses the HTTPS protocol to communicate both with clients as well as other AWS services Access to Lex is APIdriven and appropriate IAM least privilege can be enforced For more information see Data Protection in Amazon Lex Monitoring is important for maintaining the reliability availability and performance of customer’s Amazon Lex chatbots To track the health of Amazon Lex bots use Amazon CloudWatch With CloudWatch customers can get metrics for individual Amazon Lex operations or for global Amazon Lex operations for their account Customers can also set up CloudWatch alarms to be notified when one or more metrics exceeds a threshold that customers define For example customers can monitor the number of requests made to a bot over a particular time period view the latency of successful requests or raise an alarm when errors exceed a threshold Lex is also integrated with AWS CloudTrail to log Lex API calls For more information see Monitoring in Amazon Lex Amazon Managed Streaming for Apache Kafka (Amazon MSK) Amazon MSK provides encryption features for data at rest and for data intransit For data at rest encryption Amazon MSK cluster uses Amazon EBS serverside encryption and AWS KMS keys to encrypt storage volumes For data intransit Amazon MSK clusters have encryption enabled via TLS for inter broker communication The encryption configuration setting is enabled when a cluster is created Also by default intransit encryption is set to TLS for clusters created from CLI or AWS Console Additional configuration is required for clients to communicate with clusters using TLS encryption Customers can change the default encryption setting by selecting the TLS/plaintext settings For more information see Amazon MSK Encryption Customers can monitor the performance of customer’s clusters using the Amazon MSK console Amazon CloudWatch console or customers can access JMX and host metrics using Open Monitoring with Prometheus an open source monitoring solution Tools that are designed to read from Prometheus exporters are compatible with Open Monitoring like: Datadog Lenses New Relic Sumologic or a Prometheus server For details on Open Monitoring see Amazon MSK Open Monitoring documentation 
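Because the MSK encryption settings described above are fixed at cluster creation, the following is a minimal sketch of creating a cluster with EBS volumes encrypted under a KMS key and TLS required for client-broker and in-cluster traffic; the cluster name, subnets, security group, KMS key ARN, and Kafka version are hypothetical placeholders.

# Minimal sketch of creating an Amazon MSK cluster with encryption at rest and in transit.
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")  # assumption: region

response = kafka.create_cluster(
    ClusterName="phi-event-stream",                      # hypothetical cluster name
    KafkaVersion="2.8.1",                                # assumption: a supported Kafka version
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],  # hypothetical
        "SecurityGroups": ["sg-0123456789abcdef0"],      # hypothetical
        "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 100}},
    },
    EncryptionInfo={
        # Hypothetical customer-managed KMS key for the broker EBS volumes.
        "EncryptionAtRest": {"DataVolumeKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"},
        # Require TLS for client-broker traffic and encrypt traffic between brokers.
        "EncryptionInTransit": {"ClientBroker": "TLS", "InCluster": True},
    },
)
print(response["ClusterArn"])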
Note that the default version of Apache ZooKeeper bundled with Apache Kafka does not support encryption. However, communications between Apache ZooKeeper and the Apache Kafka brokers are limited to broker, topic, and partition state information. The only way data can be produced and consumed from an Amazon MSK cluster is over a private connection between the clients in the customer's VPC and the Amazon MSK cluster; Amazon MSK does not support public endpoints.

Amazon MQ
Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Amazon MQ works with existing applications and services without the need for a customer to manage, operate, or maintain their own messaging system. To provide encryption of PHI data while in transit, the following protocols with TLS enabled should be used to access brokers:
• AMQP
• MQTT
• MQTT over WebSocket
• OpenWire
• STOMP
• STOMP over WebSocket
Amazon MQ encrypts messages at rest and in transit using encryption keys that it manages and stores securely. Amazon MQ uses CloudTrail to log all API calls.

Amazon Neptune
Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine that is optimized for storing billions of relationships and querying the graph with millisecond latency. Amazon Neptune supports the popular graph query languages Apache TinkerPop Gremlin and W3C's SPARQL. Data containing PHI can be retained in an encrypted instance of Amazon Neptune. An encrypted instance of Amazon Neptune can be specified only at the time of creation, by choosing Enable Encryption from the Amazon Neptune console. All logs, backups, and snapshots are encrypted for an Amazon Neptune encrypted instance. Key management for encrypted instances of Amazon Neptune is provided through AWS KMS. Encryption of data in transit is provided through SSL/TLS. Amazon Neptune uses CloudTrail to log all API calls.
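Because encryption can only be chosen at creation time, it is worth showing what that choice looks like outside the console. The following is a minimal sketch using boto3; the cluster identifier, instance class, and KMS key ARN are placeholders, and a DB instance still has to be added to the cluster before it can serve queries.

```python
import boto3

neptune = boto3.client("neptune")

# Minimal sketch: create a Neptune cluster with storage encryption enabled.
# Encryption cannot be added to an existing, unencrypted cluster later.
neptune.create_db_cluster(
    DBClusterIdentifier="phi-graph-cluster",          # placeholder name
    Engine="neptune",
    StorageEncrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example",
)

# An instance is then created in the cluster to handle Gremlin/SPARQL queries.
neptune.create_db_instance(
    DBInstanceIdentifier="phi-graph-instance-1",
    DBInstanceClass="db.r5.large",
    Engine="neptune",
    DBClusterIdentifier="phi-graph-cluster",
)
```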
AWS Network Firewall
AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all of your Amazon Virtual Private Clouds (Amazon VPCs). The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure. Both customer rules and access logs may contain end-user IP addresses, which are encrypted both at rest and in transit within the AWS architecture. Furthermore, AWS Network Firewall encrypts all data at rest and in transit between component AWS services (Amazon S3, Amazon DynamoDB, Amazon CloudWatch Logs, Amazon EBS). The service automatically encrypts data without requiring special configuration.

Amazon Pinpoint
Amazon Pinpoint offers developers a single API layer, CLI support, and client-side SDK support to extend application communication channels with users. The eligible channels include email, SMS text messaging, mobile push notifications, and custom channels. Amazon Pinpoint also provides an analytics system that tracks app user behavior and user engagement. With this service, developers can learn how each user prefers to engage and can personalize the user's experience to increase user satisfaction. Amazon Pinpoint also helps developers address multiple messaging use cases, such as direct or transactional messaging, targeted or campaign messaging, and event-based messaging. By integrating and enabling all end-user engagement channels via Amazon Pinpoint, developers can create a 360-degree view of user engagement across all customer touch points. Amazon Pinpoint stores user endpoint and event data so customers can create segments, send messages to recipients, and capture engagement data.

Amazon Pinpoint encrypts data both at rest and in transit. For more information, see the Amazon Pinpoint FAQs. While Amazon Pinpoint encrypts all data at rest and in transit, the final channel, such as SMS or email, may not be encrypted, and customers should configure any channel in a manner consistent with their requirements. Additionally, customers who need to send PHI through the SMS channel should use a dedicated short code (a 5–6 digit origination phone number) for the explicit purpose of sending PHI. For more information on how to request a short code, see Requesting Dedicated Short Codes for SMS Messaging with Amazon Pinpoint. Customers may also choose not to send PHI through the final channel and instead provide a mechanism to securely access PHI over HTTPS.

API calls to Amazon Pinpoint can be captured using AWS CloudTrail. The captured calls include those from the Amazon Pinpoint console and code calls to Amazon Pinpoint API operations. If customers create a trail, they can enable continuous delivery of AWS CloudTrail events to an Amazon S3 bucket, including events for Amazon Pinpoint. If customers don't configure a trail, they can still view the most recent events by using Event history in the AWS CloudTrail console. Using the information collected by AWS CloudTrail, customers can determine that a request was made to Amazon Pinpoint, the IP address of the request, who made the request, when the request was made, and additional details. For more information, see Logging Amazon Pinpoint API Calls with AWS CloudTrail.
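To make the short-code guidance concrete, the sketch below sends a transactional SMS through Amazon Pinpoint from a dedicated short code. The application ID, short code, destination number, and message body are placeholders, and whether any PHI belongs in the message body at all is a decision each customer must make against their own requirements; the example deliberately keeps PHI out of the text and points the recipient to a secure portal instead.

```python
import boto3

pinpoint = boto3.client("pinpoint")

# Minimal sketch: send a transactional SMS from a dedicated short code.
# Identifiers below are placeholders; the short code must already be
# provisioned for the account and approved for this use case.
pinpoint.send_messages(
    ApplicationId="exampleAppId1234567890",
    MessageRequest={
        "Addresses": {"+12065550123": {"ChannelType": "SMS"}},
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Your lab results are ready. Sign in to the portal to view them.",
                "MessageType": "TRANSACTIONAL",
                "OriginationNumber": "345678",  # dedicated short code
            }
        },
    },
)
```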
Amazon Polly
Amazon Polly is a cloud service that converts text into lifelike speech. Amazon Polly provides simple API operations that customers can easily integrate with existing applications. Amazon Polly uses the HTTPS protocol to communicate with clients. Access to Amazon Polly is API-driven, and appropriate IAM least privilege can be enforced. For more information, see Data Protection. Some examples of use cases that include PHI:
• A caregiver converts a text report containing PHI into synthesized speech so they can listen to the report while walking or performing other duties.
• A visually impaired patient is given medical guidance and consumes the guidance in the form of synthesized speech.
The final delivery channel from Amazon Polly could result in audio containing PHI being played in a public space, and precautions should be taken so that delivery takes this into consideration. The synthesized speech output can also be sent asynchronously to an Amazon S3 bucket with encryption enabled. When supported event activity occurs in Amazon Polly, that activity is recorded in an AWS CloudTrail event, along with other AWS service events, in Event history. For an ongoing record of events in a customer AWS account, including events for Amazon Polly, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. Using the information collected by CloudTrail, customers can determine the request that was made to Amazon Polly, the IP address from which the request was made, who made the request, when it was made, and additional details.

Amazon Quantum Ledger Database (Amazon QLDB)
Amazon QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks every application data change and maintains a complete and verifiable history of changes over time. Data containing PHI can be retained in a QLDB instance. By default, all Amazon QLDB data in transit and at rest is encrypted. Data in transit is encrypted using TLS, and data at rest is encrypted using AWS managed keys. For data protection purposes, we recommend that customers protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM), so that each user is given only the permissions necessary to fulfill their job duties. For more information, see Data Protection in Amazon QLDB. Amazon QLDB is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service in QLDB. CloudTrail captures all control plane API calls for QLDB as events, including calls from the QLDB console and code calls to the QLDB API operations. If customers create a trail, they can enable continuous delivery of CloudTrail events to an Amazon Simple Storage Service (Amazon S3) bucket, including events for QLDB. If customers don't configure a trail, they can still view the most recent events in the CloudTrail console in Event history. Using the information collected by CloudTrail, customers can determine the request that was made to QLDB, the IP address from which the request was made, who made the request, when it was made, and additional details.

Amazon QuickSight
Amazon QuickSight is a business analytics service that customers can use to build visualizations, perform ad hoc analysis, and quickly get business insights from their data. Amazon QuickSight discovers AWS data sources, enables organizations to scale to hundreds of thousands of users, and delivers responsive performance by using a robust in-memory engine (SPICE). Customers can only use the Enterprise edition of Amazon QuickSight to work with data containing PHI, as it provides support for encryption of data stored at rest in SPICE. Data encryption is performed using AWS managed keys.

Amazon RDS for MariaDB
Amazon RDS for MariaDB allows customers to encrypt MariaDB databases using keys that they manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for MariaDB encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. Connections to RDS for MariaDB containing PHI must use transport encryption. For more information on enabling encrypted connections, see Using SSL/TLS to Encrypt a Connection to a DB Instance.
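The at-rest requirement above comes down to a single flag at instance creation time. The sketch below creates an encrypted RDS for MariaDB instance with boto3; the identifier, credentials, sizing, and KMS key are placeholders, and the same StorageEncrypted/KmsKeyId pattern applies to the other RDS engines discussed in this section.

```python
import boto3

rds = boto3.client("rds")

# Minimal sketch: create an RDS for MariaDB instance with encrypted storage.
# Identifiers, credentials, and sizing below are illustrative placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="phi-mariadb-1",
    Engine="mariadb",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",   # retrieve from a secret store in practice
    StorageEncrypted=True,                      # encrypts storage, backups, and snapshots
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example",
)
```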
Amazon RDS for MySQL
Amazon RDS for MySQL allows customers to encrypt MySQL databases using keys that customers manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for MySQL encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. Connections to RDS for MySQL containing PHI must use transport encryption. For more information on enabling encrypted connections, see Using SSL/TLS to Encrypt a Connection to a DB Instance.

Amazon RDS for Oracle
Customers have several options for encrypting PHI at rest using Amazon RDS for Oracle. Customers can encrypt Oracle databases using keys that they manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for Oracle encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. Customers can also use Oracle Transparent Data Encryption (TDE), and they should evaluate the configuration for consistency with the Guidance. Oracle TDE is a feature of the Oracle Advanced Security option available in Oracle Enterprise Edition. This feature automatically encrypts data before it is written to storage and automatically decrypts data when the data is read from storage. Customers can also use AWS CloudHSM to store Amazon RDS Oracle TDE keys. For more information, see the following:
• Amazon RDS for Oracle Transparent Data Encryption: Oracle Transparent Data Encryption
• Using AWS CloudHSM to store Amazon RDS Oracle TDE keys: What Is Amazon Relational Database Service (Amazon RDS)?
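TDE on RDS for Oracle is switched on through an option group rather than an instance flag. The sketch below creates an option group and adds the TDE option with boto3; the group name, engine edition, and major version are placeholders, and the option name "TDE" is an assumption based on the RDS Oracle option catalog, so it should be verified against the current documentation.

```python
import boto3

rds = boto3.client("rds")

# Minimal sketch: enable Oracle TDE for RDS via an option group.
# Names and versions are placeholders; "TDE" assumes the documented
# RDS option name for Oracle Transparent Data Encryption.
rds.create_option_group(
    OptionGroupName="oracle-tde-options",
    EngineName="oracle-ee",
    MajorEngineVersion="19",
    OptionGroupDescription="Option group with Transparent Data Encryption",
)

rds.modify_option_group(
    OptionGroupName="oracle-tde-options",
    OptionsToInclude=[{"OptionName": "TDE"}],
    ApplyImmediately=True,
)

# The option group is then attached to the DB instance at creation time, or
# later with modify_db_instance(OptionGroupName="oracle-tde-options", ...).
```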
Connections to Amazon RDS for Oracle containing PHI must use transport encryption, and customers should evaluate the configuration for consistency with the Guidance. This is accomplished using Oracle Native Network Encryption, enabled in Amazon RDS for Oracle option groups. For detailed information, see Oracle Native Network Encryption.

Amazon RDS for PostgreSQL
Amazon RDS for PostgreSQL allows customers to encrypt PostgreSQL databases using keys that customers manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups, read replicas, and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for PostgreSQL encryption satisfies their compliance and regulatory requirements. For more information on encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. Connections to RDS for PostgreSQL containing PHI must use transport encryption. For more information on enabling encrypted connections, see Using SSL/TLS to Encrypt a Connection to a DB Instance.

Amazon RDS for SQL Server
RDS for SQL Server supports storing PHI for the following version and edition combinations:
• 2008 R2: Enterprise Edition only
• 2012, 2014, and 2016: Web, Standard, and Enterprise Editions
Important: SQL Server Express edition is not supported and should never be used for the storage of PHI. In order to store PHI, customers must ensure that the instance is configured to encrypt data at rest, and enable transport encryption and auditing, as detailed below.

Encryption at Rest
Customers can encrypt SQL Server databases using keys that they manage through AWS KMS. On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted consistent with the Guidance in effect at the time of publication of this whitepaper, as are automated backups and snapshots. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon RDS for SQL Server encryption satisfies their compliance and regulatory requirements. For more information about encryption at rest using Amazon RDS, see Encrypting Amazon RDS Resources. If customers use SQL Server Enterprise Edition, they can use SQL Server Transparent Data Encryption (TDE) as an alternative. This feature automatically encrypts data before it is written to storage and automatically decrypts data when the data is read from storage. For more information on RDS for SQL Server Transparent Data Encryption, see Support for Transparent Data Encryption in SQL Server.

Transport Encryption
Connections to Amazon RDS for SQL Server containing PHI must use transport encryption provided by SQL Server Forced SSL. Forced SSL is enabled from within the parameter group for Amazon RDS SQL Server. For more information on RDS for SQL Server Forced SSL, see Using SSL with a Microsoft SQL Server DB Instance.

Auditing
RDS for SQL Server instances that contain PHI must have auditing enabled. Auditing is enabled from within the parameter group for Amazon RDS SQL Server. For more information on RDS for SQL Server auditing, see Compliance Program Support for Microsoft SQL Server DB Instances.
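The Forced SSL setting above is a DB parameter, so it can be applied through a custom parameter group. The following is a minimal sketch using boto3; the group name and parameter group family are placeholders (the family must match the SQL Server edition and engine version in use), and the instance must be associated with the group before the setting takes effect.

```python
import boto3

rds = boto3.client("rds")

# Minimal sketch: force SSL/TLS for an RDS for SQL Server instance via a
# custom DB parameter group. Names and the family string are placeholders.
rds.create_db_parameter_group(
    DBParameterGroupName="sqlserver-force-ssl",
    DBParameterGroupFamily="sqlserver-se-15.0",   # must match engine/edition in use
    Description="Force TLS for connections carrying PHI",
)

rds.modify_db_parameter_group(
    DBParameterGroupName="sqlserver-force-ssl",
    Parameters=[{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",   # static parameter; applied on reboot
    }],
)

# Attach with modify_db_instance(DBParameterGroupName="sqlserver-force-ssl", ...)
# and reboot the instance so the parameter is enforced.
```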
Amazon Redshift
Amazon Redshift provides database encryption for its clusters to help protect data at rest. When customers enable encryption for a cluster, Amazon Redshift encrypts all data, including backups, by using hardware-accelerated Advanced Encryption Standard (AES) 256-bit symmetric keys. Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key. The cluster key encrypts the database key for the Amazon Redshift cluster. Customers can use either AWS KMS or AWS CloudHSM (a hardware security module) to manage the cluster key. Amazon Redshift encryption at rest is consistent with the Guidance that is in effect at the time of publication of this whitepaper. Because the Guidance might be updated, customers should continue to evaluate and determine whether Amazon Redshift encryption satisfies their compliance and regulatory requirements. For more information, see Amazon Redshift database encryption. Connections to Amazon Redshift containing PHI must use transport encryption, and customers should evaluate the configuration for consistency with the Guidance. For more information, see Configuring security options for connections. Amazon Redshift Spectrum enables customers to run Amazon Redshift SQL queries against exabytes of data in Amazon S3. Redshift Spectrum is a feature of Amazon Redshift and is therefore also in scope for the HIPAA BAA.

Amazon Rekognition
Amazon Rekognition makes it easy to add image and video analysis to customer applications. A customer only needs to provide an image or video to the Amazon Rekognition API, and the service can identify objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition. Amazon Rekognition is eligible to operate with images or video containing PHI. Amazon Rekognition operates as a managed service and does not present any configurable options for the handling of data. Amazon Rekognition only uses, discloses, and maintains PHI as permitted by the terms of the AWS BAA. All data is encrypted at rest and in transit with Amazon Rekognition. Amazon Rekognition uses AWS CloudTrail to log all API calls.

Amazon Route 53
Amazon Route 53 is a managed DNS service that provides customers the ability to register domain names, route internet traffic to customer domain resources, and check the health of those resources. While Amazon Route 53 is a HIPAA Eligible Service, no PHI should be stored in any resource names or tags within Amazon Route 53, as there is no support for encrypting such data. Instead, Amazon Route 53 can be used to provide access to customer domain resources that transmit or store PHI, such as web servers running on Amazon EC2, or storage such as Amazon S3.

Amazon S3 Glacier
Amazon S3 Glacier automatically encrypts data at rest using AES 256-bit symmetric keys and supports secure transfer of customer data over secure protocols. Connections to Amazon S3 Glacier containing PHI must use endpoints that accept encrypted transport (HTTPS). For a list of regional endpoints, see AWS service endpoints. Do not use PHI in archive and vault names or metadata, because this data is not encrypted using Amazon S3 Glacier server-side encryption and is not generally encrypted in client-side encryption architectures.
Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration (S3TA) enables fast, easy, and secure transfers of files over long distances between a customer's client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. Customers should ensure that any data containing PHI transferred using S3TA is encrypted in transit and at rest. Refer to the guidance for Amazon S3 to understand the available encryption options.

Amazon SageMaker
Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to data sources for exploration and analysis. Amazon SageMaker also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own algorithms and frameworks, Amazon SageMaker offers flexible distributed training options that adjust to a customer's specific workflows. Amazon SageMaker is eligible to operate with data containing PHI. Encryption of data in transit is provided by SSL/TLS and is used when communicating both with the frontend interface of Amazon SageMaker (to the notebook) and whenever Amazon SageMaker interacts with any other AWS service (for example, pulling data from Amazon S3). To satisfy the requirement that PHI be encrypted at rest, encryption of data stored with the instance running models with Amazon SageMaker is enabled using AWS Key Management Service (KMS) when setting up the endpoint (DescribeEndpointConfig: KmsKeyId). Encryption of model training results (artifacts) is enabled using AWS KMS, and keys should be specified using the KmsKeyId in the OutputDataConfig description. If a KMS key ID isn't provided, the default Amazon S3 KMS key for the role's account will be used. Amazon SageMaker uses AWS CloudTrail to log all API calls.
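The KmsKeyId settings mentioned above appear as fields on the training-job request. The sketch below shows where they go when starting a training job with boto3; the job name, container image, role, bucket path, and key ARN are placeholders, and an analogous key setting exists on endpoint configurations for hosted models.

```python
import boto3

sm = boto3.client("sagemaker")
kms_key = "arn:aws:kms:us-east-1:111122223333:key/example"  # placeholder

# Minimal sketch: encrypt the training volume and the output artifacts with KMS.
sm.create_training_job(
    TrainingJobName="phi-model-training-001",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    AlgorithmSpecification={
        "TrainingImage": "111122223333.dkr.ecr.us-east-1.amazonaws.com/example:latest",
        "TrainingInputMode": "File",
    },
    ResourceConfig={
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
        "VolumeKmsKeyId": kms_key,      # encrypts the attached training volume
    },
    OutputDataConfig={
        "S3OutputPath": "s3://example-bucket/model-artifacts/",
        "KmsKeyId": kms_key,            # encrypts model artifacts written to S3
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```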
Amazon Simple Notification Service (Amazon SNS)
Customers should understand the following key encryption requirement in order to use Amazon Simple Notification Service (Amazon SNS) with protected health information (PHI): customers must use the HTTPS API endpoint that SNS provides in each AWS Region. The HTTPS endpoint leverages encrypted connections and protects the privacy and integrity of the data sent to AWS. For a list of all HTTPS API endpoints, see AWS service endpoints. Additionally, Amazon SNS uses CloudTrail, a service that captures API calls made by or on behalf of Amazon SNS in the customer's AWS account and delivers the log files to an Amazon S3 bucket that they specify. CloudTrail captures API calls made from the Amazon SNS console or from the Amazon SNS API. Using the information collected by CloudTrail, customers can determine what request was made to Amazon SNS, the source IP address from which the request was made, who made the request, and when it was made. For more information on logging SNS operations, see Logging Amazon SNS API calls using CloudTrail.

Amazon Simple Email Service (Amazon SES)
Amazon Simple Email Service (Amazon SES) is a flexible and highly scalable email sending and receiving service. It supports both the S/MIME and PGP protocols to encrypt messages for full end-to-end encryption, and all communication with Amazon SES is secured using SSL (TLS 1.2). Customers have the option to store messages encrypted at rest by configuring Amazon SES to receive and encrypt messages before storing them in an Amazon S3 bucket. See How Amazon Simple Email Service (Amazon SES) uses AWS KMS for more information about encrypting messages for storage. Messages are secured in transit to Amazon SES either through an HTTPS endpoint or an encrypted SMTP connection. For messages sent from Amazon SES to a receiver, Amazon SES first attempts to make a secure connection to the receiving mail server, but if a secure connection cannot be established, it sends the message unencrypted. To require encryption for delivery to a receiver, customers must create a configuration set in Amazon SES and use the AWS CLI to set the TlsPolicy property to Require. For more information, see Amazon SES and Security Protocols. Amazon SES integrates with AWS CloudTrail to monitor all API calls. Using the information collected by AWS CloudTrail, customers can determine that a request was made to Amazon SES, the IP address of the request, who made the request, when the request was made, and additional details. For more information, see Logging Amazon SES API Calls with AWS CloudTrail. Amazon SES also provides methods to monitor sending activity, such as sends, rejects, bounce rates, deliveries, opens, and clicks. For more information, see Monitoring Your Amazon SES Sending Activity.

Amazon Simple Queue Service (Amazon SQS)
Customers should understand the following key encryption requirements in order to use Amazon SQS with PHI:
• Communication with the Amazon SQS queue via the Query request must be encrypted with HTTPS. For more information on making SQS requests, see Making Query API requests.
• Amazon SQS supports server-side encryption integrated with AWS KMS to protect data at rest. The addition of server-side encryption allows customers to transmit and receive sensitive data with the increased security of using encrypted queues. Amazon SQS server-side encryption uses the 256-bit Advanced Encryption Standard (AES-256 GCM algorithm) to encrypt the body of each message. The integration with AWS KMS allows customers to centrally manage the keys that protect Amazon SQS messages, along with keys that protect their other AWS resources. AWS KMS logs every use of encryption keys to AWS CloudTrail to help meet regulatory and compliance needs. For more information, and to check Region availability of SSE for Amazon SQS, see Encryption at Rest.
• If server-side encryption is not used, the message payload itself must be encrypted before being sent to SQS. One way to encrypt the message payload is by using the Amazon SQS Extended Client along with the Amazon S3 encryption client. For more information about using client-side encryption, see Encrypting Message Payloads Using the Amazon SQS Extended Client and the Amazon S3 Encryption Client.
Amazon SQS uses CloudTrail, a service that logs API calls made by or on behalf of Amazon SQS in a customer's AWS account and delivers the log files to the specified Amazon S3 bucket. CloudTrail captures API calls made from the Amazon SQS console or from the Amazon SQS API. Customers can use the information collected by CloudTrail to determine which requests are made to Amazon SQS, the source IP address from which a request is made, who made the request, when it was made, and so on. For more information about logging SQS operations, see Logging Amazon SQS API calls using AWS CloudTrail.
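The server-side-encryption option in the list above is simply a queue attribute. The sketch below creates a KMS-encrypted queue with boto3; the queue name is a placeholder, and the AWS managed alias/aws/sqs key could be swapped for a customer managed key ARN where tighter key control is required.

```python
import boto3

sqs = boto3.client("sqs")

# Minimal sketch: create a queue whose message bodies are encrypted at rest
# with AWS KMS. The queue name and key choice are placeholders.
response = sqs.create_queue(
    QueueName="phi-intake-queue",
    Attributes={
        "KmsMasterKeyId": "alias/aws/sqs",        # or a customer managed key ARN
        "KmsDataKeyReusePeriodSeconds": "300",    # how long a data key may be reused
    },
)

# Producers and consumers then use the returned HTTPS queue URL as usual;
# encryption and decryption are transparent to them.
queue_url = response["QueueUrl"]
```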
Amazon Simple Storage Service (Amazon S3)
Customers have several options for encryption of data at rest when using Amazon S3, including both server-side and client-side encryption and several methods of managing keys. For more information, see Protecting data using encryption. Connections to Amazon S3 containing PHI must use endpoints that accept encrypted transport (HTTPS). For a list of regional endpoints, see AWS service endpoints. Do not use PHI in bucket names, object names, or metadata, because this data is not encrypted using S3 server-side encryption and is not generally encrypted in client-side encryption architectures.

Amazon Simple Workflow Service
Amazon Simple Workflow Service (Amazon SWF) helps developers build, run, and scale background jobs that have parallel or sequential steps. Amazon SWF can be thought of as a fully managed state tracker and task coordinator in the cloud. Amazon SWF is used to orchestrate workflows and is not able to store or transmit data. PHI should not be placed in metadata for Amazon SWF or within any task description. Amazon SWF uses AWS CloudTrail to log all API calls.

Amazon Textract
Amazon Textract uses machine learning technologies to automatically extract text and data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. For example, customers can use Amazon Textract to automatically extract data and process forms with protected health information (PHI), without human intervention, to fulfill medical claims. Amazon Textract can also be used to maintain compliance in document archives. For example, customers can use Amazon Textract to extract data from insurance claims or medical prescriptions and automatically recognize key-value pairs in those documents so that sensitive ones can be redacted. Amazon Textract supports server-side encryption (SSE-S3 and SSE-KMS) for input documents and TLS encryption for data in transit between the service and agent. Customers can use Amazon CloudWatch to track resource usage metrics and AWS CloudTrail to capture API calls to Amazon Textract.

Amazon Transcribe
Amazon Transcribe uses advanced machine learning technologies to recognize speech in audio files and transcribe them into text. For example, customers can use Amazon Transcribe to convert US English and Mexican Spanish audio to text, and to create applications that incorporate the content of audio files. Amazon Transcribe can be used with data containing PHI. Amazon Transcribe does not retain or store any data, and all calls to the API are encrypted with SSL/TLS. Amazon Transcribe uses CloudTrail to log all API calls.
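Even though the service itself does not retain data, the transcript it produces lands in a customer-owned S3 bucket and should be protected there. The sketch below starts a transcription job whose output is encrypted with a KMS key; the job name, media URI, bucket, and key ARN are placeholders, and the output-encryption setting is only honored when an output bucket is specified.

```python
import boto3

transcribe = boto3.client("transcribe")

# Minimal sketch: transcribe an audio file and encrypt the transcript at rest.
# All identifiers below are illustrative placeholders.
transcribe.start_transcription_job(
    TranscriptionJobName="dictation-2024-06-01-001",
    LanguageCode="en-US",
    Media={"MediaFileUri": "s3://example-bucket/audio/dictation-001.wav"},
    OutputBucketName="example-transcripts-bucket",
    OutputEncryptionKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/example",
)

# The input object should itself sit in an encrypted bucket, and the API call
# travels over HTTPS like every other AWS request.
```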
Amazon Translate
Amazon Translate uses advanced machine learning technologies to provide high-quality translation on demand. Customers can use Amazon Translate to translate unstructured text documents or to build applications that work in multiple languages. Documents containing PHI can be processed with Amazon Translate; no additional configuration is required when translating documents that contain PHI. Encryption of data while in transit is provided by SSL/TLS, and no data remains at rest with Amazon Translate. Amazon Translate uses CloudTrail to log all API calls.

Amazon Virtual Private Cloud
Amazon Virtual Private Cloud (Amazon VPC) offers a set of network security features well-aligned to architecting for HIPAA compliance. Features such as stateless network access control lists and dynamic reassignment of instances into stateful security groups afford flexibility in protecting the instances from unauthorized network access. Amazon VPC also allows customers to extend their own network address space into AWS, and provides a number of ways to connect their data centers to AWS. VPC Flow Logs provide an audit trail of accepted and rejected connections to instances processing, transmitting, or storing PHI. For more information on Amazon VPC, see Amazon Virtual Private Cloud.

Amazon WorkDocs
Amazon WorkDocs is a fully managed, secure enterprise file storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity. Amazon WorkDocs files are encrypted at rest using keys that customers manage through AWS Key Management Service (KMS). All data in transit is encrypted using SSL/TLS. The AWS web and mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs using SSL/TLS. Using the Amazon WorkDocs Management Console, WorkDocs administrators can view audit logs to track file and user activity by time, and choose whether to allow users to share files with others outside their organization. Amazon WorkDocs is also integrated with CloudTrail (a service that captures API calls made by or on behalf of Amazon WorkDocs in the customer's AWS account) and delivers CloudTrail log files to an Amazon S3 bucket that customers specify. Multi-factor authentication (MFA) using a RADIUS server is available and can provide customers with an additional layer of security during the authentication process. Users log in by entering their user name and password, followed by a one-time passcode (OTP) supplied by a hardware or software token. For more information, see:
• Amazon WorkDocs features
• Logging Amazon WorkDocs API calls using AWS CloudTrail
Customers should not store PHI in file names or directory names.

Amazon WorkSpaces
Amazon WorkSpaces is a fully managed, secure Desktop-as-a-Service (DaaS) solution that runs on AWS. With Amazon WorkSpaces, customers can easily provision virtual, cloud-based Microsoft Windows desktops for their users, providing them access to the documents, applications, and resources they need, anywhere, anytime, from any supported device. Amazon WorkSpaces stores data in Amazon Elastic Block Store volumes. Customers can encrypt their WorkSpaces storage volumes using keys that they manage through AWS Key Management Service. When encryption is enabled on a WorkSpace, both the data stored at rest in the underlying storage and the automated backups (EBS snapshots) of the disk storage are encrypted consistent with the Guidance. Communication from the WorkSpaces clients to WorkSpaces is secured using SSL/TLS. For more information on encryption at rest using Amazon WorkSpaces, see Encrypted WorkSpaces.
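Volume encryption is selected per WorkSpace at launch time. The sketch below launches a WorkSpace with both the root and user volumes encrypted under a customer managed KMS key; the directory, user, bundle, and key identifiers are placeholders assumed to already exist in the account.

```python
import boto3

workspaces = boto3.client("workspaces")

# Minimal sketch: launch a WorkSpace whose root and user volumes are encrypted.
# Directory, user, bundle, and key identifiers are placeholders.
workspaces.create_workspaces(
    Workspaces=[{
        "DirectoryId": "d-1234567890",
        "UserName": "jdoe",
        "BundleId": "wsb-example12345",
        "VolumeEncryptionKey": "arn:aws:kms:us-east-1:111122223333:key/example",
        "RootVolumeEncryptionEnabled": True,
        "UserVolumeEncryptionEnabled": True,
    }]
)
```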
AWS App Mesh
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure, such as Amazon ECS, Amazon EKS, or Amazon EC2 services. App Mesh configures Envoy proxies to collect and transmit observability data to the monitoring services that you configure, giving you end-to-end visibility. It can route traffic based on routing and traffic policies configured to ensure high availability of your applications. Traffic between applications can be configured to use TLS. App Mesh can be used via the AWS SDK or the App Mesh controller for Kubernetes. While AWS App Mesh is a HIPAA Eligible Service, no PHI should be stored in any resource names or attributes within AWS App Mesh, as there is no support for protecting such data. Instead, AWS App Mesh can be used to monitor, control, and secure customer domain resources that transmit or store PHI.

AWS Auto Scaling
AWS Auto Scaling enables customers to configure automatic scaling for the AWS resources that are part of a customer's application in a matter of minutes. Customers can use AWS Auto Scaling for a number of services that involve PHI, such as Amazon DynamoDB, Amazon ECS, Amazon RDS, Aurora replicas, and Amazon EC2 instances in an Auto Scaling group. AWS Auto Scaling is an orchestration service that does not directly process, store, or transmit customer content; for that reason, customers can use this service with encrypted content. The AWS shared responsibility model applies to data protection in AWS Auto Scaling: AWS is responsible for the AWS network security procedures, whereas the customer is responsible for maintaining control over the customer's content that is hosted on this infrastructure. This content includes the security configuration and management tasks for the AWS services that customers use. For data protection purposes, we recommend that customers protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. AWS strongly recommends that customers never put sensitive identifying information, such as account numbers, into free-form fields such as a Name field. This includes when customers work with AWS Auto Scaling or other AWS services using the AWS Management Console, API, AWS CLI, or AWS SDKs. Any data that customers enter into AWS Auto Scaling or other services might get picked up for inclusion in diagnostic logs. When customers provide a URL to an external server, they should not include credentials in the URL to validate their request to that server. AWS also recommends that customers secure their data in the following ways:
• Use multi-factor authentication (MFA) with each account.
• Use SSL/TLS to communicate with AWS resources. AWS recommends TLS 1.2 or later.
• Set up API and user activity logging with AWS CloudTrail.
• Use AWS encryption solutions, along with all default security controls within AWS services.
• Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in Amazon S3.

AWS Backup
AWS Backup offers a centralized, fully managed, and policy-based service to protect customer data and ensure compliance across AWS services for business continuity purposes. With AWS Backup, customers can centrally configure data protection (backup) policies and monitor backup activity across customer AWS resources, including Amazon EBS volumes, Amazon Relational Database Service (Amazon RDS) databases (including Aurora clusters), Amazon DynamoDB tables, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx file systems, Amazon EC2 instances, and AWS Storage Gateway volumes. AWS Backup encrypts customer data in transit and at rest. Backups from services with existing snapshot capabilities are encrypted using the source service's snapshot encryption methodology; for example, EBS snapshots are encrypted using the encryption key of the volume that the snapshot was created from. Backups from newer AWS services that introduce backup functionality built on AWS Backup, such as Amazon EFS, are encrypted in transit and at rest independently from the source services, giving customer backups an additional layer of protection. Encryption is configured at the backup vault level. The default vault is encrypted; when customers create a new vault, an encryption key must be selected.
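The vault-level encryption setting above maps to a single field when a vault is created through the API. A minimal sketch, assuming a customer managed KMS key already exists (the vault name and key ARN are placeholders):

```python
import boto3

backup = boto3.client("backup")

# Minimal sketch: create a backup vault whose recovery points are encrypted
# with a customer managed KMS key. Names and ARNs are placeholders.
backup.create_backup_vault(
    BackupVaultName="phi-backup-vault",
    EncryptionKeyArn="arn:aws:kms:us-east-1:111122223333:key/example",
)

# Backup plans can then target this vault so that recovery points for EFS,
# DynamoDB, RDS, and other supported resources land in the encrypted vault.
```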
AWS Batch
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (such as CPU- or memory-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. AWS Batch plans, schedules, and executes batch computing workloads across the full range of AWS compute services and features. Similar to the guidance for Amazon ECS, PHI should not be placed directly into the job definition, the job queue, or the tags for AWS Batch. Instead, jobs scheduled and executed with AWS Batch may operate on encrypted PHI. Any information returned by stages of a job to AWS Batch should also not contain any PHI. Whenever jobs being executed by AWS Batch must transmit or receive PHI, that connection should be encrypted using HTTPS or SSL/TLS.

AWS Certificate Manager
AWS Certificate Manager is a service that lets customers easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and their internal connected resources. AWS Certificate Manager should not be used to store data containing PHI. AWS Certificate Manager uses CloudTrail to log all API calls.

AWS Cloud Map
AWS Cloud Map is a cloud resource discovery service. With AWS Cloud Map, customers can define custom names for application resources, such as Amazon ECS tasks, Amazon EC2 instances, Amazon S3 buckets, Amazon DynamoDB tables, Amazon SQS queues, or any other cloud resource. Customers can then use these custom names to discover the location and metadata of cloud resources from their applications using the AWS SDK and authenticated API queries. While AWS Cloud Map is a HIPAA Eligible Service, no PHI should be stored in any resource names or attributes within AWS Cloud Map, as there is no support for protecting such data. Instead, AWS Cloud Map can be used to discover customer domain resources that transmit or store PHI.

AWS CloudFormation
AWS CloudFormation enables customers to create and provision AWS infrastructure deployments predictably and repeatedly. It helps customers leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications in the cloud, without worrying about creating and configuring the underlying AWS infrastructure. AWS CloudFormation enables customers to use a template file to create and delete a collection of resources together as a single unit (a stack). AWS CloudFormation does not itself store, transmit, or process PHI. Instead, it is used to build and deploy architectures that use other AWS services that might store, transmit, and/or process PHI. Only HIPAA Eligible Services should be used with PHI; refer to the entries for those services in this whitepaper for guidance on the use of PHI with those services. AWS CloudFormation uses AWS CloudTrail to log all API calls.

AWS CloudHSM
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables customers to easily generate and use their own encryption keys on the AWS Cloud. With CloudHSM, customers can manage their own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers customers the flexibility to integrate with their applications using open standard APIs, such as the PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries. CloudHSM is also standards-compliant and enables customers to export all of their keys to most other commercially available HSMs. As AWS CloudHSM is a hardware appliance key management service, it is unable to store or transmit PHI. Customers should not store PHI in tags (metadata). No other special guidance is required.
AWS CloudTrail
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of AWS accounts. With CloudTrail, customers can log, continuously monitor, and retain account activity related to actions across their AWS infrastructure. CloudTrail provides event history of AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. AWS CloudTrail is enabled for use with all AWS accounts and can be used for audit logging, as required by the AWS BAA. Specific trails should be created using the CloudTrail console or the AWS Command Line Interface. CloudTrail encrypts all traffic while in transit, and at rest when an encrypted trail is created. An encrypted trail should be created whenever the potential exists to log PHI. By default, an encrypted trail stores entries in Amazon S3 using server-side encryption with Amazon S3 managed keys (SSE-S3). If additional management over keys is desired, the trail can also be configured with AWS KMS managed keys (SSE-KMS). As CloudTrail is the final destination for AWS log entries, and thus a critical component of any architecture that handles PHI, CloudTrail log file integrity validation should be enabled and the associated CloudTrail digest files should be periodically reviewed. Once enabled, a positive assertion can be established that the log files have not been changed or altered.
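Both recommendations above (SSE-KMS for the trail and log file integrity validation) are parameters on the trail itself. A minimal sketch, with the trail name, bucket, and key ARN as placeholders; the S3 bucket must already carry a bucket policy that allows CloudTrail to deliver log files to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Minimal sketch: create an encrypted, multi-Region trail with log file
# integrity validation enabled. Names and ARNs are placeholders.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-logs",
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,   # produces digest files for integrity checks
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example",  # SSE-KMS
)

cloudtrail.start_logging(Name="org-audit-trail")

# Digest files can later be verified with the AWS CLI, for example:
#   aws cloudtrail validate-logs --trail-arn <trail-arn> --start-time <time>
```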
AWS CodeBuild
AWS CodeBuild is a fully managed build service in the cloud. AWS CodeBuild compiles source code, runs unit tests, and produces artifacts that are ready to deploy. AWS CodeBuild uses an AWS KMS customer master key (CMK) to encrypt build output artifacts. A CMK should be created and configured before building artifacts that contain PHI, secrets/passwords, master certificates, and the like. AWS CodeBuild uses AWS CloudTrail to log all API calls.

AWS CodeDeploy
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services, including Amazon EC2, AWS Fargate, AWS Lambda, and on-premises servers. Customers use AWS CodeDeploy to rapidly release new features of containerized workloads, and the service handles the complexity of updating applications. AWS CodeDeploy supports server-side encryption (SSE-S3) for deployment artifacts and TLS encryption for data in transit between the service and agent. Customers can use Amazon CloudWatch Events to track deployments and AWS CloudTrail to capture API calls to AWS CodeDeploy.

AWS CodeCommit
AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. AWS CodeCommit eliminates the need for customers to manage their own source control system or worry about scaling its infrastructure. AWS CodeCommit encrypts all traffic and stored information while in transit and at rest. By default, when a repository is created within AWS CodeCommit, an AWS managed key is created with AWS KMS and is used only by that repository to encrypt all data stored at rest. AWS CodeCommit uses AWS CloudTrail to log all API calls.

AWS CodePipeline
AWS CodePipeline is a fully managed continuous delivery service that helps customers automate their release pipelines for fast and reliable application and infrastructure updates. Pipelines that allow researchers to automatically process clinical trial data, lab results, and genomic data are a few examples of workflows customers run with AWS CodePipeline. AWS CodePipeline supports server-side encryption (SSE-S3 and SSE-KMS) for code artifacts and TLS encryption for data in transit between the service and agent. Customers can use Amazon CloudWatch Events to track pipeline changes and AWS CloudTrail to capture API calls to AWS CodePipeline.

AWS Config
AWS Config provides a detailed view of the resources associated with a customer's AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time. AWS Config cannot itself be used to store or transmit PHI. Instead, it can be leveraged to monitor and evaluate architectures built with other AWS services, including architectures that handle PHI, to help determine whether they remain compliant with their intended design goal. Architectures that handle PHI should only be built with HIPAA Eligible Services. AWS Config uses AWS CloudTrail to log all results.

AWS Data Exchange
AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. Once subscribed to a data product, customers can use the AWS Data Exchange API to load data directly into Amazon S3 and then analyze it with a wide variety of AWS analytics and machine learning services. For data providers, AWS Data Exchange makes it easy to reach the millions of AWS customers migrating to the cloud by removing the need to build and maintain infrastructure for data storage, delivery, billing, and entitling. AWS Data Exchange always encrypts all data products stored in the service at rest, without requiring any additional configuration; this encryption is done automatically via a service managed KMS key. AWS Data Exchange uses Transport Layer Security (TLS) and client-side encryption for encryption in transit. Communication with AWS Data Exchange is always done over HTTPS, so customer data is always encrypted in transit, and this encryption is configured by default when customers use AWS Data Exchange. For more information, see Data Protection in AWS Data Exchange. AWS Data Exchange is integrated with AWS CloudTrail. AWS CloudTrail captures all calls to AWS Data Exchange APIs as events, including calls from the AWS Data Exchange console and from code calls to the AWS Data Exchange API operations. Some actions customers can take are console-only actions with no corresponding API in the AWS SDK or AWS CLI; these are actions that rely on AWS Marketplace functionality, such as publishing or subscribing to a product. AWS Data Exchange provides CloudTrail logs for a subset of these console-only actions. For more information, see Logging AWS Data Exchange API Calls with AWS CloudTrail. Note that all listings using AWS Data Exchange must adhere to AWS Data Exchange's Publishing Guidelines and the AWS Data Exchange FAQs for AWS Marketplace Providers, which restrict certain categories of data. For more information, see the AWS Data Exchange FAQs.
AWS Database Migration Service
AWS Database Migration Service (AWS DMS) helps customers migrate databases to AWS easily and securely. Customers can migrate their data to and from most widely used commercial and open-source databases, such as Oracle, MySQL, and PostgreSQL. The service supports homogeneous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to PostgreSQL or MySQL to Oracle. Databases running on-premises and being migrated to the cloud with AWS DMS can contain PHI data. AWS DMS encrypts data while in transit and when data is being staged for final migration into the target database on AWS. AWS DMS encrypts the storage used by a replication instance and the endpoint connection information. To encrypt the storage used by a replication instance, AWS DMS uses an AWS KMS key that is unique to the AWS account. Refer to the guidance for the appropriate target database to ensure that data remains encrypted once migration is complete. AWS DMS uses CloudTrail to log all API calls.

AWS DataSync
AWS DataSync is an online transfer service that simplifies, automates, and accelerates moving data between on-premises storage and AWS. Customers can use AWS DataSync to connect their data sources to either Amazon S3 or Amazon EFS. Customers should ensure that Amazon S3 and Amazon EFS are configured in a manner consistent with the Guidance. By default, customer data is encrypted in transit using TLS 1.2. For more information about encryption and AWS DataSync, see AWS DataSync features. Customers can monitor DataSync activity using AWS CloudTrail. For more information on logging with CloudTrail, see Logging AWS DataSync API Calls with AWS CloudTrail.

AWS Directory Service

AWS Directory Service for Microsoft AD
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, enables directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. AWS Microsoft AD stores directory content (including content containing PHI) in encrypted Amazon Elastic Block Store volumes using encryption keys that AWS manages. For more information, see Amazon EBS Encryption. Data in transit to and from Active Directory clients is encrypted when it travels through Lightweight Directory Access Protocol (LDAP) over the customer's Amazon Virtual Private Cloud (VPC) network. If an Active Directory client resides in an on-premises network, the traffic travels to the customer's VPC over a virtual private network link or an AWS Direct Connect link.

Amazon Cloud Directory
Amazon Cloud Directory enables customers to build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions. Customers can also create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. For example, customers can create an organizational chart that can be navigated through separate hierarchies for reporting structure, location, and cost center. Amazon Cloud Directory automatically encrypts data at rest and in transit by using 256-bit encryption keys that are managed by the AWS Key Management Service (AWS KMS).
AWS Elastic Beanstalk
With AWS Elastic Beanstalk, customers can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Customers simply upload code, and AWS Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and automatic scaling to application health monitoring. At the same time, customers retain full control over the AWS resources powering their application and can access the underlying resources at any time. AWS Elastic Beanstalk does not itself store, transmit, or process PHI. Instead, customers can use it to build and deploy architectures with other AWS services that might store, transmit, and/or process PHI. When picking the services that are deployed by AWS Elastic Beanstalk, customers should ensure that only HIPAA Eligible Services are used with PHI. See the entries for those services in this whitepaper for guidance on the use of PHI with those services. Customers should not include PHI in any free-form fields within AWS Elastic Beanstalk, such as the Name field. AWS Elastic Beanstalk uses AWS CloudTrail to log all API calls.

AWS Fargate
AWS Fargate is a technology that allows customers to run containers without having to manage servers or clusters. With AWS Fargate, customers no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale clusters, or optimize cluster packing. AWS Fargate removes the need for customers to interact with or think about servers or clusters. With Fargate, customers focus on designing and building their applications instead of managing the infrastructure that runs them. Fargate does not require any additional configuration in order to work with workloads that process PHI. Customers can run container workloads on Fargate using container orchestration services like Amazon ECS. Fargate only manages the underlying infrastructure and does not operate with or upon data within the workload being orchestrated. In keeping with the requirements for HIPAA, PHI should still be encrypted whenever in transit or at rest when accessed by containers launched with Fargate. Various mechanisms for encrypting at rest are available with each AWS storage option described in this paper.

AWS Firewall Manager
AWS Firewall Manager is a security management service that allows customers to centrally configure and manage firewall rules across customer accounts and applications in AWS Organizations. As new applications are created, Firewall Manager makes it easy to bring new applications and resources into compliance by enforcing a common set of security rules. Customers have a single service to build firewall rules, create security policies, and enforce them in a consistent, hierarchical manner across their entire infrastructure, from a central administrator account. AWS Firewall Manager is an orchestration service that does not directly process, store, or transmit user data. The service does not encrypt customer content, but the underlying services that AWS Firewall Manager uses, such as Amazon DynamoDB, encrypt user data.

AWS Global Accelerator
AWS Global Accelerator is a global load balancing service that improves the availability and latency of multi-Region applications. To ensure that PHI remains encrypted in transit and at rest while using AWS Global Accelerator, architectures being load balanced by Global Accelerator should use an encrypted protocol such as HTTPS or SSL/TLS. Refer to the guidance for Amazon EC2, Elastic Load Balancing, and other AWS services to better understand the available encryption options for backend resources. AWS Global Accelerator uses AWS CloudTrail to log all API calls.
AWS Glue
AWS Glue is a fully managed ETL (extract, transform, and load) service that makes it simple and cost-effective for customers to categorize their data, clean it, enrich it, and move it reliably between various data stores. To ensure the encryption of data containing PHI while in transit, AWS Glue should be configured to use JDBC connections to data stores with SSL/TLS. Additionally, to maintain encryption of data at rest, the setting for server-side encryption (SSE-S3) should be passed as a parameter to ETL jobs run with AWS Glue. All data stored at rest within the Data Catalog of AWS Glue is encrypted using keys managed by AWS KMS when encryption is enabled upon creation of a Data Catalog object. AWS Glue uses CloudTrail to log all API calls.

AWS Glue DataBrew
AWS Glue DataBrew is a fully managed visual data preparation service that makes it easy for data analysts and data scientists to clean and normalize data to prepare it for analytics and machine learning. To ensure the encryption of data containing PHI while in transit, DataBrew should be configured to use JDBC connections to data stores with SSL/TLS. When connecting to JDBC data sources, DataBrew uses the settings on your AWS Glue connection, including the "Require SSL connection" option. Additionally, to maintain encryption while at rest in S3 buckets, the setting for server-side encryption (SSE-S3 or SSE-KMS) should be passed as a parameter to DataBrew jobs.
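One way to apply the at-rest settings described above consistently is an AWS Glue security configuration that jobs and crawlers reference by name. A minimal sketch; the configuration name and key ARN are placeholders, and the specific encryption modes chosen (SSE-S3 versus SSE-KMS, CSE-KMS for job bookmarks) would depend on the customer's key-management requirements.

```python
import boto3

glue = boto3.client("glue")
kms_key = "arn:aws:kms:us-east-1:111122223333:key/example"  # placeholder

# Minimal sketch: a reusable security configuration that encrypts job output
# in S3, CloudWatch log output, and job bookmarks.
glue.create_security_configuration(
    Name="phi-etl-encryption",
    EncryptionConfiguration={
        "S3Encryption": [{"S3EncryptionMode": "SSE-KMS", "KmsKeyArn": kms_key}],
        "CloudWatchEncryption": {"CloudWatchEncryptionMode": "SSE-KMS", "KmsKeyArn": kms_key},
        "JobBookmarksEncryption": {"JobBookmarksEncryptionMode": "CSE-KMS", "KmsKeyArn": kms_key},
    },
)

# Jobs then reference it by name, for example:
#   glue.create_job(Name="phi-etl", Role="...", Command={...},
#                   SecurityConfiguration="phi-etl-encryption")
```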
connections to external resources should use an encrypted protocol such as HTTPS or SSL/TLS. For example, when Amazon S3 is accessed from a Lambda procedure, it should be addressed with an HTTPS endpoint such as https://bucket.s3.aws-region.amazonaws.com. If any PHI is placed at rest or idled within a running procedure, it should be encrypted client-side or server-side with keys obtained from AWS KMS or AWS CloudHSM. Follow the related guidance for Amazon API Gateway when triggering AWS Lambda functions through that service. When using events from other AWS services to trigger AWS Lambda functions, the event data should not itself contain PHI. For example, when a Lambda procedure is triggered from an S3 event, such as the arrival of an object in S3, the object name that is relayed to Lambda should not contain any PHI, although the object itself can contain such data. AWS Managed Services AWS Managed Services provides ongoing management of AWS infrastructures. By implementing best practices to maintain a customer's infrastructure, AWS Managed Services helps to reduce their operational overhead and risk. AWS Managed Services automates common activities, such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support infrastructures. Customers can use AWS Managed Services to manage AWS workloads that operate with data containing PHI. Usage of AWS Managed Services does not alter which AWS services are eligible for use with PHI. Tooling and automation provided by AWS Managed Services cannot be used for the storage or transmission of PHI. AWS Mobile Hub AWS Mobile Hub provides a set of tools that enable customers to quickly configure AWS services and integrate them into their mobile app. AWS Mobile Hub itself does not store or transmit PHI. Instead, it is used to administer and orchestrate mobile architectures built with other AWS services, including architectures that handle PHI. Architectures that handle PHI should only be built with HIPAA Eligible Services, and PHI should not be placed in metadata for AWS Mobile Hub. AWS Mobile Hub uses AWS CloudTrail to log all actions. For more information, see Logging AWS Mobile CLI API Calls with AWS CloudTrail. AWS OpsWorks for Chef Automate AWS OpsWorks for Chef Automate is a fully managed configuration management service that hosts Chef Automate, a set of automation tools from Chef for infrastructure and application management. The service itself does not contain, transmit, or handle any PHI or sensitive information, but customers should ensure that any resources configured by OpsWorks for Chef Automate are configured consistent with the Guidance. API calls are captured with AWS CloudTrail. For more information, see Logging AWS OpsWorks Stacks API Calls with AWS CloudTrail. AWS OpsWorks for Puppet Enterprise AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management. The service itself does not contain, transmit, or handle any PHI or sensitive information, but customers should ensure that any resources configured by OpsWorks for Puppet Enterprise are configured consistent with the Guidance. API calls are captured with AWS CloudTrail. For more information, see Logging AWS OpsWorks Stacks API Calls with AWS CloudTrail. AWS OpsWorks Stacks AWS OpsWorks Stacks provides a simple and flexible way to create
and manage stacks and applications Customers can use AWS OpsWorks Stacks to deploy and monitor applications in their stacks AWS OpsWorks Stacks encrypts all traffic while in transit However encrypted data bags (a Chef data storage mechanism) are not available and any assets that must be stored securely such as PHI secrets/passwords master certificates etc should be stored in an encrypted bucket in Amazon S3 AWS OpsWorks Stack uses AWS CloudTrail to log all API calls AWS Organizations AWS Organizations helps customers centrally manage and govern their environment as they grow and scale their AWS resources Using AWS Organizations they can programmatically create new AWS accounts and allocate resources group accounts to organize their workflows apply policies to accounts or groups for governance and simplify billing by using a single payment method for all of their accounts In addition AWS Organizations is integrated with other AWS services so customers can define central configurations security mechanisms audit requirements and resource sharing across accounts in their organization AWS Organizations is available to all AWS customers at no additional charge AWS Organizations is an orchestration service that does not directly process store or transmit user data The service does not encrypt customer content but underlying services that are launched within AWS Organizations do encrypt user data AWS Organizations is integrated with AWS CloudTrail a service that provides a record of actions taken by a user role or an AWS service in AWS Organizations AWS RoboMaker AWS RoboMaker enables customers to execute code in the cloud for application development and provides a robotics simulation service to accelerate application testing AWS RoboMaker also provides a robotics fleet management service for remote application deployment update and management Network traffic containing PHI must encrypt data in transit All management communication with the simulation server is over TLS and customers should use open standard transport encryption mechanisms for connections to other AWS services AWS RoboMaker also integrates with CloudTrail to log all API calls to a specific Amazon S3 bucket AWS RoboMaker logs do not contain PHI and the EBS volumes used by the simulation server are encrypted When transferring data that may contain PHI to other services such as Amazon S3 customers must follow the receiving service’s guidance for storing PHI For deployments to robots customers must ensure that encryption of data in transit and atrest is consistent with their interpretation of the Guidance AWS SDK Metrics Enterprise customers can use the AWS CloudWatch agent with AWS SDK Metrics for Enterprise Support (SDK Metrics) to collect metrics from AWS SDKs on their hosts and clients These metrics are shared with 37ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper AWS Secrets Manager AWS Enterprise Support SDK Metrics can help customers collect relevant metrics and diagnostic data about their application's connections to AWS services without adding custom instrumentation to their code and reduces the manual work necessary to share logs and data with AWS Support Please note that SDK Metrics is only available to AWS customers with an Enterprise Support subscription Customers can use SDK Metrics with any application that directly calls AWS services and that was built using an AWS SDK that is one of the versions listed in the AWS Metrics documentation SDK Metrics monitors calls that are made 
by the AWS SDK and uses the CloudWatch agent running in the same environment as a client application The CloudWatch agent encrypts the data intransit from the local machine to delivery in the destination log group The log group can be configured to be encrypted following the directions at Encrypt Log Data in CloudWatch Logs Using AWS KMS AWS Secrets Manager AWS Secrets Manager is an AWS service that makes it easier for customers to manage “secrets” Secrets can be database credentials passwords thirdparty API keys and even arbitrary text AWS Secrets Manager might be used to store PHI if such information is contained within “secrets” All secrets stored by AWS Secrets Manager are encrypted atrest using the AWS Key Management System (KMS) Users can select the AWS KMS key used when creating a new secret If no key is selected the default key for the account will be used AWS Secrets Manager uses AWS CloudTrail to log all API calls AWS Security Hub AWS Security Hub collects and consolidates findings from AWS security services enabled in a customer’s environment such as intrusion detection findings from Amazon GuardDuty vulnerability scans from Amazon Inspector Amazon S3 bucket policy findings from Amazon Macie publicly accessible and cross account resources from IAM Access Analyzer and resources lacking WAF coverage from AWS Firewall Manager AWS Security Hub also consolidates findings from integrated AWS Partner Network (APN) security solutions AWS Security Hub integrates with Amazon CloudWatch Events enabling customers to create custom response and remediation workflows Customers can easily send findings to SIEMs chat tools ticketing systems Security Orchestration Automation and Response (SOAR) tools and oncallmanagement platforms Response and remediation actions can be fully automated or they can be triggered manually in the console Customers can also use AWS Systems Manager Automation documents AWS Step Functions and AWS Lambda functions to build automated remediation workflows that can be initiated from AWS Security Hub To ensure data protection AWS Security Hub encrypts data at rest and data in transit between component services Thirdparty auditors assess the security and compliance of AWS Security Hub as part of multiple AWS compliance programs AWS Security Hub is part of AWS’s SOC ISO PCI and HIPAA compliance programs AWS Server Migration Service AWS Server Migration Service (AWS SMS) automates the migration of onpremises VMware vSphere or Microsoft HyperV/SCVMM virtual machines to the AWS Cloud AWS SMS incrementally replicates server VMs as cloudhosted Amazon Machine Images (AMIs) ready for deployment on Amazon EC2 38ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper AWS Serverless Application Repository Servers running onpremises and being migrated to the cloud with (AWS SMS) can contain PHI data AWS SMS encrypts data while in transit and when server VM images are being staged for final placement onto EC2 Refer to the guidance for EC2 and setting up encrypted storage volumes when migrating a server VM containing PHI with AWS SMS AWS SMS uses CloudTrail to log all API calls AWS Serverless Application Repository The AWS Serverless Application Repository (SAR) is a managed repository for serverless applications It enables teams organizations and individual developers to store and share reusable applications and easily assemble and deploy serverless architectures in powerful new ways The applications are AWS CloudFormation templates which contain 
definitions of the application infrastructure and compiled binaries of application AWS Lambda function code Although it is possible for applications that are in the AWS Serverless Application Repository to process PHI they would only do this after being deployed to a customer’s account and not as part of the SAR itself The AWS Serverless Application Repository encrypts files that customers upload including deployment packages and layer archives For data in transit the AWS Serverless Application Repository uses TLS to encrypt data between the service and the agent AWS Serverless Application Repository is integrated with AWS CloudTrail which is a service that provides a record of actions taken by a user role or an AWS service in the AWS Serverless Application Repository AWS Service Catalog AWS Service Catalog allows IT administrators to create manage and distribute portfolios of approved products to end users who can then access the products they need in a personalized portal AWS Service Catalog is used to catalog share and deploy selfservice solutions on AWS and cannot be used to store transmit or process PHI PHI should not be placed in any metadata for AWS Service Catalog items or within any item description AWS Service Catalog uses AWS CloudTrail to log all API calls AWS Shield AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS AWS Shield provides alwayson detection and automatic inline mitigations that minimize application downtime and latency so there is no need to engage AWS Support to benefit from DDoS protection AWS Shield cannot be used to store or transmit PHI but instead can be used to safeguard web applications that do operate with PHI As such no special configuration is needed when engaging AWS Shield All AWS customers benefit from the automatic protections of AWS Shield Standard at no additional charge AWS Shield Standard defends against most common frequently occurring network and transport layer DDoS attacks that target their website or applications For higher levels of protection against attacks targeting their web applications running on Elastic Load Balancing (ELB) Amazon CloudFront and Amazon Route 53 resources customers can subscribe to AWS Shield Advanced AWS Snowball With AWS Snowball (Snowball) customers can transfer hundreds of terabytes or petabytes of data between their onpremises data centers and Amazon Simple Storage Service (Amazon S3) PHI stored 39ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper AWS Snowball Edge in AWS Snowball must be encrypted atrest consistent with the Guidance When creating an import job customers must specify the ARN for the AWS KMS master key to be used to protect data within the Snowball In addition during the creation of the import job customers should choose a destination S3 bucket that meets the encryption standards set by the Guidance While Snowball does not currently support serverside encryption with AWS KMSmanaged keys (SSE KMS) or serverside encryption with customer provided keys (SSEC) Snowball does support serverside encryption with Amazon S3managed encryption keys (SSES3) For more information see Protecting Data Using ServerSide Encryption with Amazon S3Managed Encryption Keys (SSES3) Alternatively customers can use the encryption methodology of their choice to encrypt PHI before storing the data in AWS Snowball Currently customers may use the standard AWS Snowball appliance or AWS Snowmobile as part of our 
BAA AWS Snowball Edge AWS Snowball Edge connects to existing customer applications and infrastructure using standard storage interfaces streamlining the data transfer process and minimizing setup and integration Snowball Edge can cluster together to form a local storage tier and process customer data onsite helping customers ensure that their applications continue to run even when they are not able to access the cloud To ensure that PHI remains encrypted while using Snowball Edge customers should make sure to use an encrypted connection protocol such as HTTPS or SSL/TLS when using AWS Lambda procedures powered by AWS IoT Greengrass to transmit PHI to/from resources external to Snowball Edge Additionally PHI should be encrypted while stored on the local volumes of Snowball Edge either through local access or via NFS Encryption is automatically applied to data placed into Snowball Edge using the Snowball Management Console and API for bulk transport into S3 For more information on data transport into S3 see the related guidance for the section called “AWS Snowball” (p 39) AWS Snowmobile AWS Snowmobile is operated by AWS as a managed service As such AWS will contact the customer to determine requirements for deployment and arrange for network connectivity as well as provide assistance moving data Data stored on Snowmobile is encrypted using the same guidance provided for AWS Snowball AWS Step Functions AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows AWS Step Functions is not able to store transmit or process PHI PHI should not be placed within the metadata for AWS Step Functions or within any task or state machine definition AWS Step Functions uses AWS CloudTrail to log all API calls AWS Storage Gateway AWS Storage Gateway is a hybrid storage service that enables customers’ onpremises applications to seamlessly use AWS Cloud storage The gateway uses open standard storage protocols to connect 40ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper File Gateway existing storage applications and workflows to AWS Cloud storage services for minimal process disruption File Gateway File gateway is a type of AWS Storage Gateway that supports a file interface into Amazon S3 and that adds to the current blockbased volume and VTL storage File gateway uses HTTPS to communicate with S3 and stores all objects encrypted while on S3 using SSES3 by default or using clientside encryption with keys stored in AWS KMS File metadata such as file names remains unencrypted and should not contain any PHI Volume Gateway Volume gateway provides cloudbacked storage volumes that customers can mount as internet Small Computer System Interface (iSCSI) devices from onpremises application servers Customers should attach local disks as Upload buffers and Cache to the Volume Gateway VM in accordance with their internal compliance and regulatory requirements It is recommended that for PHI these disks should be capable of providing encryption atrest Communication between the Volume Gateway VM and AWS is encrypted using TLS 12 to secure PHI in transport Tape Gateway Tape gateway provides a VTL (virtual tape library) interface to thirdparty backup applications running onpremises Customers should enable encryption for PHI within the thirdparty backup application when setting up a tape backup job Communication between the Tape Gateway VM and AWS is encrypted using TLS 12 to secure PHI in transport Customers using 
any of the Storage Gateway configurations with PHI should enable full logging For more information see What Is AWS Storage Gateway? AWS Systems Manager AWS Systems Manager is a unified interface that allows customers to easily centralize operational data automate tasks across their AWS resources and shortens the time to detect and resolve operational problems in their infrastructure Systems Manager provides a complete view of a customer’s infrastructure performance and configuration simplifies resource and application management and makes it easy to operate and manage their infrastructure at scale When outputting data that may contain PHI to other services such as Amazon S3 customers must follow the receiving service’s guidance for storing PHI Customers should not include PHI in metadata or identifiers such as document names and parameter names AWS Transfer for SFTP AWS Transfer for SFTP provides Secure File Transfer Protocol (SFTP) access to a customer's S3 resources Customers are presented with a virtual server which is accessed using the standard SFTP protocol at a regional service endpoint From the point of view of the AWS customer and the SFTP client the SFTP gateway looks like a standard highly available SFTP server Although the service itself does not store process or transmit PHI the resources that the customer is accessing on Amazon S3 should be configured in a manner that is consistent with the Guidance Customers can also use AWS CloudTrail to log API calls made to AWS Transfer for SFTP 41ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper AWS WAF – Web Application Firewall AWS WAF – Web Application Firewall AWS WAF is a web application firewall that helps protect customer web applications from common web exploits that could affect application availability compromise security or consume excessive resources Customers may place AWS WAF between their web applications hosted on AWS that operate with or exchange PHI and their end users As with the transmission of any PHI while on AWS data containing PHI must be encrypted while in transit Refer to the guidance for Amazon EC2 to better understand the available encryption options AWS XRay AWS XRay is a service that collects data about requests that a customer’s application serves and provides tools that they can use to view filter and gain insights into that data to identify issues and opportunities for optimization For any traced request to a customer’s application they can see detailed information not only about the request and response but also about calls that their application makes to downstream AWS resources microservices databases and HTTP web APIs AWS XRay should not be used to store or process PHI Information transmitted to and from AWS XRay is encrypted by default When using AWS XRay do not place any PHI within segment annotations or segment metadata Elastic Load Balancing Customers can use Elastic Load Balancing to terminate and process sessions containing PHI Customers can choose either the Classic Load Balancer or the Application Load Balancer Because all network traffic containing PHI must be encrypted in transit endtoend customers have the flexibility to implement two different architectures: Customers can terminate HTTPS HTTP/2 over TLS (for Application) or SSL/TLS on Elastic Load Balancing by creating a load balancer that uses an encrypted protocol for connections This feature enables traffic encryption between the load balancer and the clients that initiate HTTPS HTTP/2 over TLS 
or SSL/TLS sessions, and for connections between the load balancer and customer backend instances. Sessions containing PHI must encrypt both frontend and backend listeners for transport encryption. Customers should evaluate their certificates and session negotiation policies and maintain them consistent with the Guidance. For more information, see HTTPS Listeners for Your Classic Load Balancer. Alternatively, customers can configure Elastic Load Balancing in basic TCP mode (for Classic) or over WebSockets (for Application) and pass through encrypted sessions to backend instances, where the encrypted session is terminated. In this architecture, customers manage their own certificates and TLS negotiation policies in applications running in their own instances. For more information, see Listeners for Your Classic Load Balancer. In both architectures, customers should implement a level of logging that they determine to be consistent with HIPAA and HITECH requirements. FreeRTOS FreeRTOS is an operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. FreeRTOS is based on the FreeRTOS kernel, a popular open source operating system for microcontrollers, and extends it with software libraries that make it easy to securely connect small, low-power devices to AWS Cloud services like AWS IoT Core or to more powerful edge devices running AWS IoT Greengrass. Data containing PHI can now be encrypted in transit and while at rest when using a qualified device running FreeRTOS. FreeRTOS provides two libraries for platform security: TLS and PKCS#11. The TLS API should be used to encrypt and authenticate all network traffic that contains PHI. PKCS#11 provides a standard interface for software cryptographic operations and should be used to encrypt any PHI stored on a qualified device running FreeRTOS. Using AWS KMS for Encryption of PHI Master keys in AWS KMS can be used to encrypt/decrypt data encryption keys used to encrypt PHI in a customer's applications or in AWS services that use AWS KMS. AWS KMS can be used in conjunction with a HIPAA account, but PHI can only be processed, stored, or transmitted in HIPAA Eligible Services. AWS KMS is normally used to generate and manage keys for applications running in other HIPAA Eligible Services. For example, an application processing PHI in Amazon EC2 could use the GenerateDataKey API call to generate data encryption keys for encrypting and decrypting PHI in the application (an illustrative code sketch of this pattern appears at the end of this whitepaper). The data encryption keys would be protected by a customer's master keys stored in AWS KMS, creating a highly auditable key hierarchy, as API calls to AWS KMS are logged in AWS CloudTrail. PHI should not be stored in the Tags (metadata) for any keys stored in AWS KMS. VM Import/Export VM Import/Export enables customers to easily import virtual machine images from their existing environment to Amazon EC2 instances and export them back to their on-premises environment. This offering allows customers to leverage existing investments in the virtual machines that they have built to meet their IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. Customers can also export imported instances back to their on-premises virtualization infrastructure, allowing them to deploy workloads across their IT infrastructure. VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3. To import images, customers can use the AWS CLI or other developer tools to import a virtual machine (VM) image from their VMware environment. If customers use the VMware vSphere virtualization platform, they can also use the AWS Management Portal for vCenter to import their VM. As part of the import process, VM Import converts the customer's VM into an Amazon EC2 AMI, which they can use to run Amazon EC2 instances. Once their VM has been imported, they can take advantage of Amazon's elasticity, scalability, and monitoring via offerings like Auto Scaling, Elastic Load Balancing, and CloudWatch to support their imported images. Customers can export previously imported Amazon EC2 instances using the Amazon EC2 API tools. They simply specify the target instance, virtual machine file format, and a destination Amazon S3 bucket, and VM Import/Export automatically exports the instance to the Amazon S3 bucket, along with encryption options to secure the transmission and storage of their VM images. Customers can then download and launch the exported VM within their on-premises virtualization infrastructure. Customers can import Windows and Linux VMs that use VMware ESX or Workstation, Microsoft Hyper-V, and Citrix Xen virtualization formats, and they can export previously imported Amazon EC2 instances to VMware ESX, Microsoft Hyper-V, or Citrix Xen formats. For a full list of supported operating systems, versions, and formats, see VM Import/Export Requirements. AWS plans to add support for additional operating systems, versions, and formats in the future. Auditing backups and disaster recovery HIPAA's Security Rule has detailed requirements related to in-depth auditing capabilities, data backup procedures, and disaster recovery mechanisms. The services in AWS contain many features that help customers address those requirements. For example, customers should consider establishing auditing capabilities that allow security analysts to examine detailed activity logs or reports to see who had access, the IP address of entry, what data was accessed, and so on. This data should be tracked, logged, and stored in a central location for extended periods of time in case of an audit. Using Amazon EC2, customers can run activity log files and audits down to the packet layer on their virtual servers, just as they do on traditional hardware. They also can track any IP traffic that reaches their virtual server instance. A customer's administrators can back up the log files into Amazon S3 for long-term, reliable storage. HIPAA also has detailed requirements related to maintaining a contingency plan to protect data in case of an emergency; customers must create and maintain retrievable exact copies of electronic PHI. To implement a data backup plan on AWS, Amazon EBS offers persistent storage for Amazon EC2 virtual server instances. These volumes can be exposed as standard block devices, and they offer off-instance storage that persists independently from the life of an instance. To align with HIPAA guidelines, customers can create point-in-time snapshots of Amazon EBS volumes that are stored automatically in Amazon S3 and are replicated across multiple Availability Zones, which are distinct locations engineered to be insulated from failures in other Availability Zones. These snapshots can be accessed at any time and can protect data for long-term durability. Amazon S3 also provides a highly available solution for data storage and automated
backups By simply loading a file or image into Amazon S3 multiple redundant copies are automatically created and stored in separate data centers These files can be accessed at any time from anywhere (based on permissions) and are stored until intentionally deleted Moreover AWS inherently offers a variety of disaster recovery mechanisms Disaster recovery the process of protecting an organization’s data and IT infrastructure in times of disaster involves maintaining highly available systems keeping both the data and system replicated offsite and enabling continuous access to both With Amazon EC2 administrators can start server instances very quickly and can use an Elastic IP address (a static IP address for the cloud computing environment) for graceful failover from one machine to another Amazon EC2 also offers Availability Zones Administrators can launch Amazon EC2 instances in multiple Availability Zones to create geographically diverse fault tolerant systems that are highly resilient in the event of network failures natural disasters and most other probable sources of downtime Using Amazon S3 a customer’s data is replicated and automatically stored in separate data centers to provide reliable data storage designed to provide 9999% availability For more information on disaster recovery see the AWS Disaster Recovery whitepaper available at Disaster Recovery 44ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Document revisions To be notified about updates to this whitepaper subscribe to the RSS feed updatehistorychange updatehistorydescription updatehistorydate Whitepaper updated (p 45) Added information about AWS Network FirewallSeptember 9 2021 Whitepaper updated (p 45) Updated information about Amazon Connect Customer ProfilesAugust 26 2021 Whitepaper updated (p 45) Added sections Amazon AppFlow and AWS Glue DataBrewJuly 22 2021 Whitepaper updated (p 45) Updated navigation and organizationApril 26 2021 Whitepaper updated (p 45) Added the following sections: AWS CodeDeploy AWS CodePipeline Amazon Aurora Aurora PostgreSQL Amazon Textract Amazon Polly Amazon FSx AWS Auto Scaling AWS Backup AWS Elastic Beanstalk AWS Firewall Manager AWS Organizations AWS Security Hub AWS Serverless Application Repository VM Import/Export Amazon HealthLake Amazon EventBridge Updated Amazon Aurora sectionMarch 31 2021 Whitepaper updated (p 45) Added section on AWS App Mesh and updated AWS System Manager contentAugust 25 2020 Whitepaper updated (p 45) Added sections Amazon Appstream 20 AWS SDK Metrics AWS Data Exchange Amazon MSK Amazon Pinpoint Amazon Lex Amazon SES and Amazon Forecast Amazon Quantum Ledger Database (QLDB) AWS Cloud MapMay 7 2020 Whitepaper updated (p 45) Added sections on Amazon CloudWatch Amazon CloudWatch Events Amazon Kinesis Data Firehose Amazon Kinesis Data Analytics Amazon OpenSearch Service Amazon DocumentDB (with MongoDB compatibility) AWS MobileJanuary 1 2020 45ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Hub AWS IoT Greengrass AWS OpsWorks for Chef Automate AWS OpsWorks for Puppet Enterprise AWS Transfer for SFTP AWS DataSync AWS Global Accelerator Amazon Comprehend Medical AWS RoboMaker and Alexa for Business Whitepaper updated (p 45) Added sections on Amazon Comprehend Amazon Transcribe Amazon Translate and AWS Certificate ManagerJanuary 1 2019 Whitepaper updated (p 45) Added sections on Amazon Athena Amazon EKS AWS IoT Core and AWS IoT Device Management Amazon FreeRTOS Amazon GuardDuty 
Amazon Neptune AWS Server Migration Service AWS Database Migration Service Amazon MQ and AWS GlueNovember 1 2018 Whitepaper updated (p 45) Added sections on Amazon Elastic File System (EFS) Amazon Kinesis Video Streams Amazon Rekognition Amazon SageMaker Amazon Simple Workflow AWS Secrets Manage AWS Service Catalog and AWS Step FunctionsJune 1 2018 Whitepaper updated (p 45) Added sections on AWS CloudFormation AWS XRay AWS CloudTrail AWS CodeBuild AWS CodeCommit AWS Config and AWS OpsWorks StackApril 1 2018 Whitepaper updated (p 45) Added section on AWS Fargate January 1 2018 Updates made prior to 2018: Date Description November 2017 Added sections on Amazon EC2 Container Registry Amazon Macie Amazon QuickSight and AWS Managed Services November 2017 Added sections on Amazon ElastiCache for Redis and Amazon CloudWatch October 2017 Added sections on Amazon SNS Amazon Route 53 AWS Storage Gateway AWS Snowmobile and AWS CloudHSM Updated section on AWS Key Management Service 46ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Date Description September 2017 Added sections on Amazon Connect Amazon Kinesis Streams Amazon RDS (Maria) DB Amazon RDS SQL Server AWS Batch AWS Lambda AWS Snowball Edge and the Lambda@Edge feature of Amazon CloudFront August 2017 Added sections on Amazon EC2 Systems Manager and Amazon Inspector July 2017 Added sections on Amazon WorkSpaces Amazon WorkDocs AWS Directory Service and Amazon ECS June 2017 Added sections on Amazon CloudFront AWS WAF AWS Shield and Amazon S3 Transfer Acceleration May 2017 Removed requirement for Dedicated Instances or Dedicated Hosts for processing PHI in EC2 and EMR March 2017 Updated list of services to point to AWS Services in Scope by Compliance Program page Added description for Amazon API Gateway January 2017 Updated to newest template October 2016 First publication 47ArchivedArchitecting for HIPAA Security and Compliance on Amazon Web Services AWS Whitepaper Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved 48
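Illustrative sketch referenced in the Using AWS KMS for Encryption of PHI section above. The following minimal Python (boto3) example shows one way to apply the envelope-encryption pattern that section describes: a KMS master key protects short-lived data keys, the data keys encrypt the PHI payload locally, and every GenerateDataKey and Decrypt call is recorded by AWS CloudTrail. This is a sketch under stated assumptions, not official guidance; the key alias alias/phi-data, the use of the third-party cryptography package for AES-GCM, and the in-memory record format are invented for the example.

"""Illustrative envelope encryption of a PHI payload with AWS KMS (boto3).

Assumptions (not from the whitepaper): a customer-managed KMS key exists under
the alias 'alias/phi-data', and the 'cryptography' package is available for
local AES-GCM encryption of the payload.
"""
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
KEY_ALIAS = "alias/phi-data"  # hypothetical customer-managed master key


def encrypt_phi(plaintext: bytes) -> dict:
    # Request a fresh data key from KMS; the call is logged in CloudTrail.
    data_key = kms.generate_data_key(KeyId=KEY_ALIAS, KeySpec="AES_256")
    nonce = os.urandom(12)  # 96-bit nonce, as recommended for AES-GCM
    ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, plaintext, None)
    # Persist only the encrypted copy of the data key, never the plaintext key.
    return {
        "ciphertext": ciphertext,
        "nonce": nonce,
        "encrypted_data_key": data_key["CiphertextBlob"],
    }


def decrypt_phi(record: dict) -> bytes:
    # KMS unwraps the data key under the same master key; also logged in CloudTrail.
    plaintext_key = kms.decrypt(CiphertextBlob=record["encrypted_data_key"])["Plaintext"]
    return AESGCM(plaintext_key).decrypt(record["nonce"], record["ciphertext"], None)


if __name__ == "__main__":
    record = encrypt_phi(b"example payload standing in for PHI")
    assert decrypt_phi(record) == b"example payload standing in for PHI"

Storing only the encrypted data key alongside the ciphertext (for example, next to the object in Amazon S3) keeps plaintext keys out of persistent storage, and the CloudTrail record of each GenerateDataKey and Decrypt call supports the auditing expectations discussed in the Auditing, backups, and disaster recovery section.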
|
General
|
consultant
|
Best Practices
|
Automating_Elasticity
|
ArchivedAutomating Elasticity March 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: awsamazoncom/whitepapersArchived Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2020 Amazon Web Services Inc or its affiliates All right s reserved Archived Contents Introduction 1 Monitoring AWS Service Usage and Costs 1 Tagging Resources 2 Automating Elasticity 2 Automating Time Based Elasticity 3 Automating Volume Based Elasticity 4 Conclusion 6 Archived Abstract This is the sixth in a series of whitepapers designed to support your cloud journey This paper seeks to empower you to maximize value from your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and continuously me asure your optimization status This paper discusses how you can automate elasticity to get the most value out of your AWS resources and optimize costs ArchivedAmazon Web Services – Automating Elasticity Page 1 Introduction In the traditional data center based model of IT once infrastructure is deployed it typically runs whether it is needed or not and all the capacity is paid for regardless of how much it gets used In the cloud resources are elastic meaning they can instantly grow or shrink to match the requirements of a specific application Elasticity allows you to match the supply of resource s—which cost money —to demand Because cloud resources are paid for based on usage matching needs to utilization is critical for cost optimization Demand includes both external usage such as the number of customers who visit a website over a given period and internal usage such as an application team using dev elopment and test environments There are two basic types of elasticity: time based and volume based Time based elasticity means turning off resources when they are not being used such as a devel opment environment that is needed only during business hours Volume based elasticity means matching scale to the intensity of demand whether that’s compute cores storage sizes or throughput By combining monitoring tagging and automation you can get the most value out of your AWS resources and optimize costs Monitoring AWS Service Usage and Costs There are a couple of tools that you can use to monitor your service usage and costs to identify opportunities to use elasticity The Cost Optimization Monitor can help you generate reports that provide insight into service usage and costs as you deploy and operate cloud architecture They include detailed billing reports which you can access in the AWS Billing and Cost Management console These reports provide estimated costs that you can break down in different ways (by period account resource or custom resource tags) to help monitor and forecast monthly charges You can analyze this information to optimize your infrastructure and maximize your 
return on investment using elastic ity ArchivedAmazon Web Services – Automating Elasticity Page 2 Cost Explorer is another free tool that you can use to view your costs and find ways to take advantage of elasticity You can view data up to the la st 13 months forecast how much you are likely to spend for the next 3 months and get recommendations on what Reserved Instances to purchase You can also use Cost Explorer to see patterns in how much you spend on AWS resources over time identify areas t hat need further inquiry and see trends that can help you understand your costs In addition you can specify time ranges for the data as well as view time data by day or by month Tagging Resources Tagging resources gives you visibility and control over cloud IT costs down to seconds and pennies by team and application Tagging lets you assign custom metadata to instances images and other resources For example you can categorize resources by owner purpose or environment which help s you organize th em and assign cost accountability When resources are accurately tagged automation tools can identify key characteristics of those resources needed to manage elasticity For example many customers run automated start/stop scripts that turn off developmen t environments during non business hours to reduce costs In this scenario Amazon Elastic Compute Cloud (Amazon EC2 ) instance tags provide a simple way to identify development instances that should keep running Automati ng Elasticity With AWS you can aut omate both volume based and time based elasticity which can provide significant savings For example companies that shut down EC2 instances outside of a 10 hour workday can save 70 % compared to running those instances 24 hours a day Automation becomes i ncreasingly important as environments grow larger and become more complex in which manually searching for elasticity savings becomes impractical Automation is powerful but you need to use it carefully It is important to minimize risk by giving people a nd systems only the minimum level of access required to perform necessary tasks Additionally you should anticipate exceptions to automation plans and consider different schedules and usage scenarios A one sizefitsall approach is seldom realistic even within the same ArchivedAmazon Web Services – Automating Elasticity Page 3 department Choose a flexible and customizable approach to accommodate your needs Automating Time Based Elasticity Most non production instances can and should be stopped when they are not being used Although it is possible to manually s hut down unused instances this is impractical at larger scales Let’s consider a few ways to automate time based elasticity AWS Instance Scheduler The AWS Instance Scheduler is a simple solution that allows you to create automatic start and stop schedules for your EC2 instances The solution is deployed using an AWS CloudFormation template which launches and configures the components necessary to automatically start and stop EC2 instances in all AWS Regions of your account During initial deployment you simply defi ne the AWS Instance Scheduler default start and stop parameters and the interval you want it to run These values are stored in Amazon DynamoDB and can be overridden or modified as necessary A custom resour ce tag identifies instances that should receive AWS Instance Scheduler actions The solution's recurring AWS Lambda function automatically starts and stops appropriately tagged EC2 instances You can review th e solution's custom Amazon CloudWatch metric 
to see a history of AWS Instance Scheduler actions Amazon EC2 API tools You can terminate instances programmatically using Amazon EC2 APIs specifically the StopInstances and TerminateInstances actions These APIs let you build your own schedules and automation tools When you stop an instance the root device and any other devices attached to the instance persist When you terminate an instanc e the root device and any other devices attached during the instance launch are automatically deleted For more information about the differences between rebooting stopping and terminating instances see Instance Lifecycle in the Amazon EC2 User Guide ArchivedAmazon Web Services – Automating Elasticity Page 4 AWS Lambda AWS Lambda serverless functions are another tool that you can use to shut down instances when they are not being used You can configure a Lambda function to start and stop instances when triggered by Amazon CloudWatch Events such as a specific time or utilization threshold For more information read this Knowledge Center topic AWS Data Pipeline AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services as well as on premises data sources at specified intervals It can be used to stop and start Amazon EC2 instances by running AWS Command Li ne Interface (CLI) file commands on a set schedule AWS Data Pipeline runs as an AWS Identity and Access Management (IAM) role which eliminates key management requirements Amazon CloudWatch Amazon Cloud Watch is a monitoring service for AWS cloud resources and the applications you run on AWS You can use Amazon CloudWatch to collect and track metrics and log files set alarms and automatically react to changes in your AWS resources You can use Amazon Cl oudWatch alarms to automatically stop or terminate EC2 instances that have gone unused or underutilized for too long You can stop your instance if it has an Amazon Elastic Block Store (Amazon EBS) volume as its root device A stopped instance retains its instance ID and can be restarted A terminated instance is deleted For more information on the difference between stopping and terminating instances see the Stop and Start Your Instance in the Amazon EC2 User Guide For example you can create a group of alarms that first sends an email notification to developers whose instance ha s been underutilized for 8 hours and then terminat es that instance if its utilization has not improved after 24 hours For instructions on using this method see the Amazon CloudWatch User Guide Automatin g Volume Based Elasticity By taking advantage of volume based elasticity you can scale resources to match capacity The best tool for accomplishing this task is Amazon EC2 Auto ArchivedAmazon Web Services – Automating Elasticity Page 5 Scaling which you can use to optimize performance by automatically increasing the number of EC2 instances during demand spikes and decreasing capacity during lulls to reduce costs Amazon EC2 Auto Scaling is well suited for applications that have stable demand p atterns and for ones that experience hourly daily or weekly variability in usage Beyond Amazon EC2 Auto Scaling you can use AWS Auto Scaling to automatically scale resources for other AWS services in cluding: • Amazon Elastic Container Service (Amazon ECS) – You can configure your Amazon ECS service to use AWS Auto Scaling to adjust its desired count up or down in response to CloudWatch alarms For more informa tion read the documentation • Amazon EC2 Spot Fleets – A Spot Fleet 
can either launch instances (scale out) or terminate instances (scale in) within the range that you choose in response to one or more scaling policies For more information read the documentation • Amazon EMR clusters – Auto Scaling in Amazon EMR allows you to programmatically scale out and scale in core and task nodes in a cluster based on rules that you specify in a scaling policy For more information read the documentation • Amazon AppStream 20 stacks and fleets – You can define scaling policies that adjust the size of your fleet automatically based on a variety of utilization metrics and optimize the number of running instances to match user demand You can also choose to turn off automatic scaling and make the fle et run at a fixed size For more information read the documentation • Amazon DynamoDB – You can dynamically adjust provisioned throughput capacity in response to actual traffic patterns This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling When the workload decrea ses AWS Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity For more information read to the documentation You can also read our blog post Auto Scaling for Amazon DynamoDB ArchivedAmazon Web Services – Automating Elasticity Page 6 Conclusion The elasticity of cloud services is a powerful way to optimize costs By combining tagging monitori ng and automation your organization can match its spending to its needs and put resources where they provide the most value For more information about elasticity and other cost management topics see the AWS Billing and Cost Management documentation Automation tools can help minimize some of the management and administrative tasks associated with an IT deployment Similar to the benefits from application services an automated or DevOps approach to your AWS infrastructure will provide scalability and elasticity with minimal manual intervention This also provides a level of control over your AWS environment and the associated spending For example when engineers or developers are allowed to provision AWS resources only through an established process a nd use tools that can be managed and audited (for example a provisioning portal such as AWS Service Catalog) you can avoid the expense and waste that results from simply turning on (and most often leaving on) standalone resources Contributors The follow ing individuals and organizations contributed to this document: • Amilcar Alfaro Sr Product Marketing Manager AWS • Erin Carlson Marketing Manager AWS • Keith Jarrett WW BD Lead – Cost Optimization AWS Business Development Document History Date Description March 2020 Minor revisions March 2018 First publication ArchivedAmazon Web Services – Automating Elasticity Page 7
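As a concrete companion to the time-based elasticity options described above (an AWS Lambda function invoked on a schedule by Amazon CloudWatch Events, acting only on appropriately tagged instances), the following Python (boto3) sketch shows one possible handler. It is illustrative only: the tag Schedule=office-hours and the {"action": "stop"} / {"action": "start"} event payload are assumptions made for the example, not a convention defined in this paper.

"""Illustrative Lambda handler for time-based elasticity (boto3).

Assumptions (not from the whitepaper): instances that follow a work-hours
schedule carry the tag Schedule=office-hours, and the scheduled CloudWatch
Events rule passes {"action": "stop"} or {"action": "start"} as the payload.
"""
import boto3

ec2 = boto3.client("ec2")
TAG_FILTER = {"Name": "tag:Schedule", "Values": ["office-hours"]}  # hypothetical tag


def _instance_ids(state: str) -> list:
    # Collect tagged instances that are currently in the given state.
    ids = []
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[TAG_FILTER, {"Name": "instance-state-name", "Values": [state]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return ids


def handler(event, context):
    action = event.get("action", "stop")
    if action == "stop":
        ids = _instance_ids("running")
        if ids:
            # Stopping (not terminating) preserves the EBS root volume and instance ID.
            ec2.stop_instances(InstanceIds=ids)
    else:
        ids = _instance_ids("stopped")
        if ids:
            ec2.start_instances(InstanceIds=ids)
    return {"action": action, "instances": ids}

Two scheduled CloudWatch Events rules complete the pattern, for example one cron expression at the start of the workday invoking the function with action=start and one at the end of the day with action=stop. The AWS Instance Scheduler solution described earlier packages essentially this behavior, with its schedule configuration held in Amazon DynamoDB.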
|
General
|
consultant
|
Best Practices
|
Automating_Governance_on_AWS
|
ArchivedAutomating Governance A Managed Service Approach to Security and Compliance on AWS August 2015 THIS PAPER HAS BEEN ARCHIVED For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 2 of 39 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditio ns or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 3 of 39 Contents Abstract 4 Introduction 4 Shared Responsibility Environment 6 Compliance Requirements 7 Compliance and Governance 8 Challenges in Architecting for Governance 9 Implementin g a Managed Services Organization 10 Standardizing Architecture for Compliance 14 Architectural Baselines 14 The Shared Services VPC 18 Automating for Compliance 20 Automating Compliance for EC2 Instances 23 Development & Management 25 Deployment 28 Automating for Governance: HighLevel Steps 33 Step 1: Define Common Use Cases 34 Step 2: Create and Document Reference Architectures 35 Step 3: Validate and Document Architecture Compliance 35 Step 4: Build Automated Solutions Based on Architecture 36 Step 5: Develop an Accreditation and Approval Process 37 Conclusion 37 Contributors 38 Notes 38 ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 4 of 39 Abstract This whitepaper is intended for existing and potential Amazon Web Services (AWS) customers who are implementing security controls for applications running on AWS It provides guidelines for developing and implementing a managed service approach to deploying applications in AWS The guidelines described provide enterprise customers with greater control over their applications while accelerating the process of deploying authorizing and monitoring these applications This paper is targeted at IT decision makers and security personnel and assumes familiarity with basic networking operating system data encryption and operational control security practices Introduction Governance encompasses an organization’s mission longterm goals responsibilities and decision making Gartner describes governance as “the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals ”1 An effective governance strategy defines both the frameworks for achieving goals and the decision makers who create them: Frameworks – The policies principles and guidelines that drive consistent IT decision making Decision makers – The entities or individuals who are responsible and accountable for IT decisions Welldeveloped frameworks ultimately can yield an efficient secure and compliant technology environment This paper describes how to develop and automate these frameworks by 
introducing the following concepts and practices: A managed service organization (MSO) that is part of a centralized cloud governance model Roles and responsibilities of the MSO on the customer side of the AWS shared responsibility model ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 5 of 39 Shared services and the use of Amazon Virtual Private Cloud (Amazon VPC) within AWS Architectural baselines for establishing minimum configuration requirements for applications being deployed in AWS Automation methods that can facilitate application deployment and simplify compliance accreditation ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 6 of 39 Shared Responsibility Environment Moving IT infrastructure to services in AWS creates a model of shared responsibility between the customer and AWS This shared model helps relieve the operational burden on the customer because AWS operates manages and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate The customer assumes responsibility for and management of the guest operating system (including responsibility for updates and security patche s) and other associated application software and the configuration of the AWSprovided security group firewall Customers must carefully consider the services they choose because their responsibilities vary depending on the services they use the integration of those services into their IT environment and applicable laws and regulations Figure 1: The AWS Shared Responsibility Model ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 7 of 39 This customer/AWS shared responsibility model also extends to IT controls Just as AWS and its customers share the responsibility for operating the IT environment they also share the management operation and verification of IT controls AWS can help relieve the customer of the burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that might previously have been managed by the customer Customers can shift the management of certain IT controls to AWS which results in a (new) distributed control environment Customers can then use the AWS control and compliance documentation to perform their control evaluation and verification procedures as required under the applicable compliance standard Compliance Requirements The infrastructure and services provided by AWS are approved to operate under several compliance standards and industry certifications These certifications cover only the AWS side of the shared responsibility model; customers retain the responsibility for certifying and accrediting workloads that are deployed on top of the AWSprovided services that they run The following common compliance standards have unique requirements that customers must consider: NIST SP 800 532–Published by the National Institute of Standards in Technology (NIST) NIST SP 800 53 is a catalog of security controls which most US federal agencies must comply with and which are widely used within private sector enterprises Provides a risk management framework that adheres to the Federal Information Processing Standard (FIPS) FedRAMP3–A US government program for ensuring standards in security assessment authorization and continuous monitoring FedRAMP follows the NIST 800 53 security control standards DoD Cloud Security Model (CSM)4–Standards for 
cloud computing issued by the US Defense Information Systems Agency (DISA) and documented in the Department of Defense (DoD) Security Requirements Guide (SRG) Provides an authorization process for DoD workload owners who have unique architectural requirements depending on impact level ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 8 of 39 HIPAA5 – The Health Insurance Portability and Accountability Act (HIPAA) contains strict security and compliance standards for organizations processing or storing Protected Health Information (PHI) ISO 270016 – ISO 27001 is a widely adopted global security standard that outlines the requirements for information security management systems It provides a systematic approach to managing company and customer information that’s based on periodic risk assessments PCI DSS7 – Payment Card Industry (PCI) Data Security Standards (DSS) are strict security standards for preventing fraud and protecting cardholder data for merchants that process credit card payments Evaluating systems in the cloud can be a challenge unless there are architectural standards that align with compliance requirements These architectural standards are especially critical for customers who must prove their systems meet strict compliance standards before they are permitted to go into production Compliance and Governance AWS customers are required to continue to maintain adequate governance over the entire IT control environment regardless of whether it is deployed in a traditional data center or in the cloud Leading governance practices include: Understanding required compliance objectives and requirements (from relevant sources) Establishing a control environment that meets those objectives and requirements Understanding the validation required based on the organization’s risk tolerance Verifying the operational effectiveness of the control environment Deployment in the AWS cloud gives organizations options to apply various types of controls and verification methods Workload owners can follow these basic steps to ensure strong governance and compliance: 1 Review information from AWS and other sources to understand the entire IT environment ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 9 of 39 2 Document all compliance requirements 3 Design and implement control objectives to meet the organization’s compliance requirements 4 Identify and document controls owned by outside parties 5 Verify that all control objectives are met and all key controls are designed and operating effectively Approaching compliance governance in this manner will help customers gain a better understanding of their control environment and help clearly define the verification activities that must be performed For more information on governance in the cloud see Security at Scale: Governance in AWS8 Challenges in Architecting for Governance AWS provides a high level of flexibility in how customers can design architectures for their applications in the cloud AWS has documented best practices in the whitepapers user guides API references and other resources that describe how to design for elasticity availability and security But these resources alone do not prevent bad design and improper configuration Architectural decisions that impact security can put customer data or personal information at risk and create liability Consider the following challenges: Building a single workload with different architecture choices that is still compliant The need to individually 
assess each of these unique architectures The high level of flexibility leaves room for error and serious mistakes can be resolved only by redeployment of the application Security analysts may not understand the differences between the many architectural decisions ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 10 of 39 Learning Curve By deploying applications in AWS workload owners and developers have a much greater level of control over and access to resources beyond the operating system and software However the number of decisions required when building an architecture can be overwhelming for those new to AWS Some of the se architectural decisions include how to address: Amazon VPC structure and network controls AWS Identity and Access Management (IAM) configuration policies permissions; Amazon Simple Storage Service (S3) bucket policies Storage and database options Load balancing Monitoring options alerts tagging Aggregation analysis and storage considerations for logging produced by a workload or AWS service Implementing a Managed Services Organization To implement governance AWS customers have begun establishing centralized teams within their organizations that facilitate the migration of legacy applications and the development of new applications Such a team can be called a provisioning team a center of excellence a broker and most commonly the managed service organization (MSO) which is the term we use Customers use an MSO to establish repeatable processes and templates for deploying applications to AWS while maintaining organizational control over their enterprise’s applications When the MSO function is outsourced it is generally referred to as a managed service partner (MSP) Many MSPs are validated by AWS under our Managed Service Program9 Understanding the enterprise’s cloud governance model is key to determining the provisioning strategy for accounts Amazon VPCs and applications and for deciding how to automate these processes Large enterprises generally centrally manage cloud operations at some level It is important to find the optimal balance between central management and decentralized control10 ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 11 of 39 In a centralized governance model an MSO provides the minimum requirements for workload owners who are deploying applications in the cloud: Guardrails for security data protection and disaster recovery Shared services for security continuous monitoring connectivity and authentication Auditing the deployments of workload owners to ensure adherence to security and compliance standards For most large enterprises there are typical ly two sets of cloud governance roles involved in the deployment of applications: MSO – As previously mentioned a component of centralized cloud governance; responsibilities can include account provisioning establishment of connectivity and Amazon VPC networking security auditing hosting of shared services billing and cost management Workload Owners – Those who are directly responsible for the deployment development and maintenance of applications; a workload owner can be a cost center or a department and may include system administrators developers and others directly responsible directly for one or more applications Enterprise customers establish an MSO when there are common functions that can be centralized to ensure that applications are deployed in a secure and compliant fashion The MSO can also accelerate the rate of migration through reuse of 
approved configurations which minimizes development and approval time while ensuring compliance through the automated implementation of organization al security requirements ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 12 of 39 Figure 2: Shared Responsibility Between the CSP the MSO and the Workload Owner Adding an MSO allows the authorization documentation of the workload owner to be scoped down to only the configuration and installation of software specific to a particular application because the workload owner inherits a significant portion of the security control implementation from AWS and the organization’s MSO Establishing an MSO requires some up front work but this investment provides enhanced control over applications increased speed to deployment decreased time to authorization and overall enhancement of the enterprise’s security posture Common Activities of the MSO MSOs implemented by AWS customers often perform the following activities: Account provisioning After reviewing the workload owner’s use case the MSO establishes the initial account connects it to the appropriate account for consolidated billing and configures basic security functionality prior to granting access to the workload owner ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 13 of 39 Security oversight Centralized account provisioning allows the MSO to implement features that enable security personnel to monitor the application as it is deployed and managed; the MSO might perform activities such as establishing an auditor group with crossaccount access and linking the application VPC to a shared services VPC that is controlled by the MSO Amazon VPC configuration Deploying the VPC and its subnets including configuring security groups and network ACLs To maintain tighter control over the application VPCs the MSO may retain control of VPC configuration and require the workload owner to request desired changes to network security IAM configuration Creating user groups and assignment of rights including creation of groups for internal auditors an IAM superuser and application administrative groups segregated by functionality (eg database and Unix administrators) Development and approval of templates Creating preapproved AWS CloudFormation templates for common use cases Using templates allows workload owners to inherit the security implementation of the approved template thereby limiting their authorization documentation to the features that are unique to their application Templates can be reused to shorten the time required to approve and deploy new applications AMI creation and management Creating a library of common approved Amazon Machine Images (AMIs) for the organization allowing centralized management and updating of machine images Creating common templates allows the MSO to enforce the use of approved AMIs Development of a shared services VPC A shared service VPC allows the MSO to receive continuous monitoring feeds from the organization’s application VPC and to provide common shared services that are required for their organization This often includes a shared access management platform logging endpoints and the aggregation of configuration information ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 14 of 39 Standardizing Architecture for Compliance The solution to the challenge of implementing security controls for applications running on AWS is to build standardized automated and repeatable architectures that can be 
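As a purely illustrative sketch of the account-provisioning and security-oversight activities described above, an MSO could include a cross-account auditor role in every workload account it creates. In the following AWS CloudFormation fragment, the account ID 111122223333 is a placeholder standing in for the MSO's security account, and the AWS managed ReadOnlyAccess policy is used only as an example of the permissions an auditor group might be granted:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Hypothetical auditor role that the MSO security account can assume (illustration only)",
  "Resources": {
    "AuditorRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "ManagedPolicyArns": [ "arn:aws:iam::aws:policy/ReadOnlyAccess" ]
      }
    }
  }
}

Because the template creates IAM resources, launching it requires the CAPABILITY_IAM acknowledgement that is shown with the AWS CLI command later in this paper.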
deployed for common use cases Automation can help customers easily meet the foundational requirements for buildi ng a secure application in the AWS cloud while providing a level of uniformity that follows proven best practices Architectural Baselines To determine the best method for standardizing and automating architecture in AWS establish baseline requirements up front These are the minimum common requirements to which most (or all) workloads must adhere An enterprise’s baseline requirements normally follow preexisting compliance controls regulatory guidelines security standards and best practices Typically a central department or group of individuals who are also involved in the monitoring auditing and evaluation of systems that are being deployed establish standard architectures based upon their baseline compliance and operational requirements Standard architectures can be shared among multiple applications and use cases within an organization This provides efficiency and uniformity and reduces the time and effort spent in designing architectures for new applications on AWS In an organization with a centralized cloud model these standard architectures are deployed during the account provisioning or application onboarding process Access Control/IAM Configuration IAM is central to securely controlling access to AWS resources Administrators can create users groups and roles with specific access policies to control which actions users and applications can perform through the AWS Management Console or AWS API Federation allows IAM roles to be mapped to permissions from central directory services The enterprise should determine how to implement the following IAM controls: Standard users groups or both that will exist in every account ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 15 of 39 Crossaccount roles or federated roles Roles for EC2 instances and application access to the AWS API Roles requiring access to S3 buckets and other shared resources Security requirements such as password policies and multifactor authentication (MFA) Networking/VPC Configuration Network boundaries and components are critical to deploying a secure architecture in the cloud An Amazon VPC is a logically isolated section of the AWS cloud which can be configured to enforce these network boundaries An AWS account can have one or more Amazon VPCs Subnets are logical groupings of IP address space within an Amazon VPC and exist within a single Availability Zone (AZ) A VPC strategy depends on the requirements of a common use case Amazon VPCs can be designated based on application lifecycle (production development) or on role (management shared services) A well documented Amazon VPC strategy will also take into account: The number of Amazon VPCs per AWS account The subnet structure within an Amazon VPC: the number of subnets and routing capabilities of each subnet High availability requirements: Amazon VPC subnet s across availability zones (AZs) Connectivity options: internet gateways virtual private gateways and routing AWS provides the components necessary for controlling the network boundaries of an application in an Amazon VPC The following table lists examples of Amazon VPC networking controls that can be utilized in AWS Control Implementation Protection Provided VPC Routing Tables Control which VPC subnets may communicate directly with the Internet Provides segmentation and broad reduction of attack surface area per subnet VPC Network Subnet level all traffic allowed by Provides 
blacklist protection for ports ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 15 of 39 Access Control Lists (NACLs) default stateless filtering designed and implemented across one or more VPC subnets and protocols with secu rity concerns such as TFTP and NetBIOS VPC Security Group(s) Hypervisor level all inbound connections denied by default stateful filtering designed for one or more instances Provides whitelist abilities for ingress and egress traffic opening services and protocols required by the instance and applications Host based Protection Customer selected software to provide intrusion detection and prevention and firewall and/or logging capabilities Depending on product implemented can provide scalable protec tion and detection capabilities and security behavior visibility across your virtual fleet Because VPC networking configuration is critical to ensure the confidentiality integrity and availability of an application enterprises should define standards that adhere to security and AWS best practices MSOs should follow these standards or in the case of decentralized deployment workload owners should have a blueprint to follow when building a VPC structure Resource Tagging Almost all AWS resources allow the addition of user defined tags These tags are metadata and are irrelevant to the functionality of the resource but are critical for cost management and access control When multiple groups of users or multiple workload owners exist within the same AWS account restricting access to resources based on tagging is important Regardless account structure tagbased IAM policies can be used to place extra security restrictions on critical resources The following example of an IAM policy specifies a condition that restricts an IAM user to changing the state of an EC2 instance that has the resource tag of “project = 12345 ” { "Version": "20121017" "Statement": [ { "Action": [ "ec2:StopInstances" ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 16 of 39 AWS recommends the following to effectively use resource tagging: Establish tagging baselines that define common keys and expected values across all accounts Implement tag enforcement through both auditing and automation methods Use automated deployment with AWS CloudFormation to automatically tag resources AMI Configuration Organizations commonly ensure security and compliance by centrally providing workload owners with prebuilt Amazon Machine Images (AMIs) These “golden ” AMIs can be preconfigured with hostbased security software and be hardened based on predetermined security guidelines Workload owners and developers can then use the AMIs as starting images on which to install their own software and configuration knowing the images are already compliant Note that managing centrally distributed AMIs can be an involved task for any central team Do not customize software and configuration which are likely to "ec2:RebootInstances" "ec2:TerminateInstances" ] "Condition": { "StringEquals": { "ec2:ResourceTag/project":"12345" } } "Resource": [ "arn:aws:ec2:your_region:your_account_ID:instance/*" ] "Effect": "Allow" } ] } ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 17 of 39 change frequently in an AMI; instead configure them by using Amazon Elastic Compute Cloud (Amazon EC2) user data scripts or automation tools such as Chef Puppet or AWS OpsWorks Figure 3: Differences Between FullyCconfigured and Base AMIs Figure 3 shows how preconfigured AMIs can be used 
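Putting the pieces of that policy together, the complete statement reads as follows (your_region and your_account_ID are placeholders, as in the original); it allows the user to stop, reboot, or terminate only those instances that carry the project = 12345 tag:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:your_region:your_account_ID:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/project": "12345" }
      }
    }
  ]
}

The same ec2:ResourceTag condition key can be reused in policies for other EC2 actions that support resource-level permissions.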
through automation and policy as the standard to control which new EC2 instances are deployed by workload owners Building AMIs can be partially automated by using tools such as Aminator and Packer11 Continuous Monitoring Continuous monitoring is the proactive approach of identifying risk and compliance issues by accurately tracking and monitoring system activity Certain compliance standards such as NIST SP 80053 require continuous monitoring to meet specific security controls AWS includes several services and native capabilities that can facilitate a continuous monitoring solution in the cloud AWS CloudTrail AWS CloudTrail is a service that logs API activity within an AWS account and delivers these logs to an Amazon Simple Storage Service (Amazon S3) bucket This data can be analyzed with thirdparty tools such as Splunk Alert Logic or CloudCheckr12 As a security standard CloudTrail should be enabled on all accounts and should log to a bucket that is accessible by security tools and applications ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 18 of 39 Amazon CloudWatch Alarms Amazon CloudWatch alarms notify users and applications when events related to AWS resources occur For example the failure of an instance can trigger an alarm to send an Amazon Simple Notification Service (Amazon SNS) notification by email to a group of users You can create common alarms for metrics and events within an account that must be monitored Centralized Logging In AWS application logs can be centralized for analysis by security tools This can be simplified by using Amazon CloudWatch Logs CloudWatch Logs provides an agent which can be configured to send application log data directly to CloudWatch Metric filters can then be used to track certain events and activity at the OS and application levels Notifications Amazon SNS can be used to send email or SMSbased notifications to administrative and security staff Within an AWS account you can create Amazon SNS topics to which applications and AWS CloudFormation deployments can publish These push notifications can automatically be sent to individuals or groups within the organization w ho need to be notified of Amazon CloudWatch alarms resource deployments or other activity published by applications to Amazon SNS AWS Config AWS Config is a service that provides you with an AWS resource inventory a configuration history and configuration change notifications all of which enable security and governance13 AWS Config allows detailed tracking and notification whenever a resource in an AWS account is created modified or deleted The Shared Services VPC Our enterprise customers have found that establishing a single Amazon VPC that contains security applications required for monitoring their applications simplifies centralized control of infrastructure and provides easier access to common features such as Network Time Protocol (NTP) servers directory services and certificate management repositories ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 19 of 39 Figure 4: A Sample SharedService Amazon VPC Approach for DoD Customers Figure 4 provides an example of a shared service VPC approach used by a DoD MSO that establishes two VPCs for use by all of their applications In the first VPC the MSO established a VPC dedicated to providing a web application firewall that screens all traffic for known attack patterns creates a single point for monitoring web traffic and yet does not create a singlepoint of failure due to its ability 
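To illustrate how the "CloudTrail enabled on all accounts" baseline can be folded into account provisioning, the following minimal CloudFormation template is one possible sketch, not an official artifact: the bucket name is a placeholder, and the bucket's policy is assumed to already grant CloudTrail permission to write to it.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative baseline trail that delivers API logs to a centrally owned security bucket",
  "Resources": {
    "SecurityTrail": {
      "Type": "AWS::CloudTrail::Trail",
      "Properties": {
        "IsLogging": true,
        "IncludeGlobalServiceEvents": true,
        "S3BucketName": "central-security-logs-example"
      }
    }
  }
}

The same template could be extended with Amazon CloudWatch alarms and an Amazon SNS topic so that security staff are notified when specific API activity appears in the logs.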
to scale with traffic In the second VPC the MSO hosts a variety of common services including Active Directory servers DNS servers NTP servers HostBased Security System (HBSS) ePolicy Orchestrator (ePO) rollup servers and a master Assured Compliance Assessment Solution (ACAS) Security Center server Each organization must determine the common services that they must host in their AWS environment to support the needs of workload owners ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 20 of 39 Automating for Compliance Any customer can create prebuilt and customizable reference architectures with the tools AWS provides although it does require a level of effort and expertise Automation Methods AWS CloudFormation is the core of AWS infrastructure automation The service allows you to automatically deploy complete architectures by using prebuilt JSONformatted template files The set of resources created by an AWS CloudFormation template is referred to as a “stack ” Modular Design for Compliance Automation When building enterprisewide AWS CloudFormation templates to automate compliance we recommend that you use a modular design Use separate stacks based on the commonality of configuration among applications This can automate and enforce the baseline standards for security and compliance described in the previous sections Figure 5 shows how a customer can develop and maintain AWS CloudFormation templates using a modular design A single workload would use one template from each of these stacks nested in a single template to deploy and configure an entire application ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 21 of 39 Figure 5: AWS CloudFormation Stacks Stack 1 – Stack 1 is the primary security template applied to each account; it deploys common IAM users roles groups and associated policies Stack 2 – Generally there will be a template for each common use case to deploy the associated VPC architecture; this can take into account connectivity options such as VPC peering NAT instances internet and virtual private gateways Stack 3 – There is a template for each common configuration of an application architecture They contain applicationrelated components that are common among multiple applications but distinct among use cases such as elastic load balancers Elastic Load Balancing SSL configuration common security groups and common S3 buckets Stack 4 – There is a template for each specific application that deploys the associated EC2 instances autoscaling groups and other instancelevel resources In this stack instances can be bootstrapped with required user data and other resources such as applicationspecific security groups can be created Use Case Packages Building templates in this manner allows you to reuse configurations For specific use cases and application types you can use “packages ” that consist of ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 22 of 39 multiple templates nested within a single main template to deploy an entire architecture as shown in Figure 6 Figure 6: Example Package That Includes IAM Base Configuration VPC Architecture 1 Application Architecture 2 and APP2 Template An organization with a decentralized cloud governance model can use this automation structure to establish “blueprint ” architectures and allow workload owners full control of deployment at all levels In contrast an organization with a centralized cloud team that is responsible for provisioning might allow workload owners to 
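One possible shape for such a "package" main template is sketched below. The S3 locations, parameter names, and output names are hypothetical; the sketch assumes the VPC template declares an output named VpcId and the application template accepts a matching parameter.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative use case package that nests the base IAM, VPC, and application templates",
  "Resources": {
    "IamBaseStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/example-mso-templates/iam-base.json"
      }
    },
    "VpcStack": {
      "Type": "AWS::CloudFormation::Stack",
      "DependsOn": "IamBaseStack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/example-mso-templates/vpc-architecture-1.json"
      }
    },
    "AppStack": {
      "Type": "AWS::CloudFormation::Stack",
      "DependsOn": "VpcStack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/example-mso-templates/app2.json",
        "Parameters": {
          "VpcId": { "Fn::GetAtt": [ "VpcStack", "Outputs.VpcId" ] }
        }
      }
    }
  }
}

Nesting the stacks this way lets a workload owner launch a single template while still inheriting the MSO-approved IAM and VPC baselines.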
provision only the applicationlevel components of the architecture while retaining responsibility for initial account provisioning IAM controls and Amazon VPC configuration To successfully build templates to automate compliance: Keep templates modular; use nested stacks when possible Use parameters as much as necessary to ensure flexibility Use the DependsOn attribute and wait conditions to prevent dependency issues when resources are deployed Develop a version control process to maintain template packages ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 23 of 39 Allow for command line interface (CLI)based or AWS Service Catalog based deployment Use a parameters file Use IAM policies to restrict the ability of users to delete AWS CloudFormation stacks Automating Compliance for EC2 Instances There are four tools for automating the configuration of EC2 instances at the operating system and application levels to meet compliance requirements Custom AMIs AWS allows you to create customized AMIs that can be built and hardened for use by workload owners to further install software and applications Building a compliant AMI may requires you to take into account the following: Software packages and updates Password policies SSH keys File system permissions/ownership File system encryption User/group configuration Access control settings Continuous monitoring tools Firewall rules Running services User Data Scripts You can employ user data to bootstrap EC2 instances to install packages and perform configuration on launch Utilize user data to directly manipulate instance configuration with any of the following tools: ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 24 of 39 CloudInit directives – Specify configuration parameters in user data which cloudinit can use to directly modify configuration An example of a directive is “Packages ” which can install a list of specific packages on the instance Shell scripts – Include Bash or PowerShell scripts directly in user data to run on instance launch There is a 16 KB raw data limit on user data which limits this option External scripts – A user data script can pull down a larger shell script from an S3 bucket URL or any other location and run this script to further configure the instance Configuration Management Software Configuration management solutions allow continuous management of instance configuration This can automate consistency among instances and make managing changes easier Examples of such solutions include: Chef Puppet Ansible SaltStack AWS OpsWorks By using these configuration management solutions you can build scripts and packages to secure an operating system These hardening operations can include modifying user access or file system permissions; disabling services; making firewall changes; and many other operations used to secure a system and reduce its attack surface The following example of a Chef script implements a password age policy: template '/etc/logindefs' do source 'logindefserb') mode 0444 owner 'root' group 'root' ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 25 of 39 You can design packages of configuration scripts for example Puppet modules or Chef cookbooks based on specific compliance requirements and apply them to instances that must meet those requirements Containers Containerization with applications such as Docker14 or Amazon EC2 Container Service (Amazon ECS)15 allows one or more applications to run independently on a single instance 
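As a hedged illustration of the user data option (and of the password-age control that the Chef example is driving at), the following template boots an Amazon Linux instance that patches itself and tightens /etc/login.defs at launch. The AMI ID is a placeholder, and the 60-day and 1-day thresholds are examples rather than mandated values.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative instance bootstrapped with a simple hardening script in user data",
  "Resources": {
    "HardenedInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-EXAMPLE",
        "InstanceType": "t2.micro",
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [ "", [
              "#!/bin/bash\n",
              "yum update -y\n",
              "sed -i 's/^PASS_MAX_DAYS.*/PASS_MAX_DAYS 60/' /etc/login.defs\n",
              "sed -i 's/^PASS_MIN_DAYS.*/PASS_MIN_DAYS 1/' /etc/login.defs\n"
            ] ]
          }
        }
      }
    }
  }
}

For configuration that changes frequently, the same script could instead bootstrap a Chef or Puppet agent so that ongoing changes are managed by the configuration management tool rather than baked into user data.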
within an isolated user space Figure 7: Containerization From a compliance perspective containers can be prebuilt with a standardized and hardened configuration based on the operating system and application Development & Management Using a modular approach and a common structure for templates simplifies updates and enforces uniform development by those responsible for creating new use case packages We recommend using the following elements when developing and managing AWS CloudFormation template packages that are architected for compliance Outputs The Output section of a template can include custom information and can be used to retrieve the ID of generated resources when nested stacks are used It variables (password_max_age: node['auth']['pw_max_age'] password_min_age: node['auth']['pw_min_age'] ) end ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 26 of 39 can also be used to provide general information that can be viewed from the AWS CloudFormation console or from the CLI/API describestack s call The Output sections of template files should include at minimum the following reference information: Use case/application type Compliance type Date created Maintained by Parameters AWS CloudFormation parameters16 are fields that allow users to specify data to the template upon launch Use parameters whenever possible You can design an entire set of AWS CloudFormation templates for a common use case by using highly customized parameters For example most tiered web applications share a similar architecture For this type of use case you can develop a complete fourstack template package so that multiple webbased applications can easily be deployed with the same template files by the user specifying parameters for AMIs and other applicationspecific resources Conditions AWS CloudFormation allows the use of Conditions17 which must be true for resources to be created When used in combination with parameters conditions enable you to design templates that make reference architectures flexible and based on application requirements For example a condition can be used to launch an EC2based database instead of an Amazon Relational Database Service (Amazon RDS) instance based on input parameters specified by the user as shown in the following snippet: "CreateDBInstance": { ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 27 of 39 Custom Resources AWS CloudFormation allows you to create custom resources18 which can be used to integrate with external processes or thirdparty providers Custom resources can also be designed to invoke AWS Lambda functions which can provide levels of automation not available with AWS CloudFormation alone Figure 8: Custom Resources Infrastructure as Code AWS CloudFormation templates and associated scripts documents and parameter files can be managed just as any application code would be We recommend that you use version control repositories such as Git or Subversion (SVN) to track changes and allow multiple users to efficiently push updates Capabilities such as version control testing and rapid deployment are possible with AWS CloudFormation templates just as with any source code A full Continuous Integration/Continuous Deployment (CI/CD) solution can be implemented using additional tools such as Jenkins19 "Fn::Not": [ { "Fn::Equals": [ { "Ref": "DatabaseAmi" } "none" ] } ] } ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 28 of 39 Figure 9: Example of CI/CD in AWS Using AWS CloudFormation You can store 
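For context, here is how the CreateDBInstance condition shown above might sit alongside its input parameter and the resource it gates. In a complete package, an Amazon RDS resource guarded by the inverse condition would normally accompany it; this fragment only shows the mechanics.

{
  "Parameters": {
    "DatabaseAmi": {
      "Type": "String",
      "Default": "none",
      "Description": "AMI ID for an EC2-hosted database, or 'none' to use Amazon RDS instead"
    }
  },
  "Conditions": {
    "CreateDBInstance": {
      "Fn::Not": [ { "Fn::Equals": [ { "Ref": "DatabaseAmi" }, "none" ] } ]
    }
  },
  "Resources": {
    "DatabaseInstance": {
      "Type": "AWS::EC2::Instance",
      "Condition": "CreateDBInstance",
      "Properties": {
        "ImageId": { "Ref": "DatabaseAmi" },
        "InstanceType": "t2.micro"
      }
    }
  }
}

When the user leaves DatabaseAmi at its default of none, the condition evaluates to false and the EC2-hosted database is simply not created.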
prebuilt use case packages in either a source code repository or in an S3 bucket This allows provisioning teams and workload owners to easily pull down the latest versions of these files Deployment To ensure a secure reliable and efficient deployment of prebuild template packages you should consider implementing several operational practices as described in the following sections AWS CLI Although you can use the AWS CloudFormation console to deploy templates from a webbased interface there are clear advantages to using the AWS CLI and other automated methods – especially if the templates require input to many parameters The AWS CLI is automatically installed on the Amazon Linux AMI You can use the AWS CLI to deploy automated architectures with a single command from an EC2 Linux instance Including a parameters file simplifies inputting template parameters by eliminating the need to manually input data for each field ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 29 of 39 You can use an additional script as a wrapper to simplify the CLI command or alternatively to directly call the AWS CloudFormation API to create the stack Launch EC2 instances into a predefined IAM role that allows access only to the AWS CloudFormation API To provide “least privilege ” within the AWS CloudFormation service use additional restrictions To launch a template from the AWS CLI: 1 Create an IAM role that allows an EC2 instance to access the AWS CloudFormation API 2 Launch an EC2 instance into the IAM role in a VPC (preferably a shared services VPC) 3 Copy or download the template package to the EC2 instance 4 Run the AWS CLI aws cloudformation create stack command to launch the template stack Security The security of AWS CloudFormation template packages should always be considered especially by customers who must adhere to strict compliance requirements Source code repositories should be secured to allow write access only to those responsible for updating packages In addition user names passwords and access keys should never be included in user data when automating deployment of EC2 instances because they are unencrypt ed plain text It is critical to understand that deleting an AWS CloudFormation stack actually deletes all underlying resources effectively destroying all data stored in EC2 aws cloudformation createstack stackname myStack template body file:///tem platejson parameters file:///parameters_filejson capabilities CAPABILITY_IAM ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 30 of 39 To mitigate the risk of accidental resource deletion use the following safeguards IAM permissions20 Restrict the ability to delete AWS CloudFormation stacks to only users groups and roles that require that ability You can write IAM policies that deny users and groups to which those policies are applied the ability to delete any stack The following is an example of an IAM policy that denies the DeleteStack and UpdateStack API calls: Deletion Policy21 Resources such as S3 buckets and EC2 and RDS instances support the AWS CloudFormation DeletionPolicy attribute Use this attribute to require that resources be retained upon stack deletion or that a snapshot be created (if snapshots are supported) The following is an example of a deletion policy with an S3 bucket AWS CloudFormation resource: { "Version":"2012 1017" "Statement":[{ "Effect":"Deny" "Action":[ "cloudformation:DeleteStack" "cloudformation:Updat eStack" ] "Resource":"*” }] } "myS3Bucket" : { "Type" : 
"AWS::S3::Bucket" ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 31 of 39 Auditing Automating architecture deployment in AWS can help simplify the process of auditing and accrediting deployed applications Having a base configuration for components such as IAM and VPC controls ensures that workload owners are deploying architectures based on compliance standards Security personnel at the customer’s MSO can “sign off ” on reusable template packages that are based on customer security standards and compliance requirements as compliant The security accreditation and auditing process can make use of automation with the following AWS capabilities: Tagging –AWS resources can be queried for common tags Tags can be applied at the sta ck level to all resources that support tagging Template validation –A scripted validation of the configuration can be tested against the AWS CloudFormation template files prior to deployment SNS notification –A nested stack in a template can be configured to send notifications about stack events to an Amazon SNS topic These Amazon SNS topics can be used to alert individuals groups or applications that a specific template has been deployed in the account Testing deployed resources –Through the AWS API scripted tests can be conducted to validate that deployed architectures meet security requirements For example tests can be run to detect if any security group has open access to certain ports or if there is an internet gateway in a VPC that should not have one ISV solutions –Thirdparty solutions for analyzing deployed architectures are available from AWS Partners Security control validation can also be implemented through solutions such as Telos’ Xacta risk management solution "DeletionPolicy" : "Retain" } ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 32 of 39 AWS Service Catalog Integration AWS Service Catalog allows IT administrators to create and manage approved catalogs of resources which are called products IT administrators create portfolios of one or more products which they can then distribute to AWS end users and workload owners End users can access products through a personalized portal22 Product – Products can be created to provide specific types of applications or to address specific use cases or alternatively they can be used to deploy base resources such as IAM and VPC configuration which other resources such as EC2 instances can utilize Template package deployment can be further automated and simplified by making the template package an AWS Service Catalog product Portfolios – A portfolio consists of one or more products Portfolios can include products for different types of use cases and can be organized by compliance type Permissions – End users and workload owners who are IAM users or members of IAM groups or roles can be given permission to use specific portfolios based on the level of access they need and what they need to deploy Constraints – Constraints are a granular control applied at a portfolio or product level that restrict the ways that AWS resources can be deployed Constraints can be used to allow templates to deploy all resources at a higher level of access than a workload owner has through IAM policies Tags – Tags can be used to control access to resources or for cost allocation Tags are enforced at the portfolio or product level AWS Service Catalog allows sharing of portfolios that are created in a common shared services AWS account This allows central management of and access to 
deployable reference architectures Central Management of AWS Service Catalog Customers with centralized governance models can fully control and manage the AWS Service Catalog products that workload owners have access to ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 33 of 39 Figure 10: Using AWS Service Catalog Constraints Automating for Governance: HighLevel Steps Automating a compliant secure and reliable architecture that adheres to an organization’s governance model involves several basic steps This section presents a highlevel overview Prerequisites Before beginning to develop automated reference architectures based on compliance requirements your organization must define the following: Cloud strategy and roadmap Governance model Cloud tasks roles and responsibilities VPC and account creation strategy Security standards and compliance requirements ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 34 of 39 Automating for compliance will often be part of a larger IT transformation initiative Many architectural requirements relate directly to existing governance and securityrelated decisions Step 1: Define Common Use Cases Customers must first determine t he standard use cases of their workloads Many applications deployed on AWS support a common use case These use cases share identical or similar base architectures for VPC design IAM configuration and other architectural components The following are examples of a few common use cases: Web applications – Web applications normally consist of multiple tiers (proxy/web application and database) for hosting webbased applications accessed by end users These applications can be designed for scalability and elasticity when properly architected in AWS Different VPC configurations are required depending on whether the application is intended to be internal facing or accessible from users on the public Internet Enterprise applications – Enterprise applications are almost always commercial offtheshelf (COTS) products that are used widely within an organization in critical tobusiness functions Examples include Microsoft SharePoint Active Directory PeopleSoft and Oracle EBusiness Suite Often each enterprise application addresses a specific use case with an architecture that is standardized Data analytics – Applications that analyze large data sets have architectures that require the deployment of common data analytics applications and use AWS big data services such as Amazon Redshift Amazon Elastic MapReduce (Amazon EMR) Amazon Kinesis and Amazon DynamoDB (DynamoDB) ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 35 of 39 Step 2: Create and Document Reference Architectures A welldesigned reference architecture provides clear documentation on how resources will be used within AWS Reference architectures should be created in Visio PowerPoint or another platform from which they can be distributed Figure 11: Example Reference Architecture in PowerPoint Step 3: Validate and Document Architectu re Compliance Accurately documenting how the reference architecture satisfies compliance requirements can reduce the amount of effort required for a workload owner to ensure that the architecture being deployed meets compliance requirements Compliance documentation may include: A security controls implementation matrix (SCTM) ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 36 of 39 A system security plan (SSP) A concept of operations (ConOps) Organizations 
that must follow specific compliance controls should determine which resources components and configurations meet the requirements of each control Including this documentation in a packaged deployment reduces the need to repeat the same compliance analysis for a proposed architecture Figure 12: Example of a Security Controls Implementation Matrix Provided by the Cloud Security Alliance Step 4: Build Automated Solutions Based on Architecture There are many ways to automate infrastructure creation with AWS services and features Most commonly AWS CloudFormation templates are used to automate deployment and configuration of AWS resources Create template packages using the design guidelines provided in “Automating for Compliance ” earlier in this whitepaper When building templates determine which configurations are common among various types of applications and use cases Properly maintain and update templates when necessary ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 37 of 39 Step 5: Develop an Accreditation and Approval Process Existing processes and methods for evaluating systems against compliance requirements may not apply or may need to be changed for applications in the cloud When automating compliance for an entire enterprise involve security teams early on so they can provide input and gain a deeper understanding of how applications will be deployed in AWS The accreditation and approval plan for automated deployments should consider of all of the following: The compliance standards that the organization must follow The current approval process for applications and infrastructure The existing security requirements related to networking continuous monitoring access control and auditing The current (and proposed) tools for security analysis scanning and monitoring The hardening requirements for deployed operating systems if there are any and the need for prehardened custom images The processes and methods used to validate both architecture templates and deployed configurations Conclusion Developing an automated solution for governance and compliance can reduce the cost time and effort to deploy applications in AWS while minimizing risk and simplifying architecture design When this approach is packaged into a reusable solution it can decrease the level of effort to produce compliancerelated documentation and allow time normally spent evaluating compliant architectures to be used to drive the organization’s goals and mission ArchivedAmazon Web Services – Automating Governance on AWS August 2015 Page 38 of 39 Contributors The following individuals and organizations contributed to this document: Mike Dixon Consultant AWS Public Sector Sales Lou Vecchioni Senior Consultant AWS ProServ Brett Miller Senior Consultant AWS ProServ Josh Weatherly Practice Manager AWS ProServ Andrew McDermott Senior Compliance Architect AWS Security Notes 1 http://wwwgartnercom/itglossary/itgover nance/ 2 http://nvlpubsnistgov/nistpubs/SpecialPublications/NISTSP80053r4pdf 3 http://d0awsstaticcom/whitepapers/compliance/awsarchitectureand securityrecommendationsforfedrampcompliancepdf 4 http://iasedisamil/cloud_security/Documents/u cloud_computing_srg_v1r1_finalpdf 5 http://awsamazoncom/compliance/hipaacompliance/ 6 http://www27000org/iso27001htm 7 http://awsamazoncom/compliance/pcidsslevel1 faqs/ 8 http://mediaamazonwebservicescom/AWS_Security_at_Scale_Governance_i n_AWSpdf 9 http://awsamazoncom/partners/managedservice/ 10 
https://media.amazonwebservices.com/AWS_Security_at_Scale_Governance_in_AWS.pdf
11 https://github.com/Netflix/aminator https://www.packer.io/intro/index.html
12 http://aws.amazon.com/cloudtrail/partners/
13 http://aws.amazon.com/config/
14 https://www.docker.com/
15 http://aws.amazon.com/ecs/
16 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
17 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
18 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html
19 https://wiki.jenkins-ci.org/display/JENKINS/AWS+Cloudformation+Plugin
20 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html
21 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
22 http://aws.amazon.com/servicecatalog/
|
General
|
consultant
|
Best Practices
|
AWS_Answers_to_Key_Compliance_Questions
|
ArchivedAWS Answers t o Key Compliance Questions January 2017 We w elcome yo ur feedback Please share yo ur thoughts at t his link This paper has been archived For the latest technical content about AWS Compliance see https:// awsamazoncom/compliance/faq/Archived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the info rmation in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditio ns or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its custom ers Archived Contents Key Compliance Questions and Answers 1 Further Reading 8 Document Revis ions 8 Archived Abstract This document addresses common cloud computing compliance questions as they relate to AWS The answers to these may be of interest when evaluating and operating in a cloud computing environment and may assist in AWS customers’ control management efforts ArchivedAmazon Web Services – AWS Answers to Key Compliance Questions Page 1 Key Compliance Questions and Answers Category Cloud Computing Question AWS Information Control Ownership Who owns which controls for cloud deployed infrastructure? For the portion deployed into AWS AWS controls the physical components of that technology The customer owns and controls everything else including control over connection points and transmissions To help customers better understand what controls we have in place and how effectively they are operating we publish a SOC 1 Type II report with controls defined around EC2 S3 and VPC as well as detailed physical security and environmental controls These controls are defined at a high level of specificity that should meet most customer needs AWS customers that have signed a non disclosure agreement with AWS may request a copy of the SOC 1 Type II report Auditing IT How can auditing of the cloud provider be accomplished? Auditing for most layers and controls above the physical controls remains the responsibility of the customer The definition of AWS defined logical and physical controls is documented in the SOC 1 Type II report and the report is available for review by audit and compliance teams AWS ISO 27001 and other certifications are also available for auditors to review SarbanesOxley compliance How is SOX compliance achieved if in scope systems are deployed in the cloud provider environment? 
If a customer processes financial information in the AWS cloud the customer’s auditors may determine that some AWS systems come into scope for Sarbanes Oxley (SOX) requirements The customer’s auditors must make their own determination regarding SOX applicability Because most of the logical access controls are managed by customer the customer is best positioned to determine if its control activities meet relevant standards If the SOX auditors request specifics regarding AWS’ physical controls they can reference the AWS SOC 1 Type II report which details the controls that AWS provides HIPAA compliance Is it possible to meet HIPAA compliance requirements while deployed in the cloud provider environment? HIPAA requirements apply to and are controlled by the AWS customer The AWS platform allows for the deployment of solutions that meet industry specific certification requirements such as HIPAA Customers can use AWS services to maintain a security level that is equivalent or greater than those required to protect electronic health records Customers have built healthcare applications ArchivedAmazon Web Services – AWS Answers to Key Compliance Questions Page 2 Category Cloud Computing Question AWS Information compliant with HIPAA’s Security and Privacy Rules on AWS AWS provides additional information about HIPAA compliance on its web site including a whitepaper on this topic GLBA compliance Is it possible to meet GLBA certification requirements while deployed in the cloud provider environment? Most GLBA requirements are controlled by the AWS customer AWS provides means for customers to protect data manage permissions and build GLBA compliant applications on AWS infrastructure If the customer requires specific assurance that physical security controls are operating effectively they can reference the AWS SOC 1 Type II report as relevant Federal regulation compliance Is it possible for a US Government agency to be compliant with security and privacy regulations while deployed in the cloud provider environment? US Federal agencies can be compliant under a number of compliance standards including the Federal Information Secur ity Management Act (FISMA) of 2002 Federal Risk and Authorization Management Program (FedRAMP) the Federal Information Processing Standard (FIPS) Publication 1402 and the International Traffic in Arms Regulations (ITAR) Compliance with other laws and statutes may also be accommodated depending on the requirements set forth in the applicable legislation Data location Where does customer data reside? AWS customers designate in which physical region their data and their servers will be located Data replication for S3 data objects is done within the regional cluster in which the data is stored and is not replicated to other data center clusters in other regions AWS customers designate in which physical region their data and their servers will be located AWS will not move customers' content from the selected Regions without notifying the customer unless required to comply with the law or requests of governmental entities For a complete list of regions see awsamazoncom/about aws/globa linfrastructure EDiscovery Does the cloud provider meet the customer’s needs to meet electronic discovery procedures and requirements? 
AWS provides infrastructure and customers manage everything else including the operating system the network configuration and the installed applications Customers are responsible for responding appropriately to legal procedures involving the identification collection processing analysis and production of electronic documents they store or process using AWS Upon request AWS may work with customers who require AWS’ assistance in legal proceedings ArchivedAmazon Web Services – AWS Answers to Key Compliance Questions Page 3 Category Cloud Computing Question AWS Information Data center tours Are data center tours by customers allowed by the cloud provider? No Due to the fact that our data centers host multiple customers AWS does not allow data center tours by customers as this exposes a wide range of customers to physical access of a third party To meet this customer need an independent and competent auditor validates the presence and operation of controls as part of our SOC 1 Type II report This broadly accepted thirdparty validation provides customers with the independent perspective of the effectiveness of controls in place AWS customers that have signed a non disclosure agreement with AWS may request a copy of the SOC 1 Type II report Independent reviews of data center physical security is also a part of the ISO 27001 audit the PCI assessment ITAR audit and the FedRAMP sm testing programs Thirdparty access Are third parties allowed access to the cloud provider data centers? AWS strictly controls access to data centers even for internal employees Third parties are not provided access to AWS data centers except when explicitly approved by the appropriate AWS data center manager per the AWS access policy See the SOC 1 Type II report for specific controls related to physical access data center access authorization and other related controls Privileged actions Are privileged actions monitored and controlled? Controls in place limit access to systems and data and provide that access to systems or data is restricted and monitored In addition customer data is and server instances are logically isolated from other customers by default Privileged user access control is reviewed by an independent auditor during the AWS SOC 1 ISO 27001 PCI ITAR and FedRAMP sm audits Insider access Does the cloud provider address the threat of inappropriate insider access to customer data and applications? AWS provides specific SOC 1 controls to address the threat of inappropriate insider access and the public certification and compliance initiatives covered in this document address insider access All certifications and third party attestations evaluate logical access preventative and detective controls In addition periodic risk assessments focus on how insider access is controlled and monitored Multitenancy Is customer segregation implemented securely? 
The AWS environment is a virtualized multi tenant environment AWS has implemented security management processes PCI controls and other security controls designed to isolate each customer from other customers AWS systems are designed ArchivedAmazon Web Services – AWS Answers to Key Compliance Questions Page 4 Category Cloud Computing Question AWS Information to prevent customers from accessing physical hosts or instances not assigned to them by filtering through the virtualization software This architecture has been valid ated by an independent PCI Qualified Security Assessor (QSA) and was found to be in compliance with all requirements of PCI DSS version 31 published in April 2015 Note : AWS also has single tenancy options Dedicated Instances are Amazon EC2 instances launched within your Amazon Virtual Private Cloud (Amazon VPC) that run hardware dedicated to a single customer Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud while isolating your Amazon EC2 compute instances at the hardware level Hypervisor vulnerabilities Has the cloud provider addressed known hypervisor vulnerabilities? Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor The hypervisor is regularly assessed for new and existing vulnerabilities and attack vectors by internal and external penetration teams and is well suited for maintaining strong isolation between guest virtual machines The AWS Xen hypervisor security is regularly evaluated by independent auditors during assess ments and audits See the AWS security whitepaper for more information on the Xen hypervisor and instance isolation Vulnerability management Are systems patched appropriately? AWS is responsible for patching systems supporting the delivery of service to customers such as the hypervisor and networking services This is done as required per AWS policy and in accordance with ISO 27001 NIST and PCI requirements Customers control their own guest operating systems software and applications and are therefore responsible for patching their own systems Encryption Do the provided services support encryption? Yes AWS allows customers to use their own encryption mechanisms for nearly all the services including S3 EBS SimpleDB and EC2 IPSec tunnels to VPC are also encrypted Amazon S3 also offers Server Side Encryption as an option for customers Customers may also use third party encryption technologies Refer to the AWS Security white paper for more information Data ownership What are the cloud provider ’s rights over customer data? AWS customers retain control and ownership of their data AWS errs on the side of protecting customer privacy and is vigilant in determining ArchivedAmazon Web Services – AWS Answers to Key Compliance Questions Page 5 Category Cloud Computing Question AWS Information which law enforcement requests we must comply with AWS does not hesitate to challenge orders from law enforcement if we think the orders lack a solid basis Data isolation Does the cloud provider adequately isolate customer data? All data stored by AWS on behalf of customers has strong tenant isolation security and control capabilities Amazon S3 provides advanced data access controls Please see the AWS security whitepaper for more information about specific data services’ security Composite services Does the cloud provider layer its service with other providers’ cloud services? 
AWS do es not leverage any thirdparty cloud providers to deliver AWS services to customers Physical and environmental controls Are these controls operated by the cloud provider specified? Yes These are specifically outlined in the SOC 1 Type II report In addition other certifications AWS supports such as ISO 27001 and FedRAMP sm require best practice physical and environmental controls Client side protection Does the cloud provider allow customers to secure and manage access from clients such as PC and mobile devices? Yes AWS allows customers to manage client and mobile applications to their own requirements Server security Does the cloud provider allow customers to secure their virtual servers? Yes AWS allows customers to implement their own security architecture See the AWS security whitepaper for more details on server and network security Identity and Access Management Does the service include IAM capabilities? AWS has a suite of identity and access management offerings allowing customers to manage user identities assign security credentials organize users in groups and manage user permissions in a centralized way Please see the AWS web site for more information Scheduled maintenance outages Does the provider specify when systems will be brought down for maintenance? AWS does not require systems to be brought offline to perform regular maintenance and system patching AWS’ own maintenance and system patching generally do not impact customers Maintenance of instances themselves is controlled by the customer Capability to scale Does the provider allow customers to scale beyond the original agreement? The AWS cloud is distributed highly secure and resilient giving customers massive scale potential Customers may scale up or down paying for only what they use ArchivedAmazon Web Services – AWS Answers to Key Compliance Questions Page 6 Category Cloud Computing Question AWS Information Service availability Does the provider commit to a high level of availability? AWS does commit to high levels of availability in its service level agreements (SLA) For example Amazon EC2 commits to annual uptime percentage of at least 9995% during the service year Amazon S3 commits to monthly uptime percentage of at least 999% Service credits are provided in the case these availability metrics are not me t Distributed Denial Of Service (DDoS) attacks How does the provider protect their service against DDoS attacks? The AWS network provides significant protection against traditional network security issues and the customer can implement further protection See the AWS Security Whitepaper for more information on this topic including a discussion of DDoS attacks Data portability Can the data stored with a service provider be exported by customer request? AWS allows customers to move data as needed on and off AWS storage AWS Import/Export service for S3 accelerates moving large amounts of data into and out of AWS using portable storage devices for transport Service provider business continuity Does the service provider operate a business continuity progr am? AWS does operate a business continuity program Detailed information is provided in the AWS Security Whitepaper Customer business continuity Does the service provider allow customers to implement a business continuity plan? 
Scheduled maintenance outages. Does the provider specify when systems will be brought down for maintenance? AWS does not require systems to be brought offline to perform regular maintenance and system patching. AWS's own maintenance and system patching generally do not impact customers. Maintenance of instances themselves is controlled by the customer.

Capability to scale. Does the provider allow customers to scale beyond the original agreement? The AWS cloud is distributed, highly secure, and resilient, giving customers massive scale potential. Customers may scale up or down, paying for only what they use.

Service availability. Does the provider commit to a high level of availability? AWS does commit to high levels of availability in its service level agreements (SLAs). For example, Amazon EC2 commits to an annual uptime percentage of at least 99.95% during the service year, and Amazon S3 commits to a monthly uptime percentage of at least 99.9%. Service credits are provided in the case these availability metrics are not met.

Distributed Denial of Service (DDoS) attacks. How does the provider protect its service against DDoS attacks? The AWS network provides significant protection against traditional network security issues, and the customer can implement further protection. See the AWS Security whitepaper for more information on this topic, including a discussion of DDoS attacks.

Data portability. Can the data stored with a service provider be exported at the customer's request? AWS allows customers to move data as needed on and off AWS storage. The AWS Import/Export service for Amazon S3 accelerates moving large amounts of data into and out of AWS using portable storage devices for transport.

Service provider business continuity. Does the service provider operate a business continuity program? AWS does operate a business continuity program. Detailed information is provided in the AWS Security whitepaper.

Customer business continuity. Does the service provider allow customers to implement a business continuity plan? AWS provides customers with the capability to implement a robust continuity plan, including the use of frequent server instance backups, data redundancy replication, and multi-region/Availability Zone deployment architectures.

Data durability. Does the service specify data durability? Amazon S3 provides a highly durable storage infrastructure. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. Once stored, Amazon S3 maintains the durability of objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of stored data using checksums; if corruption is detected, it is repaired using redundant data. Data stored in S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.

Backups. Does the service provide backups to tapes? AWS allows customers to perform their own backups to tapes using their own tape backup service provider; however, a tape backup is not a service provided by AWS. The Amazon S3 service is designed to drive the likelihood of data loss to near zero percent, and the durability equivalent of multi-site copies of data objects is achieved through data storage redundancy. For information on data durability and redundancy, please refer to the AWS website.

Price increases. Will the service provider raise prices unexpectedly? AWS has a history of frequently reducing prices as the cost of providing these services drops over time. AWS has reduced prices consistently over the past several years.

Sustainability. Does the service provider company have long-term sustainability potential? AWS is a leading cloud provider and is a long-term business strategy of Amazon.com. AWS has very high long-term sustainability potential.

Further Reading
For additional information, see the following sources:
• AWS Risk and Compliance Overview
• AWS Certifications, Programs, Reports, and Third-Party Attestations
• CSA Consensus Assessments Initiative Questionnaire

Document Revisions
January 2017: Migrated to new template
January 2016: First publication
|
General
|
consultant
|
Best Practices
|
AWS_Best_Practices_for_DDoS_Resiliency
|
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/aws-best-practices-ddos-resiliency.html

AWS Best Practices for DDoS Resiliency
First published June 2015; updated September 21, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Denial of Service Attacks
Infrastructure Layer Attacks
Application Layer Attacks
Mitigation Techniques
Best Practices for DDoS Mitigation
Attack Surface Reduction
Obfuscating AWS Resources (BP1, BP4, BP5)
Operational Techniques
Visibility
Support
Conclusion
Contributors
Further Reading
Document revisions

Abstract
It's important to protect your business from the impact of Distributed Denial of Service (DDoS) attacks, as well as other cyberattacks. Keeping customer trust in your service by maintaining the availability and responsiveness of your application is a high priority. You also want to avoid unnecessary direct costs when your infrastructure must scale in response to an attack. Amazon Web Services (AWS) is committed to providing you with the tools, best practices, and services to defend against bad actors on the internet. Using the right services from AWS helps ensure high availability, security, and resiliency.

In this whitepaper, AWS provides you with prescriptive DDoS guidance to improve the resiliency of applications running on AWS. This includes a DDoS-resilient reference architecture that can be used as a guide to help protect application availability. This whitepaper also describes different attack types, such as infrastructure layer attacks and application layer attacks, and explains which best practices are most effective to manage each attack type. In addition, it outlines the services and features that fit into a DDoS mitigation strategy and explains how each one can be used to help protect your applications. This paper is intended for IT decision makers and security engineers who are familiar with the basic concepts of networking, security, and AWS. Each section has links to AWS documentation that provides more detail on the best practice or capability.
Introduction

Denial of Service Attacks
A Denial of Service (DoS) attack is a deliberate attempt to make a website or application unavailable to users, such as by flooding it with network traffic. Attackers use a variety of techniques that consume large amounts of network bandwidth or tie up other system resources, disrupting access for legitimate users. In its simplest form, a lone attacker uses a single source to carry out a DoS attack against a target, as shown in the following image.

Diagram of a DoS Attack

In a DDoS attack, an attacker uses multiple sources to orchestrate an attack against a target. These sources can include distributed groups of malware-infected computers, routers, IoT devices, and other endpoints. The following diagram shows a network of compromised hosts participating in the attack, generating a flood of packets or requests to overwhelm the target.

Diagram of a DDoS Attack

There are seven layers in the Open Systems Interconnection (OSI) model, described in the Open Systems Interconnection (OSI) Model table. DDoS attacks are most common at layers three, four, six, and seven. Layer three and four attacks correspond to the Network and Transport layers of the OSI model; within this paper, AWS refers to these collectively as infrastructure layer attacks. Layer six and seven attacks correspond to the Presentation and Application layers of the OSI model; AWS addresses these together as application layer attacks. Examples of these attack types are discussed in the following sections.

Open Systems Interconnection (OSI) Model
# | Layer        | Unit     | Description                               | Vector examples
7 | Application  | Data     | Network process to application            | HTTP floods, DNS query floods
6 | Presentation | Data     | Data representation and encryption        | TLS abuse
5 | Session      | Data     | Interhost communication                   | N/A
4 | Transport    | Segments | End-to-end connections and reliability    | SYN floods
3 | Network      | Packets  | Path determination and logical addressing | UDP reflection attacks
2 | Data Link    | Frames   | Physical addressing                       | N/A
1 | Physical     | Bits     | Media, signal, and binary transmission    | N/A

Infrastructure Layer Attacks
The most common DDoS attacks, User Datagram Protocol (UDP) reflection attacks and synchronize (SYN) floods, are infrastructure layer attacks. An attacker can use either of these methods to generate large volumes of traffic that can inundate the capacity of a network or tie up resources on systems such as servers, firewalls, intrusion prevention systems (IPS), or load balancers. While these attacks can be easy to identify, to mitigate them effectively you must have a network or systems that can scale up capacity more rapidly than the inbound traffic flood. This extra capacity is necessary to either filter out or absorb the attack traffic, freeing up the system and application to respond to legitimate customer traffic.

UDP Reflection Attacks
User Datagram Protocol (UDP)
reflection attacks exploit the fact that UDP is a stateless protocol Attackers can craft a valid UDP request packet listing the attack target’s IP address as the UDP source IP address The attacker has now falsified —spoo fed—the UDP request packet’s source IP The UDP packet contain s the spoofed source IP and is sent by the attacker to an intermediate server The server is tricked into sending its UDP response packets to the targeted victim IP rather than back to the attac ker’s IP address The intermediate server is used because it generates a response that is several times larger than the request packet effectively amplifying the amount of attack traffic sent to the target IP address The amplification factor is the ratio of response size to request size and it varies depending on which protocol the attacker uses: DNS NTP SSDP CLDAP This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 4 Memcached CharGen or QOTD For example the amplification factor for DNS can be 28 to 54 times the original number of bytes So if an a ttacker sends a request payload of 64 bytes to a DNS server they can generate over 3400 bytes of unwanted traffic to an attack target UDP reflection attacks are accountable for larger volume of traffic in compar ison to other attacks The UDP Reflection A ttack f igure illustrates the reflection tactic and amplification effect UDP Reflection Attack SYN Flood Attacks When a user connects to a Transmission Control Protocol (TCP) service such as a web server their client sends a SYN synchronizatio n packet The server returns a SYN ACK packet in acknowledgement and finally the client responds with an acknowledgement (ACK) packet which completes the expected three way handshake The following image illustrates this typical handshake This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 5 SYN 3 way Handshake In a SYN flood attack a malicious client sends a large number of SYN packets but never sends the final ACK packets to complete the handshakes The server is left waiting for a response to the half open TCP connections and eventually runs out of capacity to accept new TCP connections This can prevent new users from connecting to the server The attack is trying to tie up available server connections so that resources are not available for legitimate connections While SYN floods can reach up to hundreds o f Gbps the purpose of the attack is not to increase SYN traffic volume Application Layer Attacks An attacker may target the application itself by using a layer 7 or application layer attack In these attacks similar to SYN flood infrastructure attacks the attacker attempts to overload specific functions of an application to make the application unavailable or unresponsive to legitimate users Sometimes this can be achieved with very low request volumes that generate only a small volume of network traffi c This can make the attack difficult to detect and mitigate Examples of application layer attacks include HTTP floods cache busting attacks and WordPress XML RPC floods In an HTTP flood attack an attacker sends HTTP requests that appear to be from a valid user of the web application Some HTTP floods target a 
specific resource while more complex HTTP floods attempt to emulate human interaction with the application This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 6 This can increase the difficulty of using common mitigation techniques like request ra te limiting Cache busting attacks are a type of HTTP flood that use variations in the query string to circumvent content delivery network (CDN) caching Instead of being able to return cached results the CDN must contact the origin server for every page request and these origin fetches cause additional strain on the application web server With a WordPress XML RPC flood attack also known as a WordPress pingback flood an attacker targets a website hosted on the WordPress content management software The attacker misuses the XML RPC API function to generate a flood of HTTP requests The pingback feature allows a website hosted on WordPress (Site A) to notify a different WordPress site (Site B) through a link that Site A has created to Site B Site B then attempts to fetch Site A to verify the existence of the link In a pingback flood the attacker misuses this capability to cause Site B to attack Site A This type of attack has a clear signature: WordPress is typically present in the User Agent of the HTT P request header There are other forms of malicious traffic that can impact an application’s availability Scraper bots automate attempts to access a web application to steal content or record competitive information such as pricing Brute force and credential stuffing attacks are programmed efforts to gain unauthorized access to secure areas of an application These are not strictly DDoS attacks; but their automated nature can look similar to a DDoS attack and they can be mitigated by implementi ng some of the same best practices covered in this paper Application layer attacks can also target Domain Name System (DNS) services The most common of these attacks is a DNS query flood in which an attacker uses many wellformed DNS queries to exhaust t he resources of a DNS server These attacks can also include a cache busting component where the attacker randomizes the subdomain string to bypass the local DNS cache of any given resolver As a result the resolver can’t take advantage of cached domain q ueries and must instead repeatedly contact the authoritative DNS server which amplifies the attack If a web application is delivered over Transport Layer Security ( TLS) an attacker can also choose to attack the TLS negotiation process TLS is computatio nally expensive so an attacker by generating extra workload on the server to process unreadable d ata (or unintelligible (ciphertext)) as a legitimate handshake can reduce server’s availability In a variation of this attack an attacker completes the TLS handshake but perpetually renegotiates the encryption method An attacker can alternatively attempt to exhaust server resources by opening and closing many TLS sessions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Prac tices for DDoS Resiliency 7 Mitigation Techniques Some forms of DDoS mitigation are included automatically with A WS services DDoS resilience can be improved further by using an AWS architecture 
with specific services covered in the following sections and by implementing additional best practices for each part of the network flow between users and your application All AWS customers can benefit from the automatic protections of AWS Shield Standard at no additional charge AWS Shield Standard defends against the most common and frequently occurring network and transport layer DDoS attacks that target your website or applications This protection is always on pre configured static and provides no reporting or analytics It is offered on all AWS services and in every AWS Region In AWS Regions DDoS attacks are detected and the Shield Standard system automatically ba selines traffic identifies anomalies and as necessary creates mitigations You can use AWS Shield Standard as part of a DDoS resilient architecture to protect both web and non web applications You can also utilize AWS services that operate from edge l ocations such as Amazon CloudFront AWS Global Accelerator and Amazon Route 53 to build comprehensive availability protection against all known infrastructure layer attacks These services are part of the AWS Global Edge Network and can improve the DDoS resilienc y of your application when serv ing any type of application traffic from edge locations distributed around the world You can run your application in any AWS Region and use these services to protect your application availability and optimize the pe rformance of your application for legitimate end users Benefits of using CloudFront AWS Global Accelerator and Amazon Route 53 include: • Access to internet and DDoS mitigation capacity across the AWS Global Edge Network This is useful in mitigating larg er volumetric attacks which can reach terabit scale • AWS Shield DDoS mitigation systems are integrated with AWS edge services reducing time tomitigate from minutes to sub second • Stateless SYN Flood mitigation techniques proxy and verify incoming connec tions before passing them to the protected service This ensures that only valid connections reach your application while protecting your legitimate end users against false positives drops This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 8 • Automatic traffic engineering systems that disperse or isolate the impact of large volumetric DDoS attacks All of these services isolate attacks at the source before they reach your origin which means less impact on systems protected by these services • Application layer defense when combined with AWS WAF that does not require changing current application architecture (for example in an AWS Region or on premises data center) There is no charge for inbound data transfer on AWS and you do not pay for DDoS attack traffic that is mitigated by AWS Shield The following architecture diagram includes AWS Global Edge Network services DDoS resilient reference architecture This architecture includes several AWS services that can help you improve your web application’s resiliency against DDoS attacks The Summary of Best Practices table provides a summary of these services and the capabilities that they can provide AWS has tagged each service with a best practice indicator (BP1 BP2 ) for easier reference within this document For example an upcoming section discusses the capabilities provided by CloudFront and Global Accelerator that includes the best practice 
indicator BP1 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 9 Summary of Best Practices Another way to improve your readiness to respond to and mitigate DDoS attacks is by subscribing to AWS Shield Advanced Customers receive tailored detection based on: AWS EDGE AWS REGION Using Amazon CloudFront (BP1) with AWS WAF (BP2) Using AWS Global Accelerator (BP1) Using Amazon Route 53 (BP3) Using Elastic Load Balancing (BP6) with AWS WAF (BP2) Using Security Groups and network ACLs in Amazon VPC (BP5) Using Amazon EC2 Auto Scaling (BP7) Layer 3 (for example UDP reflection) attack mitigation ✔ ✔ ✔ ✔ ✔ ✔ Layer 4 (for example SYN flood) attack mitigation ✔ ✔ ✔ ✔ Layer 6 (for example TLS) attack mitigation ✔ ✔ ✔ ✔ Reduce attack surface ✔ ✔ ✔ ✔ ✔ Scale to absorb application layer traffic ✔ ✔ ✔ ✔ ✔ ✔ Layer 7 (application layer) attack mitigation ✔ ✔(*) ✔ ✔ ✔(*) ✔(*) Geographic isolation and dispersion of excess traffic and larger DDoS attacks ✔ ✔ ✔ * If used with AWS WAF with AWS Application Load Balancer This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 10 • Specific traffic patterns of your application • Protection against Layer 7 DDoS attacks i ncluding AWS WAF at no additional cost • Access to 24x7 specialized support from the AWS SRT • Centralized management of security policies through AWS Firewall manager • Cost protection to safeguard against scaling charges resulting from DDoS related usage sp ikes This optional DDoS mitigation service helps protect application s hosted on any AWS Region The service is available globally for CloudFront Amazon Route 53 and Global Accelerato r Using AWS Shield Advanced with Elastic IP addresse s allows you to protect Network Load Balancer (NLBs) or Amazon EC2 instances Benefits of using AWS Shield Advanced include : • Access to the AWS SRT for assistance with mitigating DDoS attacks that impact application availability • DDoS attack visibility by using the AWS Management Console API and Amazon CloudWatch metrics and alarms • Access to the history of all DDoS events from the past 13 months • Access to AWS web application firewall (WAF) at no additional cost for the mitigation of application layer DDoS attacks (wh en used with CloudFront or Application Load Balancer) • Automatic baselining of web traffic attributes when used with AWS WAF • Access to AWS Firewall Manager at no additional cost for automated policy enforcement • Sensitive detection thresholds that rout e traffic into the DDoS mitigation system earlier and can improve time tomitigate attacks against Amazon EC2 or Network Load Balancer when used with an Elastic IP address • Cost protection that enables you to request a limited refund of scaling related costs that result from a DDoS attack • Enhanced service level agreement that is specific to AWS Shield Advanced customers • Proactive engagement from the AWS SRT when a Shield event is detected This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 11 • Protection groups that 
enable you to bundle resources providing a selfservice way to customize the scope of detection and mitigation for your application by treating multiple resources as a single unit Resource grouping improves the accuracy of detection minimizes false positives eases automatic protection of newly created resources and accelerates the time to mitigate attacks against many resources that comprise a single application For information about protection groups see Shield Advanced protection groups For a complete list of AWS Shield Advanced features and for more information about AWS Shield refer to How AWS Shield works Best Practices for DDoS Mitigation In the following sections each of the recommended best practices for DDoS mitigation are described in more depth For a quick and easy toimplement guide on building a DDoS mitigation layer for static or dynamic web applications see How to Help Protect Dynamic Web Applications Aga inst DDoS Attacks Infrastructure Layer Defense (BP1 BP3 BP6 BP7) In a traditional data center environment you can mitigate infrastructure layer DDoS attacks by using techniques such as overprovisioning capacity deploying DDoS mitigation systems o r scrubbing traffic with the help of DDoS mitigation services On AWS DDoS mitigation capabilities are automatically provided; but you can optimize your application’s DDoS resilience by making architecture choices that best leverage those capabilities and also allow you to scale for excess traffic Key considerations to help mitigate volumetric DDoS attacks include ensuring that enough transit capacity and diversity are available and protecting AWS resources like Amazon EC2 instances against attack traff ic Some Amazon EC2 instance types support features that can more easily handle large volumes of traffic for example up to 100 Gbps network bandwidth interfaces and enhanced networking This helps prevent interface congestion for traffic that has reached the Amazon EC2 instance Instances that support enhanced networking provide higher I/O performance higher bandwidth and lower CPU utilization compared to traditional implementations This improves the ability of the instance to handle large volumes of t raffic and ultimately makes them highly resilient against packets per second (pps) load This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 12 To allow this high level of resilience AWS recommend s using Amazon EC2 Dedicated Instances or EC2 instances with higher networking throughput that have an N suffix an d support for Enhanced Networking with up to 100 Gbps of Network bandwidth for example c6gn16xlarge and c5n18xlarg e or metal instances (such as c5nmetal) For more information about Amazon EC2 instances that support 100 Gigabit network interfaces and enhanced networking see Amazon EC2 Instance Types The module required for enhanced networking and the required enaSupport attribute set are included with Amazon Linux 2 and the latest versions of the Amazon Linux AMI Therefore if you launch an instance with an HVM version of Amazon Linux on a supported instance type enhanced networking is already enabled for your instance For more information see Test whether enhanced networking is enabled For more information about how to enable enhanced networking see Enhanced networking on Linux Amazon EC2 with Auto Scaling (BP7) Another way to mitigate both 
infrastructure and application layer attacks is to operate at scale If you have web applications you can use load balancers to distribute traffic to a number of Amazon EC2 instances that are overprovisioned or configured to automatically scale These instances can handle sudden traffic surges that occur for any reason including a flash crowd or an application layer DDoS attack You can set Amazon CloudWatch alarms to initiate Auto Scaling to automatically scale the size of your Amazon EC2 fleet in response to events that you define such as CPU RAM Network I/O and even Custom metrics This approach protects applic ation availability when there’s an unexpected increase in request volume When using CloudFront Application Load Balancer Classic Load Balancers or Network Load Balancer with your application TLS negotiation is handled by the distribution (CloudFront) or by the load balancer Th ese features help protect your instances from being impacted by TLS based attacks by scaling to handle legitimate requests and TLS abuse attacks For more information about using Amazon CloudWatch to invoke Auto Scaling see Monitoring CloudWatch metrics for your Auto Scaling groups and instances Amazon EC2 provides resizable compute capacity so that you can quickly scale up or down as requirements change You can scale horizontally by automatically adding instances to your application by Scaling the size of your Auto Scaling group and you can scale vertically by using larger EC2 instance types This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 13 Elastic Load Balancing (BP6) Large DDoS attacks can overwhelm the capacity of a single Amazon EC2 instanc e With Elastic Load Balancing (ELB) you can reduce the risk of overloading your application by distributing traffic across many backend instances Elastic Load Balancing can scale automatically allowing you to manage larger volumes when you have unanticip ated extra traffic for example due to flash crowds or DDoS attacks For applications built within an Amazon VPC there are three types of Elastic Load Balancing to consider depending on your application type: Application Load Balancer (ALB) Classic Load Balancer (CLB) and Network Load Balancer For web applications you can use the Application Load Balancer to route traffic based on content and accept only well formed web requests Application Load Balancer blocks many common DDoS attacks such as SYN fl oods or UDP reflection attacks protecting your application from the attack Application Load Balancer automatically scales to absorb the additional traffic when these types of attacks are detected Scaling activities due to infrastructure layer attacks are transparent for AWS customers and do not affect your bill For more information about protecting web applications with Application Load Balancer see Getting started with Application Load Balancers For TCP based applications you can use Network Load Balancer to route traffic to targets ( for example Amazon EC2 instances) at ultra low latency One key consideration with Network Load Balancer is that any traffic that reaches the load balancer on a valid listener will be routed to your targets not absorbed You can use AWS Shield Advanced to configure DDoS protection for Elastic IP addresses When an Elastic IP address is assigned per Availability Zone to the Network Load Balancer AWS 
Shield Advanced will apply the relevant DDoS protections for the Network Load Balancer traffic For more information about protecting TCP applications with Network Load Balancer see Getting started with Network Load Balancers Leverage AWS Edge Locations for Scale (BP1 BP3) Access to highly scaled diverse internet connections can significantly increase your ability to optimize latency and throughput to users absorb DDoS attacks and isolate faults while minimizing the impact on your application’s availability AWS edge locations provide an additional layer of netw ork infrastructure that provides these benefits to any application that uses CloudFront Global Accelerator and Amazon Route 53 With these This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 14 services you can comprehensively protect on the edge your applications running from AWS Regions Web Application D elivery at the Edge (BP1) CloudFront is a service that can be used to deliver your entire website including static dynamic streaming and interactive content Persistent connections and variable time tolive (TTL) settings can be used to offload traffic from your origin even if you are not serving cacheable content Use of these CloudFront features reduces the number of requests and TCP connections back to your origin helping protect your web application from HTTP floods CloudFront only accepts well formed connections which helps prevent many common DDoS attacks such as SYN floods and UDP reflection attacks from reaching your origin DDoS attacks are also geographically isolated close to the source which prevents the traffic from impacting other loc ations These capabilities can greatly improve your ability to continue serving traffic to users during large DDoS attacks You can use CloudFront to protect an origin on AWS or elsewhere on the internet If you’re using Amazon S3 to serve static content o n the internet AWS recommends you use CloudFront to protect your bucket You can use origin access identify (OAI) to ensure that users only access your objects by using CloudFront URLs For more information about OAI see Restricting access to Amazon S3 content by using an origin access identity (OAI) For more information about protecting and optimizing the performance of web applications with CloudFront see Getting started with Amazon CloudFront Protect network traffic further from your origin using AWS Global Accelerator (BP1) Global Accelerator is a networking service that i mproves availability and performance of users’ traffic by up to 60% This is accomplished by ingressing traffic at the edge location closest to your users and routing it over the AWS global network infrastructure to your application whether it runs in a single or multiple AWS Regions Global Accelerator routes TCP and UDP traffic to the optimal endpoint based on performance in the closest AWS Region to the user If there is an application failure Global Accelerator provides failover to the next best endpoint within 30 seconds Global Accelerator uses the vast capacity of the AWS global network and integrations with AWS Shield such as a stateless SYN proxy capability that challenges new connection attempts and only serves legitimate end users to protect applications This version has been archived For the latest version of this document visit: 
https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 15 You can implement a DDoS resilient architecture that provides many of the same benefits as the Web Application Delivery at the Edge best practice s even if your application uses protocols not supported by CloudFront or you are operating a web application that requires global static IP addresses For example you may require IP addresses that your end users can add to the allow list in their firewalls and are not used by any other AWS customers In these scenarios y ou can use Global Accelerator to protect web application s running on Application Load Balancer and in conjunction with AWS WAF to also detect and mitigate web application layer request floods For more information about protecting and optimizing the performance of network traffic using Global Accelerator see Getting started with AWS Global Accelerator Domain Name Resolution at the Edge (BP3) Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that can be used to direct traffic to your web application It includes advanced features like Traffic Flow Health Checks and Monitoring Latency Based Routing and Geo DNS These advanced features allow you to control how the service responds to DNS requests to improve the performance of your web application and to avoid site outages Amazon Route 53 uses techniques like shuffle sharding and anycast striping that can help users access your application even if the DNS service is targeted by a DDoS attack With shuffle sharding each name server in your delegation set corresponds to a uniq ue set of edge locations and internet paths This provides greater fault tolerance and minimizes overlap between customers If one name server in the delegation set is unavailable users can retry and receive a response from another name server at a differ ent edge location Anycast striping allows each DNS request to be served by the most optimal location dispers ing the network load and reducing DNS latency This provides a faster response for users Additionally Amazon Route 53 can detect anomalies in th e source and volume of DNS queries and prioritize requests from users that are known to be reliable For more information about using Amazon Route 53 to route users to your application see Getting Started with Amazon Route 53 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 16 Application Layer Defense (BP1 BP2) Many of the techniques discussed so far in this paper are effective at mitigating the impact that infrastructure layer DDoS attacks have on your application’s availability To also defend against application layer attacks you need to implement an architecture that allows you to specifically detect scale to absorb and block malicious requests This is an important consideration because network based DDoS mitigation systems are generally ineffective at mitigating complex application layer attacks Detect and Filter Malicious Web Requests (BP1 BP2) When your application runs on AWS you can leverage both CloudFront and AWS WAF to help defend against application layer DDoS attacks CloudFront allows you to cache static content and serve it from AWS edge locations which can help reduce the load on your ori gin It can also help 
reduce server load by preventing non-web traffic from reaching your origin. Additionally, CloudFront can automatically close connections from slow-reading or slow-writing attackers (for example, Slowloris).

By using AWS WAF, you can configure web access control lists (web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Each web ACL consists of rules that you can configure to string match or regex match one or more request attributes, such as the Uniform Resource Identifier (URI), query string, HTTP method, or header key. In addition, by using AWS WAF's rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define. Requests from offending client IP addresses receive 403 Forbidden error responses and remain blocked until request rates drop below the threshold. This is useful for mitigating HTTP flood attacks that are disguised as regular web traffic.
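The following boto3 sketch is a minimal illustration of such a rate-based rule. It uses the current-generation wafv2 API rather than the classic AWS WAF API, the names and the 2,000-request threshold are placeholders, and the resulting web ACL still has to be associated with your CloudFront distribution or Application Load Balancer before it takes effect.

import boto3

# Web ACLs with CLOUDFRONT scope must be created in the us-east-1 Region.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="rate-limit-acl",                    # placeholder name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},              # allow anything not matched by a rule
    Rules=[
        {
            "Name": "block-http-floods",
            "Priority": 1,
            "Statement": {
                # Block source IPs exceeding 2,000 requests per 5-minute window.
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "BlockHttpFloods",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitAcl",
    },
)

For a CloudFront distribution, the association is made by referencing the web ACL ARN in the distribution configuration; for an Application Load Balancer, a regional web ACL and an explicit association are used instead.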
To block attacks based on IP address reputation, you can create rules using IP match conditions or use Managed Rules for AWS WAF offered by sellers in the AWS Marketplace. AWS WAF also directly offers AWS Managed Rules as a managed service, where you can choose IP reputation rule groups. The Amazon IP reputation list rule group contains rules that are based on Amazon internal threat intelligence; this is useful if you would like to block IP addresses typically associated with bots or other threats. The Anonymous IP list rule group contains rules to block requests from services that allow the obfuscation of viewer identity, including requests from VPNs, proxies, Tor nodes, and cloud platforms (including AWS).

Both AWS WAF and CloudFront also enable you to set geo restrictions to block or allow requests from selected countries. This can help block attacks originating from geographic locations where you do not expect to serve users.

To help identify malicious requests, review your web server logs or use AWS WAF's logging and Sampled Requests features. By enabling AWS WAF logging, you get detailed information about the traffic analyzed by the web ACL. AWS WAF supports log filtering, allowing you to specify which web requests are logged and which requests are discarded from the log after inspection. Information recorded in the logs includes the time that AWS WAF received the request from your AWS resource, detailed information about the request, and the rule action that matched. Sampled Requests provide details about requests within the past three hours that matched one of your AWS WAF rules. You can use this information to identify potentially malicious traffic signatures and create a new rule to deny those requests. If you see a number of requests with a random query string, make sure to allow only the query string parameters that are relevant to cache for your application. This technique is helpful in mitigating a cache-busting attack against your origin.

If you are subscribed to AWS Shield Advanced, you can engage the AWS Shield Response Team (SRT) to help you create rules to mitigate an attack that is hurting your application's availability. You can grant the SRT limited access to your account's AWS Shield Advanced and AWS WAF APIs; the SRT accesses these APIs to place mitigations on your account only with your explicit authorization. For more information, see the Support section of this document.
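Granting that limited access is done by associating an IAM role that the SRT can assume. The sketch below assumes such a role already exists with the AWS managed AWSShieldDRTAccessPolicy attached and that the account has an active Shield Advanced subscription; the ARNs and bucket name are placeholders.

import boto3

shield = boto3.client("shield")

# Placeholder role ARN; the role must trust the Shield Response Team service
# principal and carry the AWSShieldDRTAccessPolicy managed policy.
SRT_ROLE_ARN = "arn:aws:iam::111122223333:role/example-srt-access-role"

# Authorize the SRT to use Shield Advanced and AWS WAF APIs on your behalf.
shield.associate_drt_role(RoleArn=SRT_ROLE_ARN)

# Optionally point the SRT at an S3 bucket holding your AWS WAF or flow logs.
shield.associate_drt_log_bucket(LogBucket="example-waf-logs-bucket")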
You can use AWS Firewall Manager to centrally configure and manage security rules, such as AWS Shield Advanced protections and AWS WAF rules, across your organization. Your AWS Organizations management account can designate an administrator account, which is authorized to create Firewall Manager policies. These policies allow you to define criteria, such as resource type and tags, that determine where rules are applied. This is useful when you have multiple accounts and want to standardize your protection.

For more information about:
• AWS Managed Rules for AWS WAF, see AWS Managed Rules for AWS WAF
• Using geo restriction to limit access to your CloudFront distribution, see Restricting the geographic distribution of your content
• Using AWS WAF, see Getting started with AWS WAF, Logging web ACL traffic information, and Viewing a sample of web requests
• Configuring rate-based rules, see Protect Web Sites & Services Using Rate-Based Rules for AWS WAF
• Managing the deployment of AWS WAF rules across your AWS resources with AWS Firewall Manager, see Getting started with AWS Firewall Manager AWS WAF policies and Getting started with AWS Firewall Manager AWS Shield Advanced policies

Attack Surface Reduction
Another important consideration when architecting an AWS solution is to limit the opportunities an attacker has to target your application. This concept is known as attack surface reduction. Resources that are not exposed to the internet are more difficult to attack, which limits the options an attacker has to target your application's availability. For example, if you do not expect users to interact directly with certain resources, make sure that those resources are not accessible from the internet. Similarly, do not accept traffic from users or external applications on ports or protocols that aren't necessary for communication. In the following sections, AWS provides best practices to guide you in reducing your attack surface and limiting your application's internet exposure.

Obfuscating AWS Resources (BP1, BP4, BP5)
Typically, users can quickly and easily use an application without requiring that AWS resources be fully exposed to the internet. For example, when you have Amazon EC2 instances behind an Elastic Load Balancing load balancer, the instances themselves might not need to be publicly accessible. Instead, you could provide users with access to the load balancer on certain TCP ports and allow only the load balancer to communicate with the instances. You can set this up by configuring security groups and network access control lists (network ACLs) within your Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC allows you to provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. Security groups and network ACLs are similar in that they allow you to control access to AWS resources within your VPC, but security groups control inbound and outbound traffic at the instance level, while network ACLs offer similar capabilities at the VPC subnet level. There is no additional charge for using security groups or network ACLs.

Security Groups and Network Access Control Lists (Network ACLs) (BP5)
You can choose whether to specify security groups when you launch an instance, or associate the instance with a security group at a later time. All internet traffic to a security group is implicitly denied unless you create an allow rule to permit the traffic. For example, if you have a web application that uses a load balancer and multiple Amazon EC2 instances, you might decide to create one security group for the load balancer (Elastic Load Balancing security group) and one for the instances (web application server security group). You can then create an allow rule to permit internet traffic to the Elastic Load Balancing security group and another rule to permit traffic from the Elastic Load Balancing security group to the web application server security group. This ensures that internet traffic can't communicate directly with your Amazon EC2 instances, which makes it more difficult for an attacker to learn about and impact your application.
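A minimal boto3 sketch of that two-security-group pattern follows. The VPC ID, group names, and ports are placeholders, and a real deployment would also need rules for health checks and any additional listener ports.

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder VPC

# Security group for the load balancer: open to the internet on HTTPS.
elb_sg = ec2.create_security_group(
    GroupName="elb-sg", Description="Internet-facing load balancer", VpcId=VPC_ID
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=elb_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Security group for the web servers: accepts traffic only from the load
# balancer group, so instances are never directly reachable from the internet.
web_sg = ec2.create_security_group(
    GroupName="web-app-sg", Description="Web application servers", VpcId=VPC_ID
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": elb_sg}],
    }],
)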
When you create network ACLs, you can specify both allow and deny rules. This is useful if you want to explicitly deny certain types of traffic to your application. For example, you can define IP addresses (as CIDR ranges), protocols, and destination ports that are denied access to the entire subnet. If your application is used only for TCP traffic, you can create a rule to deny all UDP traffic, or vice versa. This option is useful when responding to DDoS attacks because it lets you create your own rules to mitigate the attack when you know the source IPs or other signature.

If you are subscribed to AWS Shield Advanced, you can register Elastic IP addresses as Protected Resources. DDoS attacks against Elastic IP addresses that have been registered as Protected Resources are detected more quickly, which can result in a faster time to mitigate. When an attack is detected, the DDoS mitigation systems read the network ACL that corresponds to the targeted Elastic IP address and enforce it at the AWS network border. This significantly reduces your risk of impact from a number of infrastructure layer DDoS attacks.

For more information about configuring security groups and network ACLs to optimize for DDoS resiliency, see How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface. For more information about using AWS Shield Advanced with Elastic IP addresses as Protected Resources, see the steps to Subscribe to AWS Shield Advanced.

Protecting Your Origin (BP1, BP5)
If you are using CloudFront with an origin that is inside your VPC, you may want to ensure that only your CloudFront distribution can forward requests to your origin. With edge-to-origin request headers, you can add or override the value of existing request headers when CloudFront forwards requests to your origin. You can use origin custom headers (for example, an X-Shared-Secret header) to help validate that the requests made to your origin were sent from CloudFront. For more information about protecting your origin with origin custom headers, see Adding custom headers to origin requests and Restricting access to Application Load Balancers. For a guide on implementing a sample solution that automatically rotates the value of origin custom headers for the origin access restriction, see How to enhance Amazon CloudFront origin security with AWS WAF and AWS Secrets Manager.

Alternatively, you can use an AWS Lambda function to automatically update your security group rules to allow only CloudFront traffic. This improves your origin's security by helping to ensure that malicious users cannot bypass CloudFront and AWS WAF when accessing your web application. For more information about how to protect your origin by automatically updating your security groups, see How to Automatically Update Your Security Groups for Amazon CloudFront and AWS WAF by Using AWS Lambda.

Protecting API Endpoints (BP4)
Typically, when you must expose an API to the public, there is a risk that the API frontend could be targeted by a DDoS attack. To help reduce the risk, you can use Amazon API Gateway as an entryway to applications running on Amazon EC2, AWS Lambda, or elsewhere. By using Amazon API Gateway, you don't need your own servers for the API frontend, and you can obfuscate other components of your application. By making it harder to detect your application's components, you can help prevent those AWS resources from being targeted by a DDoS attack.

When you use Amazon API Gateway, you can choose from two types of API endpoints. The first is the default option: edge-optimized API endpoints that are accessed through a CloudFront distribution. The distribution is created and managed by API Gateway, however, so you don't have control over it. The second option is a regional API endpoint that is accessed from the same AWS Region in which your REST API is deployed. AWS recommends that you use the second type of endpoint and associate it with your own CloudFront distribution. This gives you control over the CloudFront distribution and the ability to use AWS WAF for application layer protection, and it provides you with access to scaled DDoS mitigation capacity across the AWS Global Edge Network.

When using CloudFront and AWS WAF with Amazon API Gateway, configure the following options:
• Configure the cache behavior for your distributions to forward all headers to the API Gateway regional endpoint. By doing this, CloudFront treats the content as dynamic and skips caching it.
• Protect your API Gateway against direct access by configuring the distribution to include the origin custom header x-api-key, setting the API key value in API Gateway.
• Protect the backend from excess traffic by configuring standard or burst rate limits for each method in your REST APIs.

For more information about creating APIs with Amazon API Gateway, see Amazon API Gateway Getting Started.

Operational Techniques
The mitigation techniques in this paper help you architect applications that are inherently resilient against DDoS attacks. In many cases, it's also useful to know when a DDoS attack is targeting your application so you can take mitigation steps. This section discusses best practices for gaining visibility into abnormal behavior, alerting and automation, managing protection at scale, and engaging AWS for additional support.
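As a concrete, hedged illustration of the alerting side of these operational techniques, the following Python (boto3) sketch creates a CloudWatch alarm on the Shield Advanced DDoSDetected metric discussed in the Visibility section that follows. The resource ARN and SNS topic are placeholders, the metric is published only for resources protected by Shield Advanced, and the sketch assumes the metric is available in the Region where the client is created.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Placeholder ARNs for a protected resource and an SNS topic for notifications.
RESOURCE_ARN = "arn:aws:cloudfront::111122223333:distribution/EXAMPLE123"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ddos-alerts"

# Alarm whenever Shield Advanced reports a DDoS event for the protected resource.
cloudwatch.put_metric_alarm(
    AlarmName="ddos-detected-cloudfront",
    Namespace="AWS/DDoSProtection",
    MetricName="DDoSDetected",
    Dimensions=[{"Name": "ResourceArn", "Value": RESOURCE_ARN}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)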
Visibility When a key operational metric deviates substantially from the expected value an attacker may be attempting to target your application’s availability Familiarity with the This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 22 normal behavior of your application means you can tak e action more quickly when you detect an anomaly Amazon CloudWatch can help by monitoring applications that you run on AWS For example you can collect and track metrics collect and monitor log files set alarms and automatically respond to changes in your AWS resources If you follow the DDoS resilient reference architecture when architecting your application common infrastructure layer attacks will be blocked before reaching your application If you are subscribed to AWS Shield Advanced you have acc ess to a number of CloudWatch metrics that can indicate that your application is being targeted For example you can configure alarms to notify you when there is a DDoS attack in progress so you can check your application’s health and decide whether to e ngage AWS SRT You can configure the DDoSDetected metric to tell you if an attack has been detected If you want to be alerted based on the attack volume you can also use the DDoSAttackBitsPerSecond DDoSAttackPacketsPerSecond or DDoSAttackRequestsPerSec ond metrics You can monitor these metrics by integrating Amazon CloudWatch with your own tools or by using tools provided by third parties such as Slack or PagerDuty An application layer attack can elevate many Amazon CloudWatch metrics If you’re using AWS WAF you can use CloudWatch to monitor and activate alarm s on increases in requests that you’ve set in AWS WAF to be allowed counted or blocked This allows you to receive a notification if the level of traffic exceeds what your application can handle You can also use CloudFront Amazon Route 53 Application Load Balancer Network Load Balancer Amazon EC2 and Auto Scaling metrics that are tracked in CloudWatch to detect changes that can indicate a DDoS attack The Recommended Amazon CloudWatch Metrics table lists description s of Amazon CloudWatch metrics that are commonly used to detect and react to DDoS attacks Recommended Amazon CloudWatch Metrics Topic Metric Description AWS Shield Advanced DDoSDetected Indicates a DDoS event for a specific Amazon Resource Name (ARN) AWS Shield Advanced DDoSAttackBitsPerSecond The number of bytes observed during a DDoS event for a specific ARN This metric is only available for layer 3/4 DDoS events This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 23 Topic Metric Description AWS Shield Advanced DDoSAttackPacketsPerSecond The number of packets observed during a DDoS event for a specific ARN This metric is only available for layer 3/4 DDoS events AWS Shield Advanced DDoSAttackRequestsPerSecond The number of requests observed during a DDoS event for a specific ARN This metric is only available for layer 7 DDo S events and is only reported for the most significant layer 7 events AWS WAF AllowedRequests The number of allowed web requests AWS WAF BlockedRequests The number of blocked web requests AWS WAF CountedRequests The number of counted web 
requests AWS WAF PassedRequests The number of passed requests This is only used for requests that go through a rule group evaluation without matching any of the rule group rules CloudFront Requests The number of HTTP/S requests CloudFront TotalErrorRate The percentage of all requests for which the HTTP status code is 4xx or 5xx Amazon Route 53 HealthCheckStatus The status of the health check endpoint ALB ActiveConnectionCount The total number of concurrent TCP connections that are active from clients to the load balancer and from the load balancer to targets ALB ConsumedLCUs The number of load balancer capacity units (LCU) used by your load balancer ALB HTTPCode_ELB_4XX_Count HTTPCode_ELB_5XX_Count The number of HTTP 4xx or 5xx client error codes generated by the load balancer This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 24 Topic Metric Description ALB NewConnectionCount The total number of new TCP connections established from clients to the load balancer and from the load balancer to targets ALB ProcessedBytes The total number of bytes proce ssed by the load balancer ALB RejectedConnectionCount The number of connections rejected because the load balancer reached its maximum number of connections ALB RequestCount The number of requests that were processed ALB TargetConnectionErrorCount The number of connections that were not successfully established between the load balancer and the target ALB TargetResponseTime The time elapsed in seconds after the request le aves the load balancer until a response from the target is received ALB UnHealthyHostCount The number of targets that are considered unhealthy NLB ActiveFlowCount The total number of concurrent TCP flows (or connections) from clients to targets NLB ConsumedLCUs The number of load balancer capacity units (LCU) used by your load balancer NLB NewFlowCount The total number of new TCP flows (or connections) established from clients to targets in the time period NLB ProcessedBytes The total number of bytes processed by the load balancer including TCP/IP headers Global Accelerator NewFlowCount The total number of new TCP and UDP flows (or connections) established from clients to endpoints in the time period This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 25 Topic Metric Description Global Accelerator ProcessedBytesIn The total number of incoming byte s processed by the accelerator including TCP/IP headers Auto Scaling GroupMaxSize The maximum size of the Auto Scaling group Amazon EC2 CPUUtilization The percentage of allocated EC2 compute units that are currently in use Amazon EC2 NetworkIn The number of bytes received by the instance on all network interfaces For more information about using Amazon CloudWatch to detect DDoS attacks on your application see Getting Started with Amazon CloudWatch To explore an example of a dash board built using some of the metrics from the preceding table see A custom baseline monitoring system AWS includes several additional metrics and alarms to notify you about an attack and to help you monitor your application’s resources The AWS Shield conso le or API provide a peraccount event summary and details about attacks 
that have been detected In addition the global threat environment dashboard provides summary information about Figure 1: Global threat environment dashboard Global threat environment dashboard This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 26 all DDoS attacks that have been detected by AWS This information may b e useful to better understand DDoS threats across a larger population of applications in addition to attack trends and comparing with attacks that you may have observed If you are subscribed to AWS Shield Advanced the service dashboard displays additio nal detection and mitigation metrics and network traffic details for events detected on protected resources AWS Shield evaluates traffic to your protected resource along multiple dimensions When an anomaly is detected AWS Shield creates an event and rep orts the traffic dimension where the anomaly was observed With a placed mitigation this protects your resource from receiving excess traffic and traffic that matches a known DDoS event signature Detection metrics are based on sampled network flows or AWS WAF logs when a web ACL is associated with the protected resource Mitigation metrics are based on traffic that's observed by Shield’s DDoS mitigation systems Mitigation metrics are a more precise measurement of the traffic into your resource The networ k top contributors metric provides insight into where traffic is coming from during a detected event You can view the highest volume contributors and sort by aspects such as protocol source port and TCP flags The top contributors metric includes metric s for all traffic observed on the resource along various dimensions It provides additional metric dimensions you can use to understand network traffic that’s sent to your resource during an event The service dashboard also includes details about the act ions automatically taken to mitigate DDoS attacks This information makes it easier to investigate anomalies explore dimensions of the traffic and better understand the actions taken by AWS Shield Advanced to protect your availability Another tool that can help you gain visibility into traffic that is targeting your application is VPC Flow Logs On a traditional network you might use network flow logs to troubleshoot connectivity and security issues and to make sure that network access rules are working as expected By using VPC Flow Logs you can capture information about the IP traffic that is going to and from network interfaces in your VPC Each flow log record includes the following: source and destination IP addresses source and destination ports protocol and the number of packets and bytes transferred during the capture window You can use this information to help identify anomalies in network traffic and to identify a specific attack vector For example most UDP reflection attacks have specific source ports such as source port 53 for DNS reflection This is a clear attack signature that you can identify in the flow log record In resp onse you might This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS R esiliency 27 choose to block the specific source port at the instance level or create a network ACL rule to block 
the entire protocol if your application doesn’t require it For more information about using VPC Flow Logs to identify network anomalies an d DDoS attack vectors see VPC Flow Logs and VPC Flow Logs – Log and View Network Traffic Flows Visibility and protection management across multiple accounts In scenarios when you operate across multiple AWS accounts and have multiple components to protect using techniques that enable you to operate at scale and reduce operational overhead increase your mitigation capabilities When managing AWS Shield Advanced protected resources in multiple accounts you can se t up centralized monitoring by using AWS Firewall Manager and AWS Security Hub With Firewall Manager you ca n create a security policy that enforces DDoS protection compliance across all your accounts You can use these two services together to manage your protected resources across multiple accounts and centralize the monitoring of those resources Security Hub automatically integrates with Firewall Manager allowing AWS Shield Advanced customers to view security findings in a single dashboard alongside other high priority security alerts and compliance statuses For instance when AWS Shield Advanc ed detects anomalous traffic destined for a protected resource in any AWS account within the scope this finding will be visible in the Security Hub console If configured Firewall Manager can automatically bring the resource into compliance by creating i t as a n AWS Shield Advanced –protected resource and then update Security Hub when the resource is in a compliant state This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 28 For more information about central monitoring of AWS Shield protected resources see Set up centralized monitoring for DDoS events and auto remediate noncompliant resources Support If you experience an attack you can also benefit from support from AWS in assessing the threat and reviewing the architecture of your application or you might want to request other assistance It is important to create a response plan for DDoS attacks before an actual event The best practices outlined in this paper are intended to be proactive measures that you implement before you launch an application but DDoS attacks against your application might still occur Review the options in this section to determine the support resources that are best suited for your scenario Your account Figure 2: Architecture diagram for monitoring Shield protected resources with Firewall Manager and Security Hub Monitoring AWS Shield protected resources with Firewall Manager and Security Hub architecture diagram This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsbest practicesddosresiliency/awsbestpracticesddosresiliencyhtmlAmazon Web Services AWS Best Practices for DDoS Resiliency 29 team can evaluate your use case and application and assist with specific questions or challenges that you have If you’re running production workloads on AWS consider subscribing to Business Support which provides you with 24 /7 access to Cloud Support Engineers who can assist with DDoS attack issues If you’re running mission critical workloads consider Enterprise Support which provides the ability to open critical cases and receive the fastest response from a Senior Cloud Support 
Engineer.

If you are subscribed to AWS Shield Advanced and are also subscribed to either Business Support or Enterprise Support, you can configure AWS Shield proactive engagement. It allows you to configure health checks associated with your resources and provide 24/7 operations contact information. When AWS Shield detects signs of DDoS and your application health checks show signs of degradation, the AWS SRT will proactively reach out to you. This is our recommended engagement model because it allows for the quickest AWS SRT response times and empowers the AWS SRT to begin troubleshooting even before contact has been established with you.

The proactive engagement feature requires you to configure an Amazon Route 53 health check that accurately measures the health of your application and is associated with the resource protected by AWS Shield Advanced. Once a Route 53 health check is associated in the AWS Shield console, the AWS Shield Advanced detection system uses the health check status as an indicator of your application's health. AWS Shield Advanced's health-based detection feature ensures that you are notified and that mitigations are placed more quickly when your application is unhealthy. The AWS SRT will contact you to troubleshoot whether the unhealthy application is being targeted by a DDoS attack and place additional mitigations as needed.

Completing configuration of proactive engagement includes adding contact details in the AWS Shield console; the AWS SRT will use this information to contact you. You can configure up to 10 contacts and provide additional notes if you have any specific contact requirements or preferences. Proactive engagement contacts should hold a 24/7 role, such as a security operations center, or be an individual who is immediately available. You can enable proactive engagement for all resources or for select key production resources where response time is critical; this is accomplished by assigning health checks only to these resources.

You can also escalate to the AWS SRT by creating an AWS Support case, using the AWS Support console or Support API, if you have a DDoS-related event that affects your application's availability.

Conclusion

The best practices outlined in this paper can help you build a DDoS-resilient architecture that protects your application's availability by preventing many common infrastructure and application layer DDoS attacks. The extent to which you follow these best practices when you architect your application will influence the type, vector, and volume of DDoS attacks that you can mitigate. You can incorporate resiliency without subscribing to a DDoS mitigation service. By choosing to subscribe to AWS Shield Advanced, you gain additional support, visibility, mitigation, and cost protection features that further protect an already resilient application architecture.

Contributors

The following individuals and organizations contributed to this document:
• Jeffrey Lyon, AWS Perimeter Protection
• Rodrigo Ferroni, AWS Security Specialist TAM
• Dmitriy Novikov, AWS Solutions Architect
• Achraf Souk, AWS Solutions Architect
• Yoshihisa Nakatani, AWS Solutions Architect

Further Reading

For additional information, see:
• Best Practices for DDoS Mitigation on AWS
• Guidelines for Implementing AWS WAF
• SID324 – re:Invent 2017:
Automating DDoS Response in the Cloud
• CTD304 – re:Invent 2017: Dow Jones & Wall Street Journal's Journey to Manage Traffic Spikes While Mitigating DDoS & Application Layer Threats
• CTD310 – re:Invent 2017: Living on the Edge, It's Safer Than You Think! Building Strong with Amazon CloudFront, AWS Shield, and AWS WAF
• SEC407 – re:Invent 2019: A defense-in-depth approach to building web applications
• SEC321 – re:Invent 2020: Get ahead of the curve with DDoS Response Team escalations
• William Hill: High Performance DDoS Protection with AWS

Document revisions

Date | Description
September 21, 2021 | Updated to include the latest recommendations and features. AWS Global Accelerator is added as part of comprehensive protection at the edge, and AWS Firewall Manager is added for centralized monitoring of DDoS events and automatic remediation of noncompliant resources.
December 2019 | Updated to clarify cache busting in the Detect and Filter Malicious Web Requests (BP1, BP2) section and Elastic Load Balancing and Application Load Balancer usage in the Scale to Absorb (BP6) section. Updated diagrams and Table 2, marked "Choice of Region" as BP8, and updated the BP7 section with more details.
December 2018 | Updated to include AWS WAF logging as a best practice.
June 2018 | Updated to include AWS Shield and AWS WAF features, AWS Firewall Manager, and related best practices.
June 2016 | Added prescriptive architecture guidance and updated to include AWS WAF.
June 2015 | Whitepaper published.
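The metrics listed earlier in this paper can be turned into notifications with a small amount of automation. The following minimal sketch, which is not part of the original guidance, shows one way to create an Amazon CloudWatch alarm on the Shield Advanced DDoSDetected metric using the AWS SDK for Python (boto3). It assumes a subscription to AWS Shield Advanced (the metric is published only for protected resources) and an existing Amazon SNS topic for alerts; the ARNs, Region, and alarm name shown are placeholders.

import boto3

# Placeholders: replace with your protected resource and notification topic.
PROTECTED_RESOURCE_ARN = "arn:aws:cloudfront::111122223333:distribution/EXAMPLE"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ddos-alerts"

# Shield Advanced metrics for global resources such as CloudFront are reported in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# DDoSDetected is 1 while an event is in progress, so any nonzero sum breaches the alarm.
cloudwatch.put_metric_alarm(
    AlarmName="ddos-detected-cloudfront-distribution",
    Namespace="AWS/DDoSProtection",
    MetricName="DDoSDetected",
    Dimensions=[{"Name": "ResourceArn", "Value": PROTECTED_RESOURCE_ARN}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
    AlarmDescription="Notify the operations topic when Shield Advanced detects an event on the protected resource.",
)

A similar alarm can be defined on request-based metrics from the preceding table, such as ALB RequestCount or CloudFront Requests, when the goal is to alert on traffic anomalies rather than on detection events.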
|
General
|
consultant
|
Best Practices
|
AWS_Best_Practices_for_Oracle_PeopleSoft
|
ArchivedAWS Best Practices for Oracle PeopleSoft December 2017 This paper has been archived For the latest technical guidance see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived © 2017 Amazon Web Services Inc and DLZP Group All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the info rmation in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditio ns or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its custom ers Archived Contents Benefits of Running Oracle PeopleSoft on AWS 1 Key Benefits of AWS over On Premises 1 Key Benefits of AWS over SaaS 4 Amazon Web Services Concepts 5 Regions and Availability Zones 5 Amazon Elastic Cloud Compute 7 Amazon Relational Database Service 8 Elastic Load Balancing 8 Amazon Elastic Block Store 8 Amazon Machine Image 8 Amazon Simple Storage Service 9 Amazon Route 53 9 Amazon Virtual Private Cloud 9 AWS Direct Connect 9 AWS CloudFormation 10 Oracle PeopleSoft and Database Licensing on AWS 10 Oracle PeopleSoft and Database License Portability 10 Amazon RDS for Oracle Licensing Models 11 Best Practices for Deploying Oracle PeopleSoft on AWS 11 Traffic Distribution and Load Balancing 12 Use Multiple Availability Zones for High Availability 13 Scalability 14 Standby Instances 15 Amazon VPC Deployment and Connectivity Options 15 Disaster Recovery and Cross Region Deployment 15 Disaster Recovery on AWS with Production On Premises 18 Archived AWS Security and Compliance 19 The AWS Security Model 19 AWS Identity and Access Management 20 Monitoring and Logging 20 Network Security and Amazon Virtual Private Cloud 21 Data Encryption 21 Migration Scenarios and Best Practices 21 Migrate Existing Oracle PeopleSoft Environments to AWS 22 Oracle PeopleSoft Upgrade 22 Performance Testing 22 Oracle PeopleSoft Test and Development Environments on AWS 22 Disaster Recovery on AWS 22 Trai ning Environments 22 Monitoring and Infrastructure 23 Conclusion 23 Contributors 24 References 24 Archived Abstract This whitep aper cover s areas that should be considered when moving Oracle PeopleSoft applications to Amazon Web Services ( AWS) It help s you understand how to leverage AWS for all PeopleSoft a pplications including PeopleSoft Human Capital Management ( HCM ) Financials and Supply Chain Manag ement ( FSCM ) Interactive Hub ( IAH ) and Customer Relationship Management ( CRM) ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 1 Benefits of Running Oracle PeopleSoft on AWS Migrating Oracle PeopleSoft applications to AWS can be simplified by leveraging a standardized architecture footprint It is important to understand that this is not just a conversion from physical hardware to a v irtual ized environment In this section we discuss key benefits of running PeopleSo ft applications on AWS compared to various onpremises and Software asa Service (SaaS) environments whether virtualized or not Key Benefits of AWS o ver On 
Premises There are several key benefits to running PeopleSoft applications on AWS compared to on premises environments : • Eliminate Long Procurement C ycles : In the traditional deployment model responding to increases in c apacity whether it be disk CPU or memory can cause delays and challenges for your infrastructure team The following diagram provi des an overview of a typical client IT procurement cycle Each step is time sensitive and requir es large capital outlays and multiple approvals This process must be repeated for each change/increase in infrastructure which can compound costs and cause significant delays With AWS resources are available as needed within minutes of you requesting them ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 2 • Moore’s Law: With an on premises environment you end up owning hardware that depreciat es in value every year You cannot simply add and remove compu ting capacity on demand You’re generally locked into the price and capacity of the hardware that you have acquired as well as the resulting hardware support costs With AWS you can change the underlying infrastructure as new capabilities and configurati ons become available • Right Size Anytime: Often you end up oversizing your on premises environments to anticipate potential capacity needs or to address development and quality assurance ( QA) needs early on in the project cycle With AWS you can adjust capacity to match your current needs with ease Since y ou pay only for the services you use you save money during all phases of the software deployment cycle • Resiliency: On premises environments require an extensive set of hardware software and network monitoring tools Failures must be handled on a case bycase basis You must procure and replace failed equipment and correct software and configuration issues Key components of PeopleSoft must be replicated and managed With AWS you can leverag e Elastic Load Balancing (ELB) Auto Recovery for Amazon Elastic Compute Cloud (Amazon EC2 ) and Multi Availability IT Procurement Cycle01Capacility Planning 02Capital Allocation 03 Provisioning04 Maintenance05Hardware RefreshArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 3 Zone ( AZ) capabilities to build a highly tolerant and resilient system with the highest service level agreement ( SLA ) available • Disaster Recovery : Traditional disaster recovery (DR) solutions require immense upfront expenditures and are not easily scalable AWS offers built in disaster recovery solutions to execute your business data continuity plans at low er comparative costs which allows you to benefit from an on demand model while always hav ing the optimal amount of data redundancy • Incidental Data Center Costs: With an on premises environment you typically pay hardware support costs virtualizat ion licensing and support data center operational costs and more All of these costs can be eliminated or reduced by leveraging AWS • Testing: Even though testing is recommended prior to any PeopleSoft application or environment change few perform any significant testing after the initial application launch due to the expense and the unavailability of the required environment With AWS you can easily and quickly create and use a test environment thus eliminating the risk of discoverin g functional performance or security issues in production Again you are charged only for the hours the test environment is used • Hardware : All hardware platforms have e ndoflife (EOL) dates at which point 
the hardware is no longer supported and you are forced to replace it or face enormous maintenance costs With AWS you can simply upgrade platform instances to new AWS instance types with a single cl ick at no cost for the upgrade • High Availability: High availability for critical applications is a m ajor factor in corporate decisions to choose the AWS Cloud With AWS you can achieve a 9995% uptime by placing your data and your applications in multiple Availability Zones (locations) Your critical data synchronously replicates to standby instances automatically and recovers automatically This automation allows AWS to achieve better performance than the average SLA of other data centers With additional invest ment and infrastructure design uptime could approach 9999% • Unlimited Environments: Onpremises environments are rigid and take too long to provision For example if a performance issue is found in production it takes time to provision a test environment with an identical con figuration to the production environment On AW S you can ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 4 create the test environment and clone your production database quickly and easily Key Benefits of AWS o ver SaaS There are several key benefits to deploying PeopleSoft applications on AWS compared to using a SaaS solution : • Lower Total Cost of Ownership (TCO): If you already use PeopleSoft you do not have to purchase new licenses or take the risks associated with reimplementing your applications —you can just move your existing implementation to AWS If you are a new customer the TCO may still be lower when taking monthly SaaS fees into account • Security: On AWS PeopleSoft can be deployed in a v irtual private cloud (VPC) created using the Amazon Virtual Private Cloud service Your VPC can be connected to your onpremises data center s using AWS Direct Connect bypassing the public i nternet Using AWS Direct Connect you can assign private IP addresses to your PeopleSoft instances as if they were on your internal network By contrast SaaS must be accessed over the public internet making it less secure and requiring a bigger integration effort • Unlimited Usage: SaaS applications have governor/platform limits to accommodate their underlying multitenant architecture Governor limits restrict everything from the number of API calls and transaction times to data sets and file sizes With the AWS Cloud you can provision and use as much capacity as needed and pay only for what you use • Elastic Compute Capacity : SaaS products typically use a multitenant architecture that ties you to a specific instance and the limits of that instance With AWS you can provision as much or as little compute capacity as you need • Application Features and Functions : PeopleSoft lets you manage everything from Financials to the Human Capital Management Interactive Hub within the associated application pillar Many SaaS solutions require multiple applications that you must purchase and integrate even if they come from the same vendor It is easy to overlook the cost of integration in the application buying decision Running PeopleSoft with its rich integrated functionality on AWS avoids this cost ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 5 Amazon Web Services Concepts Understanding the various AWS services and how they can be leveraged will allow you to deploy secure and scalable Oracle PeopleSoft applications no matter if your organization has 10 users or 100000 users Regions and 
Availability Zones An AWS Region is a physical location in the world Each R egion is a separate geographi c area isolated from the other R egions Regions provide you the ability to place resources such as Amazon Elastic Compute Cloud (Amazon EC2 ) instances and data in multiple locations Re sourc es aren't replicated across Regions unless you do so specifically An AWS account provides multiple R egions so that you can launch your application in locations that meet your requirements For example you might want to launch your application in Eur ope to be closer to your European customers or to meet legal requirements Each Region has multiple isolated locations known as Availability Zones Each Availability Zone runs on its own physically distinct independent infrastructure and is engineered t o be highly reliable Common points of failure such as generators and cooling equipment are not shared across Availability Zones Because Availability Zones are physically separate even extremely uncommon disasters such as fires tornados or flooding wo uld only affect a single Availability Zone Each Availability Zone is isolated but the Availability Zones in a Region are connected through low latency links The following diagram illustrates the relationship between R egions and Availability Zones ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 6 Figure 1: Relationship between AWS Regions and Availability Zones The following figure shows the Regions and the number of Availability Zones in each Region provided by an AWS account at the time of this publication For the most current list of Regions and A vailability Zones see https://awsamazoncom/about aws/global infrastructure/ Note that you can’t describe or access additional Regions from the AWS GovCloud (US) Region or China ( Beijing ) Region ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 7 Figure 2: Map of AWS Regions and Availability Zones Amazon Elastic Cloud Compute Amazon Elastic Compute Cloud ( Amazon EC2) is a web service that provides resizable compute capacity in the cloud billed by the hour You can run virtual machines with various compute and memory capacities You have a choice of operating systems including different versions of Windows Server and Linux ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 8 Amazon Relational Database Service Amazon Relation Database Service ( Amazon RDS) makes it easy to set up operate and scale a relational database in the cloud It provides cost efficient and resizable capacity while managing time consuming database administration tasks allowing you to focus on your applications and business For PeopleSoft both Microsoft SQL Server and Oracle D atabase s are available Elastic Load Balanc ing Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple EC2 instances in the cloud It enables you to achieve greater levels of fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to distribute application traffic ELB can be used for load balancing web server traffic Amazon Elastic Block Store Amazon Elastic Block Store ( Amazon EBS) provides persistent block level storage volumes for use with EC2 instances in the AWS Cloud Each EBS volume is automatically replicated within its Availability Zone to protect you from component failure offering high availability and durability EBS volumes offer the consistent and lowlatency performance needed to run 
your workloads Amazon Machine Image An Amazon Machine Image (AMI) is simply a packaged up environment that includes all the necessary bits to set up and boot your EC2 instance AMIs are your unit of deployment Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable scalable storage of your AMIs so that we can boot them when you ask us to do so ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 9 Amazon Simple Storage Service Amazon Simple Storage Service ( Amazon S3) provides d evelopers and IT teams with secure durable highly scalable object storage Amazon S3 is easy to use with a simple web services interface to store and retrieve any amount of data from anywhere on the web With Amazon S3 you pay only for the storage you actually use There is no minimum fee and no setup cost Amazon Route 53 Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service It is designed to give developers and businesses an extremely reliable and cost effecti ve way to route end users to i nternet applications Amazon Virtual Private Cloud Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own private IP address range creation of subnets and configuration of route tables and network gateways You can leverage multiple layers of security incl uding security groups and network access control lists to help control access to EC2 instances in each subnet Additionally you can create a Hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC and leverage th e AWS Cloud as an extension of your corporate data center AWS Direct Connect AWS Direct Connect is a network service that provides an alternative to using the i nternet to utilize AWS cloud services Using AWS Direct Connect you can establish private ded icated network connectivity between AWS and your data center office or colocation environment which in many cases can reduce your ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 10 network costs increase bandwidth throughput and provide a more consistent network experience than i nternet based connections AWS CloudFormation AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources provisioning and updating them in an orderly and predictable fashion You can leverage AWS CloudFormation to quickly provision your PeopleSoft environments as well as to quickly create and update your infrastructure You can create your own CloudFormation templates to describe the AWS PeopleSoft resources and any associated dependencies or runtime parameters required to run them You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work— AWS CloudFormation takes care of this for you After your resources are deployed you can modify and update them in a controlled and predictable way in effect applying version control to your AWS infrastructure the same way you do with your traditional software You can deploy and update a template and its associated collection of resources (called a stack) by using the AWS Management Console AWS Command Line Interface or APIs AWS CloudFormation is available at no additional charge and you pay only for the AWS resources needed to run 
your applications Oracle PeopleSoft and Database Licensing on AWS Oracle PeopleSoft and Database License Portability Most Oracle s oftware licenses are fully portable to AWS including Enterprise License Agreement (ELA) Unlimited License Agreement (ULA) Business Process Outsourcing (BPO) and Oracle Partner Netw ork (OPN) You can use your existing Oracle PeopleSoft and Oracle D atabase licenses on AWS just like you would use them on premises ; however you should read your Oracle contract for specific information and consult with a knowledgeable Oracle licensing expert when in doubt ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 11 Amazon RDS for Oracle L icensing Models You can run Amazon RDS for Oracle under two different licensing models : “License Included” (LI) and “Bring Your Own License (BYOL)” In the LI model you do not need to separately purchase Oracle licenses as the Oracle Database software has been licensed by AWS If you already own Oracle Database licenses you generally can use the BYOL model to run Oracle databases on Amazon RDS The BYOL model is designed for customers who prefer to use existin g Oracle D atabase licenses or purchase new licenses directly from Oracle Best Practices for D eploying Oracle PeopleSoft on AWS The following architecture diagram illustrates how PeopleSoft Pure Internet Architecture (PIA) can be deployed on AWS You can deploy your PeopleSoft web application and process scheduler servers and the PeopleSoft d atabase across multiple Availability Zones for high availability of your application ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 12 Figure 3: Sample PeopleSoft Pure Internet Architecture deployment on AWS Traffic Distribution and Load Balancing Use Amazon Route 53 DNS to direct users to PeopleSoft hosted on AWS Use Elastic Load Balancing to distribute incoming traffic across your web servers deployed in multiple Availability Zones The load balancer serves as a single point of contact for clients which enables you to increase the availability of your application You can add and remove PeopleSoft web server instances from your load balancer as your needs change without disrupting the overall flow of inform ation Elastic Load Balancing ensures that only healthy w eb server instances receive traffic If a web server instance fails Elastic Load Balancing automatically reroutes the traffic to the remaining running web server instances If a failed web server in stance is restored Elastic Load Balancing restores the traffic to that instance ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 13 The PeopleSoft w eb servers will load balance the requests among the PeopleSoft application servers ; if a PeopleSoft app lication server fails the requests are routed to anothe r available PeopleSoft application server PeopleSoft application server load balancing and failover can be configured in the PeopleSoft configurationproperties file Please refer to the PeopleSoft Documentation for more information on configuring PeopleSoft application server load balancing and failover Use Multiple Availability Zones for High Availability Each Availability Zone is isolated from other Availability Zones and runs on its own physically distinct independent infrastructure The likelihood of two Availability Zones experiencing a failure at the same time is relatively small and you can spread your PeopleSoft web application and process scheduler servers across multiple Availability Zones to ensure 
high availability of your application In the unlikely event of failure of one Availability Zone user requests are routed by Elastic Load Balancing to the web server instances in the second Availability Zone and the PeopleSoft web server s will failover their request s to PeopleSoft application server instances in the second Availability Zone This ensures that your application continues to remain available in the unlikely event of an Availability Zone failure In addition to the PeopleSoft web and application servers t he PeopleSoft database on Amazon RDS can be deployed in a MultiAZ configuration Multi AZ deployments provide enhanced availability and durability for Amazon RDS DB instances making them a natural fit for production database workloads When you provision a Multi AZ DB i nstance Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a “standby” instance in a different Availability Zone In case of an infrastructure failure (for example instance hardware failure storage failure or network disruption) Amazon RDS performs an automatic failover to the “standby ” instance Since the endpoint for your DB i nstance remains the same after a failover your application can resume database operation s as soon as the failover is complete without the need for manual administrative intervention See Configuring Amazon RDS as an Oracle PeopleSoft Database to learn how to set up Amazon RDS for Oracle as the database backend of your PeopleSoft applicatio n ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 14 You can use the Amazon EC2 Auto Recovery feature to recover failed PeopleSoft web application and process scheduler server instances in case of failure of the underlying host When using Amazon EC2 Auto Recovery several system status checks monitor t he instance and the other components that need to be running in order for your instance to function as expected Among other things the system status checks look for loss of network connectivity loss of system power software issues on the physical host and hardware issues on the physical host If a system status check of the underlying hardware fails the instance will be rebooted (on new hardware if necessary) but will retain its i nstance ID IP address Elastic IP a ddresses EBS volume attachments an d other configuration details Scalability When using AWS you can scale your application easily due to the elastic nature of the cloud You can scale up the PeopleSoft web application and process scheduler servers simply by changing the instance type to a larger instance type For example you can start with an r4large instance with 2 vCPUs and 15 GiB RAM and scale up all the way to an x132xlarge instance with 128 vCPUs and 1952 GiB RAM After selecting a new instance type only a restart is required f or the changes to take effect Typically the resizing operation is completed in a few minutes the EBS volumes remain attached to the instances and no data migration is required For your PeopleSoft database deployed on Amazon RDS you can scale the comp ute and storage independently Y ou can scale up the compute simply by changing the DB instance class to a larger DB instance class This modification typically takes only a few minutes and the database will be temporarily unavailable during this period You can increase the storage capacity and IOPS provisioned for your database without any impact on database availability You can scale out the web and application tier by adding and configuring more 
instances when required You can launch a new EC2 instanc e in a few minutes However additional work is required to configure the new web and application tier instance Although it might be possible to automate the scaling out of the web and application tier using scripting this requires an additional technica l investment A simpler alternative might be to use stand by instances as explained in the next section ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 15 Standby Instances To meet extra capacity requirements additional instances of PeopleSoft web and application servers can be pre installed and configured on EC2 instances These standby instances can be shut down until extra capacity is required Charges are not incurred when EC2 instances are shut down —only EBS storage charges are incurred At the time of this publication EBS General Purpose volumes are priced at $010 per GB per month in the US East ( Ohio ) Region Therefore for an EC2 instance with 120 GB hard disk drive ( HDD ) space the storage charge is only $12 per month These pre installed standby instances provide you the flexibility to use these instances for meeting additional capacity needs as and when required Amazon VPC Deployment and Connectivity Options Amazon VPC provides you with several options for connecting your AWS virtual networks with other remote networks securely If your users are primarily accessing the PeopleSoft application from an office or on premise s you can use a hardware IP sec VPN connection or AW S Direct Connect to connect to the onpremise s network and Amazon VPC If they’re accessing the application from outside the office (eg a sales rep or customer access es it from the field or from home) you can use a Software appliance based VPN connection over the i nternet Please refer to the Amazon Virtual Private Cloud Connectivity Options whitepaper for detailed information Disaster Recovery and Cross Region Deployment Even though a single R egion architecture with multi Availability Zone deployment might suffice for most use cases some customers might want to consider a multi Region deployment for disaster recovery (DR) depending on business requirements For example there might be a business policy that mandates that the disaster recovery site should be located a certain distance away from the primary site ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 16 CrossR egion deployments for DR should be designed and validated for specific use cases based on customer uptime needs and budget The following diagram depicts a typical PeopleSoft deployment across R egions that addresses both high availability and DR requirements The users are directed to the PeopleSoft application in the primary Region using Amazon Route 53 In ca se the primary Region is unavailable due to a disaster failover is initiated and the users will be redirected towards the PeopleSoft application deployed in the DR R egion The primary database is deployed on Amazon RDS for Oracle in a Multi AZ configurat ion AWS Database Migration Service (AWS DMS) in continuous data replication mode is used to replicate the data from the RDS instance in the primary R egion to another RDS instance in the DR R egion Note that AWS DMS can replicate only the data not the database schema changes The database schema changes in the RDS DB instance in the primary R egion should be applied separately to the RDS DB instance in the DR Region This could be done while patching or updating the PeopleSoft application 
in the DR R egion Figure 4: Sample PeopleSoft cross Region deployment on AWS ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 17 Deploying the Database on Amazon EC2 Instances While Amazon RDS is the recommended option for deploying the PeopleSoft database there could be some scenarios where Amazon RDS might not be suitable For example Amaz on RDS might not be suitable if the database size is larger than 6 TB which is the current limit for Amazon RDS for Oracle In such scenarios you can install the PeopleSoft database on Oracle on EC2 instances and configure Oracle Data G uard replication f or high availability and DR as shown in the following figure Figure 5: Sample multiRegion deployment with Oracle on Amazon EC2 In this DR scenario the database is deployed on Oracle running on EC2 instances Oracle Data Guard replication is configured between the primary database and two standby databases One of the two standby databases is ‘local’ (for synchronous replication) in another A vailability Zone in the primary Region The other is a ‘remote’ standby database ( for asynchronous replication) in the DR R egion In case of failure of the primary database the ‘local’ standby database is promoted as the primary database and the PeopleSoft application will connect to it In the extremely unlikely event of a R egion failure or unavailability the ‘rem ote’ standby database is promoted as the primary database and users are redirected to PeopleSoft application in the DR R egion using Route 53 ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 18 For more details on deploying Oracle Database with Data Guard replication on AWS please see the Oracle Database on AWS Quick Start Refer to this AWS whitepaper to learn more about using AWS for Disaster Recovery Disaster Recovery on AWS with Production On Premises You can use AWS to deploy DR environments for PeopleSoft applications running on premises In this scenario the production environment remains on premise s but the DR environment is deployed on AWS If the production environment fails a failover is initiated and users of your application are redirected to the PeopleSoft application deployed on AWS The process is fairly simple and involves the following m ajor steps: 1 Set up connectivity between the onpremises data center and AWS using VPN or AWS Direct Connect 2 Install PeopleSoft web application and process scheduler servers on AWS 3 Install the secondary database on AWS and configure Oracle Data Guard replication between the on premises production database and the secondary database on AWS Alternatively instead of Oracle Data Guard you could use the AWS Database Migration Service (AWS DMS) in continuous data replication mode for replicating the on premises production database to the secondary database on AWS AWS DMS can replicate only the data not the database schema changes The database schema changes in the onpremises production database should be applied separately to the secondary database on A WS This could be done while patching or updating the PeopleSoft application on AWS 4 If the onpremises production environment fails initiate a failover and redirect your users to the PeopleSoft application on AWS ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 19 AWS Security and Compliance The AWS C loud security infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today Security on AWS is very similar to security in 
your on premises data center —but without the costs and complexities involved in protecting facilities and hardware AWS provides a secure global infrastructure plus a range of features that you can use to help secure your systems and data in the cloud To learn more about AWS Security visit the AWS Security Center AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud AWS engages with external certifying bodies and independent auditors to provide customers with extensive information regarding the policies processes and controls established and operated by AWS To learn more about AWS Compliance visit the AWS Compliance Center The AWS Securit y Model The AWS infrastructure has been architected to provide an extremely scalable highly reliable platform that enables you to deploy applications and data quickly and securely Security in the cloud is slightly different than security in your on premi ses data centers When you move computer systems and data to the cloud security responsibilities become shared between you and your cloud service provider In this case AWS is responsible for securing the underlying infrastructure that supports the cloud and you are responsible for securing workloads that you deploy in AWS This shared security responsibility model can reduce your operational burden in many ways and gives you the flexibility you need to implement the most applicable security controls fo r your business functions in the AWS environment ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 20 Figure 6 : The AWS shared responsibility model It’s recommended that you take advantage of the various security features AWS offers when deploying PeopleSoft application s on AWS Some of them are listed in the following discussion AWS Identity and Access Management With AWS Identity and Access Management ( IAM ) you can centrally manage users security credentials such as passwords access keys and permissions policies that control which AWS services and resources users can access IAM supports multifactor authentication (MFA) for privileged accounts including options for hardware based authenticators and support for i ntegration and federation with corporate directories to reduce administrative overhead and improve end user experience Monitoring and Logging AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to you The recorded information includes the identity of the API caller the time of the API call the source IP address of the API caller the request parameters and the response elements returned by the AWS service This provides deep visibility into API calls including who what when and from where calls were made The AWS API call history produced b y CloudTrail enables security analysis resource change tracking and compliance auditing ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 21 Network Security and Amazon Virtual Private Cloud You create one or more subnets within each VPC Each instance launched in your VPC is connected to one subnet Tra ditional layer 2 security attacks including MAC spoofing and ARP spoofing are blocked You can configure network ACLs which are stateless traffic filters that apply to all inbound or outbound traffic from a subnet within your VPC These ACLs can contai n ordered rules to allow or deny traffic based on IP protocol by service port as well as source/destination IP address Security groups are a 
complete firewall solution enabling filtering on both ingress and egress traffic from an instance Traffic can b e restricted by any IP protocol by service port as well as source/destination IP address (individual IP or classless inter domain routing (CIDR) block) Data Encryption AWS offers you the ability to add a layer of security to your data at rest in the cloud by providing scalable and efficient encryption features Data encryption capabilities are available in AWS storage and database services such as Amazon EBS Amazon S3 Amazon Glacier Amazon RDS for Oracle Amazon RDS for SQL Server and Amazon Redsh ift Flexible key management options allow you to choose whether to have AWS manage the encryption keys using the AWS Key Management Service o (AWS KMS) or to maintain complete control over your keys Dedicated hardware based cryptographic key storage opt ions (AWS CloudHSM) are available to help you satisfy compliance requirements For more information see the following AWS whitepapers: Introduction to AWS Security AWS Security Best Practices Migration Scenarios and Best Practices Each customer could potentially have a different migration scenario This section covers some of the most common scenarios ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 22 Migrate Existing Oracle PeopleSoft Environments to AWS This is most suitable if you are on a recent release of PeopleSoft You should design your AWS deployment based on the best practices in this whitepaper Oracle PeopleSoft Upgrade You can leverage AWS as the upgrade environment to keep the PeopleSoft upgrade costs to a minimum In the end you have the option to leverage this new environment for test and development only or you can choose to migrate your entire PeopleSoft environment to AWS Either way it’s a win for you as the overall TCO can be reduced Performance Testing AWS enables you to test your PeopleSoft applications during initial deployments or upgrades with minimal cost because you are only charged for the resour ces you use when the tests run This enable s more consistent repeatable testing for PeopleSoft upgrades and updates which can be budgeted on a predictable basis depending upon your normal need cycles Oracle PeopleSoft Test and Development Environments on AWS The flexibility and pay asyougo nature of AWS makes it compelling for setting up test and d evelopment environments whether to try out AWS prior to a migration or just for additional test and d evelopment environment s if the migration of the produ ction environment is no t imminent Disaster Recovery on AWS You may want to set up a disaster recovery ( DR) environment for your existing PeopleSoft applications on AWS even if your production environment is still on premises This can be done at a much l ower cost than setting up a traditional DR environment Training Environments By leveraging the ability to replicate the p roduction environment you can quickly provision a training environment for short term use and train your ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 23 employees using the most current version of production After training has been completed these instances can be terminate d to save money Monitoring and Infrastructure After migration of your PeopleSoft application to AWS y ou can continue to use the monitoring tools you are familiar with for monitoring your PeopleSoft application You can use PeopleSoft P erformance Monitor to monitor the performance of your PeopleSoft environment You can collect 
real time resource utilization metrics from your web servers application servers and PeopleSoft Process Scheduler servers as well as key metrics on PeopleTools runtime execution such as SQL statements and PeopleCode events Optionally you can use Oracle Enterprise Manager for monitoring your PeopleSoft environment by instal ling the PeopleSoft Enterprise Environment Management Plug in You can also use Amazon CloudWatch to monitor AWS Cloud resources and the applications you run on AWS Amazon CloudWatch enables you to monitor your AWS resources in near real time including A mazon EC2 instances Amazon EBS volumes ELB l oad balancers and Amazon RDS DB instances Metrics such as CPU utilization latency and request counts are provided automatically for these AWS resources You can also supply your own logs or custom applicati on and system metrics such as memory usage transaction volumes or error rates and Amazon CloudWatch will monitor these as well You can use the Enhanced Monitoring feature of Amazon RDS to monitor your PeopleSoft database Enhanced Monitoring gives you access to over 50 metrics including CPU memory file system and disk I/O You can also view the processes running on the DB instance and their related metrics including percentage of CPU usage and memory usage Conclusion By deploying PeopleSoft applications on the AWS C loud you can reduce costs and simultaneously enable capabilities that might not be possible or cost effective if you deployed your application in an on premises data center The following benefits of deploying PeopleSoft application on AWS were discussed: ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 24 • Low cost – R esources are billed by the hour and only for the duration they are used • Capex to Opex – C hang e from Capex to Op ex to eliminate the need for large capital outlays • High availability – Achieve high availability of 9995 % or more by deploying PeopleSoft in a Multi AZ configuration • Flexibility –Add compute capacity elastically to cope with demand • Testing – Add test environments and use them for short durations Contributors The following individuals and organizations con tributed to this document: • Ashok Sundaram Solutions Architect Amazon Web Services • David Brunet VP Research and Development DLZP Group • Yoav Eilat Amazon Web Services References • Test Drive PeopleSoft R unning EC2 and RDS : http://wwwdlzpgroupcom/testdrivehtml • Amazon EC2 Documentation : https://awsamazoncom/documentation/ec2/ • Amazon RDS Documentation : https://awsamazoncom/documentation/rds/ • Amazon Cloud Watch : https://awsamazoncom/cloudwatch/ • AWS Cost Estimator : http://calculators3amazonawscom/indexhtml • AWS Trusted Advisor : https://awsamazoncom/premiumsupport/trustedadvisor/ • Oracl e Cloud Licensing : http://wwworaclecom/us/corporate/pricing/cloud licensing 070579pdf ArchivedAmazon Web Services – AWS Best Practices for Oracle PeopleSoft Page 25 • Amazon VPC Connectivity Options : https://mediaamazonwebservicescom/AWS_Amazon_VPC_Connectiv ity_Optionspdf • AWS Security : http://d0awsstaticcom/whitepapers /Sec urity/Intro_to_AWS_Security pdf • AWS Security B est Practices : http://mediaamazonwebservicescom/AWS_Security_Best_Practicesp df • Disaster Recovery on AWS : http://d36cz9buwru1ttcloudfrontnet/AWS_Disaster_Recoverypdf • AWS Support : https://awsamazoncom/premiumsupport/ • DLZP Gro up: http://wwwdlzpgroupcom/
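The Multi-AZ database deployment recommended in this paper can be provisioned programmatically as well as through the AWS Management Console. The following minimal sketch, which is not part of the original whitepaper, shows how a Multi-AZ Amazon RDS for Oracle instance for a PeopleSoft database might be created with the AWS SDK for Python (boto3). It assumes the bring-your-own-license (BYOL) model and an existing DB subnet group, security group, and Enhanced Monitoring role; all identifiers, names, and the password shown are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-2")

# Placeholders throughout: adjust the instance class, storage, and networking to your sizing.
rds.create_db_instance(
    DBInstanceIdentifier="psft-hcm-prod",
    Engine="oracle-ee",
    LicenseModel="bring-your-own-license",   # assumes existing Oracle licenses (BYOL)
    DBInstanceClass="db.r4.2xlarge",
    MultiAZ=True,                            # synchronous standby in a second Availability Zone
    AllocatedStorage=500,                    # GiB; size for your PeopleSoft database
    StorageType="gp2",
    StorageEncrypted=True,
    MasterUsername="psadmin",
    MasterUserPassword="REPLACE_WITH_A_SECURE_PASSWORD",
    DBSubnetGroupName="peoplesoft-db-subnets",                                # assumed to exist
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],                             # assumed to exist
    MonitoringInterval=60,                                                    # Enhanced Monitoring, as discussed above
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",   # assumed to exist
)

In practice this call would typically be expressed in an AWS CloudFormation template, as recommended earlier in the paper, so that the database, web, and application tiers can be provisioned and updated together as a single stack.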
|
General
|
consultant
|
Best Practices
|
AWS_Certifications_Programs_Reports_and_ThirdParty_Attestations
|
ArchivedAWS C ertifications Programs R eports and ThirdParty Attestations March 2017 This paper has been archived For the latest information see A WS Services in Scope by Compliance ProgramArchived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contract ual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents CJIS 1 CSA 1 Cyber Essentials Plus 2 DoD SRG Levels 2 and 4 2 FedRAMP SM 3 FERPA 3 FIPS 140 2 4 FISMA and DIACAP 4 GxP 4 HIPAA 5 IRAP 6 ISO 9001 6 ISO 27001 7 ISO 27017 8 ISO 27018 8 ITAR 9 MPAA 9 MTCS Tier 3 Certification 10 NIST 10 PCI DSS Level 1 11 SOC 1/ISAE 3 402 11 SOC 2 13 SOC 3 14 Further Reading 15 Document Revisions 15 Archived Abstract AWS engages with external certifying bodies and independent auditors to provide customers with considerable information regarding the policies processes and controls established and operated by AWS ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 1 CJIS AWS complies with the FBI's Criminal Justice Inf ormation Services (CJIS) standard We sign CJIS security agreements with our customers including allowing or performing any required employee background checks according to the CJIS Security Policy Law enforcement customers (and partners who manage CJI) are taking advantage of AWS services to improve the security and protection of CJI data using the advanced security services and features of AWS such as activity logging ( AWS CloudTrail ) encryption of data in motion and at rest (S3’s Server Side Encryption with the option to bring your own key) comprehensive key management and protection ( AWS Key Management Service and CloudHSM ) and integrated permission management (IAM federated identity management multi factor authentication) AWS has created a Criminal Justice Information Services (CJIS) Workbook in a security plan template format aligned to the CJIS Policy Areas Additionally a CJIS Whitepaper has been developed to help guide customers in their journey to cloud adoption Visit the CJIS Hub Page at https://awsamazoncom/compliance/cjis/ CSA In 2011 the Cloud Security Alliance (CSA) launched STAR an initiative to encourage transparency of security practices within cloud providers The CSA Security Trust & Assurance Registry (STAR) is a free pub licly accessible registry that documents the security controls provided by various cloud computing offerings thereby helping users assess the security of cloud providers they currently use or are considering contracting with AWS is a CSA STAR registrant and has completed the Cloud Security Alliance (CSA) Consensus Assessments Initiative Questionnaire (CAIQ) This CAIQ published by the CSA provides a way to reference and document what security controls exist in AWS’ Infrastructure as a Service offerings 
The CAIQ provides 298 questions a cloud consumer and cloud auditor may wish to ask of a cloud provider See CSA Consensus Assessments Initiative Questionnaire ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 2 Cyber Essentials P lus Cyber Essentials Plus is a UK Government backed industry supported certification scheme introduced in the UK to help organizations demonstrate operational security against common cyber attacks It demonstrates the baseline controls AWS implements to mitigate the risk from common Internet based threats within the context of the UK Government's " 10 Steps to Cyber Security " It is backed by industry including the Federation of Small Businesses the Confederation of British Industry and a number of insurance organizations that offer incentives for businesses holding this certificatio n Cyber Essentials sets out the necessary technical controls; the related assurance framework shows how the independent assurance process works for Cyber Essentials Plus certification through an annual external assessment conducted by an accredited assess or Due to the regional nature of the certification the certification scope is limited to EU (Ireland) region DoD SRG Levels 2 and 4 The Department of Defense (DoD) Cloud Security Model (SRG) provides a formalized assessment and authorization process for cloud service providers (CSPs) to gain a DoD Provisional Authorization which can subsequently be leveraged by DoD customers A Provisional Authorization under the SRG provides a reusabl e certification that attests to our compliance with DoD standards reducing the time necessary for a DoD mission owner to assess and authorize one of their systems for operation on AWS AWS currently holds provisional authorizations at Levels 2 and 4 of th e SRG Additional information of the security control baselines defined for Levels 2 4 5 and 6 can be found at http://iasedisamil/cloud_security/Pages/indexaspx Visit the DoD Hub Page at https://awsamazoncom/compliance/dod/ ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 3 FedRAMPsm AWS is a Federal Risk and Authorization Management Program (FedRAMPsm) Compliant Cloud Service Provider AWS has completed th e testing performed by a FedRAMPsm accredited Third Party Assessment Organization (3PAO) and has been granted two Agency Authority to Operate (ATOs) by the US Department of Health and Human Services (HHS) after demonstrating compliance with FedRAMPsm requi rements at the Moderate impact level All US government agencies can leverage the AWS Agency ATO packages stored in the FedRAMPsm repository to evaluate AWS for their applications and workloads provide authorizations to use AWS and transition workload s into the AWS environment The two FedRAMPsm Agency ATOs encompass all US regions (the AWS GovCloud (US) region and the AWS US East/West regions) For a complete list of the services that are in the accreditation boundary for the regions stated above see the AWS Services in Scope by Compliance Program page ( https://awsamazoncom/compliance/services inscope/ ) For more information on AWS FedRAMPsm compliance please see the AWS FedRA MPsm FAQs at https://awsamazoncom/compliance/fedramp/ FERPA The Family Educational Rights and Privacy Act (FERPA) (20 USC § 1232g; 34 CFR Part 99) is a Federal law that protects the privacy of student education records The law applies to all schools that receive funds under an applicable program of the US Department of Education FERPA gives 
parents certain rights with respect to their children's education records These rights transfer to the student when he or she reaches the age of 18 or attends a school beyond the high school level Students to whom the rights have transferred are "eligible students" AWS enables c overed entities and their business associates subject to FERPA to leverage the secure AWS environment to process maintain and store protected education information AWS also offers a FERPA focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of educational data ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 4 The FERPA Compliance on AWS whitepaper outlines how companies can use AWS to process systems that facilitate FERPA compliance: FIPS 1402 The Federal Information Processing Standard (FIPS) Publication 1402 is a US government security standard that specifies the security requirements for cryptographic modules protecting sensitive information To support customers with FIPS 140 2 requirements SSL terminations in AWS GovCloud (US) operate using FIPS 140 2 validated hardware AWS works with AWS GovCloud (US) customers to provide the information they need to help manage compliance when using the AWS GovCloud (US) environment FISMA and DIACAP AWS enables US government agencies to achieve and sustain compliance with the Federal Information Security Management Act ( FISMA ) The AWS infrastructure has been evaluated by independent assessors for a variety of government systems as part of their system owners' approval process Numerous Federal Civilian and Department of Defense (DoD) organizations have s uccessfully achieved security authorizations for systems hosted on AWS in accordance with the Risk Management Framework (RMF) process defined in NIST 800 37 and DoD Information Assurance Certification and Accreditation Process ( DIACAP ) GxP GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products such as drugs medical devices and medical software applications The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the integrity of data used to make product related safety decisions AWS offers a GxP whitepaper which details a comprehensive approach for using AWS for GxP systems This whitepaper provides guidance for using AWS Products in the context of GxP and the content has been developed in conjunction with AWS pharmaceutical and medical device customers as well as ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 5 software partners who are curre ntly using AWS Products in their validated GxP systems For more information on the GxP on AWS please contact AWS Sales and Business Development For additional information ple ase see our GxP Comp liance FAQs at https://awsamazoncom/compliance/gxp part 11annex 11/ HIPAA AWS enables covered entities and their business associates subject to the US Health Insurance Portabilit y and Accountability Act (HIPAA) to leverage the secure AWS environment to process maintain and store protected health information and AWS will be signing business associate agreements with such customers AWS also offers a HIPAA focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of health information The Architecting for HIPAA Secur 
ity and Compliance on Amazon Web Services whitepaper outlines how companies can use AWS to process systems that facilitate HIPAA and Health Information Technology for Economic and Clinical Health (HITECH) compliance Customers who execute an AWS BAA may use any AWS service in an account designated as a HIPAA Account but they may only process store and transmit PHI using the HIPAA eligible services defined in the AWS BAA For a complete list of these services see the HIPAA Eligible Services Reference page (https://awsamazoncom/compliance/hipaa eligible services reference/) AWS maintains a standards based risk management program to ensure that the HIPAA eligible servic es specifically support the administrative technical and physical safeguards required under HIPAA Using these services to store process and transmit PHI allows our customers and AWS to address the HIPAA requirements applicable to the AWS utility based operating model For additional information please see our HIPAA Compliance FAQs and Architecting for HIPAA Security and Compliance on Amazon Web Services ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 6 IRAP The Information Security Registered Assessors Program (IRAP) enables Australian government customers to validate that appropriate controls are in place and determine the appropria te responsibility model for addressing the needs of the Australian Signals Directorate (ASD) Information Security Manual (ISM) Amazon Web Services has completed an independent assessment that has determined all applicable ISM controls are in place relating to the processing storage and transmission of Unclassified (DLM) for the AWS Sydney Region For more information see the IRAP Compli ance FAQs at https://awsamazoncom/compliance/irap/ and AWS alignment with the Australian Signals Directorate (ASD) Cloud Computing Security Considerations ISO 9001 AWS has achieved ISO 9001 certification AWS’ ISO 9001 certification directly supports customers who develop migrate and operate their quality controlled IT systems in the AWS cloud Customers can leverage AWS’ compliance reports as evidence for their own ISO 9001 programs and industry specific quality programs such as GxP in life sciences ISO 13485 in medical devices AS9100 in aerospace and ISO/TS 16949 in automotive AWS customers who don't have quality system requirements will still benefit from the additional assurance and transparency that an ISO 9001 certification provides The ISO 9001 certification covers the quality management system over a specified scope of AWS services and Regions of operations For a complete list of services see the AWS Services in Scope by Compliance Program page (https://awsamazoncom/compliance/services inscope/ ) ISO 9001:2008 is a global standard for managing the quality of products and services The 9001 standard outlines a quality management system based on eight principles defined by the International Organization for Standardization (ISO) Technical Committee for Quality Management and Quality Assurance They include: • Customer focus ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 7 • Leadership • Involvement of people • Process approach • System approach to management • Continual Improvement • Factual approach to decision making • Mutually beneficial supplier relationships The AWS ISO 9001 certification can be downloaded at https://d0awsstaticcom/certifications/iso_9001_certificationpdf AWS provides additional information and 
frequently asked questions abou t its ISO 9001 certification at: https://awsamazoncom/compliance/iso 9001 faqs/ ISO 27001 AWS has achieved ISO 27001 certification of our Information Security Management System (ISMS) cove ring AWS infrastructure data centers and services For a complete list of services see the AWS Services in Scope by Compliance Program page ( https://awsamazoncom/compliance/services in scope/ ) ISO 27001/27002 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer informati on that’s based on periodic risk assessments appropriate to ever changing threat scenarios In order to achieve the certification a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality integrity and availability of company and customer information This certification reinforces Amazon’s commitment to providing significant information regarding our security controls and practices The AWS ISO 27001 certification can be downloaded at https://d0awsstaticcom/certifications/iso_27001_global_certificationpdf ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 8 AWS provides additional information and frequently asked questions about its ISO 27001 certification at: https://awsamazoncom/compliance/iso 27001 faqs/ ISO 27017 ISO 27017 is the newest code of practice released by the International Organization for Standardization (ISO) It provides implementation guidance on information security controls that specifically relate to cloud services AWS has achieved ISO 27017 certification of our Information Security Management System (ISMS) covering AWS infrastructure data centers and services For a complete list of services see the AWS Services in Scope by Compliance Program page ( https://aws amazoncom/compliance/services in scope/ ) The AWS ISO 27017 certification can be downloaded at https://d0awsstaticcom/certifications/iso_27017_certificationpdf AWS pr ovides additional information and frequently asked questions about its ISO 27017 certification at https://awsamazoncom/compliance/iso 27017 faqs/ ISO 27018 ISO 27018 is the first Internat ional code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Informatio n (PII) It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set AWS has achieved ISO 27018 certification of our Information Sec urity Management System (ISMS) covering AWS infrastructure data centers and services For a complete list of services see the AWS Services in Scope by Compliance Program page ( https://awsamazoncom/compliance/services in scope/ ) ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 9 The AWS ISO 27018 certification can be downloaded at https://d0awsstaticcom/certifications/iso_27018_certificationpdf AWS provides additional information and frequently asked questions about its ISO 27018 certification at https://awsamazo ncom/compliance/iso 27018 faqs/ ITAR The AWS GovCloud (US) region supports US International Traffic in Arms Regulations ( ITAR ) compliance As a part of managing a comprehensive ITAR compliance program companies 
subject to ITAR export regulations must control unintended exports by restricting access to protected data to US Persons and restricting physical location of th at data to the US AWS GovCloud (US) provides an environment physically located in the US and where access by AWS Personnel is limited to US Persons thereby allowing qualified companies to transmit process and store protected articles and data subject t o ITAR restrictions The AWS GovCloud (US) environment has been audited by an independent third party to validate the proper controls are in place to support customer export compliance programs for this requirement MPAA The Motion Picture Association of America (MPAA) has established a set of best practices for securely storing processing and delivering protected media and content ( http://wwwfightfilmtheftorg/facility security programhtml ) Media companies use these best practices as a way to assess risk and security of their content and infrastructure AWS has demonstrated alignment with the MPAA best practices and the AWS infrastructure is compliant with all applicable MPAA i nfrastructure controls While the MPAA does not offer a “certification” media industry customers can use the AWS MPAA documentation to augment their risk assessment and evaluation of MPAA type content on AWS See the AWS Compliance MPAA hub page for additional details at https://awsamazoncom/compliance/mpaa/ ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 10 MTCS Tier 3 Certification The Multi Tier Cloud Security (MTCS) is an operational Singapore security management Standard (SPRING SS 584:2013) based on ISO 27001/02 Information Security Management System (ISMS) standards The certification assessment requires us to: • Systematically evaluate our information security risks taking into account the impact of company threats and vulnerabili ties • Design and implement a comprehensive suite of information security controls and other forms of risk management to address company and architecture security risks • Adopt an overarching management process to ensure that the information security controls meet the our information security needs on an ongoing basis View the MTCS Hub Page at https://awsamazoncom/compliance/aws multitiered cloud security standard certification/ NIST In June 2015 The National Institute of Standards and Technology (NIST) released guidelines 800 171 "Final Guidelines for Protecting Sensitive Government Information Held by Contractors" This guidance is applicable to the pro tection of Controlled Unclassified Information (CUI) on nonfederal systems AWS is already compliant with these guidelines and customers can effectively comply with NIST 800 171 immediately NIST 800 171 outlines a subset of the NIST 800 53 requirements a guideline under which AWS has already been audited under the FedRAMP program The FedRAMP Moderate security control baseline is more rigorous than the recommended requirements established in Chapter 3 of 800 171 and includes a significant number of security controls above and beyond those required of FISMA Moderate systems that protect CUI data A detailed mapping is available in the NIST Special Publication 800 171 starting on page D2 (which is page 37 in the PDF) ArchivedAmazon Web Services –Certifications Programs Reports and Third Party Attestations Page 11 PCI DSS Level 1 AWS is Level 1 compliant under the Payment Card Industry (PCI) Data Security Standard (DSS) Customers can run applicati ons on our PCI compliant technology 
infrastructure for storing, processing, and transmitting credit card information in the cloud. In February 2013, the PCI Security Standards Council released the PCI DSS Cloud Computing Guidelines. These guidelines provide customers who are managing a cardholder data environment with considerations for maintaining PCI DSS controls in the cloud. AWS has incorporated the PCI DSS Cloud Computing Guidelines into the AWS PCI Compliance Package for customers. The AWS PCI Compliance Package includes the AWS PCI Attestation of Compliance (AoC), which shows that AWS has been successfully validated against standards applicable to a Level 1 service provider under PCI DSS Version 3.1, and the AWS PCI Responsibility Summary, which explains how compliance responsibilities are shared between AWS and our customers in the cloud. For a complete list of services in scope for PCI DSS Level 1, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/). For more information, see https://aws.amazon.com/compliance/pci-dss-level-1-faqs/

SOC 1/ISAE 3402
Amazon Web Services publishes a Service Organization Controls 1 (SOC 1) Type II report. The audit for this report is conducted in accordance with the American Institute of Certified Public Accountants (AICPA) AT 801 (formerly SSAE 16) and the International Standards for Assurance Engagements No. 3402 (ISAE 3402). This dual-standard report is intended to meet a broad range of financial auditing requirements for US and international auditing bodies. The SOC 1 report audit attests that AWS' control objectives are appropriately designed and that the individual controls defined to safeguard customer data are operating effectively. This report replaces the Statement on Auditing Standards No. 70 (SAS 70) Type II audit report.

The AWS SOC 1 control objectives are listed below. The report itself identifies the control activities that support each of these objectives and the independent auditor's results from testing each control.

Security Organization – Controls provide reasonable assurance that information security policies have been implemented and communicated throughout the organization.
Employee User Access – Controls provide reasonable assurance that procedures have been established so that Amazon employee user accounts are added, modified, and deleted in a timely manner and reviewed on a periodic basis.
Logical Security – Controls provide reasonable assurance that policies and mechanisms are in place to appropriately restrict unauthorized internal and external access to data, and that customer data is appropriately segregated from other customers.
Secure Data Handling – Controls provide reasonable assurance that data handling between the customer's point of initiation and an AWS storage location is secured and mapped accurately.
Physical Security and Environmental Protection – Controls provide reasonable assurance that physical access to data centers is restricted to authorized personnel, and that mechanisms are in place to minimize the effect of a malfunction or physical disaster on data center facilities.
Change Management – Controls provide reasonable assurance that changes (including emergency/non-routine and configuration changes) to existing IT resources are logged, authorized, tested, approved, and documented.
Data Integrity, Availability and Redundancy – Controls provide reasonable assurance that data integrity is maintained through all phases, including transmission, storage, and processing.
Incident Handling – Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved.

The SOC 1 reports are designed to focus on controls at a service organization that are likely to be relevant to an audit of a user entity's financial statements. Because the AWS customer base is broad and the use of AWS services is equally broad, the applicability of controls to customer financial statements varies by customer. Therefore, the AWS SOC 1 report is designed to cover the specific key controls likely to be required during a financial audit, as well as a broad range of IT general controls, to accommodate a wide range of usage and audit scenarios. This allows customers to leverage the AWS infrastructure to store and process critical data, including data that is integral to the financial reporting process. AWS periodically reassesses the selection of these controls to consider customer feedback and usage of this important audit report. AWS' commitment to the SOC 1 report is ongoing, and AWS will continue the process of periodic audits. For the current scope of the SOC 1 report, see the AWS Services in Scope by Compliance Program page (https://aws.amazon.com/compliance/services-in-scope/).

SOC 2
In addition to the SOC 1 report, AWS publishes a Service Organization Controls 2 (SOC 2) Type II report. Similar to the SOC 1 in its evaluation of controls, the SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set forth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles. These principles define leading-practice controls relevant to security, availability, processing integrity, confidentiality, and privacy applicable to service organizations such as AWS. The AWS SOC 2 is an evaluation of the design and operating effectiveness of controls that meet the criteria for the security and availability principles set forth in the AICPA's Trust Services Principles criteria. This report provides additional transparency into AWS security and availability based on a predefined industry standard of leading practices, and further demonstrates AWS' commitment to protecting customer data. The SOC 2 report covers the same services as the SOC 1 report; see the SOC 1 description above for the in-scope services.

SOC 3
AWS publishes a Service Organization Controls 3 (SOC 3) report. The SOC 3 report is a publicly available summary of the AWS SOC 2 report. The report includes the external auditor's opinion of the operation of controls (based on the AICPA's Security Trust Principles included in the SOC 2 report), the assertion from AWS management regarding the effectiveness of controls, and an overview of AWS infrastructure and services. The AWS SOC 3 report includes all AWS data centers worldwide that support in-scope services. It is a useful resource for customers to validate that AWS has obtained external auditor assurance without going through the process of requesting a SOC 2 report. The SOC 3 report covers the same services as the SOC 1 report; see the SOC 1 description above for the in-scope services. View the AWS SOC 3 report here.
Further Reading
For additional information, see the following sources:
• AWS Risk and Compliance Overview
• AWS Answers to Key Compliance Questions
• CSA Consensus Assessments Initiative Questionnaire

Document Revisions
March 2017 – Updated in-scope services
January 2017 – Migrated to new template
January 2016 – First publication
|
General
|
consultant
|
Best Practices
|
AWS_Cloud_Adoption_Framework_Security_Perspective
|
Archived AWS Cloud Adoption Framewo rk Security Perspective June 2016 This paper has been archived For the latest content about the AWS Cloud Adoption Framework see the AWS Cloud Adoption Framework page: https://awsamazoncom/professionalservices/CAFArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 2 of 34 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 3 of 34 Contents Abstract 4 Introduction 4 Security Benefits of AWS 6 Designed for Security 6 Highly Automated 6 Highly Available 7 Highly Accredited 7 Directive Component 8 Considerations 10 Preventive Component 11 Considerations 12 Detective Component 13 Considerations 14 Responsive Component 15 Considerations 16 Taking the Journey – Defining a Strategy 17 Considerations 19 Taking the Journey – Delivering a Program 20 The Core Five 21 Augmenting the Core 22 Example Sprint Series 25 Considerations 27 Taking the Journey – Develop Robust Security Operations 28 Conclusion 29 Appendix A: Tracking Progress Across the AWS CAF Security Perspective 30 ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 4 of 34 Key Security Enablers 30 Security Epics Progress Model 31 CAF Taxonomy and Terms 33 Notes 34 Abstract The Amazon Web Services (AWS) Cloud Adoption Framework1 (CAF) provides guidance for coordinating the different parts of organizations migrating to cloud computing The CAF guidance is broken into a number of areas of focus relevant to implementing cloudbased IT systems These focus areas are called perspectives and each perspective is further separated into components There is a whitepaper for each of the seven CAF perspectives This whitepaper covers the Security Perspective which focuses on incorporating guidance and process for your existing security controls specific to AWS usage in your environment Introduction Security at AWS is job zero All AWS customers benefit from a data center and network architecture built to satisfy the requirements of the most security sensitive organizations AWS and its partners offer hundreds of tools and features to help you meet your security objectives around visibility auditability controllability and agility This means that you can have the security you need but without the capital outlay and with much lower operational overhead Figure 1: AWS CAF Security Perspective ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 5 of 34 than in an onpremises environment The Security Perspective goal is to help you structure your selection and implementation of controls that are right for your organization As Figure 1 illustrates the components of the Security P erspective 
organize the principles that will help drive the transformation of your organization’s security culture For each component this whitepaper discusses specific actions you can take and the means of measuring progress : Directive controls establish the governance risk and compliance models the environment will operate within Preventive controls protect your workloads and mitigate threats and vulnerabilities Detective controls provide full visibility and transparency over the operation of your deployments in AWS Responsive controls drive remediation of potential deviations from your security baselines Security in the cloud is familiar The increase in agility and the ability to perform actions faster at a larger scale and at a lower cost does not invalidate well established principles of information security After covering the four Security Perspective components this whitepaper shows you the steps you can take to on your journey to the cloud to ensure that your environment maintains a strong security footing: Defin e a strategy for security in the cloud When you start your journey look at your organization al business objectives approach to risk management and the level of opportunity presented by the cloud Deliver a security program for development and implementation of security privacy compliance and risk management capabilities The scope can initially appear vast so it is important to create a structure that allows your organization to holistically address security in the cloud Th e implementation should allow for iterative development so that capabilit ies mature as programs develop This allows the security component to be a catalyst to the rest of th e organization’s cloud adoption efforts ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 6 of 34 Develop robust security operations capabilities that continuously mature and improve The security journey continues over time We recommend that you intertwine operational rigor with the building of new capabilities so the constant iteration can bring continuous improvement Security Benefits of AWS Cloud security at AWS is the highest priority As an AWS customer you will benefit from a data center and network architecture built to meet the requiremen ts of the most securitysensitive organizations An advantage of the AWS cloud is that it allows customers to scale and innovate while maintaining a secure environment Customers pay only for the services they use meaning that you can have the security you need but without the upfront expenses and at a lower cost than in an onpremises environment This section discusses some of the security benefits of the AWS platform Designed for Security The AWS Cloud infrastructure is operated in AWS data centers and is designed to satisfy the requirements of our most securitysensitive customers The AWS infrastructure has been designed to provide high availability while putting strong safeguards in place for customer privacy All data is stored in highly secure AWS data centers Network firewalls built into Amazon VPC and web application firewall capabilities in AWS WAF let you create private networks and control access to your instances and applications When you deploy systems in the AWS Cloud AWS helps by sharing the security responsibilities with you AWS engineers the underlying infrastructure using secure design principles and customers can implement their own security architecture for workloads deployed in AWS Highly Automated At AWS we purposebuild security tools and we tailor them for our 
unique environment size and global requirements Building security tools from the ground up allows AWS to automate many of the routine tasks security experts normally spend time on This means AWS security experts can spend more time ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 7 of 34 focusing on measures to increase the security of your AWS Cloud environment Customers also automate security engineering and operations functions using a comprehensive set of APIs and tools Identity management network security and data protection and monitoring capabilities can be fully automated and delivered using popular software development methods you already have in place Customers take an automated approach to responding to security issues When you automate using the AWS services rather than having people monitoring your security position and reacting to an event your system can monitor review and initiate a response Highly Available AWS builds its data centers in multiple geographic Regions Within the Regions multiple Availability Zones exist to provide resiliency AWS designs data centers with excess bandwidth so that if a major disruption occurs there is sufficient capacity to loadbalance traffic and route it to the remaining sites minimizing the impact on our customers Customers also leverage this MultiRegion Multi AZ strategy to build highly resilient applications at a disruptively low cost to easily replicate and back up data and to deploy global security controls consistently across their business Highly Accredited AWS environments are continuously audited with certifications from accreditation bodies across the globe This means that segments of your compliance have already been completed For more information about the security regulations and standards with which AWS complies see the AWS Cloud Compliance2 web page To help you meet specific government industry and company security standards and regulations AWS provides certification reports that describe how the AWS Cloud infrastructure meets the requirements of an extensive list of global security standards You can obtain available compliance reports by contacting your AWS account representative Customers inherit many controls operated by AWS into their own compliance and certification programs lowering the cost to maintain and run security assurance efforts in addition to actually maintaining the controls themselves With a strong foundation in place you are free to optimize the security of your workloads for agility resilience and scale ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 8 of 34 The rest of this whitepaper introduces each of the components of the Security Perspective You can use these components to explore the security goals you need to be successful on your journey to the cloud Directive Component The Directive component of the AWS Security Perspective provides guidance on planning your security approach as you migrate to AWS The key to effective planning is to define the guidance you will provide to the people implementing and operating your security environment The information needs to provide enough direction to determine the controls needed and how they should be operated Initial areas to consider include: Account Governance — Direct the organization to create a process and procedures for managing AWS accounts Areas to define include how account inventories will be collected and maintained which agreements and amendments are in place and what criteria to use for when to 
create an AWS account Develop a process to create accounts in a consistent manner ensuring that all initial settings are appropriate and that clear ownership is established Account Ownership and contact information —Establish an appropriate governance model of AWS accounts used across your organization and plan how contact information is maintained for each account Consider creating AWS accounts tied to email distribution lists rather than to an individual ’s email address This allows a group of people to monitor and respond to information from AWS about your account activity Additionally this provides resilience when internal personnel change and it provides a means of assigning security accountability List your security team as a security point of contact to speed timesensitive communications Control framework —Establish or apply an industry standard control framework and determine if you need modifications or additions in order to incorporate AWS services at expected security levels Perform a compliance mapping exercise to determine how compliance requirements and security controls will reflect AWS service usage Control ownership —Review the AWS Shared Responsibility Model3 information on the AWS website to determine if control ownership ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 9 of 34 modifications should be made Review and update your responsibility assignment matrix (RACI chart) to include ownership of controls operating in the AWS environment Data classification —Review current data classifications and determine how those classifications will be managed in the AWS environment and what controls will be appropriate Change and asset management —Determine how change management asset management are to be performed in AWS Create a means to determine what assets exist what the systems are used for and how the systems will be managed securely This can be integrated with an existing configuration management database (CMDB) Consider creating a practice for naming and tagging that allows identification and management to occur to the securit y level required You can use this approach to define and track the metadata that enables identification and control Data locality —Review criteria for where your data can reside to determine what controls will be needed to manage the configuration and usage of AWS services across Regions AWS customers choose the AWS Region(s) where their content will be hosted This allows customers with specific geographic requirements to establish environments in locations they choose Customers can replicate and back up content in more than one Region but AWS does not move customer content outside of the customer’s chosen Region(s) Least privilege access — Establish an organizational security culture built on the principle of least privilege and strong authentication Implement protocols to protect access to sensitive credential and key material associated with every AWS account Set expectations on how authority will be delegated down through software engineers operations staff and other job functions involved in cloud adoption Security operations playbook and r unbooks —Define your security patterns to create durable guardrails the organization can reference over time Implement the plays through automation as runbooks; document human in theloop interventions as appropriate ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 10 of 34 Considerations Do create a tailored AWS shared responsibility model for your ecosystem 
Do use strong authentication as part of a protection scheme for all actors in your account
Do promote a culture of security ownership for application teams
Do extend your data classification model to include services in AWS
Do integrate developer operations and security team objectives and job functions
Do consider creating a strategy for naming and tracking accounts used to manage services in AWS
Do centralize phone and email distribution lists so that teams can be monitored

Preventive Component
The Preventive component of the AWS Security Perspective provides guidance for implementing security infrastructure with AWS and within your organization. The key to implementing the right set of controls is enabling your security teams to gain the confidence and capability they need to build the automation and deployment skills necessary to protect the enterprise in the agile, scalable environment that is AWS. Use the Directive component to determine the controls and guidance that you will need, and then use the Preventive component to determine how you will operate those controls effectively. AWS regularly provides guidance on best practices for AWS service utilization and workload deployment patterns, which can be used as control implementation references; visit the AWS Security Center blog and the Security Track videos from the most recent AWS Summit and re:Invent conferences. Consider the following areas to determine what changes (if any) you need to make to your current security architectures and practices; a brief sketch of one such control expressed as code follows this list. This will help you with a smooth and planned AWS adoption strategy.

Identity and access — Integrate the use of AWS into the workforce lifecycle of the organization as well as into the sources of authentication and authorization. Create fine-grained policies and roles associated with appropriate users and groups. Create guardrails that permit important changes through automation only, and that prevent unwanted changes or roll them back automatically. These steps reduce human access to production systems and data.

Infrastructure protection — Implement a security baseline, including trust boundaries, system security configuration and maintenance (for example, hardening and patching), and other appropriate policy enforcement points (for example, security groups, AWS WAF, Amazon API Gateway), to meet the needs that you identified using the Directive component.

Data protection — Utilize appropriate safeguards to protect data in transit and at rest. Safeguards include fine-grained access controls to objects; creating and controlling the encryption keys used to encrypt your data; selecting appropriate encryption or tokenization methods; integrity validation; and appropriate retention of data.
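As a concrete illustration of the Infrastructure protection item above, the following minimal boto3 sketch codifies a single policy enforcement point as code: a security group that admits only HTTPS from a corporate network range. The VPC ID and CIDR are hypothetical placeholders; in practice a baseline like this would normally be expressed in a reviewed template (for example, AWS CloudFormation) and would cover far more than one rule.

    import boto3

    VPC_ID = "vpc-0123456789abcdef0"      # hypothetical VPC
    CORPORATE_CIDR = "203.0.113.0/24"     # hypothetical corporate egress range

    ec2 = boto3.client("ec2")

    # Create a narrowly scoped security group as one element of the security baseline.
    group = ec2.create_security_group(
        GroupName="baseline-https-only",
        Description="Allow inbound HTTPS from the corporate network only",
        VpcId=VPC_ID,
    )

    # Permit only TCP/443 from the approved CIDR; everything else remains denied by default.
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": CORPORATE_CIDR, "Description": "corporate egress range"}],
        }],
    )
    print("Created guardrail security group:", group["GroupId"])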
protecting ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 13 of 34 Detective Component The Detective component of the AWS CAF Security Perspective provides guidance for gaining visibility into your organization’s security posture A wealth of data and information can be gathered by using services like AWS CloudTrail servicespecific logs and API/CLI return values Ingesting these information sources into a scalable platform for managing and monitoring logs event management testing and inventory/audit will give you the transparency and operational agility you need to feel confident in the security of your operations Logging and monitoring —AWS provides native logging as well as services that you can leverage to provide greater visibility near to real time for occurrences in the AWS environment You can use these tools to integrate into your existing logging and monitoring solutions Integrate the output of logging and monitoring sources deeply into the workflow of the IT organization for end toend resolution of securityrelated activity Security testing —Test the AWS environment to ensure that defined security standards are met By testing to determine if your systems will respond as expected when certain events occur you will be better prepared for actual events Examples of security testing include vulnerability scanning penetration testing and error injection to prove standards are being met The goal is to determine if your control will respond as expected Asset inventory —Knowing what workloads you have deployed and operational will allow you to monitor and ensure that the environment is operating at the security governance levels expected and demanded by the security standards Change detection —Relying on a secure baseline of preventive controls also requires knowing when these controls change Implement measures to determine drift between secure configuration and current state ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 14 of 34 Considerations Do determine what logging information for your AWS environment you want to capture monitor and analyze Do determine how your existing security operations center (SOC) business capability will integrate AWS security monitoring and management into existing practices Do continually conduct vulnerability scans and penetration tests in accordance with AWS procedures for doing so ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 15 of 34 Responsive Component The Responsive component of the AWS CAF Security Perspective provides guidance for the responsive portion of your organization’s security posture By incorporating your AWS environment into your existing security posture and then preparing and simulating actions that require response you will be better prepared to respond to incidents as they occur With automated incident response and recovery and the ability to mitigate portions of disaster recovery it is possible to shift the primary focus of the security team from response to performing forensics and root cause analysis Some things to consider as part of adapting your security posture include the following: Incident response —During an incident containing the event and returning to a known good state are important elements of a response plan For instance automating aspects of those functions using AWS Config rules and AWS Lambda responder scripts gives you the ability to scale your response at Internet speeds Review current incident response processes and determine if and 
Responsive Component
The Responsive component of the AWS CAF Security Perspective provides guidance for the responsive portion of your organization's security posture. By incorporating your AWS environment into your existing security posture, and then preparing for and simulating actions that require response, you will be better prepared to respond to incidents as they occur. With automated incident response and recovery, and the ability to mitigate portions of disaster recovery, it is possible to shift the primary focus of the security team from response to performing forensics and root-cause analysis. Some things to consider as part of adapting your security posture include the following:

Incident response — During an incident, containing the event and returning to a known good state are important elements of a response plan. For instance, automating aspects of those functions using AWS Config rules and AWS Lambda responder scripts gives you the ability to scale your response at Internet speeds. Review current incident response processes and determine if and how automated response and recovery will become operational and managed for AWS assets. The security operations center's functions should be tightly integrated with the AWS APIs to be as responsive as possible; this provides the security monitoring and management function for AWS Cloud adoption.

Security incident response simulations — By simulating events, you can validate that the controls and processes you have put in place react as expected. Using this approach, you can determine whether you are effectively able to recover from and respond to incidents when they occur.

Forensics — In most cases, your existing forensics tools will work in the AWS environment. Forensic teams will benefit from the automated deployment of tools across Regions and the ability to collect large volumes of data quickly and with low friction, using the same robust, scalable services their business-critical applications are built on, such as Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), Amazon Kinesis, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Redshift, and Amazon Elastic Compute Cloud (EC2).

Considerations
Do update your incident response processes to recognize the AWS environment
Do leverage services in AWS to forensically ready your deployments through automation and feature selection
Do automate response for robustness and scale
Do use services in AWS for data collection and analysis in support of an investigation
Do validate your incident response capability through simulations of security incident responses

Taking the Journey – Defining a Strategy
Review your current security strategy to determine whether portions of the strategy would benefit from change as part of a cloud adoption initiative. Map your AWS cloud adoption strategy against the level of risk your business is willing to accept, your approach to meeting regulatory and compliance objectives, and your definitions of what needs to be protected and how it will be protected. Table 1 provides an example of a security strategy that articulates a set of principles, which are then mapped to specific initiatives and work streams.

Infrastructure as code – Skill up the security team in code and automation; move to DevSecOps
Design guardrails, not gates – Architecture drives toward good behavior
Use the cloud to protect the cloud – Build, operate, and manage security tools in the cloud
Stay current; run secure – Consume new security features; patch and replace frequently
Reduce reliance on persistent access – Establish a role catalog; automate KMI via a secrets service
Total visibility – Aggregate AWS logs and metadata with OS and app logs
Deep insights – Implement a security data warehouse with BI and analytics
Scalable incident response (IR) – Update IR and forensics standard operating procedures (SOPs) for the shared responsibility framework
Self-healing – Automate correction and restoration to a known good state

Table 1: Example Security Strategy (Principle – Example Actions)

As your strategy evolves, you will want to begin iterating on your third-party assurance frameworks and organizational security requirements, and incorporating them into a risk management framework that will guide your journey to AWS. It is often an effective practice to evolve your compliance mapping as you gain a better understanding of the needs of your workloads in the
cloud and the security capabilities provided by AWS Another key element of your strategy is mapping out the shared responsibility model specific to your ecosystem In addition to the macro relationship you share with AWS you’ll want to explore internal organizational shared responsibilities as well as those you impart upon your partners Companies can break their shared responsibility model into three major areas: a control framework; a responsible accountable consulted informed model (RACI); and a risk register The control framework describes how the security aspects of the business are expected to work and what controls will be put in place to manage risk You can use the RACI to identify and assign a person with responsibility for controls in the framework Finally use a risk register to capture controls without proper ownership Prioritize residual risks that have been identified aligning their treatment with new work streams and initiatives put in place to resolve them As you map these shared responsibilities you can expect to find new opportunities to automate operations and improve workflow between critical actors in your security compliance and risk management community Figure 2 shows an example extended shared responsibility model Figure 2: Example Shared Responsibility Model ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 19 of 34 Considerations Do create a tailored strategy that addresses your organization al approach to implementing security in the cloud Do promote automation as an underlying theme for all your strategy Do clearly articulate your approach to cloud first Do promote agility and flexibility by defining guardrails Do take strategy as a short exercise that defines your organization’s approach to information security in the cloud Do iterate quickly while laying down what the strategy is Your aim is to have a set of guiding principles that will drive the core of the effort forward – strategy is not the end in itself Move quickly and be willing to adapt and evolve Do define strategic principles which will impart the culture you want in security and which inform the design decisions you’ll make rather than a strategy which impl ies specific solutions ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 20 of 34 Taking the Journey – Delivering a Program With a strategy in place it is now time to put it into practice and initiate the implementation that will transform your security organization and secure the cloud journey Whil e you have a wide choice of options and features your implementation should not be not a protracted effort This process of designing and implementing how different capabilities will work together represents an opportunity to quickly gain familiarity and learn how to iterate your designs to best meet your requirements Learn from actual implementation early then adapt and evolve using small changes as you learn To help you with your implementation you can use the CAF Security Epics (See Figure 3) The Security Epics consist of groups of user stories (use cases and abuse cases) that you can work on during sprints Each of these epics has multiple iterations addressing increasingly complex requirements and layering in robustness Although we advise the use of agile the epics can also be treated as general work streams or topics that help in prioritizing and structuring delivery using any other framework A proposed structure consists of the following 10 security epics (Figure 4 ) to guide your implementation Figure 
3: AWS CAF Security Epics ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 21 of 34 The Core Five The following five epics are the core control and capability categories that you should consider early on because they are fundamental to getting your journey started IAM —AWS Identity and Access Management (IAM) forms the backbone of your AWS deployment In the cloud you must establish an account and be granted privileges before you can provision or orchestrate resources Typical automation stories may include entitlement mapping/grants/audit secret material management enforcing separation of duties and least privilege access just intime privilege management and reducing reliance on long term credentials Logging and monitoring —AWS services provide a wealth of logging data to help you monitor your interactions with the platform The performance of AWS services based upon your configuration choices and the ability to ingest OS and application logs to create a common frame of reference Typical automation stories may include log aggregation thresholds/alarming/alerting enrichment search platform visualization stakeholder access and workflow and ticketing to initiate closedloop organizational response Infrastructure security —When you treat infrastructure as code security infrastructure becomes a first tier workload that must also be deployed as code This approach will afford you the opportunity to programmatically configure AWS services and deploy security infrastructure from AWS Figure 4: AWS Ten Security Epics ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 22 of 34 Marketplace partners or solutions of your own design Typical automation stories may include creating custom templates to configure AWS services to meet your requirements implementing security architecture patterns and security operations plays as code crafting custom security solutions from AWS services using patch management strategies like blue/green deployments reducing exposed attack surface and validating the efficacy of deployments Data protection —Safeguarding important data is a critical piece of building and operating information systems and AWS provides services and features giving you robust options to protect your data throughout its lifecycle Typical automation stories may include making workload placement decisions implementing a tagging schema constructing mechanisms to protect data in motion such as VPN and TLS/SSL connections (including AWS Certificate Manager) constructing mechanisms to protect data at rest through encryption at appropriate tiers in your infrastructure using AWS Key Management Service (AWS KMS) implementation/integration deploying AWS CloudHSM creating tokenization schemes and implementing and operating of AWS Marketplace Partner solutions Incident response —Automating aspects of your incident management process improves reliability and increases the speed of your response and often creates and environment easier to assess in afteraction reviews Typical automation stories may include using AWS Lambda function “ responders ” that react to specific changes in the environment orchestrating auto scaling events isolating suspect system components deploying just intime investigative tools and creating workflow and ticketing to terminate and learn from a closed loop organizational response Augmenting the Core These five epics represent the themes that will drive continued operational excellence through availability automation and audit You’ll want to 
judiciously integrate these epics into each sprint When additional focus is required you may consider treating them as their own epics Resilience —High availability continuity of operations robustness and resilience and disaster recovery are often reasons for cloud deployments with AWS Typical automation stories may include using Multi AZ and Multi Region deployments changing the available attack surface scaling and ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 23 of 34 shifting allocation of resources to absorb attacks safeguarding exposed resources and deliberately inducing resource failure to validate continuity of system operations Compliance validation —Incorporating compliance end toend into your security program prevents compliance from being reduced to a checkbox exercise or an overlay that occurs post deployment This epic provides the platform that consolidates and rationalizes the compliance artifacts generated through the other epics Typical automation stories may include creating security unit tests mapped to compliance requirements designing services and workloads to support compliance evidence collection creating compliance notification and visualization pipelines from evidentiary features monitoring continuous ly and creating compliancetoolingoriented DevSecOps teams Secure CI/CD (DevSecOps) —Having confidence in your software supply chain through the use of trusted and validated continuous integration and continuous deployment tool chains is a targeted way to mature security operations practices as you migrate to the cloud Typical automation stories may include hardening and patching the tool chain least privilege access to the tool chain logging and monitoring of the production process security integration/deployment visualization and code integrity checking Configuration and vulnerability analysis —Configuration and vulnerability analysis gain big benefit from the scale agility and automation afforded by AWS Typical automation stories may include enabling AWS Config and creating customer AWS Config Rules using Amazon CloudWatch Events and AWS Lambda to react to change detection implementing Amazon Inspector selecting and deploying continuous monitoring solutions from the AWS Marketplace deploying triggered scans and embedding assessment tools into the CI/CD tool chains Security big data and predictive analytics —Security operations benefit from big data services and solutions just like any other aspect of the business Leveraging big data gives you deeper insights in a more timely fashion thus enhancing your agility and ability to iterate on your security posture at scale Typical automation stories may include creating security data lakes developing analytics pipelines creating visualization to drive security decision making and establishing feedback mechanisms for autonomic response ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 24 of 34 After this structure is defined an implementation plan can be crafted Capabilities change over time and opportunities for improvement will be continually identified As a reminder the themes or capability categories above can be treated as epics in an agile methodology which contain a range of user stories including both use cases and abuse cases Multiple sprints will lead to increased maturity while retain ing flexibility to adapt to business pace and demand ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 25 of 34 Example Sprint Series Consider organizing a 
sample set of six two-week sprints (a group of epics driven over a twelve-week calendar quarter), including a short prep period, in the following way. Your approach will depend on resource availability, priority, and the level of maturity desired in each capability as you move toward your minimally viable production capability (MVP).

Sprint 0 —Security cartography: compliance mapping, policy mapping, initial threat model review, establish risk registry; build a backlog of use and abuse cases; plan the security epics
Sprint 1 —IAM; logging and monitoring
Sprint 2 —IAM; logging and monitoring; infrastructure protection
Sprint 3 —IAM; logging and monitoring; infrastructure protection
Sprint 4 —IAM; logging and monitoring; infrastructure protection; data protection
Sprint 5 —Data protection; automating security operations; incident response planning/tooling; resilience
Sprint 6 —Automating security operations; incident response; resilience

A key element of compliance validation is incorporating the validation into each sprint through security and compliance unit test cases, and then undergoing the promotion-to-production process. When explicit compliance validation capability is required, sprints can be established to focus specifically on those user stories. Over time, iteration can be leveraged to achieve continuous validation and implementation of auto-correction of deviation where appropriate.

The overall approach aims to clearly define what an MVP or baseline is, which will then map to the first sprint in each area. In the initial stages the end goal can be less defined, but a clear roadmap of initial sprints is created. Timing, experience, and iteration will allow refining and adjusting the end state to be just right for your organization. In reality the final state may continuously shift, but ultimately the process does lead to continuous improvement at a faster pace. This approach can be more effective and have greater cost efficiency than a big bang approach based on long timelines and high capital outlays.

Diving a little deeper, the first sprint for IAM can consist of defining the account structure and implementing the core set of best practices. A second sprint can implement federation. A third sprint can expand account management to cater for multiple accounts, and so on. IAM user stories that may span one or more of these initial sprints could include stories such as the following:

"As an access administrator, I want to create an initial set of users for managing privileged access and federation identity provider trust relationships."
"As an access administrator, I want to map users in my existing corporate directory to functional roles or sets of access entitlements on the AWS platform."
"As an access administrator, I want to enforce multi-factor authentication on all interaction with the AWS console by interactive users."

In this example, the following logging and monitoring user stories may span one or more initial sprints:

"As a security operations analyst, I want to receive platform-level logging for all AWS Regions and AWS Accounts."
"As a security operations analyst, I want all platform-level logs delivered to one shared location from all AWS Regions and accounts."
"As a security operations analyst, I want to receive alerts for any operation that attaches IAM policies to users, groups, or roles."

You can build capability in parallel or serial fashion and maintain flexibility by including security capability user stories in the overall
product backlog You can also split the user stories out into a securityfocused DevOps team These are decisions you can periodically revisit allowing you to tailor your delivery to the needs of the organization over time ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 27 of 34 Considerations Do review your existing control framework to determine how AWS services will be operated to meet your required security standards Do define actors and then storyboard their experience interacting with AWS services Do define what the first sprint is and what the initial highlevel longer term goal will be Do establish a minimal ly viable security baseline and continually iterate to raise the bar for the workloads and data you’re prot ecting ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 28 of 34 Taking the Journey – Develop Robust Security Operations In an environment where infrastructure is code security must also be treated as code The Security Operations component provides a means to communicate and operationalize the fundamental tenets of security as code: Use the cloud to protect the cloud Security infrastructure should be cloudaware Expose security features as services using the API Automate everything so that your security and compliance can scale To make this governance model practical lines of business often organize as DevOps teams to build and deploy infrastructure and business software You can extend the core tenets of the governance model by integrating security into your DevOps culture or practice; which is sometimes called DevSecOps Build a team around the following principles: The security team embraces DevOps cultures and behaviors Developers contribute openly to code used to automate security operations The security operations team is empowered to participate in testing and automation of application code The team takes pride in how fast and frequently they deploy Deploying more frequently with smaller changes reduces operational risk and shows rapid progress against the security strategy Integrated development security and operations teams have three shared key missions Harden the continuous integration/ continuous deployment tool chain Enable and promote the development of resilient software as it traverses the tool chain Deploy all security infrastructure and software through the tool chain ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 29 of 34 Determining the changes (if any) to current security practices will help you plan a smooth AWS adoption strategy Conclusion As you embark on your AWS adoption journey you will want to update your security posture to include the AWS portion of your environment This Security Perspective whitepaper prescriptively guides you on an approach for taking advantage of the benefits that operating on AWS has for your security posture Much more security information is available on the AWS website where security features are described in detail and more detailed prescriptive guidance is provided for common implementations There is also a comprehensive list of security focused content4 that should be reviewed by various members of your security team as you prepare for AWS adoption initiatives ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 30 of 34 Appendix A: Tracking Progress Across t he AWS CAF Security Perspective You can use the key security enablers and the security epics progress model discussed in this appendix to measure the progress and the 
maturity of your implementation of the AWS CAF Security Perspective The enablers and the progress model can be used for project planning purposes to evaluate the robustness of implementations or simply as a means to drive conversation about the road ahead Key Security Enablers Key security enablers are milestones that help you stay on track We use a scoring model that consists of three values: Unaddressed Engaged and Completed Cloud Security Strategy [Unaddressed Engaged Completed] Stakeholder Communication Plan [Unaddressed Engaged Completed] Security Cartography [Unaddressed Engaged Completed] Document Shared Responsibility Model [Unaddressed Engaged Completed] Security Operations Playbook & Runbooks [Unaddressed Engaged Completed] Security Epics Plan [Unaddressed Engaged Completed] Security Incident Response Simulation [Unaddressed Engaged Completed] ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 31 of 34 Security Epics Progress Model The security epics progress model helps you evaluate your progress in implementing the 10 Security Epics described in this paper We use a scoring model of 0 (zero) through 3 to measure robustness We provided examples for the Identity and Access Management and the Logging and Monitoring epics so you could see how this progression works Core 5 Security Epics 0 Not addressed 1 Addressed in architecture and plans 2 Minimal viable implementation 3 Enterprise ready production implementation ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 32 of 34 Security Epic 0 1 2 3 Identity and Access Management Example: No relationship between on premise s and AWS identities Example: An approach is defined for workforce lifecycle identity management IAM architecture is documented Job functions are mapped to IAM policy needs Example: Implemented IAM as defined in architecture IAM policies implemented that map to some job functions IAM implementation validated Example: Automation of IAM lifecycle workflow s Logging and Monitoring Example: No utilization of AWS provided logging and monitoring solutions Example: An approach is defined for log aggregation monitoring and integration into security event management processes Example: Platform level and service level logging is enab led and centralized Example: Events with security implications are deeply integrated into security workflow and incident management processes and systems Infrastructure Security Data Protection Incident Management ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 33 of 34 Augmenting the Core 5 0 Not addressed 1 Addressed in architecture and plans 2 Minimal viable implementation 3 Enterprise ready production implementation Security Epic 0 1 2 3 Resilience DevSecOps Compliance Validation Configuration & Vulnerability Management Security Big Data CAF Taxonomy and Terms The Cloud Adoption Framework (CAF) is the framework AWS created to capture guidance and best practices from previous customer engagements An AWS CAF perspective represents an area of focus relevant to implementing cloudbased IT systems in organizations For example the Security Perspective provides guidance and process for evaluating and enhancing your existing security controls as you move to the AWS environment ArchivedAmazon Web Services – AWS CAF Security Perspective June 2016 Page 34 of 34 Each CAF Perspective is made up of components and activities A component is a subarea of a perspective that represents a specific aspect that needs attention This whitepaper 
explores the components of the Security Perspective. An activity provides more prescriptive guidance for creating actionable plans that the organization can use to move to the cloud and to operate cloud-based solutions on an ongoing basis. For example, Directive is one component of the Security Perspective, and tailoring an AWS shared responsibility model for your ecosystem may be an activity within that component. When combined, the Cloud Adoption Framework (CAF) and the Cloud Adoption Methodology (CAM) can be used as guidance during your journey to the AWS Cloud.

Notes

1 https://d0.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf
2 https://aws.amazon.com/compliance/
3 https://aws.amazon.com/compliance/shared-responsibility-model/
4 https://aws.amazon.com/security/security-resources/
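To make the epics above more concrete, the following is a minimal sketch (not part of the original Security Perspective) of the security and compliance unit tests that the compliance validation epic and the sprint guidance call for. It assumes Python 3 with boto3 and read-only credentials for CloudTrail and IAM; the two checks mirror the logging and monitoring and IAM user stories quoted earlier, and nothing about their structure is prescribed by the paper.

```python
"""Minimal security and compliance unit tests, runnable standalone or under pytest.

Assumptions (not from the whitepaper): Python 3, boto3 installed, and read-only
credentials for CloudTrail and IAM in the account under test.
"""
import boto3


def test_platform_logging_is_multi_region():
    """Maps to: 'receive platform level logging for all AWS Regions and AWS Accounts'."""
    trails = boto3.client("cloudtrail").describe_trails()["trailList"]
    # At least one trail should capture API activity from every Region.
    assert any(t.get("IsMultiRegionTrail") for t in trails), "no multi-Region trail found"


def test_console_users_have_mfa():
    """Maps to: 'enforce multi-factor authentication on all interaction with the console'.

    Simplified: flags every IAM user without an MFA device; a fuller version
    would first check whether the user actually has a console login profile.
    """
    iam = boto3.client("iam")
    offenders = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            if not iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]:
                offenders.append(user["UserName"])
    assert not offenders, f"users without MFA: {offenders}"


if __name__ == "__main__":
    test_platform_logging_is_multi_region()
    test_console_users_have_mfa()
    print("compliance unit tests passed")
```

Run as a plain script or under a test runner, checks like these can gate the promotion-to-production step in each sprint and feed evidence into the compliance notification and visualization pipelines described above.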
|
General
|
consultant
|
Best Practices
|
AWS_Cloud_Transformation_Maturity_Model
|
ArchivedAWS Cloud Transformation Maturity Model September 2017 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or l icensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Project Stage 3 Challenges and Barriers 4 Transformation Activities 5 Outcomes and Maturity 7 Foundation Stage 8 Challenges and Barriers 8 Transformation Activities 9 Outcomes and Maturity 10 Migration Stage 11 Challenges and Barriers 11 Transformation Activities 12 Outcomes and Maturity 14 Optimization Stage 15 Challenges and Barriers 15 Transformation Activities 16 Outcomes and Maturity 17 Conclusion 18 Contributors 18 Document Revisions 19 Archived Abstract The AWS Cloud Transformation Maturity Model (CTMM) maps the maturity of an IT organization’s process people and technology capabilities as they move through the four stages of the journey to the AWS Cloud : project foundation migration and optimization The objective of the CTMM is to help enterprise IT organizations understand the significant challenges they might face as they adopt AWS learn best practice s and activities to handle those challenges and recognize the signs of maturity or expected outcomes to gauge their maturity and readiness at every stage This whitepaper guide s organizations to measur e their readiness for the AWS Cloud build an effective cloud transformation strategy and drive a n effective execution plan ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 1 Introduction The Amazon Web Services ( AWS) Cloud Transformation Maturity Model (CTMM) is a tool enterprise customers can use to assess the maturity of their cloud adoption through four key stages : project foundation migration and optimization Each stage brings an organization ’s people processes and technologies closer to realizing its vision of ITasaService (ITaaS) To fully benefit from the AWS C loud the whole organization has to transform and adopt the cloud —not just the IT division Figure 1 shows the key AWS CTMM activities and when they occur during the four stages of cloud transformation Figure 1 : AWS Cloud T ransform ation Maturi ty Model – stages milestones and timeline The four stages of cloud transformation are described in detail in this paper Table 1 provides a mat urity matrix of the challenges key transformation activities and outcomes at each stage of the AWS CTMM ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 2 Table 1: AWS Cloud T ransformation Maturity Matrix Maturity Stage Customer Challenges Transformation Activities Outcomes/Milestones of Maturity Project Limited knowledge of AWS services Raise level 
of AWS awareness via education and training Organization knowledge and support Limited executive support for new IT investment Seek case studies of proven return on investment ( ROI) and participate in AWS executive briefings Executive support and appropriate funding Unable to purchase required services Use current services or create new contract Educate procurement and legal staff about new purchasing paradigms when procur ing cloud services and tools1 Ability to purchase all required services Limited confidence in cloud service capabilities Execute one or more pilot/POC project s Increased confidence and fewer concerns No clear ownership or direction Conduct a Kickoff and Discovery Workshop IT ownership with clear strategy and direction Foundation Assigning the required resources to effectively drive the transformation Conduct a People Model Workshop and establish a CCoE Dedicated resources to define policies architecture Lack of a detailed organizational transformation plan Conduct a Governance Model Workshop and a Migration Jumpstart Detailed plan for all aspects of the transformation (People Process and Technology ) Limited knowledge of security and compliance paradigms and requirements in the cloud Conduct an AWS Security Risk and Compliance Workshop Best practice security policies architecture and procedures Cost and budget management requirements and concerns Conduct an AWS Cost Model Workshop Detailed TCO for proposed operating environment Migration Developing an effective and efficient migration strategy Conduct an Application Portfolio Assessment Jumpstart A migration strategy with a clear line of sight from current to target state environment ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 3 Maturity Stage Customer Challenges Transformation Activities Outcomes/Milestones of Maturity Implementing an effective and efficient migration process Select and implement best migration environment A cost efficient and effective application migration process Managing environment efficiently and effectively Select and implement best management environment A cost efficient and effective portfolio management with robust governance and security Migrating all targeted applications ( AllIn ) successfully Migrate workloads using AWS/Partner implementation tools and services Allin – organization achieving significant benefits Optimization Optimizing cost management Leverage AWS tools and features to continuous ly improv e operational costs (eg consol idated billing Reserved Instances discounts ) Focused and robust processes in place to continuous ly seek ways to optimize costs Optimizing service management Utilize latest AWS tools to continuously improve service management methods/processes Fully optimized service management and increased customer satisfaction Optimizing application management services Utilize AWS best practices and tools (eg DevOps CI/CD) to continuously improve application management methods/tools Rigorous emphasis on optimized application management services Optimizing enterprise services Continuously seek ways to aggregate and improve shared services Optimized enterprise services and customer satisfaction Project Stage The project stage begins the transformation journey for your organization Organizations in this stage usually have limited knowledge of c loud services and their potential costs and benefits and typically they don’t have a centralized cloud adoption strategy Getting through this initial stage is crucial to the ultimate success for your 
organization’s journey to the cloud T he outcomes realized and lessons learned ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 4 here lay the strong foundation for broader cloud adoption at all organizational levels Challenges and Barriers Your organization needs to overcome t he following key challenges and barriers during this stage of the transformation : • Limited knowledge and training – IT s taff and their internal customers are accustomed to the older model and related process of acquiring and consuming IT Significant investment in training is required for IT staff and other business units to adopt the cloud model • Executive support and fund ing – IT leaders have traditionally framed IT infrastructure investments as a necessary evil to gain funding approval for signi ficant infrastructure upgrades As a result e xecutives are often skeptical and resistant to any new funding In addition executives constantly hear complaints from IT customers ( that is the other business units ) about rising costs poor service delivery and fail ed or failing project implementations • Purchasing public cloud services – IT leaders face the challenge of establishing new contracts or leveraging existing contracts with specific terms and conditions to purchase cloud services A significant obstacle can be the lack of awareness among the procurement and legal staff about purchasing paradigms for cloud services In addition IT leaders have to ensure that new contracts meet the competitive bidding laws of their jurisdiction which can be a long and complex process • Limited confidence in cloud service models – Cloud service infrastructure provisioning and management operation models are significantly different from the traditional on premise s operating model Your IT group might require hands on experience before it is ready to support the transformation effort If your IT group resists change or isn’t enthusiastic about changing to the cloud model your transformation initiative could be significantly undermine d • IT ownership and direction – IT leaders have many leadership challenges including shadow IT where other business units set up their own IT operations IT leaders have to gain control of central IT ownership ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 5 and communicat e a clear transformation roadmap to all organization stakeholders Transformation Activities To overcome the challenges and barriers in the project stage and mature to the foundation stage your organization must complete the following transformation activities : • Contact an AWS account manager – An AWS a ccount manager is a key resource and a single point of contact who can connect you with AWS Partners and professional services to address all of your AWS needs To get in touch with an AWS account manager go to Contact Us 2 • Raise the level of AWS awareness – There are many AWS events3 and education and training resources for your organization’s stakeholders including: o AWS Business Essentials – This training helps your IT business leaders and professionals understand the benefits of cloud computing from the strategic business value perspective For more information see the AWS Business Essentials website4 o Online videos and hands on labs – AWS offers a series of free ondemand instructional videos and labs to help you learn about AWS in minutes5 In addition qwikL ABS provide hands on practice with popular AWS Cloud services and real world scenarios 6 To learn more about AWS services 
and features from AWS engineers and solution s architects and to hear customer perspectives visit the AWS YouTube Channel 7 o AWS Technical Essentials – This training provides an overview of AWS services and solutions to your technical users to give them the information they need to make informed decisions about the IT solutions for your organization For more information see the AWS Technical Essentials website8 o AWS whitepapers – The comprehensive online collection of AWS Whitepapers cover s a broad range of technical topics including best practices for solving business problems architectures security compliance and cloud economics9 ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 6 o AWS trainings – AWS offers an array of instructor led technical trainings to help your teams develop the skills to design deploy and operate infrastructure and applications i n the AWS C loud Please visit AWS Training and Certification for more information10 Table 2 : AWS rec ommended educational resources for roles in your organization Role Resources IT leadership team AWS Business Essentials Online Videos and Labs AWS Whitepapers IT staff AWS Business Essentials Online Videos and Labs AWS T echnical Essentials AWS W hitepapers AWS Training and Certification IT customers AWS Business Essentials Online Videos and Labs AWS Whit epapers • Secure executive support and funding – AWS offers cost and value modeling workshops to provide you with estimated costs and strategic value so you can perform a costbenefit analysis as a basis for securing executive support and funding In addition numerous case studies 11 and whitepapers demonstrate proven cost savings and agility benefits for customers of all sizes in virtually every market segment • Consider purchasing o ptions – You can buy AWS Cloud services12 the following ways: o Direct purchase from AWS – Start using AWS services within minutes by opening an account online in accordance with the AWS Terms and Conditions o Indirect p urchase from an AWS Partner – Acquire AWS via Partner contract vehicles to serve the needs of federal state and l ocal governments as well as the education sector F or more information see the AWS whitepaper Ten Considerations for a Cloud Procurement13 the contracts web page AWS Public Sector Contract Center14 or send an email to aws wwps contract mgmt@amazoncom ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 7 • Execute a pilot or proof ofconcept (POC) project – Most customers leverage one or more pilot or POC projects to test AWS implementation on representative workloads AWS supports such initiatives by providing accelerator service s such as an AWS Migration Jumpstart to provide the end toend knowledge transfer of an actual workload migration In addition for customers working with an AWS Partner the AWS POC Program is another avenue to get funding for POC projects executed via eligible AWS Partners F or more information see the Partner Funding webpage 15 • Conduct an IT Transformation Workshop – This workshop enable s rapid cloud adoption by showing you how to replace uncertainty with a vision and strategy on how to derive value from AWS The workshop is an interactive educational experience where you can clearly identify business drivers objectives and blockers This helps you build a cloud adoption roadmap to guide you through the next steps in your journey to the cloud Outcomes and Maturity Use t he following key outcomes to measure your organization’s maturity and readiness to 
proceed to the foundation stage : • Effective use of AWS resources – The AWS account manager works with your organization to coordinate the appropriate AWS professional services onsite presentations and meetings onsite training web service accounts and support • Knowledgeable and trained o rganization – Your IT leadership team is familiar with AWS its costs and benefits and transformation best practices Key IT staff members have some hands on experience with AWS services and IT customers have basic knowledge of AWS features and capabilities • Executive support and funding – Your IT leadership team has presented a sound business case for funding the cloud transformation initiative to your organization’s executive leadership This business case typically includes a cost benefit analysis customer reference examples and risk management assessments ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 8 • Ability to p urchase AWS and AWS profe ssional services – Your IT team has work ed with the AWS account manager to identify an existing contract vehicle via an AWS Partner or to put a new contract in place 16 • IT staff confidence and true buyin – The POC was executed successfully and addressed the concerns of your key IT staff who se complete support is crucial to effectively transform the organization • Central IT ownership and a clear transformation roadmap – Centralized ownership of the cloud initiative has emerged and all of your stakeholders participated in an IT Transformation Workshop The IT leaders have a clear vision and a transformation roadmap has been communicated to key stakeholders across the organization The roadmap provides direction on establishing preliminary AWS governance policies that mitigate the risks of business units moving ahead Foundation Stage The foundation stage is characterized by the customer’s intent to move forward with migration to AWS with executive spo nsorship some experience with AWS services and partially trained staff During this stage the customer’s environment is assessed all contractual agreements are in place and a plan is created for the migration The migration plan details the business case in scope workloads approach to migration resources required and the timeframe Challenges and Barriers Your organization must overcome t he following key challenges and barriers during this stage: • Assigning transformation support resources – Effective execution in this stage requires a significant amount of time from key IT staff who are knowledgeable and trusted to provide input into decisions concerning architecture security and governance This can be challenging because IT organizations are constantly inundated with competing priori ties related to managing the current environment This situation is further compounded by the limited number of key infrastructure security and service management staff • Providing leadership through a transformation plan – IT leaders are challenged with the daunting task of developing a transformation plan ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 9 that addresses all aspects of organization al change including business governance architecture service delivery operations roles and responsibili ties and training • Integrating s ecurity and compliance p olicies – IT organizations are challenged with integrating AWS into their existing security and control framework that supports their current IT environment They are also challenged with configurin g AWS to be in compliance with 
regulatory requirements • Managing c ost and budget – IT organizations are challenged to develop a budget aligned with the OpEx model of utility computing measurable benefit goals and an effective cost management process Transformation Activities We recommend t he following transformation activities to achieve the necessary outcomes before moving to the migration stage: • Establish a Cloud Center of Excellence (CCo E) – AWS recommends strong governance practices using a CCoE We recommend that you staff the CCoE gradually with a dedicated team that has the following core responsibilities: o Defining central policies and strategy o Providing support and knowledge transfer to business units using hybrid cloud solutions o Creating and provisioning AWS accounts for workload/program owners o Providing a central point of access control and security standards o Creating and managing common use case architectures (blueprints) The use of a CCoE lowers the implementation and migration risk across the organization and serves as a conduit for sharing the best practices for a broader impact of cloud transformation throughout the organization • Develop security and compliance architecture – AWS Prof essional Services helps your organization achieve risk management and compliance goals Prescriptive guidance enables you to adopt rigorous ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 10 methods for implementing security and compliance processes for systems and personnel • Develop a value management plan – Developing a robust value management model is a key activity that includes tactical benefits ( cost management prioritization of IT spending and a system of allocating costs ) and strategic value from the cloud (agility time to market ITaaS innova tion) When you have a plan you can focus on and prioritize initiatives (see Figure 2) For example with AWS you can view specific IT operating costs and system performance data AWS also enables allocati on to specific business groups or specific applicat ions in near real time Figure 2 : Strategic and t actical values of AWS adoption identified Outcomes and Maturity Use t he following key outcomes to measure your organization ’s readiness to move to the migration stage : • CCoE for Cloud Governance – The central CCOE provides the following benefits: o Standardization of s trategy and v ision – Centralization allows a single point of cloud strategy that is aligned with the larger business requirements of the wider organization o Centralized expertise – A central cloud team can be trained quickly in specialized cloud technologies while individual business areas are still getting up to speed ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 11 o Standardization of t echnical processes and procedures – A central team owns the responsibility for standard processe s procedures and blueprints which can include the use of automation and other methods to simplify and standardize deployments by application owners o Bias for a ction – A central cloud team has a vested interest in making sure that the cloud computing model is successful whereas decentralized business units might be less effective if they don’t realize a direct benefit • Clear transformation roadmap – A transformation roadmap establishes a plan identifies resourc es and provides details about migration activities The roadmap is used to define the ordering and dependencies of your initiatives to achieve t he goals set by the CCo E steering c ommittee or 
program management • Best practice security and compliance architecture – A highly scalable best practice architecture design is created that supports all policy and regulatory compliance requirements • Strong value management plan – A value management plan determines and describe s how you quantify value and identifies the areas where the project team s should focus Migration Stage The migration stage is where your organization matures overall with governance technical and operational foundation in place to effectively and efficiently migrate targeted application s Dur ing this stage the building blocks of the migration and operational tools are implemented and the mass migration of inscope workloads is completed Significant risks exist at this stage such as project delays budget overruns and application failures If the appropr iate migration strategies tools and methods are not implemented there is also a risk that customer confidence and support will diminish Challenges and Barriers Your organization must overcome t he following key challenges and barriers during this stage: ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 12 • Developing an e ffective and efficient m igration strategy – Your organization is challenged to implement a strategy that mini mizes the risk of project failures and maximizes ROI Many ambitious IT projects fail because they are based on inappropriate strategies and plans It’s critical to classify sequenc e and have an appropriate migration disposition for your targeted application workloads to ensure the success of the overall implementation pl an • Implementing a robust migration process – Your organization is challenged to implement a migration execution process that minimizes cost and is repeatable and sustainable The selection and implementation of proven migration tools and methods is a ke y factor in your organization ’s ability to minimize the risks associated with migrating targeted application workloads • Setting up a c loud environment – Your organization is challenged to implement a cloud environment that is controlled sustainable reliable and enables improved agility This challenge includes leveraging existing tools and processes as well as developing new tools and processes • Going allin – Your organization is challenged to implement process es that enable the effe ctive and efficient migration of all application workloads onto AWS on time and within budget Like all projects the risk is that technical failures unsustainable processes and performance failures could create significant project delays and unplanned costs Transformation Activities We recommend t he following transformation activities to achieve the outcomes in this stage and mature to the optimization stage : • Conduct a portfolio assessment – Your organization must go through a portfolio rationalization exercise to determine which applications to migrate r eplace or in some cases eliminate Figure 3 illustrates decision points to consider in determining the strategy for moving each application to the AWS Cloud focusing on the 6 Rs : retire retain rehost replatform repurchase and refactor ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 13 Figure 3 : Application migration dispositions and paths identified from migration strategy Table 3 describes the transformation impact of the 6 Rs in the order of their execution complexity Table 3: Cloud m igration strategies and corresponding levels of complexity for execution Migration Pattern 
Transformation Impact Complexity Refactoring Rearchitecting and recoding require investment in new capabilities delivery of complex programs and projects and potentially significant business disruption Optimization for the cloud should be realized High Replatforming Amortization of transformation costs is maximized over larger migrations Opportunities to address significant infrastructure upgrades can be realized This has a positive impact on compliance regulatory and obsolescence drivers Opportunities to optimize in the cloud should be realized High Repurchasing A replacement through either procurement or upgrade Disposal commissioning and decommissioning costs may be significant Medium Rehosting Typically referred to as lift and shift or forklifting Automated and scripted migrations are highly effective Medium Retiring Decommission and archive data as necessary Low Retaining This is the do nothing option Legacy costs remain and obsolescence costs typically increase over time Low ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 14 • Implement a m igration environment – In addition to the migration strategy your organization must develop a migration process for each application workload These processes include application migration tools data migration tools validation methods and roles and responsibilities In addition to other criteria such as business criticality and architecture each application is classified by migration method and process For example Figure 3 shows how you can migrate applications using AWS VM Import /Export or third party migration tools or by manually moving the code and data • Implement a best management environment – Your organization must develop and implement an effective cloud governance and operating model that addresses your organization’s nee d from the standpoint of access security compliance and automation • Migrate targeted workloads – AWS recommends using the principles of agile methodology to effectively execute and manage the migration of workloads from end to end This requires that y our organization plan schedule and execute migrations in repeatable sprints incorporating lessons learned after every sprint Each migration sprint should go through an appropriate acceptance test and change control process Outcomes and Maturity Use t he following key outcomes to measure your organization’s maturity in this stage and assess the organization’s readiness to progress to the optimization stage : • Allin with AWS – This means that the organization has declared that AWS is its primary cloud host for both legacy and new applications T his is a strategic long term direction from executive leadership to stop managing data centers and migrat e all targeted application workloads to AWS • IT as a Service (ITaaS) – Your organization is realizing the core benefits of cloud adoption : measurable cost savings agility and innovation Your organization is now effectively prov iding IaaS based services as a part of an ITaaS delivery organization ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 15 Optimization Stage The optimization stage is the fourth stage in the transformation maturity model To reach this stage your organization has successfully migrated all targeted application workloads ( that is it is allin on AWS) and is efficiently managing the AWS environment and service delivery process Thi s phase is an ongoing loop not a destination The objective of this phase is to optimize existing process es by lowering costs 
improving service and extending AWS value deeper into your organization The focus on continuous service improvement enables you to realize the true value of utility computing where you constantly seek optimiz ation and addition of newer AWS services to drive cost and performance efficiencies Challenges and Barriers Your organization must overcome t he following key challenges and barriers during this phase of the transformation journey: • Optimize costs – Reducing and optimizing costs are not new challenges to the IT world With AWS your organization can finally realize those benefits AWS and third party providers frequently re lease new features and services including various discounting/consumption based models that you can evaluate for efficacy within your organization For example by evaluating application and database licensing fees that are often overlooked your organization can realize significant costreduction opportunities available with a cloud based payasyougo model • Optimize operation services – Your organization will be challenged to continuously improve the service delivery model for provisioning change control and managing the environment AWS and third party providers frequently release new features (eg automation templates) and services that you can investigate to improve automation and repeatability of tasks • Optimize application services – Your organization will be challenged to continuously improve application services that you use to build and enhance applications AWS and third party providers frequently release new features and services that your organization can evaluate to further optimiz e application services ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 16 • Optimize enterprise services – O rganization s are constantly challenged to seek Software asaService ( SaaS )based offerings as opposed to hosted solutions to continuously improve enterprise application services AWS and third party providers innovat e at a rapid pace adding services and features (eg managed databases virtual desktop email and document management) that can simplify your enterprise services Transformation Activities Your organization should complete t he following transformation activities to achieve the outcomes that your organization needs to continuously maximize maturity and value: • Implement a continuous cost optimization process – Either the designated resources on a CCo E or a group of centralized staff from IT Finance must be trained to support an ongoing process using AWS or third party cost management tools to assess costs and optimize savings • Implement a continuous operation management optimization process – Your organization should evaluate ongoing advancements in AWS services as well as thirdparty tools to pursue continuous improvement to operation management and service delivery process es • Implement a continuous applicati on service optimization process – Your organization should evaluate ongoing advancements in AWS services and features including thirdparty offerings to seek continuous improvement to the application service process Your organization might not use the AWS fully managed a pplication service solutions to migrat e existing application s but these services provide significant value in new application development AWS a pplication service offerings include the following : o Amazon API Gateway – A fully managed ser vice that makes it easy for developers to create publish maintain monitor and secure APIs at any scale o Amazon AppStream 20 
– E nables you to stream your existing Windows applications from the cloud reaching more users on more devices without code modifications ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 17 o Amazon Elasticsearch Service (Amazon ES) – This fully managed service makes it easy to deploy operate and scale Amazon ES for log analytics full text search application monitoring and more o Amazon Elastic Transcoder – M edia transcoding in the cloud This service is designed to be a highly scalable easy touse and cost effective way for developers and businesses to convert (that is transcode) media files from their source format into formats required by consumer playback devices such as smartphones tablets and PCs • Implement a continuous enterprise s ervice optimization process – AWS continually innovat es and launch es additional enterprise applications that your organization should consider implementing to achieve ease ofuse and enterprise grade security without the burden of managing maintenance overhead For example AWS enterprise services applications include: o Amazon WorkSpaces – A managed desktop cloud computing service o Amazon WorkDocs – A fully managed secure enterprise s torage and sharing service with strong administrative controls and feedback capabilities that improve user productivity o Amazon WorkMail – A secure managed business email and calendar service with support for existing desktop and mobile email clients Outcomes and Maturity Use t he following transformation outcomes to measure your organization’s maturity as optimized and continuously maximizing maturity and value: • Optimized cost savings – Your organization has an ongoing process and a team focused on continually review ing AWS usage across your organization and identify ing cost reduction opportunities • Optimized operations management process – Your organization has an ongoing process in place to routinely review AWS and third party management tools to identify ways to improve the efficiency and effectiveness of the current operation management process ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 18 • Optimized application development process – Your organization has an ongoing process in place to evaluate AWS and third party management tools to identify ways to improve the efficiency and effectiveness of the application architecture and development process • Optimized enterprise services – Your organization has an ongoing process in place to regularly review AWS and third party management enterprise s ervice offerings to improve the delivery security and management of services offered throughout the organization Conclusion Every customer’s cloud journey is unique However the challenges corresponding actions and outcomes achieved are similar The AWS Cloud Transformation Maturity Model provide s you with a way to identify and anticipate the challenges early become familiar with the mitigation strategies based on AWS best practices and guidance and successfully drive value from cloud transforma tion AWS and its thousands of partners have leveraged this model to accelerate customer adoption of AWS Cloud services by compressing the time through each stage of their cloud transformation Even in situations where customers pursue certain activities in parallel across multiple stages or are at varying levels of maturity in different parts of the organization due to their size and IT organizational structure the guidance provided in th is paper can help you significantly 
reduce the risk and uncertainty in your organization ’s cloud transformation initiative Contributors The following individuals and organizations contributed to this document: • Blake Chism Global Practice Development AWS Public Sector • Sanjay Asnani Partner Strategy Consultant AWS Public Sector • Brian Anderson Practice Manager SLG AWS Public Sector ArchivedAmazon Web Services – AWS Cloud Transformation Maturity Model Page 19 Document Revisions Date Description September 2017 Updated content September 2016 First publication 1 https://d0awsstaticcom/whitepapers/10 considerations fora cloud procurementpdf 2 https://awsamazoncom/contact us/ 3 https://awsamazoncom/about aws/events/ 4 https://awsamazoncom/training/course descriptions/business essentials/ 5 https://awsamazoncom/training/intro_series/ 6 https://qwiklabscom/ 7 https://wwwyoutubecom/user/AmazonWebServices 8 https://awsamazoncom/training/course descriptions/essentials/ 9 https://awsamazoncom/whitepapers/ 10 https://awsamazoncom/training/ 11 https://awsamazoncom/solutions/case studies/ 12 https://awsamazoncom/how tobuy/ 13 https://d0awsstaticcom/whitepapers/10 considerations fora cloud procurementpdf 14 https://awsamazoncom/contract center/ 15 https://awsamazoncom/partners/fundingbenefits/ 16 https://awsamazoncom/contract center/ Notes
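As a hedged illustration of the continuous cost optimization process described in the optimization stage (the CTMM itself does not prescribe tooling), the sketch below uses the AWS Cost Explorer API through boto3 to report the previous month's unblended cost by service, the kind of recurring report a CCoE or central IT finance team might automate. The account, time window, and output format are assumptions, not part of the model.

```python
"""Sketch of a recurring cost report for a continuous cost optimization process.

Assumptions (not from the whitepaper): Python 3, boto3, Cost Explorer enabled in
the paying account, and credentials allowed to call ce:GetCostAndUsage.
"""
import datetime

import boto3


def last_month_cost_by_service() -> dict:
    """Return {service_name: unblended_cost_in_usd} for the previous calendar month."""
    first_of_this_month = datetime.date.today().replace(day=1)
    start = (first_of_this_month - datetime.timedelta(days=1)).replace(day=1)

    ce = boto3.client("ce")  # AWS Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": first_of_this_month.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            costs[group["Keys"][0]] = float(group["Metrics"]["UnblendedCost"]["Amount"])
    return costs


if __name__ == "__main__":
    # Largest spend first, so reduction opportunities surface at the top.
    for service, cost in sorted(last_month_cost_by_service().items(), key=lambda kv: -kv[1]):
        print(f"{service:<55} ${cost:>12,.2f}")
```

Output like this can seed the value management plan's cost allocation reporting and point the team at the services worth a Reserved Instance, consolidated billing, or rightsizing review.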
|
General
|
consultant
|
Best Practices
|
AWS_Database_Migration_Service_Best_Practices
|
AWS Database Migration Service Best Practices August 2016 This paper has been archived For the latest technical content about this subject see the AWS Whitepapers & Guides page: http://awsamazoncom/whitepapers ArchivedAmazon Web Services – AWS Database Migration Service Best Practices August 2016 Page 2 of 17 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – AWS Database Migration Service Best Practices August 2016 Page 3 of 17 Contents Abstract 4 Introduction 4 Provisioning a Replication Server 6 Instance Class 6 Storage 6 Multi AZ 7 Source Endpoint 7 Target Endpoint 7 Task 8 Migration Type 8 Start Task on Create 8 Target Table Prep Mode 8 LOB Controls 9 Enable Logging 10 Monitoring Your Tasks 10 Host Metrics 10 Replication Task Metrics 10 Table Metrics 10 Performance Expectations 11 Increasing Performance 11 Load Multiple Tables in Parallel 11 Remove Bottlenecks on the Target 11 Use Multiple Tasks 11 Improving LOB Performance 12 Optimizing Change Processing 12 Reducing Load on Your Source System 12 Frequently Asked Questions 13 What are the main reasons for performing a database migration? 13 ArchivedAmazon Web Services – AWS Database Migration Service Best Practices August 2016 Page 4 of 17 What steps does a typical migration project include? 13 How Much Load Will the Migration Process Add to My Source Database? 14 How Long Does a Typical Database Migration Take? 14 I’m Changing Engin es–How Can I Migrate My Complete Schema? 14 Why Doesn’t AWS DMS Migrate My Entire Schema? 14 Who Can Help Me with My Database Migration Project? 15 What Are the Main Reasons to Switch Database Engines? 15 How Can I Migrate from Unsupported Database Engine Versions? 15 When Should I NOT Use DMS? 16 When Should I Use a Native Replication Mechanism Instead of the DMS and the AWS Schema Conversion Tool? 16 What Is the Maximum Size of Database That DMS Can Handle? 16 What if I Want to Migrate from Classic to VPC? 
17 Conclusion 17 Contributors 17 Abstract Today as many companies move database workloads to Amazon Web Services (AWS) they are often also interested in changing their primary database engine Most current methods for migrating databases to the cloud or switching engines require an extended outage The AWS Database Migration Service helps organizations to migrate database workloads to AWS or change database engines while minimizing any associated downtime This paper outlines best practices for using AWS DMS Introduction AWS Database Migration Service allows you to migrate data from a source database to a target database During a migration the service tracks changes being made on the source database so that they can be applied to the target database to eventually keep the two databases in sync Although the source and target databases can be of the same engine type they don’t need to be The possible types of migrations are: 1 Homogenous migrations (migrations between the same engine types) 2 Heterogeneous migrations (migrations between different engine types) ArchivedAmazon Web Services – AWS Database Migration Service Best Practices August 2016 Page 5 of 17 At a high level when using AWS DMS a user provisions a replication server defines source and target endpoints and creates a task to migrate data between the source and target databases A typical task consists of three major phases: the full load the application of cached changes and ongoing replication During the full load data is loaded from tables on the source database to tables on the target database eight tables at a time (the default) While the full load is in progress changes made to the tables that are being loaded are cached on the replication server ; these are the cached changes It’s important to know that the capturing of changes for a given table doesn’t begin until the full load for that table starts ; in other words the start of change capture for each individual table will be different After the full load for a given table is complete you can begin to apply the cached changes for that table immediately When ALL tables are loaded you begin to collect changes as transactions for the ongoing replication phase After all cached changes are applied your tables are consistent transactionally and you move to the ongoing replication phase applying changes as transactions Upon initial entry into the ongoing replication phase there will be a backlog of transactions causing some lag between the source and target databases After working through this backlog the system will eventually reach a steady state At this point when you’re ready you can: Shut down your applicatio ns Allow any remaining transactions to be applied to the target Restart your applications pointing at the new target database AWS DMS will create the target schema objects that are needed to perform the migration However AWS DMS takes a minimalist approach and creates only those objects required to efficiently migrate the data In other words AWS DMS will create tables primary keys and in some cases unique indexes It will not create secondary indexes nonprimary key constraints data defaults or other objects that are not required to efficiently migrate the data from the source system In most cases when performing a migration you will also want to migrate most or all of the source schema If you are performing a homogeneous migration you can accomplish this by using your engine’s native tools to perform a no data export/import of the schema If your migration is heterogeneous 
Note: Any inter-table dependencies, such as foreign key constraints, must be disabled during the full load and cached change application phases of AWS DMS processing. Also, if performance is an issue, it is beneficial to remove or disable secondary indexes during the migration process.

Provisioning a Replication Server
AWS DMS is a managed service that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance. The service connects to the source database, reads the source data, formats the data for consumption by the target database, and loads the data into the target database. Most of this processing happens in memory; however, large transactions may require some buffering on disk. Cached transactions and log files are also written to disk. The following sections describe what you should consider when selecting your replication server.

Instance Class
Some of the smaller instance classes are sufficient for testing the service or for small migrations. If your migration involves a large number of tables, or if you intend to run multiple concurrent replication tasks, you should consider using one of the larger instances, because the service consumes a fair amount of memory and CPU.

Note: T2 instance types are designed to provide moderate baseline performance and the capability to burst to significantly higher performance as required by your workload. They are intended for workloads that don't use the full CPU often or consistently, but that occasionally need to burst. T2 instances are well suited for general-purpose workloads such as web servers, developer environments, and small databases. If you're troubleshooting a slow migration and using a T2 instance type, look at the CPU utilization host metric to see whether you're bursting over the baseline for that instance type.

Storage
Depending on the instance class, your replication server comes with either 50 GB or 100 GB of data storage. This storage is used for log files and any cached changes that are collected during the load. If your source system is busy or takes large transactions, or if you're running multiple tasks on the replication server, you might need to increase this amount of storage. However, the default amount is usually sufficient.

Note: All storage volumes in AWS DMS are GP2, or General Purpose SSDs. GP2 volumes come with a base performance of three I/O operations per second (IOPS) per GB, with the ability to burst up to 3,000 IOPS on a credit basis. As a rule of thumb, check the ReadIOPS and WriteIOPS metrics for the replication instance and make sure the sum of these values does not cross the base performance for that volume.

Multi-AZ
Selecting a Multi-AZ instance can protect your migration from storage failures. Most migrations are transient and not intended to run for long periods of time. If you're using AWS DMS for ongoing replication purposes, selecting a Multi-AZ instance can improve your availability should a storage issue occur.
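A minimal Boto3 sketch of provisioning such a replication instance follows; the identifier, instance class, and storage size are assumptions to adapt to your own workload rather than recommendations from this paper.

```python
import boto3

dms = boto3.client("dms")

# Larger instance class and extra GP2 storage for a migration with many tables
# or several concurrent tasks; Multi-AZ guards against storage failures.
response = dms.create_replication_instance(
    ReplicationInstanceIdentifier="dms-best-practices-demo",
    ReplicationInstanceClass="dms.c4.xlarge",  # placeholder; size to your workload
    AllocatedStorage=200,                      # GB; the 50-100 GB default is often enough
    MultiAZ=True,
    PubliclyAccessible=False,
)
print(response["ReplicationInstance"]["ReplicationInstanceStatus"])
```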
Source Endpoint
The change capture process used when replicating ongoing changes collects changes from the database logs by using the database engine's native API; no client-side install is required. Each engine has specific configuration requirements for exposing this change stream to a given user account (for details, see the AWS DMS documentation for your source engine). Most engines require some additional configuration to make the change data consumable in a meaningful way, without data loss, for the capture process. (For example, Oracle requires the addition of supplemental logging, and MySQL requires row-level binary logging.)

Note: When capturing changes from an Amazon Relational Database Service (Amazon RDS) source, ensure that backups are enabled and that the source is configured to retain change logs for a sufficiently long time (usually 24 hours).

Target Endpoint
Whenever possible, AWS DMS attempts to create the target schema for you, including underlying tables and primary keys. However, sometimes this isn't possible. For example, when the target is Oracle, AWS DMS doesn't create the target schema, for security reasons. In MySQL, you have the option, through extra connection parameters, to have AWS DMS migrate objects to the specified database or to have AWS DMS create each database for you as it finds the database on the source.

Note: For the purposes of this paper, in Oracle a user and a schema are synonymous. In MySQL, schema is synonymous with database. Both SQL Server and PostgreSQL have a concept of both database and schema. In this paper, we're referring to the schema.
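As an illustration of defining endpoints, the following Boto3 sketch creates a source and a target endpoint for an example Oracle-to-MySQL migration. The host names, credentials, and the extra connection attribute are placeholders and assumptions; check the AWS DMS documentation for the attributes supported by your target engine.

```python
import boto3

dms = boto3.client("dms")

# Source: Oracle, with supplemental logging already enabled on the database.
source = dms.create_endpoint(
    EndpointIdentifier="oracle-source",
    EndpointType="source",
    EngineName="oracle",
    ServerName="source.example.com",
    Port=1521,
    DatabaseName="ORCL",
    Username="dms_user",
    Password="example-password",
)

# Target: MySQL. The extra connection attribute shown is an assumed example of
# the kind of parameter discussed above; verify the name against the DMS docs.
target = dms.create_endpoint(
    EndpointIdentifier="mysql-target",
    EndpointType="target",
    EngineName="mysql",
    ServerName="target.example.com",
    Port=3306,
    Username="dms_user",
    Password="example-password",
    ExtraConnectionAttributes="targetDbType=SPECIFIC_DATABASE",
)

print(source["Endpoint"]["EndpointArn"], target["Endpoint"]["EndpointArn"])
```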
Task
The following section highlights common and important options to consider when creating a task.

Migration Type
• Migrate existing data. If you can afford an outage long enough to copy your existing data, this is a good option to choose. This option simply migrates the data from your source system to your target, creating tables as needed.
• Migrate existing data and replicate ongoing changes. This option performs a full data load while capturing changes on the source. After the full load is complete, captured changes are applied to the target. Eventually, the application of changes reaches a steady state. At that point, you can shut down your applications, let the remaining changes flow through to the target, and restart your applications to point at the target.
• Replicate data changes only. In some situations, it may be more efficient to copy the existing data by using a method outside of AWS DMS. For example, in a homogeneous migration, using native export/import tools can be more efficient at loading the bulk data. In this case, you can use AWS DMS to replicate changes from the point in time at which you started your bulk load, to bring and keep your source and target systems in sync. When replicating data changes only, you need to specify a time from which AWS DMS will begin to read changes from the database change logs. It's important to keep these logs available on the server long enough to ensure that AWS DMS has access to these changes. This is typically achieved by keeping the logs available for 24 hours (or longer) during the migration process.

Start Task on Create
By default, AWS DMS starts your task as soon as you create it. In some situations, it's helpful to postpone the start of the task. For example, using the AWS Command Line Interface (AWS CLI), you may have a process that creates a task and a different process that starts the task based on some triggering event.

Target Table Prep Mode
Target table prep mode tells AWS DMS what to do with tables that already exist. If a table that is a member of a migration doesn't yet exist on the target, AWS DMS creates the table. By default, AWS DMS drops and recreates any existing tables on the target in preparation for a full load or a reload. If you're pre-creating your schema, set your target table prep mode to truncate, causing AWS DMS to truncate existing tables prior to load or reload. When the table prep mode is set to do nothing, any data that exists in the target tables is left as is. This can be useful when consolidating data from multiple systems into a single table using multiple tasks.

AWS DMS performs these steps when it creates a target table:
1. The source database column data type is converted into an intermediate AWS DMS data type.
2. The AWS DMS data type is converted into the target data type.
This data type conversion is performed for both heterogeneous and homogeneous migrations. In a homogeneous migration, the conversion may lead to target data types that don't match the source data types exactly. For example, in some situations it's necessary to triple the size of varchar columns to account for multi-byte characters. We recommend going through the AWS DMS documentation on source and target data types to see whether all the data types you use are supported. If the resulting data types aren't to your liking when you're using AWS DMS to create your objects, you can pre-create those objects on the target database. If you do pre-create some or all of your target objects, be sure to choose the truncate or do nothing option for target table preparation mode.

LOB Controls
Due to their unknown and sometimes large size, large objects (LOBs) require more processing and resources than standard objects. To help with tuning migrations of systems that contain LOBs, AWS DMS offers the following options:
• Don't include LOB columns. When this option is selected, tables that include LOB columns are migrated in full; however, any columns containing LOBs are omitted.
• Full LOB mode. When you select full LOB mode, AWS DMS assumes no information regarding the size of the LOB data. LOBs are migrated in full, in successive pieces whose size is determined by the LOB chunk size. Changing the LOB chunk size affects the memory consumption of AWS DMS; a large LOB chunk size requires more memory and processing. Memory is consumed per LOB, per row. If you have a table containing three LOBs and are moving data 1,000 rows at a time, an LOB chunk size of 32 KB will require 3 * 32 * 1,000 = 96,000 KB of memory for processing. Ideally, the LOB chunk size should be set to allow AWS DMS to retrieve the majority of LOBs in as few chunks as possible. For example, if 90 percent of your LOBs are less than 32 KB, then setting the LOB chunk size to 32 KB would be reasonable, assuming you have the memory to accommodate the setting.
• Limited LOB mode. When limited LOB mode is selected, any LOBs that are larger than the max LOB size are truncated to the max LOB size, and a warning is issued to the log file. Using limited LOB mode is almost always more efficient and faster than full LOB mode. You can usually query your data dictionary to determine the size of the largest LOB in a table, then set the max LOB size to something slightly larger than this (don't forget to account for multi-byte characters). If you have a table in which most LOBs are small, with a few large outliers, it may be a good idea to move the large LOBs into their own table and use two tasks to consolidate the tables on the target.

LOB columns are transferred only if the source table has a primary key or a unique index. Transfer of data containing LOBs is a two-step process:
1. The containing row on the target is created without the LOB data.
2. The table is updated with the LOB data.
The process was designed this way to accommodate the methods source database engines use to manage LOBs and changes to LOB data.
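As a sketch of how these LOB choices surface in the task settings JSON, the snippet below enables limited LOB mode with a 32 KB maximum. The key names reflect the DMS task settings documentation as we understand it, the document is shown only partially for brevity, and the task ARN is a placeholder; verify the settings and the need to stop the task before applying them.

```python
import json
import boto3

dms = boto3.client("dms")

# Limited LOB mode: LOBs larger than LobMaxSize (KB) are truncated with a warning.
lob_settings = {
    "TargetMetadata": {
        "SupportLobs": True,
        "FullLobMode": False,       # set True (with a LobChunkSize) for full LOB mode
        "LimitedSizeLobMode": True,
        "LobMaxSize": 32,           # KB; size this slightly above your largest expected LOB
    }
}

# The task generally must be stopped before its settings can be modified.
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps(lob_settings),
)
```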
Enable Logging
It's always a good idea to enable logging, because many informational and warning messages are written to the logs. However, be advised that you'll incur a small charge, as the logs are made accessible by using Amazon CloudWatch. Find appropriate entries in the logs by looking for lines that start with the following:
• Lines starting with "E:" – Errors
• Lines starting with "W:" – Warnings
• Lines starting with "I:" – Informational messages
You can use grep (on UNIX-based text editors) or search (for Windows-based text editors) to find exactly what you're looking for in a large task log.

Monitoring Your Tasks
There are several options for monitoring your tasks using the AWS DMS console.
• Host metrics. You can find host metrics on your replication instance's monitoring tab. Here you can monitor whether your replication instance is sized appropriately.
• Replication task metrics. Metrics for replication tasks, including incoming and committed changes and latency between the replication host and the source/target databases, can be found on the task monitoring tab for each particular task.
• Table metrics. Individual table metrics can be found under the table statistics tab for each individual task. These metrics include the number of rows loaded during the full load; the number of inserts, updates, and deletes since the task started; and the number of DDL operations since the task started.

Performance Expectations
A number of factors affect the performance of your migration: resource availability on the source, available network throughput, resource capacity of the replication server, ability of the target to ingest changes, type and distribution of source data, number of objects to be migrated, and so on. In our tests, we have been able to migrate a terabyte of data in approximately 12–13 hours under "ideal" conditions. Our tests were performed using source databases running on Amazon EC2 and in Amazon RDS, with target databases in Amazon RDS. Our source databases contained a representative amount of relatively evenly distributed data, with a few large tables containing up to 250 GB of data.

Increasing Performance
The performance of your migration will be limited by one or more bottlenecks you encounter along the way. The following are a few things you can do to increase performance.

Load Multiple Tables in Parallel
By default, AWS DMS loads eight tables at a time. You may see some performance improvement by increasing this slightly when you're using a very large replication server; however, at some point increasing this parallelism will reduce performance. If your replication server is smaller, you should reduce this number.

Remove Bottlenecks on the Target
During the migration, try to remove any processes that would compete for write resources on your target database. This includes disabling unnecessary triggers, validation, secondary indexes, and so on. When migrating to an Amazon RDS database, it's a good idea to disable backups and Multi-AZ on the target until you're ready to cut over. Similarly, when migrating to non-RDS systems, disabling any logging on the target until cutover is usually a good idea.
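For an RDS target, disabling automated backups and Multi-AZ until cutover can be scripted. The following is a minimal Boto3 sketch; the instance identifier is a placeholder, and the assumption is that you re-enable both settings before going live.

```python
import boto3

rds = boto3.client("rds")

# Temporarily reduce write overhead on the target RDS instance during the bulk
# load; re-enable automated backups and Multi-AZ as part of the cutover plan.
rds.modify_db_instance(
    DBInstanceIdentifier="target-db-instance",  # placeholder
    BackupRetentionPeriod=0,                    # 0 disables automated backups
    MultiAZ=False,
    ApplyImmediately=True,
)
```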
Use Multiple Tasks
Sometimes using multiple tasks for a single migration can improve performance. If you have sets of tables that don't participate in common transactions, it may be possible to divide your migration into multiple tasks.

Note: Transactional consistency is maintained within a task, so it's important that tables in separate tasks don't participate in common transactions. Additionally, each task independently reads the transaction stream, so be careful not to put too much stress on the source system.

For very large systems, or systems with many LOBs, you may also consider using multiple replication servers, each containing one or more tasks. A review of the host statistics of your replication server can help you determine whether this might be a good option.

Improving LOB Performance
Pay attention to the LOB parameters. Whenever possible, use limited LOB mode. If you have a table that consists of a few large LOBs and mostly smaller LOBs, consider breaking it up, prior to the migration, into a table that contains the large LOBs and a table that contains the small LOBs. You can then use a task in limited LOB mode to migrate the table containing small LOBs and a task in full LOB mode to migrate the table containing large LOBs.

Important: In LOB processing, LOBs are migrated using a two-step process: first the containing row is created without the LOB, and then the row is updated with the LOB data. Therefore, even if the LOB column is NOT NULLABLE on the source, it must be nullable on the target during the migration.

Optimizing Change Processing
By default, AWS DMS processes changes in a transactional mode, which preserves transactional integrity. If you can afford temporary lapses in transactional integrity, you can turn on batch optimized apply. Batch optimized apply groups transactions and applies them in batches for efficiency.

Note: Using batch optimized apply will almost certainly violate referential integrity constraints. Therefore, you should disable them during the migration process and enable them as part of the cutover process.

Reducing Load on Your Source System
During a migration, AWS DMS performs a full table scan of each source table being processed (usually in parallel). Additionally, each task periodically queries the source for change information. To perform change processing, you may be required to increase the amount of data written to your database's change log. If you find you are overburdening your source database, you can reduce the number of tasks, or the number of tables per task, in your migration. If you prefer not to add load to your source, consider performing the migration from a read copy of your source system.

Note: Using a read copy will increase the replication lag.
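The tuning knobs mentioned above, the number of tables loaded in parallel and batch optimized apply, live in the task settings JSON. The snippet below is a sketch under the assumption that these setting names match the current task settings documentation; verify them, and remember that batch apply trades transactional integrity for throughput.

```python
import json
import boto3

dms = boto3.client("dms")

tuning = {
    # Number of tables loaded in parallel during the full load (the default is 8).
    "FullLoadSettings": {"MaxFullLoadSubTasks": 16},
    # Batch optimized apply groups changes into batches; disable referential
    # integrity constraints on the target until cutover when using it.
    "TargetMetadata": {"BatchApplyEnabled": True},
}

# The task generally must be stopped before its settings can be modified.
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps(tuning),
)
```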
Frequently Asked Questions

What Are the Main Reasons for Performing a Database Migration?
Would you like to move your database from a commercial engine to an open source alternative? Perhaps you want to move your on-premises database into the AWS Cloud. Would you like to divide your database into functional pieces? Maybe you'd like to move some of your data from Amazon RDS into Amazon Redshift. These and other similar scenarios can all be considered "database migrations."

What Steps Does a Typical Migration Project Include?
This, of course, depends on the reason for and type of migration you choose to perform. At a minimum, you'll want to do the following.

Perform an Assessment
In an assessment, you determine the basic framework of your migration and discover things in your environment that you'll need to change to make the migration successful. The following are some questions to ask:
• Which objects do I want to migrate?
• Are my data types compatible with those covered by AWS DMS?
• Does my source system have the necessary capacity, and is it configured to support a migration?
• What is my target, and how should I configure it to get the required or desired capacity?

Prototype Your Migration Configuration
This is typically an iterative process. It's a good idea to use a small test migration consisting of a couple of tables to verify that you have things properly configured. Once you've verified your configuration, test the migration with any objects you suspect could be difficult. These can include LOB objects, character set conversions, complex data types, and so on. When you've worked out any kinks related to complexity, test your largest tables to see what sort of throughput you can achieve for them.

Design Your Migration
Concurrently with the prototyping stage, you should determine exactly how you intend to migrate your application. The steps can vary dramatically depending on the type of migration you're performing.

Test Your End-to-End Migration
After you have completed your prototyping, it's a good idea to test a complete migration. Are all objects accounted for? Does the migration fit within expected time limits? Are there any errors or warnings in the log files that are a concern?

Perform Your Migration
After you're satisfied that you have a comprehensive migration plan and have tested your migration end to end, it's time to perform your migration.

How Much Load Will the Migration Process Add to My Source Database?
This is a complex question with no specific answer; the load on a source database depends on several things. During a migration, AWS DMS performs a full table scan of the source table for each table processed in parallel. Additionally, each task periodically queries the source for change information. To perform change processing, you may be required to increase the amount of data written to your database's change log. If your tasks contain a change data capture (CDC) component, the size, location, and retention of log files can have an impact on the load.

How Long Does a Typical Database Migration Take?
The following items determine the length of your migration: total amount of data being migrated, amount and size of LOB data, size of the largest tables, total number of objects being migrated, secondary indexes created on the target before the migration, resources available on the source system, resources available on the target system, resources available on the replication server, network throughput, and so on. Clearly, there is no one formula that will predict how long your migration will take. The best way to gauge how long your particular migration will take is to test it.
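For the prototyping and throughput-testing steps described above, one way to keep a test task small is to restrict its table mappings to a couple of representative tables. The schema and table names below are placeholder assumptions used only for illustration.

```python
import json

# Selection rules limiting a test task to two representative tables:
# one ordinary table and one LOB-heavy table.
prototype_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "orders-only",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "rule-action": "include",
        },
        {
            "rule-type": "selection",
            "rule-id": "2",
            "rule-name": "lob-heavy-table",
            "object-locator": {"schema-name": "SALES", "table-name": "DOCUMENTS"},
            "rule-action": "include",
        },
    ]
}

# Pass json.dumps(prototype_mappings) as TableMappings when creating the test task.
print(json.dumps(prototype_mappings, indent=2))
```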
I'm Changing Engines – How Can I Migrate My Complete Schema?
As previously stated, AWS DMS creates only those objects needed to perform an optimized migration of your data. You can use the free AWS Schema Conversion Tool (AWS SCT) to convert an entire schema from one database engine to another. The AWS SCT can be used with AWS DMS to facilitate the migration of your entire system.

Why Doesn't AWS DMS Migrate My Entire Schema?
All database engines supported by AWS DMS have native tools that you can use to export and import your schema in a homogeneous environment. Amazon has developed the AWS SCT to facilitate the migration of your schema in a heterogeneous environment. AWS DMS is intended to be used with one of these methods to perform a complete migration of your database.

Who Can Help Me with My Database Migration Project?
Most of Amazon's customers should be able to complete a database migration project by themselves. However, if your project is challenging or you are short on resources, one of our migration partners should be able to help you out. For details, please visit https://aws.amazon.com/partners/.

What Are the Main Reasons to Switch Database Engines?
There are two main reasons we see people switching engines:
• Modernization. The customer wants to use a modern framework or platform for their application portfolio, and these platforms are available only on more modern SQL or NoSQL database engines.
• License fees. The customer wants to migrate to an open source engine to reduce license fees.

How Can I Migrate from Unsupported Database Engine Versions?
Amazon has tried to make AWS DMS compatible with as many database versions as possible. However, some database versions don't support the features required by AWS DMS, especially with respect to change capture and apply. Currently, to fully migrate from an unsupported database engine version, you must first upgrade your database to a supported version. Alternatively, you may be able to perform a complete migration from an "unsupported" version if you don't need the change capture and apply capabilities of AWS DMS. If you are performing a homogeneous migration, one of the following methods might work for you:
• MySQL: Importing and Exporting Data From a MySQL DB Instance
• Oracle: Importing Data Into Oracle on Amazon RDS
• SQL Server: Importing and Exporting SQL Server Databases
• PostgreSQL: Importing Data into PostgreSQL on Amazon RDS

When Should I NOT Use DMS?
Most databases offer a native method for migrating between servers or platforms. Sometimes using a simple backup and restore, or export/import, is the most efficient way to migrate your data into AWS. If you're considering a homogeneous migration, you should first assess whether a suitable native option exists. In some situations, you might choose to use the native tools to perform the bulk load and use AWS DMS to capture and apply the changes that occur during the bulk load. For example, when migrating between different flavors of MySQL or Amazon Aurora, creating and promoting a read replica is most likely your best option (see Importing and Exporting Data From a MySQL DB Instance).
When Should I Use a Native Replication Mechanism Instead of DMS and the AWS Schema Conversion Tool?
This is closely related to the previous question. If you can set up a replica of your primary database in your target environment more easily by using native tools than you can with AWS DMS, you should consider using that native method to migrate your system. Some examples include:
• Read replicas – MySQL
• Standby databases – Oracle, PostgreSQL
• AlwaysOn availability groups – SQL Server
Note: AlwaysOn is not supported in Amazon RDS.

What Is the Maximum Size of Database That DMS Can Handle?
This depends on your environment, the distribution of data, and how busy your source system is. The best way to determine whether your particular system is a candidate for AWS DMS is to test it out. Start slowly to get the configuration worked out, add some complex objects, and finally attempt a full load as a test. As a ballpark maximum figure: under mostly ideal conditions (EC2 to RDS, cross-Region), over the course of a weekend (approximately 33 hours) we were able to migrate five terabytes of relatively evenly distributed data, including four large (250 GB) tables, a huge (1 TB) table, 1,000 small to moderately sized tables, three tables containing LOBs varying between 25 GB and 75 GB, and 10,000 very small tables.

What if I Want to Migrate from EC2-Classic to a VPC?
AWS DMS can be used to help minimize database-related outages when moving a database from outside a VPC into a VPC. The following are the basic strategies for migrating into a VPC:
• Generic EC2-Classic to VPC migration guide: Migrating from a Linux Instance in EC2-Classic to a Linux Instance in a VPC
• Specific procedures for Amazon RDS: Moving a DB Instance Not in a VPC into a VPC

Conclusion
This paper outlined best practices for using AWS DMS to migrate data from a source database to a target database and answered several frequently asked questions about migrations. As companies move database workloads to AWS, they are often also interested in changing their primary database engine. Most current methods for migrating databases to the cloud or switching engines require an extended outage. AWS DMS helps you migrate database workloads to AWS or change database engines while minimizing the associated downtime.

Contributors
The following individuals and organizations contributed to this document:
• Ed Murray, Senior Database Engineer, Amazon RDS/AWS DMS
• Arun Thiagarajan, Cloud Support Engineer, AWS Premium Support
|
General
|
consultant
|
Best Practices
|
AWS_Governance_at_Scale
|
ArchivedAWS Governance at Scale November 2018 This paper has been archived For the latest technical guidance abot the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazonc om/whitepapersArchived© 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents A mazon Web Services (A WS) current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppl iers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Contents 3 Abstract 1 Introduction 2 Traditional Appr oaches to Manage Scale 2 Governance at Scale 3 Governance at Scale Focal Points 4 Deciding on Your Solution 11 Conclusion 14 Appendix A : Example Use Case 15 Appendix B: Governance at Scale Capability Checklist 17 Account Management 17 Budget Management 18 Security a nd Compliance Automation 17 Archived Amazon Web Services – AWS Governance at Scale Page 1 Abstract Customers need to structure their governance to grow and scale as they grow the number of AWS accounts AWS proposes a new approach to meet these challenges Governance at Scale addresses AWS account management cost control and security and compliance through automation; organized by a centralized management toolset Governance at Scale aligns the organization hierarchy with the AWS multi account structure for complete management through an intuitive interface There are three areas of focus for governanc e at scale with techniques for addressing them using a toolset for a typical organizational hierarchy This whitepaper includes an example use case an evaluation and selection criteria for developing or procuring a toolset to instantiate governance at sc ale Archived Amazon Web Services – AWS Governance at Scale Page 2 Introduction As operational footprints scale on AWS a common theme across compan ies is the need to maintain control over cloud resource usage visibility and policy enforcement The ability to rapidly provision instances introduces the potential risk of overspending and misconfigurations When strong governance and enforcement are not in place it can cause security concerns Companies must address oversight challenges so risks are known and can be minimized Identified stakeholders are responsible for budget alignment governance compliance business objectives and technical direction across an entire company To meet these needs AWS has developed this governance at scale guidance to help identify and instantiate best practices Governance at Scale can help compan ies establish centrally managed budgets for cloud resources oversight of cloud implementations and a dashboard of the company’s cloud health Cloud health is based on near realtime compliance to governance policies and enforcement mechanisms To enable this the policies and mechanisms are separated into three governance at scale focal points: • Account Management Automate account provisioning and maintain good security when 
hundreds of users and business units are requesting cloud based resources • Budget & Cost Management Enforce and monitoring budgets across many accounts workloads and users • Security & Compliance Automation Manage security risk and compliance at a scale and pace to ensure the organization maintains compliance while minimizing impa ct to the business Traditional Approaches to Manage Scale Companies employ three basic approaches to manage large operations on AWS provision multiple AWS accounts control budgets address security risk and compliance Each of these approaches have the following l imitations : • Traditional IT management processes : A central group controls access through approval chains and manual or partially automated setup processes for accounts and resources This approach is difficult to scale because it relies on people and processes that lack automated workflows for help des k tickets and hand offs between staff with different roles Archived Amazon Web Services – AWS Governance at Scale Page 3 • Unrestricted decentralized access to AWS across multiple disassociated accounts This approach can cause resource sprawl that leadership cannot see While usage can scale visibility and accountability are sacrificed The lack of visibility within a self service cloud model introduces compliance and financial risks that most companies cannot tolerate • Using a cloud broker enables visibility and accountability but may limit which AWS servi ces are available to developers and applications or require additional technology augmentation for organizations that require native access to AWS services Companies that have large scale cloud adoption attempt to work around these limitations by using a combination of technologies to address agility and governance goals Companies may use a specific account management application a specific cost enforcement system or multiple toolsets for security and compliance These separate technologies introduce additional layers of complexity and interoperability challenges Governance at Scale AWS Governance at Scale helps you to monitor and control costs accounts and compliance standards associated with operating large enterprises on AWS This guidance is der ived from best practices at AWS and from customers who have successfully operated at scale The components are designed to be flexible so that both technical users and project teams can self serve on AWS while leadership maintains control on spending deci sions and automated policy enforcement Companies can implement governance at scale practices by developing their own solution investing in a commercial solution aligned to the framework or engaging AWS Professional Services for custom options Mechanism s that align to governance at scale focus on control and reporting of budget security and compliance and enforcing AWS access across all stakeholder teams A core element is a centralized interface that provides hierarchical structure while preserving n ative access to the AWS API the AWS Management Console and the AWS SDK/CLI AWS guidance to achieve governance at scale is designed to conform with a company’s existing structure and business processes The following diagram shows a typical government or corporate company Each layer can have different technical financial reporting and security requirements Different departments and teams can have different success criteria goals and technical skill setsArchived Amazon Web Services – AWS Governance at Scale Page 4 Figure 1: Sample organizational structure An 
interface and subsystem that meets the governance at scale criteria allows leaders to allocate funding assign budgets and monitor near real time resource consumption Each level within a company can institute policies or adjust company and project budgets based on mission priorities and usage patterns Companies can propagate these policies down through the organization The interface provides the mechanisms for authorized staff to create new projects request new AWS accounts request access to existing accounts restrict access to AWS resources and obtain near realtime metrics on project budget consumption This hierarchy combined with security automation provides reliable near real time reporting for each level of leadership and staff The granular and transparent nature of the workflows and data assures leadership that cloud operations across the enterprise are visible and constrained as appropriate with the implemented governance policies Governance at Scale Focal Points Governance at Scale implements three focal points: Account Management Budget and Cost Management and Security and Compliance Automation Archived Amazon Web Services – AWS Governance at Scale Page 5 Account Management AWS guidance to achieve governance at scale streamlines account management across multiple AWS accounts and workloads within a company through centralization standardization and automation of account maintenance tasks This is done through policy automat ion identity federation and account automation Example instead of requiring a central group to manually manage the company’s master billing account a selfservice model with workflow automation is employed It enables authorized staff to link multiple accounts to one or more master billing accounts and attach appropriate automatic enforced governance policies Figure 2: Automation can create and manage accounts at scale Policy Automation AWS guidance to achieve governance at scale automates the application of company policies deploying accounts with standard specifications to ensure consistency across AWS accounts and resources The policy engine is flexible to accommodate and enforce dif ferent types of security polices such as AWS Identity and Access Management ( IAM ) AWS CloudFormation or custom scripts Identity Federation AWS governance solutions employ AWS S ingle SignOn (SSO) through federated identity integration with external authentication providers such as OpenID or Active Directory to centralize AWS account management and simplify user access to AWS accounts When SSO is used with AWS CloudTrail user activity can be tracked across multiple AWS accounts Archived Amazon Web Services – AWS Governance at Scale Page 6 Account Automation Services such as AWS Organizations AWS CloudFormation and AWS Service Catalog automate AWS acc ount provisioning and network architecture baselining These services replace manual processes and facilitate the use of pre defined standardized system deployment templates Users can create new AWS accounts for projects through self service and lever age the AWS Management Console and APIs without the assistance of provisioning experts Project or AWS account owners within a company use a centralized interface to manage access to resources within their assigned area and configure cross account access to AWS resources The automation of account management removes impediments such as ticketing and additional outofband manual processes from the account provisioning process This accelerates developers access to AWS resources they need Budget 
and Cost Management Automated methods define and enforce fiscal policies to achieve governance at scale Budget planning and enforcement practices allow leaders and staff to allocate and manage budgets for multiple AWS accounts and define enforcement actions Automation ensures spending is actively monitored and controlled in near real time These mechanisms allow leaders to make proactive well informed decisions around budgetary controls and allocations across their company When budgets are aligned with pr ojects and AWS accounts automation ensures budgets are maintained in real time and accounts can’t exceed an approved budget1 Companies are able to meet fiscal requirements such as the Federal Antideficiency Act for US Government agencies Shared service providers or AWS resellers can implement governance at scale to provide chargeback capabilities across a diverse company Budget Planning It is important to align the company’s budget management process to an automated workflow The workflow should be flexible so that different types of funding sources such as investment appropriation and contract line items (CLINs) are managed as the fu nding is allocated across the company Financial owners should define the timeframe for the funding source set enforcement actions if budget limits are exceeded and track utilization over time Example if AWS provides a customer a $10000 credit the fi nancial owner has the ability to subdivide the funding amount 1 For an example use case where budget enforcement is automated with a governance at scale solution see Appendix A Archived Amazon Web Services – AWS Governance at Scale Page 7 across the company Automation will manage each allocation individually while providing awareness and real time financial dashboards to decision makers over the lifetime of the funding source Budget Enforcement Enforcement of budget constraints is a key component of governance at scale Each layer of the company defines spending limits within accounts and projects monitors account spending in near real time and triggers warning notifications or enforcement actions Automated actions include: • Restricting the use of AWS resources to those that cost less than a specified price • Throttle new resource provisioning • Shut down terminate or de provision AWS resources after archiving configurations a nd data for future use The following diagram illustrates how this could work Red numbers indicate the current or projected AWS spend rate exceeds the budget allocated to the project Green numbers indicate that current AWS spend rate is within budget Wh en viewed on a governance dashboard a decision maker has near real time awareness of usage and spending across the entire company Figure 3: Budgets are allocated and enforced through the company Archived Amazon Web Services – AWS Governance at Scale Page 8 Security and Compliance Automation Governance at scale security and compliance practices employ automation to enforce security requirements and help streamline activities across the company’s AWS accounts These practices are made up of the following items: Identity & Access Au tomation AWS guidance to achieve governance at scale is to offer AWS Identity & Access Management (IAM) capabilities through a central portal Users can access the portal with an approved authentication scheme such as Microsoft Active Directory or Lightweight Directory Access Protocol The system grants access based on the roles defined by the company Once authorized the system enforces a strict “policy of least 
privilege” by providing access to resources authorized by the appropriate authorities The portal allows users and workload owners to request and approve access to projects AWS accounts and centralized resources by managing company defined IAM policies applied at every level Example if a Chief Information Security Officer (CISO) wants to allow the company to access a new AWS services that was previously not allowed the developer can edit the IAM policy at the root OU level and the system wil l implement the change across all cloud accounts Security Automation Maintaining a secure posture when operating at scale requires automating security tasks and compliance assessments Manual or semi manual processes cannot easily scale with business grow th With automation AWS services or Amazon Virtual Private Cloud (Amazon VPC) baseline configurations can be provisioned using standardized AWS configurations or AWS CloudFormation templates These templates align with the company’s security and complianc e requirements and have been evaluated and approved by compan y’s risk decision makers The provisioning process interfaces with the compan y’s Governance Risk and Compliance (GRC) tools or systems of recor d2 These templates generate security documentation and implementation details for newly provisioned baseline architectures and shorten the overall time required for a system or project to be assessed and approved for operations Well implemented security automation is responsive to security incidents This includes processes to respond to policy violations by revoking IAM user access preventing new resource allocation terminating resources or isolating existing cloud resources for forensic analysis Automation can be accomplished by colle cting and storing AWS logging data into centralized data lakes and performing analytics or basing responses on the output of other analytics tools 2 Partner Solutions include Telos Xacta 360 RSA Archer ArchivedAmazon Web Services – AWS Governance at Scale Page 11 Policy Enforcement AWS guidance to achieve governance at scale helps you achieve policy enfor cement on AWS Regions AWS S ervices and resource configurations Enforcement is based on stakeholder roles and responsibilities and in accordance with compliance regulations (eg HIPAA FedRAMP PCI/DSS) At each level of the hierarchy the company can specify which AWS Services features and resources are approved for use on a per department per user or per project basis This ensures self service requests can’t provision unapproved items as illustrated in the following diagram Figure 4: Security and compliance guardrails flow down through hierarchy Circles indicates third party security requirements: FedRAMP HIPAA and PCI Deciding on Your Solution Designing a system to achieve governance at scale addresses key issues for companies around account management cost enforcement and security and compliance Companies can build a governance at scale solution or the y can build one in partnership with AWS Professional Services or an AWS Partner3 Decision Factor 1 Determine need Does the company’s AWS footprint exceed or will it exceed the number of AWS 3 Partner offerings include Cloudtamerio Turbot and Dome9 Security ArchivedAmazon Web Services – AWS Governance at Scale Page 12 accounts and resources that can be manage d using a manual process? Example do you review account billing details us e spreadsheets for tracking or do you u se the AWS Management Console to create and manage all accounts? 
If the answer to the top question is yes then a governance at scale solution is needed Decision Factor 2 Is it feasible to build versus buy? In order to build a custom solution your company should be able to answer Yes to the following questions : • Does your company have a robust AWS resource tagging or account management methodology for budget control and enforcement? • Does your company have an existing governance model with business processes that can be automated? • Does your company have the resources to build and maintain an enterprise software solution for managing governance at scale across your company ? This includes : engineers and developers with an advanced understanding of the AWS Cloud APIs security features and services and sufficient staff to maintain the enterprise solution over time? To determine if your company can develop a solution that mee ts all of the governance at scale requirements see Appendix B Decision Factor 3 Criteria selection for buying a commercial solution A commercial solution may include one or more products and/or professional services assistance with integration and building key components If you decide to purchase a third party solution to achieve governance at scale see Appendix B to determine if partner products or p rofessional services meet all of your requirements What does a Governance at Scale solution look like to an organizational stakeholder? The following diagram illustrates a finalized governance at scale implementation dashboard overlaying cost and compliance indicators in the company ArchivedAmazon Web Services – AWS Governance at Scale Page 13 Figure 5: Example Company cloud environment Decision makers at each layer of the hierarchy are provided real time data and metrics that are tailored to their company role and/or business units: • Executive – Executives can assign budgets and security policies to any segment of the company Data is collected from the all segments and is presented in a summary view to include overall compliance status and financial health • Senior Leadership – Senior leaders can view their respective financial health within their sub organization They are responsible for assigning budgets to their respective employees and applying additional security policies as needed • Upper Management – Management monitor s budgets grants personnel access to projects and assigns focused security policies This is achieved by assigning specific budget and security policies to business units and teams responsible for applications • Employee – Employees interact directly with cloud accounts and have operational awareness of current spend vs the assigned budget They can request access to other projects and exceptions to security and financial policies as appropriate Archived Amazon Web Services – AWS Governance at Scale Page 14 Conclusion Govern ance at Scale is a new concept for automating cloud governance that can help your company retire manual processes in account management budget enforcement and security and compliance By automating these common challenges the company can scale without i nhibiting agility speed and innovation while providing decision makers with the visibility control and governance that is necessary to protect sensitive data and systems Carefully consider which solution you chose for your company The decision to bu ild or buy a solution can have critical implications on your AWS migration strategy Discuss the potential impact with your AWS Solution Architect and/or Professional Services consultant They can 
help ensure your solution meets your specific requirements The use case example in Appendix A offers one way to formalize implementation This example shows the challenges companies face and the effect a governance at scale implementation can have Appendix B provides you with a list of the key capabilities for each governance at scale focal point The Governance at Scale framework provides a compass and map to help companies build or buy solutions that can help them scale with confidence by replacing human based governance processes with automation that is familiar and easy to use for all stakehol dersArchived Amazon Web Services – AWS Governance at Scale Page 15 Appendix A : Example Use Case Example use case for i mplementing governance at scale to manage AWS accounts within a company: ACME organization has outgrown their manual and spreadsheet based governance process The company is large and profitable (1B yearly revenue) but have diverse business units that require autonomy and flexibility They have a small governance team and a limited budget for a custom home grown solution Because of their organizational and financial constraints they decided to purchase a solution from an AWS partner4 Once the solution is deployed and configured to align with the company specific processes and requirements the solution is available for developers and decision makers to centrally manage their cloud resources The workflow below describes how a new developer would access and manage their resources within a governance at scale solution John is a developer joining a team that designs application environments for deployment in the AWS Cloud Therefore he needs an AWS development environment that allows him to manipulate infrastructure components using code without affecting other develo pers or systems Each developer within the team is approved for individual monthly billing budgets for the use of AWS A governance at scale implementation and workflow for this scenario is: 1 John navigates to a portal to submit a request for an AWS account for developers From the list he chooses from a set of standard corporate AWS account types and then specifies that he needs a monthly billing budget of $5000 2 His request triggers a notification that is sent to his manager His manager uses the portal to confirm or change the monthly billing budget that John specified and selects any preapproved/assessed system boundary that John’s environment is allowed to operate within 3 An automated process creates a new AWS account for John and uses AWS CloudForm ation to build a baseline architecture and apply predefined IAM policies and AWS service configurations within John’s new AWS account o IAM policies include what services and resources that John is allowed to access and the AWS Service API calls he is all owed to perform See https://awsamazoncom/iam for details 4 Partner offerings include Cloudtamerio Turbot and Dome9 Security Archived Amazon Web Services – AWS Governance at Scale Page 16 o AWS service configurations include services such as an Amazon Virtual Private Cloud ( Amazon VPC) architecture that includes predefined AWS security groups to be assigned to Amazon Elastic Compute Cloud (Amazon EC2) instances Amazon Simple Storage Service (Amazon S3) buckets provisioned with predefined access control policies and network connectivity to access functional and s ecurity enabling shared services Example code repositories patch repositories security scanning tools antimalware services authentication services time 
synchronization services directory services backup and recovery services and etc 4 An automate d process interfaces with the company’s governance risk and compliance (GRC) tool to link John’s AWS account with the preapproved/assessed system boundary This allows the GRC tool to access the account for the system inventory and monitor for complianc e violations as part of automated IT auditing and continuous monitoring 5 An automated process begins tracking the AWS services and resources that John provisions to record the spending rate within John’s AWS account 6 As the monthly spend limit is approache d an automated series of notifications is sent to John so he can act to ensure that he does not overspend his budget It is escalated to his management if he fails to react appropriately Additionally a series of automated predefined budget enforcement actions take place including preventing new AWS resources from being provisioned and shutting down or de provisioning AWS resources Archived Amazon Web Services – AWS Governance at Scale Page 17 Appendix B: Governance at Scale Capability Checklist There are several Amazon Partner Network ( APN) solutions that you can use to meet your company’s governance at scale requirements We encourage companies to evaluate each solution and decide based on your specific requirements AWS Prof essional Services and Solution Architects can assist in your evaluation process If you want to discuss partner products reach out to your AWS Sales teams or send an email to compliance accelerator@amazoncom Account Management Capability Programmatically provision and delete AWS accounts using AWS APIs to ensure uniformity Allow external IAM accounts to enable and disable users Provide single sign on to the AWS Management Console for AWS account users to manage cloud resources Integrate with external IAM providers such as Active Directory Support MFA token management Associate AWS accounts with one or more master billing accounts Associate users with IAM policies to control access Support multi level organizational hierarchy Support use of Enterprise Accelerators to apply baseline configurations to accounts Provide self service workflow that allows users to join projects Provide self service workflow that allows users to create new project s Provide self service workflow that allows users to connect one or more accounts Control access to custom Amazon Machine Images (AMIs) Fully Partially Comments implements implements (yes/no) (yes/no) Archived Amazon Web Services – AWS Governance at Scale Page 18 Allow user access to the AWS API AWS Management Console and SDKs Budget Management Capability Manage funding sources used to pay for AWS usage Allocate funding sources to individuals and AWS accounts based on organizational hierarchy Set monthly and yearly budgets for AWS accounts View current spending accrual of AWS accounts Aggregate spending of AWS accounts based on organization structure and purpose Apply cost restrictions to AWS accounts (for example force use of Reserved Instances restrict Amazon EC2 instance usage to instances less than $x/ hr etc) Set rules to define enforcement actions (including notification limit creating new cloud resources archiving cloud resources and termination of cloud resources) when financial thresholds are reached for each AWS account Send alerts to financial stakeholders when predefined limits and thresholds are met Fully Partially Comments implements implements (yes/no) (yes/no) Archived Amazon Web Services – AWS Governance at Scale Page 17 
Security and Compliance Automation Capability Programmatically apply access control policies to restrict user access to AWS services that do not meet regulatory compliance standards (such as HIPAA FedRAMP PCI/DSS) Programmatically apply access control policies to restrict user access to AWS Regions that do not meet regulatory compliance standards (for example HIPAA FedRAMP and PCI/DSS) Programmatically apply access control policies to restrict user access to AW S resource configurations that do not meet regulatory compliance standards (for example HIPAA FedRAMP and PCI/DSS) Support multi level organizational hierarchy to apply and inherit access control policies Collect and store logs for all AWS accounts resources and API actions Programmatically verify that cloud resources are configured in alignment with best practices organizational policies and regulatory compliance standards Programmatically generate Authorization to Operate (ATO) artifacts i ncluding system security plans (SSPs) based on current cloud resources within AWS accounts Schedule continuous monitoring tasks (for example vulnerability scans within and across AWS accounts) to determine whether the system is compliant Set rules to define enforcement actions (including notification limit creating new cloud resources and isolation of cloud resources) when compliance violation thresholds are reached for each AWS account Fully Partially Comments implements implements (yes/no) (yes/no) Archived Amazon Web Services – AWS Governance at Scale Page 19 Contributors The following individuals and organizations contributed to this document: • Doug Vanderpool Principal Consultant Advisory AWS Professional Services • Brett Miller Technical Program Manager WWPS Security and Compliance Business Acceleration Team • Lou Vecchioni Senior Consultant AWS Professional Services • Colin Desa Head Envision Engineering Center • Tim And erson Program Manager WWPS Security and Compliance Business Acceleration Team • Nathan Case Senior Consultant AWS Professional Services Resources • AWS Whitepapers • AWS Documentation • AWS Compliance Quick Starts Document Revisions Date Change May 2017 First DRAFT Version August 2017 DRAFT Version 20 November 2017 DRAFT Version 21 July 2018 DRAFT Version 22 November 2018 DRAFT Version 2 3
|
General
|
consultant
|
Best Practices
|
AWS_Key_Management_Service_Best_Practices
|
ArchivedAWS Key Management Service Best Practices AWS Whitepaper For the latest technical content refer to : https://docsawsamazoncom/kms/latest/ developerguide/bestpracticeshtmlArchivedAWS Key Management Service Best Practices AWS Whitepaper AWS Key Management Service Best Practices: AWS Whitepaper Copyright © Amazon Web Services Inc and/or its affiliates All rights reserved Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon All other trademarks not owned by Amazon are the property of their respective owners who may or may not be affiliated with connected to or sponsored by AmazonArchivedAWS Key Management Service Best Practices AWS Whitepaper Table of Contents Abstract 1 Abstract 1 Introduction 2 Identity and Access Management 3 AWS KMS and IAM Policies 3 Key Policies 3 Least Privilege / Separation of Duties 4 Cross Account Sharing of Keys 5 CMK Grants 5 Encryption Context 5 MultiFactor Authentication 6 Detective Controls 8 CMK Auditing 8 CMK Use Validation 8 Key Tags 8 Infrastructure Security 9 Customer Master Keys 9 AWSmanaged and Customermanaged CMKs 9 Key Creation and Management 10 Key Aliases 10 Using AWS KMS at Scale 11 Data Protection 12 Common AWS KMS Use Cases 12 Encrypting PCI Data Using AWS KMS 12 Secret Management Using AWS KMS and Amazon S3 12 Encrypting Lambda Environment Variables 12 Encrypting Data within Systems Manager Parameter Store 12 Enforcing Data at Rest Encryption within AWS Services 13 Data at Rest Encryption with Amazon S3 13 Data at Rest Encryption with Amazon EBS 14 Data at Rest Encryption with Amazon RDS 14 Incident Response 15 Security Automation of AWS KMS 15 Deleting and Disabling CMKs 15 Conclusion 16 Contributors 17 Document Revisions 18 Notices 19 iiiArchivedAWS Key Management Service Best Practices AWS Whitepaper Abstract AWS Key Management Service Best Practices Publication date: April 1 2017 (Document Revisions (p 18)) Abstract AWS Key Management Service (AWS KMS) is a managed service that allows you to concentrate on the cryptographic needs of your applications while Amazon Web Services (AWS) manages availability physical security logical access control and maintenance of the underlying infrastructure Further AWS KMS allows you to audit usage of your keys by providing logs of all API calls made on them to help you meet compliance and regulatory requirements Customers want to know how to effectively implement AWS KMS in their environment This whitepaper discusses how to use AWS KMS for each capability described in the AWS Cloud Adoption Framework (CAF) Security Perspective whitepaper including the differences between the different types of customer master keys using AWS KMS key policies to ensure least privilege auditing the use of the keys and listing some use cases that work to protect sensitive information within AWS 1ArchivedAWS Key Management Service Best Practices AWS Whitepaper Introduction AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data AWS KMS uses Hardware Security Modules (HSMs) to protect the security of your keys You can use AWS KMS to protect your data in AWS services and in your applications The AWS Key Management Service Cryptographic Details whitepaper describes the design and controls implemented within the service to ensure the security and privacy 
of your data The AWS Cloud Adoption Framework (CAF) whitepaper provides guidance for coordinating the different parts of organizations that are moving to cloud computing The AWS CAF guidance is broken into areas of focus that are relevant to implementing cloudbased IT systems which we refer to as perspectives The CAF Security Perspective whitepaper organizes the principles that will help drive the transformation of your organization’s security through five core capabilities: Identity and Access Management Detective Control Infrastructure Security Data Protection and Incident Response For each capability in the CAF Security Perspective this whitepaper provides details on how your organization should use AWS KMS to protect sensitive information across a number of different use cases and the means of measuring progress: •Identity and Access Management: Enables you to create multiple access control mechanisms and manage the permissions for each •Detective Controls: Provides you the capability for native logging and visibility into the service •Infrastructure Security: Provides you with the capability to shape your security controls to fit your requirements •Data Protection: Provides you with the capability for maintaining visibility and control over data •Incident Response: Provides you with the capability to respond to manage reduce harm and restore operations during and after an incident 2ArchivedAWS Key Management Service Best Practices AWS Whitepaper AWS KMS and IAM Policies Identity and Access Management The Identity and Access Management capability provides guidance on determining the controls for access management within AWS KMS to secure your infrastructure according to established best practices and internal policies AWS KMS and IAM Policies You can use AWS Identity and Access Management (IAM) policies in combination with key policies to control access to your customer master keys (CMKs) in AWS KMS This section discusses using IAM in the context of AWS KMS It doesn’t provide detailed information about the IAM service For complete IAM documentation see the AWS IAM User Guide Policies attached to IAM identities (that is users groups and roles) are called identitybased policies (or IAM policies ) Policies attached to resources outside of IAM are called resourcebased policies In AWS KMS you must attach resourcebased policies to your customer master keys (CMKs) These are called key policies All KMS CMKs have a key policy and you must use it to control access to a CMK IAM policies by themselves are not sufficient to allow access to a CMK although you can use them in combination with a CMK key policy To do so ensure that the CMK key policy includes the policy statement that enables IAM policies By using an identitybased IAM policy you can enforce least privilege by granting granular access to KMS API calls within an AWS account Remember IAM policies are based on a policy of defaultdenied unless you explicitly grant permission to a principal to perform an action Key Policies Key policies are the primary way to control access to CMKs in AWS KMS Each CMK has a key policy attached to it that defines permissions on the use and management of the key The default policy enables any principals you define as well as enables the root user in the account to add IAM policies that reference the key We recommend that you edit the default CMK policy to align with your organization’s best practices for least privilege To access an encrypted resource the principal needs to have permissions to use the resource 
as well as to use the encryption key that protects the resource If the principal does not have the necessary permissions for either of those actions the request to use the encrypted resource will be denied It’s also possible to constrain a CMK so that it can only be used by specific AWS services through the use of the kms:ViaService conditional statement within the CMK key policy For more information see the AWS KMS Developer Guide To create and use an encrypted Amazon Elastic Block Store (EBS) volume you need permissions to use Amazon EBS The key policy associated with the CMK would need to include something similar to the following: { "Sid": "Allow for use of this Key" "Effect": "Allow" "Principal": { "AWS": "arn:aws:iam:: 111122223333:role/UserRole " } "Action": [ "kms:GenerateDataKeyWithoutPlaintext" 3ArchivedAWS Key Management Service Best Practices AWS Whitepaper Least Privilege / Separation of Duties "kms:Decrypt" ] "Resource": "*" } { "Sid": "Allow for EC2 Use" "Effect": "Allow" "Principal": { "AWS": "arn:aws:iam:: 111122223333:role/UserRole " } "Action": [ "kms:CreateGrant" "kms:ListGrants" "kms:RevokeGrant" ] "Resource": "*" "Condition": { "StringEquals": { "kms:ViaService": "ec2 uswest2amazonawscom" } } } In this CMK policy the first statement provides a specified IAM principal the ability to generate a data key and decrypt that data key from the CMK when necessary These two APIs are necessary to encrypt the EBS volume while it’s attached to an Amazon Elastic Compute Cloud (EC2) instance The second statement in this policy provides the specified IAM principal the ability to create list and revoke grants for Amazon EC2 Grants are used to delegate a subset of permissions to AWS services or other principals so that they can use your keys on your behalf In this case the condition policy explicitly ensures that only Amazon EC2 can use the grants Amazon EC2 will use them to reattach an encrypted EBS volume back to an instance if the volume gets detached due to a planned or unplanned outage These events will be recorded within AWS CloudTrail when and if they do occur for your auditing When developing a CMK policy you should keep in mind how policy statements are evaluated within AWS This means that if you have enabled IAM to help control access to a CMK when AWS evaluates whether a permitted action is to be allowed or denied the CMK policy is joined with the IAM policy Additionally you should ensure that the use and management of a key is restricted to the parties that are necessary Least Privilege / Separation of Duties Key policies specify a resource action effect principal and conditions to grant access to CMKs Key policies allow you to push more granular permissions to CMKs to enforce least privilege For example an application might make a KMS API call to encrypt data but there is no use case for that same application to decrypt data In that use case a key policy could grant access to the kms:Encrypt action but not kms:Decrypt and reduce the possibility for exposure Additionally AWS allows you to separate the usage permissions from administration permissions associated with the key This means that an individual may have the ability to manipulate the key policy but might not have the necessary permissions to use the key for cryptographic functions Given that your CMKs are being used to protect your sensitive information you should work to ensure that the corresponding key policies follow a model of least privilege This includes ensuring that you do NOT include kms:* permissions in 
an IAM policy This policy would grant the principal both administrative and usage permissions on all CMKs to which the principal has access Similarly including kms:* permissions for the principals within your key policy gives them both administrative and usage permissions on the CMK It’s important to remember that explicit deny policies take precedence over implicit deny policies When you use NotPrincipal in the same policy statement as "Effect: Deny" the permissions specified in the 4ArchivedAWS Key Management Service Best Practices AWS Whitepaper Cross Account Sharing of Keys policy statement are explicitly denied to all principals except for the ones specified A toplevel KMS policy can explicitly deny access to virtually all KMS operations except for the roles that actually need them This technique helps prevent unauthorized users from granting themselves KMS access Cross Account Sharing of Keys Delegation of permissions to a CMK within AWS KMS can occur when you include the root principal of a trusted account within the CMK key policy The trusted account then has the ability to further delegate these permissions to IAM users and roles within their own account using IAM policies While this approach may simplify the management of the key policy it also relies on the trusted accounts to ensure that the delegated permissions are correctly managed The other approach would be to explicitly manage permissions to all authorized users using only the KMS key policy which in turn could make the key policy complex and less manageable Regardless of the approach you take the specific trust should be broken out on a per key basis to ensure that you adhere to the least privilege model CMK Grants Key policy changes follow the same permissions model used for policy editing elsewhere in AWS That is users either have permission to change the key policy or they do not Users with the PutKeyPolicy permission for a CMK can completely replace the key policy for a CMK with a different key policy of their choice You can use key policies to allow other principals to access a CMK but key policies work best for relatively static assignments of permissions To enable more granular permissions management you can use grants Grants are useful when you want to define scopeddown temporary permissions for other principals to use your CMK on your behalf in the absence of a direct API call from you It’s important to be aware of the grants per key and grants for a principal per key limits when you design applications that use grants to control access to keys Ensure that the retiring principal retires a grant after it’s used to avoid hitting these limits Encryption Context In addition to limiting permission to the AWS KMS APIs AWS KMS also gives you the ability to add an additional layer of authentication for your KMS API calls utilizing encryption context The encryption context is a keyvalue pair of additional data that you want associated with AWS KMSprotected information This is then incorporated into the additional authenticated data (AAD) of the authenticated encryption in AWS KMSencrypted ciphertexts If you submit the encryption context value in the encryption operation you are required to pass it in the corresponding decryption operation You can use the encryption context inside your policies to enforce tighter controls for your encrypted resources Because the encryption context is logged in CloudTrail you can get more insight into the usage of your keys from an audit perspective Be aware that the encryption context is 
not encrypted and will be visible within CloudTrail logs The encryption context should not be considered sensitive information and should not require secrecy AWS services that use AWS KMS use encryption context to limit the scope of keys For example Amazon EBS sends the volume ID as the encryption context when encrypting/decrypting a volume and when you take a snapshot the snapshot ID is used as the context If Amazon EBS did not use this encryption context an EC2 instance would be able to decrypt any EBS volume under that specific CMK An encryption context can also be used for custom applications that you develop and acts as an additional layer of control by ensuring that decrypt calls will succeed only if the encryption context 5ArchivedAWS Key Management Service Best Practices AWS Whitepaper MultiFactor Authentication matches what was passed in the encrypt call If the encryption context for a specific application does not change you can include that context within the AWS KMS key policy as a conditional statement For example if you have an application that requires the ability to encrypt and decrypt data you can create a key policy on the CMK that ensures that it provides expected values In the following policy it is checking that the application name “ExampleApp” and its current version “1024” are the values that are passed to AWS KMS during the encrypt and decrypt calls If different values are passed the call will be denied and the decrypt or encrypt action will not be performed { "Effect": "Allow" "Principal": { "AWS": "arn:aws:iam::111122223333:role/RoleForExampleApp" } "Action": [ "kms:Encrypt" "kms:Decrypt" ] "Resource": "*" "Condition": { "StringEquals": { "kms:EncryptionContext:AppName": "ExampleApp" "kms:EncryptionContext:Version": "1024" } } } This use of encryption context will help to further ensure that only authorized parties and/or applications can access and use the CMKs Now the party will need to have IAM permissions to AWS KMS a CMK policy that allows them to use the key in the requested fashion and finally know the expected encryption context values MultiFactor Authentication To provide an additional layer of security over specific actions you can implement an additional layer of protection using multifactor authentication (MFA) on critical KMS API calls Some of those calls are PutKeyPolicy ScheduleKeyDeletion DeleteAlias and DeleteImportedKeyMaterial This can be accomplished through a conditional statement within the key policy that checks for when or if an MFA device was used as part of authentication If someone attempts to perform one of the critical AWS KMS actions the following CMK policy will validate that their MFA was authenticated within the last 300 seconds or 5 minutes before performing the action { "Sid": "MFACriticalKMSEvents" "Effect": "Allow" "Principal": { "AWS": "arn:aws:iam::111122223333:user/ExampleUser" } "Action": [ "kms:DeleteAlias" "kms:DeleteImportedKeyMaterial" "kms:PutKeyPolicy" "kms:ScheduleKeyDeletion" ] "Resource": "*" "Condition":{ " NumericLessThan ":{"aws: MultiFactorAuthAge":"300"} } 6ArchivedAWS Key Management Service Best Practices AWS Whitepaper MultiFactor Authentication } 7ArchivedAWS Key Management Service Best Practices AWS Whitepaper CMK Auditing Detective Controls The Detective Controls capability ensures that you properly configure AWS KMS to log the necessary information you need to gain greater visibility into your environment CMK Auditing AWS KMS is integrated with CloudTrail To audit the usage of your keys in AWS KMS you 
should enable CloudTrail logging in your AWS account This ensures that all KMS API calls made on keys in your AWS account are automatically logged in files that are then delivered to an Amazon Simple Storage Service (S3) bucket that you specify Using the information collected by CloudTrail you can determine what request was made the source IP address from which the request was made who made the request when it was made and so on AWS KMS integrates natively with many other AWS services to make monitoring easy You can use these AWS services or your existing security tool suite to monitor your CloudTrail logs for specific actions such as ScheduleKeyDeletion PutKeyPolicy DeleteAlias DisableKey DeleteImportedKeyMaterial on your KMS key Furthermore AWS KMS emits Amazon CloudWatch Events when your CMK is rotated deleted and imported key material in your CMK expires CMK Use Validation In addition to capturing audit data associated with key management and use you should ensure that the data you are reviewing aligns with your established best practices and policies One method is to continuously monitor and verify the CloudTrail logs as they come in Another method is to use AWS Config rules By using AWS Config rules you can ensure that the configuration of many of the AWS services are set up appropriately For example with EBS volumes you can use the AWS Config rule ENCRYPTED_VOLUMES to validate that attached EBS volumes are encrypted Key Tags A CMK can have a tag applied to it for a variety of purposes The most common use is to correlate a specific CMK back to a business category (such as a cost center application name or owner) The tags can then be used to verify that the correct CMK is being used for a given action For example in CloudTrail logs for a given KMS action you can verify that the CMK being used belongs to the same business category as the resource that it’s being used on Previously this might have required a look up within a resource catalog but now this external lookup is not required because of tagging within AWS KMS as well as many of the other AWS services 8ArchivedAWS Key Management Service Best Practices AWS Whitepaper Customer Master Keys Infrastructure Security The Infrastructure Security capability provides you with best practices on how to configure AWS KMS to ensure that you have an agile implementation that can scale with your business while protecting your sensitive information Topics •Customer Master Keys (p 9) •Using AWS KMS at Scale (p 11) Customer Master Keys Within AWS KMS your key hierarchy starts with a CMK A CMK can be used to directly encrypt data blocks up to 4 KB or it can be used to secure data keys which protect underlying data of any size AWSmanaged and Customermanaged CMKs CMKs can be broken down into two general types: AWSmanaged and customermanaged An AWS managed CMK is created when you choose to enable serverside encryption of an AWS resource under the AWSmanaged CMK for that service for the first time (eg SSEKMS) The AWSmanaged CMK is unique to your AWS account and the Region in which it’s used An AWSmanaged CMK can only be used to protect resources within the specific AWS service for which it’s created It does not provide the level of granular control that a customermanaged CMK provides For more control a best practice is to use a customermanaged CMK in all supported AWS services and in your applications A customermanaged CMK is created at your request and should be configured based upon your explicit use case The following chart summarizes the key 
differences and similarities between AWSmanaged CMKs and customermanaged CMKs AWSmanaged CMK Customermanaged CMK Creation AWS generated on customer’s behalfCustomer generated Rotation Once every three years automaticallyOnce a year automatically through optin or ondemand manually Deletion Can’t be deleted Can be deleted Scope of use Limited to a specific AWS service Controlled via KMS/IAM policy Key Access Policy AWS managed Customer managed User Access Management IAM policy IAM policy For customermanaged CMKs you have two options for creating the underlying key material When you choose to create a CMK using AWS KMS you can let KMS create the cryptographic material for you or you can choose to import your own key material Both of these options provide you with the same level 9ArchivedAWS Key Management Service Best Practices AWS Whitepaper Key Creation and Management of control and auditing for the use of the CMK within your environment The ability to import your own cryptographic material allows you to do the following: • Prove that you generated the key material using your approved source that meets your randomness requirements • Use key material from your own infrastructure with AWS services and use AWS KMS to manage the lifecycle of that key material within AWS • Gain the ability to set an expiration time for the key material in AWS and manually delete it but also make it available again in the future • Own the original copy of the key material and to keep it outside of AWS for additional durability and disaster recovery during the complete lifecycle of the key material The decision to use imported key material or KMSgenerated key material would depend on your organization’s policies and compliance requirements Key Creation and Management Since AWS makes creating and managing keys easy through the use of AWS KMS we recommend that you have a plan for how to use the service to best control the blast radius around individual keys Previously you may have used the same key across different geographic regions environments or even applications With AWS KMS you should define data classification levels and have at least one CMK per level For example you could define a CMK for data classified as “Confidential” and so on This ensures that authorized users only have permissions for the key material that they require to complete their job You should also decide how you want to manage usage of AWS KMS Creating KMS keys within each account that requires the ability to encrypt and decrypt sensitive data works best for most customers but another option is to share the CMKs from a few centralized accounts Maintaining the CMKs in the same account as the majority of the infrastructure using them helps users provision and run AWS services that use those keys AWS services don’t allow for crossaccount searching unless the principal doing the searching has explicit List* permissions on resources owned by the external account This can also only be accomplished via the CLI or SDK and not through service consolebased searches Additionally by storing the credentials in the local accounts it might be easier to delegate permissions to individuals who know the IAM principals that require access to the specific CMKs If you were sharing the keys via a centralized model the AWS KMS administrators would need to know the full Amazon Resource Name (ARN) for all users of the CMKs to ensure least privilege Otherwise the administrators might provide overly permissive permissions on the keys Your organization should also 
consider the frequency of rotation for CMKs Many organizations rotate CMKs yearly For customermanaged CMKs with KMSgenerated key material this is easy to enforce You simply have to opt in to a yearly rotation schedule for your CMK When the CMK is due for rotation a new backing key is created and marked as the active key for all new requests to protect information The old backing key remains available for use to decrypt any existing ciphertext values that were encrypted using this key To rotate CMKs more frequently you can also call UpdateAlias to point an alias to a new CMK as described in the next section The UpdateAlias method works for both customermanaged CMKs and CMKs with imported key material AWS has found that the frequency of key rotation is highly dependent upon laws regulations and corporate policies Key Aliases A key alias allows you to abstract key users away from the underlying Regionspecific key ID and key ARN Authorized individuals can create a key alias that allows their applications to use a specific CMK independent of the Region or rotation schedule Thus multiRegion applications can use the same key alias to refer to KMS keys in multiple Regions without worrying about the key ID or the key ARN You can also trigger manual rotation of a CMK by pointing a given key alias to a different CMK Similar to how Domain Name Services (DNS) allows the abstraction of IP addresses a key alias does the same for the 10ArchivedAWS Key Management Service Best Practices AWS Whitepaper Using AWS KMS at Scale key ID When you are creating a key alias we recommend that you determine a naming scheme that can be applied across your accounts such as alias/<Environment><Function><Service Team> It should be noted that CMK aliases can’t be used within policies This is because the mapping of aliases to keys can be manipulated outside the policy which would allow for an escalation of privilege Therefore key IDs must be used in KMS key policies IAM policies and KMS grants Using AWS KMS at Scale As noted earlier a best practice is to use at least one CMK for a particular class of data This will help you define policies that scope down permissions to the key and hence the data to authorized users You may choose to further distribute your data across multiple CMKs to provide stronger security controls within a given data classification AWS recommends using envelope encryption to scale your KMS implementation Envelope encryption is the practice of encrypting plaintext data with a unique data key and then encrypting the data key with a key encryption key (KEK) Within AWS KMS the CMK is the KEK You can encrypt your message with the data key and then encrypt the data key with the CMK Then the encrypted data key can be stored along with the encrypted message You can cache the plaintext version of the data key for repeated use reducing the number of requests to AWS KMS Additionally envelope encryption can help to design your application for disaster recovery You can move your encrypted data asis between Regions and only have to reencrypt the data keys with the Regionspecific CMKs The AWS Cryptographic team has released an AWS Encryption SDK that makes it easier to use AWS KMS in an efficient manner This SDK transparently implements the lowlevel details for using AWS KMS It also provides developers options for protecting their data keys after use to ensure that the performance of their application isn’t significantly affected by encrypting your sensitive data 11ArchivedAWS Key Management Service Best Practices 
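To make the envelope encryption pattern described above concrete, the following Python sketch pairs the AWS KMS GenerateDataKey and Decrypt APIs with local AES-GCM from the third-party cryptography package. The key alias, encryption context, and payload are placeholder values; in practice the AWS Encryption SDK mentioned above implements this pattern for you, including data key caching.

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
context = {"AppName": "ExampleApp"}  # placeholder encryption context

# Ask AWS KMS for a data key under the CMK; the response contains both a
# plaintext copy and a copy encrypted under the CMK.
resp = kms.generate_data_key(
    KeyId="alias/example-confidential-data",  # placeholder alias
    KeySpec="AES_256",
    EncryptionContext=context,
)

# Encrypt the payload locally with the plaintext data key. In a real
# application, remove the plaintext key from memory as soon as possible.
nonce = os.urandom(12)
ciphertext = AESGCM(resp["Plaintext"]).encrypt(nonce, b"example payload", None)
encrypted_data_key = resp["CiphertextBlob"]

# Store (encrypted_data_key, nonce, ciphertext) together. To read the data
# later, send only the encrypted data key back to AWS KMS for decryption.
plaintext_key = kms.decrypt(
    CiphertextBlob=encrypted_data_key, EncryptionContext=context
)["Plaintext"]
recovered = AESGCM(plaintext_key).decrypt(nonce, ciphertext, None)

Because only the small data key is ever sent to AWS KMS, the same pattern works for payloads of any size and avoids the 4 KB limit that applies to encrypting data directly under a CMK.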
AWS Whitepaper Common AWS KMS Use Cases Data Protection The Data Protection capability addresses some of the common AWS use cases for using AWS KMS within your organization to protect your sensitive information Common AWS KMS Use Cases Encrypting PCI Data Using AWS KMS Since security and quality controls in AWS KMS have been validated and certified to meet the requirements of PCI DSS Level 1 certification you can directly encrypt Primary Account Number (PAN) data with an AWS KMS CMK The use of a CMK to directly encrypt data removes some of the burden of managing encryption libraries Additionally a CMK can’t be exported from AWS KMS which alleviates the concern about the encryption key being stored in an insecure manner As all KMS requests are logged in CloudTrail use of the CMK can be audited by reviewing the CloudTrail logs It’s important to be aware of the requests per second limit when designing applications that use the CMK directly to protect Payment Card Industry (PCI) data Secret Management Using AWS KMS and Amazon S3 Although AWS KMS primarily provides key management functions you can leverage AWS KMS and Amazon S3 to build your own secret management solution Create a new Amazon s3 bucket to hold your secrets Deploy a bucket policy onto the bucket to limit access to only authorized individuals and services The secrets stored in the bucket utilize a predefined prefix per file to allow for granular control of access to the secrets Each secret when placed in the S3 bucket is encrypted using a specific customermanaged KMS key Furthermore due to the highly sensitive nature of the information being stored within this bucket S3 access logging or CloudTrail Data Events are enabled for audit purposes Then when a user or service requires access to the secret they assume an identity within AWS that has permissions to use both the object in the S3 bucket as well as the KMS key An application that runs in an EC2 instance uses an instance role that has the necessary permissions Encrypting Lambda Environment Variables By default when you create or update Lambda functions that use environment variables those variables are encrypted using AWS KMS When your Lambda function is invoked those values are decrypted and made available to the Lambda code You have the option to use the default KMS key for Lambda or specify a specific CMK of your choice To further protect your environment variables you should select the “Enable encryption helpers” checkbox By selecting this option your environment variables will also be individually encrypted using a CMK of your choice and then your Lambda function will have to specifically decrypt each encrypted environment variable that is needed Encrypting Data within Systems Manager Parameter Store Amazon EC2 Systems Manager is a collection of capabilities that can help you automate management tasks at scale To efficiently store and reference sensitive configuration data such as passwords license keys and certificates the Parameter Store lets you protect sensitive information within secure string parameters 12ArchivedAWS Key Management Service Best Practices AWS Whitepaper Enforcing Data at Rest Encryption within AWS Services A secure string is any sensitive data that needs to be stored and referenced in a secure manner If you have data that you don't want users to alter or reference in clear text such as domain join passwords or license keys then specify those values using the Secure String data type You should use secure strings in the following circumstances: • You 
want to use data/parameters across AWS services without exposing the values as clear text in commands functions agent logs or CloudTrail logs • You want to control who has access to sensitive data • You want to be able to audit when sensitive data is accessed using CloudTrail • You want AWSlevel encryption for your sensitive data and you want to bring your own encryption keys to manage access By selecting this option when you create your parameter the Systems Manager encrypts that value when it’s passed into a command and decrypts it when processing it on the managed instance The encryption is handled by AWS KMS and can be either a default KMS key for the Systems Manager or you can specify a specific CMK per parameter Enforcing Data at Rest Encryption within AWS Services Your organization might require the encryption of all data that meets a specific classification Depending on the specific service you can enforce data encryption policies through preventative or detective controls For some services like Amazon S3 a policy can prevent storing unencrypted data For other services the most efficient mechanism is to monitor the creation of storage resources and check whether encryption is enabled appropriately In the event that unencrypted storage is created you have a number of possible responses ranging from deleting the storage resource to notifying an administrator Data at Rest Encryption with Amazon S3 Using Amazon S3 it’s possible to deploy an S3 bucket policy that ensures that all objects being uploaded are encrypted The policy looks like the following: { "Version":"20121017" "Id":"PutObjPolicy" "Statement":[{ "Sid":"DenyUnEncryptedObjectUploads" "Effect":"Deny" "Principal":"*" "Action":"s3:PutObject" "Resource":"arn:aws:s3:::YourBucket/*" "Condition":{ "StringNotEquals":{ "s3:xamzserversideencryption":"aws:kms" } } } ] } Note that this doesn’t cause objects already in the bucket to be encrypted This policy denies attempts to add new objects to the bucket unless those objects are encrypted Objects already in the bucket before this policy is applied will remain either encrypted or unencrypted based on how they were first uploaded 13ArchivedAWS Key Management Service Best Practices AWS Whitepaper Data at Rest Encryption with Amazon EBS Data at Rest Encryption with Amazon EBS You can create Amazon Machine Images (AMIs) that make use of encrypted EBS boot volumes and use the AMIs to launch EC2 instances The stored data is encrypted as is the data transfer path between the EBS volume and the EC2 instance The data is decrypted on the hypervisor of that instance on an asneeded basis then stored only in memory This feature aids your security compliance and auditing efforts by allowing you to verify that all of the data that you store on the EBS volume is encrypted whether it’s stored on a boot volume or on a data volume Further because this feature makes use of AWS KMS you can track and audit all uses of the encryption keys There are two methods to ensure that EBS volumes are always encrypted You can verify that the encryption flag as part of the CreateVolume context is set to “true” through an IAM policy If the flag is not “true” then the IAM policy can prevent an individual from creating the EBS volume The other method is to monitor the creation of EBS volumes If a new EBS volume is created CloudTrail will log an event A Lambda function can be triggered by the CloudTrail event to check if the EBS volume is encrypted or not and also what KMS key was used for the encryption An AWS Lambda 
function can respond to the creation of an unencrypted volume in several different ways The function could call the CopyImage API with the encrypted option to create a new encrypted version of the EBS volume and then attach it to the instance and delete the old version Some customers choose to automatically delete the EC2 instance that has the unencrypted volume Others choose to automatically quarantine the instance it by applying security groups that prevent most inbound connections It’s also easy to write a Lambda function that posts to an Amazon Simple Notification Service (SNS) topic that alerts administrators to do a manual investigation and intervention Note that most enforcement responses can—and should—be accomplished programmatically without human intervention Data at Rest Encryption with Amazon RDS Amazon Relational Database Service (RDS) builds on Amazon EBS encryption to provide full disk encryption for database volumes When you create an encrypted database instance with Amazon RDS Amazon RDS creates an encrypted EBS volume on your behalf to store the database Data stored at rest on the volume database snapshots automated backups and read replicas are all encrypted under the KMS CMK that you specified when you created the database instance Similar to Amazon EBS you can set up an AWS Lambda function to monitor for the creation of new RDS instances via the CreateDBInstance API call via CloudTrail Within the CreateDBInstance event ensure that KmsKeyId parameter is set to the expected CMK 14ArchivedAWS Key Management Service Best Practices AWS Whitepaper Security Automation of AWS KMS Incident Response The Incident Response capability focuses on your organization’s capability to remediate incidents that may involve AWS KMS Security Automation of AWS KMS During your monitoring of your CMKs if a specific action is detected an AWS Lambda function could be configured to disable the CMK or perform any other incident response actions as dictated by your local security policies Without human intervention a potential exposure could be cut off in minutes by leveraging the automation tools inside AWS Deleting and Disabling CMKs While deleting CMKs is possible it has significant ramifications to an organization You should first consider whether it’s sufficient to set the CMK state to disabled on keys that you no longer intend to use This will prevent all future use of the CMK The CMK is still available however and can be reenabled in the future if it’s needed Disabled keys are still stored by AWS KMS; thus they continue to incur recurring storage charges You should strongly consider disabling keys instead of deleting them until you are confident in their encrypted data management Deleting a key must be very carefully thought out Data can’t be decrypted if the corresponding CMK has been deleted Moreover once a CMK is deleted it’s gone forever AWS has no means to recover a deleted CMK once it’s finally deleted Just as with other critical operations in AWS you should apply a policy that requires MFA for CMK deletion To help ensure that a CMK is not deleted by mistake KMS enforces a minimum waiting period of seven days before the CMK is actually deleted You can choose to increase this waiting period up to a maximum value of 30 days During the waiting period the CMK is still stored in KMS in a “Pending Deletion” state It can’t be used for encrypt or decrypt operations Any attempt to use a key that is in the “Pending Deletion” state for encryption or decryption will be logged to CloudTrail You can 
set an Amazon CloudWatch Alarm for these events in your CloudTrail logs This gives you a chance to cancel the deletion process if needed Until the waiting period has expired the CMK can be recovered from the “Pending Deletion” state and restored to either the disabled or enabled state Finally it should also be noted that if you are using a CMK with imported key material you can delete the imported key material immediately This is different from deleting a CMK directly in several ways When you perform the DeleteImportedKeyMaterial action AWS KMS deletes the key material and the CMK key state changes to pending import When the key material is deleted the CMK is immediately unusable There is no waiting period To enable use of the CMK again you must reimport the same key material Deleting key material affects the CMK right away but data encryption keys that are actively in use by AWS services are not immediately affected For example let’s say a CMK using your imported material was used to encrypt an object being placed in an S3 bucket using SSEKMS Right before you upload the object into the S3 bucket you place the imported material into your CMK After the object is uploaded you can delete your key material from that CMK The object will continue to sit in the S3 bucket in an encrypted state but no one will be able to access it until the same key material is reimported into the CMK This flow obviously requires precise automation for importing and deleting key material from a CMK but can provide an additional level of control within an environment 15ArchivedAWS Key Management Service Best Practices AWS Whitepaper Conclusion AWS KMS provides your organization with a fully managed service to centrally control your encryption keys Its native integration with other AWS services makes it easier for AWS KMS to encrypt the data that you store and process By taking the time to properly architect and implement AWS KMS you can ensure that your encryption keys are secure and available for applications and their authorized users Additionally you can show your auditors detailed logs associated with your key usage 16ArchivedAWS Key Management Service Best Practices AWS Whitepaper Contributors The following individuals and organizations contributed to this document: • Matthew Bretan Senior Security Consultant AWS Professional Services • Sree Pisharody Senior Product Manager – Technical AWS Cryptography • Ken Beer Senior Manager Software Development AWS Cryptography • Brian Wagner Security Consultant AWS Professional Services • Eugene Yu Managing Consultant AWS Professional Services • Michael StOnge Global Cloud Security Architect AWS Professional Services • Balaji Palanisamy Senior Consultant AWS Professional Services • Jonathan Rault Senior Consultant AWS Professional Services • Reef Dsouza Consultant AWS Professional Services • Paco Hope Principal Consultant AWS Professional Services 17ArchivedAWS Key Management Service Best Practices AWS Whitepaper Document Revisions To be notified about updates to this whitepaper subscribe to the RSS feed updatehistorychange updatehistorydescription updatehistorydate Initial publication (p 18) First published April 1 2017 18ArchivedAWS Key Management Service Best Practices AWS Whitepaper Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not 
create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
|
General
|
consultant
|
Best Practices
|
AWS_Key_Management_Service_Cryptographic_Details
|
Archived AWS Key Manag ement Service Cryptographi c Details August 2018 This paper has been archived For the latest technical content about AWS KMS Cryptographic Details see https://docsawsamazoncom/kms/latest/cryptographic details/introhtmlArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 2 of 42 © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents the current AWS product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own in dependent assessment of the information in this document Any use of AWS products or services is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitmen ts conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 3 of 42 Contents Abstract 4 Introduction 4 Design Goals 6 Background 7 Cryptographic Primitives 7 Basic Concepts 10 Customer’s Key Hierarchy 11 Use Cases 13 Amazon EBS Volume Encryption 13 Client side Encryption 15 Customer Master Keys 17 Imported Master Keys 19 Enable and Disable Key 22 Key Deletion 22 Rotate Customer Master Key 23 Customer Data Operations 23 Generating Data Keys 24 Encrypt 26 Decrypt 26 ReEncrypting an Encrypted Object 28 Domains and the Domain State 29 Domain Keys 30 Exported Domain Tokens 30 Managing Do main State 31 Internal Communication Security 33 ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 4 of 42 HSM Security Boundary 33 Quorum Signed Commands 34 Authenticated Sessions 35 Durability Protection 36 References 38 Appendix Abbreviations and Keys 40 Abbreviations 40 Keys 41 Contributors 42 Document Revisions 42 Abstract AWS Key Management Service (AWS KMS) provides cryptographic keys and operations secured by FIPS 140 2 [1] certified hardware security modules (HSMs) scaled for the cloud AWS KMS keys and functionality are used by multiple AWS Cloud services and you can use them to protect data in your applications This whitepaper provides details on the cryptographic operations that are executed within AWS when you use AWS KMS Introduction AWS KMS provides a web interface to generate and manage cryptographic keys and operate s as a cryptographic servic e provider for protecting data AWS KMS offers traditional key management services integrated with AWS services to provide a consistent view of customers’ keys across AWS with centralized management and auditing This whitepaper provides a detailed descri ption of the cryptographic operations of AWS KMS to assist you in evaluating the features offered by the service AWS KMS includes a web interface through the AWS Management Console command line interface and RESTful API operation s to request cryptograph ic ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 5 of 42 operations of a distributed fleet of FIPS 140 2 validated hardware security module s (HSM )[1] The AWS Key Management Service HSM is a multichip standalone hardware cryptographic appliance designed to provide dedicated cryptographic functions to meet the security and 
scalability requirements of AWS KMS You can establish your own HSMbased cryptographic hierarchy under keys that you manage as customer master keys (CMK s) These keys are made available only on the HSMs for the necessary cycles needed to process your cryptographic request You can create multiple CMKs each represented by its key ID You can define access controls o n who can manage and/or use CMKs by creating a policy that is attached to the key This allows you to define application specific uses for your keys for each API operation Figure 1: AWS KMS architecture AWS KMS is a tiered service consisting of web facing KMS hosts and a tier of HSM s The grouping of these tiered hosts forms the AWS KMS stack All requests to AWS KMS must be made over the Transport Layer Security protocol (TLS) and terminate on a n AWS KMS host AWS KMS hosts only allow TLS with a ciphersuite that provides perfect forward secrecy [2] The AWS KMS hosts use protocols and procedures defined within this whitepaper to fulfill those requests through the HSM s AWS KMS authenticates and authorizes your requests using the same credential and policy mechanisms that are available for all other AWS API operation s including AWS Identity and Access Management (IAM) ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 6 of 42 Design Goals AWS KMS is designed to meet the following requirements Durability : The durability of cryptographic keys is designed to equal that of the highest durability services in AWS A single cryptographic key can encrypt large volumes of customer data accumulated over a long time period However data encrypted under a key becomes irretrievable if the key is lost Quorum based access : Multiple Amazon employee s with rolespecific access are required to perform a dministr ative actions on the HSMs There is no mechanism to export plaintext CMKs The confidentiality of your cryptographic keys is crucial Access control : Use of keys is protected by access control policies defined and managed by you Low latency and high throughput : AWS KMS provide s cryptographic operations at latency and throughput leve ls suitable for use by other services in AWS Regional independence : AWS provides regional independence for customer data Key usage is isolated within an AWS Region Secure source of random numbers : Because strong cryptography depends on truly unpredicta ble random number generation AWS provides a high quality and validated s ource of random numbers Audit : AWS records the use of cryptographic keys in AWS CloudTrail logs You can use AWS CloudTrail logs to inspect use of your cryptographic keys including use of keys by AWS services on your behalf To achieve these goals the AWS KMS system includes a set of KMS operators and service host operators (collectively “operators ”) that administer “domains” A domain is a regionall y defined set of AWS KMS servers HSM s and operators Each KMS operator has a hardware token that contains a private and public key pair used to authenticate its actions The HSM s have an additional private and ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 7 of 42 public key pair to establish encryption keys that protect HSM state synchronization This whitepaper illustrates how the AWS KMS protects your keys and other data that you want to encrypt Throughout th is document encryption keys or data you want to encrypt are referred to as “secrets” or “secret m aterial” Background This section contains a description of the cryptographic primitives 
and where they are used In addition it introduces the basic elements of AWS KMS Cryptographic Primitives AWS KMS uses configurable cryptographic algorithms so that the system can quickly migrate from one approved algorithm or mode to another The initial default set of cryptographic algorithms has been selected from Federal Information Processing Standard ( FIPS approved ) algorithms for their security properties and performance Entropy and Random Number Generation AWS KMS key generation is performed on the KMS HSM s The HSM s implement a hybrid random number generator that uses the NIST SP800 90A Deterministic Random Bit Generator (DRBG) CTR_DRBG using AES 256[ 3] It is seeded with a nondeterministic random bit generator with 384bits of entropy and updated with additional entropy to provide prediction resistanc e on every call for cryptographic material Encryption All symmetric key encrypt commands used within HSM s use the Advanced Encryption Standards (AES) [ 4] in Galois Counter Mode (GCM) [ 5] using 256 bit keys The analogous calls to decrypt use the inverse function AES GCM is an authenticated encryption scheme In addition to encrypting plaintext to produce ciphertext it computes an authentication tag over the ciphertext and any additional data over which au thentication is required (additionally authenticated data or AAD) The authentication tag helps ensure ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 8 of 42 that the data is from the purported source and that the ciphertext and AAD have not been modified Frequently AWS omits the inclusion of the AAD in o ur descriptions especially when referring to the encryption of data keys It is implied by surrounding text in these cases that the structure to be encrypted is partitioned between the plaintext to be encrypted and the cleartext AAD to be protected AWS K MS provides an option for you to import CMK key material instead of relying on the service to generate the key This imported key material can be encrypted using RSAES PKCS1 v1_5 or RSAES OAEP [ 6] to protect the key during transport to the KMS HSM The RSA key pairs are generated on KMS HSM s The imported key material is decrypted on a KMS HSM and reencrypted under AES GCM before being stored by the service Key Derivation Functions A key deriv ation function is used to derive additional keys from an initial secret or key AWS KMS uses a key derivation function (KDF) to derive per call keys for every encryption under a CMK All KDF operations use the KDF in counter mode [7] using HMAC [FIPS197] [8] with SHA256 [FIPS180] [9] The 256 bit derived key is used with AES GCM to encrypt or decrypt customer data and keys Digital Signatures All service entities have an elliptic cur ve digital signature algorithm (ECDSA) key pair They perform ECDSA as defined in Use of Elliptic Curve Cryptography (ECC) Algorithms in Cryptographic Message Syntax (CMS) [10] and X962 2005: Public Key Cry ptography for the Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA)[ 11] The entities use the secure hash algorithm defined in Federal Information Processing Standards Publications FIPS PUB 1804 [9] known as SHA384 The keys are generated on the curve secp384r1 (NIST P384) [12] Digital signatures are used to authenticate commands and communications between AWS KMS entities A key pair is denoted as (d Q) the sign ing operation as Sig = Sign(d msg) and the verify operation as Verify(Q msg Sig) The verify operation returns an indication of success or 
failure ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 9 of 42 It is frequently convenient to represent an entity by its public key Q In these cases the identifying information such as an identifier or a role is assumed to accompan y the public key Key Establishment AWS KMS uses t wo different key establishment methods The first is defined as C(1 2 ECC DH) in Recommendation for Pair Wise K ey Establishment Schemes Using Discrete Logarithm Cryptography (Revision 2) [1 3] This scheme has an initiator with a static signing key The initiator generates and signs an ephemeral elliptic curve Diffie Hellman (ECDH) key intended for a recipient with a static ECDH agreement key This method uses one ephemeral key and two static keys using ECDH That is the derivation of the label C(1 2 ECC DH) This method is sometimes called one pass ECDH The second key establishment method is C(2 2 ECC DH) [1 3] In this scheme both parties have a static signing key and they generate sign and exchange an ephemeral ECDH key This method uses two static keys and two ephemeral keys using ECDH That is the derivation of the label C(2 2 ECC DH) This method is sometimes called ECDH ephemeral or ECDHE All ECDH keys are generated on the curve secp3 84r1 (NIST P384) [12] Envelope Encryption A basic construction used within many cryptographic systems is envelope encryption Envelope encryption uses two or more cryptographic keys to secure a message Typically one key is derived from a longer term sta tic key k and another key is a per message key msgKey which is generated to encrypt the message The envelope is formed by encrypting the message ciphertext = Encrypt(msgKey message) encrypting the message key with the long term static key encKey = Encrypt(k msgKey) and packaging the two values (encKey ciphertext) into a single structure or envelope encrypted message The recipient with access to k can open the enveloped message by first decrypting the encrypted key and then decrypting the message AWS KMS provides the ability to manage these longer term static keys and automate the process of envelope encryption of your data ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 10 of 42 AWS KMS uses envelope encryption internally to secure confidential material between service endpoint s In addition to the encryption cap abilities provided within the KMS service the AWS Encryption SDK [14] provides client side envelope encryption libraries You can use these libraries to protect your data and the encryption keys used to encrypt that data Basic Concepts This section introduces some basic AWS KMS concepts that are elaborated on throughout this whitepaper Customer master key (CMK) : A logical key that represents the top of your key hierarchy A CMK is given an Ama zon Resource Name (ARN) that includes a unique key identifier or key ID Alias: A user friendly name or alias can be associated with a CMK The alias can be used interchangeably with key ID in many of the AWS KMS API operation s Permissions: A policy a ttached to a CMK that defines permissions on the key The default policy allows any principals that you define as well as allowing the AWS account root user to add IAM policies that reference the key Grants: Grants are intended to allow delegated use of CMKs when the duration of usage is not known at the outset One use of grants is to define scoped down permissions for an AWS service The service uses your key to do asynchronous work on your behalf on encrypted data in the absence of a 
direct signed API call from you Data keys: Cryptographic keys generated on HSM s under a CMK AWS KMS allows authorized entities to obtain data keys protected by a CMK They can be returned both as plaintext (unencrypted) data keys and as encrypted data keys Ciph ertexts : Encrypted output of AWS KMS is referred to as customer ciphertext or just ciphertext when there is no confusion Ciphertext contains ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 11 of 42 encrypted data with additional information that identifies the CMK to use in the decryption process Encryption context: A key–value pair map of additional information associated with AWS KMS –protected infor mation AWS KMS uses authenticated encryption to protect data keys The encryption context is incorporated into the AAD of the authenticated encryption in AWS KMS –encrypted ciphertexts This context information is optional and not returned when requesting a key (or an encryption operation) But if used this context value is required to successfully complete a decryption operation An intended use of the encryption context is to provide additional authenticated information that can be used to enforce policie s and be included in the AWS CloudTrail logs For example a key –value pair of {"key name":"satellite uplink key"} could be used to name the data key Subsequently whenever the key is used a AWS CloudTrail entry is made that includes “key name”: “satellite uplink key” This additional information can provide useful context to understand why a given master key was used Customer ’s Key Hierarchy Your key hierarchy starts with a top level logical key a CMK A CMK represents a container for top level key material and is uniquely defined within the AWS service namespace with an ARN The ARN include s a uniquely generated key identifier a CMK key ID A CMK is created based on a user initiated request through AWS KMS Upon reception AWS KMS request s the creation of an initial HSM backing key ( HBK ) to be placed into the CMK container All such HSM resident only keys are denoted in red The HBK is generated on an HSM in the domain and is designed never to be exported from the HSM in plaintext Instead the HBK is exported encrypted under HSM managed domain keys These exported HBK s are referred to as exported key tokens (EKT s) The EKT is exported to a highly durable low latency storage You receive an ARN to the logical CMK This represents the top of a key hierarchy or cryptographic context for you You can create multiple CMK s within your account and set policies on your CMKs like any other AWS named resource Within the hierarchy of a specific CMK the HBK can be though t of as a version of the CMK When you want to rotate the CMK through AWS KMS a new HBK is created and associated with the CMK as the active HBK for the CMK The older ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 12 of 42 HBK s are preserved and can be used to decrypt and verify previously protected data but only the active cryptographic key can be used to protect new information Figure 2: CMK hierarchy You can make requests through AWS KMS to use your CMK s to directly protect information or request additional HSM generated keys protected under you r CMK These keys are called customer data keys or CDKs CDKs can be returned encrypted as ciphertext (CT) in plaintext or both All objects encrypted under a CMK (either customer supplied data or HSM generated keys ) can be decrypted only on an HSM via a call through AWS KMS The 
returned ciphertext or the decrypted payload is never stored within AWS KMS. The information is returned to you over your TLS connection to AWS KMS. This also applies to calls made by AWS services on your behalf. We summarize the key hierarchy and the specific key properties in the following table.

Key / Description / Lifecycle

Domain key: A 256-bit AES-GCM key, held only in the memory of an HSM, used to wrap versions of the CMKs (the HSM backing keys). Rotated daily.¹
HSM backing key: A 256-bit symmetric key, held only in the memory of an HSM, used to protect customer data and keys. Stored encrypted under domain keys. Rotated yearly² (optional configuration).
Data encryption key: A 256-bit AES-GCM key, held only in the memory of an HSM, used to encrypt customer data and keys. Derived from an HBK for each encryption. Used once per encrypt and regenerated on decrypt.
Customer data key: A user-defined key, exported from the HSM in plaintext and ciphertext. Encrypted under an HSM backing key and returned to authorized users over a TLS channel. Rotation and use are controlled by the application.

¹ AWS KMS may from time to time relax domain key rotation to at most weekly to account for domain administration and configuration tasks.
² Default service master keys created and managed by AWS KMS on your behalf are automatically rotated every three years.

Use Cases
This whitepaper presents two use cases. The first demonstrates how AWS KMS performs server-side encryption with CMKs on an Amazon Elastic Block Store (Amazon EBS) volume. The second is a client-side application that demonstrates how you can use envelope encryption to protect content with AWS KMS.

Amazon EBS Volume Encryption
Amazon EBS offers a volume encryption capability. Each volume is encrypted using AES-256-XTS [15]. This requires two 256-bit volume keys, which you can think of as one 512-bit volume key. The volume key is encrypted under a CMK in your account. For Amazon EBS to encrypt a volume for you, it must have access to generate a volume key (VK) under a CMK in the account. You do this by providing a grant for Amazon EBS to the CMK to create data keys and to encrypt and decrypt these volume keys. Amazon EBS then uses AWS KMS with a CMK to generate AWS KMS–encrypted volume keys.

Figure 3: Amazon EBS volume encryption with AWS KMS keys

Encrypting data being written to an Amazon EBS volume involves the following steps:
1. Amazon EBS obtains an encrypted volume key under a CMK through AWS KMS over a TLS session and stores the encrypted key with the volume metadata.
2. When the Amazon EBS volume is mounted, the encrypted volume key is retrieved.
3. A call to AWS KMS over TLS is made to decrypt the encrypted volume key. AWS KMS identifies the CMK and makes an internal request to an HSM in the fleet to decrypt the encrypted volume key. AWS KMS then returns the volume key to the Amazon Elastic Compute Cloud (Amazon EC2) host that contains your instance over the TLS session.
4. The volume key is used to encrypt and decrypt all data going to and from the attached Amazon EBS volume. Amazon EBS retains the encrypted volume key for later use in case the volume key in memory is no longer available.
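The steps above describe what Amazon EBS and AWS KMS do behind the scenes. As a point of reference only (this sketch is not part of the original text), the following shows how a client could request an encrypted volume under a specific CMK using the AWS SDK for Java; the Availability Zone, size, and key ARN are placeholders, and the v1-style client and method names should be verified against the SDK release you use.

// Sketch: request an Amazon EBS volume encrypted under a specific CMK.
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateVolumeRequest;
import com.amazonaws.services.ec2.model.CreateVolumeResult;

public class CreateEncryptedVolumeSketch {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        CreateVolumeRequest request = new CreateVolumeRequest()
            .withAvailabilityZone("us-east-1a")   // placeholder Availability Zone
            .withSize(100)                        // volume size in GiB
            .withEncrypted(true)                  // ask Amazon EBS to encrypt the volume
            .withKmsKeyId("arn:aws:kms:us-east-1:111122223333:key/example-key-id"); // placeholder CMK ARN

        CreateVolumeResult result = ec2.createVolume(request);
        System.out.println("Created encrypted volume " + result.getVolume().getVolumeId());
    }
}

Amazon EBS and AWS KMS then handle the volume-key generation and decryption steps described above; the caller never handles the plaintext volume key.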
Client-side Encryption
The AWS Encryption SDK [14] includes an API operation for performing envelope encryption using a CMK from AWS KMS. For complete recommendations and usage details, see the related documentation [14]. Client applications can use the AWS Encryption SDK to perform envelope encryption using AWS KMS:

// Instantiate the SDK
final AwsCrypto crypto = new AwsCrypto();

// Set up the KmsMasterKeyProvider backed by the default credentials
final KmsMasterKeyProvider prov = new KmsMasterKeyProvider(keyId);

// Do the encryption
final byte[] ciphertext = crypto.encryptData(prov, message);

The client application executes the following steps:
1. A request is made under a CMK for a new data key. An encrypted data key and a plaintext version of the data key are returned.
2. Within the AWS Encryption SDK, the plaintext data key is used to encrypt the message. The plaintext data key is then deleted from memory.
3. The encrypted data key and encrypted message are combined into a single ciphertext byte array.

Figure 4: AWS Encryption SDK envelope encryption

The envelope-encrypted message can be decrypted using the decrypt functionality to obtain the originally encrypted message:

final AwsCrypto crypto = new AwsCrypto();
final KmsMasterKeyProvider prov = new KmsMasterKeyProvider(keyId);

// Decrypt the data
final CryptoResult<byte[], KmsMasterKey> res = crypto.decryptData(prov, ciphertext);

// We need to check the master key to ensure that the
// assumed key was used
if (!res.getMasterKeyIds().get(0).equals(keyId)) {
    throw new IllegalStateException("Wrong key id!");
}
byte[] plaintext = res.getResult();

1. The AWS Encryption SDK parses the envelope-encrypted message to obtain the encrypted data key and makes a request to AWS KMS to decrypt the data key.
2. The AWS Encryption SDK receives the plaintext data key from AWS KMS.
3. The data key is then used to decrypt the message, returning the initial plaintext.

Figure 5: AWS Encryption SDK envelope decryption

Customer Master Keys
A CMK is a logical key that may refer to one or more HBKs. It is generated as a result of a call to the CreateKey API operation. The following is the CreateKey request syntax:

{
  "Description": "string",
  "KeyUsage": "string",
  "Origin": "string",
  "Policy": "string"
}

The request accepts the following data in JSON format:

Optional Description: Description of the key. We recommend that you choose a description that helps you decide whether the key is appropriate for a task.

Optional KeyUsage: Specifies the intended use of the key. Currently this defaults to ENCRYPT/DECRYPT because only symmetric encryption and decryption are supported.

Optional Origin: The source of the CMK's key material. The default is AWS_KMS. In addition to the default value AWS_KMS, the value EXTERNAL can be used to create a CMK without key material so that you can import key material from your existing key management infrastructure. The use of EXTERNAL is covered in the following section on Imported Master Keys.

Optional Policy: Policy to attach to the key. If the policy is omitted, the key is created with a default policy that allows principals you define, as well as the AWS account root user, to manage it. For details on the policy, see https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html

The call returns a response containing an ARN with the key identifier:

arn:aws:kms:<region>:<owningAWSAccountId>:key/<keyId>
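As an illustration only (not part of the original text), the following sketch shows a CreateKey call followed by an alias assignment using the AWS SDK for Java; the description and alias name are placeholders, and the v1-style class names should be checked against your SDK version.

import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.CreateAliasRequest;
import com.amazonaws.services.kms.model.CreateKeyRequest;
import com.amazonaws.services.kms.model.CreateKeyResult;

public class CreateCmkSketch {
    public static void main(String[] args) {
        AWSKMS kms = AWSKMSClientBuilder.defaultClient();

        // Create a CMK with the default origin (AWS_KMS) and default key policy.
        CreateKeyResult created = kms.createKey(new CreateKeyRequest()
            .withDescription("Example CMK for application data"));   // placeholder description

        // Associate a user-friendly alias with the new CMK.
        kms.createAlias(new CreateAliasRequest()
            .withAliasName("alias/example-app-key")                   // placeholder alias
            .withTargetKeyId(created.getKeyMetadata().getKeyId()));

        System.out.println("Created CMK: " + created.getKeyMetadata().getArn());
    }
}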
If the Origin is AWS_KMS, after the ARN is created a request is made over an authenticated session to an HSM to provision an HBK. The HBK is a 256-bit key that is associated with this CMK key ID. It can be generated only on an HSM and is designed never to be exported outside of the HSM boundary in cleartext. The HBK is generated on the HSM and encrypted under the current domain key DK0. These encrypted HBKs are referred to as EKTs. Although the HSMs can be configured to use a variety of key-wrapping methods, the current implementation uses the authenticated encryption scheme known as AES-256 in Galois Counter Mode (GCM) [5]. As part of the authenticated encryption mode, some cleartext exported key token metadata can be protected. This is stylistically represented as EKT = Encrypt(DK0, HBK).

Two fundamental forms of protection are provided to your CMKs and the subsequent HBKs: authorization policies set on your CMKs, and the cryptographic protections on your associated HBKs. The remaining sections describe the cryptographic protections and the security of the management functions in AWS KMS.

In addition to the ARN, a user-friendly name can be associated with the CMK by creating an alias for the key. Once an alias has been associated with a CMK, the alias can be used in place of the ARN.

Multiple levels of authorization surround the use of CMKs. AWS KMS enables separate authorization policies between the encrypted content and the CMK. For instance, an AWS KMS envelope-encrypted Amazon Simple Storage Service (Amazon S3) object inherits the policy on the Amazon S3 bucket. However, access to the necessary encryption key is determined by the access policy on the CMK.

For the latest information about authentication and authorization policies for AWS KMS, see https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html

Imported Master Keys
AWS KMS provides a mechanism for importing the cryptographic material used for an HBK. As described in the section on Customer Master Keys, when the CreateKey command is used with Origin set to EXTERNAL, a logical CMK is created that contains no underlying HBK. The cryptographic material must be imported using the ImportKeyMaterial API call. This feature allows you to control the key creation and the durability of the cryptographic material. If you use this feature, it is recommended that you take significant care in the handling and durability of these keys in your environment. For complete details and recommendations for importing master keys, see https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys.html

GetParametersForImport
Prior to importing the key material for an imported master key, you must obtain the necessary parameters to import the key. The following is the GetParametersForImport request syntax:

{
  "KeyId": "string",
  "WrappingAlgorithm": "string",
  "WrappingKeySpec": "string"
}

KeyId: A unique key identifier for a CMK. This value can be a globally unique identifier, an ARN, or an alias.

WrappingAlgorithm: The algorithm you use when you encrypt your key material. The valid values are RSAES_OAEP_SHA256, RSAES_OAEP_SHA1, and RSAES_PKCS1_V1_5. AWS KMS recommends that you use RSAES_OAEP_SHA256. You may have to use another key-wrapping algorithm depending on what your key management infrastructure supports.

WrappingKeySpec: The type of wrapping key (public key) to return in the response. Only RSA 2048-bit public keys are supported; the only valid value is RSA_2048.
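To make the wrapping step concrete, the following sketch (an illustration, not part of the original text) shows how 256-bit key material could be encrypted under the returned RSA 2048-bit public key using RSAES-OAEP with SHA-256, the option recommended above. It assumes the public key has already been parsed into a java.security.PublicKey object; the OAEP parameter choices shown are a common configuration and should be confirmed against the current AWS KMS import documentation.

import java.security.PublicKey;
import java.security.spec.MGF1ParameterSpec;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.OAEPParameterSpec;
import javax.crypto.spec.PSource;

public class WrapKeyMaterialSketch {

    // Generate 256-bit symmetric key material to be imported as the HBK.
    static byte[] generateKeyMaterial() throws Exception {
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256);
        return generator.generateKey().getEncoded();
    }

    // Wrap the key material under the KMS-supplied RSA public key using OAEP with SHA-256.
    static byte[] wrapForImport(PublicKey wrappingKey, byte[] keyMaterial) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        OAEPParameterSpec oaepSpec = new OAEPParameterSpec(
            "SHA-256", "MGF1", MGF1ParameterSpec.SHA256, PSource.PSpecified.DEFAULT);
        rsa.init(Cipher.ENCRYPT_MODE, wrappingKey, oaepSpec);
        return rsa.doFinal(keyMaterial);
    }
}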
This call results in a request from the AWS KMS host to an HSM to generate a new RSA 2048-bit key pair. This key pair is used to import an HBK for the specified CMK key ID. The private key is protected by, and accessible only to, an HSM member of the domain. A successful call returns the following values:

{
  "ImportToken": blob,
  "KeyId": "string",
  "PublicKey": blob,
  "ValidTo": number
}

ImportToken: A token that contains metadata to ensure that your key material is imported correctly. Store this value and send it in a subsequent ImportKeyMaterial request.

KeyId: The CMK to use when you subsequently import the key material. This is the same CMK specified in the request.

PublicKey: The public key to use to encrypt your key material. The public key is encoded as specified in section A.1.1 of PKCS#1 [6], an ASN.1 DER encoding of the RSAPublicKey. It is the ASN.1 encoding of two integers as an ASN.1 sequence.

ValidTo: The time at which the import token and public key expire. These items are valid for 24 hours. If you do not use them for a subsequent ImportKeyMaterial request within 24 hours, you must retrieve new ones. The import token and public key from the same response must be used together.

ImportKeyMaterial
The ImportKeyMaterial request imports the necessary cryptographic material for the HBK. The cryptographic material must be a 256-bit symmetric key. It must be encrypted, using the algorithm specified in WrappingAlgorithm, under the public key returned from a recent GetParametersForImport request. ImportKeyMaterial takes the following arguments:

{
  "EncryptedKey": blob,
  "ExpirationModel": "string",
  "ImportToken": blob,
  "KeyId": "string",
  "ValidTo": number
}

EncryptedKey: The encrypted key material. Encrypt the key material with the algorithm that you specified in a previous GetParametersForImport request and the public key that you received in the response to that request.

ExpirationModel: Specifies whether the key material expires. When this value is KEY_MATERIAL_EXPIRES, the ValidTo parameter must contain an expiration date. When this value is KEY_MATERIAL_DOES_NOT_EXPIRE, do not include the ValidTo parameter. The valid values are KEY_MATERIAL_EXPIRES and KEY_MATERIAL_DOES_NOT_EXPIRE.

ImportToken: The import token you received in a previous GetParametersForImport response. Use the import token from the same response that contained the public key that you used to encrypt the key material.

KeyId: The CMK to import key material into. The CMK's Origin must be EXTERNAL.

Optional ValidTo: The time at which the imported key material expires. When the key material expires, AWS KMS deletes the key material and the CMK becomes unusable. You must omit this parameter when ExpirationModel is set to KEY_MATERIAL_DOES_NOT_EXPIRE; otherwise, it is required.

On success, the CMK is available for use within AWS KMS until the specified validity date. Once an imported CMK expires, the EKT is deleted from the service's storage layer.
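For illustration only (not part of the original text), a client that has already wrapped its key material and saved the import token from GetParametersForImport could submit it as follows with the AWS SDK for Java; the key ID is a placeholder, and the v1-style request classes and setter names should be verified against your SDK version.

import java.nio.ByteBuffer;
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.ImportKeyMaterialRequest;

public class ImportKeyMaterialSketch {
    // importToken and wrappedKeyMaterial are assumed to come from the earlier
    // GetParametersForImport response and the client-side wrapping step.
    static void importMaterial(ByteBuffer importToken, ByteBuffer wrappedKeyMaterial) {
        AWSKMS kms = AWSKMSClientBuilder.defaultClient();

        kms.importKeyMaterial(new ImportKeyMaterialRequest()
            .withKeyId("example-external-cmk-id")                   // placeholder CMK with Origin EXTERNAL
            .withImportToken(importToken)
            .withEncryptedKeyMaterial(wrappedKeyMaterial)
            .withExpirationModel("KEY_MATERIAL_DOES_NOT_EXPIRE")); // ValidTo is omitted in this case
    }
}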
Enable and Disable Key
The ability to enable or disable a CMK is separate from the key lifecycle. Disabling a key does not modify the actual state of the key, but instead suspends the ability to use all HBKs that are tied to the CMK. These are simple commands that take just the CMK key ID.

Figure 6: AWS KMS CMK lifecycle³

³ The lifecycle for an EXTERNAL CMK differs. It can be in the state of pending import, and key rotation is not currently available. Further, the EKT can be removed without requiring a waiting period by calling DeleteImportedKeyMaterial.

Key Deletion
You can delete a CMK and all associated HBKs. This is an inherently destructive operation, and you should exercise caution when deleting keys from AWS KMS. AWS KMS enforces a minimum wait time of seven days when deleting CMKs. During the waiting period, the key is placed in a disabled state with a key state indicating Pending Deletion. All calls to use the key for cryptographic operations will fail.

CMKs can be deleted using the ScheduleKeyDeletion API call. It takes the following arguments:

{
  "KeyId": "string",
  "PendingWindowInDays": number
}

KeyId: The unique identifier for the CMK to delete. To specify this value, use the unique key ID or the ARN of the CMK.

Optional PendingWindowInDays: The waiting period, specified in number of days. After the waiting period ends, AWS KMS deletes the CMK and all associated HBKs. This value is optional. If you include a value, it must be between 7 and 30, inclusive. If you do not include a value, it defaults to 30.

Rotate Customer Master Key
You can induce a rotation of your CMK. The current system allows you to opt in to a yearly rotation schedule for your CMK. When a CMK is rotated, a new HBK is created and marked as the active key for all new requests to protect information. The current active key is moved to the deactivated state and remains available for use to decrypt any existing ciphertext values that were encrypted using this version of the HBK. AWS KMS does not store any ciphertext values encrypted under a CMK; as a direct consequence, these ciphertext values require the deactivated HBK to decrypt. These older ciphertexts can be re-encrypted to the new HBK by calling the ReEncrypt API operation. You can set up key rotation with a simple API call or from the AWS Management Console.
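The two lifecycle operations described above each map to a single API call. The following sketch (illustrative, not from the original text) schedules one CMK for deletion with the shortest allowed waiting period and opts another CMK in to yearly rotation using the AWS SDK for Java; the key IDs are placeholders, and the v1-style classes should be checked against your SDK version.

import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.EnableKeyRotationRequest;
import com.amazonaws.services.kms.model.ScheduleKeyDeletionRequest;
import com.amazonaws.services.kms.model.ScheduleKeyDeletionResult;

public class KeyLifecycleSketch {
    public static void main(String[] args) {
        AWSKMS kms = AWSKMSClientBuilder.defaultClient();

        // Schedule deletion with a 7-day waiting period (the minimum allowed).
        ScheduleKeyDeletionResult deletion = kms.scheduleKeyDeletion(new ScheduleKeyDeletionRequest()
            .withKeyId("example-key-id-to-delete")    // placeholder key ID
            .withPendingWindowInDays(7));
        System.out.println("Deletion date: " + deletion.getDeletionDate());

        // Opt a CMK in to yearly rotation of its HSM backing key.
        kms.enableKeyRotation(new EnableKeyRotationRequest()
            .withKeyId("example-key-id-to-rotate"));  // placeholder key ID
    }
}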
Customer Data Operations
After you have established a CMK, it can be used to perform cryptographic operations. Whenever data is encrypted under a CMK, the resulting object is a customer ciphertext. The ciphertext contains two sections: an unencrypted header (or cleartext) portion, protected by the authenticated encryption scheme as the additional authenticated data, and an encrypted portion. The cleartext portion includes the HBK identifier (HBKID). These two immutable fields of the ciphertext value help ensure that AWS KMS can decrypt the object in the future.

Generating Data Keys
A request can be made for a specific type of data key, or a random key of arbitrary length, through the GenerateDataKey API call. A simplified view of this API operation is provided here and in other examples. You can find a detailed description of the full API at https://docs.aws.amazon.com/kms/latest/APIReference/Welcome.html

The following is the GenerateDataKey request syntax:

{
  "EncryptionContext": {"string" : "string"},
  "GrantTokens": ["string"],
  "KeyId": "string",
  "KeySpec": "string",
  "NumberOfBytes": "number"
}

The request accepts the following data in JSON format:

Optional EncryptionContext: Name:value pairs that contain additional data to authenticate during the encryption and decryption processes that use the key.

Optional GrantTokens: A list of grant tokens that represent grants that provide permissions to generate or use a key. For more information on grants and grant tokens, see https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html

Optional KeySpec: A value that identifies the encryption algorithm and key size. Currently this can be AES_128 or AES_256.

Optional NumberOfBytes: An integer that contains the number of bytes to generate.

AWS KMS, after authenticating the command, acquires the current active EKT pertaining to the CMK. It passes the EKT, along with your request and any encryption context, to an HSM over a protected session between the AWS KMS host and an HSM in the domain. The HSM does the following:
1. Generates the requested secret material and holds it in volatile memory.
2. Decrypts the EKT matching the key ID of the CMK that is defined in the request to obtain the active HBK = Decrypt(DKi, EKT).
3. Generates a random nonce N.
4. Derives a 256-bit AES-GCM data encryption key K from HBK and N.
5. Encrypts the secret material: ciphertext = Encrypt(K, context, secret).

The ciphertext value is returned to you and is not retained anywhere in the AWS infrastructure. Without possession of the ciphertext, the encryption context, and the authorization to use the CMK, the underlying secret cannot be returned. GenerateDataKey returns the plaintext secret material and the ciphertext over the secure channel between the HSM and the AWS KMS host; AWS KMS then sends them to you over the TLS session. The following is the response syntax:

{
  "CiphertextBlob": "blob",
  "KeyId": "string",
  "Plaintext": "blob"
}

The management of data keys is left to you as the application developer. They can be rotated at any frequency. Further, the data key itself can be re-encrypted to a different CMK, or to a rotated CMK, using the ReEncrypt API operation. Full details can be found at https://docs.aws.amazon.com/kms/latest/APIReference/Welcome.html

Encrypt
A basic function of AWS KMS is to encrypt an object under a CMK. By design, AWS KMS provides low-latency cryptographic operations on HSMs, so there is a limit of 4 KB on the amount of plaintext that can be encrypted in a direct call to the Encrypt function. The AWS Encryption SDK can be used to encrypt larger messages. AWS KMS, after authenticating the command, acquires the current active EKT pertaining to the CMK. It passes the EKT, along with the plaintext you provided and the encryption context, to any available HSM in the Region over an authenticated session between the AWS KMS host and an HSM in the domain. The HSM executes the following:
1. Decrypts the EKT to obtain the HBK = Decrypt(DKi, EKT).
2. Generates a random nonce N.
3. Derives a 256-bit AES-GCM data encryption key K from HBK and N.
4. Encrypts the plaintext: ciphertext = Encrypt(K, context, plaintext).

The ciphertext value is returned to you, and neither the plaintext data nor the ciphertext is retained anywhere in the AWS infrastructure. Without possession of the ciphertext and the encryption context, and the authorization to use the CMK, the underlying plaintext cannot be returned.
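To tie the GenerateDataKey description to client code, the following sketch (illustrative, not part of the original text) requests a 256-bit data key and uses the plaintext copy locally with AES-GCM, keeping only the encrypted data key alongside the ciphertext, which mirrors the envelope encryption construction described earlier; in practice the AWS Encryption SDK performs this work for you. The key alias and encryption context are placeholders, and the v1-style SDK classes should be verified against your SDK version.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Collections;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.GenerateDataKeyRequest;
import com.amazonaws.services.kms.model.GenerateDataKeyResult;

public class EnvelopeEncryptSketch {
    public static void main(String[] args) throws Exception {
        AWSKMS kms = AWSKMSClientBuilder.defaultClient();

        // 1. Ask AWS KMS for a data key under the CMK, with an encryption context.
        GenerateDataKeyResult dataKey = kms.generateDataKey(new GenerateDataKeyRequest()
            .withKeyId("alias/example-app-key")                           // placeholder CMK alias
            .withKeySpec("AES_256")
            .withEncryptionContext(Collections.singletonMap("purpose", "example")));

        byte[] plaintextKey = toBytes(dataKey.getPlaintext());
        byte[] encryptedKey = toBytes(dataKey.getCiphertextBlob());      // store this with the data

        // 2. Encrypt the message locally with AES-GCM using the plaintext data key.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aesGcm = Cipher.getInstance("AES/GCM/NoPadding");
        aesGcm.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(plaintextKey, "AES"),
            new GCMParameterSpec(128, iv));
        byte[] ciphertext = aesGcm.doFinal("example message".getBytes(StandardCharsets.UTF_8));

        // 3. Discard the plaintext data key; keep (encryptedKey, iv, ciphertext) as the envelope.
        Arrays.fill(plaintextKey, (byte) 0);
        System.out.println("Envelope sizes: key=" + encryptedKey.length + " data=" + ciphertext.length);
    }

    private static byte[] toBytes(ByteBuffer buffer) {
        byte[] bytes = new byte[buffer.remaining()];
        buffer.duplicate().get(bytes);
        return bytes;
    }
}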
Decrypt
A call to AWS KMS to decrypt a ciphertext value accepts an encrypted value (the ciphertext) and an encryption context. AWS KMS authenticates the call using AWS Signature Version 4 signed requests [16] and extracts the HBKID for the wrapping key from the ciphertext. The HBKID is used to obtain the EKT required to decrypt the ciphertext, the key ID, and the policy for the key ID. The request is authorized based on the key policy, any grants that may be present, and any associated IAM policies that reference the key ID. The Decrypt function is analogous to the encryption function.

The following is the Decrypt request syntax:

{
  "CiphertextBlob": "blob",
  "EncryptionContext": {"string" : "string"},
  "GrantTokens": ["string"]
}

The following are the request parameters:

CiphertextBlob: Ciphertext, including metadata.

Optional EncryptionContext: The encryption context. If this was specified in the Encrypt function, it must be specified here or the decryption operation fails. For more information, see https://docs.aws.amazon.com/kms/latest/developerguide/encrypt-context.html

Optional GrantTokens: A list of grant tokens that represent grants that provide permissions to perform decryption.

The ciphertext and the EKT are sent, along with the encryption context, over an authenticated session to an HSM for decryption. The HSM executes the following:
1. Decrypts the EKT to obtain the HBK = Decrypt(DKi, EKT).
2. Extracts the nonce N from the ciphertext structure.
3. Regenerates the 256-bit AES-GCM data encryption key K from HBK and N.
4. Decrypts the ciphertext to obtain plaintext = Decrypt(K, context, ciphertext).

The resulting key ID and plaintext are returned to the AWS KMS host over the secure session and then back to the calling customer application over a TLS connection. The following is the response syntax:

{
  "KeyId": "string",
  "Plaintext": blob
}

If the calling application wants to ensure the authenticity of the plaintext, it must verify that the key ID returned is the one expected.

Re-Encrypting an Encrypted Object
An existing customer ciphertext encrypted under one CMK can be re-encrypted to another CMK through a re-encrypt command. ReEncrypt encrypts data on the server side with a new CMK without exposing the plaintext of the key on the client side. The data is first decrypted and then encrypted. The following is the request syntax:

{
  "CiphertextBlob": "blob",
  "DestinationEncryptionContext": {"string" : "string"},
  "DestinationKeyId": "string",
  "GrantTokens": ["string"],
  "SourceEncryptionContext": {"string" : "string"}
}

The request accepts the following data in JSON format:

CiphertextBlob: Ciphertext of the data to re-encrypt.

Optional DestinationEncryptionContext: Encryption context to be used when the data is re-encrypted.

DestinationKeyId: Key identifier of the key used to re-encrypt the data.

Optional GrantTokens: A list of grant tokens that represent grants that provide permissions to perform decryption.

Optional SourceEncryptionContext: Encryption context used to encrypt and decrypt the data specified in the CiphertextBlob parameter.

The process combines the decrypt and encrypt operations of the previous descriptions: the customer ciphertext is decrypted under the initial HBK referenced by the ciphertext and then encrypted under the current HBK of the intended CMK. When the CMKs used in this command are the same, this command moves the customer ciphertext from an old version of an HBK to the latest version of an HBK. The following is the response syntax:

{
  "CiphertextBlob": blob,
  "KeyId": "string",
  "SourceKeyId": "string"
}

If the calling application wants to ensure the authenticity of the underlying plaintext, it must
verify the SourceKeyId returned is the one expected Domains and the Domain State A cooperative collection of trusted internal AWS KMS entities within an AWS Region is referred to as a domain A domain includes a set of trusted entities a set of rules and a set of secret keys called domain keys The domain keys are shared among HS Ms that are members of the domain A domain state consists of the following fields ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 30 of 42 Field Description Name A domain name to identify this domain Members A list of HS Ms that are members of the domain including their public signing key and public agreement keys Operators A list of entities public signing keys and a role (KMS operator or service host) that re presents the operators of this service Rules A list of quorum rules for each command that must be satisfied to execute a command on the HSM Domain keys A list of domain keys (symmetric keys) currently in use within the domain The full domain state is available only on the HSM The domain state is synchronized between HSM domain members as an exported domain token Domain Keys All the HS Ms in a domain share a set of domain keys {DK r } These keys are shared through a domain state export routine The exported domain state can be imported into any HSM that is a member of the domain How this is accomplished and the additional contents of the domain state are detailed in a following secti on on Managing Domain State The set of domain keys {DK r } always includes one active domain key and several deactivated domain keys Domain keys are rotated daily to ensure that we comply with Recommendation for Key Management Part 1 [1 7] During domain key rotation all existing CMK keys encrypted under the outgoing domain key are reencrypted under the new active domain key The active domain key is used to encrypt any new EKTs The expired domain keys can be used only to decrypt previously encrypted EKTs for a number of days equivalent to the number of recently rotated domain keys Exported Domain Tokens There is a regular need to synchronize state between domain participants This is accomplished through exporting the domain state whenever a change is made to the domain The domain state is exported as an exported domain token ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 31 of 42 Field Description Name A domain name to identify this domain Members A list of HS Ms that are members of the domain including their signing and agreement public keys Operators A list of entities public signing keys and a role that represents the operators of this service Rules A list of quorum rules for each command that must be satisfied to execute a command on an HSM domain member Encrypted domain keys Envelope encrypted domain keys The domain keys are encrypted by the signing member for each of the members listed above enveloped to their public agreement key Signature A signature on the domain state produced by an HSM necessarily a member of the domain that exported the domain state The exported domain token forms the fundamental source of trust for entities operatin g within the domain Managing Domain State The domain state is managed through quorum authenticated commands These changes include modifying the list of trusted participants in the domain modifying the quorum rules for executing HSM commands and period ically rotating the domain keys These commands are authenticated on a per command basis as opposed to authenticated session 
operations; see the API model depicted in Figure 7 An HSM in its initialized and operational state conta ins a set of self generated asymmetric identity keys a signing key pair and a key establishment key pair Through a manual process a KMS operator can establish an initial domain to be created on a first HSM in a region This initial domain consist s of a full domain state as defined in Domains and the domain state section It is installed through a join command to each of the defined HSM members in the domain After an HSM has joined an initial domain it is bound to the rules defined in that domain These rules govern the commands that use customer cryptographic keys or make changes to the host or domain state The authenticated session API operation s that use your cryptographic keys have been defined earlier ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 32 of 42 Figure 9: Domain management Figure 9 depicts how a domain state gets modified It consists of four steps: 1 A quorum based command is sent to an HSM to modify the domain 2 A new domain state is generated and exported as a new exported domain token The state on the HSM is not modified meaning that the change is not enact ed on the HSM 3 A second command is sent to each of the HS Ms in the newly exported domain token to update their domain state with the new domain token 4 The HSM s listed in the new exported domai n token can authenticate t he command and the domain token They can also unpack the domain keys to update the domain state on all HSM s in the domain HSM s do not communicate directly with each other Instead a quorum of operators requests a change to the domain state that result s in a new exported domain token A service host member of the domain is used to distribute the new domain state to every HSM in the domain The leaving and joining of a domain are done through the HSM management functions and th e modification of the domain state is done through the domain management functions ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 33 of 42 Command Description of HSM management Leave domain Causes an HSM to leave a domain deleting all remnants and keys of that domain from memory Join domain Causes an HSM to join a new domain or update its current domain state to the new domain state using the existing domain as source of the initial set of rules to authenticate this message Command Description of domain management Create domain Causes a new domain to be created on an HSM Returns a first domain token that can be distributed to member HSM s of the domain Modify operators Adds or removes operators from the list of authorized operators and their roles in the domain Modify members Adds or removes an HSM from the list of authorized HSM s in the domain Modify rules Modifies the set of quorum rules required to execute commands on an HSM Rotate domain keys Causes a new domain key to be created and marked as the active domain key This moves the existing active key to a deactivated key and removes the oldest deactivated key from the domain state Internal Communication Security Commands between the service hosts /KMS operators and the HSMs are secu red through two mechanisms depicted in Figure 7: a quorum signed request method and an authenticated session using a n HSM service host protocol The quorum signed commands are designed so that no single operator can modify the criti cal security protections provided by the HSMs The commands executed over the authenticated sessions 
help ensure that only authorized service operators can perform operations involving CMKs A ll customer bound secret information is secured across the AWS infrastructure HSM Security Boundary The inner security boundary of AWS KMS is the HSM The HSM has a limited webbased API and no other active physical interfaces in its operational state An operational HSM is provisioned during initialization with the necessary cryptographic keys to establish its role in the domain Sensitive cryptographic materials of the HSM are only stored in volatile memory and erased when the ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 34 of 42 HSM moves out of the opera tional state including intended or unintended shutdowns or resets The HSM API operation s are authenticated either by individual commands or over a mutually authenticated confidential session established by a service host Figure 7: HSM API operation s Quorum Signed Commands Quorum signed commands are issued by operators to HSMs This section describes how quorum based commands are created signed and authenticated These rules are fairly simple For example command Foo requires two me mbers from role Bar to be authenticate d There are three steps in the creation and verification of a quorum based command The first step is the initial command creation ; the second is the submission to additional operators to sign ; and the third is the verification and execution For the purpose of introducing the concepts assume that there is an authentic set of operator ’s public keys and roles {QOS s } and a set of quo rum rules QR = { Command i { Rule {i t}} where each Rule is a set of roles and minimum number N {Role t Nt } For a command to satisfy the quorum rule the command dataset must be signed by a set of operators listed in {QOS s } such that they meet one of the rules listed for that command As mentioned earlier in this whitepaper the set of quorum rules and operators are stored in the domain state and the exported domain token In practice an initial signer signs the command Sig 1 = Sign(d Op1 Command) A second operator also signs the command Sig 2 = Sign(d Op2 Command) The doubly signed message is sent to an HSM for execution The HSM performs the following: ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 35 of 42 1 For each signature it extracts the signer ’s public key from the domain state and verifies the signature on the command 2 It verifies that the set of signers satisfies a rule for the command Authenticated Sessions Your key operations are executed between the externally facing AWS KMS hosts and the HS Ms These commands pertain to the creation and use of cryptographic keys and secure random number generation The commands execute over a session authenticated channel between the service hosts and the HS Ms In addition to the need for authenticity these sessions r equire confidentiality Commands executing over these sessions include the returning of cleartext data keys and decrypted messages intended for you To ensure that these sessions cannot be subverted through man inthemiddle attacks sessions are authentic ated This protocol performs a mutually authenticated ECDHE key agreement between the HSM and the service host The exchange is initiated by the service host and completed by the HSM The HSM also returns a session key (SK) encrypted by the negotiated key and an exported key token that contains the session key The exported key token contains a validity period after which the service host 
must renegotiate a session key A service host is a member of the domain and has an identity signing key pair (dHOS i QHOS i) and an authentic copy of the HSMs’ identity public keys It uses its set of identity signing keys to securely negotiate a session key that can be used between the service host and any HSM in the domain The exported key tokens have a validity period associated with them after which a new key must be negotiated Figure 8: HSM service host oper ator authenticated sessions ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 36 of 42 The process begins with the service host recognition that it requires a session key to send and receive sensitive communication flows between itself and an HSM member of the domain 1 A service host generates an ECDH ephemeral key pair (d1 Q 1) and signs it with its identity key Sig 1 = Sign(dOSQ 1) 2 The HSM verifies the signature on the received public key using its current domain token and creates an ECDH ephemeral key pair ( d2 Q 2) It then completes the ECDH keyexchange accordi ng to Recommendation for Pair Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised) [1 3] to form a negotiated 256 bit AES GCM key The HSM generates a fresh 256 bit AES GCM session key It encrypts the session key with the negotiated key to form the encrypted session key (ESK) It also encrypts the session key under the domain key as an exported key token EKT Finally it signs a return valu e with its identity key pair Sig 2 = Sign( dHSK (Q 2 ESK EKT)) 3 The service host verifies the signature on the received key s using its current domain token The service host then completes the ECDH key exchange according to Recomme ndation for Pair Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised) [1 3] It next decrypts the ESK to obtain the session key SK During the validity period in the EKT the service host can use the negotiated session key SK to send envelope encrypted commands to the HSM Every service host initiated command over this authenticated session includes the EKT The HSM respond s using the same negotiated session key SK Durability Protection Additional service durability is provided by the use of offline HSMs multiple nonvolatile storage of exported domain tokens and redundant storage of encrypted CMKs The offline HSMs are members of the existing domains With the exception of not being onli ne and participating in the regular domain operations the offline HSMs appear identically in the domain state as the existing HSM members ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 37 of 42 The durability design is intended to protect all CMKs in a region should AWS experience a wide scale loss of either the online HSM s or the set of CMKs stored within our primary storage system Imported master keys are not included under the durability protections afforded other CMKs In the event of a regionwide failure in AWS KMS imported master keys may need to be reimported The offline HSM s and the credentials to access them are stored in safes within monitored safe rooms in multiple independent geographical locations Each safe requires at least one AWS security officer and one AWS KMS operator from two independent teams in AWS to obtain these materials The use of these materials is governed by internal policy requiring a quorum of AWS KMS operators to be present ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 38 of 42 References [1] Amazon Web 
Services “FIPS 140 2 Non proprietary Sec urity Policy AWS Key Management Service HSM” version 10101 18 January 2018 https://csrcnistgov/CSRC/media/pr ojects/cryptographic module validation program/documents/security policies/140sp3139pdf [2] NIST Special Publication 800 52 Revision 1 Guidelines for the Selection Configuration and Use of Transport Layer Security (TLS) Implementations April 2014 https ://nvlpubsnistgov/nistpubs/SpecialPublications/NISTSP800 52r1pdf [3] Recommendation for Random Number Generation Using Deterministic Random Bit Generators NIST Special Publication 800 90A Revision 1 June 2015 Available from https://nvlpubsnistgov/nistpubs/SpecialPublications/NISTSP800 90Ar1pdf [4] Federal Information Processing Standards Publication 197 Announcing the Advanced Encryption Standard (AES) November 2001 Available from http://csrcnistgov/publications/fips/fips197/fips 197pdf [5] Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC NIST Special Publication 800 38D November 2007 Available from http://csrcnistgov/publications/nistpubs/800 38D/SP 800 38Dpdf [6] PKCS#1 v22: RSA Cryptograph y Standard RSA Laboratories October 2012 Available from http://wwwemccom/emc plus/rsa labs/pkcs/files/h11300 wp pkcs 1v22rsacryptogra phystandardpdf [7] Recommendation for Key Derivation Using Pseudorandom Functions NIST Special Publication 800 108 October 2009 Available from https://nvl pubsnistgov/nistpubs/legacy/sp/nistspecialpublication800 108pdf [8] Federal Information Processing Standards Publication 198 1 The Keyed Hash Message Authentication Code (HMAC) July 2008 Available from http://csrcnistgov/publications/fips/fips198 1/FIPS 1981_finalpdf ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 39 of 42 [9] Federal Information Processing Standards Publications FIPS PUB 180 4 Secure Hash Standard Aug ust 2012 Available from https://nvlpubsnistgov/nistpubs/FIPS/NISTFIPS180 4pdf [10] Use of Elliptic Curve Cryptography (ECC) Algorithms in Cryptographic Message Syntax (CMS) Brown D Turner S Internet Engineering Task Force July 2010 http://toolsietforg/html/rfc5753/ [11] X962 2005: Public Key Cryptography for the Financial Services Industry: The Elliptic Curve Di gital Signature Algorithm (ECDSA) American National Standards Institute 2005 [12] SEC 2: Recommended Elliptic Curve Domain Parameters Standards for Efficient Cryptography Group Version 20 27 January 2010 http://wwwsecgorg/sec2 v2pdf [13] Recommendation for Pair Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised) NIST Special Publication 800 56A Revision 2 May 2013 Available from http://nvlpubsnistgov/nistpubs/SpecialPublications/NISTSP800 56Ar2pdf [14] Amazon Web S ervices “What is the AWS Encryption SDK” http://docsawsamazoncom/encryption sdk/latest/developer guide/introductionhtml [15] Recommendation for Block Cipher Modes of Operation: The XTS AES Mode for Confidentiality on Storage Devices NIST Special Publication 800 38E January 2010 Available from http://csrcnistgov/p ublications/nistpubs/800 38E/nist sp800 38Epdf [16] Amazon Web Services General Reference (Version 10) “Signing AWS API Request ” http://docsawsamazoncom/g eneral/latest/gr/signing_aws_api_requestshtml [17] Recommendation for Key Management Part 1: General (Revision 3) NIST Special Publication 800 57A January 2016 Available from https://nvlpubsnistgov/nistpubs/SpecialPublications/NISTSP800 57pt1r4pdf ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 40 of 42 
Appendix Abbreviations and Keys This section lists abbreviations and keys reference d throughout the document Abbreviations Abbreviation Definition AES Advanced Encryption Standard CDK customer data key CMK customer master key CMKID customer master key identifier DK domain key ECDH Elliptic Curve Diffie Hellman ECDHE Elliptic Curve Diffie Hellman Ephemeral ECDSA Elliptic Curve Digital Signature Algorithm EKT exported key token ESK encrypted session key GCM Galois Counter Mode HBK HSM backing key HBKID HSM backing key identifier HSM hardware security module RSA Rivest Shamir and Adleman (cryptologic) secp384r1 Standards for Efficient Cryptography prime 384 bit random curve 1 SHA256 Secure Hash Algorithm of digest length 256bits ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 41 of 42 Keys Abbreviation Name: Description HBK HSM backing key : HSM backing k eys are 256 bit master keys from which specific use keys are derived DK Domain key: A domain key is a 256bit AESGCM key It is shared among all the members of a domain and is used to protect HSM backing keys material and HSM service host session keys DKEK Domain key encryption key : A domain key encryption Key is an AES 256 GCM key generated on a host and used for encrypting the current set of domain keys synchronizing domain state across the HSM hosts (dHAK QHAK ) HSM agreement key pair : Every initiated HSM has a locally generated Elliptic Curve Diffie Hellman agreement key pair on the curve secp384r1 (NIST P384) (dE QE) Ephemeral agreement key p air: HSM and service hosts generate ephemeral agreement keys These are Elliptic Curve Diffie Hellman keys on the curve secp384r1 (NIST P384) These are generated in two use cases : to establish a hosttohost encryption key to transport domain key encryption keys in domain tokens and to establish HSM service host session keys to protect sensitive communications (dHSK QHSK ) HSM signature key pair: Every initiated HSM has a locally generated Elliptic Curve Digital Signature key pair on the curve secp384r1 (NIST P384) (dOS QOS ) Operator signature key pair: Both the service host operators and KMS operators have an identity signing key used to authenticate itself to other domain participants K Data encryption key : A 256 bit AES GCM key derived from an HBK using the NIST SP800 108 KDF in counter mode using HMAC with SHA256 SK Session key: A session key is created as a result of an authenticated Elliptic Curve Diffie Hellman key exchanged between a service host operator and an HSM The purpose of the exchange is to secur e communication between the service host and the member s of the domain ArchivedAmazon Web Services – AWS KMS Cryptographic Details August 2018 Page 42 of 42 Contributors The following individu als and organizations contributed to this document: • Ken Beer General Manager KMS AWS Cryptography • Richard Moulds Principal Product Manager – KMS AWS Cryptography • Matthew Campagna Principal Security Engineer AWS C ryptography • Raj Copparapu Sr Prod uct Manager KMS AWS Cryptography Document Revisions For the most up to date version of this white paper please visit: https://d1awsstaticcom/whitepapers/KMS Cryptographic Detailspdf
Archived1 AWS Migration Whitepaper AWS Professional Services March 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived 2 © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments cond itions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its cu stomers Archived 3 Contents Introduction 1 Using the AWS Cloud Adoption Framework ( AWS CAF) to Assess Migration Readiness 2 Impact of Culture on Cloud Migration 4 Business Drivers 5 Migration Strategies 7 “The 6 R’s”: 6 Application Migration Strategies 7 Which Migration Strategy is Righ t for Me? 9 Building a Business Case for Migration 12 People and Organization 15 Orga nizing Your Company’s Cloud Teams 15 Creating a Cloud Center of Excellence 15 Migration Readiness and Planning 17 Assessing Migration Readiness 17 Application Discovery 19 Application Discovery Tools 20 Application Portfolio Analysis 21 Migration Planning 22 Technical Planning 23 The Virtual Private Cloud Environment 24 Migrating 29 First Migrations – Build Experience 29 Migration Execution 30 Application Migration Process 30 Team Models 32 Conclusion 34 Contributors 35 Archived 4 Resources 35 Additional Information 35 FAQ 36 Glossary 37 Archived 5 Abstract Adopting Amazon Web Services presents many benefits such as increased business agility flexibility and reduced costs As an enterprise ’s cloud journey evolves from building and running cloud native applications on AWS to mapping out the migration of an entire enterprise IT estate certain challenges surface Migrating at scale to AWS calls for a level of business transformation in order to fully realize the numerous benefits of operating in a cloud environment including changes to tools processes and skillsets The AWS approach is a culmination of our experiences in helping large companies migrate to the cloud From these experiences we have developed a set of methods and best practices to enable a successful move to AWS Here we discuss the importance of driving organiz ational change and leadership how to establish foundational readiness and plan for migrating at scale and our iterative a pproach to migration execution Migrating to AWS requires an iterative approach which begins with building and evolving your business case as you and your team learn and uncover more data over time through activities like application portfolio discovery and portfol io analysis There are the common migration strategies which will inform your business plan and a recommended approach to organizing and evolving your cloud teams as confidence and capability increases You will stand up a Cloud Center of Excellence (CCoE ) to lead and drive change evangelize your cloud migration initiative establish cloud governance guardrails and enable and 
prepare your organization to provide and consume new services Our approach walks you through what it means to be ready to migrat e at scale and how to establish a solid foundation to save time and prevent roadblocks down the road We will cover our approach to migration execution continuing on with building momentum and acceleration through a method of learn anditerate ArchivedAmazon Web Services – AWS Migration Whitepaper Page 1 Introduction Migrating your existing applications and IT assets to the Amazon Web Services (AWS) Cloud presents an opportunity to transform the way your organization does business It can help you lower costs become more agile develop new skills more quickly and deliver reliable globally available services to your customers Our goal is to help you to implement your cloud strategy successfully AWS has identified key factors to successful IT tra nsformation through our experience engaging and supporting enterprise customers We have organized these into a set of best practices for successful cloud migration Customer scenarios range from migrating smal l single application s to migrating entire data centers with hundreds o f applications We provide an overview of the AWS migration methodology which is built on iterative and continuous progress We discuss the principles that drive our approach and the essential activities that are neces sary for successful enterprise migrations Migrating to AWS is an iterative process that evolve s as your organization develops new skills processes tools and capabilities The initial migrations help build experience and momentum that accelerate your later migration efforts Establish ing the right foundation is key to a successful migration Our migration process balances the business and technical efforts needed to complete a cloud migration W e identify key business drivers for migration and present best strategies for planning and executing a cloud migration Once you understand why you are moving to the cloud it is time to address how to get there There are many challenges to completing a successful cloud migration We have collected common custom er questions from hundreds of cloud migration journeys and listed them here to illustrate common concerns as you embark on your cloud migration journey The order and priorit ization will vary based on your unique circumstances but we believe the exercise of thinking through and prioritizing your organization’s concerns upfront is beneficial: How do I build the right business case? How do I ac curately assess my environment? ArchivedAmazon Web Services – AWS Migration Whitepaper Page 2 How do I learn what I don’t know about my enterprise network topology and application portfolio ? How do I create a migration plan? How do I identify and evaluate the right partners to help me? How do I estimate the cost of a large transition like this? How long will the migration process take to complete? What tools will I need to complete the migration ? How do I handle my legacy applications? How do I accelerate the migration effort to realize the business and technology benefits? 
These questions and many more will be a nswered throughout this paper We have include d support and documentation such as the AWS Cloud Migration Portal 1 The best practices described in this paper will help you build a foundation for a successful migration including build ing a solid business plan defining appropriate processes and identify ing best inclass migration tools and resources to complete the migration Having this foundation will help you avoid the typical migration pitfalls that can lead to cost overruns and migr ation delays The Cloud Adoption Framework ( AWS CAF) AWS developed the AWS Cloud Adoption Framework (AWS CAF) which helps organizations understand how cloud adoption transforms the way they work AWS CAF leverages our experiences assisting companies arou nd the world with their Cloud Adoption Journey Assessing migration readiness across key business and technical areas referred to as Perspectives helps determine the most effective approach to an enterprise cloud migration effort First let’s outline wha t we mean by perspective AWS CAF is organized into six areas of focus which span your entire organization We describe these areas of focus as Perspectives: Business People Governance Platform Security and Operations For further reading please see the AWS CAF Whitepaper 2 AWS CAF provides a mental model to establish areas of focus in determining readiness to migrate and creating a set of migration execution workstreams As these are key areas of ArchivedAmazon Web Services – AWS Migration Whitepaper Page 3 the business impacted by cloud adoption it’s important that we create a migration plan which considers and incorporates the necessary requir ements across each area Figure 1: AWS Cloud Adoption Framework People and Technology Perspectives The following table presents a description of each Perspective and the common roles involved Table 1: AWS CAF perspectives Perspective Description and Com mon Roles Involved Business Business support capabilities to optimize business value with cloud adoption Common Roles: Business Managers; Finance Managers; Budget Owners; Strategy Stakeholders People People development training communications and change management Common Roles: Human Resources; Staffing; People Managers Governance Managing and measuring resulting business outcomes Common Roles: CIO; Program Managers; Project Managers; Enterprise Architects; Business Analysts; Portfolio Managers Platform Develop maintain and optimize cloud platform solutions and services Common Roles: CTO; IT Managers; Solution Architects Security Designs and allows that the workloads deployed or developed in the cloud align to the organization’s security control resiliency and compliance requirements Common Roles: CISO; IT Security Managers; IT Security Analysts; Head of Audit and Compliance Operations Allows system health and reliability through the move to the cloud and delivers an agile cloud comp uting operation Common Roles: IT Operations Managers; IT Support Managers ArchivedAmazon Web Services – AWS Migration Whitepaper Page 4 Motivating Change Cultural issues are at the root of many failed business transformations yet most organizations do not assign explicit responsibility for culture – Gartner 2016 Culture is critical to cloud migration Cloud adoption can fail to reach maximum potential if co mpanies do not consider the impact to culture people and processes in addition to the technology Onpremise s infrastructure h as been historically manag ed by people and even with advancements in server 
The impact of culture on cloud, and of cloud on culture, does not need to be a daunting or arduous proposition. Be aware and intentional about the cultural changes you are looking to drive, and manage the people side of change. Measure and track the cultural change just as you would the technology change. We recommend implementing an organizational change management (OCM) framework to help drive the desired changes throughout your organization.

Table 2: Organizational change management to accelerate your cloud transformation

The AWS OCM Framework guides you through mobilizing your people, aligning leadership, envisioning the future state of operating in the cloud, engaging your organization beyond the IT environment, enabling capacity, and making all of those changes stick for the long term. You can find additional information on this topic in the Resources section of this paper.

Business Drivers

The number one reason customers choose to move to the cloud is for the agility they gain. The AWS Cloud provides more than 90 services, including everything from compute, storage, and databases to continuous integration, data analytics, and artificial intelligence. You are able to move from idea to implementation in minutes, rather than the months it can take to provision services on premises.

In addition to agility, other common reasons customers migrate to the cloud include increased productivity, data center consolidation or rationalization, and preparing for an acquisition, divestiture, or reduction in infrastructure sprawl. Some companies want to completely reimagine their business as part of a larger digital transformation program. And of course, organizations are always looking for ways to reduce costs. Common drivers that apply when migrating to the cloud are:

• Operational Costs – Operational costs are the costs of running your infrastructure. They include the unit price of infrastructure, matching supply and demand, investment risk for new applications, markets, and ventures, employing an elastic cost base, and building transparency into the IT operating model.

• Workforce Productivity – Workforce productivity is how efficiently you are able to get your services to market. You can quickly provision AWS services, which increases your productivity by letting you focus on the things that make your business different rather than on the things that don't, like managing data centers. With over 90 services at your disposal, you eliminate the need to build and maintain these independently. We see workforce productivity improvements of 30% to 50% following a large migration.

• Cost Avoidance – Cost avoidance is setting up an environment that does not create unnecessary costs. Eliminating the need for hardware refresh and maintenance programs is a key contributor to cost avoidance. Customers tell us they are not interested in the cost and effort required to execute a big refresh cycle or data center renewal, and are accelerating their move to the cloud as a result.
• Operational Resilience – Operational resilience is reducing your organization's risk profile and the cost of risk mitigation. As of June 2017, AWS spans 16 Regions comprising 42 Availability Zones (AZs). With AWS, you can deploy your applications in multiple Regions around the world, which improves your uptime and reduces your risk-related costs. After migrating to AWS, our customers have seen improvements in application performance, better security, and a reduction in high-severity incidents. For example, GE Oil & Gas saw a 98% reduction in P1/P0 incidents along with improved application performance.

• Business Agility – Business agility is the ability to react quickly to changing market conditions. Migrating to the AWS Cloud helps increase your overall operational agility. You can expand into new markets, take products to market quickly, and acquire assets that offer a competitive advantage. You also have the flexibility to speed up divestiture or acquisition of lines of business. Operational speed, standardization, and flexibility develop when you use DevOps models, automation, monitoring, and auto-recovery or high-availability capabilities.

Migration Strategies

This is where you start to develop a migration strategy. Consider where your cloud journey fits into your organization's larger business strategy, and find opportunities to align the vision. A well-aligned migration strategy, with a supporting business case and a well-thought-out migration plan, sets the proper groundwork for cloud adoption success.

One critical aspect of developing your migration strategy is to collect application portfolio data and rationalize it into what we refer to as the 6 R's: Rehost, Replatform, Refactor/Re-architect, Repurchase, Retire, and Retain. This is a method for categorizing what is in your environment, what the interdependencies are, the technical complexity to migrate, and how you will go about migrating each application or set of applications. Using the "6 R" framework outlined below, group your applications into Rehost, Replatform, Refactor/Re-architect, Repurchase, Retire, and Retain. Using this knowledge, you will outline a migration plan for each of the applications in your portfolio. This plan will be iterated on and will mature as you progress through the migration, build confidence, learn new capabilities, and better understand your existing estate.

The complexity of migrating existing applications varies depending on considerations such as architecture, existing licensing agreements, and business requirements. For example, migrating a virtualized, service-oriented architecture is at the low-complexity end of the spectrum, while a monolithic mainframe is at the high-complexity end. Typically, you want to begin with an application on the low-complexity end of the spectrum to allow for a quick win, to build team confidence, and to provide a learning experience. You also want to choose an application that has business impact. These strategies will help build momentum.

"The 6 R's": 6 Application Migration Strategies

The 6 most common application migration strategies we see are:

1. Rehost (referred to as "lift and shift") – Move applications without changes. In large-scale legacy migrations, organizations are looking to move quickly to meet business objectives, and the majority of these applications are rehosted. GE Oil & Gas found that, even without implementing any cloud optimizations, it could save roughly 30% of its costs by rehosting.
Most rehosting can be automated with tools (for example, AWS VM Import/Export). Some customers prefer to do this manually as they learn how to apply their legacy systems to the new cloud platform. Applications are also easier to optimize or re-architect once they are already running in the cloud, partly because your organization will have developed the skills to do so, and partly because the hard part (migrating the application data and traffic) has already been done.

2. Replatform (referred to as "lift, tinker, and shift") – Make a few cloud optimizations to achieve a tangible benefit without changing the core architecture of the application. For example, reduce the amount of time you spend managing database instances by migrating to a database-as-a-service platform like Amazon Relational Database Service (Amazon RDS), or migrate your application to a fully managed platform like AWS Elastic Beanstalk. A large media company migrated hundreds of web servers that it ran on premises to AWS. In the process, it moved from WebLogic (a Java application container that requires an expensive license) to Apache Tomcat, an open-source equivalent. By migrating to AWS, this media company saved millions of dollars in licensing costs and increased savings and agility.

3. Refactor/Re-architect – Reimagine how the application is architected and developed, using cloud-native features. This is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application's existing environment. Are you looking to migrate from a monolithic architecture to a service-oriented (or serverless) architecture to boost agility or improve business continuity? This strategy tends to be the most expensive, but it can also be the most beneficial if you have a good product-market fit.

4. Repurchase – Move from perpetual licenses to a software-as-a-service model. For example, move from a customer relationship management (CRM) system to Salesforce.com, an HR system to Workday, or a content management system (CMS) to Drupal.

5. Retire – Remove applications that are no longer needed. Once you have completed discovery for your environment, ask who owns each application. As much as 10% to 20% of an enterprise IT portfolio is no longer useful and can be turned off. These savings can boost your business case, direct your team's attention to the applications people use, and reduce the number of applications you have to secure.

6. Retain (referred to as revisit) – Keep applications that are critical for the business but that require major refactoring before they can be migrated. You can revisit all applications that fall in this category at a later point in time.

Figure 2: Six most common application migration strategies
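To make this categorization step concrete, the short sketch below (plain Python, with an entirely hypothetical inventory) groups applications by their assigned strategy and orders the low-complexity candidates first within each group, reflecting the "quick win" guidance above. The application names, strategies, and complexity scores are illustrative only.

```python
from collections import defaultdict

# Hypothetical portfolio entries: name, assigned strategy (one of the 6 R's),
# and a rough complexity score used to order work within each group.
portfolio = [
    {"name": "intranet-portal", "strategy": "rehost", "complexity": 2},
    {"name": "hr-system", "strategy": "repurchase", "complexity": 1},
    {"name": "billing-engine", "strategy": "refactor", "complexity": 5},
    {"name": "reporting-db", "strategy": "replatform", "complexity": 3},
    {"name": "legacy-fax-gateway", "strategy": "retire", "complexity": 1},
]

groups = defaultdict(list)
for app in portfolio:
    groups[app["strategy"]].append(app)

# Plan low-complexity applications first within each strategy group.
for strategy, apps in groups.items():
    ordered = sorted(apps, key=lambda a: a["complexity"])
    names = ", ".join(a["name"] for a in ordered)
    print(f"{strategy}: {names}")
```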
Which Migration Strategy is Right for Me?

Choosing the right migration strategy depends on your business drivers for cloud adoption, as well as time considerations, business and financial constraints, and resource requirements. For example, Replatform if you are migrating for cost avoidance and to eliminate the need for a hardware refresh; Figure 3 shows that this strategy involves more effort than a Rehost strategy but less than a Refactor strategy. Rehost the majority of your platform and Refactor later if your data center contract will end in 12 months and you do not want to renew.

Figure 3: Comparison of cloud migration strategies

Consider a phased approach to migrating applications, prioritizing business functionality in the first phase rather than attempting to do it all in one step. In the next phase, optimize applications where the AWS platform can make a notable difference in cost, performance, productivity, or compliance. For example, if you are migrating an application that leverages an Oracle database and your strategy includes replacing Oracle with Aurora PostgreSQL, the best migration approach may be to migrate the application and stabilize it in the migration phase, then execute the database change effort in a subsequent phase. This approach controls risk during the migration phase and focuses on the migration business case and value proposition. There are common objectives that will improve application performance, resilience, and compliance across the portfolio and that should be included in every migration. They should be packaged into the migration process for consistent execution.

Your migration strategy should guide your teams to move quickly and independently. Applying project management best practices that include clear budgets, timelines, and business outcomes supports this goal. Your strategy should address the following questions:

• Is there a time sensitivity to the business case or business driver, for example a data center shutdown or contract expiration?
• Who will operate your AWS environment and your applications? Do you use an outsourced provider today? What operating model would you like to have long term?
• What standards are critical to impose on all applications that you migrate?
• What automation requirements will you impose on applications as a starting point for cloud operations flexibility and speed? Will these requirements be imposed on all applications or a defined subset? How will you impose these standards?
The following are examples:

• We will drive the migration timeline to retire specific facilities and use the savings to fund the transformation to cloud computing. Time is very important, but we will consider any changes that can be done quickly and safely while creating immediate savings.
• We will insource core engineering functions that have been historically outsourced. We will look at technology platforms that remove operational barriers and allow us to scale this function.
• Business continuity is a critical driver for our migration. We will take the time during the migration to improve our position.
• Where application risk and costs are high, we will consider a phased approach: migrate first and optimize in subsequent phases. In these cases, the migration plan must include the second phase.
• For all custom development, we will move to a DevOps model. We will take the time to build the development and release processes and educate development teams, matching each application migration plan to this pattern.

Understanding your application portfolio is an important step for determining your migration strategy and the subsequent migration plan and business case. This strategy does not need to be elaborate, but addressing the questions above helps align the organization and test your operational norms.

Building a Business Case for Migration

IT leaders understand the value that AWS brings to their organization, including cost savings, operational resilience, productivity, and speed of delivery. Building a clear and compelling migration business case provides your organization's leadership with a data-driven rationale to support the initiative. A migration business case has four categories: 1) run cost analysis, 2) cost of change, 3) labor productivity, and 4) business value. A business case for migration addresses the following questions:

• What is the future expected IT cost on AWS versus the existing (base) cost?
• What are the estimated migration investment costs?
• What is the expected ROI, and when will the project be cash flow positive?
• What are the business benefits beyond cost savings?
• How will using AWS improve your ability to respond to business changes?
The following table outlines each cost or value category.

Table 3: Business case cost/value categorization

• Run Cost Analysis – Total Cost of Ownership (TCO) comparison of run costs on AWS post-migration versus the current operating model; impact of AWS purchasing and pricing options (Reserved Instances, volume discounts); impact of AWS discounts (Enterprise Discount Program, service credits, e.g., Migration Acceleration Program incentives).
• Cost of Change – Migration planning/consulting costs; compelling events (e.g., planned refresh, data center lease renewal, divestiture); change management (e.g., training, establishment of a Cloud Center of Excellence, governance and operations model); application migration cost estimate; parallel environments cost.
• Labor Productivity – Estimate of the reduction in hours spent conducting legacy operational activities (requisitioning, racking, patching); productivity gains from automation; developer productivity.
• Business Value – Agility (faster time to deploy, flexibility to scale up or down, mergers and acquisitions, global expansion); cost avoidance (e.g., server refresh, maintenance contracts); risk mitigation (e.g., resilience for disaster recovery or performance); decommissioned asset reductions.

For an enterprise Oil & Gas customer, cost savings was a primary migration driver. This customer realized additional financial and overall business benefits through the course of migrating 300+ applications to AWS. For example, this customer was able to increase business agility and operational resilience, improve workforce productivity, and decrease operational costs. The data from each value category shown in the following table provides a compelling case for migration.

Table 4: A case for migration

Drafting Your Business Case

Your business case will go through several phases of evolution: directional, refined, and detailed. The directional business case uses an estimate of the number of servers and rough order of magnitude (ROM) assumptions around server utilization. The purpose is to gain early buy-in, allowing budgets to be assigned and resources applied. You can develop a refined business case when you have additional data about the scope of the migration and the workloads; the initial discovery process refines the scope of your migration and business case. The detailed business case requires a deep discovery of the on-premises environment and server utilization. We recommend using an automated discovery tool for deep discovery; this is discussed later in the Application Discovery section.

Items to Consider

In building your business case, consider the following items:

• Right-size mapping provides estimates of the AWS services (compute, storage, etc.) required to run the existing applications and processes on AWS. It includes capacity views (as provisioned) and utilization views (based on actual use). This is a significant part of the value proposition, especially in overprovisioned, virtualized data centers. Extend right-size mapping to consider resources that are not required full time, for example turning off development and test servers when not in use and reducing run costs.
• Identify early candidates for migration to establish migration processes and develop experience in the migration readiness and planning phase. This early analysis of the application discovery data will help you determine run rate cost, migration cost, resource requirements, and timelines for the migration.
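To make the "directional" stage concrete, the sketch below shows the kind of rough order-of-magnitude arithmetic involved. The server counts, unit costs, and migration cost are entirely hypothetical; real inputs would come from your discovery data and from the AWS pricing tools referenced below.

```python
# Directional business case sketch with hypothetical inputs.
# Replace these rough order-of-magnitude numbers with your own discovery data.
server_count = 400
on_prem_cost_per_server_month = 450.0   # hardware, power, space, licenses, labor
aws_cost_per_server_month = 280.0       # estimated right-sized instance + storage
one_time_migration_cost = 350_000.0     # planning, tooling, parallel running

monthly_savings = server_count * (on_prem_cost_per_server_month - aws_cost_per_server_month)
payback_months = one_time_migration_cost / monthly_savings
three_year_net = monthly_savings * 36 - one_time_migration_cost

print(f"Estimated monthly savings: ${monthly_savings:,.0f}")
print(f"Estimated payback period: {payback_months:.1f} months")
print(f"Estimated 3-year net benefit: ${three_year_net:,.0f}")
```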
AWS has a series of tools and processes that can help you develop your business case for a migration. The AWS Simple Monthly Calculator can provide directional business case inputs,3 while the AWS Total Cost of Ownership (TCO) calculators can provide a more refined business case.4 Additionally, AWS has tools that can help you estimate the cost of migration.

People and Organization

It is important to develop a critical mass of people with production AWS experience as you prepare for a large migration. Establish operational processes and form a Cloud Center of Excellence (CCoE) that is dedicated to mobilizing the appropriate resources. The CCoE will lead your company through organizational and business transformations over the course of the migration effort. A CCoE institutionalizes best practices, governance standards, and automation, and drives change throughout the organization. When done well, a CCoE inspires a cultural shift to innovation and a change-is-normal mindset.

Organizing Your Company's Cloud Teams

An effective CCoE team evolves over time in size, makeup, function, and purpose. Long-term and short-term objectives, as well as key operating model decisions, will require adjustments to your team. In the early stages of cloud adoption, team development begins as a small, informal group connected by a shared interest: experimentation with cloud implementation. As the cloud initiative grows and the need for a more formalized structure increases, it becomes beneficial to establish a CCoE dedicated to evangelizing the value of cloud.

While the CCoE establishes best practices, methods, and governance for your evolving technology operations, additional small cloud teams form. These small teams migrate candidate applications and application groupings, commonly referred to as migration waves, to the cloud environment. The CCoE directs the operating parameters of the migration teams, and both the CCoE and the migration teams provide feedback. Collectively, lessons are learned and documented, improving efficiency and confidence through hands-on experience.

Creating a Cloud Center of Excellence

The following are guiding principles for the creation of a CCoE:

• The CCoE structure will evolve and change as your organization transforms.
• Diverse, cross-functional representation is key.
• Treat the cloud as your product and the application team leaders as your customers.
• Drive enablement, not command and control.
• Build company culture into everything you do.
• Organizational change management is central to business transformation. Use intentional and targeted organizational change management to change company culture and norms.
• Embrace a change-as-normal mindset. Change of applications, IT systems, and business direction is expected.
• Operating model decisions will determine how people fill roles that achieve business outcomes.

Structure the CCoE to Prepare for Migration at Scale

Designing a CCoE to include people from across impacted business segments, with cross-functional skills and experiences, is important for successful migration at scale. It helps you build subject matter expertise, achieve buy-in, earn trust across your organization, and establish effective guidelines that balance your business requirements. There is no single organizational structure that works for everyone; the following guidelines will help you design a CCoE that represents your company. A CCoE is comprised of two functional groups: the Cloud Business Office (CBO) and Cloud Engineering (see Figure 4).
The functions of each group will help you determine who to include in each group and in the larger CCoE.

The CBO owns making sure that the cloud services meet the needs of your internal customer business services. Business services, and the applications that support them, consume the cloud services provided by IT. IT should adopt a customer-centric model toward business application owners. This tenet represents a shift for most organizations, and it is an important consideration when developing your cloud operating model, CCoE, and cloud team approach. The CBO owns functions such as organizational change management, stakeholder requirements, governance, and cost optimization. It develops user requirements and onboards new applications and users onto the cloud. It also handles vendor management, internal marketing, communications, and status updates to users. You will select representation for IT leadership responsible for the cloud service vision, organizational change management, human resources, financial management, vendor management, and enterprise architecture. One individual may represent multiple functional areas, or multiple individuals may represent one functional area.

The Cloud Engineering group owns functions such as infrastructure automation, operational tools and processes, security tooling and controls, and migration landing zones. They optimize the speed at which a business unit can access cloud resources and optimize use patterns. The Cloud Engineering group focuses on performance, availability, and security. The following figure shows the functional groups that require representation within your company's CCoE.

Figure 4: Functional organization of a CCoE

Migration Readiness and Planning

Migration Readiness and Planning (MRP) is a method that consists of tools, processes, and best practices to prepare an enterprise for cloud migration. The MRP method aligns to the AWS Cloud Adoption Framework and is execution driven. MRP describes a specific program that AWS Professional Services offers; however, we highlight the main topic areas and key concepts below.

Assessing Migration Readiness

The AWS Cloud Adoption Framework (AWS CAF) is a framework for analyzing your IT environment. Using this framework lets you determine your cloud migration readiness. Each perspective of the AWS CAF provides ways of looking at your environment through different lenses to make sure all areas of your business are addressed. Being ready for a large migration initiative requires preparation across several key areas.

Items to consider:

• Have you clearly defined the scope and the business case for the migration?
• Have you evaluated the environment and applications in scope through the lenses of the AWS CAF?
• Is your virtual private cloud (VPC) secure, and can it act as a landing zone for all applications in scope?
• Have your operations and employee skills been reviewed and updated to accommodate the change?
• Do you (or does a partner) have the experience necessary to move the tech stacks that are in scope?
AWS has developed a set of tools and processes to help you assess your organization's current migration readiness state in each of the AWS CAF perspectives. The Migration Readiness Assessment (MRA) process identifies readiness gaps and makes recommendations to fill those gaps in preparation for a large migration effort. The MRA is completed interactively in a cross-group setting, involving key stakeholders and team members from across the IT organization to build a common view of the current state. You may have representatives from IT leadership, networking, operations, security, risk and compliance, application development, enterprise architecture, and your CCoE or CBO. The MRA output includes actions and next steps, and visuals such as a heat map (see Figure 5). The MRA is available through AWS or an AWS Migration Partner.

Figure 5: Migration Readiness Assessment heat map

Application Discovery

Application Discovery is the process of understanding your on-premises environment: determining what physical and virtual servers exist and what applications are running on those servers. You will need to take stock of your existing on-premises portfolio of applications, servers, and other resources to build your business case and plan your migration. You can categorize your organization's on-premises environment based on operating system mix, application patterns, and business scenarios. This categorization can be simple to start. For example, you may group applications based on an end-of-life operating system, or by applications dependent on a specific database or subsystem. Application Discovery will help you develop a strategic approach for each group of applications.

Application Discovery provides you with the required data for project planning and cost estimation. It includes data collection from multiple sources. A common source is an existing Configuration Management Database (CMDB). The CMDB helps with high-level analysis but often lacks fidelity. For example, performance and utilization data are needed to pair on-premises resources with the appropriate AWS resources (for example, matching Amazon EC2 instance types). Manually performing discovery can take weeks or months, so we recommend taking advantage of automated discovery tools. These discovery tools can automate the discovery of all the applications and supporting infrastructure, including sizing, performance, utilization, and dependencies.

Items to consider:

• We recommend using an automated discovery tool. Your environment will change over time; plan how to keep your data current by continuously running your automated discovery tool.
• It may be useful to do an initial application discovery during business case development to accurately reflect the scope.

Discovery Tools

Discovery tools are available in the AWS Marketplace under the Migration category. Additionally, AWS has built the Application Discovery Service (ADS). ADS discovers server inventories and performance characteristics through either an appliance connector for virtual servers or agents installed on physical or virtual hosts.
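Once agents or the connector are reporting data, the ADS inventory can also be queried programmatically. The sketch below is a minimal example, assuming the boto3 "discovery" client, agents already registered in the account, and configured credentials; the attribute keys shown are typical of what the service returns but the exact set depends on what has been collected, so treat this as a starting point rather than a complete inventory pipeline.

```python
import boto3

# Assumes ADS agents/connectors are already deployed and reporting,
# and that credentials for the account are configured.
discovery = boto3.client("discovery", region_name="us-west-2")

# List discovered server configurations (paginate in larger environments).
servers = discovery.list_configurations(configurationType="SERVER")

for item in servers.get("configurations", []):
    # Attribute keys are returned as flat strings such as "server.hostName";
    # which keys are present depends on what the agent has collected.
    host = item.get("server.hostName", "unknown")
    os_name = item.get("server.osName", "unknown")
    print(f"{host}: {os_name}")
```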
An application discovery tool can:

• Automatically discover the inventory of infrastructure and applications running in your data center, and maintain the inventory by continually monitoring your systems.
• Help determine how applications are dependent on each other or on underlying infrastructure.
• Inventory versions of operating systems and services for analysis and planning.
• Measure applications and processes running on hosts to determine performance baselines and optimization opportunities.
• Provide a means to categorize applications and servers and describe them in a way that is meaningful to the people who will be involved in the migration project.

You can use these tools to build a high-fidelity, real-time model of your applications and their dependencies. This automates the time-consuming process of discovery, data collection, and analysis.

Items to consider:

• An automated discovery tool can save time and energy when bringing a CMDB up to date. Keeping the inventory up to date is key as the project progresses, and a tool helps make this less painful.
• Discovery tools on the market each have their special purpose or capability, so analyzing this against your needs will help you select the right tool for your environment.

Application Portfolio Analysis

Application portfolio analysis takes the application discovery data and then begins grouping applications based on patterns in the portfolio. It identifies the order of migration and the migration strategy (i.e., which of the 6 R's outlined earlier will be used) for migrating the given pattern. The result of this analysis is a broad categorization of resources aligned by common traits. Special cases may also be identified that need special handling. Examples of this high-level analysis are:

• The majority of the servers are Windows based, with a consistent standard OS version. Some of the servers might require an OS upgrade.
• Distribution of databases across multiple database platforms: 80% of the databases are Oracle and 20% are SQL Server.
• Grouping of applications and servers by business unit: 30% marketing and sales applications, 20% HR applications, 40% internal productivity applications, and 10% infrastructure management applications.
• Grouping of resources across type of environment: 50% production, 30% test, and 20% development.
• Scoring and prioritizing based on different factors: opportunity for cost savings, business criticality of the application, utilization of servers, and complexity of migration.
• Grouping based on the 6 R's: 30% of the portfolio could use a rehost pattern, 30% require some level of replatforming changes, 30% require application work (re-architecture) to migrate, and 10% can be retired.

The data-driven insights you get from the application discovery work will become the foundation for migration planning as you move into the migration readiness phase of your project.
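Breakdowns like those listed above can be produced directly from your discovery data. The following plain-Python sketch uses a hypothetical inventory structure to compute the distribution of servers by environment, proposed strategy, and operating system; the field names are illustrative and would map to whatever your discovery tool actually exports.

```python
from collections import Counter

# Hypothetical discovery export: one record per server.
inventory = [
    {"os": "Windows Server 2012", "environment": "production", "strategy": "rehost"},
    {"os": "RHEL 6", "environment": "test", "strategy": "replatform"},
    {"os": "Windows Server 2008", "environment": "production", "strategy": "refactor"},
    {"os": "RHEL 7", "environment": "development", "strategy": "rehost"},
    {"os": "Windows Server 2012", "environment": "production", "strategy": "retire"},
]

def distribution(records, field):
    """Return the percentage distribution of a field across the inventory."""
    counts = Counter(r[field] for r in records)
    total = len(records)
    return {value: round(100 * count / total) for value, count in counts.items()}

print("By environment:", distribution(inventory, "environment"))
print("By strategy:", distribution(inventory, "strategy"))
print("By OS:", distribution(inventory, "os"))
```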
Migration Planning

The primary objective of the migration plan is to lead the overall migration effort. This includes managing the scope, schedule, resource plan, issues and risks, coordination, and communication to all stakeholders. Working on the plan early helps organize the project as multiple teams migrate multiple applications. The migration plan considers critical factors such as the migration order for workloads, when resources are needed, and tracking the progress of the migration. We recommend your team use agile delivery methodologies, project control best practices, a robust business communication plan, and a well-defined delivery approach. Recommended migration plan activities include:

• Review project management methods, tools, and capabilities to assess any gaps.
• Define project management methods and tools to be used during the migration.
• Define and create the Migration Project Charter/Communication Plan, including reporting and escalation procedures.
• Develop a project plan, a risk/mitigation log, and a roles and responsibilities matrix (e.g., RACI) to manage the risks that occur during the project and identify ownership for each resource involved.
• Procure and deploy project management tools to support the delivery of the project.
• Identify key resources and leads for each of the migration workstreams defined in this section.
• Facilitate the coordination and activities outlined in the plan.
• Outline resources, timelines, and cost to migrate the targeted environment to AWS.

Technical Planning

Planning a migration goes beyond cost, schedule, and scope. It includes taking the application portfolio analysis data and building an initial backlog of prioritized applications. Build the backlog by conducting a deep analysis of your portfolio and gathering data on use patterns. A small team can lead this process, often from the enterprise architecture team, which is part of your CCoE. The team analyzes and prioritizes the application portfolio and gathers information about the current architecture for each application. They develop the future architecture and capture workload details to execute a streamlined migration.

It is not important to get through every application before beginning execution of the plan. To be agile, do a deep analysis of the first two to three prioritized applications and then begin the migration. Continue deeper analyses of the next applications while the first applications are being migrated. An iterative process helps you avoid feeling overwhelmed by the scale of the project, or limiting your progress as the initial design plans become dated.

Organize applications into migration patterns and into move groups to determine the number of migration teams, cost, and the migration project timeline. Maintain a backlog of applications (about three 2-week sprints) for each migration team in the overall project plan. As you migrate, you gain technical and organizational expertise that you will build into your planning and execution processes. You will be able to take advantage of opportunities to optimize as you progress through your application portfolio. The iterative process allows the project to scale to support migration teams structured by pattern, business unit, geography, or other dimensions that align to your organization and project scope. A high-fidelity model that provides accurate and current application and infrastructure data is critical for making performance and dependency decisions during your migration phase. Having a well-informed plan with good data is one of the key enablers for migrating at speed.

Items to consider:

• Application discovery and portfolio analysis data are important for categorization, prioritization, and planning at this stage. An agile approach allows you to use this data for the migration before it becomes obsolete.
• Iteration helps migrations continue as the detailed plan evolves with new learnings.

The Virtual Private Cloud Environment

The VPC environment is an integrated collection of AWS accounts and configurations where your applications will run. It includes third-party solutions from the AWS Marketplace that address requirements not directly controlled on AWS. You can implement the AWS CAF Security, Operations, and Platform Perspectives to migrate and operate in the cloud environment securely and efficiently. They will be covered together in this section.
Security

Building security into your VPC architecture will save you time and will improve your company's security posture. Cloud security at AWS is the highest priority. AWS customers benefit from AWS Cloud data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations. A compelling advantage of the AWS Cloud is that it allows you to scale and innovate while maintaining a secure environment.

The AWS CAF Security Perspective outlines a structured approach to help you build a foundation of security, risk, and compliance capabilities that will accelerate your readiness and planning for a migration project. To learn more about cloud security, see the AWS security whitepapers.5 The AWS CAF Security Perspective details how to build and control a secure VPC in the AWS Cloud. Figure 6 illustrates the AWS CAF Security Perspective capabilities.

Figure 6: AWS CAF Security Perspective Capabilities

The AWS CAF Security Perspective is comprised of 10 themes:

• Five core security themes – Fundamental themes that manage risk as well as progress by functions outside of information security: identity and access management, logging and monitoring, infrastructure security, data protection, and incident response.
• Five augmenting security themes – Themes that drive continuous operational excellence through availability, automation, and audit: resilience, compliance validation, secure continuous integration/continuous deployment (CI/CD), configuration and vulnerability analysis, and security big data analytics.

By using the ten themes of the Security Perspective, you can quickly iterate and mature security capabilities on AWS while maintaining the flexibility to adapt to business pace and demand.

Items to consider:

• Read the AWS security whitepapers for information on best security practices.
• Engage with AWS to run security workshops to speed up your teams' understanding and implementation.
• Read the AWS Well-Architected Framework and the AWS Well-Architected Security Pillar whitepapers for information on how to architect a secure environment.6,7
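As one concrete illustration of the logging and monitoring and infrastructure security themes, the sketch below uses boto3 to enable VPC Flow Logs for a VPC in the migration environment. The VPC ID, log group name, and IAM role ARN are placeholders that would come from your own landing zone, and this is only one of several delivery options the service supports.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable VPC Flow Logs so network traffic metadata is captured for the
# logging and monitoring capability. All identifiers below are placeholders.
response = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="migration-vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)

print("Flow log IDs:", response.get("FlowLogIds", []))
print("Errors:", response.get("Unsuccessful", []))
```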
Operations

The AWS CAF Operations Perspective describes the focus areas to run, use, operate, and recover IT workloads. Your operations group defines how day-to-day, quarter-to-quarter, and year-to-year business is conducted. IT operations must align with and support the operations of your business. The Operations Perspective defines current operating procedures and identifies the process changes and training needed for successful cloud adoption.

Figure 7: AWS CAF Operations Perspective Capabilities

The Operations Perspective helps you examine how you currently operate and how you would like to operate in the future. Operational decisions relate to the specific applications being migrated. Determine the appropriate Cloud Operating Model (COM) for a particular application or set of applications when envisioning the future state. To learn more about cloud operations, see the AWS operations whitepapers8 and the AWS Well-Architected Operational Excellence Pillar whitepaper.9

There are different uses and users for applications across your business, and products and services will be consumed in different patterns across your organization. Therefore, you will have multiple modes of operating in a cloud environment. When planning for your migration, you will first define the use cases and actors, and then determine how to deliver the solution. To build an organization that is capable of delivering and consuming cloud services, create a Cloud Services Organization. Cloud organizational constructs such as a CCoE, a CBO, and Cloud Shared Services teams all fall within this Cloud Services Organization. The last piece of the COM is the set of capabilities, such as ticketing, workflows, service catalogs, and pipelines, that are required to deliver and consume cloud services. These capabilities help the Cloud Services Organization function effectively.

Items to consider:

• Building a Cloud Center of Excellence early in the process will centralize best practices.
• Recognize that your organization will have multiple operating models (e.g., R&D applications are different than back-office applications).
• A managed service such as AWS Managed Services10 can reduce the time needed to solve operational problems in the early phases. It lets your team focus on improving the migrated applications.

Platform

The AWS CAF Platform Perspective includes principles and patterns for implementing new solutions on the cloud and migrating on-premises workloads to the cloud. IT architects use models to understand and communicate the design of IT systems and their relationships. The Platform Perspective capabilities help you describe the architecture of the target state environment in detail.

Figure 8: AWS CAF Platform Perspective Capabilities

The Platform workstream provides you with proven implementation guidelines. You can repeatedly set up AWS environments that can scale as you deploy new workloads or migrate existing ones. You can establish key platform components that support flexible, baseline AWS environments. These environments can accommodate changing business requirements and workloads. Once in place, your platform can simplify and streamline the decision-making process involved in configuring an AWS infrastructure. The following are key elements of the platform workstream:

• AWS landing zone – Provides an initial structure and predefined configurations for AWS accounts, networks, identity and billing frameworks, and customer-selectable optional packages.
• Account structure – Defines an initial multi-account structure and preconfigured baseline security that can be easily adopted into your organizational model.
• Network structure – Provides baseline network configurations that support the most common patterns for network isolation, implements baseline network connectivity between AWS and on-premises networks, and provides user-configurable options for network access and administration.
• Predefined identity and billing frameworks – Provide frameworks for cross-account user identity and access management (based on Microsoft Active Directory) and centralized cost management and reporting.
• Predefined user-selectable packages – Provide a series of user-selectable packages to integrate AWS-related logs into popular reporting tools, integrate with the AWS Service Catalog, and automate infrastructure. This element offers third-party tools to help you manage and monitor AWS usage and costs.

Items to consider:

• If your business is new to AWS, consider a managed service provider such as AWS Managed Services to build out and manage the platform.
• Identify account structures up front that allow for effective bill-back processes.
• You will have both on-premises and cloud servers working together, at least initially. Consider a hybrid cloud solution.
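A minimal sketch of the network-structure element is shown below, assuming boto3 and a simple single-VPC baseline; the CIDR ranges, Availability Zone, and tags are placeholders. A real landing zone would typically be expressed as infrastructure-as-code templates across multiple accounts rather than ad hoc API calls, so treat this only as an illustration of the kind of baseline being established.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a baseline VPC for a landing zone. CIDR blocks and tags are placeholders.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})

# One private subnet per Availability Zone is a common starting pattern;
# only a single subnet is shown here for brevity.
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

ec2.create_tags(
    Resources=[vpc_id, subnet_id],
    Tags=[{"Key": "environment", "Value": "landing-zone-baseline"}],
)
print(f"Created {vpc_id} with subnet {subnet_id}")
```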
Migrating

First Migrations – Build Experience

MRP develops core operations, security, and platform capabilities to operate at scale, and you will build confidence and momentum for your migration project. Running applications in the new operating model and environment will help you mature these capabilities. It is important to develop migration skills and experience early to help you make informed choices about your workload patterns.

We recommend migrating three to five applications. These applications should be representative of common migration patterns in the portfolio. One example is rehosting an application using existing server replication tools. Other examples are replatforming an application to have its database running on Amazon RDS, or migrating an application that has internet-facing requirements and validating the controls and services involved. Choose the applications before you start the MRP in order to develop an approach and schedule that accommodates your selections.

Working through these initial migrations builds confidence and experience. It informs the migration plan with the patterns and tool choices that fit your organization's needs, and it provides validation and testing of the operational and security processes.

Items to consider:

• Identify patterns (e.g., common architectures, technology stacks, etc.) in the portfolio to create a list of application groupings based on common patterns. This creates a common process for group migrations.
• Your first three to five applications should be representative of common patterns in your portfolio. This will determine the process for moving that pattern in the mass migration to follow.

Migration Execution

In the early migrations, you tested specific migration patterns and your CCoE gained experience. Now you will scale teams to support your initial wave of migrations. The core teams expand to form migration sprint teams that operate in parallel. This is useful for rehost and replatforming patterns that can use automation and tooling to accelerate application migration. In the next section, we cover the migration factory process and expand on the agile team model.

Application Migration Process

Specific patterns with larger volumes, such as rehosting, offer the opportunity to define methods and tools for moving data and application components. However, every application in the execution phase of a migration follows the same six-step process: Discover, Design, Build, Integrate, Validate, and Cutover.

Discover

In the Discover stage, the application portfolio analysis and planning backlog are used to understand the current and future architectures. If needed, more data is collected about the application. There are two categories of information: Discover Business Information (DBI) and Discover Technical Information (DTI). Examples of DBI are application owner, roadmap, cutover plans, and operation runbooks. Examples of DTI are server statistics, connectivity, process information, and data flow. This information can be captured via tools and confirmed with the application owner. The data is then analyzed, and a migration plan for that application is confirmed with both the sprint team and the application owner. In the case of rehost patterns, this is done in groups that match the patterns. The portfolio discovery and planning process provides this information.
Design

In the Design stage, the target state is developed and documented. The target state includes the AWS architecture, the application architecture, and the supporting operational components and processes. A member of the sprint team and the engineering team uses the information collected during the Discover stage to design the application for the targeted AWS environment. This work depends on the migration pattern and includes an infrastructure architecture document that outlines which services to use. The document also includes information about data flow, foundational elements, monitoring design, and how the application will consume external resources.

Build

In the Build stage, the migration design created during the Design stage is executed. The required people, tools, and reusable templates are identified and given to the migration teams. A migration team is selected based on the migration strategy chosen for the application. The team uses these predefined methods and tools to migrate to AWS and asserts basic validations against the AWS-hosted application.

Integrate

In the Integrate stage, your migration team makes the external connections for the application. Your team works with external service providers and consumers of the application to make the connections or service calls to the application. The team then runs the application to demonstrate functionality and operation before the application is ready for the Validate stage.

Validate

In the Validate stage, each application goes through a series of specific tests (that is, build verification, functional, performance, disaster recovery, and business continuity tests) before being finalized and released for the Cutover stage. Your teams evaluate release management, verify rollout and rollback plans, and evaluate performance baselines. Rollback procedures are defined by application within a rollback playbook, which consists of an operations communication plan for users and defines integration, application, and performance impacts. You complete business acceptance criteria by running parallel testing for pre-migrated and migrated applications.
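Parts of the Validate stage can be automated. The sketch below is a minimal smoke test using only Python's standard library that checks a set of hypothetical application endpoints after migration; the URLs are placeholders, and real build-verification, performance, and disaster recovery tests would sit alongside simple checks like this.

```python
import urllib.request
import urllib.error

# Hypothetical endpoints for a migrated application; replace with your own.
ENDPOINTS = [
    "https://app.example.com/health",
    "https://app.example.com/login",
]

def smoke_test(urls, timeout=5):
    """Return (url, status) tuples; status is an HTTP code or an error string."""
    results = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results.append((url, resp.status))
        except (urllib.error.URLError, OSError) as exc:
            results.append((url, f"FAILED: {exc}"))
    return results

for url, status in smoke_test(ENDPOINTS):
    print(f"{url} -> {status}")
```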
production environments are scalable automated maintained and monitored This team also p repares landing zones as needed for migrations Innovation – Develops repeatable solutions that will expedite migrations in coordination with the platform engineering migration and transition teams They work on larger or more complex technical issues for the migration teams Portfolio Discovery & Planning – Accelerates downstream activities by executing application discovery and optimizing application backlogs They work to eliminate objecti ons and minimize wasted effort Migration Factory Teams In the scale out phase of a migration project multiple teams operat e concurrently Some support a large volume of migrations in the rehost and minor replatform patterns These teams are referred to as migration factory teams Your migration factory team increase s the speed of execution of your migration plan Between 20 %50% of an enterprise application portfolio consists of repeated pattern s that can be optimized by a factory approach This is an agile delivery model and it is important to create a release management plan Your plan should be based on current workloads and information generated during the MRP phase You should optimize it continu ally for future migration waves and future migration teams We recommend that you have a backlog of application s that support three sprints for each team This allows you to reprioritiz e applications if you encounter problems that affect the schedule Larger and more complex applications often follow the refactor/ rearchitect pattern They are generally conducted in planned release cycles by the application owner The factory teams are self sufficient and include five to six cross functional roles The se include operations business analyst s and owner s migration engineer s developer s and DevOps professional s The following are examples of migration factory teams that are focused on specific migration patterns : Rehost migration team – Migrates high volume low complexity applications that don’t require material change This team leverage s migration automation tools This approach is integrated into patch andrelease management processes ArchivedAmazon Web Services – AWS Migration Whitepaper Page 34 Replatform migration team – Designs and migrates applications that require a change of platform or a repeatable change in application architecture Refactor/ rearchitect migration team – Designs and migrat es complex or core business applications that have many dependencies In most cases development and technical o perations teams support this business capability The migration becomes a release cycle or a few release cycles within the plan for that team There can be many of these in flight and the role of the CBO is to track timing risks and issues until migrati on completion This team owns the application migration process Items to c onsider: Perform a portfolio analysis to understand common patterns across all applications This can help build repeatable work for the factory teams to execute efficiently Use a partner to help with resource constraints as your team supports regular business activities AWS and the AWS Partner Network (APN) ecosystem can bring specialized resources for specific topics such as databases application development and migration tooling Conclusion We have introduced both the preparation and execution steps required for large migrations to the cloud Analyzing your current state building a plan and iterating the work breaks a large migration into manageable 
activities for efficient execution Looking at a migration as an organizational change project empowers you to build buyin and maintain communications through each stage of the process Build a business case and refine the return on investment as the project progres ses Use the AWS Cloud Adoption Framework to analyze your environment through the different Perspectives : Business People Governance Platform Security and Operations This gives you a complete view of which areas to improve before moving forward with a large migration effort Use a migration factory construct and iterat e the migration patterns to create an optimal move to the AWS Cloud Today migrating to the cloud has moved from ArchivedAmazon Web Services – AWS Migration Whitepaper Page 35 asking “why” to asking “when ” Building an effective migration strategy and plan will change your response to “NOW!” Migration is just the beginning of what is possible Once you have migrated an application consider your migration experience as a capability that you can use for the optimization phases for this application You will have a current architecture and a future design You will implement test and validate changes You will cutover and go live You now have a new IT capability that can drive speed agility and business value for your organization and your compan y Contributors The following individuals and organizations contributed to this document: AWS Professional Services Global Migrations Practice Resources AWS M igration Competency and Partners: https://awsamazoncom/partners/find AWS Whitepapers : https://awsamazoncom/whitepapers AWS Migration Acceleration Program: https: //awsamazoncom/migration acceleration program/ AWS Webinar: How to Manage Organizational Change and Cultural Impact During a Cloud Transformation : https://youtube/2WmDQG3vp0c Additional Information Articles by Stephen Orban Head of Enterprise Strategy at AWS on cloud migration : http://amznto/considering mass migration http://amznto/migration process http://amznto/migration strategies http://amznto/cloud native vsliftandshift ArchivedAmazon Web Services – AWS Migration Whitepaper Page 36 http://amznto/migrate mainframe tocloud FAQ 1 How do I build the right business case? Your business case should be driven by your organizational KPIs and common drivers such as operational costs workforce productivity cost avoidance operational re silience and business agility 2 How do I accurately assess my environment? How do I learn what I don’t know about my enterprise network topology and application portfolio and create a migration plan ? Consider the volume of resources used by each applicatio n and automate the assessment process to confirm that it’s done rapidly and accurately Assessing your environment manually is a time consuming process It exposes your organization to human error Automating the process will help you gain insight into what you don’t know and it will help you more clearly understand and define these uncertainties so they can be factored into your migration strategy 3 How do I identify and evaluate the right partners to help me? Details on Partner offerings can be found at : o AWS Migration Partner Solutions11 o Migration Solutions in AWS Marketplace12 4 How do I estimate the cost of a large transition like this? The AWS Total Cost of Ownership Calculator can compare how much it costs to run your applications in an on premises or colocation environment to what it cost s on AWS 13 5 How long will the migration process take to complete? 
5. How long will the migration process take to complete?
Enterprise migrations that are completed within 18 months generate the greatest ROI. The duration of a migration depends on scope and resources.
6. How do I handle my legacy applications?
Consider taking an incremental approach to your migration by determining which of your legacy applications can be moved most easily, and move these applications to the cloud first. For legacy applications that require a more complicated approach, you can then develop an effective plan for migration.
7. How do I accelerate the migration effort to realize the business and technology benefits more quickly?
Automate the migration process as much as possible. Using migration tools from AWS and APN Partners is the best way to accelerate the migration effort.
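FAQ 2 and FAQ 7 both point to automating discovery and assessment. As one illustration (not guidance from this whitepaper), the hedged Python (boto3) sketch below pulls a high-level summary and a server inventory from AWS Application Discovery Service. It assumes the service's data collectors are already deployed and that the region supports the service; the attribute key "server.hostName" and the region shown are assumptions rather than anything stated in this document.

import boto3

# Minimal sketch: assumes AWS Application Discovery Service agents or agentless
# collectors are already deployed, and that this region offers the service.
discovery = boto3.client("discovery", region_name="us-west-2")

def print_portfolio_summary():
    """Print the high-level counts gathered by automated discovery."""
    summary = discovery.get_discovery_summary()
    print("Servers discovered:     ", summary.get("servers"))
    print("Applications discovered:", summary.get("applications"))

def discovered_server_hostnames():
    """Page through discovered SERVER configurations and collect hostnames."""
    hostnames = []
    kwargs = {"configurationType": "SERVER", "maxResults": 100}
    while True:
        response = discovery.list_configurations(**kwargs)
        for item in response.get("configurations", []):
            # Attributes are returned as flat string keys such as "server.hostName";
            # fall back gracefully if a key is absent.
            hostnames.append(item.get("server.hostName", "<unknown>"))
        token = response.get("nextToken")
        if not token:
            return hostnames
        kwargs["nextToken"] = token

if __name__ == "__main__":
    print_portfolio_summary()
    for host in discovered_server_hostnames():
        print(host)

An export like this can feed directly into the portfolio analysis and factory backlogs described earlier, replacing the manual spreadsheets that introduce human error.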
Glossary
• Application Portfolio – A collection of detailed information about each application of an organization, including the cost to build and maintain the application and its business value.
• AWS Cloud Adoption Framework (AWS CAF) – A structure for developing an efficient and effective plan for organizations to successfully move to the cloud.
• Cloud Center of Excellence (CCoE) – A diverse team of key members who play the primary role in establishing the migration timeline and evangelize about moving to the cloud.
• Landing Zone – The initial destination area that is established on AWS, where the first applications operate from to ensure they have been migrated successfully.
• Migration Acceleration Program (MAP) – Designed to provide consulting support and help enterprises who are migrating to the cloud realize the business benefits of moving to the cloud.
• Migration at Scale – The stage in the migration process when the majority of the portfolio is moved to the cloud in waves, with more applications moved at a faster rate in each wave.
• Migration Method or Migration Process – Refers to Readiness, Mobilization, Migration at Scale, and Operate.
• Migration Readiness and Planning (MRP) – A preplanning service to prepare for migration, in which the resources, processes, and team members who will be engaged in carrying out a successful migration to AWS are identified. Part of the Readiness stage of the migration process.
• Migration Readiness Assessment (MRA) – A tool to determine level of commitment, competence, and capability.
• Mobilization – The stage in the migration process in which roles and responsibilities are assigned, an in-depth portfolio assessment is conducted, and a small number of select applications is migrated to the cloud.
• Operate – The stage in the migration process when most of the portfolio has been migrated to the cloud and is optimized for peak performance.
• Readiness – The initial stage in the migration process when the opportunity is evaluated, the business case is confirmed, and organizational alignment is achieved for migrating to the cloud.
• Stage – The individual topics of the migration process. Readiness, Mobilization, Migration at Scale, and Operate are all stages in the migration process.
Notes
1 https://aws.amazon.com/cloud-migration/
2 https://d0.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf
3 https://calculator.s3.amazonaws.com/index.html
4 https://aws.amazon.com/tco-calculator/
5 https://aws.amazon.com/whitepapers/#security
6 https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
7 https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf
8 https://aws.amazon.com/whitepapers/#operations
9 https://d1.awsstatic.com/whitepapers/architecture/AWS-Operational-Excellence-Pillar.pdf
10 https://aws.amazon.com/managed-services/
11 https://aws.amazon.com/migration/partner-solutions/
12 https://aws.amazon.com/marketplace/search/results?searchTerms=migration&page=1&ref_=nav_search_box&x=0&y=0
13 https://awstcocalculator.com/
|
General
|
consultant
|
Best Practices
|
AWS_Operational_Resilience
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/awsoperational resilience/awsoperationalresiliencehtmlPage 1 Amazon Web Services ’ Approach to Operational Resilience in the Financial Sector & Beyond First published March 2019 Updated April 02 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 2 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Opera tional Resilience in the Financial Sector & Beyond 3 Contents Introduction 5 What does operational resilience mean at AWS? 5 Operational resilience is a shared responsibility 5 How AWS maintains operational resilience and continuity of service 6 Incident management 8 Customers can achieve and test res iliency on AWS 8 Starting with first principles 9 From design principles to implementation 11 Assurance mechanisms 14 Independent thirdparty verification 14 Direct assurance for customers 15 Document revisions 16 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 4 Abstract The purpose of this paper is to describe how Amazon Web Services ( AWS ) and our customers in the financial services industry achieve operational resilience using AWS services The primary audience of this paper is organizations with an interest in how AWS and our financial services customers can operate services in the face o f constant change ranging from minor weather events to cyber issues This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 5 Introduction AWS provides information technology (IT) services and building blocks that all types of businesses public authorities universities and individuals utilize to become more secure innovative and responsive to their own needs and the needs of their customers AWS offers IT services in categories ranging from compute storage database and networking to artificial intelligence and machine learning AWS standardizes its servi ces and 
makes them available to all customers including financial institutions Across the world financial institutions have used AWS services to build their own applications for mobile banking regulatory reporting and market analysis AWS and the finan cial services industry share a common interest in maintaining operational resilience ; for example the ability to provide continuous service despite disruption Continuity of service especially for critical economic functions is a key prerequisite for fi nancial stability AWS recognizes that financial institutions which use AWS services need to comply with sector specific regulatory obligations and internal requirements regarding operational resilience These obligations and requirements are found inte r alia in IT guidelines1 and cyber resilience guidance2 Financial institution customers are able to rely on AWS to provide resilient infrastructure and services while at the same time designing their applications in a manner that meets regulatory and compliance obligations This dual approach to operational resilience is something that we call “shared responsibility” What does operational resilience mean at AWS? Operational resilience is the ability to provide continuous service through people proces ses and technology that are aware of and adaptive to constant change It is a realtime execution oriented norm embedded in the culture of AWS that is distinct from traditional approaches in Business Continuity Disaster Recovery and Crisis Management which rely primarily on centralized hierarchical programs focused on documentation development and maintenance Operational resilience is a shared responsibility AWS is responsible for ensuring that the services used by our customers —the building blocks for their applications —are continuously available as well as ensuring that we are prepared to handle a wide range of events that could affect our infrastructure This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ A pproach to Operational Resilience in the Financial Sector & Beyond 6 In this paper we also explore customers’ responsibility for operational resilience —how customers can design deploy and test their applications on AWS to achieve the availability and resiliency they need including for mission critical applications that require almost no downtime Those kinds of applications require that AWS infrastructur e and services are available when customers need them even upon the occurrence of a disruption As discussed below customers are able to use AWS’s services to design applications that meet this standard and provide a level of security and resilience that we consider is greater than what existing on premises IT environments can offer Finally given the importance of operational resilience to our customers this paper explore s the variety of mechanisms AWS offers to customers to demonstrate assurance3 How AWS maintains operational resilience and continuity of service AWS builds to guard against outages and incidents and accounts for them in the design of AWS services —so when disruptions do occur their impact on customers and the continuity of services is as minimal as possible To avoid single points of failure AWS minimizes interconnectedness within our global infrastructure AWS’s global infrastructure is geographically dispersed over five continents It is composed of 20 geographic Regions which 
are composed of 61 Availability Zones (AZs) which in turn are composed of data centers4 The AZs which are physically separated and independent from each other are also bu ilt with highly redundant networking to withstand local disruptions Regions are isolated from each other meaning that a disruption in one Region does not result in contagion in other Regions Compared to global financial institutions’ on premises environ ments today the locational diversity of AWS’s infrastructure greatly reduces geographic concentration risk We are continuously adding new Regions and AZs and you can view our most current global infrastructure map here: https://awsamazoncom/about aws/global infrastructure At AWS we employ compartmentalization throughout our infrastructure and services We have multiple constructs that provide different levels of independent r edundant components Starting at a high level consider our AWS Regions To minimize interconnectedness AWS deploys a dedicated stack of infrastructure and services to each Region Regions are autonomous and isolated from each other even though we allow customers to replicate data and perform other operations across Regions To allow these cross Region capabilities AWS takes enormous care to ensure that the This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 7 dependencies and calling patterns between Regions are asynchronous and ring fenced with safety mec hanisms For example we have designed Amazon Simple Storage Service (Amazon S3) to allow customers to replicate data from one Region ( for example USEAST 1) to another Region (eg US WEST 1) but at the same time we have designed S3 to operate autonom ously within each Region so that an outage of S3 in US EAST does not result in an S3 outage in US WEST5 The vast majority of services operate entirely within single Regions The very few exceptions to this approach involve services that provide global d elivery such as Amazon Route 53 (an authoritative Domain Name System) whose data plane is designed for 100000% availability As discussed below financial institutions and other customers can architect across both multiple Availability Zones and Regions Availability Zones (AZs) which comprise a Region and are composed of multiple data centers demonstrate further compartmentalization Locating AZs within the same Region allows for data replication that provides redundancy without a substantial impact on latency —an important benefit for financial institutions and other customers who need low latency to run applications At the same time we make sure that AZs are independent in order to ensure services remain available in the event of major incidents AZs have independent physical infrastructure and are distant from each other to mitigate the effects of fires floods and other events Many AWS services run autonomously within AZs; this means that if one AZ within a single Region loses power or connectivi ty the other AZs in the Region are unaffected or in the case of a software error the risk of that error propagating is limited AZ independence allows AWS to build Regional services using multiple AZs that in turn provide high availability to and resiliency for our customers In addition AWS leverages another concept known as cell based architecture Cells are multiple instantiations of a 
service that are isolated from each other; these internal service structures are invisible to customers In a cell based architecture resources and requests are partitioned into cells which are capped in size This design minimizes the chance that a disruption in one cell —for example one subset of customers —would disrupt other cells By reducing the blast radius of a given failure within a service based on cells overall availability increases and continuity of service remains A rough analogy is a set of watertight bulkheads on a ship: enough bulkheads appropriately designed can contain water in case the ship’s h ull is breached and will allow the ship to remain afloat This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 8 Incident management Although the likelihood of such incidents is very low AWS is prepared to manage large scale events that affect our infrastructure and services AWS becomes aware of incidents or degradations in service based on continuous monitoring through metrics and alarms high severity tickets customer reports and the 24x7x365 service and technical support hotlines In case of a significant event an on call engineer convenes a call with p roblem resolvers to analyze the event to determine if additional resolvers should be engaged A call leader drives the group of resolvers to find the approximate root cause to mitigate the event The relevant resolvers will perform the necessary actions to address the event After addressing troubleshooting repair procedures and affected components the call leader will assign follow up documentation and actions and end the call engagement The call leader will declare the recovery phase complete after th e relevant fix activities have been addressed The post mortem and deep root cause analysis of the incident will be assigned to the relevant team Post mortems are convened after any significant operational issue regardless of external impact and Correct ion of Errors (COE) documents are composed such that the root cause is captured and preventative actions may be taken for the future Implementation of the preventative measures is tracked during weekly operations meetings Customers can achieve and test resiliency on AWS AWS believes that financial institutions should ensure that they —and the critical economic functions they perform —are resilient to disruption and failure whatever the cause Prolonged outages or outright failures could ca use loss of trust and confidence in affected financial institutions in addition to causing direct financial losses due to failing to meet obligations AWS builds —and encourages its customers to build —for failure to occur at any time Similarly as the Ba nk of England recognizes “We want firms to plan on the assumption that any part of their infrastructure could be impacted whatever the reason” In the design building and testing of their applications on AWS customers are able to achieve their object ives for operational resilience AWS offers the building blocks for any type of customer from financial institutions to oil and gas companies to government agencies to construct applications that can withstand large scale events In this section This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws 
operationalresilience/awsoperationalresiliencehtml
we walk through how financial institution customers can build that type of resilient application on the AWS cloud.
Starting with first principles
AWS field teams composed of technical managers, solution architects, and security experts help financial institution customers build their applications according to customers' design goals, security objectives, and other internal and regulatory requirements. As reflected in our shared responsibility model, customers remain responsible for deciding how to protect their data and systems in the AWS Cloud, but we offer workbooks, guidance documents, and on-site consulting to assist in the process. Before deploying a mission-critical application, whether on the AWS cloud or in another environment, significant financial institution customers will go through extensive development and testing. For a customer who begins building an application on AWS with high availability and resiliency in mind, we recommend that they begin by answering some fundamental questions6, including but not limited to:
1. What problems are you trying to solve?
2. What specific aspects of the application require specific levels of availability?
3. What is the amount of cumulative downtime that this workload can realistically accumulate in one year?
4. What is the actual impact of unavailability?
Financial institutions and market utilities perform both critical and non-critical types of functions in the financial services sector. From deposit taking to loan processing, trade execution to securities settlement, financial entities across the world perform services whose continuity and resiliency are necessary to ensure the public's trust and confidence in the financial system. At the industry-wide level, for systemically important payment, clearing, settlement, and other types of applications, central banks and market regulators specify a discrete recovery time objective in the Principles for Financial Market Infrastructures (PFMI) standard: "The [business continuity] plan should incorporate the use of a secondary site and should be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events. The plan should be designed to enable the FMI to complete settlement by the end of the day of the disruption, even in case of extreme circumstances."7
Beyond the 2-hour RTO, financial regulatory agencies expect regulated entities to be able to meet RTOs and recovery point objectives (RPOs) according to the criticality of their applications, beginning with "Tier 1 application" as the most critical. For example, regulated entities may classify their RTO and RPOs in the following way:
Table 1 — How regulated entities classify RTO and RPO
Resiliency requirement      Tier 1 app      Tier 2 app     Tier 3 app
Recovery Time Objective     2 Hours         < 8 Hours      24 Hours
Recovery Point Objective    < 30 seconds    < 4 Hours      24 Hours
Although systemically important financial institutions may have upwards of 8,000 to 10,000 applications, they do not classify all applications according to the same criticality. For example, disruptions in an
application for processing mortgage loan requests are undesirable but a financial institution operating such an application may decide that it can tolerate an 8 hour RTO Other types of important but not n ecessarily systemically important workloads include post trade market analysis and customer facing chatbots While the majority of financial entities’ applications are non critical from a systemic perspective disruption of some Tier 1 applications would jeopardize not only the safety and soundness of the affected financial institution but also other financial services entities and possibly the broader economy For example a settlement application may be a Tier 1 application and have an associated RTO of 30 minutes and an RPO of < 30 seconds Such applications are the heart of financial markets and disruptions could cause operational liquidity and even credit risks to crystallize For such applications there is little to virtually no time for humans to make an active decision on how to recover from an outage or failover to a backup data center Recovery would need to be automatic and triggered based on metrics and alarms8 AWS provides guidance to customers on best practices for building highly available resilient applications including through our Well Architected Framework9 For example we recommend that the components comprising an application should be independent and isolated to provide redundancy When changing components or configurati ons in an application customers should make sure that they can roll back any changes to the application if it appears that the changes are not working Monitoring and alarming should be used to track latency error rates and availability for each request for all downstream dependencies and for key operations Data gathered through monitoring should allow for efficient diagnosis of problems10 Best practices for distributed systems This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 11 should be implemented to enable automated recovery Recovery paths should be tested frequently —and most frequently for complex or critical recovery paths For financial institutions it can be difficult to practice these principles in traditional on premises environments many of which reflect decades of consolidation with oth er entities and ad hoc changes in their IT infrastructures On the other hand these principles are what drive the design of AWS’s global infrastructure and services and form the basis of our guidance to customers on how to achieve continuity of service11 Financial institutions using AWS services can take advantage of AWS’s services to improve their resiliency regardless of the state of their existing systems From design principles to implementation Customers have to make many decisions: where to place t heir content where to run their applications and how to achieve higher levels of availability and resiliency For example a financial institution can choose to run its mobile banking application in a single AWS Region to take advantage of multiple AZs Figure 1 Example of Multi AZ Design Let’s take the example of a deployment across 2 AZs to illustrate how AZ independence provides resiliency As shown in Figure 1 the customer deploys its mobile banking application so that its architecture is stable and consistent across AZs ; for example 
the workload in each AZ has sufficient capacity as This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 12 well as stable infrastructure configurations and policies that keep both AZs up to date Elastic Load Balancing routes traffic only to healthy instances and data layer replication allows for fast failover in case a database instance fails in one AZ thus minimizing downtime for the financial institution’s mobile banking customers Compared to AWS’s infrastructure and services traditional on premises environ ments present several obstacles for achieving operational resilience For example let’s assume a significant event shuts down a financial institution’s primary on premises data center The financial institution also has a secondary data center in additio n to its primary data center The capacity of the secondary data center is able to handle only a proportion of the overall workload that would otherwise operate at the primary data center ( for example 11000 servers at the secondary center instead of 120 00 servers at the primary center; network capacity increased 300% at the primary center in the last 4 years but only 250% at the secondary center) and errors in replication mean that the secondary center’s data has not been updated in 36 hours Furthermor e macroeconomic factors have driven transaction volume higher at the primary data center by 15% over the past 6 months As a result the financial institution may find that its secondary data center cannot process current transaction volume within a given time period per its internal and regulatory requirements By using AWS services the financial institution would have been able to increase its capacity at frequent intervals to support increasing transaction volumes as well as track and manage changes t o maintain all of its deployments with the same up todate capacity and architecture In addition customers can maintain additional “cold” infrastructure and backups on AWS that can activate if necessary —at much lower cost than procuring their own physic al infrastructure This is not a hypothetical issue —key regulatory requirements highlight the need for regulated entities to account for capacity needs in adverse scenarios12 On AWS customers can also deploy workloads across AZs located in multiple Regio ns (Figure 2) to achieve both AZ redundancy and Region redundancy Customers that have regulatory or other requirements to store data in multiple Regions or to achieve even greater availability can use a multi Region design In a multi Region set up the customer will need to perform additional engineering to minimize data loss and ensure consistent data between Regions A routing component monitors the health of the customer’s application as well as dependencies This routing layer will also handle automat ic failovers changing the destination when a location is unhealthy and temporarily stopping data replication Traffic will go only to healthy Regions This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financi al Sector & Beyond 13 AWS improves operational resilience compared to traditional on premises 
environments not only for failo ver but also for returning to full resiliency For the financial institution with a secondary data center it may have to perform data backup and restoration over several days Many traditional environments do not feature bidirectional replication result ing in current data at the backup site and “outdated” data in the primary site that makes fast failback difficult to achieve On AWS the financial institution is not “stuck” as it would be in a traditional environment —it can fail forward by quickly launch ing its workload in another location The key point is that AWS’s global infrastructure and services offer financial institutions the capacity and performance to meet aggressive resiliency objectives To achieve assurance about the resiliency of their appl ications we recommend that financial institution customers perform continuous performance load and failure testing; extensively use logging metrics and alarms; maintain runbooks for reporting and performance tracking; and validate their architecture t hrough realistic full scale tests known as “game day” exercises Per the regulatory requirements in their jurisdictions financial institutions may provide evidence of such tests runbooks and exercises to their financial regulatory authorities Figure 2 — Example of multiRegion design This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 14 Assurance mechanisms We are prepared to deliver assurance about AWS’s approach to operational resilience and to help customers achieve assurance about the security and resiliency of their workloads Financial institution s and other customers can gain assurance about the security and resiliency of their workloads on AWS through a variety of means including: reports on AWS’s infrastructure and services prepared by independent third party auditors; services and tools to mo nitor assess and test their AWS environments; and direct experience with AWS through our audit engagement offerings Independent thirdparty verification With our standardized offering and millions of active customers across virtually every business segment and in the public sector we provide assurance about our risk and control environment including how we address operational resilience AWS operates thousands of controls that meet the highest standards in the industry To understand these controls and how we operate them customers can access our System and Organization Control (SOC) 2 Type II report reflecting examination by our independent thirdparty auditor which provides an overview of the AWS Resiliency Program Furthermore an ind ependent third party auditor has validated AWS’s alignment with ISO 27001 standard The International Organization for Standardization (ISO) brings together experts to share knowledge and to develop and publish uniform international standards that support innovation and provide solutions to global challenges In addition to ISO 27001 AWS also aligns with the ISO 27017 guidance on information security in the cloud and ISO 27018 code of practice on protection of personal data in the cloud The basis of thes e standards are the development and implementation of a rigorous security program The Information Security Management System (ISMS) required under the ISO 27001 standard defines how AWS manages security in a 
holistic comprehensive manner and includes num erous control objectives (eg A16 and A17) relevant to operational resilience With a non disclosure agreement in place customers can download these reports and others through AWS Artifact — more than 2 600 security controls standards and requirements in all AWS can provide such reports upon request to regulatory agencies AWS also aligns with the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) Developed originally to apply to critical infrastructure entities the foundational set of security disciplines in the CSF can apply to any organization in any s ector and This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 15 regardless of size The US Financial Services Sector Coordinating Council has developed a Financial Services Sector Specific Cybersecurity Profile (available here) that maps the CSF to a variety of international US federal and US state standards and regulations AWS’s alignment with CSF attested by a third party auditor reflects the suitability of AWS services to enhance the security and resiliency of fina ncial sector entities Direct assurance for customers Customers may also achieve continuous assurance about the resilience of their own workloads Through services and tools available from the AWS management console customers have unprecedented visibility monitoring and remediation capabilities to ensure the security and compliance of their own AWS environments Financial institution customers no longer have to rely on periodic snapshots or quarterly and annual assessments to validate their security and compliance Consider just a few examples of the many ways customers achieve direct assurance about the security and compliance of their AWS resources13 First customers can integrate their auditing controls into a notification and workflow system using AW S services For example in such a system a change in the state of a virtual server from pending to running would result in corrective action logging and as needed notify the appropriate personnel Customers can also integrate their notification and w orkflow system with a machine learning driven cybersecurity service offered by AWS that detects unusual API calls potentially unauthorized deployments and other malicious activity Second customers can also translate discrete regulatory requirements in to customizable managed rules and continuously track configuration changes among their resources; for example if a bank has a requirement that developers cannot launch unencrypted storage volumes the bank can predefine a rule for encryption that would flag the volume for non compliance and automatically remove the volume Finally and third another AWS service allows customers to automatically assess the security of their environment targeting their network file system and process activity and collecti ng a wide set of activity and configuration data This data includes details of communication with AWS services use of secure channels details of the running processes network traffic among the running processes and more —resulting in a list of findings and security problems ordered by severity This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws 
operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilienc e in the Financial Sector & Beyond 16 While these and other services correct for non compliant configurations or security vulnerabilities AWS also recommends that customers test their applications for operational resilience Financial institution cu stomers should test for the transient failures of their applications’ dependencies (including external dependencies) component failures and degraded network communications One major customer has developed open source software that can be a basis for this type of testing To address concerns that malicious actors may access critical functions or processes in customers’ environments customers can also conduct penetration testing of their AWS environments14 Finally AWS’s efforts to provide transparency about our risk and control environment do not stop at our third party audit reports or formal audit engagements Our security and compliance personnel security solution architects engineers a nd field teams engage daily with customers to address their questions and concerns Such interaction may be a phone call with the financial institution’s security team an executive meeting with a customer’s Chief Information Security Officer and Chief Information Officer a briefing on AWS’s premises — and countless other ways Customers drive our overall infrastructure and service roadmap and meeting and exceeding their security and resiliency needs is our number one objective Document revisions Date Description April 02 2021 Reviewed for technical accuracy March 2019 First publication Notes 1 US Federal Financial Institution Examination Council (FFIEC) IT Handbook; see https://ithandbookffiecgov 2 Committee on Payments and Market Infrastructures and Board of the International Organization of Securities Commissions (CPMI IOSCO) Guidance on cyber resilience This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 17 for financial market infrastructures (June 2016); see https://wwwbisorg/cpmi/publ/ d146pdf 3 This paper reflects only an overview of our ongoing efforts to ensure our customers can use AWS services safely To complement our concept of shared responsibility we are also dedicated to excee ding customer and regulatory expectations To that end AWS technical teams security architects and compliance experts assist financial institutions customers in meeting regulatory and internal requirements including by actively demonstrating their secu rity and resiliency through continuous monitoring remediation and testing AWS continuously engages with financial regulators around the world to explain how AWS’s infrastructure and services enable all sizes and types of financial institutions —from fintech startups to stock exchanges —to improve their security and resiliency compared to on premises environments We always want to receive feedback from customers and their regulators about AWS’s approach and their experience 4 You ca n take a virtual tour of an AWS data center here: https://awsamazoncom/compliance/data center 5 As evidenced by the Amazon S3 service disruption of February 28 2017 which occurred in the Northern Virginia (US EAST 1) Region but not in other Regions See “Summary of the Amazon S3 Service Disruption in the 
Northern Virginia (US EAST 1) Region” https://awsamazoncom/message/41926/ 6 We recommend that customers review the Cloud Adoption Framework to develop efficient and effectiv e adoption plans See Reliability Pillar AWS Well Architected Framework 7 Key Consideration 176 of PFMI available at https://wwwbisorg/cpmi/publ/d101apdf 8 Customers can enable automatic recovery using a variety of AWS services including Amazon Cl oudWatch metrics Amazon CloudWatch Events and AWS Lambda See also the following AWS re:Invent presentati on “Disaster Recovery and Business Continuity for Financial Institutions ” for additional information on applicable AWS services and example architecture: https://wwwyoutubecom/watch?v=Xa xTwhP 1UU 9 See https://awsamazoncom/architecture/well architected 10 A variety of AWS services support these practices; for examples see pp 26 28 at https://d0awsstaticcom/whitepapers/ architecture/AWS Reliability Pillarpdf This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/aws operationalresilience/awsoperationalresiliencehtmlAmazon Web Services Amazon Web Services’ Approach to Operational Resilience in the Financial Sector & Beyond 18 11 For a comprehensive overview of our guidance to customers see the “Reliability Pillar” whitepaper (September 2018) at https:// d0awsstaticcom/whitepapers/archit ecture/AWS Reliability Pillarpdf 12 See for example US Securities and Exchange Commission (SEC) Regulation Systems Compliance and Integrity 17 CFR § 240 242 & 249; see also adopting release: https://wwwsecgov/rules/final/2014/34 73639pdf See also FFIEC Business Continuity Planning IT Examination Handbook (February 2015) available at https://ithandbookffiecgov/media/274725/ffiec_itbooklet_businesscontinuityplanningp df 13 The AWS services discussed in this section include: Amazon CloudWatch Events AWS Config Amazon GuardDuty AWS Config Rules and Amazon Inspector 14 For example in the United Kingdom the Bank of England has developed the CBEST framework for testing financial firms’ cyber resilience Accredited penetration test companies attempt to access critical assets within the target firm An accredited threat intelligence company provides threat intelligence and provides guidance how the penetration testers can attack the firm Financial institution customers subject to the CBEST framework and planning to have a penetration test conducted on their AWS resources n eed to notify AWS by submitting a request (at https://awsamazoncom/security/penetration testing ) because such activity is indistinguishable from prohibited security violations and netwo rk abuse
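The "Direct assurance for customers" section above describes a bank predefining a rule so that unencrypted storage volumes are flagged as non-compliant. The following is a minimal, hedged Python (boto3) sketch of that idea using the AWS-managed ENCRYPTED_VOLUMES rule for AWS Config; the rule name is a hypothetical placeholder, the sketch assumes an AWS Config recorder and delivery channel are already enabled, and the automatic remediation mentioned in the text (removing the volume) is intentionally left out.

import boto3

# Sketch only: assumes AWS Config is already recording in this account/region and the
# caller has config:PutConfigRule and config:GetComplianceDetailsByConfigRule permissions.
config = boto3.client("config")

RULE_NAME = "require-encrypted-ebs-volumes"  # hypothetical rule name

def enable_encryption_rule():
    """Enable the AWS-managed ENCRYPTED_VOLUMES rule to flag unencrypted EBS volumes."""
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": RULE_NAME,
            "Description": "Flags EBS volumes that are not encrypted.",
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
            "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        }
    )

def report_noncompliant_volumes():
    """List resources that the rule has evaluated as NON_COMPLIANT."""
    results = config.get_compliance_details_by_config_rule(
        ConfigRuleName=RULE_NAME, ComplianceTypes=["NON_COMPLIANT"]
    )
    for result in results.get("EvaluationResults", []):
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        print("Non-compliant volume:", qualifier.get("ResourceId"))

if __name__ == "__main__":
    enable_encryption_rule()
    report_noncompliant_volumes()

In practice, findings like these would feed the notification and workflow integration described earlier, so that non-compliant resources are logged, the appropriate personnel are alerted, and any automated remediation runs under change control.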
|
General
|
consultant
|
Best Practices
|
AWS_Overview_of_Security_Processes
|
ArchivedAmazon Web Services: Overview of Security Processes March 2020 This paper has been archived For the latest technical content on Security and Compliance see https://awsamazoncom/ architecture/securityidentity compliance/ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Shared Security Responsibility Model 1 AWS Security Responsibilities 2 Customer Security Responsibilities 2 AWS Global Infrastructure Security 3 AWS Compliance Program 3 Physical and Environmental Security 4 Business Continuity Management 6 Network Security 7 AWS Access 11 Secure Design Principles 12 Change Management 12 AWS Account Security Features 14 Individual User Accounts 19 Secure HTTPS Access Points 19 Security Logs 20 AWS Trusted Advisor Security Checks 20 AWS Config Security Checks 21 AWS Service Specific Security 21 Compute Services 21 Networking Services 28 Storage Services 43 Database Services 55 Application Services 66 Analytics Services 73 Deployment and Management Services 77 ArchivedMobile Services 82 Applications 85 Document Revisions 88 ArchivedAbstract This document is intended to answer questions such as How does AWS help me ensure that my data is secure? 
Specifically this paper describes AWS physical and operational security processes for the network and server infrastructure under the management of AWS ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 1 Introduction Amazon Web Services (AWS) delivers a scalable cloud computing pl atform with high availability and dependability providing the tools that enable customers to run a wide range of applications Helping to protect the confidentiality integrity and availability of our customers’ systems and data is of the utmost importan ce to AWS as is maintaining customer trust and confidence Shared Security Responsibility Model Before covering the details of how AWS secures its resources it is important to understand how security in the cloud is slightly different than security in yo ur on premises data centers When you move computer systems and data to the cloud security responsibilities become shared between you and your cloud service provider In this case AWS is responsible for securing the underlying infrastructure that support s the cloud and you’re responsible for anything you put on the cloud or connect to the cloud This shared security responsibility model can reduce your operational burden in many ways and in some cases may even improve your default security posture witho ut additional action on your part Figure 1: AWS shared security responsibility model The amount of security configuration work you have to do varies depending on which services you select and how sensitive your data is However there are certain security ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 2 features —such as individual user accounts and credentials SSL/TLS for data transmissions and user activity logging —that you should configure no matter which AWS service you use For more information about these security featur es see the AWS Account Security Features sectio n AWS Security Responsibilities Amazon Web Services is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud Th is infrastructure comprise s the hardware software networking and facilities that run AWS services Protecting this infrastructure is the number one priority of AWS Although you can’t visit our data centers or offices to see this protection firsthand we provide several reports from third party auditors who have verified our compliance with a variety of computer security standards and regulations For more information visit AWS Compliance Note that in addition to protecting this global infrastructure AWS is responsible for the security configuration of its products that are considered managed services Examples of these types of services include Amazon DynamoDB Amazon RDS Amazon Redshift Amazon EMR Amazon WorkSpaces and several other services These services provide the scalability and flexibility of cloud based resources with the additional benefit of being managed For these services AWS handle s basic security tasks like guest operat ing system (OS) and database patching firewall configuration and disaster recovery For most of these managed services all you have to do is configure logical access controls for the resources and protect your account credentials A few of them may requ ire additional tasks such as setting up database user accounts but overall the security configuration work is performed by the service Customer Security Responsibilities With the AWS cloud you can provision virtual servers storage databases 
and desk tops in minutes instead of weeks You can also use cloud based analytics and workflow tools to process your data as you need it and then store it in your own data centers or in the cloud The AWS services that you use determine how much configuration work you have to perform as part of your security responsibilities AWS products that fall into the well understood category of Infrastructure asaService (IaaS) —such as Amazon EC2 Amazon VPC and Amazon S3 —are completely under your control and require you t o perform all of the necessary security configuration and management tasks For example for EC2 instances you’re responsible for management of the guest OS (including updates and security patches) any application ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 3 software or utilities you install on the instances and the configuration of the AWS provided firewall (called a security group) on each instance These are basically the same security tasks that you’re used to performing no matter where your servers are located AWS managed services like Amazon RDS or Amazon Redshift provide all of the resources you need to perform a specific task —but without the configuration work that can come with them With managed services you don’t have to worry about launching and maintaining instances patching the gues t OS or database or replicating databases —AWS handles that for you But as with all services you should protect your AWS Account credentials and set up individual user accounts with Amazon Identity and Access Management (IAM) so that each of your users h as their own credentials and you can implement segregation of duties We also recommend using multi factor authentication (MFA) with each account requiring the use of SSL/TLS to communicate with your AWS resources and setting up API/user activity logging with AWS CloudTrail For more information about additional measures you can take refer to the AWS Security Best Practices whitepaper and recommended reading on the AWS Security Learning webpage AWS Global Infrastructure Security AWS operates the global cloud infrastruct ure that you use to provision a variety of basic computing resources such as processing and storage The AWS global infrastructure includes the facilities network hardware and operational software (eg host OS virtualization software etc) that supp ort the provisioning and use of these resources The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards As an AWS customer you can be assured that you’re building w eb architectures on top of some of the most secure computing infrastructure in the world AWS Compliance Program AWS Compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the cloud As systems are built on top of AWS cloud infrastructure compliance responsibilities are shared By tying together governance focused audit friendly service features with applicable compliance or audit standards AWS Compliance enablers build on traditional programs; helping customers to establish and operate in an AWS security control environment The IT infrastructure ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 4 that AWS provides to its customers is d esigned and managed in alignment with security best practices and a variety of IT security standards including: • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70) • SOC 2 • 
SOC 3 • FISMA DIACAP and FedRAMP • DOD CSM Levels 1 5 • PCI DSS Level 1 • ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018 • ITAR • FIPS 140 2 • MTCS Level 3 • HITRUST In addition the flexibility and control that the AWS platform provides allows customers to deploy solutions that meet several industry specific standards including: • Criminal Justice Information Servi ces (CJIS) • Cloud Security Alliance (CSA) • Family Educational Rights and Privacy Act (FERPA) • Health Insurance Portability and Accountability Act (HIPAA) • Motion Picture Association of America (MPAA) AWS provides a wide range of information regarding its IT co ntrol environment to customers through white papers reports certifications accreditations and other third party attestations For m ore information see AWS Compliance Physical and Environmental Securit y AWS data centers are state of the art utilizing innovative architectural and engineering approaches Amazon has many years of experience in designing constructing and operating large scale data centers This experience has been applied to the AWS platform and infrastructure AWS data centers are housed in facilities that are not ArchivedAmazon Web Services Amazon Web Services: Overview of Secu rity Processes Page 5 branded as AWS facilities Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillan ce intrusion detection systems and other electronic means Authorized staff must pass two factor authentication a minimum of two times to access data center floors All visitors are required to present identification and are signed in and continually esc orted by authorized staff AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if they continue to be an employee of Amazon or Amazon Web Services All physical access to data centers by AWS employees is logged and audited routinely Fire Detection and Suppression Automatic fire detection and suppression equ ipment has been installed to reduce risk The fire detection system utilizes smoke detection sensors in all data center environments mechanical and electrical infrastructure spaces chiller rooms and generator equipment rooms These areas are protected by either wet pipe double interlocked pre action or gaseous sprinkler systems Power The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations 24 hours a day and seven days a week Uninterru ptible Power Supply (UPS) units provide back up power in the event of an electrical failure for critical and essential loads in the facility Data centers use generators to provide back up power for the entire facility Climate and Temperature Climate control is required to maintain a constant operating temperature for servers and other hardware which prevents overheating and reduces the possibility of service outages Data centers are conditioned to maintain atmospheric conditions at optimal levels Personnel and systems monitor and control temperature and humidity at appropriate levels ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 6 Management AWS monitors electrical mechanical and life support systems and equipment so that any issues are immediately identified Preventative maintenance is performed to maintain the 
continued operability of equipment Storage Device Decommissioning When a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent customer data from be ing exposed to unauthorized individuals AWS uses the techniques detailed in NIST 800 88 (“Guidelines for Media Sanitization”) as part of the decommissioning process Business Continuity Management Amazon’s infrastructure has a high level of availability a nd provides customers the features to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact Data center Business Continuity Management at AWS is under the direction of the Ama zon Infrastructure Group Availability Data centers are built in clusters in various global regions All data centers are online and serving customers; no data center is “cold” In case of failure automated processes move customer data traffic away from t he affected area Core applications are deployed in an N+1 configuration so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites AWS provides you with the flexibility to pl ace instances and store data within multiple geographic regions as well as across multiple availability zones within each region Each availability zone is designed as an independent failure zone This means that availability zones are physically separated within a typical metropolitan region and are located in lower risk flood plains (specific flood zone categorization varies by Region) In addition to discrete uninterruptable power supply (UPS) and onsite backup generation facilities they are each fed vi a different grids from independent utilities to further reduce single points of failure Availability zones are all redundantly connected to multiple tier 1 transit providers You should architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multiple availability zones provides ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 7 the ability to remain resilient in the face of most failure modes including natural disasters or system failures Incident Response The Amazon Incident Management team employs industry standard diagnostic procedures to drive resolution during business impacting events Staff operators provide 24x7x365 coverage to detect incidents and to manage the impact and resolution Company Wide Executive Review Amazon’s Internal Au dit group has recently reviewed the AWS services resiliency plans which are also periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors Communication AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner These methods include orientation and training programs for newly hired employees ; regular management meetings for updates on business performance and other matters; and electronics means such as video conferencing electronic mail messages and the posting of information via the Amazon intranet AWS has also implemented various method s of external communication to support its customer base and the community Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the 
customer experience A Service Health Dashboard i s available and maintained by the customer support team to alert customers to any issues that may be of broad impact The AWS Cloud Security Center is available to provide you with security and compliance details about AWS You can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer impacting issues Network Security The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload To enable you to build geographica lly dispersed fault tolerant web architectures with cloud resources AWS has implemented a world class network infrastructure that is carefully monitored and managed ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 8 Secure Network Architecture Network devices including firewall and other boundary devic es are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network These boundary devices employ rule sets access control lists (ACL) and configurations to enforce the flow of information to specific information system services ACLs or traffic flow policies are established on each managed interface which manage and enforce the flow of traffic ACL policies are approved by Amazon Information Security These policies are auto matically pushed using AWS’s ACL Manage tool to help ensure these managed interfaces enforce the most up todate ACLs Secure Access Points AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic These customer access points are called API endpoints and they allow secure HTTP access (HTTPS) which allows you to establish a secure communication session with your st orage or compute instances within AWS To support customers with FIPS cryptographic requirements the SSL terminating load balancers in AWS GovCloud (US) are FIPS 140 2compliant In addition AWS has implemented network devices that are dedicated to manag ing interfacing communications with Internet service providers (ISPs) AWS employs a redundant connection to more than one communication service at each Internet facing edge of the AWS network These connections each have dedicated network devices Transmi ssion Protection You can connect to an AWS access point via HTTP or HTTPS using Secure Sockets Layer (SSL) a cryptographic protocol that is designed to protect against eavesdropping tampering and message forgery For customers who require additional lay ers of network security AWS offers the Amazon Virtual Private Cloud (VPC) which provides a private subnet within the AWS cloud and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted tunnel between the Amazon VPC and your data center For more information about VPC configuration options see the Amazon Virtual Private Cloud (Amazon VPC) Security section ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 9 Amazon Corporate Segregation Logically the AWS Production network is se gregated from the Amazon Corporate network by means of a complex set of network security / segregation devices AWS developers and administrators on the corporate network who need to access AWS cloud components in order to maintain them must explicitly req uest access through the AWS ticketing 
system All requests are reviewed and approved by the applicable service owner Approved AWS personnel then connect to the AWS network through a bastion host that restricts access to network devices and other cloud com ponents logging all activity for security review Access to bastion hosts require SSH public key authentication for all user accounts on the host For more information on AWS developer and administrator logical access see AWS Access below Fault Toleran t Design Amazon’s infrastructure has a high level of availability and provides you with the capability to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact Data centers ar e built in clusters in various global regions All data centers are online and serving customers; no data center is “cold” In case of failure automated processes move customer data traffic away from the affected area Core applications are deployed in an N+1 configuration so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region Each availability zone is designed as an independent failure zone This means that availability zones are physically separated within a typical metropolitan region and are located i n lower risk flood plains (specific flood zone categorization varies by region) In addition to utilizing discrete uninterruptable power supply (UPS) and onsite backup generators they are each fed via different grids from independent utilities to further reduce single points of failure Availability zones are all redundantly connected to multiple tier 1 transit providers You should architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multi ple availability zones provides the ability to remain resilient in the face of most failure scenarios including natural disasters or system failures However you should be aware of location dependent ArchivedAmazon Web Services Amazon Web Services: Overview of Securi ty Processes Page 10 privacy and compliance requirements such as the EU Da ta Privacy Directive Data is not replicated between regions unless proactively done so by the customer thus allowing customers with these types of data placement and privacy requirements the ability to establish compliant environments It should be noted that all communications between regions is across public internet infrastructure; therefore appropriate encryption methods should be used to protect sensitive data Data centers are built in clusters in various global regions including: US East (Norther n Virginia) US West (Oregon) US West (Northern California) AWS GovCloud (US) (Oregon) EU (Frankfurt) EU (Ireland) Asia Pacific (Seoul) Asia Pacific (Singapore) Asia Pacific (Tokyo) Asia Pacific (Sydney) China (Beijing) and South America (Sao Paulo) For a complete list of AWS R egions see the AWS Global Infrastructure page AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move workloads into the cloud by helping them meet certain regulatory and compliance requirements The AWS GovCloud (US) framework allows US government agencies and their contractors to comply with US International Traffic in Arms Regulations (ITAR) reg ulations as well as the Federal 
Risk and Authorization Management Program (FedRAMP) requirements AWS GovCloud (US) has received an Agency Authorization to Operate (ATO) from the US Department of Health and Human Services (HHS) utilizing a FedRAMP accredit ed Third Party Assessment Organization (3PAO) for several AWS services The AWS GovCloud (US) Region provides the same fault tolerant design as other regions with two Availability Zones In addition the AWS GovCloud (US) region is a mandatory AWS Virtual Private Cloud (VPC) service by default to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses For more information see AWS GovCloud (US) Network Monitoring and Protection AWS u ses a wide variety of automated monitoring systems to provide a high level of service performance and availability AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at in gress and egress communication points These tools monitor server and network usage port scanning activities application usage and unauthorized intrusion attempts The tools have the ability to set custom performance metrics thresholds for unusual activ ity Systems within AWS are extensively instrumented to monitor key operational metrics Alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on key operational metrics An on call ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 11 sche dule is used so personnel are always available to respond to operational issues This includes a pager system so alarms are quickly and reliably communicated to operations personnel Documentation is maintained to aid and inform operations personnel in han dling incidents or issues If the resolution of an issue requires collaboration a conferencing system is used which supports communication and logging capabilities Trained call leaders facilitate communication and progress during the handling of operatio nal issues that require collaboration Post mortems are convened after any significant operational issue regardless of external impact and Cause of Error (COE) documents are drafted so the root cause is captured and preventative actions are taken in the future Implementation of the preventative measures is tracked during weekly operations meetings AWS Access The AWS Production network is segregated from the Amazon Corporate network and requires a separate set of credentials for logical access The Amazo n Corporate network relies on user IDs passwords and Kerberos wh ereas the AWS Production network requires SSH public key authentication through a bastion host AWS developers and administrators on the Amazon Corporate network who need to access AWS clou d components must explicitly request access through the AWS access management system All requests are reviewed and approved by the appropriate owner or manager Account Review and Audit Accounts are reviewed every 90 days; explicit re approval is required or access to the resource is automatically revoked Access is also automatically revoked when an employee’s record is terminated in Amazon’s Human Resources system Windows and UNIX accounts are disabled and Amazon’s permission management system removes the user from all systems Requests for changes in access are captured in the Amazon permissions management tool audit log When changes in an employee’s job function occur continued access must be explicitly approved to the resource or it will be 
automati cally revoked ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 12 Background Checks AWS has established formal policies and procedures to delineate the minimum standards for logical access to AWS platform and infrastructure hosts AWS conducts criminal background checks as permitted by law as part of pre employment screening practices for employees and commensurate with the employee’s position and level of access The policies also identify functional responsibilities for the administration of logical access and security Credentials Policy AWS Securi ty has established a credentials policy with required configurations and expiration intervals Passwords must be complex and are forced to be changed every 90 days Secure Design Principles The AWS development process follows secure software development be st practices which include formal design reviews by the AWS Security Team threat modeling and completion of a risk assessment Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring pene tration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Change Management Routine emergency and configuration chan ges to existing AWS infrastructure are authorized logged tested approved and documented in accordance with industry norms for similar systems Updates to the AWS infrastructure are done to minimize any impact on the customer and their use of the servic es AWS will communicate with customers either via email or through the AWS Service Health Dashboard when service use is likely to be adversely affected Software AWS applies a systematic approach to mana ging change so that changes to customer impacting services are thoroughly reviewed tested approved and well communicated The AWS change management process is designed to avoid unintended service disruptions and to maintain the integrity of service to t he customer Changes deployed into production environments are: ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 13 • Reviewed – Peer reviews of the technical aspects of a change are required • Tested – Changes being applied are tested to help ensure they will behave as expected and not adversely impact perfor mance • Approved – All changes must be authorized in order to provide appropriate oversight and understanding of business impact Changes are typically pushed into production in a phased deployment starting with lowest impact areas Deployments are tested on a single system and closely monitored so impacts can be evaluated Service owners have a number of configurable metrics that measure the health of the service’s upstream dependencies These metrics are closely monitored with thresholds and alarmi ng in place Rollback procedures are documented in the Change Management (CM) ticket When possible changes are scheduled during regular change windows Emergency changes to production systems that require deviations from standard change management proced ures are associated with an incident and are logged and approved as appropriate Periodically AWS performs self audits of changes to key services to monitor quality maintain high standards and facilitate continuous improvement of the change management p rocess Any exceptions are analyzed to determine the root cause and appropriate actions are taken to bring the change into 
compliance or roll back the change if necessary. Actions are then taken to address and remediate the process or people issue.

Infrastructure
Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, Amazon is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured and that software is installed in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel enabled through the permissions service may log in to the central configuration management servers.

AWS Account Security Features
AWS provides a variety of tools and features that you can use to keep your AWS Account and resources safe from unauthorized use. These include credentials for access control, HTTPS endpoints for encrypted data transmission, the creation of separate IAM user accounts, user activity logging for security monitoring, and Trusted Advisor security checks. You can take advantage of all of these security tools no matter which AWS services you select.

AWS Credentials
To help ensure that only authorized users and processes access your AWS Account and resources, AWS uses several types of credentials for authentication. These include passwords, cryptographic keys, digital signatures, and certificates. We also provide the option of requiring multi-factor authentication (MFA) to log in to your AWS Account or IAM user accounts. The following table highlights the various AWS credentials and their uses.

Table 1: Credential types and uses

Passwords
Use: AWS root account or IAM user account login to the AWS Management Console.
Description: A string of characters used to log in to your AWS account or IAM account. AWS passwords must be a minimum of 6 characters and may be up to 128 characters.

Multi-Factor Authentication (MFA)
Use: AWS root account or IAM user account login to the AWS Management Console.
Description: A six-digit, single-use code that is required in addition to your password to log in to your AWS Account or IAM user account.

Access Keys
Use: Digitally signed requests to AWS APIs (using the AWS SDK, CLI, or REST/Query APIs).
Description: Includes an access key ID and a secret access key. You use access keys to digitally sign programmatic requests that you make to AWS.

Key Pairs
Use: SSH login to EC2 instances; CloudFront signed URLs.
Description: A key pair is required to connect to an EC2 instance launched from a public AMI. The supported lengths are 1024, 2048, and 4096. If you connect using SSH while using the EC2 Instance Connect API, the supported lengths are 2048 and 4096. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own.

X.509 Certificates
Use: Digitally signed SOAP requests to AWS APIs; SSL server certificates for HTTPS.
Description: X.509 certificates are only used to sign SOAP-based requests (currently used only with Amazon S3). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.
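To make one of these credential types concrete, the following minimal sketch uses the Python boto3 SDK (an assumption; this paper does not prescribe an SDK) to create an EC2 key pair and store the private key locally. The key name, region, and file path are illustrative placeholders.

    import os
    import boto3

    # Illustrative region and key name; substitute your own values.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # AWS generates the key pair and returns the private key material exactly once.
    response = ec2.create_key_pair(KeyName="example-keypair")

    # Save the private key with restrictive permissions; AWS retains only the public half.
    pem_path = "example-keypair.pem"
    with open(pem_path, "w") as pem_file:
        pem_file.write(response["KeyMaterial"])
    os.chmod(pem_path, 0o400)

    print("Created key pair:", response["KeyName"], "fingerprint:", response["KeyFingerprint"])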
You can download a Credential Report for your account at any time from the Security Credentials page. This report lists all of your account's users and the status of their credentials: whether they use a password, whether their password expires and must be changed regularly, the last time they changed their password, the last time they rotated their access keys, and whether they have MFA enabled.

For security reasons, if your credentials have been lost or forgotten, you cannot recover them or re-download them. However, you can create new credentials and then disable or delete the old set of credentials. In fact, AWS recommends that you change (rotate) your access keys and certificates on a regular basis. To help you do this without potential impact to your application's availability, AWS supports multiple concurrent access keys and certificates. With this feature, you can rotate keys and certificates into and out of operation on a regular basis without any downtime to your application. This can help to mitigate risk from lost or compromised access keys or certificates. The AWS IAM API enables you to rotate the access keys of your AWS Account as well as for IAM user accounts.

Passwords
Passwords are required to access your AWS Account, individual IAM user accounts, AWS Discussion Forums, and the AWS Support Center. You specify the password when you first create the account, and you can change it at any time by going to the Security Credentials page. AWS passwords can be up to 128 characters long and contain special characters, so we encourage you to create a strong password that cannot be easily guessed.

You can set a password policy for your IAM user accounts to ensure that strong passwords are used and that they are changed often. A password policy is a set of rules that define the type of password an IAM user can set. For more information about password policies, see Managing Passwords for IAM Users.
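The password policy described above can also be set programmatically. The sketch below, assuming the Python boto3 SDK and example policy values chosen purely for illustration, applies an account-level password policy and reads it back to confirm the settings.

    import boto3

    iam = boto3.client("iam")

    # Example values only; choose settings that match your organization's standards.
    iam.update_account_password_policy(
        MinimumPasswordLength=14,          # require long passwords
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        RequireNumbers=True,
        RequireSymbols=True,
        MaxPasswordAge=90,                 # force rotation every 90 days
        PasswordReusePrevention=24,        # disallow reuse of recent passwords
        AllowUsersToChangePassword=True,
    )

    # Read the policy back to verify what is now in effect.
    print(iam.get_account_password_policy()["PasswordPolicy"])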
AWS Multi-Factor Authentication (MFA)
AWS Multi-Factor Authentication (MFA) is an additional layer of security for accessing AWS services. When you enable this optional feature, you must provide a six-digit, single-use code in addition to your standard user name and password credentials before access is granted to your AWS Account settings or AWS services and resources. You get this single-use code from an authentication device that you keep in your physical possession. This is called multi-factor authentication because more than one authentication factor is checked before access is granted: a password (something you know) and the precise code from your authentication device (something you have).

You can enable MFA devices for your AWS Account as well as for the users you have created under your AWS Account with AWS IAM. In addition, you can add MFA protection for access across AWS Accounts, for when you want to allow a user you have created under one AWS Account to use an IAM role to access resources under another AWS Account. You can require the user to use MFA before assuming the role as an additional layer of security.

AWS MFA supports the use of both hardware tokens and virtual MFA devices. Virtual MFA devices use the same protocols as the physical MFA devices, but can run on any mobile hardware device, including a smartphone. A virtual MFA device uses a software application that generates six-digit authentication codes that are compatible with the Time-Based One-Time Password (TOTP) standard, as described in RFC 6238. Most virtual MFA applications allow you to host more than one virtual MFA device, which makes them more convenient than hardware MFA devices. However, you should be aware that because a virtual MFA might be run on a less secure device such as a smartphone, a virtual MFA might not provide the same level of security as a hardware MFA device.

You can also enforce MFA authentication for AWS service APIs in order to provide an extra layer of protection over powerful or privileged actions, such as terminating Amazon EC2 instances or reading sensitive data stored in Amazon S3. You do this by adding an MFA authentication requirement to an IAM access policy. You can attach these access policies to IAM users, IAM groups, or resources that support Access Control Lists (ACLs), like Amazon S3 buckets, SQS queues, and SNS topics.

It is easy to obtain hardware tokens from a participating third-party provider, or virtual MFA applications from an app store, and to set them up for use via the AWS website. More information is available at AWS Multi-Factor Authentication (MFA).

Access Keys
AWS requires that all API requests be signed; that is, they must include a digital signature that AWS can use to verify the identity of the requestor. You calculate the digital signature using a cryptographic hash function. The input to the hash function in this case includes the text of your request and your secret access key. If you use any of the AWS SDKs to generate requests, the digital signature calculation is done for you; otherwise, you can have your application calculate it and include it in your REST or Query requests by following the directions in Making Requests Using the AWS SDKs.

Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit, it also helps protect against potential replay attacks. A request must reach AWS within 15 minutes of the time stamp in the request; otherwise, AWS denies the request.

The most recent version of the digital signature calculation process is Signature Version 4, which calculates the signature using the HMAC-SHA256 protocol. Version 4 provides an additional measure of protection over previous versions by requiring that you sign the message using a key that is derived from your secret access key rather than using the secret access key itself. In addition, you derive the signing key based on credential scope, which facilitates cryptographic isolation of the signing key.

Because access keys can be misused if they fall into the wrong hands, we encourage you to save them in a safe place and not embed them in your code. For customers with large fleets of elastically scaling EC2 instances, the use of IAM roles can be a more secure and convenient way to manage the distribution of access keys. IAM roles provide temporary credentials, which not only get automatically loaded to the target instance, but are also automatically rotated multiple times a day.
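As a worked illustration of the Signature Version 4 key derivation described above, the following sketch uses only the Python standard library; the secret key, date, region, service, and string to sign are placeholder values. In practice the AWS SDKs perform this calculation for you.

    import hashlib
    import hmac

    def _hmac_sha256(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def derive_sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
        # The signing key is derived from the secret access key and the credential
        # scope (date/region/service), so the long-term secret is never used directly.
        k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
        k_region = _hmac_sha256(k_date, region)
        k_service = _hmac_sha256(k_region, service)
        return _hmac_sha256(k_service, "aws4_request")

    # Placeholder values for illustration only.
    signing_key = derive_sigv4_signing_key("EXAMPLE-SECRET-KEY", "20170301", "us-east-1", "ec2")
    string_to_sign = "AWS4-HMAC-SHA256\n20170301T120000Z\n20170301/us-east-1/ec2/aws4_request\n<hashed-request>"
    signature = hmac.new(signing_key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
    print(signature)

Because the derived key is scoped to a single date, region, and service, a leaked signature or signing key has a much smaller blast radius than a leaked secret access key.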
Key Pairs
Amazon EC2 instances created from a public AMI use a public/private key pair rather than a password for signing in via Secure Shell (SSH). The public key is embedded in your instance, and you use the private key to sign in securely without a password. After you create your own AMIs, you can choose other mechanisms to securely log in to your new instances. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own. Save the private key in a safe place on your system, and record the location where you saved it.

For Amazon CloudFront, you use key pairs to create signed URLs for private content, such as when you want to distribute restricted content that someone paid for. You create Amazon CloudFront key pairs by using the Security Credentials page. CloudFront key pairs can be created only by the root account and cannot be created by IAM users.

X.509 Certificates
X.509 certificates are used to sign SOAP-based requests. X.509 certificates contain a public key and additional metadata (like an expiration date that AWS verifies when you upload the certificate) and are associated with a private key. When you create a request, you create a digital signature with your private key and then include that signature in the request, along with your certificate. AWS verifies that you are the sender by decrypting the signature with the public key that is in your certificate. AWS also verifies that the certificate you sent matches the certificate that you uploaded to AWS.

For your AWS Account, you can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page. For IAM users, you must create the X.509 certificate (signing certificate) by using third-party software. In contrast with root account credentials, AWS cannot create an X.509 certificate for IAM users. After you create the certificate, you attach it to an IAM user by using IAM.

In addition to SOAP requests, X.509 certificates are used as SSL/TLS server certificates for customers who want to use HTTPS to encrypt their transmissions. To use them for HTTPS, you can use an open-source tool like OpenSSL to create a unique private key. You will need the private key to create the Certificate Signing Request (CSR) that you submit to a certificate authority (CA) to obtain the server certificate. You will then use the AWS CLI to upload the certificate, private key, and certificate chain to IAM.

You will also need an X.509 certificate to create a customized Linux AMI for EC2 instances. The certificate is only required to create an instance-backed AMI (as opposed to an EBS-backed AMI). You can have AWS create an X.509 certificate and private key that you can download, or you can upload your own certificate by using the Security Credentials page.

Individual User Accounts
AWS provides a centralized mechanism called AWS Identity and Access Management (IAM) for creating and managing individual users within your AWS Account. A user can be any individual, system, or application that interacts with AWS resources, either programmatically or through the AWS Management Console or AWS Command Line Interface (CLI). Each user has a unique name within the AWS Account and a unique set of security credentials not shared with other users. AWS IAM eliminates the need to share passwords or keys, and enables you to minimize the use of your AWS Account credentials. With IAM, you define policies that control which AWS services your users can access and what they can do with them. You can grant users only the minimum permissions they need to perform their jobs. See the AWS Identity and Access Management (AWS IAM) section for more information.
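As a minimal sketch of the least-privilege model described above (assuming the Python boto3 SDK; the user name, policy name, and bucket ARN are illustrative), the following example creates an IAM user and grants it only read access to a single S3 bucket.

    import json
    import boto3

    iam = boto3.client("iam")

    # Illustrative identifiers; replace with your own.
    user_name = "report-reader"
    bucket_arn = "arn:aws:s3:::example-reports-bucket"

    iam.create_user(UserName=user_name)

    # Grant only the specific actions this user needs to perform its job.
    least_privilege_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [bucket_arn, bucket_arn + "/*"],
            }
        ],
    }

    iam.put_user_policy(
        UserName=user_name,
        PolicyName="ReadExampleReportsOnly",
        PolicyDocument=json.dumps(least_privilege_policy),
    )

An inline policy is used here for brevity; a customer-managed policy attached to a group would typically scale better across many users.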
Secure HTTPS Access Points
For greater communication security when accessing AWS resources, you should use HTTPS instead of HTTP for data transmissions. HTTPS uses the SSL/TLS protocol, which uses public-key cryptography to prevent eavesdropping, tampering, and forgery. All AWS services provide secure customer access points (also called API endpoints) that allow you to establish secure HTTPS communication sessions.

Several services also now offer more advanced cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy, which uses session keys that are ephemeral and not stored anywhere. This helps prevent the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised.

Security Logs
As important as credentials and encrypted endpoints are for preventing security problems, logs are just as crucial for understanding events after a problem has occurred. To be effective as a security tool, a log must include not just a list of what happened and when, but also identify the source.

To help you with your after-the-fact investigations and near-real-time intrusion detection, AWS CloudTrail provides a log of events within your account. For each event, you can see what service was accessed, what action was performed, and who made the request. CloudTrail captures API calls as well as other events, such as console sign-ins. Once you have enabled CloudTrail, event logs are delivered about every 5 minutes. You can configure CloudTrail so that it aggregates log files from multiple regions and/or accounts into a single Amazon S3 bucket. By default, a single trail will record and deliver events in all current and future regions.

In addition to S3, you can send events to CloudWatch Logs for custom metrics and alarming, or you can upload the logs to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns. For rapid response, you can create CloudWatch Events rules to take timely action on specific events. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon S3 Glacier to help meet audit and compliance requirements.

In addition to CloudTrail's user activity logs, you can use the Amazon CloudWatch Logs feature to collect and monitor system, application, and custom log files from your EC2 instances and other sources in near real time. For example, you can monitor your web server's log files for invalid user messages to detect unauthorized login attempts to your guest OS.
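One possible way to enable the account-wide API logging described above is sketched below using the Python boto3 SDK. The trail and bucket names are illustrative, and the S3 bucket is assumed to already exist with a bucket policy that permits CloudTrail to write to it.

    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    # Illustrative names; the bucket must already grant CloudTrail write access.
    trail = cloudtrail.create_trail(
        Name="example-security-trail",
        S3BucketName="example-cloudtrail-logs-bucket",
        IsMultiRegionTrail=True,            # record events in all current and future regions
        IncludeGlobalServiceEvents=True,    # include IAM, STS, and other global services
    )
    cloudtrail.start_logging(Name=trail["Name"])

    # Query recent console sign-in events from the recorded event history.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        MaxResults=10,
    )
    for event in events["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])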
AWS Trusted Advisor Security Checks
The AWS Trusted Advisor customer support service not only monitors for cloud performance and resiliency, but also cloud security. Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS Account.

You also have the option for a Security contact at your organization to automatically receive a weekly email with an updated status of your Trusted Advisor security checks. The AWS Trusted Advisor service provides four checks at no additional charge to all users, including three important security checks: specific ports unrestricted, IAM use, and MFA on root account. When you sign up for Business or Enterprise-level AWS Support, you receive full access to all Trusted Advisor checks.

AWS Config Security Checks
AWS Config is a continuous monitoring and assessment service that records changes to the configuration of your AWS resources. You can view the current and historic configurations of a resource and use this information to troubleshoot outages, conduct security attack analysis, and much more. You can view the configuration at any point in time and use that information to re-configure your resources and bring them into a steady state during an outage situation.

Using AWS Config Rules, you can run continuous assessment checks on your resources to verify that they comply with your own security policies, industry best practices, and compliance regimes such as PCI/HIPAA. For example, AWS Config provides a managed AWS Config Rule to ensure that encryption is turned on for all EBS volumes in your account. You can also write a custom AWS Config Rule to essentially "codify" your own corporate security policies. AWS Config alerts you in real time when a resource is misconfigured or when a resource violates a particular security policy.

AWS Service-Specific Security
Not only is security built into every layer of the AWS infrastructure, but also into each of the services available on that infrastructure. AWS services are architected to work efficiently and securely with all AWS networks and platforms. Each service provides extensive security features to enable you to protect sensitive data and applications.

Compute Services
Amazon Web Services provides a variety of cloud-based computing services that include a wide selection of compute instances that can scale up and down automatically to meet the needs of your application or enterprise.

Amazon Elastic Compute Cloud (Amazon EC2) Security
Amazon Elastic Compute Cloud (Amazon EC2) is a key component in Amazon's Infrastructure as a Service (IaaS), providing resizable computing capacity using server instances in AWS's data centers. Amazon EC2 is designed to make web-scale computing easier by enabling you to obtain and configure capacity with minimal friction. You create and launch instances, which are collections of platform hardware and software.

Multiple Levels of Security
Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host platform, the virtual instance OS or guest OS, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goal is to prevent data contained within Amazon EC2 from being intercepted by unauthorized systems or users, and to provide Amazon EC2 instances themselves that are as secure as possible without sacrificing the flexibility in configuration that customers demand.

Hypervisor
Amazon EC2 currently utilizes a highly customized version of the Xen
hypervisor taking advantage of paravirtualization (in the case of Linux guests) Because para virtualized guests rely on the hype rvisor to provide support for operations that normally require privileged access the guest OS has no elevated access to the CPU The CPU provides four separate privilege modes: 0 3 called rings Ring 0 is the most privileged and 3 the least The host OS executes in Ring 0 However rather than executing in Ring 0 as most operating systems do the guest OS runs in a lesser privileged Ring 1 and applications in the least privileged Ring 3 This explicit virtualization of the physical resources leads to a cl ear separation between guest and hypervisor resulting in additional security separation between the two Traditionally hypervisors protect the physical hardware and bios virtualize the CPU storage networking and provide a rich set of management capab ilities With the Nitro System we are able to break apart those functions offload them to dedicated hardware and software and reduce costs by delivering all of the resources of a server to your instances The Nitro Hypervisor provides consistent perform ance and increased compute and memory resources for EC2 virtualized instances by removing host system software components It allows AWS to offer larger instance sizes (like c518xlarge) that provide practically all of the resources from the server to cust omers Previously C3 and C4 instances each eliminated software components by moving VPC and EBS functionality ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 23 to hardware designed and built by AWS This hardware enables the Nitro Hypervisor to be very small and uninvolved in data processing tasks for ne tworking and storage Nevertheless as AWS expands its global cloud infrastructure Amazon EC2’s use of its Xenbased hypervisor will also continue to grow Xen will remain a core component of EC2 instances for the foreseeable future Instance Isolation Different instances running on the same physical machine are isolated from each other via the Xen hypervisor Amazon is active in the Xen community which provides awareness of the latest developments In addition the AWS firewall resides within the hypervi sor layer between the physical network interface and the instance's virtual interface All packets must pass through this layer thus an instance’s neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts The physical RAM is separated using similar mechanisms Customer instances have no access to raw disk devices but instead are presented with virtualized disks The AWS proprietary disk virtualization layer automatical ly resets every block of storage used by the customer so that one customer’s data is never unintentionally exposed to another In addition memory allocated to guests is scrubbed (set to zero) by the hypervisor when it is unallocated to a guest The memor y is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete AWS recommends customers further protect their data using appropriate means One common solution is to run an encrypted file system on top of the virtualized disk device ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 24 Figure 2: Amazon EC2 multiple layers of security Host Operating System: Administrators with a business need to access the management plane are required to use multi factor authentication to 
gain access to purpose built administration hosts These administrative hosts are systems that are specifically designed built configured and hardened to protect the management plane of the cloud All such access is logged and audited When an employee no longer has a business need to access the management plane the privileges and access to these hosts and relevant systems can be revoked Guest Operating System: Virtual instances are completely controlled by you the customer You have full root access or administrative control over accounts services and applications AWS does not have any access rights to your instances or the guest OS AWS recommends a base set of security best practices to include disabling password only access to your guests and utilizing some form of multi factor authentication to gain access to your instances (or at a minimum certificate based SSH Version 2 access) Additionally you should employ a privilege escalation mechanism with logging on a per user basis For example if the guest OS is Linux after hardening your instance you should utilize certificate based SSHv2 to access the virtual instance disable remote root login use command line logging and use ‘sudo’ for privilege escalation You should gene rate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS ArchivedAmazon Web Services Amazon Web Services: Overview of Security Pro cesses Page 25 AWS also supports the use of the Secure Shell (SSH) network protocol to enable you to log in securely to your UNIX/Linux EC2 instances Aut hentication for SSH used with AWS is via a public/private key pair to reduce the risk of unauthorized access to your instance You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by utilizing an RDP certificate gener ated for your instance You also control the updating and patching of your guest OS including security updates Amazon provided Windows and Linux based AMIs are updated regularly with the latest patches so if you do not need to preserve data or customiza tions on your running Amazon AMI instances you can simply relaunch new instances with the latest updated AMI In addition updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories Firewall: Amazon EC2 provides a complete firewa ll solution; this mandatory inbound firewall is configured in a default deny all mode and Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic The traffic may be restricted by protocol by service port as well as by source IP address (individual IP or Classless InterDomain Routing (CIDR) block) The firewall can be configured in groups permitting different classes of instances to have different rules Consider for example the case of a traditional three tiered web applica tion The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet The group for the application servers would have port 8000 (application specific) accessible only to the web server group The group for the databas e servers would have port 3306 (MySQL) open only to the application server group All three groups would permit administrative access on port 22 (SSH) but only from the customer’s corporate network Highly secure applications can be deployed using this ex pressive mechanism See the following figure ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 26 Figure 3: Amazon EC2 security group firewall The firewall isn’t 
controlled through the guest OS; rather it requires your X509 certificate and key to authorize changes thus adding an extra layer of security AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall therefore enabling you to implement additional security through separation of duties The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose The default state is to deny all incoming traffic and you should plan carefully what you will open when building and securing your applications W ell informed traffic management and security design are still required on a per instance basis AWS further encourages you to apply additional per instance filters with host based firewalls such as IPtables or the Windows Firewall and VPNs This can res trict both inbound and outbound traffic API Access: API calls to launch and terminate instances change firewall parameters and perform other functions are all signed by your Amazon Secret Access Key which could be either the AWS Accounts Secret Access Key or the Secret Access key of a user created with AWS IAM Without access to your Secret Access Key Amazon EC2 API calls cannot be made on your behalf In addition API calls can be encrypted with SSL to maintain confidentiality Amazon recommends alway s using SSL protected API endpoints Permissions: AWS IAM also enables you to further control what APIs a user has permissions to call ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 27 Elastic Block Storage (Amazon EBS) Security Amazon Elastic Block Storage ( Amazon EBS) allows you to create storage volum es from 1 GB to 16 TB that can be mounted as devices by Amazon EC2 instances Storage volumes behave like raw unformatted block devices with user supplied device names and a block device interface You can create a file system on top of Amazon EBS volume s or use them in any other way you would use a block device (like a hard drive) Amazon EBS volume access is restricted to the AWS Account that created the volume and to the users under the AWS Account created with AWS IAM if the user has been granted ac cess to the EBS operations thus denying all other AWS Accounts and users the permission to view or access the volume Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of those services and at no additional charge However Amazon EBS replication is stored within the same availability zone not across multiple zones; therefore it is highly recommended that you conduct regular snapshots to Amazon S3 for long term data durability For customer s who have architected complex transactional databases using EBS it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed AWS does not perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2 You can make Amazon EBS volume snapshots publicly available to other AWS Accounts to use as the basis for creating your own volumes Sharing Amazon EBS volume snapshots does not provide other AWS Accounts with the permission to alter or delete the original snapshot as that right is explicitly reserved for the AWS Account that created the volume An EBS snapshot is a block level view of an entire EBS volume Note that da ta that is not visible through the file system on the volume such 
as files that have been deleted may be present in the EBS snapshot If you want to create shared snapshots you should do so carefully If a volume has held sensitive data or has had files deleted from it a new EBS volume should be created The data to be contained in the shared snapshot should be copied to the new volume and the snapshot created from the new volume Amazon EBS volumes are presented to you as raw unformatted block devices that have been wiped prior to being made available for use Wiping occurs immediately before reuse so that you can be assured that the wipe process completed If you have procedures requiring that all data be wiped via a specific method such as those detailed in NIST 800 88 (“Guidelines for Media Sanitization”) you have the ability to do ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 28 so on Amazon EBS You should conduct a specialized wipe procedure prior to deleting the volume for compliance with your established requirements Encryption of sensitive data is generally a good security practice and AWS provides the ability to encrypt EBS volumes and their snapshots with AES 256 The encryption occurs on the servers that host the EC2 instances providing encryption of data as it moves between EC2 instan ces and EBS storage In order to be able to do this efficiently and with low latency the EBS encryption feature is only available on EC2's more powerful instance types (eg M3 C3 R3 G2) Auto Scaling Security Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define so that the number of Amazon EC2 instances you are using scales up seamlessly during demand spikes to maintain performance and scales down automatically during demand lulls to m inimize costs Like all AWS services Auto Scaling requires that every request made to its control API be authenticated so only authenticated users can access and manage Auto Scaling Requests are signed with an HMAC SHA1 signature calculated from the requ est and the user’s private key However getting credentials out to new EC2 instances launched with Auto Scaling can be challenging for large or elastically scaling fleets To simplify this process you can use roles within IAM so that any new instances launched with a role will be given credentials automatically When you launch an EC2 instance with an IAM role temporary AWS security credentials with permissions specified by the role are securely provisioned to the instance and are made availa ble to your application via the Amazon EC2 Instance Metadata Service The Metadata Service make s new temporary security credentials available prior to the expiration of the current active credentials so that valid credentials are always available on the i nstance In addition the temporary security credentials are automatically rotated multiple times per day providing enhanced security You can further control access to Auto Scaling by creating users under your AWS Account using AWS IAM and controlling what Auto Scaling APIs these users have permission to call For m ore information about using roles when launching instances see Identity and Access Management for Amazon EC2 Networking Services Amazon Web Services provides a range of networking services that enable you to create a logically isolated network that you define establish a private network ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 29 connection to the AWS cl oud use a highly available and scalable DNS 
service and deliver content to your end users with low latency at high data transfer speeds with a content delivery web service Elastic Load Balancing Security Elastic Load Balancing is used to manage traffic o n a fleet of Amazon EC2 instances distributing traffic to instances across all availability zones within a region Elastic Load Balancing has all the advantages of an on premises load balancer plus several security benefits: • Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer • Offers clients a single point of contact and can also serve as the first line of defense against attacks on your network • When used in an Amazon VPC supports crea tion and management of security groups associated with your Elastic Load Balancing to provide additional networking and security options • Supports end toend traffic encryption using TLS (previously SSL) on those networks that use secure HTTP (HTTPS) connec tions When TLS is used the TLS server certificate used to terminate client connections can be managed centrally on the load balancer rather than on every individual instance HTTPS/TLS uses a long term secret key to generate a short term session key to be used between the server and the browser to create the ciphered (encrypted) message Elastic Load Balancing configures your load balancer with a pre defined cipher set that is used for TLS negotiation when a connection is established between a client and your load balancer The pre defined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms However some customers may have requirements for allowing only specific ciphers and protocols (such as PCI S OX etc) from clients to ensure that standards are met In these cases Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers You can choose to enable or disable the ciphers depending on your specifi c requirements To help ensure the use of newer and stronger cipher suites when establishing a secure connection you can configure the load balancer to have the final say in the cipher suite selection during the client server negotiation When the Server Order Preference option is selected the load balancer select s a cipher suite based on the server’s prioritization ArchivedAmazon Web Services Amazon Web Services: Overview of Security Proce sses Page 30 of cipher suites rather than the client’s This gives you more control over the level of security that clients use to connect to your load ba lancer For even greater communication privacy Elastic Load Balanc ing allows the use of Perfect Forward Secrecy which uses session keys that are ephemeral and not stored anywhere This prevents the decoding of captured data even if the secret long term key itself is compromised Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers whether you’re using HTTPS or TCP load balancing Typically client connection information such as IP address and p ort is lost when requests are proxied through a load balancer This is because the load balancer sends requests to the server on behalf of the client making your load balancer appear as though it is the requesting client Having the originating client IP address is useful if you need more information about visitors to your applications in order to gather connection statistics analyze traffic logs or manage whitelists of IP addresses Elastic Load Balancing 
access logs contain information about each HTTP and TCP request processed by your load balancer This includes the IP address and port of the requesting client the backend IP address of the instance that processed the request the size of the request and response and the actual request line from the client (for example GET http://wwwexamplecom: 80/HTTP/11) All requests sent to the load balancer are logged including requests that never made it to backend instances Amazon Virtual Private Cloud (Amazon VPC) Security Normally each Amazon EC2 insta nce that you launch is randomly assigned a public IP address in the Amazon EC2 address space Amazon VPC enables you to create an isolated portion of the AWS cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses in the range of your choice (eg 10000/16) You can define subnets within your VPC grouping similar kinds of instances based on IP address range and then set up routing and security to control the flow of traffic in and out of the instances and subnets AWS offers a var iety of VPC architecture templates with configurations that provide varying levels of public access: • VPC with a single public subnet only Your instances run in a private isolated section of the AWS cloud with direct access to the Internet Network ACLs a nd security groups can be used to provide strict control over inbound and outbound network traffic to your instances ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 31 • VPC with public and private subnets In addition to containing a public subnet this configuration adds a private subnet whose instances a re not addressable from the Internet Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT) • VPC with public and private subnets and hardware VPN access This config uration adds an IPsec VPN connection between your Amazon VPC and your data center effectively extending your data center to the cloud while also providing direct access to the Internet for public subnet instances in your Amazon VPC In this configuration customers add a VPN appliance on their corporate data center side • VPC with private subnet only and hardware VPN access Your instances run in a private isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet You can connect this private subnet to your corporate data center via an IPsec VPN tunnel You can also connect two VPCs using a private IP address which allows instances in the two VPCs to communicate with each other as if they are within the s ame network You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account within a single region Security features within Amazon VPC include security groups network ACLs routing tables and external gateways Each of these items is complementary to providing a secure isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network Amazon EC2 instances running within an Amazon VPC inherit all of t he benefits described below related to the guest OS and protection against packet sniffing Note however that you must create VPC security groups specifically for your Amazon VPC; any Amazon EC2 security groups you have created will not work inside your Amazon VPC Also Amazon VPC security groups have additional capabilities that Amazon EC2 security groups do not have such as being 
able to change the security group after the instance is launched and being able to specify any protocol with a standard pro tocol number (as opposed to just TCP UDP or ICMP) Each Amazon VPC is a distinct isolated network within the cloud; network traffic within each Amazon VPC is isolated from all other Amazon VPCs At creation time you select an IP address range for each Amazon VPC You may create and attach an Internet ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 32 gateway virtual private gateway or both to establish external connectivity subject to the controls below API Access: Calls to create and delete Amazon VPCs change routing security group and network A CL parameters and perform other functions are all signed by your Amazon Secret Access Key which could be either the AWS Account’s Secret Access Key or the Secret Access key of a user created with AWS IAM Without access to your Secret Access Key Amazon VPC API calls cannot be made on your behalf In addition API calls can be encrypted with SSL to maintain confidentiality Amazon recommends always using SSL protected API endpoints AWS IAM also enables a customer to further control what APIs a newly crea ted user has permissions to call Subnets and Route Tables: You create one or more subnets within each Amazon VPC; each instance launched in the Amazon VPC is connected to one subnet Traditional Layer 2 security attacks including MAC spoofing and ARP spo ofing are blocked Each subnet in an Amazon VPC is associated with a routing table and all network traffic leaving the subnet is processed by the routing table to determine the destination Firewall (Security Groups): Like Amazon EC2 Amazon VPC supports a complete firewall solution enabling filtering on both ingress and egress traffic from an instance The default group enables inbound communication from other members of the same group and outbound communication to any destination Traffic can be restric ted by any IP protocol by service port as well as source/destination IP address (individual IP or Classless Inter Domain Routing (CIDR) block) The firewall isn’t controlled through the guest OS; rather it can be modified only through the invocation of Amazon VPC APIs AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall therefore enabling you to implement additional security through separation of duties The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose Well informed traffic management and security design are still required on a perinstance basis AWS further encourages you to apply additional per instance filters with host based firewalls such as IP tables or the Windows Firewall ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 33 Figure 4: Amazon VPC network architecture Network Access Control Lists: To add a further layer of security within Amazon VPC you can configure network ACLs These are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within Amazon VPC These ACLs can contain ordered rules to allow or deny traffic based upon IP protocol by service port as well as source/destination IP address Like security groups network ACLs are managed through Amazon VPC APIs adding an additional layer of protection and enabling additional security through separation of duties The diagram below depicts how the security controls above inter relate to enable 
Virtual Private Gateway: A virtual private gateway enables private connectivity between the Amazon VPC and another network. Network traffic within each virtual private gateway is isolated from network traffic within all other virtual private gateways. You can establish VPN connections to the virtual private gateway from gateway devices at your premises. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet Gateway: An Internet gateway may be attached to an Amazon VPC to enable direct connectivity to Amazon S3, other AWS services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a NAT instance. Additionally, network routes are configured (see above) to direct traffic to the Internet gateway. AWS provides reference NAT AMIs that you can extend to perform network logging, deep packet inspection, application-layer filtering, or other security controls. This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet gateway, therefore enabling you to implement additional security through separation of duties. You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.

Dedicated Instances: Within a VPC, you can launch Amazon EC2 instances that are physically isolated at the host hardware level (i.e., they will run on single-tenant hardware). An Amazon VPC can be created with 'dedicated' tenancy, so that all instances launched into the Amazon VPC use this feature. Alternatively, an Amazon VPC may be created with 'default' tenancy, but you can specify dedicated tenancy for particular instances launched into it.

Elastic Network Interfaces: Each Amazon EC2 instance has a default network interface that is assigned a private IP address on your Amazon VPC network. You can create and attach an additional network interface, known as an elastic network interface, to any Amazon EC2 instance in your Amazon VPC, for a total of two network interfaces per instance. Attaching more than one network interface to an instance is useful when you want to create a management network, use network and security appliances in your Amazon VPC, or create dual-homed instances with workloads/roles on distinct subnets. A network interface's attributes, including the private IP address, Elastic IP addresses, and MAC address, follow the network interface as it is attached to or detached from an instance and reattached to another instance. For more information about Amazon VPC, see Amazon Virtual Private Cloud.

Additional Network Access Control with EC2-VPC

If you launch instances in a Region where you did not have instances before AWS launched the new EC2-VPC feature (also called Default VPC), all instances are automatically provisioned in a ready-to-use default VPC. You can choose to create additional VPCs, or you can create VPCs for instances in Regions where you already had instances before we launched EC2-VPC. If you create a VPC later, using a regular VPC, you specify a CIDR block, create subnets, enter the routing and security for those subnets, and provision an Internet gateway or NAT instance if you want one of your subnets to be able to reach the Internet.
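Those steps map directly onto a handful of API calls. The following sketch, again using boto3 with illustrative CIDR blocks and identifiers, builds a minimal regular VPC whose subnet can reach the Internet; it is a simplified outline rather than a production template (no tagging, error handling, or NAT for private subnets).

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC with a CIDR block of your choosing
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Create a subnet inside the VPC
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Attach an Internet gateway so the subnet can reach the Internet
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route Internet-bound traffic from the subnet's route table to the gateway
rt = ec2.create_route_table(VpcId=vpc_id)
ec2.create_route(
    RouteTableId=rt["RouteTable"]["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
ec2.associate_route_table(
    RouteTableId=rt["RouteTable"]["RouteTableId"],
    SubnetId=subnet["Subnet"]["SubnetId"],
)

A NAT gateway or NAT instance would be added in a similar way for private subnets that need outbound-only access.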
When you launch EC2 instances into an EC2-VPC, most of this work is automatically performed for you. When you launch an instance into a default VPC using EC2-VPC, we do the following to set it up for you:

• Create a default subnet in each Availability Zone
• Create an Internet gateway and connect it to your default VPC
• Create a main route table for your default VPC with a rule that sends all traffic destined for the Internet to the Internet gateway
• Create a default security group and associate it with your default VPC
• Create a default network access control list (ACL) and associate it with your default VPC
• Associate the default DHCP options set for your AWS account with your default VPC

In addition to the default VPC having its own private IP range, EC2 instances launched in a default VPC can also receive a public IP.

The following table summarizes the differences between instances launched into EC2-Classic, instances launched into a default VPC, and instances launched into a nondefault VPC.

Table 2: Differences between different EC2 instances

Public IP address
EC2-Classic: Your instance receives a public IP address.
EC2-VPC (Default VPC): Your instance receives a public IP address by default, unless you specify otherwise during launch.
Regular VPC: Your instance does not receive a public IP address by default, unless you specify otherwise during launch.

Private IP address
EC2-Classic: Your instance receives a private IP address from the EC2-Classic range each time it's started.
EC2-VPC (Default VPC): Your instance receives a static private IP address from the address range of your default VPC.
Regular VPC: Your instance receives a static private IP address from the address range of your VPC.

Multiple private IP addresses
EC2-Classic: We select a single IP address for your instance; multiple IP addresses are not supported.
EC2-VPC (Default VPC): You can assign multiple private IP addresses to your instance.
Regular VPC: You can assign multiple private IP addresses to your instance.

Elastic IP address
EC2-Classic: An EIP is disassociated from your instance when you stop it.
EC2-VPC (Default VPC): An EIP remains associated with your instance when you stop it.
Regular VPC: An EIP remains associated with your instance when you stop it.

DNS hostnames
EC2-Classic: DNS hostnames are enabled by default.
EC2-VPC (Default VPC): DNS hostnames are enabled by default.
Regular VPC: DNS hostnames are disabled by default.

Security group
EC2-Classic: A security group can reference security groups that belong to other AWS accounts.
EC2-VPC (Default VPC): A security group can reference security groups for your VPC only.
Regular VPC: A security group can reference security groups for your VPC only.

Security group association
EC2-Classic: You must terminate your instance to change its security group.
EC2-VPC (Default VPC): You can change the security group of your running instance.
Regular VPC: You can change the security group of your running instance.

Security group rules
EC2-Classic: You can add rules for inbound traffic only.
EC2-VPC (Default VPC): You can add rules for inbound and outbound traffic.
Regular VPC: You can add rules for inbound and outbound traffic.

Tenancy
EC2-Classic: Your instance runs on shared hardware; you cannot run an instance on single-tenant hardware.
EC2-VPC (Default VPC): You can run your instance on shared hardware or single-tenant hardware.
Regular VPC: You can run your instance on shared hardware or single-tenant hardware.

Note: Security groups for instances in EC2-Classic are slightly
different than security groups for instances in EC2 VPC For example you can add rules for inbound traffic for EC2 Classic but you can add rules for both inbound and outbound traffic to EC2 VPC In EC2 Classic you can’t change the security groups assigned to an instance after it’s launched but in EC2 VPC you can change secu rity groups assigned to an instance after it’s launched In addition you can't use the security groups that you've created for use with EC2 Classic with instances in your VPC You must create security groups specifically for use with instances in your VPC The rules you create for use with a security group for a VPC can't reference a security group for EC2 Classic and vice versa Amazon Route 53 Security Amazon Route 53 is a highly available and scalable Domain Name System (DNS) service that answers DNS q ueries translating domain names into IP addresses so computers can communicate with each other Route 53 can be used to connect user requests to infrastructure running in AWS – such as an Amazon EC2 instance or an Amazon S3 bucket – or to infrastructure o utside of AWS Amazon Route 53 lets you manage the IP addresses (records) listed for your domain names and it answers requests (queries) to translate specific domain names into their corresponding IP addresses Queries for your domain are automatically routed to a nearby DNS server using anycast in order to provide the lowest latency possible Route 53 makes it possible for you to manage traffic globally through a variety of routing types including Latency Based Routing (LBR) Geo DNS and Weighted Round Robin (WRR) —all of which can be combined with DNS Failover in order to help create a variety of low latency fault tolerant architectures The failover algorithms implemented by Amazon Route 53 are designed not only to route traffic to e ndpoints that are healthy but also to help avoid making disaster scenarios worse due to misconfigured health checks and applications endpoint overloads and partition failures Route 53 also offers Domain Name Registration – you can purchase and manage domain names such as examplecom and Route 53 will automatically configure default DNS settings for your domains You can buy manage and transfer (both in and out) domains from a wide selection of generic and country specific top level domains (TLDs) Du ring the registration process you have the option to enable privacy protection for your domain This option will hide most of your personal information from the public Whois database in order to help thwart scraping and spamming ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 39 Amazon Route 53 is built using AWS’s highly available and reliable infrastructure The distributed nature of the AWS DNS servers helps ensure a consistent ability to route your end users to your application Route 53 also helps ensure the availability of your website by providing health checks and DNS failover capabilities You can easily configure Route 53 to check the health of your website on a regular basis (even secure web sites that are available only over SSL) and to switch to a backup site if the primary one is unresponsiv e Like all AWS Services Amazon Route 53 requires that every request made to its control API be authenticated so only authenticated users can access and manage Route 53 API requests are signed with an HMAC SHA1 or HMAC SHA256 signature calculated from t he request and the user’s AWS Secret Access key Additionally the Amazon Route 53 control API is only accessible via SSL 
encrypted endpoints It supports both IPv4 and IPv6 routing You can control access to Amazon Route 53 DNS management functions by cr eating users under your AWS Account using AWS IAM and controlling which Route 53 operations these users have permission to perform Amazon CloudFront Security Amazon CloudFront gives customers an easy way to distribute content to end users with low latenc y and high data transfer speeds It delivers dynamic static and streaming content using a global network of edge locations Requests for customers’ objects are automatically routed to the nearest edge location so content is delivered with the best possi ble performance Amazon CloudFront is optimized to work with other AWS services like Amazon S3 Amazon EC2 Elastic Load Balancing and Amazon Route 53 It also works seamlessly with any non AWS origin server that stores the original definitive versions of your files Amazon CloudFront requires every request made to its control API be authenticated so only authorized users can create modify or delete their own Amazon CloudFront distributions Requests are signed with an HMAC SHA1 signature calculated fr om the request and the user’s private key Additionally the Amazon CloudFront control API is only accessible via SSL enabled endpoints There is no guarantee of durability of data held in Amazon CloudFront edge locations The service may from time to time remove objects from edge locations if those objects are not requested frequently Durability is provided by Amazon S3 which works as the origin server for Amazon CloudFront holding the original definitive copies of objects delivered by Amazon CloudFront ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 40 If you want control over who is able to download content from Amazon CloudFront you can enable the service’s private content feature This feature has two components: the first controls how content is delivered from the Amazon CloudFront edge location t o viewers on the Internet The second controls how the Amazon CloudFront edge locations access objects in Amazon S3 CloudFront also supports Geo Restriction which restricts access to your content based on the geographic location of your viewers To contr ol access to the original copies of your objects in Amazon S3 Amazon CloudFront allows you to create one or more “Origin Access Identities” and associate these with your distributions When an Origin Access Identity is associated with an Amazon CloudFront distribution the distribution will use that identity to retrieve objects from Amazon S3 You can then use Amazon S3’s ACL feature which limits access to that Origin Access Identity so the original copy of the object is not publicly readable To control who is able to download objects from Amazon CloudFront edge locations the service uses a signed URL verification system To use this system you first create a public private key pair and upload the public key to your account via the AWS Management Conso le Second you configure your Amazon CloudFront distribution to indicate which accounts you would authorize to sign requests – you can indicate up to five AWS Accounts you trust to sign requests Third as you receive requests you will create policy docum ents indicating the conditions under which you want Amazon CloudFront to serve your content These policy documents can specify the name of the object that is requested the date and time of the request and the source IP (or CIDR range) of the client maki ng the request You then calculate the SHA1 hash 
of your policy document and sign this using your private key Finally you include both the encoded policy document and the signature as query string parameters when you reference your objects When Amazon C loudFront receives a request it will decode the signature using your public key Amazon CloudFront only serve s requests that have a valid policy document and matching signature Note: Private content is an optional feature that must be enabled when you s et up your CloudFront distribution Content delivered without this feature enabled will be publicly readable Amazon CloudFront provides the option to transfer content over an encrypted connection (HTTPS) By default CloudFront accept s requests over both HTTP and HTTPS protocols However you can also configure CloudFront to require HTTPS for all requests or have CloudFront redirect HTTP requests to HTTPS You can even ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 41 configure CloudFront distributions to allow HTTP for some objects but require HTTPS for other objects Figure 6: Amazon CloudFront encrypted transmission You can configure one or more CloudFront origins to require CloudFront fetch objects from your origin using the protocol that the viewer used to request the object s For example when you use this CloudFront setting and the viewer uses HTTPS to request an object from CloudFront CloudFront also uses HTTPS to forward the request to your origin Amazon CloudFront uses the SSLv3 or TLSv1 protocols and a selection of ci pher suites that includes the Elliptic Curve Diffie Hellman Ephemeral (ECDHE) protocol on connections to both viewers and the origin ECDHE allows SSL/TLS clients to provide Perfect Forward Secrecy which uses session keys that are ephemeral and not stored anywhere This helps prevent the decoding of captured data by unauthorized third parties even if the secret long term key itself is compromised Note: If you're using your own server as your origin and you want to use HTTPS both between viewers and CloudFront and between CloudFront and your origin you must install a valid SSL certificate on the HTTP server that is signed by a third party certificate authority for example VeriSign or DigiCert By default you can deliver content to viewers over HTT PS by using your CloudFront distribution domain name in your URLs; for example https://dxxxxxcloudfrontnet/imagejpg If you want to deliver your content over HTTPS using your own domain name and your own SSL certificate you can use SNI Custom SSL or D edicated IP Custom SSL With Server Name Identification (SNI) Custom SSL CloudFront relies on the SNI extension of the TLS protocol which is supported by most modern web browsers However some users may not be able to access your content ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 42 because some ol der browsers do not support SNI (For a list of supported browsers visit CloudFront FAQs ) With Dedicated IP Custom SSL CloudFront dedicates IP addresses to your SSL certificate at each CloudFront edge location so that CloudFront can associate the incoming requests with the proper SSL certificate Amazon CloudFront access logs contain a comprehensive set of information about requests for content including the object requested the date and time of the request the edge location serving the request the client IP address the referrer and the user agent To enable access logs just specify the name of the Amazon S3 bucket to store the logs in when you configure your Amazon 
CloudFront distribution AWS Direct Connect Security With AWS Direct Connect you can provision a direct link between your internal network and an AWS region using a high throughput dedicated connection Doing this may help reduce your network costs improve throughput or provid e a more consistent network experience With this dedicated connection in place you can then create virtual interfaces directly to the AWS Cloud (for example to Amazon EC2 and Amazon S3) and Amazon VPC With Direct Connect you bypass internet service providers in your network path You can procure rack space within the facility housing the AWS Direct Connect location and deploy your equipment nearby Once deployed you can connect this equipment to AWS D irect Connect using a cross connect Each AWS Direct Connect location enables connectivity to the geographically nearest AWS region as well as access to other US regions For example you can provision a single connection to any AWS Direct Connect location in the US and use it to access public AWS services in all US Regions and AWS GovCloud (US) Using industry standard 8021q VLANs the dedicated connection can be partitioned into multiple virtual interfaces This allows you to use the same connection to a ccess public resources such as objects stored in Amazon S3 using public IP address space and private resources such as Amazon EC2 instances running within an Amazon VPC using private IP space while maintaining network separation between the public and private environments Amazon Direct Connect requires the use of the Border Gateway Protocol (BGP) with an Autonomous System Number (ASN) To create a virtual interface you use an MD5 cryptographic key for message authorization MD5 creates a keyed hash usin g your ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 43 secret key You can have AWS automatically generate a BGP MD5 key or you can provide your own Storage Services Amazon Web Services provides low cost data storage with high durability and availability AWS offers storage choices for backup archivi ng and disaster recovery as well as block and object storage Amazon Simple Storage Service (Amazon S3) Security Amazon Simple Storage Service ( Amazon S3) allows you to upload and retrieve data at any time from anywhere on the web Amazon S3 stores data as objects within buckets An object can be any kind of file: a text file a photo a video etc When you add a file to Amazon S3 you have the option of including metadata with the file and setting permissions to control access to the file For each buc ket you can control access to the bucket (who can create delete and list objects in the bucket) view access logs for the bucket and its objects and choose the geographical region where Amazon S3 will store the bucket and its contents Data Access Acce ss to data stored in Amazon S3 is restricted by default; only bucket and object owners have access to the Amazon S3 resources they create (note that a bucket/object owner is the AWS Account owner not the user who created the bucket/object) There are mult iple ways to control access to buckets and objects: • Identity and Access Management (IAM) Policies AWS IAM enables organizations with many employees to create and manage multiple users under a single AWS Account IAM policies are attached to the users ena bling centralized control of permissions for users under your AWS Account to access buckets or objects With IAM policies you can only grant users within your own AWS account permission to access your Amazon 
S3 resources.

• Access Control Lists (ACLs). Within Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources.

• Bucket Policies. Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users within your AWS account or other AWS accounts access to your Amazon S3 resources.

Table 3: Types of access control

IAM Policies: AWS account-level control: No; user-level control: Yes
ACLs: AWS account-level control: Yes; user-level control: No
Bucket Policies: AWS account-level control: Yes; user-level control: Yes

You can further restrict access to specific resources based on certain conditions. For example, you can restrict access based on request time (Date Condition), whether the request was sent using SSL (Boolean Condition), a requester's IP address (IP Address Condition), or the requester's client application (String Condition). To identify these conditions, you use policy keys. For more information about action-specific policy keys available within Amazon S3, see the Amazon Simple Storage Service Developer Guide.

Amazon S3 also gives developers the option to use query string authentication, which allows them to share Amazon S3 objects through URLs that are valid for a predefined period of time. Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. The signature in the query string secures the request.

Data Transfer

For maximum security, you can securely upload/download data to Amazon S3 via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Storage

Amazon S3 provides multiple options for protecting data at rest. Customers who prefer to manage their own encryption can use a client encryption library like the Amazon S3 Encryption Client to encrypt data before uploading to Amazon S3. Alternatively, you can use Amazon S3 Server Side Encryption (SSE) if you prefer to have Amazon S3 manage the encryption process for you. Data is encrypted with a key generated by AWS or with a key you supply, depending on your requirements. With Amazon S3 SSE, you can encrypt data on upload simply by adding an additional request header when writing the object. Decryption happens automatically when data is retrieved.

Note: Metadata, which you can include with your object, is not encrypted. Therefore, AWS recommends that customers not place sensitive information in Amazon S3 metadata.

Amazon S3 SSE uses one of the strongest block ciphers available: 256-bit Advanced Encryption Standard (AES-256). With Amazon S3 SSE, every protected object is encrypted with a unique encryption key. This object key itself is then encrypted with a regularly rotated master key. Amazon S3 SSE provides additional security by storing the encrypted data and encryption keys in different hosts. Amazon S3 SSE also makes it possible for you to enforce encryption requirements; for example, you can create and apply bucket policies that require that only encrypted data can be uploaded to your buckets.
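As an illustration of that last point, the following sketch (AWS SDK for Python, boto3) applies a bucket policy that denies any PutObject request that does not ask for server-side encryption, and then uploads an object with the required header. The bucket name and object key are hypothetical, and the single StringNotEquals condition is one common way of expressing the requirement.

import json
import boto3

s3 = boto3.client("s3")
bucket = "example-sensitive-data-bucket"  # hypothetical bucket name

# Deny any upload that does not request server-side encryption with AES-256
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::" + bucket + "/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Uploads now need to request SSE explicitly via the extra request header
s3.put_object(
    Bucket=bucket,
    Key="reports/2017-03.csv",
    Body=b"example,data",
    ServerSideEncryption="AES256",
)

Requests that omit the x-amz-server-side-encryption header are intended to be denied as well, since negated string conditions also match when the key is absent from the request.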
For long-term storage, you can automatically archive the contents of your Amazon S3 buckets to AWS's archival service, Amazon S3 Glacier. You can have data transferred at specific intervals to Amazon S3 Glacier by creating lifecycle rules in Amazon S3 that describe which objects you want to be archived to Amazon S3 Glacier, and when. As part of your data management strategy, you can also specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them.

When an object is deleted from Amazon S3, removal of the mapping from the public name to the object starts immediately and is generally processed across the distributed system within several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system.

Data Durability and Reliability

Amazon S3 is designed to provide 99.999999999% durability and 99.99% availability of objects over a given year. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 region. To help provide durability, Amazon S3 PUT and COPY operations synchronously store customer data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 helps maintain the durability of the objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. With Versioning, you can easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. You can further protect versions using Amazon S3 Versioning's MFA Delete feature. Once enabled for an Amazon S3 bucket, each version deletion request must include the six-digit code and serial number from your multi-factor authentication device.

Access Logs

An Amazon S3 bucket can be configured to log access to the bucket and objects within it. The access log contains details about each access request, including request type, the requested resource, the requestor's IP, and the time and date of the request. When logging is enabled for a bucket, log records are periodically aggregated into log files and delivered to the specified Amazon S3 bucket.

Cross-Origin Resource Sharing (CORS)

AWS customers who use Amazon S3 to host static web pages or store objects used by other web pages can load content securely by configuring an Amazon S3 bucket to explicitly enable cross-origin requests. Modern browsers use the Same Origin policy to block JavaScript or HTML5 from allowing requests to load content from another site or domain, as a way to help ensure that malicious content is not loaded from a less reputable source (such as during cross-site scripting attacks). With the Cross-Origin Resource Sharing (CORS) policy enabled, assets such as web fonts and images stored in an Amazon S3 bucket can be safely referenced by external web pages, style sheets, and HTML5 applications.

Amazon S3 Glacier Security

Like Amazon S3, the
Amazon S3 Glacier service p rovides low cost secure and durable storage But where Amazon S3 is designed for rapid retrieval Amazon S3 Glacier is meant to be used as an archival service for data that is not accessed often and for which retrieval times of several hours are suitable ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 47 Amazon S3 Glacier stores files as archives within vaults Archives can be any data such as a photo video or document and can contain one or several files You can store an unlimited number of archives in a single vault and can create up to 1000 vault s per region Each archive can contain up to 40 TB of data Data Upload To transfer data into Amazon S3 Glacier vaults you can upload an archive in a single upload operation or a multipart operation In a single upload operation you can upload archives u p to 4 GB in size However customers can achieve better results using the Multipart Upload API to upload archives greater than 100 MB Using the Multipart Upload API allows you to upload large archives up to about 40000 GB The Multipart Upload API call is designed to improve the upload experience for larger archives; it enables the parts to be uploaded independently in any order and in parallel If a multipart upload fails you only need to upload the failed part again and not the entire archive When you upload data to Amazon S3 Glacier you must compute and supply a tree hash Amazon S3 Glacier checks the hash against the data to help ensure that it has not been altered en route A tree hash is generated by computing a hash for each megabyte sized se gment of the data and then combining the hashes in tree fashion to represent ever growing adjacent segments of the data As an alternate to using the Multipart Upload feature customers with very large uploads to Amazon S3 Glacier may consider using the A WS Snowball service instead to transfer the data AWS Snowball facilitates moving large amounts of data into AWS using portable storage devices for transport AWS transfers your data directly off of storage devices using Amazon’s high speed internal networ k bypassing the Internet You can also set up Amazon S3 to transfer data at specific intervals to Amazon S3 Glacier You can create lifecycle rules in Amazon S3 that describe which objects you want to be archived to Amazon S3 Glacier and when You can als o specify how long Amazon S3 should wait after the objects are put into Amazon S3 to delete them To achieve even greater security you can securely upload/download data to Amazon S3 Glacier via the SSL encrypted endpoints The encrypted endpoints are acce ssible from both the Internet and from within Amazon EC2 so that data is transferred securely both within AWS and to and from sources outside of AWS ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 48 Data Retrieval Retrieving archives from Amazon S3 Glacier requires the initiation of a retrieval job which is generally completed in 3 to 5 hours You can then access the data via HTTP GET requests The data will remain available to you for 24 hours You can retrieve an entire archive or several files from an archive If you want to retrieve only a subset of an archive you can use one retrieval request to specify the range of the archive that contains the files you are interested or you can initiate multiple retrieval requests each with a range for one or more files You can also limit the number of vault inventory items retrieved by filtering on an archive creation date 
range or by setting a maximum items limit Whichever method you choose when you retrieve portions of your archive you can use the supplied checksum to help ensure the integrity of the file s provided that the range that is retrieved is aligned with the tree hash of the overall archive Data Storage Amazon S3 Glacier automatically encrypts the data using AES 256 and stores it durably in an immutable form Amazon S3 Glacier is designed to prov ide average annual durability of 99999999999% for an archive It stores each archive in multiple facilities and multiple devices Unlike traditional systems which can require laborious data verification and manual repair Amazon S3 Glacier performs regula r systematic data integrity checks and is built to be automatically self healing When an object is deleted from Amazon S3 Glacier removal of the mapping from the public name to the object starts immediately and is generally processed across the distrib uted system within several seconds Once the mapping is removed there is no remote access to the deleted object The underlying storage area is then reclaimed for use by the system Data Access Only your account can access your data in Amazon S3 Glacier To control access to your data in Amazon S3 Glacier you can use AWS IAM to specify which users within your account have rights to operations on a given vault AWS Storage Gateway Security The AWS Storage Gateway service connects your on premises software appliance with cloud based storage to provide seamless and secure integration between your IT environment and the AWS storage infrastructure The service enables you to securely ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 49 upload data to AWS’ scalable reliable and secure Amazon S3 storage service f or cost effective backup and rapid disaster recovery AWS Storage Gateway transparently backs up data off site to Amazon S3 in the form of Amazon EBS snapshots Amazon S3 redundantly stores these snapshots on multiple devices across multiple facilities de tecting and repairing any lost redundancy The Amazon EBS snapshot provides a point intime backup that can be restored on premises or used to instantiate new Amazon EBS volumes Data is stored within a single region that you specify AWS Storage Gateway offers three options: • Gateway Stored Volumes (where the cloud is backup) In this option your volume data is stored locally and then pushed to Amazon S3 where it is stored in redundant encrypted form and made available in the form of Amazon Elastic Block Storage ( Amazon EBS) snapshots When you use this model the on premises storage is primary delivering low latency access to your entire dataset and the cloud storage is the backup • Gateway Cached Volumes (where the cloud is primary) In this option your volume data is stored encrypted in Amazon S3 visible within your enterprise's network via an iSCSI interface Recently accessed data is cached on premises for low latency local access When you use this model the cloud storage is primary b ut you get low latency access to your active working set in the cached volumes on premises • Gateway Virtual Tape Library (VTL) In this option you can configure a Gateway VTL with up to 10 virtual tape drives per gateway 1 media changer and up to 1500 v irtual tape cartridges Each virtual tape drive responds to the SCSI command set so your existing on premises backup applications (either disk to tape or disk todiskto tape) will work without modification No matter which option you choose data is asy 
nchronously transferred from your on premises storage hardware to AWS over SSL The data is stored encrypted in Amazon S3 using Advanced Encryption Standard (AES) 256 a symmetric key encryption standard using 256 bit encryption keys The AWS Storage Gate way only uploads data that has changed minimizing the amount of data sent over the Internet The AWS Storage Gateway runs as a virtual machine (VM) that you deploy on a host in your data center running VMware ESXi Hypervisor v 41 or v 5 or Microsoft Hype rV (you download the VMware software during the setup process) You can also run within EC2 using a gateway AMI During the installation and configuration process you can ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 50 create up to 12 stored volumes 20 Cached volumes or 1500 virtual tape cartridges per gateway Once installed each gateway will automatically download install and deploy updates and patches This activity takes place during a maintenance window that you can set on a per gateway basis The iSCSI protocol supports authentication betwee n targets and initiators via CHAP (Challenge Handshake Authentication Protocol) CHAP provides protection against maninthemiddle and playback attacks by periodically verifying the identity of an iSCSI initiator as authenticated to access a storage volum e target To set up CHAP you must configure it in both the AWS Storage Gateway console and in the iSCSI initiator software you use to connect to the target After you deploy the AWS Storage Gateway VM you must activate the gateway using the AWS Storage Gateway console The activation process associates your gateway with your AWS Account Once you establish this connection you can manage almost all aspects of your gateway from the console In the activation process you specify the IP address of your gateway name your gateway identify the AWS region in which you want your snapshot backups stored and specify the gateway time zone AWS Snowball Security AWS Snowball is a simple secure method for physically transferring large amounts of data to A mazon S3 EBS or Amazon S3 Glacier storage This service is typically used by customers who have over 100 GB of data and/or slow connection speeds that would result in very slow transfer rates over the Internet With AWS Snowball you prepare a portable s torage device that you ship to a secure AWS facility AWS transfers the data directly off of the storage device using Amazon’s high speed internal network thus bypassing the Internet Conversely data can also be exported from AWS to a portable storage de vice Like all other AWS services the AWS Snowball service requires that you securely identify and authenticate your storage device In this case you will submit a job request to AWS that includes your Amazon S3 bucket Amazon EBS region AWS Access Key ID and return shipping address You then receive a unique identifier for the job a digital signature for authenticating your device and an AWS address to ship the storage device to For Amazon S3 you place the signature file on the root directory of yo ur device For Amazon EBS you tape the signature barcode to the exterior of the device The signature file is used only for authentication and is not uploaded to Amazon S3 or EBS ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 51 For transfers to Amazon S3 you specify the specific buckets to which the d ata should be loaded and ensure that the account doing the loading has write permission for the buckets You 
should also specify the access control list to be applied to each object loaded to Amazon S3. For transfers to EBS, you specify the target region for the EBS import operation. If the storage device is less than or equal to the maximum volume size of 1 TB, its contents are loaded directly into an Amazon EBS snapshot. If the storage device's capacity exceeds 1 TB, a device image is stored within the specified S3 log bucket. You can then create a RAID of Amazon EBS volumes using software such as Logical Volume Manager, and copy the image from S3 to this new volume.

For added protection, you can encrypt the data on your device before you ship it to AWS. For Amazon S3 data, you can use a PIN-code device with hardware encryption or TrueCrypt software to encrypt your data before sending it to AWS. For EBS and Amazon S3 Glacier data, you can use any encryption method you choose, including a PIN-code device. AWS will decrypt your Amazon S3 data before importing, using the PIN code and/or TrueCrypt password you supply in your import manifest. AWS uses your PIN to access a PIN-code device, but does not decrypt software-encrypted data for import to Amazon EBS or Amazon S3 Glacier. The following table summarizes your encryption options for each type of import/export job.

Table 4: Encryption options for import/export jobs

Import to Amazon S3
Source: Files on a device file system. Encrypt data using a PIN-code device and/or TrueCrypt before shipping the device.
Target: Objects in an existing Amazon S3 bucket.
Result: AWS decrypts the data before performing the import; one object for each file. AWS erases your device after every import job prior to shipping.

Export from Amazon S3
Source: Objects in one or more Amazon S3 buckets. Provide a PIN code and/or password that AWS will use to encrypt your data.
Target: Files on your storage device.
Result: AWS formats your device and copies your data to an encrypted file container on your device; one file for each object. AWS encrypts your data prior to shipping. Use a PIN-code device and/or TrueCrypt to decrypt the files.

Import to Amazon S3 Glacier
Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
Target: One archive in an existing Amazon S3 Glacier vault.
Result: AWS does not decrypt your device; the device image is stored as a single archive. AWS erases your device after every import job prior to shipping.

Import to Amazon EBS (Device Capacity < 1 TB)
Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
Target: One Amazon EBS snapshot.
Result: AWS does not decrypt your device; the device image is stored as a single snapshot. If the device was encrypted, the image is encrypted. AWS erases your device after every import job prior to shipping.

Import to Amazon EBS (Device Capacity > 1 TB)
Source: Entire device. Encrypt the data using the encryption method of your choice before shipping.
Target: Multiple objects in an existing Amazon S3 bucket.
Result: AWS does not decrypt your device; the device image is chunked into a series of 1 TB snapshots stored as objects in the Amazon S3 bucket specified in the manifest file. If the device was encrypted, the image is encrypted. AWS erases your device after every import job prior to shipping.

After the import is complete, AWS Snowball will erase the contents of your storage device to
safeguard the data during return shipment AWS overwrites all writable blocks on the storage device with zeroes You will need to repartition and format the device after the wipe If AWS is unable to erase the data on the device it will be scheduled for destruction and our support team will contact yo u using the email address specified in the manifest file you ship with the device When shipping a device internationally the customs option and certain required subfields are required in the manifest file sent to AWS AWS Snowball uses these values to va lidate the inbound shipment and prepare the outbound customs paperwork Two of these options are whether the data on the device is encrypted or not and the encryption software’s classification When shipping encrypted data to or from the United States the encryption software must be classified as 5D992 under the United States Export Administration Regulations Amazon Elastic File System Security Amazon Elastic File System (Amazon EFS) provides simple scalable file storage for use with Amazon EC2 instances in the AWS Cloud With Amazon EFS storage capacity is elastic growing and shrinking automatically as you add and remove files Amazon EFS file systems are distributed across an unconstrained number of storage servers enabling file systems to grow elast ically to petabyte scale and allowing massively parallel access from Amazon EC2 instances to your data Data Access With Amazon EFS you can create a file system mount the file system on an Amazon EC2 instance and then read and write data from to and fr om your file system You can mount an Amazon EFS file system on EC2 instances in your VPC through the Network File System versions 40 and 41 (NFSv4) protocol To access your Amazon EFS file system in a VPC you create one or more mount targets in the VP C A mount target provides an IP address for an NFSv4 endpoint You can then mount an Amazon EFS file system to this end point using its DNS name which will resolve to the IP address of the EFS mount target in the same Availability Zone as your EC2 instan ce You can create one mount target in each Availability Zone in a region If there are multiple subnets in an Availability Zone in your VPC you create a mount target in one of the subnets and all EC2 instances in that Availability Zone share that mount target You ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 54 can also mount an EFS file system on a host in an on premises datacenter using AWS Direct Connect When using Amazon EFS you specify Amazon EC2 security groups for your EC2 instances and security groups for the EFS mount targets associated wit h the file system Security groups act as a firewall and the rules you add define the traffic flow You can authorize inbound/outbound access to your EFS file system by adding rules that allow your EC2 instance to connect to your Amazon EFS file system vi a the mount target using the NFS port After mounting the file system via the mount target you use it like any other POSIX compliant file system Files and directories in an EFS file system support standard Unix style read/write/execute permissions based on the user and group ID asserted by the mounting NFSv41 client For information about NFS level permissions and related considerations see Working with Users Groups and Permissions at the Network File System (NFS) Level All Amazon EFS file systems are owned by an AWS Account You can use IAM policies to grant permissions to other users so that they can perform administrative 
operations on your file systems, including deleting a file system or modifying a mount target's security groups. For more information about EFS permissions, see Overview of Managing Access Permissions to Your Amazon EFS Resources.

Data Durability and Reliability

Amazon EFS is designed to be highly durable and highly available. All data and metadata are stored across multiple Availability Zones, and all service components are designed to be highly available. EFS provides strong consistency by synchronously replicating data across Availability Zones, with read-after-write semantics for most file operations. Amazon EFS incorporates checksums for all metadata and data throughout the service. Using a file system checking process (FSCK), EFS continuously validates a file system's metadata and data integrity.

Data Sanitization

Amazon EFS is designed so that when you delete data from a file system, that data will never be served again. If your procedures require that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization"), we recommend that you conduct a specialized wipe procedure prior to deleting the file system.

Database Services

Amazon Web Services provides a number of database solutions for developers and businesses, from managed relational and NoSQL database services to in-memory caching as a service and a petabyte-scale data warehouse service.

Amazon DynamoDB Security

Amazon DynamoDB is a managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB enables you to offload the administrative burdens of operating and scaling distributed databases to AWS, so you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. You can create a database table that can store and retrieve any amount of data and serve any level of request traffic. DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity you specified and the amount of data stored, while maintaining consistent, fast performance.

All data items are stored on Solid State Drives (SSDs) and are automatically replicated across multiple Availability Zones in a region to provide built-in high availability and data durability. You can set up automatic backups using a special template in AWS Data Pipeline that was created just for copying DynamoDB tables. You can choose full or incremental backups to a table in the same region or a different region. You can use the copy for disaster recovery (DR) in the event that an error in your code damages the original table, or to federate DynamoDB data across regions to support a multi-region application.

To control who can use the DynamoDB resources and API, you set up permissions in AWS IAM. In addition to controlling access at the resource level with IAM, you can also control access at the database level: you can create database-level permissions that allow or deny access to items (rows) and attributes (columns) based on the needs of your application. These database-level permissions are called fine-grained access controls, and you create them using an IAM policy that specifies under what circumstances a user or application can access a DynamoDB table. The IAM policy can restrict access to individual items in a table, access to the attributes in those items, or both at the same time.

Figure 7: Database-level permissions
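A minimal sketch of such a fine-grained access control policy, written with boto3 and attached to a hypothetical IAM user, might look like the following. The table name, account ID, user name, and attribute names are illustrative assumptions; the dynamodb:LeadingKeys and dynamodb:Attributes condition keys are what restrict access to matching items and to the listed attributes.

import json
import boto3

iam = boto3.client("iam")

# Hypothetical table and account; the policy limits the user to items whose
# partition key equals their own IAM user name, and to three attributes.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/GameScores",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${aws:username}"],
                "dynamodb:Attributes": ["UserId", "GameTitle", "TopScore"]
            },
            "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"}
        }
    }]
}

iam.put_user_policy(
    UserName="game-app-user",
    PolicyName="dynamodb-fine-grained-access",
    PolicyDocument=json.dumps(policy_document),
)

With web identity federation, described next, the same pattern is typically used with a provider-specific policy variable in place of ${aws:username}.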
You can optionally use web identity federation to control access by application users who are authenticated by Login with Amazon, Facebook, or Google. Web identity federation removes the need for creating individual IAM users; instead, users can sign in to an identity provider and then obtain temporary security credentials from AWS Security Token Service (AWS STS). AWS STS returns temporary AWS credentials to the application and allows it to access the specific DynamoDB table.

In addition to requiring database and user permissions, each request to the DynamoDB service must contain a valid HMAC-SHA256 signature, or the request is rejected. The AWS SDKs automatically sign your requests; however, if you want to write your own HTTP POST requests, you must provide the signature in the header of your request to Amazon DynamoDB. To calculate the signature, you must request temporary security credentials from the AWS Security Token Service. Use the temporary security credentials to sign your requests to Amazon DynamoDB. Amazon DynamoDB is accessible via TLS/SSL-encrypted endpoints.

Amazon Relational Database Service (Amazon RDS) Security

Amazon RDS allows you to quickly create a relational database (DB) instance and flexibly scale the associated compute resources and storage capacity to meet application demand. Amazon RDS manages the database instance on your behalf by performing backups, handling failover, and maintaining the database software. Currently, Amazon RDS is available for MySQL, Oracle, Microsoft SQL Server, and PostgreSQL database engines.

Amazon RDS has multiple features that enhance reliability for critical production databases, including DB security groups, permissions, SSL connections, automated backups, DB snapshots, and Multi-AZ deployments. DB instances can also be deployed in an Amazon VPC for additional network isolation.

Access Control

When you first create a DB Instance within Amazon RDS, you will create a master user account, which is used only within the context of Amazon RDS to control access to your DB Instance(s). The master user account is a native database user account that allows you to log on to your DB Instance with all database privileges. You can specify the master user name and password you want associated with each DB Instance when you create the DB Instance. Once you have created your DB Instance, you can connect to the database using the master user credentials. Subsequently, you can create additional user accounts so that you can restrict who can access your DB Instance.

You can control Amazon RDS DB Instance access via DB Security Groups, which are similar to Amazon EC2 Security Groups but not interchangeable. DB Security Groups act like a firewall controlling network access to your DB Instance. Database Security Groups default to a "deny all" access mode, and customers must specifically authorize network ingress. There are two ways of doing this: authorizing a network IP range, or authorizing an existing Amazon EC2 Security Group. DB Security Groups only allow access to the database server port (all others are blocked) and can be updated without restarting the Amazon RDS DB Instance, which allows a customer seamless control of their database access. Using AWS IAM, you can further control access to your RDS DB instances. AWS IAM
enables you to control what RDS operations each individual AWS IAM user has permission to call Network Isolation For additional network access con trol you can run your DB Instances in an Amazon VPC Amazon VPC enables you to isolate your DB Instances by specifying the IP range you wish to use and connect to your existing IT infrastructure through industry standard encrypted IPsec VPN Running Amaz on RDS in a VPC enables you to have a DB instance within a private subnet You can also set up a virtual private gateway that extends your corporate network into your VPC and allows access to the RDS DB instance in that VPC Refer to the Amazon VPC User Guide for more details For Multi AZ deployments defining a subnet for all availability zones in a region will allow Amazon RDS to create a new standby in another availability zone should the need arise You can create DB Subnet Groups which are collections of subnets that you may want to designate for your RDS DB Instances in a VPC Each DB Subnet Group should have at least one subnet for every availability zone in a given reg ion In this case when you create a DB Instance in a VPC you select a DB Subnet Group; Amazon RDS then uses that DB Subnet Group and your preferred availability zone to select a subnet and ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 58 an IP address within that subnet Amazon RDS creates and associat es an Elastic Network Interface to your DB Instance with that IP address DB Instances deployed within an Amazon VPC can be accessed from the Internet or from Amazon EC2 Instances outside the VPC via VPN or bastion hosts that you can launch in your public subnet To use a bastion host you will need to set up a public subnet with an EC2 instance that acts as an SSH Bastion This public subnet must have an Internet gateway and routing rules that allow traffic to be directed via the SSH host which must then forward requests to the private IP address of your Amazon RDS DB instance DB Security Groups can be used to help secure DB Instances within an Amazon VPC In addition network traffic entering and exiting each subnet can be allowed or denied via network A CLs All network traffic entering or exiting your Amazon VPC via your IPsec VPN connection can be inspected by your on premises security infrastructure including network firewalls and intrusion detection systems Encryption You can encrypt connections be tween your application and your DB Instance using SSL For MySQL and SQL Server RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned For MySQL you launch the mysql client using the ssl_ca para meter to reference the public key in order to encrypt connections For SQL Server download the public key and import the certificate into your Windows operating system Oracle RDS uses Oracle native network encryption with a DB instance You simply add the native network encryption option to an option group and associate that option group with the DB instance Once an encrypted connection is established data transferred between the DB Instance and your application will be encrypted during transfer You can also require your DB instance to only accept encrypted connections Amazon RDS supports Transparent Data Encryption (TDE) for SQL Server (SQL Server Enterprise Edition) and Oracle (part of the Oracle Advanced Security option available in Oracle Enterpris e Edition) The TDE feature automatically encrypts data before it is written to storage and automatically 
decrypts data when it is read from storage Note: SSL support within Amazon RDS is for encrypting the connection between your application and your DB Instance; it should not be relied on for authenticating the DB Instance itself ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 59 While SSL offers security benefits be aware that SSL encryption is a compute intensive operation and will increase the latency of your database connection To learn how SSL works with SQL Server you can read more in the Amazon Relational Database Service User Guide Automated Backups and DB Snapshots Amazon RDS provi des two different methods for backing up and restoring your DB Instance(s): automated backups and database snapshots (DB Snapshots) Turned on by default the automated backup feature of Amazon RDS enables point in time recovery for your DB Instance Amazo n RDS will back up your database and transaction logs and store both for a user specified retention period This allows you to restore your DB Instance to any second during your retention period up to the last 5 minutes Your automatic backup retention pe riod can be configured to up to 35 days During the backup window storage I/O may be suspended while your data is being backed up This I/O suspension typically lasts a few minutes This I/O suspension is avoided with Multi AZ DB deployments since the ba ckup is taken from the standby DB Snapshots are user initiated backups of your DB Instance These full database backups are stored by Amazon RDS until you explicitly delete them You can copy DB snapshots of any size and move them between any of AWS’s pub lic regions or copy the same snapshot to multiple regions simultaneously You can then create a new DB Instance from a DB Snapshot whenever you desire DB Instance Replication Amazon cloud computing resources are housed in highly available data center fac ilities in different regions of the world and each region contains multiple distinct locations called Availability Zones Each Availability Zone is engineered to be isolated from failures in other Availability Zones and to provide inexpensive low latenc y network connectivity to other Availability Zones in the same region To architect for high availability of your Oracle PostgreSQL or MySQL databases you can run your RDS DB instance in several Availability Zones an option called a Multi AZ deployment When you select this option Amazon automatically provisions and maintains a synchronous standby replica of your DB instance in a different Availability Zone The primary DB instance is synchronously replicated across Availability Zones to the standby re plica In the event of DB instance or Availability Zone failure Amazon RDS will automatically failover to the standby so that database operations can resume quickly without administrative intervention ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 60 For customers who use MySQL and need to scale beyond the capacity constraints of a single DB Instance for read heavy database workloads Amazon RDS provides a Read Replica option Once you create a read replica database updates on the source DB instance are replicated to the read replica using MySQL’s nativ e asynchronous replication You can create multiple read replicas for a given source DB instance and distribute your application’s read traffic among them Read replicas can be created with Multi AZ deployments to gain read scaling benefits in addition to the enhanced database write availability and 
DB Instance Replication
Amazon cloud computing resources are housed in highly available data center facilities in different regions of the world, and each region contains multiple distinct locations called Availability Zones. Each Availability Zone is engineered to be isolated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same region.

To architect for high availability of your Oracle, PostgreSQL, or MySQL databases, you can run your RDS DB instance in several Availability Zones, an option called a Multi-AZ deployment. When you select this option, Amazon automatically provisions and maintains a synchronous standby replica of your DB instance in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to the standby replica. In the event of DB instance or Availability Zone failure, Amazon RDS will automatically fail over to the standby so that database operations can resume quickly without administrative intervention.

For customers who use MySQL and need to scale beyond the capacity constraints of a single DB Instance for read-heavy database workloads, Amazon RDS provides a Read Replica option. Once you create a read replica, database updates on the source DB instance are replicated to the read replica using MySQL's native asynchronous replication. You can create multiple read replicas for a given source DB instance and distribute your application's read traffic among them. Read replicas can be created with Multi-AZ deployments to gain read scaling benefits in addition to the enhanced database write availability and data durability provided by Multi-AZ deployments.
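To make the replication options concrete, the following boto3 sketch converts an existing instance to a Multi-AZ deployment and then adds a MySQL read replica. The instance identifiers and instance class are placeholders.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Turn an existing instance into a Multi-AZ deployment (synchronous standby in another AZ).
    rds.modify_db_instance(
        DBInstanceIdentifier="mydb",
        MultiAZ=True,
        ApplyImmediately=True,
    )

    # Add an asynchronous read replica to offload read-heavy traffic from the source instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mydb-replica-1",
        SourceDBInstanceIdentifier="mydb",
        DBInstanceClass="db.m4.large",
    )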
Automatic Software Patching
Amazon RDS will make sure that the relational database software powering your deployment stays up to date with the latest patches. When necessary, patches are applied during a maintenance window that you can control. You can think of the Amazon RDS maintenance window as an opportunity to control when DB Instance modifications (such as scaling DB Instance class) and software patching occur, in the event either are requested or required. If a "maintenance" event is scheduled for a given week, it will be initiated and completed at some point during the 30-minute maintenance window you identify.

The only maintenance events that require Amazon RDS to take your DB Instance offline are scale compute operations (which generally take only a few minutes from start to finish) or required software patching. Required patching is automatically scheduled only for patches that are security and durability related. Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window.

If you do not specify a preferred weekly maintenance window when creating your DB Instance, a 30-minute default value is assigned. If you wish to modify when maintenance is performed on your behalf, you can do so by modifying your DB Instance in the AWS Management Console or by using the ModifyDBInstance API. Each of your DB Instances can have different preferred maintenance windows, if you so choose.

Running your DB Instance as a Multi-AZ deployment can further reduce the impact of a maintenance event, as Amazon RDS will conduct maintenance via the following steps: 1) perform maintenance on the standby, 2) promote the standby to primary, and 3) perform maintenance on the old primary, which becomes the new standby.

When the Amazon RDS DB Instance deletion API (DeleteDBInstance) is run, the DB Instance is marked for deletion. Once the instance no longer shows 'deleting' status, it has been removed. At this point the instance is no longer accessible and, unless a final snapshot copy was requested, it cannot be restored and will not be listed by any of the tools or APIs.

Event Notification
You can receive notifications of a variety of important events that can occur on your RDS instance, such as whether the instance was shut down, a backup was started, a failover occurred, the security group was changed, or your storage space is low. The Amazon RDS service groups events into categories that you can subscribe to so that you can be notified when an event in that category occurs. You can subscribe to an event category for a DB instance, DB snapshot, DB security group, or DB parameter group. RDS events are published via Amazon SNS and sent to you as an email or text message. For more information about RDS notification event categories, refer to the Amazon Relational Database Service User Guide.
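Both the maintenance window and event subscriptions can be managed through the API. The following boto3 sketch sets a preferred maintenance window and subscribes an SNS topic to failover and low-storage events for one instance; the identifiers, topic ARN, and event category strings shown are illustrative placeholders, so check the RDS documentation for the exact category names.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Choose when RDS may apply patches or scale operations (UTC, format ddd:hh24:mi-ddd:hh24:mi).
    rds.modify_db_instance(
        DBInstanceIdentifier="mydb",
        PreferredMaintenanceWindow="sun:05:00-sun:05:30",
        ApplyImmediately=False,          # apply the change at the next maintenance window
    )

    # Push selected instance events to an existing SNS topic.
    rds.create_event_subscription(
        SubscriptionName="mydb-critical-events",
        SnsTopicArn="arn:aws:sns:us-east-1:111122223333:rds-alerts",
        SourceType="db-instance",
        SourceIds=["mydb"],
        EventCategories=["failover", "low storage"],   # illustrative category names
    )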
Amazon Redshift Security
Amazon Redshift is a petabyte-scale SQL data warehouse service that runs on highly optimized and managed AWS compute and storage resources. The service has been architected to not only scale up or down rapidly, but to significantly improve query speeds, even on extremely large datasets. To increase performance, Redshift uses techniques such as columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. It also has a massively parallel processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources.

When you create a Redshift data warehouse, you provision a single-node or multi-node cluster, specifying the type and number of nodes that will make up the cluster. The node type determines the storage size, memory, and CPU of each node. Each multi-node cluster includes a leader node and two or more compute nodes. A leader node manages connections, parses queries, builds execution plans, and manages query execution in the compute nodes. The compute nodes store data, perform computations, and run queries as directed by the leader node. The leader node of each cluster is accessible through ODBC and JDBC endpoints, using standard PostgreSQL drivers. The compute nodes run on a separate, isolated network and are never accessed directly. After you provision a cluster, you can upload your dataset and perform data analysis queries by using common SQL-based tools and business intelligence applications.

Cluster Access
By default, clusters that you create are closed to everyone. Amazon Redshift enables you to configure firewall rules (security groups) to control network access to your data warehouse cluster. You can also run Redshift inside an Amazon VPC to isolate your data warehouse cluster in your own virtual network and connect it to your existing IT infrastructure using an industry-standard encrypted IPsec VPN.

The AWS account that creates the cluster has full access to the cluster. Within your AWS account, you can use AWS IAM to create user accounts and manage permissions for those accounts. By using IAM, you can grant different users permission to perform only the cluster operations that are necessary for their work.

Like all databases, you must grant permission in Redshift at the database level in addition to granting access at the resource level. Database users are named user accounts that can connect to a database and are authenticated when they log in to Amazon Redshift. In Redshift, you grant database user permissions on a per-cluster basis instead of on a per-table basis. However, a user can see data only in the table rows that were generated by their own activities; rows generated by other users are not visible to them.

The user who creates a database object is its owner. By default, only a superuser or the owner of an object can query, modify, or grant permissions on the object. For users to use an object, you must grant the necessary permissions to the user or the group that contains the user. And only the owner of an object can modify or delete it.

Data Backups
Amazon Redshift distributes your data across all compute nodes in a cluster. When you run a cluster with at least two compute nodes, data on each node will always be mirrored on disks on another node, reducing the risk of data loss. In addition, all data written to a node in your cluster is continuously backed up to Amazon S3 using snapshots. Redshift stores your snapshots for a user-defined period, which can be from one to thirty-five days. You can also take your own snapshots at any time; these snapshots leverage all existing system snapshots and are retained until you explicitly delete them.

Amazon Redshift continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary. All of this happens without any effort on your part, although you may see a slight performance degradation during the re-replication process. You can use any system or user snapshot to restore your cluster using the AWS Management Console or the Amazon Redshift APIs. Your cluster is available as soon as the system metadata has been restored, and you can start running queries while user data is spooled down in the background.

Data Encryption
When creating a cluster, you can choose to encrypt it in order to provide additional protection for your data at rest. When you enable encryption in your cluster, Amazon Redshift stores all data in user-created tables in an encrypted format using hardware-accelerated AES-256 block encryption keys. This includes all data written to disk as well as any backups.

Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key:
• Data encryption keys encrypt data blocks in the cluster. Each data block is assigned a randomly generated AES-256 key. These keys are encrypted by using the database key for the cluster.
• The database key encrypts data encryption keys in the cluster. The database key is a randomly generated AES-256 key. It is stored on disk in a separate network from the Amazon Redshift cluster and passed to the cluster across a secure channel.
• The cluster key encrypts the database key for the Amazon Redshift cluster. You can use either AWS or a hardware security module (HSM) to store the cluster key. HSMs provide direct control of key generation and management, and make key management separate and distinct from the application and the database.
• The master key encrypts the cluster key if it is stored in AWS. The master key encrypts the cluster-key-encrypted database key if the cluster key is stored in an HSM.

You can have Redshift rotate the encryption keys for your encrypted clusters at any time. As part of the rotation process, keys are also updated for all of the cluster's automatic and manual snapshots.

Note: Enabling encryption in your cluster will impact performance, even though it is hardware accelerated. Encryption also applies to backups. When restoring from an encrypted snapshot, the new cluster will be encrypted as well.

To encrypt your table load data files when you upload them to Amazon S3, you can use Amazon S3 server-side encryption. When you load the data from Amazon S3, the COPY command will decrypt the data as it loads the table.
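As a sketch of how these controls come together at provisioning time, the boto3 call below creates a cluster inside a VPC, keeps it off the public Internet, and enables encryption at rest. The subnet group, security group ID, node type, and credentials are placeholders you would replace with values from your own account.

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    redshift.create_cluster(
        ClusterIdentifier="analytics-cluster",
        ClusterType="multi-node",
        NodeType="dc1.large",                         # placeholder node type
        NumberOfNodes=2,
        MasterUsername="admin",
        MasterUserPassword="Str0ngPassw0rd!",
        ClusterSubnetGroupName="analytics-subnets",   # subnet group created beforehand in your VPC
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],
        PubliclyAccessible=False,                     # reachable only from inside the VPC (or over VPN)
        Encrypted=True,                               # hardware-accelerated AES-256 encryption at rest
    )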
Database Audit Logging
Amazon Redshift logs all SQL operations, including connection attempts, queries, and changes to your database. You can access these logs using SQL queries against system tables, or choose to have them downloaded to a secure Amazon S3 bucket. You can then use these audit logs to monitor your cluster for security and troubleshooting purposes.

Automatic Software Patching
Amazon Redshift manages all the work of setting up, operating, and scaling your data warehouse, including provisioning capacity, monitoring the cluster, and applying patches and upgrades to the Amazon Redshift engine. Patches are applied only during specified maintenance windows.

SSL Connections
To protect your data in transit within the AWS cloud, Amazon Redshift uses hardware-accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations. You can encrypt the connection between your client and the cluster by specifying SSL in the parameter group associated with the cluster. To have your clients also authenticate the Redshift server, you can install the public key (.pem file) for the SSL certificate on your client and use the key to connect to your clusters.

Amazon Redshift offers the newer, stronger cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL clients to provide Perfect Forward Secrecy between the client and the Redshift cluster. Perfect Forward Secrecy uses session keys that are ephemeral and not stored anywhere, which prevents the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised. You do not need to configure anything in Amazon Redshift to enable ECDHE; if you connect from a SQL client tool that uses ECDHE to encrypt communication between the client and server, Amazon Redshift will use the provided cipher list to make the appropriate connection.
Amazon ElastiCache Security
Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale distributed in-memory cache environments in the cloud. The service improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. It can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.

The Amazon ElastiCache service automates time-consuming management tasks for in-memory cache environments, such as patch management, failure detection, and recovery. It works in conjunction with other Amazon Web Services (such as Amazon EC2, Amazon CloudWatch, and Amazon SNS) to provide a secure, high-performance, and managed in-memory cache. For example, an application running in Amazon EC2 can securely access an Amazon ElastiCache Cluster in the same region with very low latency.

Using the Amazon ElastiCache service, you create a Cache Cluster, which is a collection of one or more Cache Nodes, each running an instance of the Memcached service. A Cache Node is a fixed-size chunk of secure, network-attached RAM. Each Cache Node runs an instance of the Memcached service and has its own DNS name and port. Multiple types of Cache Nodes are supported, each with varying amounts of associated memory. A Cache Cluster can be set up with a specific number of Cache Nodes and a Cache Parameter Group that controls the properties for each Cache Node. All Cache Nodes within a Cache Cluster are designed to be of the same Node Type and have the same parameter and security group settings.

Amazon ElastiCache allows you to control access to your Cache Clusters using Cache Security Groups. A Cache Security Group acts like a firewall, controlling network access to your Cache Cluster. By default, network access is turned off to your Cache Clusters. If you want your applications to access your Cache Cluster, you must explicitly enable access from hosts in specific EC2 security groups. Once ingress rules are configured, the same rules apply to all Cache Clusters associated with that Cache Security Group.

To allow network access to your Cache Cluster, create a Cache Security Group and use the Authorize Cache Security Group Ingress API or CLI command to authorize the desired EC2 security group (which in turn specifies the EC2 instances allowed). IP-range-based access control is currently not enabled for Cache Clusters. All clients to a Cache Cluster must be within the EC2 network and authorized via Cache Security Groups.
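A minimal boto3 sketch of that authorization flow is shown below; the group names and the EC2 security group owner's account ID are placeholders. Note that Cache Security Groups apply to clusters launched outside a VPC, while clusters launched inside a VPC are controlled with VPC security groups instead.

    import boto3

    elasticache = boto3.client("elasticache", region_name="us-east-1")

    # Create a cache security group, then allow an EC2 security group to reach the cluster.
    elasticache.create_cache_security_group(
        CacheSecurityGroupName="app-cache-sg",
        Description="Access for the web tier",
    )
    elasticache.authorize_cache_security_group_ingress(
        CacheSecurityGroupName="app-cache-sg",
        EC2SecurityGroupName="web-tier-sg",
        EC2SecurityGroupOwnerId="111122223333",   # AWS account that owns the EC2 security group
    )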
ElastiCache for Redis provides backup and restore functionality, where you can create a snapshot of your entire Redis cluster as it exists at a specific point in time. You can schedule automatic, recurring daily snapshots, or you can create a manual snapshot at any time. For automatic snapshots, you specify a retention period; manual snapshots are retained until you delete them. The snapshots are stored in Amazon S3 with high durability and can be used for warm starts, backups, and archiving.

Application Services
Amazon Web Services offers a variety of managed services to use with your applications, including services that provide application streaming, queueing, push notification, email delivery, search, and transcoding.

Amazon CloudSearch Security
Amazon CloudSearch is a managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website. Amazon CloudSearch enables you to search large collections of data such as web pages, document files, forum posts, or product information. It enables you to quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning, setup, and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch automatically scales to meet your needs.

An Amazon CloudSearch domain encapsulates a collection of data you want to search, the search instances that process your search requests, and a configuration that controls how your data is indexed and searched. You create a separate search domain for each collection of data you want to make searchable. For each domain, you configure indexing options that describe the fields you want to include in your index and how you want to use them, text options that define domain-specific stopwords, stems, and synonyms, rank expressions that you can use to customize how search results are ranked, and access policies that control access to the domain's document and search endpoints.

All Amazon CloudSearch configuration requests must be authenticated using standard AWS authentication. Amazon CloudSearch provides separate endpoints for accessing the configuration, search, and document services:
• The configuration service is accessed through a general endpoint: cloudsearch.us-east-1.amazonaws.com
• The document service endpoint is used to submit documents to the domain for indexing and is accessed through a domain-specific endpoint: http://doc-domainname-domainid.us-east-1.cloudsearch.amazonaws.com/
• The search endpoint is used to submit search requests to the domain and is accessed through a domain-specific endpoint: http://search-domainname-domainid.us-east-1.cloudsearch.amazonaws.com

Like all AWS services, Amazon CloudSearch requires that every request made to its control API be authenticated so that only authenticated users can access and manage your CloudSearch domain. API requests are signed with an HMAC-SHA1 or HMAC-SHA256 signature calculated from the request and the user's AWS Secret Access key. Additionally, the Amazon CloudSearch control API is accessible via SSL-encrypted endpoints. You can control access to Amazon CloudSearch management functions by creating users under your AWS Account using AWS IAM, and controlling which CloudSearch operations these users have permission to perform.
Amazon Simple Queue Service (Amazon SQS) Security
Amazon SQS is a highly reliable, scalable message queuing service that enables asynchronous message-based communication between distributed components of an application. The components can be computers, Amazon EC2 instances, or a combination of both. With Amazon SQS, you can send any number of messages to an Amazon SQS queue at any time from any component. The messages can be retrieved from the same component or a different one, right away or at a later time (within 4 days). Messages are highly durable; each message is persistently stored in highly available, highly reliable queues. Multiple processes can read from and write to an Amazon SQS queue at the same time without interfering with each other.

Amazon SQS access is granted based on an AWS Account or a user created with AWS IAM. Once authenticated, the AWS Account has full access to all user operations. An AWS IAM user, however, only has access to the operations and queues for which they have been granted access via policy. By default, access to each individual queue is restricted to the AWS Account that created it. However, you can allow other access to a queue, using either an SQS-generated policy or a policy you write.

Amazon SQS is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SQS is not encrypted by AWS; however, the user can encrypt data before it is uploaded to Amazon SQS, provided that the application utilizing the queue has a means to decrypt the message when retrieved. Encrypting messages before sending them to Amazon SQS helps protect against access to sensitive customer data by unauthorized persons, including AWS.
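One way to implement that client-side encryption pattern is sketched below, using boto3 for the queue operations and a symmetric Fernet key from the Python cryptography package for the payload. The queue URL is a placeholder, and in practice the key would come from your own key management process rather than being generated inline.

    import boto3
    from cryptography.fernet import Fernet  # pip install cryptography

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # placeholder

    sqs = boto3.client("sqs", region_name="us-east-1")
    key = Fernet.generate_key()        # in practice, load this from your own key store
    cipher = Fernet(key)

    # Producer: encrypt before the message ever leaves the application.
    ciphertext = cipher.encrypt(b"order-id=42;card=XXXX")
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=ciphertext.decode())

    # Consumer: retrieve and decrypt with the shared key.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
    for msg in resp.get("Messages", []):
        plaintext = cipher.decrypt(msg["Body"].encode())
        print(plaintext)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])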
Amazon Simple Notification Service (Amazon SNS) Security
Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.

Amazon SNS provides a simple web services interface that can be used to create topics that customers want to notify applications (or people) about, subscribe clients to these topics, publish messages, and have these messages delivered over clients' protocol of choice (i.e., HTTP/HTTPS, email, etc.). Amazon SNS delivers notifications to clients using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates. Amazon SNS can be leveraged to build highly reliable, event-driven workflows and messaging applications without the need for complex middleware and application management. The potential uses for Amazon SNS include monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and many others.

Amazon SNS provides access control mechanisms so that topics and messages are secured against unauthorized access. Topic owners can set policies for a topic that restrict who can publish or subscribe to a topic. Additionally, topic owners can encrypt transmission by specifying that the delivery mechanism must be HTTPS.

Amazon SNS access is granted based on an AWS Account or a user created with AWS IAM. Once authenticated, the AWS Account has full access to all user operations. An AWS IAM user, however, only has access to the operations and topics for which they have been granted access via policy. By default, access to each individual topic is restricted to the AWS Account that created it. However, you can allow other access to SNS, using either an SNS-generated policy or a policy you write.
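The sketch below shows one way those controls might look in boto3: it creates a topic, attaches an access policy that limits publishing to a single trusted account, and subscribes an HTTPS endpoint so that deliveries are encrypted in transit. The account ID and endpoint URL are placeholders.

    import json
    import boto3

    sns = boto3.client("sns", region_name="us-east-1")

    topic_arn = sns.create_topic(Name="deploy-notifications")["TopicArn"]

    # Topic policy: only the named account may publish to this topic.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowPublishFromTrustedAccount",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "SNS:Publish",
            "Resource": topic_arn,
        }],
    }
    sns.set_topic_attributes(TopicArn=topic_arn,
                             AttributeName="Policy",
                             AttributeValue=json.dumps(policy))

    # HTTPS subscription so notifications are delivered over an encrypted channel.
    sns.subscribe(TopicArn=topic_arn, Protocol="https",
                  Endpoint="https://hooks.example.com/sns")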
Amazon Simple Workflow Service (Amazon SWF) Security
The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that coordinate work across distributed components. Using Amazon SWF, you can structure the various processing steps in an application as "tasks" that drive work in distributed applications, and Amazon SWF coordinates these tasks in a reliable and scalable manner. Amazon SWF manages task execution dependencies, scheduling, and concurrency based on a developer's application logic. The service stores tasks, dispatches them to application components, tracks their progress, and keeps their latest state.

Amazon SWF provides simple API calls that can be executed from code written in any language and run on your EC2 instances, or on any of your machines located anywhere in the world that can access the Internet. Amazon SWF acts as a coordination hub with which your application hosts interact. You create desired workflows, with their associated tasks and any conditional logic you wish to apply, and store them with Amazon SWF.

Amazon SWF access is granted based on an AWS Account or a user created with AWS IAM. All actors that participate in the execution of a workflow (deciders, activity workers, and workflow administrators) must be IAM users under the AWS Account that owns the Amazon SWF resources. You cannot grant users associated with other AWS Accounts access to your Amazon SWF workflows. An AWS IAM user, however, only has access to the workflows and resources for which they have been granted access via policy.

Amazon Simple Email Service (Amazon SES) Security
Amazon Simple Email Service (SES), built on Amazon's reliable and scalable infrastructure, is a mail service that can both send and receive mail on behalf of your domain. Amazon SES helps you maximize email deliverability and stay informed of the delivery status of your emails. Amazon SES integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2.

Unfortunately, with other email systems it's possible for a spammer to falsify an email header and spoof the originating email address so that it appears as though the email originated from a different source. To mitigate these problems, Amazon SES requires users to verify their email address or domain in order to confirm that they own it and to prevent others from using it. To verify a domain, Amazon SES requires the sender to publish a DNS record that Amazon SES supplies as proof of control over the domain. Amazon SES periodically reviews domain verification status, and revokes verification in cases where it is no longer valid.

Amazon SES takes proactive steps to prevent questionable content from being sent, so that ISPs receive consistently high-quality email from our domains and therefore view Amazon SES as a trusted email origin. Below are some of the features that maximize deliverability and dependability for all of our senders:
• Amazon SES uses content-filtering technologies to help detect and block messages containing viruses or malware before they can be sent.
• Amazon SES maintains complaint feedback loops with major ISPs. Complaint feedback loops indicate which emails a recipient marked as spam. Amazon SES provides you access to these delivery metrics to help guide your sending strategy.
• Amazon SES uses a variety of techniques to measure the quality of each user's sending. These mechanisms help identify and disable attempts to use Amazon SES for unsolicited mail, and detect other sending patterns that would harm Amazon SES's reputation with ISPs, mailbox providers, and anti-spam services.
• Amazon SES supports authentication mechanisms such as Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). When you authenticate an email, you provide evidence to ISPs that you own the domain. Amazon SES makes it easy for you to authenticate your emails. If you configure your account to use Easy DKIM, Amazon SES will DKIM-sign your emails on your behalf, so you can focus on other aspects of your email-sending strategy. To ensure optimal deliverability, we recommend that you authenticate your emails.

As with other AWS services, you use security credentials to verify who you are and whether you have permission to interact with Amazon SES. For information about which credentials to use, see Using Credentials with Amazon SES. Amazon SES also integrates with AWS IAM so that you can specify which Amazon SES API actions a user can perform.

If you choose to communicate with Amazon SES through its SMTP interface, you are required to encrypt your connection using TLS. Amazon SES supports two mechanisms for establishing a TLS-encrypted connection: STARTTLS and TLS Wrapper. If you choose to communicate with Amazon SES over HTTP, then all communication will be protected by TLS through Amazon SES's HTTPS endpoint. When delivering email to its final destination, Amazon SES encrypts the email content with opportunistic TLS, if supported by the receiver.
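To illustrate the STARTTLS path, the following sketch sends a message through the SES SMTP interface using Python's standard smtplib. The SMTP endpoint shown is an example for one region, and the SMTP credentials (which are distinct from your AWS access keys) and addresses are placeholders; the From address or domain must already be verified.

    import smtplib
    from email.mime.text import MIMEText

    SMTP_HOST = "email-smtp.us-east-1.amazonaws.com"   # example endpoint; use your region's endpoint
    SMTP_PORT = 587
    SMTP_USER = "SES_SMTP_USERNAME"                    # SES SMTP credentials, not AWS access keys
    SMTP_PASS = "SES_SMTP_PASSWORD"

    msg = MIMEText("Deployment completed successfully.")
    msg["Subject"] = "Deployment status"
    msg["From"] = "alerts@example.com"                 # must be a verified address or domain
    msg["To"] = "ops@example.com"

    server = smtplib.SMTP(SMTP_HOST, SMTP_PORT)
    server.starttls()                                  # upgrade the connection to TLS before authenticating
    server.login(SMTP_USER, SMTP_PASS)
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    server.quit()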
the AMI required to host your application and makes your application available ArchivedAmazon Web Services Amazon Web Se rvices: Overview of Security Processes Page 73 to streaming clients The service scales your application as needed within the capacity limits you have set to meet demand Clients using the Amazon AppStream 20 SDK automatically connect to your streamed application In most cases you’ll want to ensure that the user running the client is authorized to use your a pplication before letting him obtain a session ID We recommend that you use some sort of entitlement service which is a service that authenticates clients and authorizes their connection to your application In this case the entitlement service will also call into the Amazon AppStream 20 REST API to create a new streaming session for the client After the entitlement service creates a new session it returns the session identifier to the authorized client as a single use entitlement URL The client then uses the entitlement URL to connect to the application Your entitlement service can be hosted on an Amazon EC2 instance or on AWS Elastic Beanstalk Amazon AppStream 20 utilizes an AWS CloudForm ation template that automates the process of deploying a GPU EC2 instance that has the AppStream 20 Windows Application and Windows Client SDK libraries installed; is configured for SSH RDC or VPN access; and has an elastic IP address assigned to it By using this template to deploy your standalone streaming server all you need to do is upload your application to the server and run the command to launch it You can then use the Amazon AppStream 20 Service Simulator tool to test your application in stan dalone mode before deploying it into production Amazon AppStream 20 also utilizes the STX Protocol to manage the streaming of your application from AWS to local devices The Amazon AppStream 20 STX Protocol is a proprietary protocol used to stream high quality application video over varying network conditions; it monitors network conditions and automatically adapts the video stream to provide a low latency and high resolution experience to your customers It minimizes latency while syncing audio and vid eo as well as capturing input from your customers to be sent back to the application running in AWS Analytics Services Amazon Web Services provides cloud based analytics services to help you process and analyze any volume of data whether your need is for managed Hadoop clusters real time streaming data petabyte scale data warehousing or orchestration ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 74 Amazon EMR Security Amazon EMR is a managed web service you can use to run Hadoop clusters that process vast amounts of data by distributing the work and data among several servers It utilizes an enhanced version of the Apache Hadoop framework running on the web scale infrastructure of Amazon EC2 and Amazon S3 You simply upload your input data and a data processing application into Amazon S3 Amazon EMR then launches the number of Amazon EC2 instances you specify The service begins the job flow execution while pulling the input data from Amazon S3 into the launched Amazon EC2 instances Once the job flow is finished Amazon EMR transfers the output data to Amazon S3 where you can then retrieve it or use it as input in another job flow When launching job flows on your behalf Amazon EMR sets up two Amazon EC2 security groups: one for the master nodes and another for the slaves The master security group has a port 
open for communication with the service It also has the SSH port open to allow you to SSH into the instances using the key specified at startup The slaves start in a separate security group which only allows interaction with the master insta nce By default both security groups are set up to not allow access from external sources including Amazon EC2 instances belonging to other customers Since these are security groups within your account you can reconfigure them using the standard EC2 to ols or dashboard To protect customer input and output datasets Amazon EMR transfers data to and from Amazon S3 using SSL Amazon EMR provides several ways to control access to the resources of your cluster You can use AWS IAM to create user accounts and roles and configure permissions that control which AWS features those users and roles can access When you launch a cluster you can associate an Amazon EC2 key pair with the cluster which you can then use when you connect to the cluster using SSH You c an also set permissions that allow users other than the default Hadoop user to submit jobs to your cluster By default if an IAM user launches a cluster that cluster is hidden from other IAM users on the AWS account This filtering occurs on all Amazon E MR interfaces —the console CLI API and SDKs —and helps prevent IAM users from accessing and inadvertently changing clusters created by other IAM users It is useful for clusters that are intended to be viewed by only a single IAM user and the main AWS acc ount You also have the option to make a cluster visible and accessible to all IAM users under a single AWS account For an additional layer of protection you can launch the EC2 instances of your EMR cluster into an Amazon VPC which is like launching it into a private subnet This allows ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 75 you to control access to the entire subnetwork You can also launch the cluster into a VPC and enable the cluster to access resources on your internal network using a VPN connection You can encrypt the input data before you upload it to Amazon S3 using any common data encryption tool If you do encrypt the data before it’s uploaded you then need to add a decryption step to the beginning of your job flow when Amazon Elastic MapReduce fetches the data from Amazon S3 Amazo n Kinesis Security Amazon Kinesis is a managed service designed to handle real time streaming of big data It can accept any amount of data from any number of sources scaling up and down as needed You can use Kinesis in situations that call for large scale real time data ingestion and processing such as server logs social media or market data feeds and web clickstream data Applications read and write data records to Amazon Kinesis in streams You can create any number of Kinesis streams to capture store and transport data Amazon Kinesis automatically manages the infrastructure storage networking and configuration needed to collect and process your data at the level of throughput your streaming applications need You don’t have to worry about pr ovisioning deployment or ongoing maintenance of hardware software or other services to enable real time capture and storage of large scale data Amazon Kinesis also synchronously replicates data across three facilities in an AWS Region providing high availability and data durability In Amazon Kinesis data records contain a sequence number a partition key and a data blob which is an un interpreted immutable sequence of bytes The Amazon Kinesis 
service does not inspect interpret or change the da ta in the blob in any way Data records are accessible for only 24 hours from the time they are added to an Amazon Kinesis stream and then they are automatically discarded Your application is a consumer of an Amazon Kinesis stream which typically runs o n a fleet of Amazon EC2 instances A Kinesis application uses the Amazon Kinesis Client Library to read from the Amazon Kinesis stream The Kinesis Client Library takes care of a variety of details for you including failover recovery and load balancing allowing your application to focus on processing the data as it becomes available After processing the record your consumer code can pass it along to another Kinesis stream; write it to an Amazon S3 bucket a Redshift data warehouse or a DynamoDB table; or simply discard it A connector library is available to help you integrate Kinesis with other ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 76 AWS services (such as DynamoDB Redshift and Amazon S3) as well as third party products like Apache Storm You can control logical access to Kinesis resources and management functions by creating users under your AWS Account using AWS IAM and controlling which Kinesis operations these users have permission to perform To facilitate running your producer or consumer applications on an Amazon EC2 instance you c an configure that instance with an IAM role That way AWS credentials that reflect the permissions associated with the IAM role are made available to applications on the instance which means you don’t have to use your long term AWS security credentials Roles have the added benefit of providing temporary credentials that expire within a short timeframe which adds an additional measure of protection See the AWS Ident ity and Access Management User Guide for more information about IAM roles The Amazon Kinesis API is only accessible via an SSL encrypted endpoint (kinesisus east1amazonawscom) to help ensure secure transmission of your data to AWS You must connect to that endpoint to access Kinesis but you can then use the API to direct AWS Kinesis to create a stream in any AWS Region AWS Data Pipeline Security The AWS Data Pipeline service helps you process and move data between different data sources at specified intervals using data driven workflows and built in dependency checking When you create a pipeline you define data sources preconditions destinations processing steps and an operational schedule Once you define and activate a pip eline it will run automatically according to the schedule you specified With AWS Data Pipeline you don’t have to worry about checking resource availability managing inter task dependencies retrying transient failures/timeouts in individual tasks or c reating a failure notification system AWS Data Pipeline takes care of launching the AWS services and resources your pipeline needs to process your data (eg Amazon EC2 or EMR) and transferring the results to storage (eg Amazon S3 RDS DynamoDB or E MR) When you use the console AWS Data Pipeline creates the necessary IAM roles and policies including a trusted entities list for you IAM roles determine what your pipeline can access and the actions it can perform Additionally when your pipeline cre ates a resource such as an EC2 instance IAM roles determine the EC2 instance's permitted resources and actions When you create a pipeline you specify one IAM role that governs your pipeline and another IAM role to govern your pipeline's resources (refe 
rred to as a "resource role") which can be the same role for both As part of the security best ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 77 practice of least privilege we recommend that you consider the minimum permissions necessary for your pipeline to perform work and define the IAM roles accord ingly Like most AWS services AWS Data Pipeline also provides the option of secure (HTTPS) endpoints for access via SSL Deployment and Management Services Amazon Web Services provides a variety of tools to help with the deployment and management of your applications This includes services that allow you to create individual user accounts with credentials for access to AWS services It also includes services for creating and updating stacks of AWS resources deploying applications on those resources and monitoring the health of those AWS resources Other tools help you manage cryptographic keys using hardware security modules (HSMs) and log AWS API activity for security and compliance purposes AWS Identity and Access Management (IAM) IAM allows you to create multiple users and manage the permissions for each of these users within your AWS Account A user is an identity (within an AWS Account) with unique security credentials that can be used to access AWS Service s IAM eliminates the need to share passwords or keys and makes it easy to enable or disable a user’s access as appropriate IAM enables you to implement security best practices such as least privilege by granting unique credentials to every user within your AWS Account and only granting permission to access the AWS services and resources required for the users to perform their jobs IAM is secure by default; new users have no access to AWS until permissions are explicitly granted IAM is also integrated with the AWS Marketplace so that you can control who in your organization can subscribe to the software and services offered in the Marketplace Since subscribing to certain software in the Marketplace launches an EC2 instance to run the software this i s an important access control feature Using IAM to control access to the AWS Marketplace also enables AWS Account owners to have fine grained control over usage and software costs IAM enables you to minimize the use of your AWS Account credentials Once you create IAM user accounts all interactions with AWS Services and resources should occur with IAM user security credentials ArchivedAmazon Web Services Amazon Web Serv ices: Overview of Security Processes Page 78 Roles An IAM role uses temporary security credentials to allow you to delegate access to users or services that normally don't have access to your AWS resources A role is a set of permissions to access specific AWS resources but these permissions are not tied to a specific IAM user or group An authorized entity (eg mobile user EC2 instance) assumes a role and receives tempo rary security credentials for authenticating to the resources defined in the role Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reused after the y expire This can be particularly useful in providing limited controlled access in certain situations: • Federated (non AWS) User Access Federated users are users (or applications) who do not have AWS Accounts With roles you can give them access to your AWS resources for a limited amount of time This is useful if you have non AWS users that you can authenticate with an external 
service such as Microsoft Active Directory LDAP or Kerberos The temporary AWS credentials used with the roles provide ident ity federation between AWS and your non AWS users in your corporate identity and authorization system If your organization supports SAML 20 (Security Assertion Markup Language 20) you can create trust between your organization as an identity provider ( IdP) and other organizations as service providers In AWS you can configure AWS as the service provider and use SAML to provide your users with federated single sign on (SSO) to the AWS Management Console or to get federated access to call AWS APIs Roles are also useful if you create a mobile or web based application that accesses AWS resources AWS resources require security credentials for programmatic requests; however you shouldn't embed long term security credentials in your application because they are accessible to the application's users and can be difficult to rotate Instead you can let users sign in to your application using Login with Amazon Facebook or Google and then use their authentication information to assume a role and get temporary security credentials ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 79 • Cross Account Access For organizations who use multiple AWS Accounts to manage their resources you can set up roles to provide users who have permissions in one account to access resources under another account For organizations w ho have personnel who only rarely need access to resources under another account using roles helps ensures that credentials are provided temporarily only as needed • Applications Running on EC2 Instances that Need to Access AWS Resources If an applicatio n runs on an Amazon EC2 instance and needs to make requests for AWS resources such as Amazon S3 buckets or a DynamoDB table it must have security credentials Using roles instead of creating individual IAM accounts for each application on each instance ca n save significant time for customers who manage a large number of instances or an elastically scaling fleet using AWS Auto Scaling The temporary credentials include a security token an Access Key ID and a Secret Access Key To give a user access to cer tain resources you distribute the temporary security credentials to the user you are granting temporary access to When the user makes calls to your resources the user passes in the token and Access Key ID and signs the request with the Secret Access Ke y The token will not work with different access keys How the user passes in the token depends on the API and version of the AWS product the user is making calls to For more information about temporary security credentials see AWS Security Token Service API Reference The use of temporary credentials means additional protection for you because you don’t have to manage or distribute long term credentials to temporary users I n addition the temporary credentials get automatically loaded to the target instance so you don’t have to embed them somewhere unsafe like your code Temporary credentials are automatically rotated or changed multiple times a day without any action on you r part and are stored securely by default For m ore information about using IAM roles to auto provision keys on EC2 instances see the AWS Identity and Access Management Documentation Amazon CloudWatch Security Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources starting with Amazon EC2 It provides customers with visibility into resource 
utilization operational performance and overall demand patterns —includi ng metrics ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 80 such as CPU utilization disk reads and writes and network traffic You can set up CloudWatch alarms to notify you if certain thresholds are crossed or to take other automated actions such as adding or removing EC2 instances if Auto Scaling is enabled CloudWatch captures and summarizes utilization metrics natively for AWS resources but you can also have other logs sent to CloudWatch to monitor You can route your guest OS application and custom log files for the software installed on your E C2 instances to CloudWatch where they will be stored in durable fashion for as long as you'd like You can configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics You could for example monitor your web server's log files for 404 errors to detect bad inbound links or invalid user messages to detect unauthorized login attempts to your guest OS Like all AWS Services Amazon CloudWatch requires that every request made to its control API be authenticated so only authenticated users can access and manage CloudWatch Requests are signed with an HMAC SHA1 signature calculated from the request and the user’s private key Additionally the Amazon CloudWatch control API is only a ccessible via SSL encrypted endpoints You can further control access to Amazon CloudWatch by creating users under your AWS Account using AWS IAM and controlling what CloudWatch operations these users have permission to call AWS CloudHSM Security The AW S CloudHSM service provides customers with dedicated access to a hardware security module (HSM) appliance designed to provide secure cryptographic key storage and operations within an intrusion resistant tamper evident device You can generate store an d manage the cryptographic keys used for data encryption so that they are accessible only by you AWS CloudHSM appliances are designed to securely store and process cryptographic key material for a wide variety of uses such as database encryption Digital Rights Management (DRM) Public Key Infrastructure (PKI) authentication and authorization document signing and transaction processing They support some of the strongest cryptographic algorithms available including AES RSA and ECC and many others The AWS CloudHSM service is designed to be used with Amazon EC2 and VPC providing the appliance with its own private IP within a private subnet You can connect to CloudHSM appliances from your EC2 servers through SSL/TLS which uses two way ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 81 digital certif icate authentication and 256 bit SSL encryption to provide a secure communication channel Selecting CloudHSM service in the same region as your EC2 instance decreases network latency which can improve your application performance You can configure a client on your EC2 instance that allows your applications to use the APIs provided by the HSM including PKCS#11 MS CAPI and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions) Before you begin using an HSM you must set up at least o ne partition on the appliance A cryptographic partition is a logical and physical security boundary that restricts access to your keys so only you control your keys and the operations performed by the HSM AWS has administrative credentials to the applia nce but these 
credentials can only be used to manage the appliance not the HSM partitions on the appliance AWS uses these credentials to monitor and maintain the health and availability of the appliance AWS cannot extract your keys nor can AWS cause th e appliance to perform any cryptographic operation using your keys The HSM appliance has both physical and logical tamper detection and response mechanisms that erase the cryptographic key material and generate event logs if tampering is detected The HSM is designed to detect tampering if the physical barrier of the HSM appliance is breached In addition after three unsuccessful attempts to access an HSM partition with HSM Admin credentials the HSM appliance erases its HSM partitions When your CloudHSM subscription ends and you have confirmed that the contents of the HSM are no longer needed you must delete each partition and its contents as well as any logs As part of the decommissioning process AWS zeroizes the appliance permanently erasing all ke y material AWS CloudTrail Security AWS CloudTrail provides a log of user and system actions affecting AWS resources within your account For each event recorded you can see what service was accessed what action was performed any parameters for the acti on and who made the request For mutating actions you can see the result of the action Not only can you see which one of your users or services performed an action on an AWS service but you can see whether it was as the AWS root account user or an IAM user or whether it was with temporary security credentials for a role or federated user ArchivedAmazon Web Services Amazon Web Services: Overview of Security Processes Page 82 CloudTrail captures information about API calls to an AWS resource whether that call was made from the AWS Management Console CLI or an SDK If the API request returned an error CloudTrail provides the description of the error including messages for authorization failures It even captures AWS Management Console sign in events creating a log record every time an AWS account owner a federated user or an IAM user simply signs into the console Once you have enabled CloudTrail event logs are delivered about every 5 minutes to the Amazon S3 bucket of your choice The log files are organized by AWS Account ID region service name date and time You can configure CloudTrail so that it aggregates log files from multiple regions and/or accounts into a single Amazon S3 bucket By default a single trail will record and deliver events in all current and future regions In addition to S3 you can send events to CloudWatch Logs for custom metrics and alarming or you can upload the logs to your favorite log management and analysis solutions to perform security analysis and detect user behavior patterns For rapid response you can create CloudWatch Events rules to take immediate action to specific events By default log files are stored indefinitely The log files are automatically encrypted using Amazon S3's Server Side Encryption and will remain in the bucket until you choose to delete or archive them For even more security you can use KMS to encrypt the log files using a key that you own You can use Amazon S3 lifecycle configuration rules to automatically delete old log files or archive them to Amazon S3 Glacier for additional longevity at significant savings By enabling the optional log file validation you can validate that logs have not been added deleted or tampered with Like every other AWS service you can limit access to CloudTrail to only certain users You can 
use IAM to control which AWS users can create configure or delete AWS CloudTrail trails as well as which users can start and stop logging You can control access to the log files by applying I AM or Amazon S3 bucket policies You can also add an additional layer of security by enabling MFA Delete on your Amazon S3 bucket Mobile Services AWS mobile services make it easier for you to build ship run monitor optimize and scale cloud powered applications for mobile devices These services also help you authenticate users to your mobile application synchronize data and collect and analyze application usage ArchivedAmazon Web Services Amazon Web Servic es: Overview of Security Processes Page 83 Amazon Cognito Amazon Cognito provides identity and sync services for mobile and web based applications It simplifies the task of authent icating users and storing managing and syncing their data across multiple devices platforms and applications It provides temporary limited privilege credentials for both authenticated and unauthenticated users without having to manage any backend inf rastructure Amazon Cognito works with well known identity providers like Google Facebook and Amazon to authenticate end users of your mobile and web applications You can take advantage of the identification and authorization features provided by these services instead of having to build and maintain your own Your application authenticates with one of these identity providers using the provider’s SDK Once the end user is authenticated with the provider an OAuth or OpenID Connect token returned from th e provider is passed by your application to Cognito which returns a new Amazon Cognito ID for the user and a set of temporary limited privilege AWS credentials To begin using Amazon Cognito you create an identity pool through the Amazon Cognito console The identity pool is a store of user identity information that is specific to your AWS account During the creation of the identity pool you will be asked to create a new IAM role or pic k an existing one for your end users An IAM role is a set of permissions to access specific AWS resources but these permissions are not tied to a specific IAM user or group An authorized entity (eg mobile user EC2 instance) assumes a role and receiv es temporary security credentials for authenticating to the AWS resources defined in the role Temporary security credentials provide enhanced security due to their short life span (the default expiration is 12 hours) and the fact that they cannot be reuse d after they expire The role you select has an impact on which AWS services your end users will be able to access with the temporary credentials By default Amazon Cognito creates a new role with limited permissions – end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics If your application needs access to other AWS resources such as Amazon S3 or DynamoDB you can modify your roles directly from the IAM management console With Amazon Cognito there’s no need to create individual AWS accounts or even IAM accounts for every one of your web/mobile app’s end users who will need to access your AWS resources In conjunction with IAM roles mobile users can securely access AWS resources and application features and even save data to the AWS cloud without having to create an account or log in However if they choose to do this later Amazon Cognito merge s data and identification information Because Amazon Cognito stores data locally as well as in the service your ArchivedAmazon Web 
Because Amazon Cognito stores data locally as well as in the service, your end users can continue to interact with their data even when they are offline. Their offline data may be stale, but anything they put into the dataset they can immediately retrieve, whether they are online or not. The client SDK manages a local SQLite store so that the application can work even when it is not connected. The SQLite store functions as a cache and is the target of all read and write operations. Cognito's sync facility compares the local version of the data to the cloud version and pushes up or pulls down deltas as needed. Note that in order to sync data across devices, your identity pool must support authenticated identities. Unauthenticated identities are tied to the device, so unless an end user authenticates, no data can be synced across multiple devices.

With Amazon Cognito, your application communicates directly with a supported public identity provider (Amazon, Facebook, or Google) to authenticate users. Amazon Cognito does not receive or store user credentials, only the OAuth or OpenID Connect token received from the identity provider. Once Amazon Cognito receives the token, it returns a new Amazon Cognito ID for the user and a set of temporary, limited-privilege AWS credentials.

Each Amazon Cognito identity has access only to its own data in the sync store, and this data is encrypted when stored. In addition, all identity data is transmitted over HTTPS. The unique Amazon Cognito identifier on the device is stored in the appropriate secure location; on iOS, for example, the Amazon Cognito identifier is stored in the iOS keychain. User data is cached in a local SQLite database within the application's sandbox; if you require additional security, you can encrypt this identity data in the local cache by implementing encryption in your application.
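For a back-end view of the sync store, the following sketch (again using Boto3, with placeholder pool, identity, and dataset identifiers) lists the records and sync counts that the client SDK reconciles against its local SQLite cache.

```python
import boto3

# Placeholders: the identity pool ID, identity ID, and dataset name below
# are illustrative only.
sync = boto3.client("cognito-sync", region_name="us-east-1")

# List the records in one dataset for a given identity. Each record carries
# a key, a value, and a per-record sync count used for conflict detection.
response = sync.list_records(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",
    IdentityId="us-east-1:11111111-1111-1111-1111-111111111111",
    DatasetName="preferences",
)

for record in response["Records"]:
    print(record["Key"], record.get("Value"), record["SyncCount"])

# The dataset-level sync count tells a client whether its cached copy is
# behind the copy held in the Amazon Cognito sync store.
print("Dataset sync count:", response["DatasetSyncCount"])
```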
Amazon Mobile Analytics

Amazon Mobile Analytics is a service for collecting, visualizing, and understanding mobile application usage data. It enables you to track customer behaviors, aggregate metrics, and identify meaningful patterns in your mobile applications. Amazon Mobile Analytics automatically calculates and updates usage metrics as the data is received from client devices running your app and displays the data in the console.

You can integrate Amazon Mobile Analytics with your application without requiring users of your app to be authenticated with an identity provider (like Google, Facebook, or Amazon). For these unauthenticated users, Mobile Analytics works with Amazon Cognito to provide temporary, limited-privilege credentials. To do this, you first create an identity pool in Amazon Cognito. The identity pool uses IAM roles, which are sets of permissions not tied to a specific IAM user or group but which allow an entity to access specific AWS resources. The entity assumes a role and receives temporary security credentials for authenticating to the AWS resources defined in the role. By default, Amazon Cognito creates a new role with limited permissions; end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics. If your application needs access to other AWS resources, such as Amazon S3 or DynamoDB, you can modify your roles directly from the IAM management console. You can integrate the AWS Mobile SDK for Android or iOS into your application, or use the Amazon Mobile Analytics REST API to send events from any connected device or service, and visualize the data in the reports. The Amazon Mobile Analytics API is only accessible via an SSL-encrypted endpoint (https://mobileanalytics.us-east-1.amazonaws.com).

Applications

AWS applications are managed services that enable you to provide your users with secure, centralized storage and work areas in the cloud.

Amazon WorkSpaces

Amazon WorkSpaces is a managed desktop service that allows you to quickly provision cloud-based desktops for your users. Simply choose a Windows 7 bundle that best meets the needs of your users and the number of WorkSpaces that you would like to launch. Once the WorkSpaces are ready, users receive an email informing them where they can download the relevant client and log into their WorkSpace. They can then access their cloud-based desktops from a variety of endpoint devices, including PCs, laptops, and mobile devices. However, your organization's data is never sent to or stored on the end-user device, because Amazon WorkSpaces uses PC-over-IP (PCoIP), which provides an interactive video stream without transmitting actual data. The PCoIP protocol compresses, encrypts, and encodes the user's desktop computing experience and transmits "pixels only" across any standard IP network to end-user devices.

In order to access their WorkSpace, users must sign in using a set of unique credentials or their regular Active Directory credentials. When you integrate Amazon WorkSpaces with your corporate Active Directory, each WorkSpace joins your Active Directory domain and can be managed just like any other desktop in your organization. This means that you can use Active Directory Group Policies to manage your users' WorkSpaces and specify configuration options that control the desktop. If you choose not to use Active Directory or another type of on-premises directory to manage your user WorkSpaces, you can create a private cloud directory within Amazon WorkSpaces that you can use for administration.

To provide an additional layer of security, you can also require the use of multi-factor authentication upon sign-in, in the form of a hardware or software token. Amazon WorkSpaces supports MFA using an on-premises Remote Authentication Dial-In User Service (RADIUS) server or any security provider that supports RADIUS authentication. It currently supports the PAP, CHAP, MS-CHAP1, and MS-CHAP2 protocols, along with RADIUS proxies.

Each WorkSpace resides on its own EC2 instance within a VPC. You can create WorkSpaces in a VPC you already own, or have the WorkSpaces service create one for you automatically using the WorkSpaces Quick Start option. When you use the Quick Start option, WorkSpaces not only creates the VPC but also performs several other provisioning and configuration tasks for you, such as creating an Internet Gateway for the VPC, setting up a directory within the VPC that is used to store user and WorkSpace information, creating a directory administrator account, creating the specified user accounts and adding them to the directory, and creating the WorkSpace instances. Alternatively, the VPC can be connected to an on-premises network using a secure VPN connection to allow access to an existing on-premises Active Directory and other intranet resources. You can add a security group that you create in your Amazon VPC to all the WorkSpaces that belong to your directory. This allows you to control network access from Amazon WorkSpaces in your VPC to other resources in your Amazon VPC and on-premises network.
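As one hypothetical example of that kind of network control, the sketch below creates a security group whose only outbound rule allows HTTPS to an internal address range. The VPC ID and CIDR block are placeholders, and you would still attach the group to your WorkSpaces directory through the WorkSpaces console or API.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder VPC ID: substitute the VPC that hosts your WorkSpaces.
sg = ec2.create_security_group(
    GroupName="workspaces-restricted-egress",
    Description="Limit WorkSpaces traffic to an internal application",
    VpcId="vpc-0123456789abcdef0",
)
group_id = sg["GroupId"]

# Remove the default rule that allows all outbound traffic.
ec2.revoke_security_group_egress(
    GroupId=group_id,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Allow outbound HTTPS only to a placeholder internal subnet.
ec2.authorize_security_group_egress(
    GroupId=group_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.20.0.0/16"}],
        }
    ],
)
```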
Persistent storage for WorkSpaces is provided by Amazon EBS and is automatically backed up twice a day to Amazon S3. If WorkSpaces Sync is enabled on a WorkSpace, the folder a user chooses to sync is continuously backed up and stored in Amazon S3. You can also use WorkSpaces Sync on a Mac or PC to sync documents to or from your WorkSpace so that you always have access to your data regardless of the desktop computer you are using.

Because it is a managed service, AWS takes care of several security and maintenance tasks like daily backups and patching. Updates are delivered automatically to your WorkSpaces during a weekly maintenance window. You can control how patching is configured for a user's WorkSpace. By default, Windows Update is turned on, but you have the ability to customize these settings or use an alternative patch management approach if you desire. For the underlying OS, Windows Update is enabled by default on WorkSpaces and configured to install updates on a weekly basis. You can use an alternative patching approach or configure Windows Update to perform updates at a time of your choosing.

You can use IAM to control who on your team can perform administrative functions like creating or deleting WorkSpaces or setting up user directories. You can also set up a WorkSpace for directory administration, install your favorite Active Directory administration tools, and create organizational units and Group Policies in order to more easily apply Active Directory changes for all your WorkSpaces users.

Amazon WorkDocs

Amazon WorkDocs is a managed enterprise storage and sharing service with feedback capabilities for user collaboration. Users can store any type of file in a WorkDocs folder and allow others to view and download them. Commenting and annotation capabilities work on certain file types, such as MS Word, without requiring the application that was used to originally create the file. WorkDocs notifies contributors about review activities and deadlines via email and performs versioning of files that you have synced using the WorkDocs Sync application.

User information is stored in an Active Directory-compatible network directory. You can either create a new directory in the cloud or connect Amazon WorkDocs to your on-premises directory. When you create a cloud directory using the WorkDocs quick start setup, it also creates a directory administrator account with the administrator email as the username. An email is sent to your administrator with instructions to complete registration. The administrator then uses this account to manage your directory.

When you create a cloud directory using the WorkDocs quick start setup, it also creates and configures a VPC for use with the directory. If you need more control over the directory configuration, you can choose the standard setup, which allows you to specify your own directory domain name as well as one of your existing VPCs to use with the directory. If you want to use one of your existing VPCs, the VPC must have an Internet gateway and at least two subnets, and each of the subnets must be in a different Availability Zone.

Using the Amazon WorkDocs Management Console, administrators can view audit logs to track file and user activity by time, IP address, and device, and choose whether to allow users to share files with others outside their organization. Users can then control who can access individual files and disable downloads of files they share.
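The same activity data surfaced in the console can also be retrieved programmatically. The sketch below is a minimal example using Boto3, assuming a placeholder directory (organization) ID and an IAM principal with WorkDocs administrative permissions.

```python
import boto3
from datetime import datetime, timedelta, timezone

workdocs = boto3.client("workdocs")

# Placeholder directory (organization) ID; the caller must be an IAM
# principal with WorkDocs administrative permissions.
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

# Pull one week of file and user activity for the site.
response = workdocs.describe_activities(
    OrganizationId="d-0123456789",
    StartTime=start,
    EndTime=end,
    Limit=50,
)

for activity in response["UserActivities"]:
    print(activity.get("TimeStamp"), activity.get("Type"))
```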
All data in transit is encrypted using industry-standard SSL. The WorkDocs web and mobile applications and desktop sync clients transmit files directly to Amazon WorkDocs using SSL. WorkDocs users can also utilize multi-factor authentication (MFA) if their organization has deployed a RADIUS server. MFA uses the following factors: username, password, and methods supported by the RADIUS server. The protocols supported are PAP, CHAP, MS-CHAPv1, and MS-CHAPv2.

You choose the AWS Region where each WorkDocs site's files are stored. Amazon WorkDocs is currently available in the US East (Virginia), US West (Oregon), and EU (Ireland) AWS Regions. All files, comments, and annotations stored in WorkDocs are automatically encrypted with AES-256 encryption.

Document Revisions

March 2020: Updated compliance certifications, hypervisor, AWS Snowball
February 2019: Added information about deleting objects in Amazon S3 Glacier
December 2018: Edit made to the Amazon Redshift Security topic
May 2017: Added section on AWS Config Security Checks
April 2017: Added section on Amazon Elastic File System
March 2017: Migrated into new format
January 2017: Updated regions
|
General
|
consultant
|
Best Practices
|
AWS_Response_to_CACP_Information_and_Communication_Technology_SubCommittee
|
Amazon Web Services May 2017 Page 1 of 38 AWS Response to CACP Information and Communication Technology Sub Committee Offsite Data Storage and Processing Best Practices May 2017 Amazon Web Services May 2017 Page 2 of 38 © 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are s ubject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether ex press or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Amazon Web Services May 2017 Page 3 of 38 Contents Introduction 4 CACP Requirements 5 Vendor Requirements 6 Information Security Requirements 18 Data Centre Security Requirements 27 Personnel Security Requirements 32 Access Control Requirements 34 Document Revisions 38 Amazon Web Services May 2017 Page 4 of 38 Introduction This document provide s information that Canadian police agencies can use to help determine how AWS services support their requirements and how to integrate AWS into the existing control framework that supports their IT environment For more information about compliance on AWS see AWS Risk and Compliance Overview (https://d0awsstaticcom/whitepapers/compliance/A WS_Risk_and_Compliance_Overviewpdf ) The tables listed in CACP Requirements below address the requirements listed in the Canadian Association of Chiefs of Police (CACP) Information and Communication Technology Sub Com mittee’s Offsite Data Storage and Processing Best Practices Further supporting details on AWS’s alignment with the CACP Sub Committee’s best practices can be requested subject to a non disclosure agreement with AWS Please contact your AWS account representative Amazon Web Services May 2017 Page 5 of 38 CACP Requirements The following tables describe how AWS aligns with the CACP information storage requirements Protected A and Protected B refer to security levels that the Canadian government has defined for sensitive government informa tion and assets Unauthorized access to Protected A information could lead to “Injury to an individual organization or government” Unauthorized access to Protected B information could lead to “ Serious injury to an individual organization or government” Values in Protected A and Protected B are set to the following possible states: • M – Mandatory • H – Highly Desirable • D – Desirable Amazon Web Services May 2017 Page 6 of 38 Vendor Requirements Requirement Protected A Protected B Reference AWS Responsibility 24x7 managed tier 1 and tier 2 support M M CJIS AWS provides a variety of options for 24x7 tier 1 and tier 2 support at the Business Support level or better For more information see https://awsamazoncom/premiumsupport/compareplans/ Uptime Guarantee of a minimum of 999% H H CACP ICT Each AWS service provides details on availability SLAs For instance Amazon EC2 has an availability SLA of 9995% (https://awsamazoncom/ec2/sla ) and Amazon S3 has an availability SLA of 9999% ( https://awsamazoncom/s3/sla ) 
Amazon Web Services May 2017 Page 7 of 38 Documented and proven configuration management processes M M MITS AWS maintains a documented and proven configuration management process that is performed during information system design development implementation and operation Documented and proven change control processes that adhere to ITIL service management processes M M MITS/CACP ICT AWS maintains change control processes that support the scale and complexity of the business and have been independently assessed Documented and proven incident response processes including: • Incident Identification • Incident Response • Incident Reporting • Incident Recovery • PostIncident Analysis M M MITS The AWS incident response program (detection investigation and response to incidents) has been developed in alignment with ISO 27001 standards Amazon Web Services May 2017 Page 8 of 38 Provide a current SOC Level 2 Compliance Report (if financi al data is used or stored) M M CACP ICT AWS provides access to its SOC 1 Type 2 and SOC 2 Type 2: Security & Availability report s subject to a nondisclosure agreement while the SOC 3: Security & Availability report is publicly available For more information see https://awsamazoncom/compliance/soc faqs/ Maintain current PCI compliance (if PCI data is used or stored) M M CACP ICT AWS maintains compliance with PCI DSS v32 as a Level 1 service provider For more information see https://awsamazoncom/compliance/pci dsslevel 1faqs/ Maintain current Cloud Controls Matrix (CCM) compliance report and provide to the agency upon request H H CACP ICT AWS is listed on the CSA’s Star registrant’s page located at https://cloudsecurityallianceorg/star registrant/amazonaws/ Amazon Web Services May 2017 Page 9 of 38 The Contractor must possess adequate disaster recovery and business continuity processes from a manmade or natural disaster The Contractor must provide their business continuity and disaster recovery plan to customer upon request The plans must include but i s not limited to: • How long it would take to recover from a disruption • How long it will take to switch to a backup site • The level of service and functionality provided by the backup site; and within what time frame the provider will recover th e primary da ta and service • A report on how and how often the customer data is backed up M M RCMP Customer resiliency in the cloud is transformed with the use of cloud Businesses are using AWS to enable faster disaster recovery of critical IT systems and we provide a whitepaper ( https://awsamazoncom/blogs/aws/new whitepaper useaws fordisaster recovery/ ) on using AWS for disaster recovery Customer resiliency is then not tied to any underlying infrastructure impacts AWS maintains internal operational continuity processes including N+2 physical redundancy from generators to third party service providers at every data centre globally Ability to determine where all agency information is at all times including online data and backups D M CACP ICT When using AWS c ustomers have full control of the movement of their data with the ability to choose the region in which their data is kept Amazon Web Services May 2017 Page 10 of 38 Ensure any connections to the Internet other external networks or information systems occur through controlled interfaces (eg proxies gateways routers firewalls encrypted tunnels) H M CJIS AWS has a limited number of access points to the information system to allow for a more comprehensive monitoring of inbound and outbound communications and network 
traffic These customer access points are called API endpoints which allow customers to establish a secure communication session with their storage or compute instances within AW S Customers have the ability to deploy various tools and mechanisms to monitor traffic and activity such as VPC configurations EC2 Security Groups the AWS Web Application Firewall (WAF) as well as secure encrypted connections For more information se e https://awsamazoncom/security/ Employ tools and techniques to monitor network events detect attacks and provide identification of unauthorized use 24x7 D M CJIS AWS customers benefit from AWS servi ces and technologies built from the ground up to provide resilience in the face of DDoS attacks to include services designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact The customer has broad latitude to implement similar capabilities within their customer environment to monitor system events detect attacks and provide identification of unauthorized use 24x7 to include vulnerability scanning and penetration testing For more information see https://d0awsstaticcom/whitepapers/DDoS_White_Paper_June 2015pdf Ensure the operational failure of the boundary protection mechanisms do not result in any unauthorized release of informatio n outside of the information system boundary (ie the device shall “fail closed” vs “fail open”) D M CJIS AWS users have the ability to configure their services to operate in a number of ways compliant with fail secure requirements Amazon Web Services May 2017 Page 11 of 38 Allocate publicly accessible information system components (eg public Web servers) to separate sub networks with separate network interfaces D H CACP ICT AWS does not operate publicly accessible information system components such as public web servers from within the cloud infrastructure All external interaction with the infrastructure is through a set of well known structured API end points Internet facing servers in the customer’s account are entirely within their operational control For more information see https://awsamazoncom/whitepapers/aws security best practices/ Data in transit is encrypted H M MITS AWS provides several means for supporting encrypting data in transit Enc rypted IPSec tunnels can be created between a customer’s endpoint and their VPC For more information see https://awsamazoncom/vpc Data at rest (local or backups) is encrypted H M MITS AWS provides a variety of options for encryption of data at rest For instance with S3 customers can securely upload or download data to Amazon S3 via the SSL encrypted endpoints using the HTTPS protocol Amazon S3 can automatically encrypt customer data at rest and gives sev eral choices for key management Alternatively customers can use a client encryption library such as the Amazon S3 Encryption Client to encrypt data before uploading to Amazon S3 If desired Amazon S3 can encrypt customer data at rest with server side en cryption (SSE); Amazon S3 will automatically encrypt customer data on write and decrypt your data on retrieval When Amazon S3 SSE encrypts data at rest it uses Advanced Encryption Standard (AES) 256 bit symmetric keys Amazon Web Services May 2017 Page 12 of 38 There are three ways to manage the encryption keys with server side encryption with Amazon S3: • SSE with Amazon S3 Key Management (SSE S3): Amazon S3 will encrypt data at rest and manage the encryption keys • SSE with Customer Provided Keys (SSEC): Amazon S3 will encrypt data at rest using the customer 
encryption keys customers provide • SSE with AWS KMS (SSEKMS): Amazon S3 will encrypt data at rest using keys only the customer manages in the AWS Key Management Service (KMS) For more information see: • https://awsamazoncom/s3/details/#security • https://awsamazoncom/kms/ When encryption is employed the cryptographic keys meet or exceed AES 256 H M CACP ICT AWS supports the use of AE S 256 Amazon Web Services May 2017 Page 13 of 38 When encryption is employed the cryptographic module used shall be certified to meet FIPS 1402 standards D H MITS AWS GovCloud (US) provides endpoints compliance with FIPS 1402 requirements Customers have the ability to deploy FIPS compliant modules within their account depending on their application’s ability to support FIPS 140 2 cryptographic modules Encryption keys be highly secured protected and available to the agency upon request M M MITS The use of AWS CloudHSM or AWS KMS provides the options for customers to create and control their own encryption keys For more information see: • https://awsamazoncom/kms/ • https://awsamazoncom/cloudhsm/ Encryption keys are controlled and stored by the agency D H CACP ICT The use of AWS CloudHSM or AWS KMS provides the options for customers to create and control their own encryption keys For more information see: • https://awsamazoncom/kms/ • https://awsamazoncom/cloudhsm/ Amazon Web Services May 2017 Page 14 of 38 External access to the administrative or management functions must be over VPN only This includes mode ms FTP or any protocol/port support provided by the equipment manufacturer This access must be limited to users with twofactor authentication D H NPISAB Customers can connect to the management console to administer their environment over VPN and mandate the use of twofactor authentication per internal agency requirements For more information see https://awsamazoncom/iam/details/mfa/ AWS infrastructure administrative connections to the AWS infrastructure are performed using secure mechanisms Agency data shall not be used by any service provider for any purposes The service provider shall be prohibited from scanning data files for the purpose of data mining or advertising M M CACP ICT AWS does not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to customers and their end users AWS never uses customer content or derives information from it for marketing or advertising For more information see https://awsamazoncom/compliance/data privacy faq/ The AWS Privacy Policy describes how AWS collects and uses information that customers provide in connection with the creation or administration of AWS accounts which is referred to as “Account Information ” For example Account Information includes names usernames phone numbers email addresses and billing information associated with a customer’ s AWS account The AWS Privacy Policy applies to customers’ Account Information and does not apply to the content that customers store on AWS including any personal information of customer end users AWS will not disclose move access or use customer content except as provided in the customer’s agreement with AWS The customer agreement with AWS (https://awsamazoncom/agreement/ ) and the AWS Data Protection FAQ contain more information about how we handle content you store on our systems Amazon Web Services May 2017 Page 15 of 38 All firewalls meet the minimum standard of Evaluation Assurance Level (EAL) 4 H M NPISAB AWS provides multiple features and 
services to help customers protec t data including the AWS Web Application Firewall (WAF) There are also several vendors in the AWS Marketplace with similar security utility product offerings For more information see: • https://awsamazoncom/waf/ • https://awsamazoncom/marketplace Ensure regular virus malware & penetration testing of their environment M M NPISAB AWS ensures regular virus malware and penetration testing of the infrastructure environment Customers can also conduct their own penetration testing within their account For more information see https://awsamazoncom/security/penetration testing/ Provide sufficient documentation of their virus malware & penetration testing results and upon request by the agency the vendor will provide a current report H M CACP ICT AWS ’ program processes and p rocedures for managing antivirus/malicious software are in alignment with the ISO 27001 standard and are referenced in AWS SOC reports AWS Security regularly engages independent security firms to perform external vulnerability threat assessments and has been validated and certified by an independent auditor to confirm al ignment with ISO 27 001 certification standard Amazon Web Services May 2017 Page 16 of 38 Provide sufficient documentation of all patch management and upon request by the agency the vendor will provide a current report H M CACP ICT Customers retain control of their own guest operating systems software and applications and are responsible for performing vulnerability scans and patching of their own systems Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy AWS regularly scans all Internet facing service endpoint IP addresses for vulnerabilities AWS Security notifies the appropriate parties to remediate any identified vulnerabilities AWS’ own maintenance and system patching generally do not impact customers For more information see AWS Security Whitepaper (available at https://awsamazoncom/security/ ) and ISO 27001 standard Annex A domain 12 AWS has been validated and certified by an independent auditor to confirm alignment with ISO 27001 certification standard Continual monitoring and logging for the following events: • DDOS attacks • Unauthorized changes to the system hardware firmware and software • System performance anomalies • Known attack signatures D M MITS AWS employs a variety of tools and techniques to monitor network events and unauthorized use 24x7 AWS customers benefit from AWS services and technologies built from the ground up to provide resilience in the face of DDoS attacks including servic es designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact The customer has broad latitude to implement similar capabilities within their customer environment to monitor system events detect attacks and provide id entification of unauthorized use 24x7 For more information see: • https://d0awsstaticcom/whitepapers/DDoS_White_Pap er_June2015pdf • https://awsamazoncom/security Amazon Web Services May 2017 Page 17 of 38 Ability to enable data retention policies as defined by the customer D H CACP ICT While AWS provides customers with the ability to delete their data AWS customers retain control and ownership of their data and are responsible for managing data retention to their own requirements AWS maintains data retention policies in accordance with several well known international standards and regulations 
such as SOC and PCI DSS that are independently assessed and attested Amazon Web Services May 2017 Page 18 of 38 Information Security Requirements Requirement Protected A Protected B Reference AWS Responsibility Ability to determine where all agency information is at all times including online data and backups D M CACP ICT Customers have full control of the movement of their data when using AWS with the choice of the region in which their data is kept Ensure any connections to the Internet other external networks or information systems occur through controlled interfaces (eg proxies gateways routers firewalls encrypted tunnels) H M CJIS AWS has a limited number of access points to the information system to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic These customer access points are called API endpoints which allow customers to establish a secure communication session with their storage or compute instances within AWS Customers have the ability to deploy various tools and mechan isms to monitor traffic and activity such as VPC configurations EC2 Security Groups the AWS Web Application Firewall (WAF) as well as secure encrypted connections For more information see https://awsa mazoncom/security/ Amazon Web Services May 2017 Page 19 of 38 Employ tools and techniques to monitor network events detect attacks and provide identification of unauthorized use 24x7 D M CJIS AWS customers benefit from AWS services and technologies built from the ground up to provide resilience in the face of DDoS attacks to include services designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact The customer has broad latitude to implement similar capabilities within their customer environment to moni tor system events detect attacks and provide identification of unauthorized use 24x7 to include vulnerability scanning and penetration testing For more information see • https://d0awsstaticcom/whitepapers/DDoS_White_Paper_J une2015pdf • https://awsamazoncom/security • https://awsamazoncom/security/penetrat iontesting/ Ensure the operational failure of the boundary protection mechanisms do not result in any unauthorized release of information outside of the information system boundary (ie the device shall “fail closed” vs “fail open”) D M CJIS Users in AWS have the ability to configure their services to operate in a number of ways compliant with fail secure requirements Amazon Web Services May 2017 Page 20 of 38 Allocate publicly accessible information system components (eg public Web servers) to separate sub networks with separate network interfaces D H CACP ICT AWS does not operate publicly accessible information system components such as public web servers from within the cloud infrastructure All external interaction with the infrastructure is through a set of well known struc tured API end points Internet facing servers in the customer’s account are entirely within their operational control For more information see https://awsamazoncom/whitepap ers/aws security best practices/ Data in transit is encrypted H M MITS AWS provides several options for supporting encrypting data in transit Encrypted IPSec tunnels can be created between a customer’s endpoint and their VPC For more information see https://awsamazoncom/vpc Data at rest (local or backups) is encrypted H M MITS AWS provides a variety of options for encryption of data at rest For example with S3 customers can securely upload or download data to Amazon S3 via the SSL 
encrypted endpoints using the HTTPS protocol Amazon S3 can automatically encrypt customer data at rest and offers several choices for key management Alternatively customers can use a client encryption library such as the Amazon S3 Encryption Client to encrypt data before uploading to Amazon S3 If desired Amazon S3 can encrypt customer data at rest with server side encryption (SSE); Amazon S3 will automatically encrypt customer dat a on write and decrypt your data on retrieval When Amazon S3 SSE encrypts data at rest it uses Advanced Encryption Standard (AES) 256 bit symmetric keys There are three ways to Amazon Web Services May 2017 Page 21 of 38 manage the encryption keys with server side encryption with Amazon S3: • SSE with Amazon S3 Key Management (SSE S3): Amazon S3 will encrypt data at rest and manage the encryption keys ; • SSE with Customer Provided Keys (SSEC): Amazon S3 will encrypt data at rest using the customer encryption keys customers provide; or • SSE with AWS KMS (SSEKMS): Amazon S3 will encrypt data at rest using keys only the customer manages in the AWS Key Management Service (KMS) For more information see • https://awsamazoncom/s3/details/#s ecurity • https://awsamazoncom/kms When encryption is employed the cryptographic keys meet or exceed AES 256 H M CACP ICT AWS supports the use of AES 256 Amazon Web Services May 2017 Page 22 of 38 When encryption is employed the cryptographic module used shall be certified to meet FIPS 140 2 standards D H MITS AWS GovCloud (US) provides endpoints compliance with FIPS 1402 requirements Customers have the ability to deploy FIPS compliant modules within their account depending on their application’s ability to support FIPS 1402 cryptographic modules For more information see https://awsamazoncom/federal/ Encryption keys be highly secured protected and available to the agency upon request M M MITS The use of AWS CloudHSM or AWS KMS provides the options for customers to create and control their own encryption keys For more information see: • https://awsamazoncom/kms/ • https://awsamazoncom/cloudhsm/ Encryption keys are controlled and stored by the agency D H CACP ICT The use of AWS CloudHSM or AWS KMS provides the options for customers to create and control their own encryption keys For more information see: • https://awsamazoncom/kms/ • https://awsamazoncom/cloudhsm/ Amazon Web Services May 2017 Page 23 of 38 External access to the administrative or management functions must be over vpn only This includes modems ftp or any protocol/port support provided by the equipment manufacturer This access must be limited t o users with two factor authentication D H NPISAB Customers can connect to the management console to administer their environment over VPN and mandate the use of twofactor authentication per internal agency requirements For more information see https://awsamazoncom/iam/details/mfa/ AWS infrastructure administrative connections to the AWS infrastructure are performed using secure mechanisms Agency data shall not be used by any service provider for any purposes The service provider shall be prohibited from scanning data files for the purpose of data mining or advertising M M CACP ICT AWS does not access or use customer content for any purpose other than as legally required and for maintaining t he AWS services and providing them to customers and their end users AWS never uses customer content or derives information from it for marketing or advertising For more information see https://awsamazoncom/compliance/data privacy faq/ The 
AWS Privacy Policy describes how AWS collects and uses information that customers provide in connection with the creation or administration of AWS accounts which is referred to as “Account Informat ion” For example Account Information includes names usernames phone numbers email addresses and billing information associated with a customer’s AWS account The AWS Privacy Policy applies to customers’ Account Information and does not apply to the content that customers store on AWS including any personal information of customer end users AWS will not disclose move access or use customer content except as provided in the customer’s agreement with AWS The customer agreement with AWS (https://awsamazoncom/agreement/ ) and the AWS Data Protection FAQ contain more information about how we handle content you store on our systems Amazon Web Services May 2017 Page 24 of 38 All firewalls meet the minimum standard of Evaluation Assurance Level (EAL) 4 H M NPISAB AWS provides multiple features and services to help customers protect data including the AWS Web Application Firewall (WAF) There are also several vendors in the AWS Marketplace with similar security utility product offerings For more information see • https://awsamazoncom/waf/ • https://awsamazoncom/marketplace Ensure regular virus malware & penetration testing of their environment M M NPISAB AWS ensures regular virus malware and penetration testing of the infrastructure environment Customers can also conduct their own penetration testing within their account For more information see https://awsamazoncom/security/penetrationtesting/ Provide sufficient documentation of their virus malware & penetration testing results and upon request by the agency the vendor will provide a current report H M CACP ICT Custom ers retain control of their own guest operating systems software and applications and are responsible for performing vulnerability scans and patching of their own systems Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy AWS regularly scans all Internet facing service endpoint IP addresses for vulnerabilities AWS Security notifies the appropriate parties to remediate any identified vulnerabilities AWS’ own maintenance and system patching generally do not impact customers Amazon Web Services May 2017 Page 25 of 38 For more information see AWS Security Whitepaper (available at https://awsamazoncom/security/ ) and ISO 27001 standard Annex A domain 12 AWS has been validated and certified by an independent auditor to confirm alignment with ISO 27001 certification standard Provide sufficient documentation of all patch management and upon request by the agenc y the vendor will provide a current report H M CACP ICT Customers retain control of their own guest operating systems software and applications and are responsible for performing vulnerability scans and patching of their own systems Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy AWS regularly scans all Internet facing service endpoint IP addresses for vulnerabili ties AWS Security notifies the appropriate parties to remediate any identified vulnerabilities AWS’ own maintenance and system patching generally do not impact customers For more information see AWS Security Whitepaper (available at https://awsamazoncom/security/ ) 
and ISO 27001 standard Annex A domain 12 AWS has been validated and certified by an independent auditor to confirm alignment with ISO 27001 certification standard Amazon Web Services May 2017 Page 26 of 38 Continual monitoring an d logging for the following events: • DDOS Attacks • Unauthorized changes to the system hardware firmware and software • System performance anomalies • Known attack signatures D M MITS AWS employs a variety of tools and techniques to monitor network events and unauthorized use 24x7 AWS customers benefit from AWS services and technologies built from the ground up to provide resilience in the face of DDoS attacks including services designed with an automatic response to DDoS to help minimize time to mitigate and reduce impact The customer has broad latitude to implement similar capabilities within their customer environment to monitor system events detect attacks and provide identification of unauthorized use 24x7 For more information see: • https://d0awsstaticcom/whitepapers/DDoS_White_Paper_J une2015pdf • https://awsamazoncom/security Ability to enable data retention policies a s defined by the customer D H CACP ICT While AWS provides customers with the ability to delete their data AWS customers retain control and ownership of their data and are responsible for managing data retention to their own requirements AWS maintains data retention policies in accordance with several wellknown international standards and regulations such as SOC and PCIDSS that are independently assessed and attested Amazon Web Services May 2017 Page 27 of 38 Data Centre Security Requirements Requirement Protected A Protected B Reference AWS Responsibility The data centre must be physically secured against the entry of unauthorized personnel H M MITS AWS strictly controls access to data centres even for internal employees Physical access to all AWS data centres housing IT infrastructure components is restricted to authorized data centre employees vendors and contractors who require access in order to execute their jobs AWS data centres utilize trained security guards 24x7 Due to the fact that our data centres host multi ple customers AWS does not allow data center tours by customers as this exposes a wide range of customers to physical access of a third party To meet this customer need an independent and competent auditor validates the presence and operation controls as part of our SOC 1 Type II report This broadly accepted thirdparty validation provides customers with the independent perspective of the effec tiveness of controls in place Locked doors with access control systems that restrict entry to authorized p arties only All activity must be logged H M RCMP Physical access to the AWS data centres is controlled by an access control system and all activity is logged Amazon Web Services May 2017 Page 28 of 38 Logs of personnel access privilege shall be kept for a minimum of one year and provided to the agency upon request D M CACP ICT Physical access logs are maintained for a minimum of one year Access logs are provided to independent auditors in support of our formal compliance audits Logs of personnel access changes shall be kept for a minimum of one year and provided to the agency upon request D M CJIS Physical access logs are maintained for a minimum of one year Building must be constructed with walls that are difficult to breach D M RCMP Buildings are constructed according to local building code (typically concrete) Amazon Web Services May 2017 Page 29 of 38 Twofactor authentication 
to enter the building containing the data centre D H MITS Access to AWS data centres requires a variety of twofactor authentication mechanisms CCTV Video displayed and recorded for all entr y and exit paths and building exterior D M CACP ICT CCTV systems are in use for every AWS data centre with recorded video 24x 7 guard personnel at all main entry points to the building Bags and packages will be examined upon entry D M CJIS AWS uses gu ard personnel at all main entry points 24x7 with bag searches in place Amazon Web Services May 2017 Page 30 of 38 Authenticate visitors before authorizing escorted access to the data centre H M CJIS Physical access to all AWS data centres housing IT infrastructure components is restricted to authorized data centre employees vendors and contractors who require access in order to execute their jobs and includes the escorting of visitors where applicab le All customer information must be logically (and/or physically) separated from all other customer’s information This separation must be tested by an unbiased third party or demonstrated by the data centre management D H CJIS All customer information is logically separated by default through the use of the Amazon Virtual Private Cloud (VPC) service – a service that has been assessed by multiple third party assessors For more information see https://awsamazoncom/vpc/ Ability to indicate and limit which data centres agency data will be stored in D M CACP ICT The location of customer data is determined by the customer at the region level AWS does not access use or move customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to customers and their end users Amazon Web Services May 2017 Page 31 of 38 Agency information kept within a secure server room (SSR) that includes the following: • Vibration detection on walls • Intrusion detection system inside the secure server room • Two person authentication to enter the secure server room D H RCMP AWS utilizes several layers of security to protect the server rooms within the data centre (“red zones”) AWS employs several physic al security mechanisms including intrusion detection systems and two person authentication Disposal of hard drives with agency information includes the following steps to meet Canadian Standard ITSG 06: 1 Disk Encryption or overwriting 2 Grind or hammer mill into at least three pieces D M RCMP AWS uses multiple steps during the process of media decommissioning for both magnetic hard drives (HDD) and solid state drives (SSD) On site HDDs are degaussed and then bent to an abrupt angle and SSDs are logically overwritten before being punched Both types of drives are ultimately shredded for recycling of materials Customers have the ability to conduct a variety of sanitization methods themselves including data deletion using relevant tools or encrypting data and destroying the encryption key rendering the data permanently unusable Amazon Web Services May 2017 Page 32 of 38 Personnel Security Requirements Requirement Protected A Protected B Reference AWS Responsibility All system administrators and personnel with access to the facility must have Enhanced Security Check completed by a substantive law enforcement agency A Canadian federal security clearance of level Secret or higher may be substituted and considered equivalent A US federal security clearance of level Secret or higher may be substi tuted and considered equivalent H M RCMP All AWS employees must complete a comprehensive 
preemployment background check Several specific positions are also processed through a separate Trusted Position Check Additionally there are many employees that hold or are otherwise processed for a US national security clearance (TS/SCI) (reinvestigated every five years) and/or Criminal Justice Information Services (CJIS) fingerprint and records check Personnel must have initial background checks at the time of first employment with the Data Centre owner Security clearances must be maintained within the expiry period All system administrators and personnel with access to the facility must have the background check repeated on a five year cycle H M RCMP All AWS employees must complete a comprehensive pre employment background check Several specific positions are also processed through a separate Trusted Position Check Additionally there are many employees that hold or are otherwise processed for a US na tional security clearance (TS/SCI) (reinvestigated every five years) and/or Criminal Justice Information Services (CJIS) fingerprint and records check Employees with physical access are not provisioned logical access Amazon Web Services May 2017 Page 33 of 38 Upon termination of individual emplo yment shall immediately terminate access to the facility M M RCMP Upon termination all employees’ access to systems and facilities are revoked immediately Must maintain a list of personnel who have been authorized system or physical access to the date centre and its systems and upon request provide a current copy to the agency H M CJIS AWS maintains a list of employees with physical access as granted through the process to receive physical access Logical access lists are retained as part of the LDAP permission group structure and does not constitute a consolidated list for distribution All access management for both physical and logical are independently audited by multiple third party auditors for several formal compliance programs The Contractor must enforce separation of job duties require commercially reasonable non disclosure agreements and limit staff knowledge of customer data to that which is absolutely needed to perform to work H M RCMP AWS rigorously employs the principles of least pri vilege separation of roles and responsibilities and disclosure of information on a need to know basis Amazon Web Services May 2017 Page 34 of 38 Access Control Requirements Requirement Protected A Protected B Reference AWS Responsibility A password minimum length will be 8 characters and will have 3 of the 4 complexity requirements: • Upper case • Lower case • Special characters • Numeric Characters H M CJIS Access to the AWS infrastructure requires multi factor authentication to include password complexity requirements Customers can im plement this requirement within their account which AWS does not manage on their behalf The following password rules are implemented : • A password re use restriction will be used • Password lifespans will be implemented and the time is configurable by the agency (standard 90 days) • Not be a dictionary word or proper name • Not be the same as the user ID • Not be identical to the previous 6 passwords • Must be transmitted and stored in an encrypted state H M CJIS/NPISAB Access to the AWS infrastructure requires multi factor authentication to include password complexity and protection requirements Customers can implement this requirement within their account which AWS does not manage on their behalf Amazon Web Services May 2017 Page 35 of 38 • Not be displayed when entered • 
Automatic storage and caching o f passwords by applications must be disabled User lockout after failed login attempts will be implemented and the count is configurable by the agency (default to 5) M M CJIS/NPISAB Customers can implement this requirement within their account which AWS does not manage on their behalf Password reset will leverage automated email personal identity verification questions M M CACP ICT Customers can implement this requirement within their account which AWS does not manage on their behalf Amazon Web Services May 2017 Page 36 of 38 Policy exist to ensure passwords must not be emailed or given over the phone M M CACP ICT Customers can implement this requirement within their account which AWS does not manage on their behalf When using a Personal Identification Number (PIN) as a standa rd authenticator the following rules are implemented : • Must be a minimum of 6 digits • Have no repeating digits (eg 112233) • Have no sequential patterns (eg 12345) • Expire within a maximum of 365 days (unless PIN is second factor) • Not be identical to previous 3 PINS • Must be transmitted and stored in an encrypted state • Not be displayed when entered H M CJIS Customers can implement this requirement within their account which AWS does not manage on their behalf Amazon Web Services May 2017 Page 37 of 38 System activity timer that will redirect user to the login page after a specific time that is configurable by the agency (Session Lock) (default to 30 mins) M M CJIS Customers can implement this requirement within their account which AWS does not manage on their behalf The information system shall display an agency configurable system use message notification message D H CJIS Customers can implement this requirement within their account which AWS does not manage on their behalf Continual monitoring and logging for the following events: • Successful and Unsuccessful login attempts • Successful and Unsuccessful attempts to view/modify/delete permissions files directory or system resources D M MITS AWS maintains logging and monitoring requirements in accordance with a variety of standards and requirements to include ISO 27001 SOC PCI DSS FedRAMP US Department of Defense Cloud Computing Security Requirements Guidance (DoD CC SRG) CJIS and others covering these requirements Amazon Web Services May 2017 Page 38 of 38 • Successful and Unsuccessful attempts to change account passwords • Successful and Unsuccessful attempts to view/modify/delete audit logs Utilize Strong Identification & Authentication leveraging Public Key Infrastructure (PKI) D H CACP ICT Customers can implement this requirement wi thin their account which AWS does not manage on their behalf Document Revisions Date Description May 2017 First publication
|
General
|
consultant
|
Best Practices
|
AWS_Risk__Compliance
|
Amazon Web Services: Risk and Compliance

Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

Table of Contents
Abstract
Introduction
Shared responsibility model
Evaluating and integrating AWS controls
AWS risk and compliance program
AWS business risk management
Operational and business management
Control environment and automation
Controls assessment and continuous monitoring
AWS certifications, programs, reports, and third-party attestations
Cloud Security Alliance
Customer cloud compliance governance
Conclusion
Contributors
Further reading
Document Revisions
Notices

Abstract

Publication date: March 11, 2021 (see Document Revisions)

AWS serves a variety of customers, including those in regulated industries. Through our shared responsibility model, we enable customers to manage risk effectively and efficiently in the IT environment, and provide assurance of effective risk management through our compliance with established, widely recognized frameworks and programs. This paper outlines the mechanisms that AWS has implemented to manage risk on the AWS side of the Shared Responsibility Model and the tools that customers can leverage to gain assurance that these mechanisms are being implemented effectively.

Introduction

AWS and its customers share control over the IT environment; therefore, security is a shared responsibility. When it comes to managing security and compliance in the AWS Cloud, each party has distinct responsibilities. A customer's responsibility depends on which services they are using. However, in general, customers are responsible for building their IT environment in a manner that aligns with their specific security and compliance requirements. This paper provides more details about each party's security responsibilities and the ways customers can benefit from the AWS Risk and Compliance Program.

Shared responsibility model

Security and compliance are shared responsibilities between AWS and the customer. Depending on the services deployed, this shared model can help relieve the customer's operational burden. This is because AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility for and management of the guest operating system (including updates and security patches) and other associated application software, in addition to the configuration of the AWS-provided security group firewall. We recommend that customers carefully consider the services they choose, because their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. It is possible for customers to enhance their security and/or meet their more stringent compliance requirements by leveraging technology such as host-based firewalls, host-based intrusion detection and prevention, and encryption and key management.
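As a small illustration of the customer side of this model, the sketch below uses the AWS SDK for Python (Boto3) to create a customer managed KMS key and encrypt a short payload. The key description is arbitrary, and the default key policy is used here for brevity only.

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key. In practice you would also tailor the key
# policy and grants to your own separation-of-duties requirements; the
# defaults used here are for illustration only.
key = kms.create_key(Description="Example key for application-layer encryption")
key_id = key["KeyMetadata"]["KeyId"]

# Encrypt a small payload (KMS encrypts up to 4 KB directly; larger data is
# normally protected with envelope encryption via data keys).
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"example record")["CiphertextBlob"]

# Decrypt it again; permission to call Decrypt can be granted independently
# of permission to read the stored ciphertext.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"example record"
```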
and/or meet their more stringent compliance requirements by leveraging technology such as hostbased firewalls host based intrusion detection and prevention encryption and key management The nature of this shared responsibility also provides the flexibility and customer control that permits customers to deploy solutions that meet industryspecific certification requirements This shared responsibility model also extends to IT controls Just as the responsibility to operate the IT environment is shared between AWS and its customers the management operation and verification of IT controls is also a shared responsibility AWS can help customers by managing those controls associated with the physical infrastructure deployed in the AWS environment Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required For examples of how responsibility for certain controls is shared between AWS and its customers see the AWS Shared Responsibility Model 3Amazon Web Services: Risk and Compliance Evaluating and integrating AWS controls AWS provides a wide range of information about its IT control environment to customers through technical papers reports certifications and other thirdparty attestations This documentation helps customers to understand the controls in place relevant to the AWS services they use and how those controls have been validated This information also helps customers account for and validate that controls in their extended IT environment are operating effectively Traditionally internal and/or external auditors validate the design and operational effectiveness of controls by process walkthroughs and evidence evaluation This type of direct observation and verification by the customer or customer’s external auditor is generally performed to validate controls in traditional onpremises deployments In the case where service providers are used (such as AWS) customers can request and evaluate third party attestations and certifications These attestations and certifications can help assure the customer of the design and operating effectiveness of control objective and controls validated by a qualified independent third party As a result although some controls might be managed by AWS the control environment can still be a unified framework where customers can account for and verify that controls are operating effectively and accelerating the compliance review process Thirdparty attestations and certifications of AWS provide customers with visibility and independent validation of the control environment Such attestations and certifications may help relieve customers of the requirement to perform certain validation work themselves for their IT environment in the AWS Cloud 4Amazon Web Services: Risk and Compliance AWS business risk management AWS risk and compliance program AWS has integrated a risk and compliance program throughout the organization This program aims to manage risk in all phases of service design and deployment and continually improve and reassess the organization’s riskrelated activities The components of the AWS integrated risk and compliance program are discussed in greater detail in the following sections AWS business risk management AWS has a business risk management (BRM) program that partners with AWS business units to provide the AWS Board of Directors and AWS senior leadership a holistic view of key risks across AWS The BRM program demonstrates independent risk oversight over AWS 
functions Specifically the BRM program does the following: •Performs risk assessments and risk monitoring of key AWS functional areas •Identifies and drives remediation of risks •Maintains a register of known risks To drive the remediation of risks the BRM program reports the results of its efforts and escalates where necessary to directors and vice presidents across the business to inform business decisionmaking Operational and business management AWS uses a combination of weekly monthly and quarterly meetings and reports to among other things ensure communication of risks across all components of the risk management process In addition AWS implements an escalation process to provide management visibility into high priority risks across the organization These efforts taken together help ensure that risk is managed consistently with the complexity of the AWS business model In addition through a cascading responsibility structure vice presidents (business owners) are responsible for the oversight of their business To this end AWS conducts weekly meetings to review operational metrics and identify key trends and risks before they impact the business Executive and senior leadership play important roles in establishing the AWS tone and core values Every employee is provided with the company’s Code of Business Conduct and Ethics and employees complete periodic training Compliance audits are performed so that employees understand and follow established policies The AWS organizational structure provides a framework for planning executing and controlling business operations The organizational structure includes roles and responsibilities to provide for adequate staffing efficiency of operations and the segregation of duties Management has also established appropriate lines of reporting for key personnel The company’s hiring verification processes include validation of education previous employment and in some cases background checks as permitted by law and regulation for employees commensurate with the employee’s position and level of access to AWS facilities The company follows a structured onboarding process to familiarize new employees with Amazon tools processes systems policies and procedures 5Amazon Web Services: Risk and Compliance Control environment and automation Control environment and automation AWS implements security controls as a foundational element to manage risk across the organization The AWS control environment is comprised of the standards processes and structures that provide the basis for implementing a minimum set of security requirements across AWS While processes and standards included as part of the AWS control environment stand on their own AWS also leverages aspects of Amazon’s overall control environment Leveraged tools include: •Tools used across all Amazon businesses such as the tool that manages separation of duties •Certain Amazonwide business functions such as legal human resources and finance In instances where AWS leverages Amazon’s overall control environment the standards and processes governing these mechanisms are tailored specifically for the AWS business This means that the expectations for their use and application within the AWS control environment may differ from the expectations for their use and application within the overall Amazon environment The AWS control environment ultimately acts as the foundation for the secure delivery of AWS service offerings Control automation is a way for AWS to reduce human intervention in certain recurring processes 
comprising the AWS control environment It is key to effective information security control implementation and associated management of risks Control automation seeks to proactively minimize potential inconsistencies in process execution that might arise due to the flawed nature of humans conducting a repetitive process Through control automation potential process deviations are eliminated This provides increased levels of assurance that a control will be applied as designed Engineering teams at AWS across security functions are responsible for engineering the AWS control environment to support increased levels of control automation wherever possible Examples of automated controls at AWS include: •Governance and Oversight: Policy versioning and approval •Personnel Management: Automated training delivery rapid employee termination •Development and Configuration Management: Code deployment pipelines code scanning code backup integrated deployment testing •Identity and Access Management: Automated segregation of duties access reviews permissions management •Monitoring and Logging: Automated log collection and correlation alarming •Physical Security: Automated processes related to AWS data centers including hardware management data center security training access alarming and physical access management •Scanning and Patch Management: Automated vulnerability scanning patch management and deployment Controls assessment and continuous monitoring AWS implements a variety of activities prior to and after service deployment to further reduce risk within the AWS environment These activities integrate security and compliance requirements during the design and development of each AWS service and then validate that services are operating securely after they are moved into production (launched) Risk management and compliance activities include two prelaunch activities and two postlaunch activities The prelaunch activities are: •AWS Application Security risk management review to validate that security risks have been identified and mitigated 6Amazon Web Services: Risk and Compliance AWS certifications programs reports and thirdparty attestations •Architecture readiness review to help customers ensure alignment with compliance regimes At the time of its deployment a service will have gone through rigorous assessments against detailed security requirements to meet the AWS high bar for security The postlaunch activities are: •AWS Application Security ongoing review to help ensure service security posture is maintained •Ongoing vulnerability management scanning These control assessments and continuous monitoring allow regulated customers the ability to confidently build compliant solutions on AWS services For a list of services in the scope for various compliance programs see the AWS Services in Scope webpage AWS certifications programs reports and third party attestations AWS regularly undergoes independent thirdparty attestation audits to provide assurance that control activities are operating as intended More specifically AWS is audited against a variety of global and regional security frameworks dependent on region and industry AWS participates in over 50 different audit programs The results of these audits are documented by the assessing body and made available for all AWS customers through AWS Artifact AWS Artifact is a no cost selfservice portal for ondemand access to AWS compliance reports When new reports are released they are made available in AWS Artifact allowing customers to continuously monitor 
the security and compliance of AWS with immediate access to new reports Depending on a country’s or industry’s local regulatory or contractual requirements AWS may also undergo audits directly with customers or governmental auditors These audits provide additional oversight of the AWS control environment to ensure that customers have the tools to help themselves operate confidently compliantly and in a riskbased manner using AWS services For more detailed information about the AWS certification programs reports and thirdparty attestations visit the AWS Compliance Program webpage You can also visit the AWS Services in Scope webpage for servicespecific information Cloud Security Alliance AWS participates in the voluntary Cloud Security Alliance (CSA) Security Trust & Assurance Registry (STAR) SelfAssessment to document its compliance with CSApublished best practices The CSA is “the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment”The CSA Consensus Assessments Initiative Questionnaire (CAIQ) provides a set of questions the CSA anticipates a cloud customer and/or a cloud auditor would ask of a cloud provider It provides a series of security control and process questions which can then be used for a wide range of efforts including cloud provider selection and security evaluation There are two resources available to customers that document the alignment of AWS to the CSA CAIQ The first is the CSA CAIQ Whitepaper and the second is a more detailed control mapping to our SOC2 controls which is available to via AWS Artifact For more information about the AWS participation in CSA CAIQ see the AWS CSA site 7Amazon Web Services: Risk and Compliance Customer cloud compliance governance AWS customers are responsible for maintaining adequate governance over their entire IT control environment regardless of how or where IT is deployed Leading practices include: •Understanding the required compliance objectives and requirements (from relevant sources) •Establishing a control environment that meets those objectives and requirements •Understanding the validation required based on the organization’s risk tolerance •Verifying the operating effectiveness of their control environment Deployment in the AWS Cloud gives enterprises different options to apply various types of controls and various verification methods Strong customer compliance and governance may include the following basic approach: 1Reviewing the AWS Shared Responsibility Model AWS Security Documentation AWS compliance reports and other information available from AWS together with other customerspecific documentation Try to understand as much of the entire IT environment as possible and then document all compliance requirements into a comprehensive cloud control framework 2Designing and implementing control objectives to meet the enterprise compliance requirements as laid out in the AWS Shared Responsibility Model 3Identifying and documenting controls owned by outside parties 4Verifying that all control objectives are met and all key controls are designed and operating effectively Approaching compliance governance in this manner will help customers gain a better understanding of their control environment and will help clearly delineate the verification activities to be performed 8Amazon Web Services: Risk and Compliance Conclusion Providing highly secure and resilient infrastructure and services to our customers is a top priority for AWS Our commitment to 
our customers is focused on working to continuously earn customer trust and to ensure that customers maintain confidence in operating their workloads securely on AWS. To achieve this, AWS has integrated risk and compliance mechanisms that include:
• The implementation of a wide array of security controls and automated tools
• Continuous monitoring and assessment of security controls to help ensure AWS operational effectiveness and strict adherence to compliance regimes
• Independent risk assessment by the AWS Business Risk Management program
• Operational and business management mechanisms
In addition, AWS regularly undergoes independent third-party audits to provide assurance that the control activities are operating as intended. These audits, along with the many certifications AWS has obtained, provide an additional level of validation of the AWS control environment that benefits customers. Taken together with customer-managed security controls, these efforts allow AWS to securely innovate on behalf of customers and help customers improve their security posture when building on AWS.

Contributors

Contributors to this document include:
• Marta Taggart, Senior Program Manager, AWS Security
• Bradley Roach, Risk Manager, AWS Business Risk Management
• Patrick Woods, Senior Security Specialist, AWS Security

Further reading

AWS provides customers with information regarding its security and control environment by:
• Obtaining and maintaining industry certifications and independent third-party attestations, as listed on the AWS Compliance Programs page
• Consistently publishing information about AWS security and control practices in whitepapers and web content such as the AWS Security Blog
• Providing in-depth descriptions of how AWS uses automation at scale to manage its service infrastructure in The AWS Builders Library
• Enhancing transparency by providing compliance certificates, reports, and other documentation directly to AWS customers via the self-service portal known as AWS Artifact
• Providing AWS Compliance Resources and consistently documenting and publishing answers to queries on the AWS Compliance FAQs webpage
Customers can also follow the design principles in the AWS Well-Architected Framework for guidance on how to approach the above-the-line configuration of the workloads they build on AWS.

Document Revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.
• March 10, 2021: Minor updates; reviewed for technical accuracy.
• November 1, 2020: Whitepaper updated. This version includes substantial changes, including removal of the reference information about compliance programs and schemes because this information is available on the AWS Compliance Programs and AWS Services in Scope by Compliance Program webpages. Additionally, the section covering common compliance questions was removed because that information is now available on the AWS Compliance FAQs webpage.
• May 1, 2011: Initial publication of the Amazon Web Services: Risk and Compliance whitepaper.

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.
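The shared responsibility and customer governance sections above leave the verification of customer-owned controls, such as the configuration of the AWS-provided security group firewall, to the customer. The sketch below is one hedged illustration of how such a spot check might be automated; it is not taken from the whitepaper. It assumes Python with boto3 installed and AWS credentials configured, and the flagged ports, the default region, and the decision to treat 0.0.0.0/0 ingress as a finding are illustrative assumptions.

```python
# Illustrative sketch only: one way a customer might spot-check part of their
# side of the shared responsibility model (security group firewall configuration).
# Assumes boto3 is installed and AWS credentials/region are configured.
# The flagged ports and the 0.0.0.0/0 test are example policy choices, not AWS requirements.
import boto3

FLAGGED_PORTS = {22, 3389}  # example: remote-administration ports (assumption)


def find_open_admin_ports(region_name="us-east-1"):
    """Return (security group id, port) pairs where a flagged port is open to the world."""
    ec2 = boto3.client("ec2", region_name=region_name)
    findings = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for group in page["SecurityGroups"]:
            for rule in group.get("IpPermissions", []):
                from_port = rule.get("FromPort")
                to_port = rule.get("ToPort")
                if from_port is None or to_port is None:
                    continue  # e.g. all-traffic rules without explicit ports
                open_to_world = any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
                )
                if not open_to_world:
                    continue
                for port in FLAGGED_PORTS:
                    if from_port <= port <= to_port:
                        findings.append((group["GroupId"], port))
    return findings


if __name__ == "__main__":
    for group_id, port in find_open_admin_ports():
        print(f"{group_id}: port {port} is open to 0.0.0.0/0")
```

In practice, findings like these would typically feed the customer's own control framework, in line with step 4 of the governance approach described above, rather than a console printout.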
|
General
|
consultant
|
Best Practices
|
AWS_Risk_and_Compliance_Overview
|
Archived AWS Risk and Compliance Overview This paper has been archived January 2017 For the latest information on risk and compliance see Amazon Web Services: Risk and ComplianceArchived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assuranc es from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Shared Responsibility Environment 1 Strong Compliance Governance 2 Evaluating and Integrating AWS Controls 3 AWS IT Control Information 3 AWS Global Regions 5 AWS Risk and Compliance Program 5 Risk Management 5 Control Environment 6 Information Security 7 AWS Contact 7 Further Reading 8 Document Revisions 8 Archived Abstract This paper provides information to help customers integrate AWS into their existing control framework including a basic approach for evaluating AWS controls ArchivedAmazon Web Services – Risk and Compliance Overview Page 1 Introduction AWS and its customers share control over the IT environment AWS’ part in this shared responsibility includes providing its services on a highly secure and controlled platform and providing a wide array of security features customers can use The customers’ responsibility includes configuring their IT environments in a secure and controlled manner for their purposes While customers don’t communicate their use and configurations to AWS AWS does communicate its security and control environment relevant to customers AWS does this by doing the following: • Obtaining industry certifications and independent third party attestations described in this document • Publishing information about the AWS security and control practices in whitepapers and web site content • Providing certificates reports and other documentation directly to AWS customers under NDA (as required) For a more detailed description of AWS security please see AWS Security Center For a more detailed description of AWS Compliance please see AWS Compliance page Additionally the AWS Overview of Security Processes whitepaper covers AWS’ general security controls and service specific security Shared Responsibility Environment Moving IT infrastructure to AWS services creates a model of shared responsibility between the customer and AWS This shared model can help relieve customer’s operational burden as AWS opera tes manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates The customer assumes responsibility and management of the guest operating system (including updates and security patches) other associated application software as well as the configuration of the AWS provided security group firewall Customers should carefully consider the services they choose as their responsibilities vary depending on the 
services used the integration of those ArchivedAmazon Web Services – Risk and Compliance Overview Page 2 services into their IT environment and applicable laws and regulations It is possible for customers to enhance security and/or meet their more stringent compliance requirements by leveraging technology such as host based firewalls host based intrusion detection/prevention encryption and key management The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment of solutions that meet industry spec ific certification requirements This customer/AWS shared responsibility model also extends to IT controls Just as the responsibility to operate the IT environment is shared between AWS and its customers so is the management operation and verification of IT controls shared AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer As every custome r is deployed differently in AWS customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment Customers can then use the AWS control and compliance documentation available to t hem (described in AWS Certifications and Third party Attestations) to perform their control evaluation and verification procedures as required Strong Compliance Governance As always AWS customers are required to continue to maintain adequate governance over the entire IT control environment regardless of how IT is deployed Leading practices include an understanding of required compliance objectives and requirements (from relevant sources) establishment of a control environment that meets those objectives and requirements an understanding of the validation required based on the organization’s risk tolerance and verification of the operating effectiveness of their control environment Deployment in the AWS cloud gives enterprises different options to ap ply various types of controls and various verification methods Strong customer compliance and governance might include the following basic approach: 1 Review information available from AWS together with other information to understand as much of the entire IT environment as possible and then document all compliance requirements ArchivedAmazon Web Services – Risk and Compliance Overview Page 3 2 Design and implement control objectives to meet the enterprise compliance requirements 3 Identify and document controls owned by outside parties 4 Verify that all control objectives are met and all key controls are designed and operating effectively Approaching compliance governance in this manner will help companies gain a better understanding of their control environment and will help clearly delineate the verification activities to be performed Evaluating and Integrating AWS Controls AWS provides a wide range of information regarding its IT control environment to customers through white papers reports certifications and other third party attestations This documentation assists customers in understanding the controls in place relevant to the AWS services they use and how those controls have been validated This information also assists customers in their efforts to account for and to validate that controls in their extended IT environment are operating effectively Traditionally the design and operating effectiveness of control objectives and controls 
are validated by internal and/or external auditors via process walkthroughs and evidence evaluation Direct observation/verification by the customer or customer’s external auditor is generally performed to validate controls In the case where service providers such as AWS are used companies request and evaluate third party attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objective and controls As a result although customer’s key controls may be managed by AWS the control environment can still be a unified framework where all controls are accounte d for and are verified as operating effectively Third party attestations and certifications of AWS can not only provide a higher level of validation of the control environment but may relieve customers of the requirement to perform certain validation work themselves for their IT environment in the AWS cloud AWS IT Control Information AWS provides IT control information to customers in the following ways: ArchivedAmazon Web Services – Risk and Compliance Overview Page 4 Specific control definition AWS customers are able to identify key controls managed by AWS Key controls are critical to the customer’s control environment and require an external attestation of the operating effectiveness of these key controls in order to comply with compliance requirements —such as the annual financial audit For this purpose AWS publishes a wide range of specific IT controls in its Service Organization Controls 1 (SOC 1) Type II report The SOC 1 report formerly the Statement on Auditing Standards (SAS) No 70 Service Organizations report is a widely recognized auditing standard developed by the American Institute of Certified Public Accountants (AICPA) The SOC 1 audit is an in depth audit of both the design and operating effectiveness of AWS’ defined control o bjectives and control activities (which include control objectives and control activities over the part of the infrastructure AWS manages) “Type II” refers to the fact that each of the controls described in the report are not only evaluated for adequacy o f design but are also tested for operating effectiveness by the external auditor Because of the independence and competence of AWS’ external auditor controls identified in the report should provide customers with a high level of confidence in AWS’ contr ol environment AWS’ controls can be considered designed and operating effectively for many compliance purposes including Sarbanes Oxley (SOX) Section 404 financial statement audits Leveraging SOC 1 Type II reports is also generally permitted by other ex ternal certifying bodies (eg ISO 27001 auditors may request a SOC 1 Type II report in order to complete their evaluations for customers) Other specific control activities relate to AWS’ Payment Card Industry (PCI) and Federal Information Security Mana gement Act (FISMA) compliance AWS is compliant with FISMA Moderate standards and with the PCI Data Security Standard These PCI and FISMA standards are very prescriptive and require independent validation that AWS adheres to the published standard Genera l control standard compliance If an AWS customer requires a broad set of control objectives to be met evaluation of AWS’ industry certifications may be performed With the AWS ISO 27001 certification AWS complies with a broad comprehensive security sta ndard and follows best practices in maintaining a secure environment With the PCI Data Security Standard (PCI DSS) AWS complies with a set of 
controls important to companies that handle credit card information With AWS’ ArchivedAmazon Web Services – Risk and Compliance Overview Page 5 compliance with the FISMA standar ds AWS complies with a wide range of specific controls required by US government agencies Compliance with these general standards provides customers with in depth information on the comprehensive nature of the controls and security processes in place and can be considered when managing compliance AWS Global Regions Data centers are built in clusters in various global regions including: US East (Northern Virginia) US West (Oregon) US West (Northern California) AWS GovCloud (US) (Oregon) EU (Frankfurt) EU (Ireland) Asia Pacific (Seoul) Asia Pacific (Singapore) Asia Pacific (Tokyo) Asia Pacific (Sydney) China (Beijing) and South America (Sao Paulo) For a complete list of regions see the AWS Global Infrastructure page AWS Risk and Compliance Program AWS provides information about its risk and compliance program to enable customers to incorporate AWS controls into their governance framework This information can assist customers in documenting a complete control and governance framework with AWS included as an important part of that framework Risk Management AWS management has developed a strategic business plan which includes risk identification and the implementation of cont rols to mitigate or manage risks AWS management re evaluates the strategic business plan at least biannually This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks In addition the AWS control environment is subject to various internal and external risk assessments AWS’ Compliance and Security teams have established an information security framework and policies based on the Control Objectives for Inform ation and related Technology (COBIT) framework and have effectively integrated the ISO 27001 certifiable framework based on ISO 27002 controls American Institute of Certified Public Accountants (AICPA) Trust Services Principles the PCI DSS v31 and the National Institute of ArchivedAmazon Web Services – Risk and Compliance Overview Page 6 Standards and Technology (NIST) Publication 800 53 Rev 3 (Recommended Security Controls for Federal Information Systems) AWS maintains the security policy provides security training to employees and performs application security reviews These reviews assess the confidentiality integrity and availability of data as well as conformance to the information security policy AWS Security regularly scans all Internet facing service endpoint IP addresses for vulnerabilities (these scan s do not include customer instances) AWS Security notifies the appropriate parties to remediate any identified vulnerabilities In addition external vulnerability threat assessments are performed regularly by independent security firms Findings and reco mmendations resulting from these assessments are categorized and delivered to AWS leadership These scans are done in a manner for the health and viability of the underlying AWS infrastructure and are not meant to replace the customer’s own vulnerability scans required to meet their specific compliance requirements Customers can request permission to conduct scans of their cloud infrastructure as long as they are limited to the customer’s instances and do not violate the AWS Acceptable Use Policy Advance approval for these types of scans can be initiated by submitting a request via the AWS 
Vulnerability / Penetration Testing Request Form.

Control Environment

AWS manages a comprehensive control environment that includes policies, processes, and control activities that leverage various aspects of Amazon's overall control environment. This control environment is in place for the secure delivery of AWS' service offerings. The collective control environment encompasses the people, processes, and technology necessary to establish and maintain an environment that supports the operating effectiveness of AWS' control framework. AWS has integrated applicable cloud-specific controls identified by leading cloud computing industry bodies into the AWS control framework. AWS continues to monitor these industry groups for ideas on which leading practices can be implemented to better assist customers with managing their control environment.

The control environment at Amazon begins at the highest level of the Company. Executive and senior leadership play important roles in establishing the Company's tone and core values. Every employee is provided with the Company's Code of Business Conduct and Ethics and completes periodic training. Compliance audits are performed so that employees understand and follow the established policies. The AWS organizational structure provides a framework for planning, executing, and controlling business operations. The organizational structure assigns roles and responsibilities to provide for adequate staffing, efficiency of operations, and the segregation of duties. Management has also established authority and appropriate lines of reporting for key personnel. Included as part of the Company's hiring verification processes are education, previous employment, and, in some cases, background checks as permitted by law and regulation, commensurate with the employee's position and level of access to AWS facilities. The Company follows a structured onboarding process to familiarize new employees with Amazon tools, processes, systems, policies, and procedures.

Information Security

AWS has implemented a formal information security program designed to protect the confidentiality, integrity, and availability of customers' systems and data. AWS publishes a security whitepaper, available on the public website, that addresses how AWS can help customers secure their data.

AWS Contact

Customers can request the reports and certifications produced by our third-party auditors, or request more information about AWS Compliance, by contacting AWS Sales and Business Development. The representative will route customers to the proper team depending on the nature of the inquiry. For additional information on AWS Compliance, see the AWS Compliance site or send questions directly to awscompliance@amazon.com.

Further Reading

For additional information, see the following sources:
• CSA Consensus Assessments Initiative Questionnaire
• AWS Certifications, Programs, Reports, and Third-Party Attestations
• AWS Answers to Key Compliance Questions

Document Revisions
• January 2017: Migrated to new template.
• January 2016: First publication.
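The overview above notes that customers can enhance security or meet more stringent compliance requirements with technologies such as encryption and key management, and that customers remain responsible for verifying that their own controls operate effectively. As one hedged illustration, not something prescribed by the paper, the following sketch uses boto3 to list S3 buckets that do not report a default encryption configuration; the specific error code handled and the choice to treat a missing configuration as a finding are assumptions for the example, and newer buckets may report encryption by default.

```python
# Illustrative sketch only: checking whether S3 buckets report a default
# encryption-at-rest configuration, as one example of verifying a customer-owned
# control. Assumes boto3 is installed and AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError


def buckets_without_default_encryption():
    """Return names of buckets that do not report a default encryption configuration."""
    s3 = boto3.client("s3")
    unencrypted = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                unencrypted.append(name)
            else:
                raise  # surface access or other errors instead of hiding them
    return unencrypted


if __name__ == "__main__":
    for name in buckets_without_default_encryption():
        print(f"{name}: no default encryption configuration found")
```

A check like this covers only one narrow control; the paper's broader point is that the customer decides which controls matter for their compliance objectives and how to verify them.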
|
General
|
consultant
|
Best Practices
|
AWS_Security_Best_Practices
|
ArchivedAWS Security Best Practices August 2016 This paper has been archived For the latest technical content on Security and Compliance see https://awsamazoncom/architecture/ securityidentitycompliance/ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Know the AWS Shared Responsibility Model 2 Understanding the AWS Secure Global Infrastructure 3 Sharing Security Responsibility for AWS Services 4 Using the Trusted Advisor Tool 10 Define and Categorize Assets on AWS 10 Design Your ISMS to Protect Your Assets on AWS 11 Manage AWS Accounts IAM Users Groups and Roles 13 Strategies for Using Multiple AWS Ac counts 14 Managing IAM Users 15 Managing IAM Groups 15 Managing AWS Credentials 16 Understa nding Delegation Using IAM Roles and Temporary Security Credentials 17 Managing OS level Access to Amazon EC2 Instances 20 Secure Your Data 22 Resource Access Authorization 22 Storing and Managing Encryption Keys in the Cloud 23 Protecting Data at Rest 24 Decommission Data and Media Securely 31 Protect Data in Transit 32 Secure Your Operating Systems and Applications 38 Creating Custom AMIs 39 Bootstrapping 41 Managing Patches 42 Controlling Security for Public AMIs 42 Protecting Your System from Malware 42 ArchivedMitigating Compromise and Abuse 45 Using Additional Application Security Practices 48 Secure Your Infrastructure 49 Using Amazon Virtual Private Cloud (VPC) 49 Using Security Zoning and Network Segmentation 51 Strengthening Network Security 54 Securing Periphery Systems: User Repositories DNS NTP 55 Building Threat Protection Layers 57 Test Security 60 Managing Metrics and Improvement 61 Mitigating and Protecting Against DoS & DDoS Attacks 62 Manage Security Monitoring Alerting Audit Trail and Incident Response 65 Using Change Management Logs 68 Managing Logs for Critical Transactions 68 Protecting Log Information 69 Logging Faults 70 Conclusion 70 Contributors 70 Further Reading 70 Document Revisions 71 ArchivedAbstract This whitepaper is intended f or existing and potential customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS) It provides security best practices that will help you define your Information Security Management Sy stem (ISMS) and build a set of security policies and processes for your organization so you can protect your data and assets in the AWS Cloud The whitepaper also provides an overview of different security topics such as identifying categorizing and prote cting your assets on AWS managing access to AWS resources using accounts users and groups and suggesting ways you can secure your data your operating systems and applications and overall infrastructure in the cloud The paper is targeted at IT decision makers and security personnel and assumes that 
you are familiar with basic security concepts in the area of networking operating systems data encryption and operational controls ArchivedAmazon Web Services AWS Security Be st Practices Page 1 Introduction Information security is of paramount importance to Amazon Web Services (AWS) customers Security is a core functional requirement that protects mission critical information from accidental or deliberate theft leakage integrity compromise and deletion Under the AWS shared respon sibility model AWS provides a global secure infrastructure and foundation compute storage networking and database services as well as higher level services AWS provides a range of security services and features that AWS customers can use to secure the ir assets AWS customers are responsible for protecting the confidentiality integrity and availability of their data in the cloud and for meeting specific business requirements for information protection For more information on AWS’s security features please read Overview of Security Processes Whitepaper This whitepaper describes best practices that you can leverage to build and define an Information Security Management System (ISMS) that is a collection of information security policies and processes for your organization’s assets on AWS For more inform ation about ISMSs see ISO 27001 at https://wwwisoorg/standard/54534html Although it is not required to build an ISMS to use AWS we think that the structured approach for managing information sec urity that is built on basic building blocks of a widely adopted global security approach will help you improve your organization’s overall security posture We address the following topics: • How security responsibilities are shared between AWS and you the customer • How to define and categorize your assets • How to manage user access to your data using privileged accounts and groups • Best practices for securing your data operating systems and network • How monitoring and alerting can help you achieve your secur ity objectives This whitepaper discusses security best practices in these areas at a high level (It does not provide “how to” configuration guidance For service specific configuration guidance see the AWS Security Documentation ) ArchivedAmazon Web Services AWS Security Best Practices Page 2 Know the AWS Shared Responsibility Model Amazon Web Services provides a secure global infrastructure and services in the cloud You can build your systems using AWS as the foundation and architect an ISMS that takes advantag e of AWS features To design an ISMS in AWS you must first be familiar with the AWS shared responsibility model which requires AWS and customers to work together towards security objectives AWS provides secure infrastructure and services while you the customer are responsible for secure operating systems platforms and data To ensure a secure global infrastructure AWS configures infrastructure components and provides services and features you can use to enhance security such as the Identity and Ac cess Management (IAM) service which you can use to manage users and user permissions in a subset of AWS services To ensure secure services AWS offers shared responsibility models for each of the different type of service that we offer : • Infrastructure se rvices • Container services • Abstracted services The shared responsibility model for infrastructure services such as Amazon Elastic Compute Cloud (Amazon EC2) for example specifies that AWS manages the security of the following assets: • Facilities • Physical s 
ecurity of hardware • Network infrastructure • Virtualization infrastructure Consider AWS the owner of these assets for the purposes of your ISMS asset definition Leverage these AWS controls and include them in your ISMS In this Amazon EC2 example you as th e customer are responsible for the security of the following assets: • Amazon Machine Images (AMIs) • Operating systems ArchivedAmazon Web Services AWS Security Best Practices Page 3 • Applications • Data in transit • Data at rest • Data stores • Credentials • Policies and configuration Specific services further delineate how responsibilities are shared between you and AWS For more information see https://awsamazoncom/compliance/shared responsibility model/ Underst anding the AWS Secure Global Infrastructure The AWS secure global infrastructure and services are managed by AWS and provide a trustworthy foundation for enterprise systems and individual applications AWS establishes high standards for information securit y within the cloud and has a comprehensive and holistic set of control objectives ranging from physical security through software acquisition and development to employee lifecycle management and security organization The AWS secure global infrastructure and services are subject to regular third party compliance audits See the Amazon Web Services Risk and Compliance whitepaper for more information Using the IAM Service The IAM service is one component of the AWS secure global infrastructure that we discuss in this paper With IAM you can centrally manage users security credentials such as passwords access keys and permissions policies that contr ol which AWS services and resources users can access When you sign up for AWS you create an AWS account for which you have a user name (your email address) and a password The user name and password let you log into the AWS Management Console where you can use a browser based interface to manage AWS resources You can also create access keys (which consist of an access key ID and secret access key) to use when you make programmatic calls to AWS using the command line interface (CLI) the AWS SDKs or A PI calls IAM lets you create individual users within your AWS account and give them each their own user name password and access keys Individual users can then log into the ArchivedAmazon Web Services AWS Security Best Practices Page 4 console using a URL that’s specific to your account You can also create access keys for individual users so that they can make programmatic calls to access AWS resources All charges for activities performed by your IAM users are billed to your AWS account As a best practic e we recommend that you create an IAM user even for yourself and that you do not use your AWS account credentials for everyday access to AWS See Security Best Practices in IAM for more information Regions Availability Zones and Endpoints You should also be familiar with regions Availability Zones and endpoints which are components of the AWS secure global infrastructure Use AWS regions to manage network latency and regulatory compliance When you store data in a specific region it is not replicated outside that region It is your responsibility to replicate data across regions if your business needs require that AWS provides information about the country and wh ere applicable the state where each region resides; you are responsible for selecting the region to store data with your compliance and network latency requirements in mind Regions are designed with availability in mind and consist of 
at least two often more Availability Zones Availability Zones are designed for fault isolation They are connected to multiple Internet Service Providers (ISPs) and different power grids They are interconnected using high speed links so applications can rely on Local Ar ea Network (LAN) connectivity for communication between Availability Zones within the same region You are responsible for carefully selecting the Availability Zones where your systems will reside Systems can span multiple Availability Zones and we recom mend that you design your systems to survive temporary or prolonged failure of an Availability Zone in the case of a disaster AWS provides web access to services through the AWS Management Console availab le at and then through individual consoles for each service AWS provides programmatic access to services through Application Programming Interfaces (APIs) and command line interfaces (CLIs) Service endpoints which are managed by AWS provide management (“backplane”) access Sharing Security Responsibility for AWS Services AWS offers a variety of different infrastructure and platform services For the purpose of understanding security and shared responsibility of these AWS services let’s categorize them in three main categories: infrastructure container and abstracted services Each ArchivedAmazon Web Services AWS Security Best Practices Page 5 category comes with a slightly different security ownership model based on how you interact and access the functionality • Infrastructure Services: This category includes comp ute services such as Amazon EC2 and related services such as Amazon Elastic Block Store (Amazon EBS) Auto Scaling and Amazon Virtual Private Cloud (Amazon VPC) With these services you can architect and build a cloud infrastructure using technologies similar to and largely compatible with on premises solutions You control the operating system and you configure and operate any identity management system that provides access to the user layer of the virtualization stack • Container Services: Services i n this category typically run on separate Amazon EC2 or other infrastructure instances but sometimes you don’t manage the operating system or the platform layer AWS provides a managed service for these application “containers” You are responsible for se tting up and managing network controls such as firewall rules and for managing platform level identity and access management separately from IAM Examples of container services include Amazon Relational Database Services (Amazon RDS) Amazon Elastic Map Reduce (Amazon EMR) and AWS Elastic Beanstalk • Abstracted Services: This category includes high level storage database and messaging services such as Amazon Simple Storage Service (Amazon S3) Amazon Glacier Amazon DynamoDB Amazon Simple Queuing Servic e (Amazon SQS) and Amazon Simple Email Service (Amazon SES) These services abstract the platform or management layer on which you can build and operate cloud applications You access the endpoints of these abstracted services using AWS APIs and AWS mana ges the underlying service components or the operating system on which they reside You share the underlying infrastructure and abstracted services provide a multi tenant platform which isolates your data in a secure fashion and provides for powerful int egration with IAM Let’s dig a little deeper into the shared responsibility model for each service type Shared Responsibility Model for Infrastructure Services Infrastructure services such as Amazon EC2 Amazon EBS and Amazon VPC 
run on top of the AWS global infrastructure They vary in terms of availability and durability objectives but always operate within the specific region where they have been launched You can build systems that meet availability objectives exceeding those of ArchivedAmazon Web Services AWS Security Best Practices Page 6 individual services from AWS by employing resilient components in multiple Availability Zones Figure 1 depicts the building blocks for the shared responsibility model for infrastructure services Figure 1: Shared Responsibility Model for Infrastruc ture Services Building on the AWS secure global infrastructure you install and configure your operating systems and platforms in the AWS cloud just as you would do on premises in your own data centers Then you install your applications on your platform Ultimately your data resides in and is managed by your own applications Unless you have more stringent business or compliance requirements you don’t need to introduce additional layers of protection beyond those provided by the AWS secure glob al infrastructure For certain compliance requirements you might require an additional layer of protection between the services from AWS and your operating systems and platforms where your applications and data reside You can impose additional controls such as protection of data at rest and protection of data in transit or introduce a layer of opacity between services from AWS and your platform The opacity layer can include data encryption data integrity authentication software and data signing s ecure time stamping and more AWS provides technologies you can implement to protect data at rest and in transit See the Managing OS level Access to Amazon EC2 Instances and Secure Your Data sections in this whitepaper for more information Alternatively you might introduce your own data protection tools or leverage AWS partner offerings The previous section describes the ways in which you can manage access to resources that require authentication to AWS services However in order to access the operating ArchivedAmazon Web Services AWS Security Best Practices Page 7 system on your EC2 instances you need a different set of credentials In the shared responsibility model you own the operating system credentials but AWS helps you bootstrap the initial access to the operating system When you launch a new Amazon EC2 instance from a standard AMI you can access that instance using secure remote system access protocols such as Secure Shell (SSH) or Windows Remote Desktop Protocol (R DP) You must successfully authenticate at the operating system level before you can access and configure the Amazon EC2 instance to your requirements After you have authenticated and have remote access into the Amazon EC2 instance you can set up the ope rating system authentication mechanisms you want which might include X509 certificate authentication Microsoft Active Directory or local operating system accounts To enable authentication to the EC2 instance AWS provides asymmetric key pairs known a s Amazon EC2 key pairs These are industry standard RSA key pairs Each user can have multiple Amazon EC2 key pairs and can launch new instances using different key pairs EC2 key pairs are not related to the AWS account or IAM user credentials discussed previously Those credentials control access to other AWS services; EC2 key pairs control access only to your specific instance You can choose to generate your own Amazon EC2 key pairs using industry standard tools like OpenSSL You generate the key 
pair in a secure and trusted environment and only the public key of the key pair is imported in AWS; you store the private key securely We advise using a high quality random number generator if you take this path You can choose to have Amazon EC2 key pairs generated by AWS In this case both the private and public key of the RSA key pair are presented to you when you first create the instance You must download and securely store the private key of the Amazon EC2 key pair AWS does not store the private key ; if it is lost you must generate a new key pair For Amazon EC2 Linux instances using the cloud init service when a new instance from a standard AWS AMI is launched the public key of the Amazon EC2 key pair is appended to the initial operating system us er’s ~/ssh/authorized_keys file That user can then use an SSH client to connect to the Amazon EC2 Linux instance by configuring the client to use the correct Amazon EC2 instance user’s name as its identity (for example ec2 user) and providing the priva te key file for user authentication ArchivedAmazon Web Services AWS Security Best Practices Page 8 For Amazon EC2 Windows instances using the ec2config service when a new instance from a standard AWS AMI is launched the ec2config service sets a new random Administrator password for the instance and encrypts it usin g the corresponding Amazon EC2 key pair’s public key The user can get the Windows instance password by using the AWS Management Console or command line tools and by providing the corresponding Amazon EC2 private key to decrypt the password This password along with the default Administrative account for the Amazon EC2 instance can be used to authenticate to the Windows instance AWS provides a set of flexible and practical tools for managing Amazon EC2 keys and providing industry standard authentication into newly launched Amazon EC2 instances If you have higher security requirements you can implement alternative authentication mechanisms including LDAP or Active Directory authentication and disable Amazon EC2 key pair authentication Shared Responsi bility Model for Container Services The AWS shared responsibility model also applies to container services such as Amazon RDS and Amazon EMR For these services AWS manages the underlying infrastructure and foundation services the operating system and t he application platform For example Amazon RDS for Oracle is a managed database service in which AWS manages all the layers of the container up to and including the Oracle database platform For services such as Amazon RDS the AWS platform provides dat a backup and recovery tools; but it is your responsibility to configure and use tools in relation to your business continuity and disaster recovery (BC/DR) policy For AWS Container services you are responsible for the data and for firewall rules for acce ss to the container service For example Amazon RDS provides RDS security groups and Amazon EMR allows you to manage firewall rules through Amazon EC2 security groups for Amazon EMR instances Figure 2 depicts the shared responsibility model for containe r services ArchivedAmazon Web Services AWS Security Best Practices Page 9 Figure 2: Shared Responsibility Model for Container Services Shared Responsibility Model for Abstracted Services For abstracted services such as Amazon S3 and Amazon DynamoDB AWS operates the infrastructure layer t he operating system and platforms and you access the endpoints to store and retrieve data Amazon S3 and DynamoDB are tightly integrated with IAM You are 
responsible for managing your data (including classifying your assets) and for using IAM tools to a pply ACL type permissions to individual resources at the platform level or permissions based on user identity or user responsibility at the IAM user/group level For some services such as Amazon S3 you can also use platform provided encryption of data a t rest or platform provided HTTPS encapsulation for your payloads for protecting your data in transit to and from the service Figure 3 outlines the shared responsibility model for AWS abstracted services: Figure 3: Shared Respo nsibility Model for Abstracted Services ArchivedAmazon Web Services AWS Security Best Practices Page 10 Using the Trusted Advisor Tool Some AWS Premium Support plans include access to the Trusted Advisor tool which offers a one view snap shot of your service and helps identify common security misconfigurations suggestions for improving system performance and underutilized resources In this whitepaper we cover the security aspects of Trusted Advisor that apply to Amazon EC2 Trusted Advisor checks for compliance with the following security recommendations: • Limited access to common administrative ports to only a small subset of addresses This includes ports 22 (SSH) 23 (Telnet) 3389 (RDP) and 5500 (VNC) • Limited access to co mmon database ports This includes ports 1433 (MSSQL Server) 1434 (MSSQL Monitor) 3306 (MySQL) Oracle (1521) and 5432 (PostgreSQL) • IAM is configured to help ensure secure access control of AWS resources • Multi factor authentication (MFA) token is enabl ed to provide two factor authentication for the root AWS account Define and Categorize Assets on AWS Before you design your ISMS identify all the information assets that you need to protect and then devise a technically and financially viable solution fo r protecting them It can be difficult to quantify every asset in financial terms so you might find that using qualitative metrics (such as negligible/low/medium/high/very high) is a better option Assets fall into two categories: • Essential elements such as business information process and activities • Components that support the essential elements such as hardware software personnel sites and partner organizations Table 1 shows a sample matrix of assets ArchivedAmazon Web Services AWS Security Best Practices Page 11 Table 1: Sample asset matrix Asset Name Asset Owner Asset Category Dependencies Customer facing website applications ECommerce team Essential EC2 Elastic Load Balancing Amazon RDS development Customer credit card data ECommerce team Essential PCI card holder environment encryption AWS PCI service Personnel data COO Essential Amazon RDS encryption provider dev and ops IT third party Data archive COO Essential S3 S3 Glacier dev and ops IT HR management system HR Essential EC2 S3 RDS dev and ops IT third party AWS Direct Connect infrastructure CIO Network Network ops TelCo provider AWS Direct Connect Business intelligence platform BI team Software EMR Redshift DynamoDB S3 dev and op s Business intelligence services COO Essential BI infrastructure BI analysis teams LDAP directory IT Security team Security EC2 IAM custom software dev and ops Windows AMI Server team Software EC2 patch management software dev and ops Customer credentials Compliance team Security Daily updates; archival infrastructure Design Your ISMS to Protect Your Assets on AWS After you have determined assets categories and costs establish a standard for implementing operating monitoring reviewing maintaining and improving 
your information security management system (ISMS) on AWS Security requirements differ in every organization depending on the following factors: ArchivedAmazon Web Services AWS Security Best Practices Page 12 • Business needs and objectives • Processes employed • Size and s tructure of the organization All these factors can change over time so it is a good practice to build a cyclical process for managing all of this information Table 2 suggests a phased approach to designing and building an ISMS in AWS You might also find standard frameworks such as ISO 27001 helpful with ISMS design and implementation Table 2: Phases of building an ISMS Phase Title Description 1 Define scope and boundaries Define which regions Availability Zones instances and AWS resources are “in scope” If you exclude any component (for example AWS manages facilities so you can leave it out of your own management system) state what you have excluded and why explicitly 2 Define an ISMS policy Include the following: • Objectives that set the direction and principles for action regarding information security • Legal contractual and regulatory requirements • Risk management objectives for your organization • How you will measure risk • How management approves the pla n 3 Select a risk assessment methodology Select a risk assessment methodology based on input from groups in your organization about the following factors: • Business needs • Information security requirements • Information technology capabilities and use • Legal requirements • Regulatory responsibilities Because public cloud infrastructure operates differently from legacy environments it is critical to set criteria for accepting risks and identifying the acceptable levels of risk (risk tolerances) We recomme nded starting with a risk assessment and leveraging automation as much as possible AWS risk automation can narrow down the scope of resources required for risk management There are several risk assessment methodologies including OCTAVE (Operationally Cr itical Threat Asset and Vulnerability Evaluation) ISO 31000:2009 Risk Management ENISA (European Network and Information Security Agency IRAM (Information Risk Analysis Methodology) and NIST (National Institute of Standards & Technology) Special Publ ication (SP) 800 30 rev1 Risk Management Guide ArchivedAmazon Web Services AWS Security Best Practices Page 13 Phase Title Description 4 Identify risks We recommend that you create a risk register by mapping all your assets to threats and then based on the vulnerability assessment and impact analysis results creating a new risk matrix for each AWS environment Here’s an example risk register: • Assets • Threats to those assets • Vulnerabilities that could be exploited by those threats • Consequences if those vulnerabilities are exploited 5 Analyze and evaluate risks Analyze and evaluate the risk by calculating business impact likelihood and probability and risk levels 6 Address risks Select options for addressing risks Options include applying security controls accepting risks avoiding risk or transferring risks 7 Choose a security control framework When you choose your security controls use a framework such as ISO 27002 NIST SP 800 53 COBIT (Control Objectives for Information and related Technology) and CSA CCM (Cloud Security Alliance Cloud Control Matrix The se frameworks comprise a set of reusable best practices and will help you to choose relevant controls 8 Get management approval Even after you have implemented all controls there will be residual risk We 
recommend that you get approval from your busine ss management that acknowledges all residual risks and approvals for implementing and operating the ISMS 9 Statement of applicability Create a statement of applicability that includes the following information: • Which controls you chose and why • Which controls are in place • Which controls you plan to put in place • Which controls you excluded and why Manage AWS Accounts IAM Users Groups and Roles Ensuring that users have appropriate levels of permissions to access the resources they need but no more than that is an important part of every ISMS You can use IAM to help perform this function You create IAM users under your AWS account and then assign them permissions directly or assign them to groups to which you assign permissions Here's a little more detail about AWS accounts and IAM users: ArchivedAmazon Web Services AWS Security Best Practices Page 14 • AWS account This is the account that you create when you first sign up for AWS Your AWS account represe nts a business relationship between you and AWS You use your AWS account to manage your AWS resources and services AWS accounts have root permissions to all AWS resources and services so they are very powerful Do not use root account credentials for da ytoday interactions with AWS In some cases your organization might choose to use several AWS accounts one for each major department for example and then create IAM users within each of the AWS accounts for the appropriate people and resources • IAM u sers With IAM you can create multiple users each with individual security credentials all controlled under a single AWS account IAM users can be a person service or application that needs access to your AWS resources through the management console C LI or directly via APIs Best practice is to create individual IAM users for each individual that needs to access services and resources in your AWS account You can create fine grained permissions to resources under your AWS account apply them to group s you create and then assign users to those groups This best practice helps ensure users have least privilege to accomplish tasks Strategies for Using Multiple AWS Accounts Design your AWS account strategy to maximize security and follow your business a nd governance requirements Table 3 discusses possible strategies Table 3: AWS Account strategies Business Requirement Proposed Design Comments Centralized security management Single AWS account Centralize information security management and minimize overhead Separation of production development and testing environments Three AWS accounts Create one AWS account for production services one for development and one for testing Multiple autonomous departments Multiple AWS accounts Create separate AWS accounts for each autonomous part of the organization You can assign permissions and policies under each account ArchivedAmazon Web Services AWS Security Best Practices Page 15 Business Requirement Proposed Design Comments Centralized security management with multiple autonomous independent projects Multiple AWS accounts Create a single AWS account for common project resources (such as DNS services Active Directory CMS etc)Then create separate AWS accounts per project You can assign permissions and policies under each project account and grant ac cess to resources across accounts You can configure a consolidated billing relationship across multiple accounts to ease the complexity of managing a different bill for each account and leverage economies of scale 
When you use billing consolidation, the resources and credentials are not shared between accounts.

Managing IAM Users

IAM users with the appropriate level of permissions can create new IAM users, or manage and delete existing ones. This highly privileged IAM user can create a distinct IAM user for each individual, service, or application within your organization that manages AWS configuration or accesses AWS resources directly. We strongly discourage the use of shared user identities, where multiple entities share the same credentials.

Managing IAM Groups

IAM groups are collections of IAM users in one AWS account. You can create IAM groups on a functional, organizational, or geographic basis, or by project, or on any other basis where IAM users need to access similar AWS resources to do their jobs. You can provide each IAM group with permissions to access AWS resources by assigning one or more IAM policies. All policies assigned to an IAM group are inherited by the IAM users who are members of the group.

For example, let's assume that IAM user John is responsible for backups within an organization and needs to access objects in the Amazon S3 bucket called Archives. You can give John permissions directly so he can access the Archives bucket. But then your organization places Sally and Betty on the same team as John. While you can assign user permissions individually to John, Sally, and Betty to give them access to the Archives bucket, assigning the permissions to a group and placing John, Sally, and Betty in that group will be easier to manage and maintain. If additional users require the same access, you can give it to them by adding them to the group. When a user no longer needs access to a resource, you can remove them from the groups that provide access to that resource.

IAM groups are a powerful tool for managing access to AWS resources. Even if you only have one user who requires access to a specific resource, as a best practice you should identify or create a new IAM group for that access and provision user access via group membership, as well as permissions and policies assigned at the group level.
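To make the group-based approach concrete, the following sketch uses the AWS SDK for Python (boto3) to create a hypothetical Backups group, attach an inline policy that grants read access to the Archives bucket, and add the three users to it. The group, policy, bucket, and user names are illustrative, and the IAM users are assumed to already exist.

import json
import boto3

iam = boto3.client("iam")

# Create a group for the backup team (name is illustrative).
iam.create_group(GroupName="Backups")

# Inline policy granting read-only access to the Archives bucket
# (the bucket ARN is a placeholder).
archives_read_only = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::archives", "arn:aws:s3:::archives/*"],
    }],
}
iam.put_group_policy(
    GroupName="Backups",
    PolicyName="ArchivesReadOnly",
    PolicyDocument=json.dumps(archives_read_only),
)

# Grant access through group membership rather than per-user permissions.
for user_name in ["John", "Sally", "Betty"]:
    iam.add_user_to_group(GroupName="Backups", UserName=user_name)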
Managing AWS Credentials

Each AWS account or IAM user is a unique identity and has unique long-term credentials. There are two primary types of credentials associated with these identities: (1) those used for sign-in to the AWS Management Console and AWS portal pages, and (2) those used for programmatic access to the AWS APIs. Table 4 describes the two types of sign-in credentials.

Table 4: Sign-in credentials
• Username/Password – User names for AWS accounts are always email addresses; IAM user names allow for more flexibility. Your AWS account password can be anything you define. IAM user passwords can be forced to comply with a policy you define (that is, you can require minimum password length or the use of non-alphanumeric characters).
• Multi-factor authentication (MFA) – AWS multi-factor authentication (MFA) provides an extra level of security for sign-in credentials. With MFA enabled, when users sign in to an AWS website they are prompted for their user name and password (the first factor – what they know), as well as for an authentication code from their MFA device (the second factor – what they have). You can also require MFA for users to delete S3 objects. We recommend that you activate MFA for your AWS account and your IAM users to prevent unauthorized access to your AWS environment. Currently, AWS supports Gemalto hardware MFA devices as well as virtual MFA devices in the form of smartphone applications.

Table 5 describes the types of credentials used for programmatic access to APIs.

Table 5: API access credentials
• Access keys – Access keys are used to digitally sign API calls made to AWS services. Each access key credential is comprised of an access key ID and a secret key. The secret key portion must be secured by the AWS account holder or the IAM user to whom it is assigned. Users can have two sets of active access keys at any one time. As a best practice, users should rotate their access keys on a regular basis.
• MFA for API calls – Multi-factor authentication (MFA)-protected API access requires IAM users to enter a valid MFA code before they can use certain functions, which are accessed via APIs. Policies you create in IAM will determine which APIs require MFA. Because the AWS Management Console calls AWS service APIs, you can enforce MFA on APIs whether access is through the console or via APIs.

Understanding Delegation Using IAM Roles and Temporary Security Credentials

There are scenarios in which you want to delegate access to users or services that don't normally have access to your AWS resources. Table 6 below outlines common use cases for delegating such access.

Table 6: Common delegation use cases
• Applications running on Amazon EC2 instances that need to access AWS resources – Applications that run on an Amazon EC2 instance and that need access to AWS resources, such as Amazon S3 buckets or an Amazon DynamoDB table, must have security credentials in order to make programmatic requests to AWS. Developers might distribute their credentials to each instance, and applications can then use those credentials to access resources, but distributing long-term credentials to each instance is challenging to manage and a potential security risk.
• Cross-account access – To manage access to resources, you might have multiple AWS accounts — for example, to isolate a development environment from a production environment. However, users from one account might need to access resources in the other account, such as promoting an update from the development environment to the production environment. Although users who work in both accounts could have a separate identity in each account, managing credentials for multiple accounts makes identity management difficult.
• Identity federation – Users might already have identities outside of AWS, such as in your corporate directory. However, those users might need to work with AWS resources (or work with applications that access those resources). If so, these users also need AWS security credentials in order to make requests to AWS.

IAM roles and temporary security credentials address these use cases. An IAM role lets you define a set of permissions to access the resources that a user or service needs, but the permissions are not attached to a specific IAM user or group. Instead, IAM users, mobile and EC2-based applications, or AWS services (like Amazon EC2) can programmatically assume a role. Assuming the role returns temporary security credentials that the user or application can use to make programmatic requests to AWS. These temporary security credentials have a configurable expiration and are automatically rotated. Using IAM roles and temporary security credentials means you don't always have to manage long-term credentials and IAM users for each entity that requires access to a resource.
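As a sketch of how temporary security credentials are obtained programmatically, the following example assumes a role through AWS STS using the AWS SDK for Python (boto3); the account ID, role name, and session name are placeholders.

import boto3

sts = boto3.client("sts")

# Assume a role; the ARN and session name below are hypothetical.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ReadOnlyAnalyst",
    RoleSessionName="analyst-session",
    DurationSeconds=3600,  # the temporary credentials expire automatically
)
credentials = response["Credentials"]

# Use the temporary credentials instead of long-term access keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])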
IAM Roles for Amazon EC2

IAM Roles for Amazon EC2 is a specific implementation of IAM roles that addresses the first use case in Table 6. In the following figure, a developer is running an application on an Amazon EC2 instance that requires access to the Amazon S3 bucket named photos. An administrator creates the Get-pics role. The role includes policies that grant read permissions for the bucket and that allow the developer to launch the role with an Amazon EC2 instance. When the application runs on the instance, it can access the photos bucket by using the role's temporary credentials. The administrator doesn't have to grant the developer permission to access the photos bucket, and the developer never has to share credentials.

Figure 4: How roles for EC2 work

1. An administrator uses IAM to create the Get-pics role. In the role, the administrator uses a policy that specifies that only Amazon EC2 instances can assume the role, and that specifies only read permissions for the photos bucket.
2. A developer launches an Amazon EC2 instance and associates the Get-pics role with that instance.
3. When the application runs, it retrieves credentials from the instance metadata on the Amazon EC2 instance.
4. Using the role credentials, the application accesses the photos bucket with read-only permissions.
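A minimal sketch of how an administrator could create such a role with the AWS SDK for Python (boto3) is shown below; the Get-pics role, photos bucket, and instance profile names follow the hypothetical example above rather than a real environment.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the Amazon EC2 service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="Get-pics",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions policy: read-only access to the photos bucket (ARN is a placeholder).
read_photos = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::photos", "arn:aws:s3:::photos/*"],
    }],
}
iam.put_role_policy(
    RoleName="Get-pics",
    PolicyName="ReadPhotosBucket",
    PolicyDocument=json.dumps(read_photos),
)

# The instance profile is what actually gets associated with the EC2 instance.
iam.create_instance_profile(InstanceProfileName="Get-pics")
iam.add_role_to_instance_profile(InstanceProfileName="Get-pics", RoleName="Get-pics")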
Cross-Account Access

You can use IAM roles to address the second use case in Table 6 by enabling IAM users from another AWS account to access resources within your AWS account. This process is referred to as cross-account access. Cross-account access lets you share access to your resources with users in other AWS accounts. To establish cross-account access, in the trusting account (Account A) you create an IAM policy that grants the trusted account (Account B) access to specific resources. Account B can then delegate this access to its IAM users. Account B cannot delegate more access to its IAM users than the permissions that it has been granted by Account A.

Identity Federation

You can use IAM roles to address the third use case in Table 6 by creating an identity broker that sits between your corporate users and your AWS resources to manage the authentication and authorization process without needing to re-create all your users as IAM users in AWS.

Figure 5: AWS identity federation with temporary security credentials

1. The enterprise user accesses the identity broker application.
2. The identity broker application authenticates the user against the corporate identity store.
3. The identity broker application has permissions to access the AWS Security Token Service (STS) to request temporary security credentials.
4. Enterprise users can get a temporary URL that gives them access to the AWS APIs or the Management Console.

A sample identity broker application for use with Microsoft Active Directory is provided by AWS.

Managing OS-level Access to Amazon EC2 Instances

The previous section describes the ways in which you can manage access to resources that require authentication to AWS services. However, in order to access the operating system on your EC2 instances, you need a different set of credentials. In the shared responsibility model, you own the operating system credentials, but AWS helps you bootstrap the initial access to the operating system.

When you launch a new Amazon EC2 instance from a standard AMI, you can access that instance using secure remote system access protocols such as Secure Shell (SSH) or Windows Remote Desktop Protocol (RDP). You must successfully authenticate at the operating system level before you can access and configure the Amazon EC2 instance to your requirements. After you have authenticated and have remote access into the Amazon EC2 instance, you can set up the operating system authentication mechanisms you want, which might include X.509 certificate authentication, Microsoft Active Directory, or local operating system accounts.

To enable authentication to the EC2 instance, AWS provides asymmetric key pairs, known as Amazon EC2 key pairs. These are industry-standard RSA key pairs. Each user can have multiple Amazon EC2 key pairs and can launch new instances using different key pairs. EC2 key pairs are not related to the AWS account or IAM user credentials discussed previously. Those credentials control access to other AWS services; EC2 key pairs control access only to your specific instance.

You can choose to generate your own Amazon EC2 key pairs using industry-standard tools like OpenSSL. You generate the key pair in a secure and trusted environment, and only the public key of the key pair is imported in AWS; you store the private key securely. We advise using a high-quality random number generator if you take this path.

You can choose to have Amazon EC2 key pairs generated by AWS. In this case, both the private and public key of the RSA key pair are presented to you when you first create the instance. You must download and securely store the private key of the Amazon EC2 key pair. AWS does not store the private key; if it is lost, you must generate a new key pair.

For Amazon EC2 Linux instances using the cloud-init service, when a new instance from a standard AWS AMI is launched, the public key of the Amazon EC2 key pair is appended to the initial operating system user's ~/.ssh/authorized_keys file. That user can then use an SSH client to connect to the Amazon EC2 Linux instance by configuring the client to use the correct Amazon EC2 instance user's name as its identity (for example, ec2-user) and providing the private key file for user authentication.

For Amazon EC2 Windows instances using the ec2config service, when a new instance from a standard AWS AMI is launched, the ec2config service sets a new random Administrator password for the instance and encrypts it using the corresponding Amazon EC2 key pair's public key. The user can get the Windows instance password by using the AWS Management Console or command-line tools and by providing the corresponding Amazon EC2 private key to decrypt the password. This password, along with the default Administrator account for the Amazon EC2 instance, can be used to authenticate to the Windows instance.

AWS provides a set of flexible and practical tools for managing Amazon EC2 keys and providing industry-standard authentication into newly launched Amazon EC2 instances. If you have higher security requirements, you can implement alternative authentication mechanisms, including LDAP or Active Directory authentication, and disable Amazon EC2 key pair authentication.

Secure Your Data

This section discusses protecting data at rest and in transit on the AWS platform. We assume that you have already identified and classified your assets and established protection objectives for them based on their risk profiles.

Resource Access Authorization

After a user or IAM role has been authenticated, they can access resources to which they are authorized. You provide resource authorization using resource policies or capability policies, depending on whether you want the user to have control over the resources, or whether you want to override individual user control.

• Resource policies are appropriate in cases where the user creates resources and then wants to allow other users to access those resources. In this model, the policy is attached directly to the resource and describes who can do what with the resource. The user is in control of the resource. You can provide an IAM user with explicit access to a resource. The root AWS account always has access to manage resource policies and is the owner of all resources created in that account. Alternatively, you can grant users explicit access to manage permissions on a resource.
• Capability policies (which in the IAM documentation are referred to as "user-based permissions") are often used to enforce company-wide access policies. Capability policies are assigned to an IAM user, either directly or indirectly using an IAM group. They can also be assigned to a role that will be assumed at run time. Capability policies define what capabilities (actions) the user is allowed or denied to perform. They can override resource-based policy permissions by explicitly denying them.
• IAM policies can be used to restrict access to a specific source IP address range, or during specific days and times of the day, as well as based on other conditions (see the sketch after this list).
• Resource policies and capability policies are cumulative in nature: an individual user's effective permissions are the union of a resource's policies and the capability permissions granted directly or through group membership.
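The sketch below shows what such a condition-based capability policy could look like when attached to a group with the AWS SDK for Python (boto3); the CIDR range, bucket name, and group name are placeholders, not recommendations.

import json
import boto3

# Capability (identity-based) policy that only allows S3 actions from a
# corporate address range and denies object deletion without MFA.
restricted_access = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowFromCorporateRangeOnly",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"],
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        },
        {
            "Sid": "DenyDeleteWithoutMFA",
            "Effect": "Deny",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
    ],
}

iam = boto3.client("iam")
iam.put_group_policy(
    GroupName="Operations",  # hypothetical group
    PolicyName="RestrictBySourceIpAndMfa",
    PolicyDocument=json.dumps(restricted_access),
)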
Storing and Managing Encryption Keys in the Cloud

Security measures that rely on encryption require keys. In the cloud, as in an on-premises system, it is essential to keep your keys secure. You can use existing processes to manage encryption keys in the cloud, or you can leverage server-side encryption with AWS key management and storage capabilities.

If you decide to use your own key management processes, you can use different approaches to store and protect key material. We strongly recommend that you store keys in tamper-proof storage, such as Hardware Security Modules (HSMs). Amazon Web Services provides an HSM service in the cloud, known as AWS CloudHSM. Alternatively, you can use HSMs that store keys on premises and access them over secure links, such as IPSec virtual private networks (VPNs) to Amazon VPC or AWS Direct Connect with IPSec.

You can use on-premises HSMs or CloudHSM to support a variety of use cases and applications, such as database encryption, Digital Rights Management (DRM), and Public Key Infrastructure (PKI), including authentication and authorization, document signing, and transaction processing. CloudHSM currently uses Luna SA HSMs from SafeNet. The Luna SA is designed to meet Federal Information Processing Standard (FIPS) 140-2 and Common Criteria EAL4+ standards, and supports a variety of industry-standard cryptographic algorithms.

When you sign up for CloudHSM, you receive dedicated, single-tenant access to CloudHSM appliances. Each appliance appears as a resource in your Amazon VPC. You, not AWS, initialize and manage the cryptographic domain of the CloudHSM. The cryptographic domain is a logical and physical security boundary that restricts access to your keys. Only you can control your keys and the operations performed by the CloudHSM. AWS administrators
manage maintain and monitor the health of the CloudHSM appliance but do not have access to the cryptographic domain After you initialize the cryptographic domain you can configure clients on your EC2 instances that allow applications to use the APIs provided by CloudHSM Your applicat ions can use the standard APIs supported by the CloudHSM such as PKCS#11 MS CAPI and Java JCA/JCE (Java Cryptography Architecture/Java Cryptography Extensions) The CloudHSM client provides the APIs to your applications ArchivedAmazon Web Services AWS Security Best Practices Page 24 and implements each API call by c onnecting to the CloudHSM appliance using a mutually authenticated SSL connection You can implement CloudHSMs in multiple Availability Zones with replication between them to provide for high availability and storage resilience Protecting Data at Rest For regulatory or business requirement reasons you might want to further protect your data at rest stored in Amazon S3 on Amazon EBS Amazon RDS or other services from AWS Table 7 lists concern to consider when you are implementing protection of data at r est on AWS Table 7: Threats to data at rest Concern Recommended Protection Approach Strategies Accidental information disclosure Designate data as confidential and limit the number of users who can access it Use AWS permissions to manage access to resources for services such as Amazon S3 Use encryption to protect confidential data on Amazon EBS or Amazon RDS Permissions File partition volume or application level encryption Data integrity compromise To ensure that data integrity is not compromised through deliberate or accidental modification use resource permissions to limit the scope of users who can modify the data Even with resource permissions accidental deletion by a privileged user is still a t hreat (including a potential attack by a Trojan using the privileged user’s credentials) which illustrates the importance of the principle of least privilege Perform data integrity checks such as Message Authentication Codes (SHA 1/SHA 2) or Hashed Mes sage Authentication Codes (HMACs) digital signatures or authenticated encryption (AES GCM) to detect data integrity compromise If you detect data compromise restore the data from backup or in the case of Amazon S3 from a previous object version Permissions Data integrity checks (MAC/HMAC/Digital Signatures/Authenticated Encryption) Backup Versioning (Amazon S3) ArchivedAmazon Web Services AWS Security Best Practices Page 25 Concern Recommended Protection Approach Strategies Accidental deletion Using the correct permissions and the rule of the least privilege is the best protection against accidental or malicious deletion For services such as Amazon S3 you can use MFA Delete to require multi factor authentication to delete an object limiting access to Amazon S3 objects to privileged users If you detect data compromise restore the data f rom backup or in the case of Amazon S3 from a previous object version Permissions Backup Versioning (Amazon S3) MFA Delete (Amazon S3) System infrastructure hardware or software availability In the case of a system failure or a natural disaster restore your data from backup or from replicas Some services such as Amazon S3 and Amazon DynamoDB provide automatic data replication between multiple Availability Zones within a region Other services require you to configure replication or backups Backup Replication Analyze the threat landscape that applies to you and employ the relevant protection techniques as outlined in Table 1 The 
following sections describe how you can configure different services from AWS to protect data at rest.

Protecting Data at Rest on Amazon S3

Amazon S3 provides a number of security features for protection of data at rest, which you can use or not depending on your threat profile. Table 8 summarizes these features (a brief example applying two of them follows the table):

Table 8: Amazon S3 features for protecting data at rest
• Permissions – Use bucket-level or object-level permissions alongside IAM policies to protect resources from unauthorized access and to prevent information disclosure, data integrity compromise, or deletion.
• Versioning – Amazon S3 supports object versions. Versioning is disabled by default. Enable versioning to store a new version for every modified or deleted object, from which you can restore compromised objects if necessary.
• Replication – Amazon S3 replicates each object across all Availability Zones within the respective region. Replication can provide data and service availability in the case of system failure, but it provides no protection against accidental deletion or data integrity compromise – it replicates changes across all Availability Zones where it stores copies. Amazon S3 offers standard redundancy and reduced redundancy options, which have different durability objectives and price points.
• Backup – Amazon S3 supports data replication and versioning instead of automatic backups. You can, however, use application-level technologies to back up data stored in Amazon S3 to other AWS regions or to on-premises backup systems.
• Encryption (server side) – Amazon S3 supports server-side encryption of user data. Server-side encryption is transparent to the end user. AWS generates a unique encryption key for each object and then encrypts the object using AES-256. The encryption key is then itself encrypted using AES-256 with a master key that is stored in a secure location. The master key is rotated on a regular basis.
• Encryption (client side) – With client-side encryption you create and manage your own encryption keys. Keys you create are not exported to AWS in clear text. Your applications encrypt data before submitting it to Amazon S3 and decrypt it after receiving it from Amazon S3. Data is stored in an encrypted form, with keys and algorithms known only to you. While you can use any encryption algorithm, and either symmetric or asymmetric keys to encrypt the data, the AWS-provided Java SDK offers Amazon S3 client-side encryption features. See Further Reading for more information.
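The following sketch applies two of the features in Table 8 with the AWS SDK for Python (boto3): it enables versioning on a bucket and uploads an object with server-side encryption requested. The bucket and object names are placeholders.

import boto3

s3 = boto3.client("s3")
bucket_name = "example-data-bucket"  # placeholder bucket name

# Enable versioning so that modified or deleted objects can be restored.
s3.put_bucket_versioning(
    Bucket=bucket_name,
    VersioningConfiguration={"Status": "Enabled"},
)

# Store an object with server-side encryption (AES-256) applied by Amazon S3.
s3.put_object(
    Bucket=bucket_name,
    Key="reports/quarterly.csv",     # placeholder object key
    Body=b"confidential,data\n",
    ServerSideEncryption="AES256",
)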
Protecting Data at Rest on Amazon EBS

Amazon EBS is the AWS abstract block storage service. You receive each Amazon EBS volume in raw, unformatted mode, as if it were a new hard disk. You can partition the Amazon EBS volume, create software RAID arrays, format the partitions with any file system you choose, and ultimately protect the data on the Amazon EBS volume. All of these decisions and operations on the Amazon EBS volume are opaque to AWS operations. You can attach Amazon EBS volumes to Amazon EC2 instances. Table 9 summarizes features for protecting Amazon EBS data at rest with the operating system running on an Amazon EC2 instance (a snapshot example follows the table).

Table 9: Amazon EBS features for protecting data at rest
• Replication – Each Amazon EBS volume is stored as a file, and AWS creates two copies of the EBS volume for redundancy. Both copies reside in the same Availability Zone, however, so while Amazon EBS replication can survive hardware failure, it is not suitable as an availability tool for prolonged outages or disaster recovery purposes. We recommend that you replicate data at the application level and/or create backups.
• Backup – Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time. If the volume is corrupt (for example, due to system failure), or data from it is deleted, you can restore the volume from snapshots. Amazon EBS snapshots are AWS objects to which IAM users, groups, and roles can be assigned permissions, so that only authorized users can access Amazon EBS backups.
• Encryption: Microsoft Windows EFS – If you are running Microsoft Windows Server on AWS and you require an additional level of data confidentiality, you can implement Encrypting File System (EFS) to further protect sensitive data stored on system or data partitions. EFS is an extension to the NTFS file system that provides for transparent file and folder encryption and integrates with Windows and Active Directory key management facilities and PKI. You can manage your own keys on EFS.
• Encryption: Microsoft Windows BitLocker – BitLocker is a volume (or partition, in the case of a single drive) encryption solution included in Windows Server 2008 and later operating systems. BitLocker uses AES 128-bit and 256-bit encryption. By default, BitLocker requires a Trusted Platform Module (TPM) to store keys; this is not supported on Amazon EC2. However, you can protect EBS volumes using BitLocker if you configure it to use a password.
• Encryption: Linux dm-crypt – On Linux instances running kernel versions 2.6 and later, you can use dm-crypt to configure transparent data encryption on Amazon EBS volumes and swap space. You can use various ciphers, as well as Linux Unified Key Setup (LUKS), for key management.
• Encryption: TrueCrypt – TrueCrypt is a third-party tool that offers transparent encryption of data at rest on Amazon EBS volumes. TrueCrypt supports both Microsoft Windows and Linux operating systems.
• Encryption and integrity authentication: SafeNet ProtectV – SafeNet ProtectV is a third-party offering that allows for full disk encryption of Amazon EBS volumes and pre-boot authentication of AMIs. SafeNet ProtectV provides data confidentiality and data integrity authentication for data and the underlying operating system.
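As a sketch of the backup feature described in Table 9, the following boto3 call creates a point-in-time snapshot of an EBS volume and waits for it to complete; the volume ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Create a point-in-time snapshot of a volume as a backup (ID is a placeholder).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup of the data volume",
)

# Wait until the snapshot completes before relying on it for recovery.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("Snapshot ready:", snapshot["SnapshotId"])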
Protecting Data at Rest on Amazon RDS

Amazon RDS leverages the same secure infrastructure as Amazon EC2. You can use the Amazon RDS service without additional protection, but if you require encryption or data integrity authentication of data at rest for compliance or other purposes, you can add protection at the application layer, or at the platform layer using SQL cryptographic functions.

You could add protection at the application layer, for example, using a built-in encryption function that encrypts all sensitive database fields using an application key before storing them in the database. The application can manage keys by using symmetric encryption with PKI infrastructure, or other asymmetric key techniques, to provide for a master encryption key.

You could add protection at the platform layer using MySQL cryptographic functions, which can take the form of a statement like the following:

INSERT INTO Customers (CustomerFirstName, CustomerLastName)
VALUES (AES_ENCRYPT('John', @key), AES_ENCRYPT('Smith', @key));

Platform-level encryption keys would be managed at the application level, like application-level encryption keys. Table 10 summarizes Amazon RDS platform-level protection options.

Table 10: Amazon RDS platform-level data protection at rest
• MySQL – MySQL cryptographic functions include encryption, hashing, and compression. For more information, see https://dev.mysql.com/doc/refman/5.5/en/encryption-functions.html
• Oracle – Oracle Transparent Data Encryption is supported on Amazon RDS for Oracle Enterprise Edition under the Bring Your Own License (BYOL) model.
• Microsoft SQL – Microsoft Transact-SQL data protection functions include encryption, signing, and hashing. For more information, see http://msdn.microsoft.com/en-us/library/ms173744

Note that SQL range queries are no longer applicable to the encrypted portion of the data. This query, for example, would not return the expected results for names like "John", "Jonathan", and "Joan" if the contents of the column CustomerFirstName are encrypted at the application or platform layer:

SELECT CustomerFirstName, CustomerLastName
FROM Customers
WHERE CustomerFirstName LIKE 'Jo%';

Direct comparisons, such as the following, would work and return the expected result for all fields where CustomerFirstName matches 'John' exactly:

SELECT CustomerFirstName, CustomerLastName
FROM Customers
WHERE CustomerFirstName = AES_ENCRYPT('John', @key);

Range queries would also work on fields that are not encrypted. For example, a Date field in a table could be left unencrypted so that you could use it in range queries.

One-way functions are a good way to obfuscate personal identifiers, such as social security numbers or equivalent personal IDs, where they are used as unique identifiers. While you can encrypt personal identifiers and decrypt them at the application or platform layer before using them, it is more convenient to use a one-way function, such as keyed HMAC-SHA1, to convert the personal identifier to a fixed-length hash value. The personal identifier is still unique, because collisions in commercial HMACs are extremely rare. The HMAC is not reversible to the original personal identifier, however, so you cannot trace the data back to the original individual unless you know the original personal ID and process it through the same keyed HMAC function.

In all regions, Amazon RDS supports Transparent Data Encryption and Native Network Encryption, both of which are components of the Advanced Security option for Oracle Database 11g Enterprise Edition. Oracle Database 11g Enterprise Edition is available on Amazon RDS for Oracle under the Bring Your Own License (BYOL) model. There is no additional charge to use these features. Oracle Transparent Data Encryption encrypts data before it is written to storage and decrypts data when it is read from storage. With Oracle Transparent Data Encryption, you can encrypt table spaces or specific table columns using industry-standard encryption algorithms such as Advanced Encryption Standard (AES) and Data Encryption Standard (Triple DES).

Protecting Data at Rest on Amazon S3 Glacier

Data at rest stored in Amazon S3 Glacier is automatically server-side encrypted using 256-bit Advanced Encryption Standard (AES-256) with keys maintained by AWS. The encryption key is then itself encrypted using AES-256 with a master key that is stored in a secure location. The master key is rotated on a regular basis. For more information about the default encryption behavior for an Amazon S3 bucket, see Amazon S3 Default Encryption.

Protecting Data at Rest on Amazon DynamoDB

Amazon DynamoDB is
a shared service from AWS You can use DynamoDB without adding protection but you can also implement a data encryption layer over the s tandard DynamoDB service See the previous section for considerations for protecting data at the application layer including impact on range queries DynamoDB supports number string and raw binary data type formats When storing encrypted fields in Dyna moDB it is a best practice to use raw binary fields or Base64 encoded string fields Protecting Data at Rest on Amazon EMR Amazon EMR is a managed service in the cloud AWS provides the AMIs required to run Amazon EMR and you can’t use custom AMIs or you r own EBS volumes By default Amazon EMR instances do not encrypt data at rest Amazon EMR clusters often use either Amazon S3 or DynamoDB as the persistent data store When an Amazon EMR cluster starts it can copy the data required for it to operate fro m the persistent store into HDFS or use data directly from Amazon S3 or DynamoDB To provide for a higher level of data at rest confidentiality or integrity you can employ a number of techniques summarized in Table 11 Table 11: Protecting data at rest in Amazon EMR Requirement Description Amazon S3 server side encryption –no HDFS copy Data is permanently stored on Amazon S3 only and not copied to HDFS at all Hadoop fetches data from Amazon S3 and processes it locally without making persistent local copies See the Protecting Data at Rest on Amazon S3 section for more information on Amazon S3 server side encryption ArchivedAmazon Web Services AWS Security Best Practices Page 31 Requirement Description Amazon S3 client side encryption Data is permanently stored on Am azon S3 only and not copied to HDFS at all Hadoop fetches data from Amazon S3 and processes it locally without making persistent local copies To apply client side decryption you can use a custom Serializer/Deserializer (SerDe) with products such as Hiv e or InputFormat for Java Map Reduce jobs Apply encryption at each individual row or record so that you can split the file See the Protecting Data at Rest on Amazon S3 section for more information on Amazon S3 cli entside encryption Application level encryption –entire file encrypted You can encrypt or protect the integrity of the data (for example by using HMAC SHA1) at the application level while you store data in Amazon S3 or DynamoDB To decrypt the data you would use a custom SerDe with Hive or a script or a bootstrap action to fetch the data from Amazon S3 decrypt it and load it into HDFS befo re processing Because the entire file is encrypted you might need to execute this action on a single node such as the master node You can use tools such as S3Distcp with special codecs Application level encryption –individual fields encrypted/structur e preserved Hadoop can use a standard SerDe such as JSON Data decryption can take place during the Map stage of the Hadoop job and you can use standard input/output redirection via custom decryption tools for streaming jobs Hybrid You might want to employ a combination of Amazon S3 server side encryption and client side encryption as well as application level encryption AWS Partner Network (APN) partners provide specialized solutions for protecting data at rest and in transit on Amazon EMR for more information visit the AWS Security Partner Solutions page Decommission Data and Media Securely You decommission data differently in the cloud than you do in traditional on premises environments When you ask AWS to delete data in the cloud AWS does not decommission the 
underlying physical media; instead the storage blocks are mark ed as unallocated AWS uses secure mechanisms to reassign the blocks elsewhere When you provision block storage the hypervisor or virtual machine manager (VMM) keeps track of which blocks your instance has written to When an instance writes to a block o f storage the previous ArchivedAmazon Web Services AWS Security Best Practices Page 32 block is zeroed out and then overwritten with your block of data If your instance attempts to read from a block previously written to your previously stored data is returned If an instance attempts to read from a block it has no t previously written to the hypervisor zeros out the previous data on disk and returns a zero to the instance When AWS determines that media has reached the end of its useful life or it experiences a hardware fault AWS follows the techniques detailed i n Department of Defense (DoD) 522022 M (“National Industrial Security Program Operating Manual”) or NIST SP 800 88 (“Guidelines for Media Sanitization”) to destroy data as part of the decommissioning process For more information about deletion of data in the cloud see the AWS Overview of Security Processes whitepaper When you have regulatory or business reasons to require further controls for securely decommissioning data you can implement data encryption at rest using customer managed keys which are not stored in the cloud Then in addition to following the previous process you would delete the key used to protect the decommissioned data making it irrecoverable Protec t Data in Transit Cloud applications often communicate over public links such as the Internet so it is important to protect data in transit when you run applications in the cloud This involves protecting network traffic between clients and servers and network traffic between servers Table 12 lists common concerns to communication over public links such as the Internet Table 12: Threats to data in transit Concern Comments Recommended Protection Accidental information disclosure Access to your confidential data should be limited When data is traversing the public network it should be protected from disclosure through encryption Encrypt data in transit using IPSec ESP and/or SSL/TLS ArchivedAmazon Web Services AWS Security Best Practices Page 33 Concern Comments Recommended Protection Data integrity compromise Whether or not data is confidential you want to know that data integrity is not compromised through deliberate or accidental modification Authenticate data integrity using IPSec ESP/AH and/or SSL/TLS Peer identity compromise/ identity spoofing/ man inthe middle Encryption and data integrity authentication are important for protecting the communications channel It is equally important to authenticate the identity of the remote end of the connection An encrypted channel is worthless if the remote end happens to be an attacker or an imposter relaying the connection to the intended recipient Use IPSec with IKE with pre shared keys or X509 certificates to authenticate the remote end Alternatively use SSL/TLS with server certificate authentication based on the server common name (CN) or Alternative Name (AN/SAN) Services from AWS provide support for both IPSec and SSL/TLS for protection of data in transit IPSec is a protocol that extends the IP protocol stack often in n etwork infrastructure and allows applications on upper layers to communicate securely without modification SSL/TLS on the other hand operates at the session layer and while there are third party 
SSL/TLS wrappers it often requires support at the appli cation layer as well The following sections provide details on protecting data in transit Managing Application and Administrative Access to AWS Public Cloud Services When accessing applications running in the AWS public cloud your connections traverse t he Internet In most cases your security policies consider the Internet an insecure communications medium and require application data protection in transit Table 13 outlines approaches for protecting data in transit when accessing public cloud services ArchivedAmazon Web Services AWS Security Best Practic es Page 34 Table 13: Protecting application data in transit when accessing public cloud Protocol/Scenari o Description Recommended Protection Approach HTTP/HTTPS traffic (web applications) By default HTTP traffic is unprotected SSL/TLS protection for HTTP traffic also known as HTTPS is industry standard and widely supported by web servers and browsers HTTP traffic can include not just client access to web pages but also web services (REST based access) as well Use HTTPS (HTTP over SSL/TLS) with server certificate authentication HTTPS offload (web applications) While using HTTPS is often recommended especially for sensitive data SSL/TLS processing requires additional CPU and memory resources from both the web server and the client This can put a considerable load on web servers handling thousands of SSL/TLS sessions There is less impact on the client where only a limited number of SSL/TLS connections are terminated Offload HTTPS processing on Elastic Load Balancing to minimize impact on web servers while still protecting data in transit Further protect the backend connection to instances using an application protocol such as HTTP over SSL Remote Desktop Protocol (RDP) traffic Users who access Windows Terminal Services in the public cloud usually use the Microsoft Remote Desktop Protocol (RDP) By default RDP connections establish an underlying SSL/TLS connection For optimal protection the Windows server being accessed should be issued a trusted X50 9 certificate to protect from identity spoofing or man inthemiddle attacks By default Windows RDP servers use selfsigned certificates which are not trusted and should be avoided ArchivedAmazon Web Services AWS Security Best Practices Page 35 Protocol/Scenari o Description Recommended Protection Approach Secure Shell (SSH) traffic SSH is the preferred approach for establi shing administrative connections to Linux servers SSH is a protocol that like SSL provides a secure communications channel between the client and the server In addition SSH also supports tunneling which you should use for running applications such as XWindows on top of SSH and protecting the application session in transit Use SSH version 2 using non privileged user accounts Database server traffic If clients or servers need to access databases in the cloud they might need to traverse the Internet as well Most modern databases support SSL/TLS wrappers for native database protocols For database servers running on Amazon EC2 we recommend this approach to protecting data in transit Amazon RDS provides support for SSL/TLS in some cases See the Protecting Data in Transit to Amazon RDS section for more details Protecting Data in Transit when Managing AWS Services You can manage your services from AWS such as Amazon EC2 and Amazon S3 using the AWS Man agement Console or AWS APIs Examples of service management traffic include launching a new Amazon EC2 instance saving an object to an 
Amazon S3 bucket or amending a security group on Amazon VPC The AWS Management Console uses SSL/TLS between the client browser and console service endpoints to protect AWS service management traffic Traffic is encrypted data integrity is authenticated and the client browser authenticates the identity of the console service endpoint by using an X509 certificate After an SSL/TLS session is established between the client browser and the console service endpoint all subsequent HTTP traffic is protected within the SSL/TLS session You can alternatively use AWS APIs to manage services from AWS either directly from applicat ions or third party tools or via SDKs or via AWS command line tools AWS ArchivedAmazon Web Services AWS Security Best Practices Page 36 APIs are web services (REST) over HTTPS SSL/TLS sessions are established between the client and the specific AWS service endpoint depending on the APIs used and all subsequent tr affic including the REST envelope and user payload is protected within the SSL/TLS session Protecting Data in Transit to Amazon S3 Like AWS service management traffic Amazon S3 is accessed over HTTPS This includes all Amazon S3 service management requ ests as well as user payload such as the contents of objects being stored/retrieved from Amazon S3 and associated metadata When the AWS service console is used to manage Amazon S3 an SSL/TLS secure connection is established between the client browser a nd the service console endpoint All subsequent traffic is protected within this connection When Amazon S3 APIs are used directly or indirectly an SSL/TLS connection is established between the client and the Amazon S3 endpoint and then all subsequent HTTP and user payload traffic is encapsulated within the protected session Protecting Data in Transit to Amazon RDS If you’re connecting to Amazon RDS from Amazon EC2 instances in the same region you can rely on the security of the AWS networ k but if you’re connecting from the Internet you might want to use SSL/TLS for additional protection SSL/TLS provides peer authentication via server X509 certificates data integrity authentication and data encryption for the client server connection SSL/TLS is currently supported for connections to Amazon RDS MySQL and Microsoft SQL instances For both products Amazon Web Services provides a single self signed certificate associated with the MySQL or Microsoft SQL listener You can download the selfsigned certificate and designate it as trusted This provides for peer identity authentication and prevents man inthemiddle or identity spoofing attacks on the server side SSL/TLS provides for native encryption and data integrity authentication of the communications channel between the client and the server Because the same self signed certificate is used on all Amazon RDS MySQL instances on AWS and another single self signed certificate is used across all Amazon RDS Microsoft SQL instances on AWS peer identity authentication does not provide for individual instance authentication If you require individual server authentication via SSL/TLS you might need to leverage Amazon EC2 and self managed relational database services ArchivedAmazon Web Services AWS Security Best Practices Page 37 Amazon RDS for Oracle Na tive Network Encryption encrypts the data as it moves into and out of the database With Oracle Native Network Encryption you can encrypt network traffic travelling over Oracle Net Services using industry standard encryption algorithms such as AES and Tri ple DES Protecting Data in Transit to 
Amazon DynamoDB If you're connecting to DynamoDB from other services from AWS in the same region you can rely on the security of the AWS network but if you're connecting to DynamoDB across the Internet you should u se HTTP over SSL/TLS (HTTPS) to connect to DynamoDB service endpoints Avoid using HTTP for access to DynamoDB and for all connections across the Internet Protecting Data in Transit to Amazon EMR Amazon EMR includes a number of application communication paths each of which requires separate protection mechanisms for data in transit Table 14 outlines the communications paths and the protection approach we recommend Table 14: Protecting data in transit on Amazon EMR Type of Amazon EMR Traffic Description Recommended Protection Approach Between Hadoop nodes Hadoop Master Worker and Core nodes all communicate with one another using proprietary plain TCP connections However all Hadoop nodes on Amazon EMR reside in the same Availability Zone and are protected by security standards at the physical and infrastructure layer No additional protection typically required – all nodes reside in the same facility Between Hadoop Cluster and Amazon S3 Amazon EMR uses HTTPS to send data between DynamoDB and Amazon EC2 For more information see the Protecting Data in Transit to Amazon S3 section HTTPS used by default ArchivedAmazon Web Services AWS Security Best Practices Page 38 Type of Amazon EMR Traffic Description Recommended Protection Approach Between Hadoop Cluster and Amazon DynamoDB Amazon EMR uses HTTPS to send data between Amazon S3 and Amazon EC2 For more information see the Protecting Data in Transit to Amazon DynamoDB section HTTPS used by default Use SSL/TLS if Thrift REST or Avro are used User or application access to Hadoop cluster Clients or applications on premises can access Amazon EMR clusters across the Internet using scripts (SSH based access) REST or protocols such as Thrift or Avro Use SSH for interactive access to applications or for tunneling other protocols within SSH Administrative access to Hadoop cluster Amazon EMR cluster administrators typically use SSH to manage the cluster Use SSH to the Amazon EMR master node Secure Your Operating Systems and Applications With the AWS shared responsibility model you manage your operating systems and applications security Amazon EC2 presents a true virtual computing environment in which you can use web service interfaces to launch instances with a variety of operating systems with custom preloaded applications You can standardize operating system and application builds and centrally manage the security of your operating systems and applications in a single secure build repository You can build and test a pre configured AMI to meet your security requirements Recommendations include: • Disable root API access keys and secret key • Restrict access to instances from limited IP ranges using Security Groups • Password protect the pem file on user machines ArchivedAmazon Web Services AWS Securit y Best Practices Page 39 • Delete keys f rom the authorizedkeys file on your instances when someone leaves your organization or no longer requires access • Rotate credentials (DB Access Keys) • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys • Use bastion hosts to enforce control and visibility This section is not intended to provide a comprehensive list of hardening standards for AMIs Sources of industry accepted system hardening standards include but are not limited to: • Center for Internet 
This section is not intended to provide a comprehensive list of hardening standards for AMIs. Sources of industry-accepted system hardening standards include, but are not limited to:
• Center for Internet Security (CIS)
• International Organization for Standardization (ISO)
• SysAdmin Audit Network Security (SANS) Institute
• National Institute of Standards and Technology (NIST)

We recommend that you develop configuration standards for all system components. Ensure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.

If a published AMI is found to be in violation of best practices, or poses a significant risk to customers running the AMI, AWS reserves the right to take measures to remove the AMI from the public catalog and notify the publisher and those running the AMI of the findings.

Creating Custom AMIs

You can create your own AMIs that meet the specific requirements of your organization and publish them for internal (private) or external (public) use. As a publisher of an AMI, you are responsible for the initial security posture of the machine images that you use in production. The security controls you apply to the AMI are effective at a specific point in time; they are not dynamic. You can configure private AMIs in any way that meets your business needs and does not violate the AWS Acceptable Use Policy. For more information, see the Amazon Web Services Acceptable Use Policy. Users who launch from AMIs, however, might not be security experts, so we recommend that you meet certain minimum security standards.

Before you publish an AMI, make sure that the published software is up to date with relevant security patches, and perform the clean-up and hardening tasks listed in Table 15.

Table 15: Clean-up tasks before publishing an AMI
• Disable insecure applications: Disable services and protocols that authenticate users in clear text over the network or otherwise insecurely.
• Minimize exposure: Disable non-essential network services on startup. Only administrative services (SSH/RDP) and the services required for essential applications should be started.
• Protect credentials: Securely delete all AWS credentials from disk and configuration files.
• Protect credentials: Securely delete any third-party credentials from disk and configuration files.
• Protect credentials: Securely delete all additional certificates or key material from the system.
• Protect credentials: Ensure that installed software does not use default internal accounts and passwords.
• Use good governance: Ensure that the system does not violate the Amazon Web Services Acceptable Use Policy. Examples of violations include open SMTP relays or proxy servers. For more information, see the Amazon Web Services Acceptable Use Policy.

Tables 16 and 17 list additional operating system-specific clean-up tasks. Table 16 lists the steps for securing Linux AMIs.

Table 16: Securing Linux/UNIX AMIs
• Secure services: Configure sshd to allow only public key authentication. Set PubkeyAuthentication to Yes and PasswordAuthentication to No in sshd_config.
• Secure services: Generate a unique SSH host key on instance creation. If the AMI uses cloud-init, it will handle this automatically.
• Protect credentials: Remove and disable passwords for all user accounts so that they cannot be used to log in and do not have a default password. Run passwd -l <USERNAME> for each account.
• Protect credentials: Securely delete all user SSH public and private key pairs.
• Protect data: Securely delete all shell history and system log files containing sensitive data.
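After completing the clean-up and hardening tasks that apply to the operating system in question, you can capture the hardened instance as a private AMI. A minimal boto3 sketch (the instance ID and image name are placeholders) might look like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture the hardened instance as a private AMI.
# A reboot (the default) helps ensure file system consistency in the image.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical hardened instance
    Name="hardened-base-ami-2017-03",   # hypothetical image name
    Description="Hardened base image built from the clean-up tasks in Table 16",
)
print("Created AMI:", response["ImageId"])
```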
Table 17: Securing Windows AMIs
• Protect credentials: Ensure that all enabled user accounts have new, randomly generated passwords upon instance creation. You can configure the EC2Config service to do this for the Administrator account upon boot, but you must explicitly do so before bundling the image.
• Protect credentials: Ensure that the Guest account is disabled.
• Protect data: Clear the Windows event logs.
• Protect credentials: Make sure the AMI is not part of a Windows domain.
• Minimize exposure: Do not enable file sharing, print spooler, RPC, or other Windows services that are not essential but are enabled by default.

Bootstrapping

After the hardened AMI is instantiated, you can still amend and update security controls by using bootstrapping applications. Common bootstrapping applications include Puppet, Chef, Capistrano, Cloud-Init, and Cfn-Init. You can also run custom bootstrapping Bash or Microsoft Windows PowerShell scripts without using third-party tools. Here are a few bootstrap actions to consider:
• Security software updates: install the latest patches, service packs, and critical updates beyond the patch level of the AMI.
• Initial application patches: install application-level updates beyond the current application-level build as captured in the AMI.
• Contextual data and configuration: enables instances to apply configurations specific to the environment in which they are being launched, for example production, test, or DMZ/internal.
• Register instances with remote security monitoring and management systems.

Managing Patches

You are responsible for patch management for your AMIs and live instances. We recommend that you institutionalize patch management and maintain a written procedure. While you can use third-party patch management systems for operating systems and major applications, it is a good practice to keep an inventory of all software and system components and to compare the list of security patches installed on each system to the most recent vendor security patch list, to verify that current vendor patches are installed. Implement processes to identify new security vulnerabilities and assign risk rankings to such vulnerabilities. At a minimum, rank the most critical, highest-risk vulnerabilities as "High".

Controlling Security for Public AMIs

Take care that you don't leave important credentials on AMIs when you share them publicly. For more information, see How To Share and Use Public AMIs in A Secure Manner.
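As a rough illustration of checking for leftover credentials before sharing an image publicly, the following stdlib-only Python sketch scans a few typical locations for files that commonly hold secrets. The paths are assumptions and would need to be adapted to your own build:

```python
import os

# Typical locations that often contain credentials or key material.
# These paths are examples only; extend the list for your own build.
SUSPECT_PATHS = [
    "/root/.aws/credentials",
    "/home/ec2-user/.aws/credentials",
    "/root/.ssh",
    "/home/ec2-user/.ssh",
    "/root/.bash_history",
    "/home/ec2-user/.bash_history",
]

def find_leftovers(paths):
    """Return files that still exist and should be reviewed or securely deleted."""
    leftovers = []
    for path in paths:
        if os.path.isdir(path):
            for name in os.listdir(path):
                leftovers.append(os.path.join(path, name))
        elif os.path.isfile(path):
            leftovers.append(path)
    return leftovers

if __name__ == "__main__":
    for item in find_leftovers(SUSPECT_PATHS):
        print("Review before publishing:", item)
```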
Protecting Your System from Malware

Protect your systems in the cloud as you would protect a conventional infrastructure from threats such as viruses, worms, Trojans, rootkits, botnets, and spam. It's important to understand the implications of a malware infection for an individual instance as well as for the entire cloud system.

When a user, wittingly or unwittingly, executes a program on a Linux or Windows system, the executable assumes the privileges of that user (or, in some cases, impersonates another user). The code can carry out any action that the user who launched it has permissions for. Users must ensure that they only execute trusted code.

If you execute a piece of untrusted code on your system, it's no longer your system; it belongs to someone else. If a superuser or a user with administrative privileges executes an untrusted program, the system on which the program was executed can no longer be trusted: malicious code might change parts of the operating system, install a rootkit, or establish back doors for accessing the system. It might delete data or compromise data integrity, compromise the availability of services, or disclose information in a covert or overt fashion to third parties.

Consider the instance on which the code was executed to be infected. If the infected instance is part of a single sign-on environment, or if there is an implicit trust model for access between instances, the infection can quickly spread beyond the individual instance into the entire system and beyond. An infection of this scale can quickly lead to data leakage and data and service compromise, and it can erode the company's reputation. It might also have direct financial consequences if, for example, it compromises services to third parties or overconsumes cloud resources. You must manage the threat of malware.

Table 18 outlines some common approaches to malware protection.

Table 18: Approaches for protection from malware
• Untrusted AMIs: Launch instances from trusted AMIs only. Trusted AMIs include the standard Windows and Linux AMIs provided by AWS and AMIs from trusted third parties. If you derive your own custom AMIs from the standard and trusted AMIs, all the additional software and settings you apply to them must be trusted as well. Launching an untrusted third-party AMI can compromise and infect your entire cloud environment.
• Untrusted software: Only install and run trusted software from a trusted software provider. A trusted software provider is one who is well regarded in the industry and develops software in a secure and responsible fashion, not allowing malicious code into its software packages. Open source software can also be trusted software, and you should be able to compile your own executables. We strongly recommend that you perform careful code reviews to ensure that source code is non-malicious. Trusted software providers often sign their software using code-signing certificates, or provide MD5 or SHA-1 signatures of their products, so that you can verify the integrity of the software you download.
• Untrusted software depots: Download trusted software from trusted sources. Random sources of software on the Internet or elsewhere on the network might actually be distributing malware inside an otherwise legitimate and reputable software package. Such untrusted parties might provide MD5 or SHA-1 signatures of the derivative package with malware in it, so such signatures should not be trusted. We advise that you set up your own internal software depots of trusted software for your users to install and use. Strongly discourage users from the dangerous practice of downloading and installing software from random sources on the Internet.
• Principle of least privilege: Give users the minimum privileges they need to carry out their tasks. That way, even if a user accidentally launches an infected executable, the impact on the instance and the wider cloud system is minimized.
• Patching: Patch external-facing and internal systems to the latest security level. Worms often spread through unpatched systems on the network.
• Botnets: If an infection, whether from a conventional virus, a Trojan, or a worm, spreads beyond the individual instance and infects a wider fleet, it might carry malicious code that creates a botnet: a network of infected hosts that can be controlled by a remote adversary. Follow all the previous recommendations to avoid a botnet infection.
• Spam: Infected systems can be used by attackers to send large amounts of unsolicited mail (spam). AWS provides special controls to limit how much email an Amazon EC2 instance can send, but you are still responsible for preventing infection in the first place. Avoid SMTP open relay, which can be used to spread spam and which might also represent a breach of the AWS Acceptable Use Policy. For more information, see the Amazon Web Services Acceptable Use Policy.
• Antivirus/antispam software: Be sure to use a reputable and up-to-date antivirus and antispam solution on your system.
• Host-based IDS software: Many AWS customers install host-based IDS software, such as the open source product OSSEC, that includes file integrity checking and rootkit detection software. Use these products to analyze important system files and folders, calculate checksums that reflect their trusted state, regularly check to see whether these files have been modified, and alert the system administrator if so.

If an instance is infected, antivirus software might be able to detect the infection and remove the virus. We recommend the most secure and widely recommended approach, which is to save all the system data, reinstall all the system, platform, and application executables from a trusted source, and then restore the data only from backup.

Mitigating Compromise and Abuse

AWS provides a global infrastructure for customers to build solutions on, many of which face the Internet. Our customer solutions must operate in a manner that does no harm to the rest of the Internet community; that is, they must avoid abuse activities. Abuse activities are externally observed behaviors of AWS customers' instances or other resources that are malicious, offensive, illegal, or could harm other Internet sites.

AWS works with you to detect and address suspicious and malicious activities from your AWS resources. Unexpected or suspicious behaviors from your resources can indicate that your AWS resources have been compromised, which signals potential risks to your business. AWS uses the following mechanisms to detect abuse activities from customer resources:
• AWS internal event monitoring
• External security intelligence against the AWS network space
• Internet abuse complaints against AWS resources

While the AWS abuse response team aggressively monitors and shuts down malicious abusers or fraudsters running on AWS, the majority of abuse complaints refer to customers who have legitimate business on AWS. Common causes of unintentional abuse activities include:
• Compromised resource. For example, an unpatched Amazon EC2 instance could be infected and become a botnet agent.
• Unintentional abuse. For example, an overly aggressive web crawler might be classified as a DoS attacker by some Internet sites.
• Secondary abuse. For example, an end user of the service provided by an AWS customer might post malware files on a public Amazon S3 bucket.
• False complaints. Internet users might mistake legitimate activities for abuse.

AWS is committed to working with AWS customers to prevent, detect, and mitigate abuse, and to defend against future recurrences. When you receive an AWS abuse warning, your security and operational staff must immediately investigate the matter. Delay can prolong the damage to other Internet sites and lead to reputational and legal liability for you. More importantly, the implicated abuse resource might be compromised by malicious users,
and ignoring the compromise could magnify damages to your business. Malicious, illegal, or harmful activities that use your AWS resources violate the AWS Acceptable Use Policy and can lead to account suspension. For more information, see the Amazon Web Services Acceptable Use Policy.

It is your responsibility to maintain a well-behaved service as evaluated by the Internet community. If an AWS customer fails to address reported abuse activities, AWS will suspend the AWS account to protect the integrity of the AWS platform and the Internet community. Table 19 lists best practices that can help you respond to abuse incidents.

Table 19: Best practices for mitigating abuse
• Never ignore AWS abuse communication: When an abuse case is filed, AWS immediately sends an email notification to the customer's registered email address. You can simply reply to the abuse warning email to exchange information with the AWS abuse response team. All communications are saved in the AWS abuse tracking system for future reference. The AWS abuse response team is committed to helping customers understand the nature of the complaints, and AWS helps customers mitigate and prevent abuse activities. Account suspension is the last action the AWS abuse response team takes to stop abuse activities. We work with our customers to mitigate problems and avoid having to take any punitive action, but you must respond to abuse warnings, take action to stop the malicious activities, and prevent future recurrence. Lack of customer response is the leading reason for instance and account blocks.
• Follow security best practices: The best protection against resource compromise is to follow the security best practices outlined in this document. While AWS provides certain security tools to help you establish strong defenses for your cloud environment, you must follow security best practices as you would for servers within your own data center. Consistently adopt simple defense practices, such as applying the latest software patches, restricting network traffic via a firewall and/or Amazon EC2 security groups, and providing least-privilege access to users.
• Mitigation of compromises: If your computing environment has been compromised or infected, we recommend taking the following steps to recover to a safe state. Consider any known compromised Amazon EC2 instance or AWS resource unsafe. If your Amazon EC2 instance is generating traffic that cannot be explained by your application usage, your instance has probably been compromised or infected with malicious software. Shut down and rebuild that instance completely to get back to a safe state. While a fresh relaunch can be challenging in the physical world, in the cloud environment it is the first mitigation approach. You might need to carry out forensic analysis on a compromised instance to detect the root cause. Only well-trained security experts should perform such an investigation, and you should isolate the infected instance to prevent further damage and infection during the investigation. To isolate an Amazon EC2 instance for investigation, you can set up a very restrictive security group, for example one that closes all ports except to accept inbound SSH or RDP traffic from the single IP address from which the forensic investigator can safely examine the instance. You can also take an offline Amazon EBS snapshot of the infected instance and then deliver the offline snapshot to forensic investigators for deep analysis.
AWS does not have access to the private information inside your instances or other resources, so we cannot detect guest operating system or application-level compromises, such as an application account takeover. AWS cannot retroactively provide information (such as access logs, IP traffic logs, or other attributes) if you are not recording that information via your own tools. Most deep incident investigation and mitigation activities are your responsibility. The final step you must take to recover from compromised Amazon EC2 instances is to back up key business data, completely terminate the infected instances, and relaunch them as fresh resources. To avoid future compromises, we recommend that you review the security control environment on the newly launched instances. Simple steps, like applying the latest software patches and restricting firewalls, go a long way.

• Set up a security communication email address: The AWS abuse response team uses email for abuse warning notifications. By default, this email goes to your registered email address, but if you are in a large enterprise, you might want to create a dedicated response email address. You can set up additional email addresses on your Personal Information page, under Configure Additional Contacts.

Using Additional Application Security Practices

Here are some additional general security best practices for your operating systems and applications:
• Always change vendor-supplied defaults before creating new AMIs or prior to deploying new applications, including but not limited to passwords, Simple Network Management Protocol (SNMP) community strings, and security configuration.
• Remove or disable unnecessary user accounts.
• Implement a single primary function per Amazon EC2 instance to keep functions that require different security levels from co-existing on the same server. For example, implement web servers, database servers, and DNS on separate servers.
• Enable only the necessary and secure services, protocols, and daemons required for the functioning of the system. Disable all non-essential services, because they increase the security risk exposure for the instance as well as for the entire system.
• Disable or remove all unnecessary functionality, such as scripts, drivers, features, subsystems, and EBS volumes.

Configure all services with security best practices in mind. Enable security features for any required services, protocols, or daemons. Choose services such as SSH, which have built-in security mechanisms for user/peer authentication, encryption, and data integrity authentication, over less secure equivalents such as Telnet. Use SSH for file transfers rather than insecure protocols like FTP. Where you can't avoid using less secure protocols and services, introduce additional security layers around them, such as IPSec or other virtual private network (VPN) technologies to protect the communications channel at the network layer, or GSS-API, Kerberos, SSL, or TLS to protect network traffic at the application layer.

While security governance is important for all organizations, it is a best practice to enforce security policies. Wherever possible, configure your system security parameters to comply with your security policies and guidelines to prevent misuse. For administrative access to systems and applications, encrypt all non-console administrative access using strong cryptographic mechanisms. Use technologies such as SSH, user and site-to-site IPSec VPNs, or SSL/TLS to further secure remote system management.
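As a small illustration of key-based, encrypted administrative access, the following Python sketch uses the third-party paramiko SSH library (an assumption; any SSH client achieves the same result) to run a maintenance command over an authenticated, encrypted channel instead of a clear-text protocol. The host name, user, and key path are placeholders:

```python
import paramiko  # third-party SSH library (assumed to be installed)

client = paramiko.SSHClient()
client.load_system_host_keys()
# Reject servers whose host keys are unknown instead of trusting them silently.
client.set_missing_host_key_policy(paramiko.RejectPolicy())

client.connect(
    hostname="bastion.example.internal",            # hypothetical bastion host
    username="admin",
    key_filename="/home/admin/.ssh/admin_key.pem",  # key-based auth, no passwords
)
try:
    stdin, stdout, stderr = client.exec_command("sudo yum update --security -y")
    print(stdout.read().decode())
finally:
    client.close()
```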
Secure Your Infrastructure

This section provides recommendations for securing infrastructure services on the AWS platform.

Using Amazon Virtual Private Cloud (VPC)

With Amazon Virtual Private Cloud (VPC), you can create private clouds within the AWS public cloud. Each customer Amazon VPC uses IP address space allocated by the customer. You can use private IP addresses (as recommended by RFC 1918) for your Amazon VPCs, building private clouds and associated networks in the cloud that are not directly routable to the Internet. Amazon VPC not only provides isolation from other customers in the private cloud, it provides layer 3 (Network Layer, IP routing) isolation from the Internet as well. Table 20 lists options for protecting your applications in Amazon VPC.

Table 20: Accessing resources in Amazon VPC
• Internet only: The Amazon VPC is not connected to any of your infrastructure on premises or elsewhere (you might or might not have additional infrastructure residing on premises or elsewhere). If you need to accept connections from Internet users, you can provide inbound access by allocating Elastic IP addresses (EIPs) to only those Amazon VPC instances that need them. You can further limit inbound connections by using security groups or NACLs for only specific ports and source IP address ranges. If you can balance the load of traffic inbound from the Internet, you don't need EIPs; you can place instances behind Elastic Load Balancing. For outbound (to the Internet) access, for example to fetch software updates or to access data on AWS public services such as Amazon S3, you can use a NAT instance to provide masquerading for outgoing connections; no EIPs are required. Recommended protection approach: encrypt application and administrative traffic using SSL/TLS or build custom user VPN solutions; carefully plan routing and server placement in public and private subnets; use security groups and NACLs.
• IPSec over the Internet: AWS provides industry-standard and resilient IPSec termination infrastructure for VPC. Customers can establish IPSec tunnels from their on-premises or other VPN infrastructure to Amazon VPC. IPSec tunnels are established between AWS and your infrastructure endpoints; applications running in the cloud or on premises don't require any modification and can benefit from IPSec data protection in transit immediately. Recommended protection approach: establish a private IPSec connection using IKEv1 and IPSec using standard AWS VPN facilities (Amazon VPC VPN gateways, customer gateways, and VPN connections), or alternatively establish customer-specific VPN software infrastructure in the cloud and on premises.
• AWS Direct Connect without IPSec: With AWS Direct Connect, you can establish a connection to your Amazon VPC using private peering with AWS over dedicated links, without using the Internet. You can opt not to use IPSec in this case, subject to your data protection requirements. Recommended protection approach: depending on your data protection requirements, you might not need additional protection over private peering.
• AWS Direct Connect with IPSec: You can use IPSec over AWS Direct Connect links for additional end-to-end protection. Recommended protection approach: see IPSec over the Internet, above.
• Hybrid: Consider using a combination of these approaches. Recommended protection approach: employ adequate protection mechanisms for each connectivity approach you use.
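As a minimal sketch of carving out a private network and restricting inbound access along the lines recommended in Table 20, the following boto3 snippet creates a VPC, a subnet, and a security group that accepts HTTPS only from a specific address range. All CIDR blocks and names are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Private address space per RFC 1918.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Security group that only allows HTTPS from a known corporate range.
sg_id = ec2.create_security_group(
    GroupName="web-tier-restricted",          # hypothetical name
    Description="HTTPS from corporate range only",
    VpcId=vpc_id,
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # example documentation range
    }],
)
print(vpc_id, subnet_id, sg_id)
```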
You can leverage Amazon VPC IPSec or AWS Direct Connect to seamlessly integrate on-premises or other hosted infrastructure with your Amazon VPC resources in a secure fashion. With either approach, IPSec connections protect data in transit, while BGP on IPSec or AWS Direct Connect links integrates your Amazon VPC and on-premises routing domains for transparent integration for any application, even applications that don't support native network security mechanisms. Although VPC IPSec provides industry-standard and transparent protection for your applications, you might want to use additional protection mechanisms, such as SSL/TLS over VPC IPSec links. For more information, please refer to the Amazon VPC Connectivity Options whitepaper.

Using Security Zoning and Network Segmentation

Different security requirements mandate different security controls. It is a security best practice to segment infrastructure into zones that impose similar security controls. While most of the AWS underlying infrastructure is managed by AWS operations and security teams, you can build your own overlay infrastructure components. Amazon VPCs, subnets, routing tables, segmented/zoned applications, and custom service instances such as user repositories, DNS, and time servers supplement the AWS-managed cloud infrastructure.

Usually, network engineering teams interpret segmentation as another infrastructure design component and apply network-centric access control and firewall rules to manage access. Security zoning and network segmentation are two different concepts, however: a network segment simply isolates one network from another, whereas a security zone creates a group of system components with similar security levels and common controls.

On AWS, you can build network segments using the following access control methods:
• Using Amazon VPC to define an isolated network for each workload or organizational entity.
• Using security groups to manage access to instances that have similar functions and security requirements; security groups are stateful firewalls that enable firewall rules in both directions for every allowed and established TCP session or UDP communications channel.
• Using Network Access Control Lists (NACLs), which allow stateless management of IP traffic. NACLs are agnostic of TCP and UDP sessions, but they allow granular control over IP protocols (for example GRE, IPSec ESP, ICMP), as well as control per source/destination IP address and port for TCP and UDP. NACLs work in conjunction with security groups and can allow or deny traffic even before it reaches the security group.
• Using host-based firewalls to control access to each instance.
• Creating a threat protection layer in the traffic flow and enforcing all traffic to traverse the zone.
• Applying access control at other layers (for example, applications and services).

Traditional environments require separate network segments, representing separate broadcast entities, to route traffic via a central security enforcement system such as a firewall. The concept of security groups in the AWS cloud makes this requirement obsolete. Security groups are a logical grouping of instances, and they also allow the enforcement of inbound and outbound traffic rules on these instances regardless of the subnet where these instances reside.

Creating a security zone requires additional controls per network segment, and these often include:
• Shared access control: a central Identity and Access Management (IDAM) system. Note that although federation is possible, this will often be separate from IAM.
• Shared audit logging: shared logging is required for event analysis and correlation, and for tracking security events.
• Shared data classification: see Table 1: Sample Asset Matrix in the Design Your ISMS to Protect Your Assets section for more information.
• Shared management infrastructure: various components, such as antivirus/antispam systems, patching systems, and performance monitoring systems.
• Shared security (confidentiality/integrity) requirements: often considered in conjunction with data classification.

To assess your network segmentation and security zoning requirements, answer the following questions:
• Do I control inter-zone communication? Can I use network segmentation tools to manage communications between security zones A and B? Usually access control elements, such as security groups, ACLs, and network firewalls, should build the walls between security zones. Amazon VPCs by default build inter-zone isolation walls.
• Can I monitor inter-zone communication using an IDS/IPS/DLP/SIEM/NBAD system, depending on business requirements? Blocking access and managing access are different things. The porous communication between security zones mandates sophisticated security monitoring tools between zones. The horizontal scalability of AWS instances makes it possible to zone each instance at the operating system level and leverage host-based security monitoring agents.
• Can I apply per-zone access control rights? One of the benefits of zoning is controlling egress access. It is technically possible to control access to resources such as Amazon S3 and Amazon SNS using resource policies.
• Can I manage each zone using dedicated management channels and roles? Role-based access control for privileged access is a common requirement. You can use IAM to create groups and roles on AWS to create different privilege levels. You can also mimic the same approach with application and system users. One of the key features of Amazon VPC-based networks is support for multiple elastic network interfaces; security engineers can create a management overlay network using dual-homed instances.
• Can I apply per-zone confidentiality and integrity rules?
Per-zone encryption, data classification, and DRM simply increase the overall security posture. If the security requirements are different per security zone, then the data security requirements must be different as well. And it is always a good policy to use different encryption options with rotating keys in each security zone.

AWS provides flexible security zoning options. Security engineers and architects can leverage the following AWS features to build isolated security zones/segments on AWS:
• Per-Amazon VPC access control
• Per-subnet access control
• Per-security group access control
• Per-instance access control (host-based)
• Per-Amazon VPC routing block
• Per-resource policies (S3/SNS/SMS)
• Per-zone IAM policies
• Per-zone log management
• Per-zone IAM users and administrative users
• Per-zone log feeds
• Per-zone administrative channels (roles, interfaces, management consoles)
• Per-zone AMIs
• Per-zone data storage resources (Amazon S3 buckets or Glacier archives)
• Per-zone user directories
• Per-zone applications/application controls

With elastic cloud infrastructure and automated deployment, you can apply the same security controls across all AWS regions. Repeatable and uniform deployments improve your overall security posture.

Strengthening Network Security

Following the shared responsibility model, AWS configures infrastructure components such as data center networks, routers, switches, and firewalls in a secure fashion. You are responsible for controlling access to your systems in the cloud and for configuring network security within your Amazon VPC, as well as secure inbound and outbound network traffic.

While applying authentication and authorization for resource access is essential, it doesn't prevent adversaries from acquiring network-level access and trying to impersonate authorized users. Controlling access to applications and services based on the network locations of users provides an additional layer of security. For example, a web-based application with strong user authentication could also benefit from an IP address-based firewall that limits source traffic to a specific range of IP addresses, and from an intrusion prevention system, to limit security exposure and minimize the potential attack surface for the application.

Best practices for network security in the AWS cloud include the following:
• Always use security groups: they provide stateful firewalls for Amazon EC2 instances at the hypervisor level. You can apply multiple security groups to a single instance and to a single ENI.
• Augment security groups with network ACLs: they are stateless, but they provide fast and efficient controls. Network ACLs are not instance-specific, so they can provide another layer of control in addition to security groups. You can apply separation of duties to ACL management and security group management.
• Use IPSec or AWS Direct Connect for trusted connections to other sites. Use a virtual private gateway (VGW) where Amazon VPC-based resources require remote network connectivity.
• Protect data in transit to ensure the confidentiality and integrity of data, as well as the identities of the communicating parties.
• For large-scale deployments, design network security in layers. Instead of creating a single layer of network security protection, apply network security at external, DMZ, and internal layers.
• VPC Flow Logs provides further visibility, as it enables you to capture information about the IP traffic going to and from network interfaces in your VPC (a minimal sketch of enabling flow logs follows this list).
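For instance, a minimal boto3 sketch for turning on flow logs for a VPC might look like the following. The VPC ID, log group, and IAM role ARN are placeholders, and the role must already permit delivery to CloudWatch Logs:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected traffic for every ENI in the VPC.
response = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                        # ACCEPT, REJECT, or ALL
    LogGroupName="vpc-flow-logs",             # hypothetical CloudWatch Logs group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/vpc-flow-logs-role",
)
print(response["FlowLogIds"])
```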
Many of the AWS service endpoints that you interact with do not provide native firewall functionality or access control lists. AWS monitors and protects these endpoints with state-of-the-art network and application-level control systems. You can use IAM policies to restrict access to your resources based on the source IP address of the request.

Securing Periphery Systems: User Repositories, DNS, NTP

Overlay security controls are effective only on top of a secure infrastructure. DNS query traffic is a good example of this type of control. When DNS systems are not properly secured, DNS client traffic can be intercepted and DNS names in queries and responses can be spoofed. Spoofing is a simple but efficient attack against an infrastructure that lacks basic controls. SSL/TLS can provide additional protection.

Some AWS customers use Amazon Route 53, which is a secure DNS service. If you require internal DNS, you can implement a custom DNS solution on Amazon EC2 instances. DNS is an essential part of the solution infrastructure and, as such, becomes a critical part of your security management plan. All DNS systems, as well as other important custom infrastructure components, should apply the controls listed in Table 21.

Table 21: Controls for periphery systems
• Separate administrative-level access: Implement role separation and access controls to limit access to such services, often separate from the access control required for application access or access to other parts of the infrastructure.
• Monitoring, alerting, audit trail: Log and monitor authorized and unauthorized activity.
• Network-layer access control: Restrict network access to only the systems that require it. If possible, apply protocol enforcement for all network-level access attempts (that is, enforce the relevant RFC standards for NTP and DNS).
• Latest stable software with security patches: Ensure that the software is patched and not subject to any known vulnerabilities or other risks.
• Continuous security testing (assessments): Ensure that the infrastructure is tested regularly.
• All other security controls and processes in place: Make sure the periphery systems follow your information security management system (ISMS) best practices, in addition to service-specific custom security controls.

In addition to DNS, other infrastructure services might require specific controls. Centralized access control is essential for managing risk. The IAM service provides role-based identity and access management for AWS, but AWS does not provide end-user repositories like Active Directory, LDAP, or RADIUS for your operating systems and applications. Instead, you establish user identification and authentication systems alongside Authentication, Authorization, Accounting (AAA) servers, or sometimes proprietary database tables. All identity and access management servers for the purposes of user platforms and applications are critical to security and require special attention.

Time servers are also critical custom services. They are essential in many security-related transactions, including log time-stamping and certificate validation. It is important to use a centralized time server and synchronize all systems with the same time server. The Payment Card Industry (PCI) Data Security Standard (DSS) proposes a good approach to time synchronization:
• Verify that time synchronization technology is implemented and kept current.
• Obtain and review the process for acquiring, distributing, and storing the correct time within the organization, and review the time-related system parameter settings for a sample of system components.
• Verify that only designated central time servers receive time signals from external sources, and that time signals from external sources are based on International Atomic Time or Coordinated Universal Time (UTC).
• Verify that the designated central time servers peer with each other to keep accurate time, and that other internal servers receive time only from the central time servers.
• Review system configurations and time synchronization settings to verify that access to time data is restricted to only personnel who have a business need to access time data.
• Review system configurations, time synchronization settings, and processes to verify that any changes to time settings on critical systems are logged, monitored, and reviewed.
• Verify that the time servers accept time updates from specific, industry-accepted external sources. (This helps prevent a malicious individual from changing the clock.) You have the option of receiving those updates encrypted with a symmetric key, and you can create access control lists that specify the IP addresses of client machines that will be updated. (This prevents unauthorized use of internal time servers.)

Validating the security of custom infrastructure is an integral part of managing security in the cloud.

Building Threat Protection Layers

Many organizations consider layered security to be a best practice for protecting network infrastructure. In the cloud, you can use a combination of Amazon VPC, implicit firewall rules at the hypervisor layer, network access control lists, security groups, host-based firewalls, and IDS/IPS systems to create a layered solution for network security. While security groups, NACLs, and host-based firewalls meet the needs of many customers, if you're looking for defense in depth, you should deploy a network-level security control appliance, and you should do so inline, where traffic is intercepted and analyzed prior to being forwarded to its final destination, such as an application server.

Figure 6: Layered Network Defense in the Cloud

Examples of inline threat protection technologies include the following:
• Third-party firewall devices installed on Amazon EC2 instances (also known as soft blades)
• Unified threat management (UTM) gateways
• Intrusion prevention systems
• Data loss management gateways
• Anomaly detection gateways
• Advanced persistent threat detection gateways

The following key features of the Amazon VPC infrastructure support deploying threat protection layer technologies:
• Support for multiple layers of load balancers: When you use threat protection gateways to secure clusters of web servers, application servers, or other critical servers, scalability is a key issue. AWS reference architectures underline deployment of external and internal load balancers for threat management, internal server load distribution, and high availability. You can leverage Elastic Load Balancing or your custom load balancer instances for your multi-tiered designs. You must manage session persistence at the load balancer level for stateful gateway deployments.
• Support for multiple IP addresses: When threat protection gateways protect a presentation layer that consists of several instances (for example, web servers, email servers, or application servers), these multiple instances must use one security gateway in a many-to-one relationship. AWS provides support for multiple IP addresses for a single network interface.
• Support for multiple elastic network interfaces (ENIs): Threat protection gateways must be dual-homed and, in many cases, depending on the complexity of the network, must have multiple interfaces. Using the concept of ENIs, AWS supports multiple network interfaces on several different instance types, which makes it possible to deploy multi-zone security features.
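As a rough sketch of the multiple-IP and multiple-ENI features described above, the following boto3 snippet adds secondary private IP addresses to an existing interface and attaches an additional ENI to a gateway instance. All IDs and the device index are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Add secondary private IPs to an existing interface, so several protected
# instances can be mapped through one gateway interface.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0123456789abcdef0",  # hypothetical ENI
    SecondaryPrivateIpAddressCount=2,
)

# Attach an additional ENI so the gateway instance is dual-homed.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0fedcba9876543210",  # hypothetical second ENI
    InstanceId="i-0123456789abcdef0",            # hypothetical gateway instance
    DeviceIndex=1,
)
```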
Latency, complexity, and other architectural constraints sometimes rule out implementing an inline threat management layer, in which case you can choose one of the following alternatives:
• A distributed threat protection solution: This approach installs threat protection agents on individual instances in the cloud. A central threat management server communicates with all host-based threat management agents for log collection, analysis, correlation, and active threat response purposes.
• An overlay network threat protection solution: Build an overlay network on top of your Amazon VPC using technologies such as GRE tunnels or vtun interfaces, or by forwarding traffic on another ENI to a centralized network traffic analysis and intrusion detection system, which can provide active or passive threat response.

Test Security

Every ISMS must ensure regular reviews of the effectiveness of security controls and policies. To guarantee the efficiency of controls against new threats and vulnerabilities, customers need to ensure that the infrastructure is protected against attacks. Verifying existing controls requires testing. AWS customers should undertake a number of test approaches:
• External vulnerability assessment: a third party evaluates system vulnerabilities with little or no knowledge of the infrastructure and its components.
• External penetration tests: a third party with little or no knowledge of the system actively tries to break into it in a controlled fashion.
• Internal gray/white box review of applications and platforms: a tester who has some or full knowledge of the system validates the efficiency of controls in place, or evaluates applications and platforms for known vulnerabilities.

The AWS Acceptable Use Policy outlines permitted and prohibited behavior in the AWS cloud and defines security violations and network abuse. AWS supports both inbound and outbound penetration testing in the cloud, although you must request permission to conduct penetration tests. For more information, see the Amazon Web Services Acceptable Use Policy.

To request penetration testing for your resources, complete and submit the AWS Vulnerability Penetration Testing Request Form. You must be logged into the AWS Management Console using the credentials associated with the instances you want to test, or the form will not pre-populate correctly. For third-party penetration testing, you must fill out the form yourself and then notify the third parties when AWS grants approval. The form includes information about the instances to be tested and the expected start and end dates and times of the tests, and it requires you to read and agree to the terms and conditions specific to penetration testing and to the use of appropriate tools for testing. AWS policy does not permit testing of m1.small or t1.micro instance types. After you submit the form, you will receive a response confirming receipt of the request within one business day.
If you need more time for additional testing, you can reply to the authorization email asking to extend the test period. Each request is subject to a separate approval process.

Managing Metrics and Improvement

Measuring control effectiveness is an integral process of each ISMS. Metrics provide visibility into how well controls are protecting the environment. Risk management often depends on qualitative and quantitative metrics. Table 22 outlines measurement and improvement best practices.

Table 22: Measuring and improving metrics
• Monitor and review procedures and other controls: promptly detect errors in the results of processing; promptly identify attempted and successful security breaches and incidents; enable management to determine whether the security activities delegated to people or implemented by information technology are performing as expected; help detect security events, and thereby prevent security incidents, through the use of indicators; determine whether the actions taken to resolve a breach of security were effective.
• Regularly review the effectiveness of the ISMS: consider results from security audits, incidents, and effectiveness measurements, plus suggestions and feedback from all interested parties; ensure that the ISMS meets the policy and objectives; review security controls.
• Measure control effectiveness: verify that security requirements have been met.
• Review risk assessments at planned intervals: review the residual risks and the identified acceptable levels of risk, taking into account changes to the organization, technology, business objectives and processes, and identified threats; the effectiveness of the implemented controls; and external events, such as changes to the legal or regulatory environment, changed contractual obligations, and changes in the social climate.
• Conduct internal ISMS audits: first-party audits (internal audits) are conducted by, or on behalf of, the organization itself for internal purposes.
• Conduct regular management reviews: ensure that the scope remains adequate; identify improvements in the ISMS process.
• Update security plans: take into account the findings of monitoring and reviewing activities; record actions and events that could affect the effectiveness or performance of the ISMS.

Mitigating and Protecting Against DoS and DDoS Attacks

Organizations running Internet applications recognize the risk of being the subject of Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks by competitors, activists, or individuals. Risk profiles vary depending on the nature of the business, recent events, the political situation, and technology exposure. Mitigation and protection techniques are similar to those used on premises.

If you're concerned about DoS/DDoS attack protection and mitigation, we strongly advise you to enroll in AWS Premium Support services, so that you can proactively and reactively involve AWS support services in the process of mitigating attacks or containing ongoing incidents in your environment on AWS.

Some services, such as Amazon S3, use a shared infrastructure, which means that multiple AWS accounts access and store data on the same set of Amazon S3 infrastructure components. In this case, a DoS/DDoS attack on abstracted services is likely to affect multiple customers. AWS provides both mitigation and protection controls for DoS/DDoS on abstracted services from AWS to
minimize the impact to you in the event of such an attack. You are not required to provide additional DoS/DDoS protection for such services, but we do advise that you follow the best practices outlined in this whitepaper. Other services, such as Amazon EC2, use shared physical infrastructure, but you are expected to manage the operating system, platform, and customer data. For such services, we need to work together to provide effective DDoS mitigation and protection.

AWS uses proprietary techniques to mitigate and contain DoS/DDoS attacks on the AWS platform. To avoid interference with actual user traffic, though, and following the shared responsibility model, AWS does not provide mitigation or actively block network traffic affecting individual Amazon EC2 instances: only you can determine whether excessive traffic is expected and benign or part of a DoS/DDoS attack.

While a number of techniques can be used to mitigate DoS/DDoS attacks in the cloud, we strongly recommend that you establish a security and performance baseline that captures system parameters under normal circumstances, potentially also considering daily, weekly, annual, or other patterns applicable to your business. Some DoS/DDoS protection techniques, such as statistical and behavioral models, can detect anomalies compared to a given baseline of normal operation. For example, a customer who typically expects 2,000 concurrent sessions to their website at a specific time of day might trigger an alarm using Amazon CloudWatch and Amazon SNS if the current number of concurrent sessions exceeds twice that amount (4,000); a minimal sketch of such an alarm follows Table 23. Consider the same components that apply to on-premises deployments when you establish your secure presence in the cloud. Table 23 outlines common approaches for DoS/DDoS mitigation and protection in the cloud.

Table 23: Techniques for mitigation and protection from DoS/DDoS attacks
• Firewalls (security groups, network access control lists, and host-based firewalls): Traditional firewall techniques limit the attack surface for potential attackers and deny traffic to and from the source or destination of an attack. You can manage the list of allowed destination servers and services (IP addresses and TCP/UDP ports), manage the list of allowed sources of traffic and protocols, and explicitly deny access, temporarily or permanently, from specific IP addresses.
• Web application firewalls (WAF): Web application firewalls provide deep packet inspection for web traffic. Protection from: platform- and application-specific attacks, protocol sanity attacks, and unauthorized user access.
• Host-based or inline IDS/IPS systems: IDS/IPS systems can use statistical/behavioral or signature-based algorithms to detect and contain network attacks and Trojans. Protection from: all types of attacks.
• Traffic shaping/rate limiting: DoS/DDoS attacks often deplete network and system resources. Rate limiting is a good technique for protecting scarce resources from overconsumption. Protection from: ICMP flooding and application request flooding.
• Embryonic session limits: TCP SYN flooding attacks can take place in both simple and distributed form. In either case, if you have a baseline of the system, you can detect considerable deviations from the norm in the number of half-open (embryonic) TCP sessions and drop any further TCP SYN packets from the specific sources. Protection from: TCP SYN flooding.
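As a rough illustration of the baseline-driven alarm described above, the following boto3 sketch creates a CloudWatch alarm that notifies an SNS topic when a load balancer's request count exceeds a threshold derived from your normal baseline. The metric choice, threshold, topic ARN, and load balancer name are placeholders to adapt to your own baseline and architecture:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert when requests in a 5-minute window exceed roughly twice the expected baseline.
cloudwatch.put_metric_alarm(
    AlarmName="web-requests-above-baseline",
    Namespace="AWS/ELB",                      # classic ELB metrics namespace
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-web-elb"}],  # hypothetical ELB
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=4000,                            # about twice the normal 2,000 sessions
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # hypothetical topic
)
```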
Along with conventional approaches for DoS/DDoS attack mitigation and protection, the AWS cloud provides capabilities based on its elasticity. DoS/DDoS attacks are attempts to deplete limited compute, memory, disk, or network resources, which often works against on-premises infrastructure. By definition, however, the AWS cloud is elastic, in the sense that new resources can be employed on demand if and when required. For example, you might be under a DDoS attack from a botnet that generates hundreds of thousands of requests per second that are indistinguishable from legitimate user requests to your web servers. Using conventional containment techniques, you would start denying traffic from specific sources, often entire geographies, on the assumption that there are only attackers and no valid customers there. But these assumptions and actions result in a denial of service to your customers themselves.

In the cloud, you have the option of absorbing such an attack. Using AWS technologies like Elastic Load Balancing and Auto Scaling, you can configure the web servers to scale out when under attack (based on load) and shrink back when the attack stops. Even under heavy attack, the web servers could scale to perform and provide an optimal user experience by leveraging cloud elasticity. By absorbing the attack you might incur additional AWS service costs, but sustaining such an attack is so financially challenging for the attacker that absorbed attacks are unlikely to persist.

You could also use Amazon CloudFront to absorb DoS/DDoS flooding attacks. AWS WAF integrates with Amazon CloudFront to help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. Potential attackers trying to attack content behind CloudFront are likely to send most or all requests to CloudFront edge locations, where the AWS infrastructure would absorb the extra requests with minimal to no impact on the back-end customer web servers. Again, there would be additional AWS service charges for absorbing the attack, but you should weigh this against the costs the attacker would incur in order to continue the attack.

In order to effectively mitigate, contain, and generally manage your exposure to DoS/DDoS attacks, you should build a layered defense model, as outlined elsewhere in this document.

Manage Security Monitoring, Alerting, Audit Trail, and Incident Response

The shared responsibility model requires you to monitor and manage your environment at the operating system and higher layers. You probably already do this on premises or in other environments, so you can adapt your existing processes, tools, and methodologies for use in the cloud. For extensive guidance on security monitoring, see the ENISA Procure Secure whitepaper, which outlines the concepts of continuous security monitoring in the cloud (see Further Reading).

Security monitoring starts with answering the following questions:
• What parameters should we measure?
• How should we measure them?
• What are the thresholds for these parameters?
• How will escalation processes work?
• Where will data be kept?
Perhaps the most important question you must answer is, "What do I need to log?" We recommend configuring the following areas for logging and analysis:
• Actions taken by any individual with root or administrative privileges
• Access to all audit trails
• Invalid logical access attempts
• Use of identification and authentication mechanisms
• Initialization of audit logs
• Creation and deletion of system-level objects

When you design your log files, keep the considerations in Table 24 in mind.

Table 24: Log file considerations
• Log collection: Note how log files are collected. Often operating system, application, or third-party/middleware agents collect log file information.
• Log transport: When log files are centralized, transfer them to the central location in a secure, reliable, and timely fashion.
• Log storage: Centralize log files from multiple instances to facilitate retention policies, as well as analysis and correlation.
• Log taxonomy: Present different categories of log files in a format suitable for analysis.
• Log analysis/correlation: Log files provide security intelligence after you analyze them and correlate events in them. You can analyze logs in real time or at scheduled intervals.
• Log protection/security: Log files are sensitive. Protect them through network control, identity and access management, encryption, data integrity authentication, and tamper-proof time-stamping.

You might have multiple sources of security logs. Various network components, such as firewalls, IDP, DLP, and AV systems, as well as the operating system, platforms, and applications, will generate log files. Many will be related to security, and those need to be part of the log file strategy. Others, which are not related to security, are better excluded from the strategy. Logs should include all user activities, exceptions, and security events, and you should keep them for a predetermined time for future investigations.

To determine which log files to include, answer the following questions:
• Who are the users of the cloud systems? How do they register, how do they authenticate, and how are they authorized to access resources?
• Which applications access cloud systems? How do they get credentials, how do they authenticate, and how are they authorized for such access?
• Which users have privileged (administrative-level) access to the AWS infrastructure, operating systems, and applications? How do they authenticate, and how are they authorized for such access?
Many services provide built-in access control audit trails (for example, Amazon S3 and Amazon EMR provide such logs), but in some cases your business requirements for logging might be higher than what is available from the native service log. In such cases, consider using a privilege escalation gateway to manage access control logs and authorization. When you use a privilege escalation gateway, you centralize all access to the system via a single (clustered) gateway. Instead of making direct calls to the AWS infrastructure, your operating systems, or applications, all requests are performed by proxy systems that act as trusted intermediaries to the infrastructure. Often such systems are required to provide or do the following:
• Automated password management for privileged access: privileged access control systems can rotate passwords and credentials based on given policies automatically, using built-in connectors for Microsoft Active Directory, UNIX, LDAP, MySQL, and so on.
• Regularly run least-privilege checks using AWS IAM user Access Advisor and AWS IAM user last-used access keys (see the sketch after this list).
• User authentication on the front end and delegated access to services from AWS on the back end: typically a website that provides single sign-on for all users. Users are assigned access privileges based on their authorization profiles. A common approach is using token-based authentication for the website and acquiring click-through access to other systems allowed in the user's profile.
• Tamper-proof audit trail storage of all critical activities.
• Different sign-on credentials for shared accounts: sometimes multiple users need to share the same password. A privilege escalation gateway can allow remote access without disclosing the shared account.
• Restrict leapfrogging or remote desktop hopping by allowing access only to target systems.
• Manage commands that can be used during sessions. For interactive sessions like SSH, appliance management, or the AWS CLI, such solutions can enforce policies by limiting the range of available commands and actions.
• Provide an audit trail for terminal and GUI-based sessions for compliance and security purposes.
• Log everything, and alert based on given thresholds for the policies.
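The least-privilege check called out in the list above can be approximated with the IAM service-last-accessed APIs. The following is a minimal sketch, assuming you simply want to print services that a given principal has never used; the role ARN is illustrative.

```python
import time
import boto3

iam = boto3.client("iam")

# Illustrative principal; replace with the user or role you want to review.
ROLE_ARN = "arn:aws:iam::123456789012:role/example-app-role"

job_id = iam.generate_service_last_accessed_details(Arn=ROLE_ARN)["JobId"]

# The report is generated asynchronously; poll until it is ready.
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

for service in details.get("ServicesLastAccessed", []):
    # Services the principal never authenticated against are candidates for
    # removal from its policies (least privilege).
    if service.get("TotalAuthenticatedEntities", 0) == 0:
        print("Never used:", service["ServiceNamespace"])
```

A similar review of access keys can use the IAM get_access_key_last_used call for each key.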
Using Change Management Logs
By managing security logs you can also track changes. These might include planned changes, which are part of the organization's change control process (sometimes referred to as MACD: Move/Add/Change/Delete), ad hoc changes, or unexpected changes such as incidents. Changes might occur on the infrastructure side of the system, but they might also be related to other categories, such as changes in code repositories, gold image/application inventory changes, process and policy changes, or documentation changes. As a best practice, we recommend employing a tamper-proof log repository for all the above categories of changes. Correlate and interconnect the change management and log management systems. You need a dedicated user with privileges for deleting or modifying change logs; for most systems, devices, and applications, change logs should be tamper-proof, and regular users should not have privileges to manage the logs. Regular users should be unable to erase evidence from change logs. AWS customers sometimes use file integrity monitoring or change detection software on logs to ensure that existing log data cannot be changed without generating alerts, while adding new entries does not generate alerts.
All logs for system components must be reviewed at a minimum on a daily basis. Log reviews must include those servers that perform security functions, such as intrusion detection system (IDS) and authentication, authorization, and accounting protocol (AAA) servers (for example, RADIUS). To facilitate this process, you can use log harvesting, parsing, and alerting tools.
Managing Logs for Critical Transactions
For critical applications, all Add, Change/Modify, and Delete activities or transactions must generate a log entry. Each log entry should contain the following information:
• User identification information
• Type of event
• Date and time stamp
• Success or failure indication
• Origination of event
• Identity or name of affected data, system component, or resource
Protecting Log Information
Logging facilities and log information must be protected against tampering and unauthorized access. Administrator and operator logs are often targets for erasing trails of activities. Common controls for protecting log information include the following (a sketch of one such control appears after the Logging Faults subsection below):
• Verifying that audit trails are enabled and active for system components
• Ensuring that only individuals who have a job-related need can view audit trail files
• Confirming that current audit trail files are protected from unauthorized modifications via access control mechanisms, physical segregation, and/or network segregation
• Ensuring that current audit trail files are promptly backed up to a centralized log server or media that is difficult to alter
• Verifying that logs for external-facing technologies (for example, wireless, firewalls, DNS, mail) are offloaded or copied onto a secure centralized internal log server or media
• Using file integrity monitoring or change detection software for logs, by examining system settings, monitored files, and the results from monitoring activities
• Obtaining and examining security policies and procedures to verify that they include procedures to review security logs at least daily and that follow-up to exceptions is required
• Verifying that regular log reviews are performed for all system components
• Ensuring that security policies and procedures include audit log retention policies and require audit log retention for a period of time defined by the business and compliance requirements
Logging Faults
In addition to monitoring MACD events, monitor software or component failure. Faults might be the result of hardware or software failure, and while they might have service and data availability implications, they might not be related to a security incident. Alternatively, a service failure might be the result of deliberate malicious activity, such as a denial of service attack. In any case, faults should generate alerts, and then you should use event analysis and correlation techniques to determine the cause of the fault and whether it should trigger a security response.
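As a hedged illustration of the tamper-evidence controls listed under Protecting Log Information, the sketch below creates a multi-Region CloudTrail trail with log file validation enabled, so that digest files can later be used to detect modified or deleted log files. The trail and bucket names are assumptions, and the destination bucket must already exist with a bucket policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Assumes this bucket exists and its bucket policy already grants CloudTrail write access.
LOG_BUCKET = "example-central-audit-logs"

cloudtrail.create_trail(
    Name="org-audit-trail",          # illustrative trail name
    S3BucketName=LOG_BUCKET,
    IsMultiRegionTrail=True,         # capture API activity in all Regions
    EnableLogFileValidation=True,    # produce digest files for tamper detection
)
cloudtrail.start_logging(Name="org-audit-trail")
```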
Conclusion
The AWS Cloud platform provides a number of important benefits to modern businesses, including flexibility, elasticity, utility billing, and reduced time to market. It provides a range of security services and features that you can use to manage the security of your assets and data in AWS. While AWS provides an excellent service management layer around infrastructure and platform services, businesses are still responsible for protecting the confidentiality, integrity, and availability of their data in the cloud, and for meeting specific business requirements for information protection. Conventional security and compliance concepts still apply in the cloud. Using the various best practices highlighted in this whitepaper, we encourage you to build a set of security policies and processes for your organization so you can deploy applications and data quickly and securely.
Contributors
Contributors to this document include:
• Dob Todorov
• Yinal Ozkan
Further Reading
For additional information, see:
• Amazon Web Services: Overview of Security Processes
• Amazon Web Services Risk and Compliance Whitepaper
• Amazon VPC Network Connectivity Options
• AWS SDK Support for Amazon S3 Client-Side Encryption
• Amazon S3 Default Encryption for S3 Buckets
• AWS Security Partner Solutions
• Identity Federation Sample Application for an Active Directory Use Case
• Single Sign-On to Amazon EC2 .NET Applications from an On-Premises Windows Domain
• Authenticating Users of AWS Mobile Applications with a Token Vending Machine
• Client-Side Data Encryption with the AWS SDK for Java and Amazon S3
• Amazon Web Services Acceptable Use Policy
• ENISA Procure Secure: A Guide to Monitoring of Security Service Levels in Cloud Contracts
• The PCI Data Security Standard
• ISO/IEC 27001:2013
Document Revisions
• August 2016 – First publication
AWS Security Checklist
This checklist provides customer recommendations that align with the Well-Architected Framework Security Pillar.
Identity & Access Management
1. Secure your AWS account. Use AWS Organizations to manage your accounts, use the root user by exception with multi-factor authentication (MFA) enabled, and configure account contacts.
2. Rely on a centralized identity provider. Centralize identities using either AWS Single Sign-On or a third-party provider to avoid routinely creating IAM users or using long-term access keys; this approach makes it easier to manage multiple AWS accounts and federated applications.
3. Use multiple AWS accounts to separate workloads and workload stages, such as production and non-production. Multiple AWS accounts allow you to separate data and resources and enable the use of Service Control Policies to implement guardrails. AWS Control Tower can help you easily set up and govern a multi-account AWS environment.
4. Store and use secrets securely. Where you cannot use temporary credentials, like tokens from AWS Security Token Service, store your secrets, like database passwords, using AWS Secrets Manager, which handles encryption, rotation, and access control.
Detection
1. Enable foundational services: AWS CloudTrail, Amazon GuardDuty, and AWS Security Hub. For all your AWS accounts, configure CloudTrail to log API activity, use GuardDuty for continuous monitoring, and use AWS Security Hub for a comprehensive view of your security posture.
2. Configure service- and application-level logging. In addition to your application logs, enable logging at the service level, such as Amazon VPC Flow Logs and Amazon S3, CloudTrail, and Elastic Load Balancing access logging, to gain visibility into events. Configure logs to flow to a central account and protect them from manipulation or deletion.
3. Configure monitoring and alerts, and investigate events. Enable AWS Config to track the history of resources and Config Managed Rules to automatically alert or remediate on undesired changes. For all your sources of logs and events, from AWS CloudTrail to Amazon GuardDuty and your application logs, configure alerts for high-priority events and investigate them.
Infrastructure Protection
1. Patch your operating system, applications, and code. Use AWS Systems Manager Patch Manager to automate the patching process for all systems and code for which you are responsible, including your OS, applications, and code dependencies.
2. Implement distributed denial of service (DDoS) protection for your internet-facing resources. Use Amazon CloudFront, AWS WAF, and AWS Shield to provide layer 7 and layer 3/layer 4 DDoS protection.
3. Control access using VPC security groups and subnet layers. Use security groups for controlling inbound and outbound traffic, and automatically apply rules for both security groups and WAFs using AWS Firewall Manager. Group different resources into different subnets to create routing layers; for example, database resources do not need a route to the internet.
Data Protection
1. Protect data at rest. Use AWS Key Management Service (AWS KMS) to protect data at rest across a wide range of AWS services and your applications. Enable default encryption for Amazon EBS volumes and Amazon S3 buckets (a sketch follows this section).
2. Encrypt data in transit. Enable encryption for all network traffic, including Transport Layer Security (TLS) for web-based network infrastructure you control, using AWS Certificate Manager to manage and provision certificates.
3. Use mechanisms to keep people away from data. Keep all users away from directly accessing sensitive data and systems. For example, provide an Amazon QuickSight dashboard to business users instead of direct access to a database, and perform actions at a distance using AWS Systems Manager automation documents and Run Command.
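The default-encryption recommendation in item 1 above can be turned on with a few API calls. The following is a minimal sketch, assuming an existing bucket name and AWS-managed keys; it enables default encryption on one S3 bucket and EBS encryption by default for the current Region.

```python
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

BUCKET = "example-data-bucket"  # illustrative; use your own bucket name

# Default server-side encryption for new objects in the bucket (SSE-S3 here;
# switch to "aws:kms" and add a KMSMasterKeyID to use a customer managed key).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Encrypt all newly created EBS volumes in this Region by default.
ec2.enable_ebs_encryption_by_default()
```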
Incident Response
1. Ensure you have an incident response (IR) plan. Begin your IR plan by building runbooks to respond to unexpected events in your workload. For details, see the AWS Security Incident Response Guide.
2. Make sure that someone is notified to take action on critical findings. Begin with GuardDuty findings: turn on GuardDuty and ensure that someone with the ability to take action receives the notifications. Automatically creating trouble tickets is the best way to ensure that GuardDuty findings are integrated with your operational processes.
3. Practice responding to events. Simulate and practice incident response by running regular game days, incorporating the lessons learned into your incident management plans, and continuously improving them.
For more best practices, see the Security Pillar of the Well-Architected Framework and the Security Documentation.
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.
AWS Serverless Multi Tier Architectures With Amazon API Gateway and AWS Lambda First Published November 2015 Updated Octo ber 20 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Three tier architecture overview 2 Serverless logic tier 3 AWS Lambda 3 API Gateway 6 Data tier 11 Presentation tier 14 Sample architecture patterns 15 Mobile backend 16 Single page application 17 Web application 19 Microservices with Lambda 20 Conclusion 21 Contributors 21 Further reading 22 Document revisions 22 Abstract This whitepaper illustrates how innovations from Amazon Web Services (AWS) can be used to chang e the way you design multi tier architectures and implement popular patterns such as microservices mobile backends and single page applications Architects and developers can use Amazon API Gateway AWS Lambda and other services to reduce the developmen t and operations cycles required to create and manage multi tiered applications Amazon Web Services AWS Serverless Multi Tier Architectures Page 1 Introduction The multi tier application (three tier ntier and so forth) has been a cornerstone architecture pattern for decades and remains a popular pattern for user facing applications Although the language used to describe a multi tier architecture varies a multi tier application generally consists of the following components: • Presentation tier – Component that the user directly interacts w ith (for example webpage s and mobile app UI s) • Logic tier – Code required to translate user actions to application functionality (for example CRUD database operations and data processing) • Data tier – Storage media ( for example databases object stores caches and file systems) that hold the data relevant to the application The multi tier architecture pattern provides a general framework to ensure decoupled and independently scalable application components can be separately developed managed a nd maintained (often by distinct teams) As a consequence of this pattern in which the network (a tier must make a network call to interact with another tier) acts as the boundary between tiers developing a multi tier application often requires creating m any undifferentiated application components Some of these components include: • Code that defines a message queue for communication between tiers • Code that defines an application programming interface (API) and a data model • Security related code that ensures appropriate access to the application All of these examples can be considered “boilerplate” components that while necessary in multi tier applications do not vary greatly in their implementation from one application to the next AWS offers a numb er of services that enable the creation of serverless multi tier applications —greatly simplifying the process of deploying such applications 
to production and removing the overhead associated with traditional server management Amazon API Gateway a service for creating and managing APIs and AWS Lambda a service for running arbitrary code functions can be used together to simplify the creation of robust multi tier applications Amazon Web Services AWS Serverless Multi Tier Architectures Page 2 API Gateway’ s integration with AWS Lambda enable s userdefined code function s to be initiated directl y through HTT PS requests Regardle ss of the request volume both API Gatewa y and Lambda scale automaticall y to support exactl y the need s of your application (refe r to Amazon API Gatewa y quota s and important notes for scalability information) By combining these two services you can create a tie r that enables you to write onl y the code that matte rs to you r application and not focu s on variou s other undifferentiating aspect s of implementing a multitiered architecture such a s architecting for high availability writing client SDKs server and operating syste m (OS) management scaling and implementing a client authorization mechanism API Gatewa y and Lambda enable the creation of a serverle ss logic tier Depending on your application requirements AW S also provide s option s to create a serverless presentation tier (for example with Amazon CloudFront and Amazon Simple Storage Service (Amazon S3 ) and data tier (for example Amazon Aurora and Amazon DynamoDB ) This whitepaper focuses on the most popular example of a multitiered architecture the threetier web application However you can apply this multitier pattern well beyond a typical threetier web application Threeti er architectur e overview The threetie r architecture i s the most popula r implementation of a multitier architecture and consist s of a single presentation tier a logic tier and a data tier The following illustration show s an example of a simple generi c threetie r application Architectural pattern for a three tier application There are many great online resources where you can learn more about the general three tier architecture pattern This whitepaper focuses on a specific implementation pattern for this architecture using API Gateway and Lambda Amazon Web Services AWS Serverless Multi Tier Architectures Page 3 Serverless logic tier The logic tier of the three tier architecture represents the brains of the application This is where using API Gateway a nd AWS Lambda can have the most impact compared to a traditional server based implementation The features of these two services enable you to build a serverless application that is highly available scalable and secure In a traditional model your appl ication could require thousands of servers; however by using Amazon API Gateway and AWS Lambda you are not responsible for server management in any capacity In addition by using these managed services together you gain the following benefits: • Lambda o No OS to choose secure patch or manage o No servers to right size monitor or scale o Reduced risk to your cost from overprovisioning o Reduced risk to your performance from under provisioning • API Gateway o Simplified mechanisms to deploy monitor and secure APIs o Improved API performance through caching and content delivery AWS Lambda AWS Lambda is a compute service that enable s you to run arbitrary code functions in any of the supported languages (Nodejs Python Ruby Java Go NET For more informa tion refer to Lambda FAQs ) without provisioning managing or scaling servers Lambda functions are run in a managed isolated 
container and are launched in response to an event, which can be one of several programmatic triggers that AWS makes available, called an event source (refer to the Lambda FAQs for all event sources). Many popular use cases for Lambda revolve around event-driven data processing workflows, such as processing files stored in Amazon S3 or streaming data records from Amazon Kinesis. When used in conjunction with API Gateway, a Lambda function performs the functionality of a typical web service: it initiates code in response to a client HTTPS request. API Gateway acts as the front door for your logic tier, and Lambda invokes the application code. Your business logic goes here; no servers are necessary.
Lambda requires that you write code functions, called handlers, which run when initiated by an event. To use Lambda with API Gateway, you configure API Gateway to launch handler functions when an HTTPS request to your API occurs. In a serverless multi-tier architecture, each of the APIs you create in API Gateway will integrate with a Lambda function (and the handler within) that invokes the business logic required. Using AWS Lambda functions to compose the logic tier enables you to define a desired level of granularity for exposing the application functionality (one Lambda function per API, or one Lambda function per API method). Inside the Lambda function, the handler can reach out to any other dependencies (for example, other methods you have uploaded with your code, libraries, native binaries, and external web services) or even other Lambda functions.
Creating or updating a Lambda function requires either uploading code as a Lambda deployment package in a zip file to an Amazon S3 bucket or packaging code as a container image along with all the dependencies. The functions can use different deployment methods, such as the AWS Management Console, the AWS Command Line Interface (AWS CLI), or infrastructure-as-code templates or frameworks such as AWS CloudFormation, the AWS Serverless Application Model (AWS SAM), or the AWS Cloud Development Kit (AWS CDK). When you create your function using any of these methods, you specify which method inside your deployment package will act as the request handler. You can reuse the same deployment package for multiple Lambda function definitions, where each Lambda function might have a unique handler within the same deployment package.
Lambda security
To run a Lambda function, it must be invoked by an event or service that is permitted by an AWS Identity and Access Management (IAM) policy. Using IAM policies, you can create a Lambda function that cannot be initiated at all unless it is invoked by an API Gateway resource that you define. Such a policy can be defined using resource-based policies across various AWS services. Each Lambda function assumes an IAM role that is assigned when the Lambda function is deployed. This IAM role defines the other AWS services and resources your Lambda function can interact with (for example, an Amazon DynamoDB table or Amazon S3). In the context of a Lambda function, this is called an execution role.
Do not store sensitive information inside a Lambda function. IAM handles access to AWS services through the Lambda execution role; if you need to access other credentials (for example, database credentials and API keys) from inside your Lambda function, you can use AWS Key Management Service (AWS KMS) with environment variables, or use a service such as AWS Secrets Manager to keep this information safe when not in use.
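To tie the handler and secrets guidance together, the following is a minimal sketch of a Lambda handler behind an API Gateway proxy integration that fetches a database credential from AWS Secrets Manager at invocation time. The secret name and response body are illustrative assumptions, not part of the whitepaper's reference architecture.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Illustrative secret name; create and populate it out of band.
SECRET_ID = "prod/app/db-credentials"


def handler(event, context):
    """Handler invoked by API Gateway (Lambda proxy integration)."""
    # Fetch the credential when needed instead of hard-coding it in the package.
    secret = json.loads(secrets.get_secret_value(SecretId=SECRET_ID)["SecretString"])

    # ... use secret["username"] / secret["password"] to talk to the data tier ...

    # Proxy integrations expect this response shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "ok"}),
    }
```

In practice you would typically cache the retrieved secret outside the handler so that a Secrets Manager call is not made on every invocation.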
Performance at scale
Code pulled in as a container image from Amazon Elastic Container Registry (Amazon ECR), or from a zip file uploaded to Amazon S3, runs in an isolated environment managed by AWS. You do not have to scale your Lambda functions: each time an event notification is received by your function, AWS Lambda locates available capacity within its compute fleet and runs your code with the runtime, memory, disk, and timeout configurations that you define. With this pattern, AWS can start as many copies of your function as needed. A Lambda-based logic tier is always right-sized for your customer needs. The ability to quickly absorb surges in traffic through managed scaling and concurrent code initiation, combined with Lambda pay-per-use pricing, enables you to always meet customer requests while not paying for idle compute capacity.
Serverless deployment and management
To help you deploy and manage your Lambda functions, use the AWS Serverless Application Model (AWS SAM), an open-source framework that includes:
• AWS SAM template specification – Syntax used to define your functions and describe their environments, permissions, configurations, and events for simplified upload and deployment.
• AWS SAM CLI – Commands that enable you to verify AWS SAM template syntax, invoke functions locally, debug Lambda functions, and package functions for deployment.
You can also use the AWS CDK, which is a software development framework for defining cloud infrastructure using programming languages and provisioning it through CloudFormation. The AWS CDK provides an imperative way to define AWS resources, whereas AWS SAM provides a declarative way.
Typically, when you deploy a Lambda function, it is invoked with permissions defined by its assigned IAM role and is able to reach internet-facing endpoints. As the core of your logic tier, AWS Lambda is the component that integrates directly with the data tier. If your data tier contains sensitive business or user information, it is important to ensure that this data tier is appropriately isolated (in a private subnet). You can configure a Lambda function to connect to private subnets in a virtual private cloud (VPC) in your AWS account if you want the Lambda function to access resources that you cannot expose publicly, such as a private database instance. When you connect a function to a VPC, Lambda creates an elastic network interface for each subnet in your function's VPC configuration, and the elastic network interface is used to access your internal resources privately.
Lambda architecture pattern inside a VPC
The use of Lambda with a VPC means that databases and other storage media that your business logic depends on can be made inaccessible from the internet. The VPC also ensures that the only way to interact with your data from the internet is through the APIs that you have defined and the Lambda code functions that you have written.
API Gateway
API Gateway is a fully managed service that enables developers to create, publish, maintain, monitor, and secure APIs at any scale. Clients (that is, presentation tiers) integrate with the APIs exposed through API Gateway using standard HTTPS requests. The applicability of APIs exposed through API Gateway to a service-oriented multi-tier architecture is the ability to separate individual pieces of application functionality and expose this functionality through REST
endpoints API Gateway has specific features and qualities that can add powerful capabilities to your logic tier Integration with Lambda Amazon API Gateway supports both REST and HTTP type s of APIs An API Gateway API is made up of resources and methods A resource is a logical entity that an app can access through a resource path ( for example /tickets ) A method corresponds to an API request that is submitted to an API resource ( for example GET /tickets ) API Gateway enable s you to back each method with a Lambda function that is when you call the API through the HTTPS endpoint exposed in API Gateway API Gateway invokes the Lam bda function You can connect API Gateway and Lambda functions using proxy integrations and non proxy integrations Proxy integrations In a proxy integration the entire client HTTPS request is sent asis to the Lambda function API Gateway passes the enti re client request as the event parameter of the Lambda handler function and the output of the Lambda function is returned directly to the client (including status code headers and so forth) Nonproxy integrations In a nonproxy integration you configure how the parameters headers and body of the client request are passed to the event parameter of the Lambda handler function Additionally you configure how the Lambda output is translated back to the user Note : API Gateway can also proxy to ad ditional serverless resources outside of AWS Lambda such as mock integrations (useful for initial application development) and direct proxy to S3 objects Amazon Web Services AWS Serverless Multi Tier Architectures Page 8 Stable API performance across regions Each deployment of API Gateway includes a Amazon CloudFront distribution under the hood CloudFront is a content delivery service that uses Amazon’s global network of edge locations as connection points for clients using your API This helps decrease the response lat ency of your API By using multiple edge locations across the world CloudFront also provides capabilities to combat distributed denial of service (DDoS) attack scenarios For more information review the AWS Best Practices for DDoS Resiliency whitepaper You can improve the performance of specific API requests by using API Gateway to store responses in an optional in memory cache This approach not only provides performance benefits for repeated API requests but it also reduces the number of times your Lambda functions are invoked which can reduce your overall cost Encourage innovation and reduce overhead with builtin features The development cost to build any new application is an investment Using API Gateway can reduce the amount of time required for certain development tasks and lower the total development cost enab ling organizations to more freely experiment and innovate During initial application development phases implementation of logging and metrics gathering are often neglected to deliver a new application more quickly This can lead to technical debt and operational risk when deploying these features to an applicati on running in production API Gateway integrates seamlessly with Amazon CloudWatch which collects and processes raw data from API Gateway into readable near real time metrics for monitoring API implement ation API Gateway also supports access logging with configurable reports and AWS X Ray tracing for debugging Each of these features requires no code to be written and can be adjusted in applications running in production without risk to the core business logic The overall lifetime of an application m 
ight be unknown or it m ight be known to be short lived Creating a business case for building such applications can be made easier if your starting point alread y includes the managed features that API Gateway provides and if you only incur infrastructure costs after your APIs begin receiving requests For more information refer to Amazon API Gateway pr icing Amazon Web Services AWS Serverless Multi Tier Architectures Page 9 Iterate rapidly stay agile Using API Gateway and AWS Lambda to build the logic tier of your API enables you to quickly adapt to the changing demands of your user base by simplifying API deployment and version management Stage deployment When you deploy an API in API Gateway you must associate the deployment with an API Gateway stage—each stage is a snapshot of the API and is made available for client apps to call Using this convention you can easily deploy apps to dev test stage or prod stages and move deployments between stages Each time you deploy your API to a stage you create a different version of the API which can be r everted if necessary These features enable existing functionality and client dependencies to continue undis turbed while new functionality is released as a separate API version Decouple d integration with Lambda The integration between API in API Gateway and Lambda function can be decoupled using API Gateway stage variables and a Lambda function alias This simp lifies and speeds up the API deployment Instead of configuring the Lambda function name or alias in the API directly you can configure stage variable in API which can point to a particular alias in the Lambda function During deployment change the stage variable value to point to a Lambda function alias and API will run the Lambda function version behind the Lambda alias for a particular stage Canary release deployment Canary release is a software development strategy in which a new version of an API is deployed for testing purposes and the base version remains deployed as a production release for normal operations on the same stage In a canary release deployment tota l API traffic is separated at random into a production release and a canary release with a preconfigured ratio APIs in API Gateway can be configured for the canary release deployment to test new features with a limited set of users Custom domain names You can provide an intuitive business friendly URL name to API in stead of the URL provided by API Gateway API Gateway provides features to configure custom domain for the APIs With custom domain names you can set up your API's hostname and choose a multi level base path (for example myservice myservice/cat/v1 or myservice/dog/v2 ) to map the alternative URL to your API Amazon Web Services AWS Serverless Multi Tier Architectures Page 10 Prioritize API security All applications must ensure that only authorized clients have access to their API resources When designing a multi tier application you can take advantage of several different ways in which API Gateway contributes to securing your logic tier : Transit security All requests to your APIs can be made through HTTPS to enable encryption in transit API Gateway provide s built in SSL/TLS Certificates —if using the custom domain name option for public APIs you can provide your own SSL/TLS certificate using AWS Certificate Manager API Gateway also supports mutual TLS (mTLS) authentication Mutual TLS enhances the security of your API and helps protect your data from attacks such as client spoofing or man inthe middle attacks API 
authorization Each resource and method combination that you create as part of your A PI is granted a unique Amazon Resource Name (ARN) that can be referenced in AWS Identity and Access Management ( IAM) policies There are three general methods to add authorization to an API in API Gateway: • IAM roles and policies Clients use AWS Signature Version 4 (SigV4) authorization and IAM policies for API access The same credentials can restrict or permit access to other AWS services and resources as ne eded ( for example S3 buckets or Amazon DynamoDB tables) • Amazon Cognito user pools Clients sign in through an Amazon Cognito user pool and obtain tokens which are included in the authorization header of a request • Lambda authorizer Define a Lambda function that implements a custom authorization scheme that uses a bearer token strategy ( for example OAuth and SAML) or uses request par ameters to identify users Access restrictions API Gateway supports the generation of API keys and association of these keys with a configurable usage plan You can monitor API key usage with CloudWatch API Gateway supports throttling rate limits and bu rst rate limits for each method in your API Amazon Web Services AWS Serverless Multi Tier Architectures Page 11 Private APIs Using API Gateway you can create private REST APIs that can only be accessed from your virtual private cloud in Amazon VPC by using an interface VPC endpoint This is an endpoint network interface that you create in your VPC Using resource policies you can enable or deny access to your API from selected VPCs and VPC endpoints including across AWS accounts Each endpoint can be used to access multiple private APIs You can also use AWS Direct Connect to establish a connection from an on premises network to Amazon VPC and access your private API over that connection In all cases traffic to your private API uses secure connections and does not leave the Amazon network —it is isolated from the public internet Firewall protection using AWS WAF Internet facing APIs ar e vulnerable to malicious attacks AWS WAF is a we b application firewall which helps protect APIs from such attacks It protects APIs from common web exploits such as SQL injection and cross site scripting attacks You can use AWS WAF with API Gateway to help protect APIs Data tier Using AWS Lambda as your logic tier does not limit the data storage options available in your data tier Lambda functions connect to any data storage option by including the appropriate database driver in the Lambda deployment package and use IAM role based access or encrypted credentials ( through AWS KMS or Secrets Manager) Choosing a data store for your a pplication is highly dependent on your application requirements AWS offers a number of serverless and non serverless data stores that you can use to compose the data tier of your application Serverless data storage options • Amazon S3 is an object storage service that offers industry leading scalability data availability security and performance Amazon Web Services AWS Serverless Multi Tier Architectures Page 12 • Amazon Aurora is a MySQL compatible and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost effectiveness of open source databases Aurora offers both serverless and traditional usage models • Amazon DynamoDB is a key value and document database that delivers single digit millisecond performance at any scale It is a fully manag ed 
serverless multi region durable database with built in security backup and restore and in memory caching for internet scale applications • Amazon Timestream is a fast scalable fully managed time se ries database service for IoT and operational applications that makes it simple to store and analyze trillions of events per day at 1/10th the cost of relational databases Driven by the rise of IoT devices IT systems and smart industrial machines time series data —data that measures how things change over time —is one of the fastest growing data types • Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent im mutable and cryptographically verifiable transaction log owned by a central trusted authority Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time • Amazon Keyspaces (for Apache Cassandra) is a scalable highly available and managed Apache Cassandra –compatible database service With Amazon Keyspaces you can run your Cassandra workloads on AWS using the same Cassandra application co de and developer tools that you use today You don’t have to provision patch or manage servers and you don’t have to install maintain or operate software Amazon Keyspaces is serverless so you pay for only the resources you use and the service can au tomatically scale tables up and down in response to application traffic Amazon Web Services AWS Serverless Multi Tier Architectures Page 13 • Amazon Elastic File System (Amazon EFS) provides a simple serverless set andforget elastic file system that lets you share file data without provisioning or managing storage It can be used with AWS Cloud services and on premises resources and is built to scale on demand to petabytes without disrupting applications With Amazon EFS you can grow and shrink your file systems automa tically as you add and remove files eliminating the need to provision and manage capacity to accommodate growth Amazon EFS can be mounted with Lambda function which makes it a viable file storage option for APIs Nonserverless data storage options • Amazon Relational Database Service (Amazon RDS) is a managed web service that enables you to set up operate and scale a relational database using several engines (Aurora PostgreSQL MySQL MariaDB Oracle and Micro soft SQL Server) and running on several different database instance types that are optimized for memory performance or I/O • Amazon Redshift is a fully managed petabyte scale data warehouse service in the c loud • Amazon ElastiCache is a fully managed deployment of Redis or Memcached Seamlessly deploy run and scale popular open source compatible in memory data stores • Amazon Neptun e is a fast reliable fully managed graph database service that makes it simple to build and run applications that work with highly connected datasets Neptune supports popular graph models —property graphs and W3C Resource Description Framework (RDF)—and their respective query languages enabl ing you to easily build queries that efficiently navigate highly connected datasets • Amazon DocumentDB (with MongoDB compatibi lity) is a fast scalable highly available and fully managed document database service that supports MongoDB workloads • Finally you can also use data stores running independently on Amazon EC2 as the data tier of a multi tier application Amazon Web Services AWS Serverless Multi Tier Architectures Page 14 Presentation tier The presentation tier is responsible for interacting with the logic 
tier through the API Gateway REST endpoints exposed over the internet Any HTTPS capable client or device can communicate with these endpoints giving your presentation tier the flexibility to take many forms (desktop applications mobile apps webpages IoT devices and so forth) Depending on your requirements your presentation tier can use the following AWS serverless offerings: • Amazon Cognito – A serverless user identity and data synchronization service that enable s you to add user sign up sign in and access control to your web and mobile apps quickly and efficien tly Amazon Cognito scales to millions of users and supports sign in with social identity providers such as Facebook Google and Amazon and enterprise identity providers through SAML 20 • Amazon S3 with CloudFront – Enables you to serve static websites such as single page applications directly from an S3 bucket without requiring provision of a web server You can use CloudFront as a managed content delivery network (CDN ) to improve performance and enable SSL/TL using managed or custom certificates AWS Amplify is a set of tools and services that can be used together or on their own to help front end web and mobile developers build scalable full stack applications powered by AWS Amplify offers a fully ma naged service for deploying and hosting static web applications globally served by Amazon's reliable CDN with hundreds of points of presence globally and with built in CI/CD workflows that accelerate your application release cycle Amplify supports popula r web frameworks including JavaScript React Angular Vue Nextjs and mobile platforms including Android iOS React Native Ionic and Flutter Depending on your networking configurations and application requirements you m ight need to enable your API Gateway APIs to be cross origin resource sharing (CORS) – compliant CORS compliance allows web browsers to directly invoke your APIs from within static webpages When you deploy a website with CloudFront you are provided a CloudFront domain name to reach your application ( for example d2d47p2vcczkh2cloudfrontnet ) You can use Amazon Route 53 to register domain names and direct them to your CloudFront distribution or direct already owned domain names t o your CloudFront distribution This enable s users to access your site using a familiar domain name Note Amazon Web Services AWS Serverless Multi Tier Architectures Page 15 that you can also assign a custom domain name using Route 53 to your API Gateway distribution which enable s users to invoke APIs using familiar domai n names Sample architecture patterns You can implement popular architecture patterns using API Gateway and AWS Lambda as your logic tier This whitepaper includes the most popular architecture patterns that use AWS Lambda based logic tier s: • Mobile backend – A mobile application communicates with API Gateway and Lambda to access application data This pattern can be extended to generic HTTPS clients that don’t use serverless AWS resources to host presentation tier resources ( such as desktop clients web ser ver running on EC2 and so forth) • Single page application – A single page application hosted in Amazon S3 and CloudFront communicates with API Gateway and AWS Lambda to access application data • Web application – The web application is a general purpose event driven web application back end that uses AWS Lambda with API Gateway for its business logic It also uses DynamoDB as its database and Amazon Cognito for user management All static content is hosted using Amplify In 
addition to t hese two patterns this whitepaper discuss es the applicability of AWS Lambda and API Gateway to a general microservice architecture A microservice architecture is a popular pattern that although not a standard three tier architecture involves decoupling application components and deploying them as stateless individual units of functionality that communicate with each other Amazon Web Services AWS Serverless Multi Tier Architectures Page 16 Mobile backend Architectural pattern for serverless mobile backend Amazon Web Services AWS Serverless Multi Tier Architectures Page 17 Table 1 Mobile backend tier components Tier Components Presentation Mobile application running on a user device Logic API Gateway with AWS Lambda This architecture shows three exposed services (/tickets /shows and /info ) API Gateway endpoints are secured by Amazon Cognito user pools In this method users sign in to Amazon Cognito user pools (using a federated third party if necessary) and receive access and ID tokens that are used to authorize API Gateway calls Each Lambda function is assigned its own Identity and Access Management (IAM) role to provide access to the appropriate data source Data DynamoDB is use d for the /tickets and /shows services Amazon RDS is used for the /info service This Lambda function retrieves Amazon RDS credentials from Secrets Manager and uses an elastic network interface to access the private subnet Single page application Architectural pattern for serverless single page application Amazon Web Services AWS Serverless Multi Tier Architectures Page 18 Table 2 Single page application components Tier Components Presentation Static website content is hosted in Amazon S3 and distributed by CloudFront AWS Certificate Manager allows a custom SSL/TLS certificate to be used Logic API Gateway with AWS Lambda This architecture shows three exposed services ( /tickets /shows and /info ) API Gateway endpoints are secured by a Lambda authorizer In this method users sign in through a third party identity provider and obtain access and ID tokens These tokens are included in API Gateway calls and the Lambda authorizer validates these tokens and generates an IAM policy containing API initiation permissions Each Lambda function is assigned its own IAM role to provide access to the appropria te data source Data DynamoDB is used for the /tickets and /shows services ElastiCache is used by the /shows service to improve database performance Cache misses are sent to DynamoDB Amazon S3 is used to host static content used by the /info service Amazon Web Services AWS Serverless Multi Tier Architectures Page 19 Web application Architectural pattern for web application Table 3 Web application components Tier Components Presentation The front end application is all static content (HTML CSS JavaScript and images ) which are generated by React utilities like create react app Amazon CloudFront hosts all these objects The web application when used downloads all the resources to the b rowser and starts to run from there The web application connects to the backend calling the APIs Logic Logic layer is built using Lambda functions fronted by API Gateway REST APIs This architecture shows multiple exposed services There are multiple d ifferent Lambda functions each handling a different aspect of the application The Lambda functions are behind API Gateway and accessible using API URL paths Amazon Web Services AWS Serverless Multi Tier Architectures Page 20 Tier Components The user authentication is handled using Amazon 
Cognito User Pools or federated user providers A PI Gateway uses out of box integration with Amazon Cognito Only after a user is authenticated the client will receive a JSON Web Token ( JWT) which it should then use when making the API calls Each Lambda function is assigned its own IAM role to provide access to the appropriate data source Data In this particular example DynamoDB is used for the data storage but other purpose built Amazon database or storage services can be used depending o n the use case and usage scenario Microservices with Lambda Architectural pattern for microservices with Lambda The microservice architecture pattern is not bound to the typical three tier architecture; however this popular pattern can realize significant benefits from the use of serverless resources In this architecture each of the application components are decoupled and indepe ndently deployed and operated An API created with API Gateway and functions Amazon Web Services AWS Serverless Multi Tier Architectures Page 21 subsequently launch ed by AWS Lambda is all that you need to build a microservice Your team can use these services to decouple and fragment your environment to the level of gran ularity desired In general a microservices environment can introduce the following difficulties: repeated overhead for creating each new microservice issues with optimizing server density and utilization complexity of running multiple versions of multi ple microservices simultaneously and proliferation of client side code requirements to integrate with many separate services When you create microservices using serverless resources these problems become less difficult to solve and in some cases simpl y disappear The serverless microservices pattern lowers the barrier for the creation of each subsequent microservice (API Gateway even allows for the cloning of existing APIs and use of Lambda functions in other accounts) Optimizing server utilization i s no longer relevant with this pattern Finally API Gateway provides programmatically generated client SDKs in a number of popular languages to reduce integration overhead Conclusion The multi tier architecture pattern encourages the best practice of cre ating application components that are simple to maintain decouple and scale When you create a logic tier where integration occurs by API Gateway and computation occurs within AWS Lambda you realize these goals while reducing the amount of effort to achieve them Together these services provide a n HTTPS API front end for your clients and a secure environment to apply your business log ic while removing the overhead involved with managing typical server based infrastructure Contributors Contributors to this document include : • Andrew Baird AWS Solutions Architect • Bryant Bost AWS ProServe Consultant • Stefano Buliani Senior Product Manage r Tech AWS Mobile • Vyom Nagrani Senior Product Manager AWS Mobile Amazon Web Services AWS Serverless Multi Tier Architectures Page 22 • Ajay Nair Senior Product Manager AWS Mobile • Rahul Popat Global Solutions Architect • Brajendra Singh Senior Solutions Architect Further reading For additional information refer to : • AWS Whitepapers and Guides Document revisions Date Description Octo ber 20 2021 Updated for new service features and patterns June 1 2021 Updated for new service features and patterns September 25 2019 Updated for new service features November 1 2015 First publication
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers AWS Storage Optimization March 2018 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2018 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Identify Your Data Storage Requirements 1 AWS Storage Services 2 Object storage 2 Block storage 3 File storage 5 Optimizing Amazon S3 Storage 5 Optimizing Amazon EBS Storage 7 Delete Unattached Amazon EBS Volumes 8 Resize or Change the EBS Volume Type 8 Delete Stale Amazon EBS Snapshots 9 Optimizing Storage is an Ongoing Process 9 Conclusion 10 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract This is the last in a series of whitepapers designed to support your cloud journey This paper seeks to empower you to maximize value from your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and continuously me asure your optimization status This paper discusses how to choose and optim ize AWS storage service s to meet your data storage needs and help you save costs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Storage Optimiz ation Page 1 Introduction Organizations tend to think of data storage as an ancillary service and do not optimize storage after data is moved to the cloud Many also fail to clean up unused storage and let these services run for days weeks and even months at significant cost According to this blog post by RightScale up to 7 % of all cloud spend is wasted on unused storage volumes and old snapshots (copies of storage volumes) AWS offers a broad and flexible set of data storage options that let you move between different tiers of storage and change storage types at any time This whitepaper discusses how to choose AWS storage services that meet your data storage needs at the lowest cost It also discusses how to optimize these services to achieve balance between performance availability and durability Identify Your Data Storage Requirements To optimize storage the first step is to understand the performance profile for each of your workloads You should conduct a performance analysis to measure input/output operations per second ( IOPS ) throughput and other variables AWS s torage services are optimized for different storage scenarios —there is no single data 
storage option t hat is ideal for all workloads When evaluating your storage requirements consider data storage options for each workload separately The following ques tions can help you segment data within each of your workload s and determine your storage requirements : • How often and how quickly do you need to access your data? AWS offers storage options and pricing tiers for frequently accessed less frequently accessed and infrequently accessed data • Does your data store require high IOPS or throughput? AWS provides categories of storage that are optimized for performance and throughput Understanding IOPS and throughput requirements will help you provision the right amount of storage and avoid overpaying This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Storage Optimiz ation Page 2 • How critical (durable) is your data? Critical or regulated data needs to be retained at almost any expense and tends to be stored for a long time • How sensitive is your data? Highly sensitive data needs to be protected from accidental and malicious changes not just data loss or corruption Durability cost and security are equally important to consider • How large is your data set? Knowing the total size of the data set helps in estima ting storage capacity and cost • How transient is your data? Transient data is short lived and typically does not require high durability (Note: Durability refers to average annual expected data loss) Clickstream and Twitter data are good examples of transient data • How much are you prepared to pay to store the data? Setting a budget for data storage will inform your d ecisions about storage options AWS Storage Service s Choosing the right AWS storage service for your data means finding the closest match in terms of data availability durability and performance Note: Availability refers to a storage volume’s ability to deliver data upon request Performance refers to the number of IOPS or the amount of throughput (measured in megabytes per second) that t he storage volume can deliver Amazon offers three broad categories of storage services: object block and file storage Each offering is designed to meet a different storage req uirement which gives you flexibility to find the solution that works b est for your storage scenarios Object storage Amazon Simple Storage Service (Amazon S3) is highly durable general purpose object storage tha t works well for unstructured data sets such as media content Amazon S3 provides the highest level of data durability and availability on the AWS Cloud There are three tiers of storage: one each for hot warm or This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Storage Optimiz ation Page 3 cold data In terms of pricing the colde r the data the cheaper it is to store and the costlier it is to access when needed You can easily move data between these storage options to optimize storage costs: • Amazon S3 Standard – The best storage option for data that you frequently access Amazon S3 delivers low latency and high throughput and is ideal for use cases such as cloud applications dynamic websites content distribution gaming and data analytics • Amazon S3 Standard Infrequent Access (Amazon S3 Standard IA) – Use this storage option for data that you access less frequently such as long term backups and disaster recovery It 
Block storage
Amazon Elastic Block Store (Amazon EBS) volumes provide a durable block storage option for use with EC2 instances. Use Amazon EBS for data that requires long-term persistence and quick access at guaranteed levels of performance. There are two types of block storage: solid-state drive (SSD) storage and hard disk drive (HDD) storage.

• SSD storage is optimized for transactional workloads where performance is closely tied to IOPS. There are two SSD volume options to choose from:
  o EBS Provisioned IOPS SSD (io1) – Best for latency-sensitive workloads that require specific minimum guaranteed IOPS. With io1 volumes you pay separately for provisioned IOPS, so unless you need high levels of provisioned IOPS, gp2 volumes are a better match at lower cost.
  o EBS General Purpose SSD (gp2) – Designed for general use and offers a balance between cost and performance.
• HDD storage is designed for throughput-intensive workloads such as data warehouses and log processing. There are two types of HDD volumes:
  o Throughput Optimized HDD (st1) – Best for frequently accessed, throughput-intensive workloads.
  o Cold HDD (sc1) – Designed for less frequently accessed, throughput-intensive workloads.

The following table shows comparative pricing for Amazon EBS.

Amazon EBS Pricing*
General Purpose SSD (gp2): $0.10 per GB-month of provisioned storage
Provisioned IOPS SSD (io1): $0.125 per GB-month of provisioned storage, plus $0.065 per provisioned IOPS-month
Throughput Optimized HDD (st1): $0.045 per GB-month of provisioned storage
Cold HDD (sc1): $0.025 per GB-month of provisioned storage
Amazon EBS snapshots to Amazon S3: $0.05 per GB-month of data stored
*Based on US East (N. Virginia) prices

File storage
Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with EC2 instances. Amazon EFS supports any number of instances at the same time, and its storage capacity can scale from gigabytes to petabytes of data without needing to provision storage. Amazon EFS is designed for workloads and applications such as big data, media processing workflows, content management, and web serving. Amazon EFS also supports file synchronization capabilities, so that you can efficiently and securely synchronize files from on-premises or cloud file systems to Amazon EFS at speeds of up to five times faster than standard Linux copy tools.

Amazon S3 and Amazon EFS allocate storage based on your usage, and you pay for what you use. However, for EBS volumes you are charged for provisioned (allocated) storage, whether or not you use it. The key to keeping storage costs low without sacrificing required functionality is to maximize the use of Amazon S3 when possible and use more expensive EBS volumes with provisioned I/O only when application requirements demand it.
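A quick worked example, using the US East prices in the table above, shows how quickly provisioned IOPS dominate the bill. A 500 GB io1 volume provisioned with 3,000 IOPS costs 500 × $0.125 = $62.50 for storage plus 3,000 × $0.065 = $195.00 for IOPS, or about $257.50 per month. The same 500 GB on gp2 costs 500 × $0.10 = $50.00 per month and still provides a baseline of 3 IOPS per GB (1,500 IOPS) with burst capability, so io1 is only worth paying for when an application genuinely needs sustained, guaranteed IOPS beyond what gp2 can deliver.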
The following table shows pricing for Amazon EFS.

Amazon EFS Pricing* (per GB-month)
Amazon EFS: $0.30
*Based on US East (N. Virginia) prices

Optimize Amazon S3 Storage
Amazon S3 lets you analyze data access patterns, create inventory lists, and configure lifecycle policies. You can set up rules to automatically move data objects to cheaper S3 storage tiers as objects are accessed less frequently, or to automatically delete objects after an expiration date. To manage storage data most effectively, you can use tagging to categorize your S3 objects and filter on these tags in your data lifecycle policies.

To determine when to transition data to another storage class, you can use Amazon S3 analytics storage class analysis to analyze storage access patterns. Analyze all the objects in a bucket, or use an object tag or common prefix to filter objects for analysis. If you observe infrequent access patterns of a filtered data set over time, you can use the information to choose a more appropriate storage class, improve lifecycle policies, and make predictions around future usage and growth.

Another management tool is Amazon S3 Inventory, which audits and reports on the replication and encryption status of your S3 objects on a weekly or monthly basis. This feature provides CSV output files that list objects and their corresponding metadata, and it lets you configure multiple inventory lists for a single bucket, organized by different S3 metadata tags. You can also query Amazon S3 Inventory using standard SQL with Amazon Athena, Amazon Redshift Spectrum, and other tools such as Presto, Apache Hive, and Apache Spark.

Amazon S3 can also publish storage, request, and data transfer metrics to Amazon CloudWatch. Storage metrics are reported daily, request metrics are available at one-minute intervals for granular visibility, and metrics can be collected and reported for an entire bucket or a subset of objects (selected via prefix or tags).

With all the information these storage management tools provide, you can create lifecycle policies to move less frequently accessed S3 data to cheaper storage tiers for considerable savings; a minimal example of such a policy follows the table below. For example, by moving data from Amazon S3 Standard to Amazon S3 Standard-IA, you can save up to 60% (on a per-gigabyte basis) of Amazon S3 pricing. By moving data that is at the end of its lifecycle and accessed on rare occasions to Amazon Glacier, you can save up to 80% of Amazon S3 pricing.

The following table compares the monthly cost of storing 1 petabyte of content on Amazon S3 Standard versus Amazon S3 Standard-IA (the cost includes the content retrieval fee). It demonstrates that if 10% of the content is accessed per month, the savings would be 41% with Amazon S3 Standard-IA. If 50% of the content is accessed, the savings would be 24%, which is still significant. Even if 100% of the content is accessed per month, you would still save 2% using Amazon S3 Standard-IA.

Comparing 1 Petabyte of Object Storage*
Content Accessed per Month | S3 Standard | S3 Standard-IA | Savings
1 PB, 10% accessed monthly | $24,117 | $14,116 | 41%
1 PB, 50% accessed monthly | $24,117 | $18,350 | 24%
1 PB, 100% accessed monthly | $24,117 | $23,593 | 2%
*Based on US East prices
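As a concrete illustration of the lifecycle rules described above, the following sketch uses the AWS SDK for Python (boto3) to transition objects under an assumed logs/ prefix to Standard-IA after 30 days and to Amazon Glacier after 90 days, and to expire them after roughly seven years. The bucket name, prefix, and time thresholds are illustrative assumptions, not recommendations from this paper.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-archive",            # illustrative placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-and-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},   # warm data
                    {"Days": 90, "StorageClass": "GLACIER"},       # cold data
                ],
                "Expiration": {"Days": 2555},                      # delete after ~7 years
                # Clean up failed multipart uploads, which otherwise accrue storage charges.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)

The same rule structure can filter on object tags instead of a prefix, which pairs naturally with the tagging approach described above.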
Note: There is no charge for transferring data between Amazon S3 storage options as long as they are within the same AWS Region.

To further optimize costs associated with storage and data retrieval, AWS announced the launch of Amazon S3 Select and Amazon Glacier Select. Traditionally, data in object storage had to be accessed as whole entities, regardless of the size of the object. Amazon S3 Select now lets you retrieve a subset of data from an object using simple SQL expressions, which means that your applications no longer have to use compute resources to scan and filter the data from an object. Using Amazon S3 Select, you can potentially improve query performance by up to 400% and reduce query costs by as much as 80%. AWS also supports efficient data retrieval with Amazon Glacier Select, so that you do not have to restore an archived object to find the bytes needed for analytics. With both Amazon S3 Select and Amazon Glacier Select, you can lower your costs and uncover more insights from your data, regardless of what storage tier it's in.

Optimize Amazon EBS Storage
With Amazon EBS, it's important to keep in mind that you are paying for provisioned capacity and performance, even if the volume is unattached or has very low write activity. To optimize storage performance and costs for Amazon EBS, monitor volumes periodically to identify ones that are unattached or appear to be underutilized or overutilized, and adjust provisioning to match actual usage.

AWS offers tools that can help you optimize block storage. Amazon CloudWatch automatically collects a range of data points for EBS volumes and lets you set alarms on volume behavior. AWS Trusted Advisor is another way for you to analyze your infrastructure to identify unattached, underutilized, and overutilized EBS volumes. Third-party tools such as Cloudability can also provide insight into the performance of EBS volumes.

Delete Unattached Amazon EBS Volumes
An easy way to reduce wasted spend is to find and delete unattached volumes. When EC2 instances are stopped or terminated, attached EBS volumes are not automatically deleted and continue to accrue charges because they are still provisioned. To find unattached EBS volumes, look for volumes whose status is "available," which indicates that they are not attached to an EC2 instance. You can also look at network throughput and IOPS to see whether there has been any volume activity over the previous two weeks. If the volume is in a non-production environment, hasn't been used in weeks, or hasn't been attached in a month, there is a good chance you can delete it. Before deleting a volume, store an Amazon EBS snapshot (a backup copy of an EBS volume) so that the volume can be quickly restored later if needed. You can automate the process of deleting unattached volumes by using AWS Lambda functions with Amazon CloudWatch; a minimal sketch appears later in this section.

Resize or Change the EBS Volume Type
Another way to optimize storage costs is to identify volumes that are underutilized and downsize them, or change the volume type. Monitor the read/write access of EBS volumes to determine if throughput is low. If you have a current-generation EBS volume attached to a current-generation EC2 instance type, you can use the elastic volumes feature to change the size or volume type, or (for an io1 SSD volume) adjust IOPS performance, without detaching the volume.
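The two subsections above describe operations that are straightforward to script. The following sketch, using the AWS SDK for Python (boto3), shows one way to do both; the function names, the decision to convert to gp2, and the placeholder parameters are illustrative assumptions rather than guidance from this paper, and the cleanup function is written so it could be deployed as a scheduled AWS Lambda function as suggested above.

import boto3

ec2 = boto3.client("ec2")

def clean_up_unattached_volumes():
    """Snapshot and then delete EBS volumes that are not attached to any instance."""
    paginator = ec2.get_paginator("describe_volumes")
    unattached = paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]   # "available" means unattached
    )
    for page in unattached:
        for volume in page["Volumes"]:
            volume_id = volume["VolumeId"]
            # Keep a backup so the volume can be restored later if needed.
            snapshot = ec2.create_snapshot(
                VolumeId=volume_id,
                Description=f"Pre-deletion backup of {volume_id}",
            )
            ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
            ec2.delete_volume(VolumeId=volume_id)
            # A production version would also check tags, creation time, and recent
            # CloudWatch activity, and would log each action rather than delete silently.

def convert_io1_to_gp2(volume_id, new_size_gb=None):
    """Use elastic volumes (the ModifyVolume API) to switch an io1 volume to gp2 in place."""
    params = {"VolumeId": volume_id, "VolumeType": "gp2"}
    if new_size_gb:
        params["Size"] = new_size_gb     # volumes can grow, but not shrink, in place
    ec2.modify_volume(**params)

def lambda_handler(event, context):
    # Entry point if this cleanup is scheduled with a CloudWatch Events rule.
    clean_up_unattached_volumes()

Note that snapshot completion can take longer than a single Lambda invocation allows for large volumes, so a production version might create snapshots in one run and delete the volumes in a later run once the snapshots have completed.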
The following tips can help you optimize your EBS volumes:

• For General Purpose SSD (gp2) volumes, optimize for capacity so that you're paying only for what you use.
• With Provisioned IOPS SSD (io1) volumes, pay close attention to IOPS utilization rather than throughput, since you pay for IOPS directly. Provision 10–20% above maximum IOPS utilization.
• You can save by reducing provisioned IOPS or by switching from a Provisioned IOPS SSD (io1) volume type to a General Purpose SSD (gp2) volume type.
• If the volume is 500 gigabytes or larger, consider converting to a Cold HDD (sc1) volume to save on your storage rate.
• You can always return a volume to its original settings if needed.

Delete Stale Amazon EBS Snapshots
If you have a backup policy that takes EBS volume snapshots daily or weekly, you will quickly accumulate snapshots. Check for stale snapshots that are over 30 days old and delete them to reduce storage costs. Deleting a snapshot has no effect on the volume. You can use the AWS Management Console or the AWS Command Line Interface (AWS CLI) for this purpose, or third-party tools such as Skeddly.

Storage Optimization is an Ongoing Process
Maintaining a storage architecture that is both right-sized and right-priced is an ongoing process. To get the most efficient use of your storage spend, you should optimize storage on a monthly basis. You can streamline this effort by:

• Establishing an ongoing mechanism for optimizing storage and setting up storage policies
• Monitoring costs closely using AWS cost and reporting tools, such as Cost Explorer, budgets, and detailed billing reports in the Billing and Cost Management console
• Enforcing Amazon S3 object tagging and establishing S3 lifecycle policies to continually optimize data storage throughout the data lifecycle

Conclusion
Storage optimization is the ongoing process of evaluating changes in data storage usage and needs and choosing the most cost-effective and appropriate AWS storage option. For object stores, implement Amazon S3 lifecycle policies to automatically move data to cheaper storage tiers as it is accessed less frequently. For Amazon EBS block stores, monitor your storage usage and resize underutilized (or overutilized) volumes. You also want to delete unattached volumes and stale Amazon EBS snapshots so that you're not paying for unused resources. You can streamline the process of storage optimization by setting up a monthly schedule for this task and taking advantage of the powerful tools from AWS and third-party vendors to monitor storage costs and evaluate volume usage.
|
General
|
consultant
|
Best Practices
|
AWS_Storage_Services_Overview
|
AWS Storage Services Overview A Look at Storage Services Offered by AWS December 2016 Archived This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers© 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind wheth er express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are contr olled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Abstract 6 Introduction 1 Amazon S3 1 Usage Patterns 2 Performance 3 Durability and Availability 4 Scalability and Elasticity 5 Security 5 Interfaces 6 Cost Model 7 Amazon Glacier 7 Usage Patterns 8 Performance 8 Durability and Availability 9 Scalability and Elasticity 9 Security 9 Interfaces 10 Cost Model 11 Amazon EFS 11 Usage Patterns 12 Performance 13 Durability and A vailability 15 Scalability and Elasticity 15 Security 15 Interfaces 16 Cost Model 16 ArchivedAmazon EBS 17 Usage Patterns 17 Performance 18 Durability and Availability 21 Scalability and Elasticity 22 Security 23 Interfaces 23 Cost Model 24 Amazon EC2 Instanc e Storage 24 Usage Patterns 26 Performance 27 Durability and Availability 28 Scalability and Elasticity 28 Security 29 Interfaces 29 Cost Model 30 AWS Storage Gateway 30 Usage Pa tterns 31 Performance 32 Durability and Availability 32 Scalability and Elasticity 32 Security 33 Interfaces 33 Cost Model 34 AWS Snowball 34 Usage Patterns 34 Performance 35 Durability and A vailability 36 ArchivedScalability and Elasticity 36 Security 36 Interfaces 37 Cost Model 38 Amazon CloudFront 39 Usage Patterns 39 Performance 40 Durability and A vailability 40 Scalability and Elasticity 40 Security 41 Interfaces 41 Cost Model 42 Conclusion 42 Contributors 43 References and Further Reading 44 AWS Storage Services 44 Other Re sources 44 ArchivedAbstract Amazon Web Services (AWS) is a flexible costeffective easy touse cloud computing platform This whitepaper is designed to help architects and developers understand the different storage services and features available in the AWS Cloud We provide an overview of each storage service or feature and describe usage patterns performance durability and availability scalability and elasticity security interfaces and the cost model ArchivedAmazon Web Services – AWS Storage Services Overview Page 1 Introduction Amazon Web Services (AWS) provides lowcost data storage with high durability and availability AWS offers storage choices for backup archiving and disaster recovery use cases and provides block file and object storage In this whitepaper we examine the following AWS Cloud storage services and features Amazon Simple Storage Service (Amazon S3) A service that provides scalable and highly durable object storage in the cloud Amazon Glacier A service that provides low cost highly durable archive storage in the cloud Amazon Elastic File System (Amazon EFS) A 
service that provides scalable network file storage for Amazon EC2 instances Amazon Elastic Block Store (Amazon EBS) A service that provides block storage volumes for Amazon EC2 instances Amazon EC2 Instance Storage Temporary block storage volumes for Amazon EC2 instances AWS Storage Gateway An on premises storage appliance that integrates with cloud storage AWS Snowball A service that transports large amounts of data to and from the cloud Amazon CloudFront A service that provides a global content delivery network (C DN) Amazon S3 Amazon Simple Storage Service (Amazon S3) provides developers and IT teams secure durable highly scalable object storage at a very low cost 1 You can store and retrieve any amount of data at any time from anywhere on the web through a simple web service interface You can write read and de lete objects containing from zero to 5 TB of data Amazon S3 is highly scalable allowing concurrent read or write access to data by many separate clients or application threads ArchivedAmazon Web Services – AWS Storage Services Overview Page 2 Amazon S3 offers a range of storage classes designed for different use cases including the following: • Amazon S3 Standard for general purpose storage of frequently accessed data • Amazon S3 Standard Infrequent Access (Standard IA) for long lived but less frequently accessed data • Amazon Glacier for low cost archival data Usage Pat terns There are four common usage patterns for Amazon S3 First Amazon S3 is used to store and distribute static web content and media This content can be delivered directly from Amazon S3 because each object in Amazon S3 has a unique HTTP URL Alternat ively Amazon S3 can serve as an origin store for a content delivery network (CDN) such as Amazon CloudFront The elasticity of Amazon S3 makes it particularly well suited for hosting web content that requires bandwidth for addressing extreme demand spike s Also because no storage provisioning is required Amazon S3 works well for fast growing websites hosting data intensive user generated content such as video and photo sharing sites Second Amazon S3 is used to host entire static websites Amazon S3 provides a lowcost highly available and highly scalable solution including storage for static HTML files images videos and client side scripts in formats such as JavaScript Third Amazon S3 is used as a data store for computation and large scale analytics such as financial transaction analysis clickstream analytics and media transcoding Because of the horizontal scalability of Amazon S3 you can access your data from multiple computing nodes concurrently without being constrained by a single co nnection Finally Amazon S3 is often used as a highly durable scalable and secure solution for backup and archiving of critical data You can easily move cold data to Amazon Glacier using lifecycle management rules on data stored in Amazon S3 You can a lso use Amazon S3 cross region replication to automatically copy objects across S3 buckets in different AWS Regions asynchronously providing disaster recovery solutions for business continuity 2 ArchivedAmazon Web Services – AWS Storage Services Overview Page 3 Amazon S3 doesn’t suit all storage situations The following table presents some storage needs for which you should consider other AWS storage options Storage Need Solution AWS Services File system Amazon S3 uses a flat namespace and isn’t meant to serve as a standalone POSIX compliant file system Instead consider using Amazon EFS as a file system Amazon EFS Structured data with 
query Amazon S3 doesn’t offer query capabilities to retrieve specific objects When you use Amazon S3 you need to know the exact bucket name and key for the files you want to retrieve from the service Amazon S3 can ’t be used as a database or search engine by it self Instead you can pair Amazon S3 with Amazon DynamoDB Amazon CloudSearch or Amazon Relational Data base Service (Amazon RDS) to index and query metadata about Amazon S3 buckets and objects Amazon Dynam oDB Amazon RDS Amazon CloudSearch Rapidly changing data Data that must be updated very frequently might be better served by storage solutions that take into acco unt read and write latencies such as Amazon EBS volumes Amazon RDS Amazon DynamoDB Amazon EFS or relational databases running on Amazon EC2 Amazon EBS Amazon EFS Amazon DynamoDB Amazon RDS Archival data Data that requires encrypted archival storage with infrequent read access with a long recovery time objective (RTO) can be stored in Amazon Glacier more costeffectively Amazon Glacier Dynamic website hosting Although Amazon S3 is ideal for static content websites dynamic websites that depend on database interaction or use serv erside scripting should be hosted on Amazon EC2 or Amazon EFS Amazon EC2 Amazon EFS Performance In scenarios where you use Amazon S3 from within Amazon EC2 in the same Region access to Amazon S3 from Amazon EC2 is designed to be fast Amazon S3 is also designed so that server side latencies are insignificant relative to Internet latencies In additi on Amazon S3 is built to scale storage requests and numbers of users to support an extremely large number of web scale applications If you access Amazon S3 using multiple threads multiple applications or multiple clients concurrently total Amazon S3 aggregate throughput typically scales to rates that far exceed what any single server can generate or consume ArchivedAmazon Web Services – AWS Storage Services Overview Page 4 To improve the upload performance of large objects (typically over 100 MB) Amazon S3 offers a multipart upload command to upload a single object as a set of parts 3 After all parts of your object are uploaded Amazon S3 assembles these parts and creates the object Using multipart upload you can get improved throughput and quick recovery from any network issues Another benefit of using multipart upload is that you can upload multiple parts of a single object in parallel and restart the upload of smaller parts instead of restarting the upload of the entire larg e object To speed up access to relevant data many developers pair Amazon S3 with a search engine such as Amazon CloudSearch or a database such as Amazon DynamoDB or Amazon RDS In these scenarios Amazon S3 stores the actual information and the search e ngine or database serves as the repository for associated metadata (for example the object name size keywords and so on) Metadata in the database can easily be indexed and queried making it very efficient to locate an object’s reference by using a se arch engine or a database query This result can be used to pinpoint and retrieve the object itself from Amazon S3 Amazon S3 Transfer Acceleration enables fast easy and secure transfer of files over long distances between your client and your Amazon S3 bucket It leverages Amazon CloudFront globally distributed edge locations to route traffic to your Amazon S3 bucket over an Amazon optimized network path To get started with Amazon S3 Transfer Acceleration you first must enable it on an Amazon S3 bucket Then modify your Amazon S3 PUT and GET 
requests to use the s3 accelerate endpoint domain name (<bucketname>s3 accelerateamazonawscom) The Amazon S3 bucket can still be accessed using the regular endpoint Some customers have measured performance impro vements in excess of 500 percent when performing intercontinental uploads Durability and Availability Amazon S3 Standard storage and Standard IA storage provide high level s of data durability and availability by automatically and synchronously storing your data across both multiple devices and multiple facilities within your selected geographical region Error correction is built in and there are no single points of failure Amazon S3 is designed to sustain the concurrent loss of data in two facilities making it very well suited to serve as the primary data storage for ArchivedAmazon Web Services – AWS Storage Services Overview Page 5 mission critical data In fact Amazon S3 is designed for 99999999999 percent (11 nines) durability per o bject and 9999 percent availability over a one year period Additionally you have a choice of enabling cross region replication on each Amazon S3 bucket Once enabled cross region replication automatically copies objects across buckets in different AWS Regions asynchronously providing 11 nines of durability and 4 nines of availability on both the source and destination Amazon S3 objects Scalability and Elasticity Amazon S3 has been designed to offer a very high level of automatic scalability and elasti city Unlike a typical file system that encounters issues when storing a large number of files in a directory Amazon S3 supports a virtually unlimited number of files in any bucket Also unlike a disk drive that has a limit on the total amount of data th at can be stored before you must partition the data across drives and/or servers an Amazon S3 bucket can store a virtually unlimited number of bytes You can store any number of objects (files) in a single bucket and Amazon S3 will automatically manage s caling and distributing redundant copies of your information to other servers in other locations in the same Region all using Amazon’s high performance infrastructure Security Amazon S3 is highly secure It provides multiple mechanisms for fine grained control of access to Amazon S3 resources and it supports encryption You can manage access to Amazon S3 by granting other AWS accounts and users permission to perform the resource operations by writing an access policy 4 You can protect Amazon S3 data at rest by using serve rside encryption 5 in which you request Amazon S3 to encrypt your object before it’s written to disks in data centers and decrypt it when you download the object or by using client side encryption 6 in which you encrypt your data on the client side and upload the encrypted data to Amazon S3 You can protect the data in transit by using Secure Sockets Layer (SSL) or client side encryption ArchivedAmazon Web Services – AWS Storage Services Overview Page 6 You can use versioning to preserve retrieve and restore every version of every object stored in your Amazon S3 bucket With versioning you can easily recover from both unintended user actions and application failures Additionally you can add an optional layer of security by enabling Multi Factor Authentication (MFA) Delete for a bucket 7 With this option enabled for a bucket two forms of authentication are re quired to change the versioning state of the bucket or to permanently delete an object version: valid AWS account credentials plus a six digit code (a single use time based password) from 
a physical or virtual token device To track requests for access t o your bucket you can enable access logging 8 Each access log record provides details about a single access request such as the requester bucket name request time request action response status and error code if any Access log information can be useful in security and access audits It can al so help you learn about your customer base and understand your Amazon S3 bill Interfaces Amazon S3 provides standards based REST web service application program interfaces (APIs) for both management and data operations These APIs allow Amazon S3 objects to be stored in uniquely named buckets (top level folders) Each object must have a unique object key (file name) that serves as an identifier for the object within that bucket Although Amazon S3 is a web based object store with a flat naming structure ra ther than a traditional file system you can easily emulate a file system hierarchy (folder1/folder2/file) in Amazon S3 by creating object key names that correspond to the full path name of each file Most developers building applications on Amazon S3 use a higher level toolkit or software development kit (SDK) that wraps the underlying REST API AWS SDKs are available for Android Browser iOS Java NET Nodejs PHP Python Ruby and Go The integrated AWS Command Line Interface (AWS CLI) also provides a set of high level Linux like Amazon S3 file commands for common operations such as ls cp mv sync and so on Using the AWS CLI for Amazon S3 you can perform recursive uploads and downloads using a single folder level Amazon S3 command and also per form parallel transfers You can also use the AWS CLI for command line access to the low level Amazon S3 API Using the AWS Management Console you can easily create and manage Amazon S3 buckets ArchivedAmazon Web Services – AWS Storage Services Overview Page 7 upload and download objects and browse the contents of your S3 buckets using a simple web based user interface Additionally you can use the Amazon S3 notification feature to receive notifications when certain events happen in your bucket Currently Amazon S3 can publish events when an object is uploaded or when an object is deleted Notifications can be issued to Amazon Simple Notification Service (SNS) topics 9 Amazon Simple Queue Service (SQS) queues 10 and AWS Lambda functions 11 Cost Model With Amazon S3 you pay only for the storage you actually use There is no minimum fee and no setup cost Amazon S3 Standard has three pricing components: storage (per GB per month) data tran sfer in or out (per GB per month) and requests (per thousand requests per month) For new customers AWS provides the AWS Free Tier which includes up to 5 GB of Amazon S3 storage 20000 get requests 2000 put requests and 15 GB of data transfer out each month for one year for free 12 You can find pricing information at the Amazon S3 pricing page 13 There are Data Transfer IN and OUT fees if you enable Amazon S3 Transfer Acceleration on a bucket and the transfer performance is faster than regular Amazon S3 transfer If we determine that Transfer Acceleration is not likely to be faster than a regular Amazon S3 transfer of the same object to the same destination we will not charge for that use of Transfer Acceleration for that transfer and may bypass the Transfer Acceleration system for that upload Amazon Glacier Amazon Glacier is an extremely low cost storage service that provides highly secure durable and flexible storage for data archiving and online backup 14 With Amazon Glacier you 
can reliably store your data for as little as $0007 per gigabyte per month Amazon Glacie r enables you to offload the administrative burdens of operating and scaling storage to AWS so that you don’t have to worry about capacity planning hardware provisioning data replication hardware failure detection and repair or time consuming hardware migrations You store data in Amazon Glacier as archives An archive can represent a single file or you can combine several files to be uploaded as a single archive ArchivedAmazon Web Services – AWS Storage Services Overview Page 8 Retrieving archives from Amazon Glacier requires the initiation of a job You organize yo ur archives in vaults Amazon Glacier is designed for use with other Amazon web services You can seamlessly move data between Amazon Glacier and Amazon S3 using S3 data lifecycle policies Usage Patterns Organizations are using Amazon Glacier to support a number of use cases These use cases include archiving offsite enterprise information media assets and research and scientific data and also performing digital preservation and magnetic tape replacement Amazon Glacier doesn’t suit all storage situatio ns The following table presents a few storage needs for which you should consider other AWS storage options Storage Need Solution AWS Services Rapidly changing data Data that must be updated very frequently might be better served by a storage solution w ith lower read/write latencies such as Amazon EBS Amazon RDS Amazon EFS Amazon DynamoDB or relational databases running on Amazon EC2 Amazon EBS Amazon RDS Amazon EFS Amazon DynamoDB Amazon EC2 Immediate access Data stored in Amazon Glacier is not available immediately Retrieval jobs typically require 3 –5 hours to complete so if you need immediate access to your object data Amazon S3 is a better choice Amazon S3 Performance Ama zon Glacier is a low cost storage service designed to store data that is infrequently accessed and long lived Amazon Glacier retrieval jobs typically complete in 3 to 5 hours You can improve the upload experience for larger archives by using multipart upload for archives up to about 40 TB (the single archive limit) 15 You can upload separate parts of a large archive independently in any order and in parallel t o improve the upload experience for larger archives You can even perform range retrievals on archives stored in Amazon Glacie r by specifying a range or portion ArchivedAmazon Web Services – AWS Storage Services Overview Page 9 of the archive 16 Specifying a range of bytes for a retrieval can help control bandwidth costs manage your data downloads and retrieve a targeted part of a large archive Durability and Availability Amazon Glacier is designed to provide average annual durability of 9999 9999999 percent (11 nines) for an archive The service redundantly stores data in multiple facilities and on multiple devices within each facility To increase durability Amazon Glacier synchronously stores your data across multiple facilities before retu rning SUCCESS on uploading an archive Unlike traditional systems which can require laborious data verification and manual repair Amazon Glacier performs regular systematic data integrity checks and is built to be automatically self healing Scalability and Elasticity Amazon Glacier scales to meet growing and often unpredictable storage requirements A single archive is limited to 40 TB in size but there is no limit to the total amount of data you can store in the service Whether you’re storing petabyt es or gigabytes Amazon Glacier 
automatically scales your storage up or down as needed Security By default only you can access your Amazon Glacier data If other people need to access your data you can set up data access control in Amazon Glacier by usi ng the AWS Identity and Access Management (IAM) service 17 To do so simply create an IAM policy that specifies which account users have rights to operations on a given vault Amazon Glacier uses server side encr yption to encrypt all data at rest Amazon Glacier handles key management and key protection for you by using one of the strongest block ciphers available 256 bit Advanced Encryption Standard (AES 256) Customers who want to manage their own keys can enc rypt data prior to uploading it ArchivedAmazon Web Services – AWS Storage Services Overview Page 10 Amazon Glacier allows you to lock vaults where long term records retention is mandated by regulations or compliance rules You can set compliance controls on individual Amazon Glacier vaults and enforce these by using locka ble policies For example you might specify controls such as “undeletable records” or “time based data retention” in a Vault Lock policy and then lock the policy from future edits After it’s locked the policy becomes immutable and Amazon Glacier enforces the prescribed controls to help achieve your compliance objectives To help monitor data access Amazon Glacier is integrated with AWS CloudTrail allowing any API calls made to Amazon Glac ier in your AWS account to be captured and stored in log files that are delivered to an Amazon S3 bucket that you specify 18 Interfaces There are two ways to use Amazon Glacier each with its own interfaces The Amazon Glacier API provides both management and data operations First Amazon Glacier provides a native standards based REST web services interface This interface can be accessed using the Java SDK or the NET SDK You can use the AWS Management Console or Amazon Glacier API actions to create vau lts to organize the archives in Amazon Glacier You can then use the Amazon Glacier API actions to upload and retrieve archives to monitor the status of your jobs and also to configure your vault to send you a notification through Amazon SNS when a job is complete Second Amazon Glacier can be used as a storage class in Amazon S3 by using object lifecycle management that provides automatic policy driven archiving from Amazon S3 to Amazon Glacier You simply se t one or more lifecycle rules for an Amazon S3 bucket defining what objects should be transitioned to Amazon Glacier and when You can specify an absolute or relative time period (including 0 days) after which the specified Amazon S3 objects should be transitioned to Amazon Glacier The Amazon S3 API includes a RESTORE operation The retrieval process from Amazon Glacier using RESTORE takes three to five hours the same as other Amazon Glacier retrievals Retrieval puts a copy of the retrieved object in Am azon S3 Reduced Redundancy Storage (RRS) for a specified retention period The original archived object ArchivedAmazon Web Services – AWS Storage Services Overview Page 11 remains stored in Amazon Glacier For more information on how to use Amazon Glacier from Amazon S3 see the Object Lifecycle Management section of the Amazon S3 Developer Guide 19 Note that when using Amazon Glacier as a storage class in Amazon S3 you use the Amazon S3 API and when using “native” Amazon Glacier you use the Amazon Glacier API For example objects archived to Amazon Glacier using Amazon S3 lifecycle policies can only be listed and retrieved by 
using the Amazon S3 API or the Amazon S3 console You can ’t see them as archives in an Amazon Glacier vault Cost Model With Amazon Glacier you pay only for what you use and there is no minimum fee In normal use Amazon Glacier has three pricing components: storage (per GB per month) data transfer out (per GB per month) and requests (per thousand UPLOAD and R ETRIEVAL requests per month) Note that Amazon Glacier is designed with the expectation that retrievals are infrequent and unusual and data will be stored for extended periods of time You can retrieve up to 5 percent of your average monthly storage (pror ated daily) for free each month If you retrieve more than this amount of data in a month you are charged an additional (per GB) retrieval fee A prorated charge (per GB) also applies for items deleted prior to 90 days’ passage You can find pricing infor mation at the Amazon Glacier pricing page 20 Amazon EFS Amazon Elastic File System (Amazon EFS) delivers a simple scalable elastic highly available and highly durable network file system as a service to EC2 instances 21 It supports Network File System versions 4 (NFSv4) and 41 (NFSv41) which makes it easy to migrate enterprise applications to AWS or build new ones We recommend clients run NFSv41 to take advantage of the many performance benefits found in the latest version including scalability and parallelism You can create and configure file systems quickly and easily through a simple web services interface You don’t need to provision storag e in advance and there is no minimum fee or setup cost —you simply pay for what you use Amazon EFS is designed to provide a highly scalable network file system that can grow to petabytes which allows massively parallel access from EC2 instances to ArchivedAmazon Web Services – AWS Storage Services Overview Page 12 your da ta within a Region It is also highly available and highly durable because it stores data and metadata across multiple Availability Zones in a Region To understand Amazon EFS it is best to examine the different components that allow EC2 instances access to EFS file systems You can create one or more EFS file systems within an AWS Region Each file system is accessed by EC2 instances via mount targets which are created pe r Availability Zone You create one mount target per Availability Zone in the VPC you create using Amazon Virtual Private Cloud Traffic flow between Amazon EFS and EC2 instances is controlled using security groups associated with the EC2 instance and the EFS mount targets Access to EFS file system objects (files and directories) is controlled using standard Unix style read/write/execute permissions based on user and group IDs You can find more information about how EFS works in the Amazon EFS User Guide 22 Usage Patterns Amazon EFS is designed to meet the needs of multi threaded applications and applications that concurrently access data from multiple EC2 instances and that require substantial levels of aggregate throughput and input/output operations per second (IOPS) Its distributed design enables high levels of availability durability and scalability which results in a small latency overhead for each file operation Because o f this per operation overhead overall throughput generally increases as the average input/output (I/O) size increases since the overhead is amortized over a larger amount of data This makes Amazon EFS ideal for growing datasets consisting of larger files that need both high performance and multi client access Amazon EFS supports highly 
parallelized workloads and is designed to meet the performance needs of big data and analytics media processing content management web serving and home directories Amazon EFS doesn’t suit all storage situations The following table presents some storage needs for which you should consider other AWS storage options Storage Need Solution AWS Services Archival data Data that requires encrypted archival storage with infrequent read access with a long recovery time objective (RTO) can be stored in Amazon Glacier more costeffectively Amazon Glacier ArchivedAmazon Web Services – AWS Storage Services Overview Page 13 Storage Need Solution AWS Services Relational database storage In most cases relational databases require storage that is mounted accessed and locked by a single node (EC2 instance etc) When running relational databases on AWS look at leveraging Amazon RDS or Amazon EC2 with Amazon EBS PIOPS volumes Amazon RDS Amazon EC2 Amazon EBS Temporary storage Consider using local instance store volumes for needs such as scratch disks buffers queues and caches Amazon EC2 Local Instance Store Performance Amazon EFS file systems are distributed across an unconstrained number of storage servers e nabling file systems to grow elastically to petabyte scale and allowing massively parallel access from EC2 instances within a Region This distributed data storage design means that multi threaded applications and applications that concurrently access dat a from multiple EC2 instances can drive substantial levels of aggregate throughput and IOPS There are two different performance modes available for Amazon EFS: General Purpose and Max I/O General Purpose performance mode is the default mode and is approp riate for most file systems However i f your overall Amazon EFS workload will exceed 7000 file operations per second per file system we recommend the files system use Max I/O performance mode Max I/O performance mode is optimized for applications where tens hundreds or thousands of EC2 instances are accessing the file system With this mode file systems scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for file operations Due to the spiky nature of file based workloads Amazon EFS is optimized to burst at high throughput levels for short periods of time while delivering low levels of throughput the rest of the time A credit system determines when an Amazon EFS file system can burst Over time each file system earns burst credits at a baseline rate determined by the size of the file system and uses these credits whenever it reads or writes data A file system can drive throughput continuously at its baseline rate It accumulates c redits during periods of inactivity or when throughput is below its baseline rate These accumulated burst credits allow a file system to drive throughput above its baseline rate The file system can continue to drive throughput above its baseline rate as long as it has a positive burst credit ArchivedAmazon Web Services – AWS Storage Services Overview Page 14 balance You can see the burst credit balance for a file system by viewing the BurstCreditBalance metric in Amazon CloudWatch 23 Newly created file systems start with a credit balance of 21 TiB with a baseline rate of 50 MiB/s per TiB of storage and a burst rate of 100 MiB/s The following list describes some examples of bursting behaviors for file systems of different sizes File system size (GiB) Baseline aggregate throughput (MiB/s) Burst aggregate throughput 
(MiB/s) Maximum burst duration (hours) % of time file system can burst 10 05 100 60 05% 256 125 100 69 125% 512 250 100 80 250% 1024 500 100 120 500% 1536 750 150 120 500% 2048 1000 200 120 500% 3072 1500 300 120 500% 4096 2000 400 120 500% Here are a few recommendations to get the most performance out of your Amazon EFS file system Because of the distributed architecture of Amazon EFS larger I/O workloads generally experience higher throughput EFS file systems can be mounted by thousands of EC2 instances concurrently If your application is parallelizable across multiple instances you can drive higher throughput levels on your file system in aggregate across instances If your application can handle asynchronous writes to your file system and you’re able to trade off consistency for speed enabling asynchronous writes may improve performance We recommend Linux kernel version 4 or later and NFSv41 for all clients accessing EFS file systems When mounting EFS file systems use the mount o ptions recommended in the Mounting File Systems and Additional Mounting Considerations sections of the Amazon EFS User Guide 24 25 ArchivedAmazon Web Services – AWS Storage Services Overview Page 15 Durability and Availability Amazon EFS is designed to be highly durable and highly available Each Amazon EFS file system object ( such as a directory file or link) is redundantly stored across multiple Availabilit y Zones within a Region Amazon EFS is designed to be as highly durable and available as Amazon S3 Scalability and Elasticity Amazon EFS automatically scales your file system storage capacity up or down as you add or remove files without disrupting your a pplications giving you just the storage you need when you need it and while eliminating time consuming administration tasks associated with traditional storage management ( such as planning buying provisioning and monitoring) Your EFS file system can grow from an empty file system to multiple petabytes automatically and there is no provisioning allocating or administration Security There are three levels of access control to consider when planning your EFS file system security: IAM permissions for API calls; security groups for EC2 instances and mount targets; and Network File System level users groups and permissions IAM enables access control for administering EFS file systems allowing you to specify an IAM identity ( either an IAM user or IAM role) so you can create delete and describe EFS file system resources The primary resource in Amazon EFS is a file system All other EFS resources such as mount targets and tags are referred to as subresources Identity based policies like IAM polic ies are used to assign permissions to IAM identities to manage the EFS resources and subresources Security groups play a critical role in establishing network connectivity between EC2 instances and EFS file systems You associate one security group with an EC2 instance and another security group with an EFS mount target associated with the file system These sec urity groups act as firewalls and enforce rules that define the traffic flow between EC2 instances and EFS file systems EFS file system objects work in a Unix style mode which defines permissions needed to perform actions on objects Users and groups are mapped to numeric ArchivedAmazon Web Services – AWS Storage Services Overview Page 16 identifiers which are mapped to EFS users to represent file ownership Files and directories within Amazon EFS are owned by a single owner and a single group Amazon EFS uses these 
numeric IDs to check permissions when a user attempts to access a file system object For more information about Amazon EFS security see the Amazon EFS User Guide 26 Interfaces Amazon offers a network protocol based HTTP (RFC 2616) API for managing Amazon EFS as well as support ing for EFS operations within the AWS SDKs and the AWS CLI The API actions and EFS operations are used to create delete and describe file systems; crea te delete and describe mount targets; create delete and describe tags; and describe and modify mount target security groups If you prefer to work with a graphical user interface the AWS Management Console gives you all the capabilities of the API in a browser interface EFS file systems use Network File System version 4 (NFSv4) and version 41 (NFSv41) for data access We recommend using NFSv41 to take advantage of the performance benefits in the latest version including scalability and parallelis m Cost Model Amazon EFS provides the capacity you need when you need it without having to provision storage in advance It is also designed to be highly available and highly durable as each file system object ( such as a directory file or link) is redu ndantly stored across multiple Availability Zones This highly durable highly available architecture is built into the pricing model and you only pay for the amount of storage you put into your file system As files are added your EFS file system dynami cally grows and you only pay for the amount of storage you use As files are removed your EFS file system dynamically shrinks and you stop paying for the data you deleted There are no charges for bandwidth or requests and there are no minimum commitme nts or up front fees You can find pricing information for Amazon EFS at the Amazon EF S pricing page 27 ArchivedAmazon Web Services – AWS Storage Services Overview Page 17 Amazon EBS Amazon Elastic Block Store (Amazon EBS) volumes provide durable block level storage for use with EC2 instances 28 Amazon EBS volumes are network attached storage that persists independently from the running life of a single EC2 instance After an EBS volume is attached to an EC2 instance you can use t he EBS volume like a physical hard drive typically by formatting it with the file system of your choice and using the file I/O interface provided by the instance operating system Most Amazon Machine Images (AMIs) are backed by Amazon EBS and use an EBS volume to boot EC2 instance s You can also attach multiple EBS volumes to a single EC2 instance Note however that any single EBS volume can be attached to only one EC2 instance at any time EBS also provides the ability to create point intime snapshots of volumes which are stored in Amazon S3 These snapshots can be used as the starting point for new EBS volumes and to protect data for long term durability To learn more about Amazon EBS durability see the EBS Durability and Availability section of this whitepaper The same snapshot can be used to instantiate as many volumes as you want These snapshots can be copied across AWS Regions making it easier to leverage multiple AWS Regions for geographical expansion data center migration and disaster recovery Sizes for EBS volumes range from 1 GiB to 16 TiB depending on the volume type and are allocated in 1 GiB increments You can find information about Amazon EBS previous generation Magne tic volumes at the Amazon EBS Previous Generation Volumes page 29 Usage Patterns Amazon EBS is meant for data that changes relatively frequently and needs to persist beyond the life of 
EC2 ins tance Amazon EBS is well suited for use as the primary storage for a database or file system or for any application or instance (operating system) that requires direct access to raw block level storage Amazon EBS provides a range of options that allow y ou to optimize storage performance and cost for your workload These options are divided into two major categories: solid state drive ( SSD )backed storage for transactional workloads such as databases and boot volumes (performance depends primarily on IOPS ) and hard disk drive ( HDD )backed storage for throughput intensive workloads such as big data data warehouse and log processing (performance depends primarily on MB/s) ArchivedAmazon Web Services – AWS Storage Services Overview Page 18 Amazon EBS doesn’t suit all storage situations The following table presents some storage needs for which you should consider other AWS storage options Storage Need Solution AWS Services Temporary storage Consider using local instance store volumes for needs such as scratch disks buffers queues and caches Amazon Local Instance Store Multi instance storage Amazon EBS volumes can only be attached to one EC2 instance at a time If you need multiple EC2 instances accessing vo lume data at the same time consider using Amazon EFS as a file system Amazon EFS Highly durable storage If you need very highly durable storage use S3 or Amazon EFS Amazon S3 Standard storage is designed for 99999999999 percent (11 nines) annual durability per object You can even decide to take a snapshot of the EBS volumes Such a snapshot then gets saved in Amazon S3 thus providing you the durability of Amazon S3 For more information on EBS durability see the Durability and Availability section EFS is designed for high durability and high availability with data stored in multiple Availability Zones within an AWS Region Amazon S3 Amazon EFS Static data or web content If your data doesn’t change that often Amazon S3 might represent a more cost effective and scalable solution for storing this fixed information Also web content served out of Amazon EBS requires a web server running on Amazon EC2; in contrast you can deliver web content directly out of Amazon S3 or from multiple EC2 instances using Amazon EFS Amazon S3 Amazon EFS Performance As described previously Amazon EBS provides a range of volume types that are divided into two major categories: SSD backed storage volumes and HDD backed storage volumes SSD backed storage volumes offer great price/performance characteristics for random small block workloads such as transactional applications whereas HDD backed storage volumes offer the best price/performance characteristics for large block sequential workloads You can attach and stripe data across multiple volumes of any type to increase the I/O performance available to your Amazon EC2 applications The following table presents the storage characteristics of the current generat ion volume types ArchivedAmazon Web Services – AWS Storage Services Overview Page 19 SSDBacked Provisioned IOPS (io1) SSDBacked General Purpose (gp2)* HDD Backed Throughput Optimized (st1) HDD Backed Cold (sc1) Use Cases I/Ointensive NoSQL and relational databases Boot volumes lowlatency interactive apps dev & test Big data data warehouse log processing Colder data requiring fewer scans per day Volume Size 4 GiB – 16 TiB 1 GiB – 16 TiB 500 GiB – 16 TiB 500 GiB – 16 TiB Max IOPS** per Volume 20000 10000 500 250 Max Throughput per Volume 320 MiB/s 160 MiB/s 500 MiB/s 250 MiB/s Max IOPS per Instance 65000 65000 
65000 65000 Max Throughput per Instance 1250 MiB/s 1250 MiB/s 1250 MiB/s 1250 MiB/s Dominant Performance Attribute IOPS IOPS MiB/s MiB/s *Default volume type **io1/gp2 based on 16 KiB I/O; st1/sc1 based on 1 MiB I/O General Purpose SSD (gp2) volumes offer cost effective storage that is ideal for a broad range of workloads These volumes deliver single digit millisecond latencies the ability to burst to 3000 IOPS for extended periods of time and a baseline performance of 3 IOPS/GiB up to a maximum of 10000 IOPS (at 3334 GiB) The gp2 volumes can range in size from 1 GiB to 16 TiB These volumes have a throughput limit range of 128 MiB/second for volumes less than or equal to 170 GiB; for volumes over 170 GiB this limit increases at the ra te of 768 KiB/second per GiB to a maximum of 160 MiB/second (at 214 GiB and larger) You can see the percentage of I/O credits remaining in the burst buckets for gp2 volumes by viewing the Burst Balance metric in Amazon CloudWatch 30 Provisioned IOPS SSD (io1) volumes are designed to deliver predictable high performance for I/O intensive workloads with small I/O size where the dominant performance attribute is IOPS such as database workloads that are sensitive to ArchivedAmazon Web Services – AWS Storage Services Overview Page 20 storage performance and consistency in random access I/O throughput You specify an IOPS rate when creating a volume an d then Amazon EBS delivers within 10 percent of the provisioned IOPS performance 999 percent of the time over a given year when attached to an EBS optimized instance The io1 volumes can range in size from 4 G iB to 16 T iB and you can provision up to 20 000 IOPS per volume The ratio of IOPS provisioned to the volume size requested can be a maximum of 50 For example a volume with 5000 IOPS must be at least 100 GB in size Throughput Optimized HDD (st1) volumes are ideal for frequently accessed through putintensive workloads with large datasets and large I/O sizes where the dominant performance attribute is throughput (MiB/s) such as streaming workloads big data data warehouse log processing and ETL workloads These volumes deliver performance in terms of throughput measured in M iB/s and include the ability to burst up to 250M iB/s per T iB with a baseline throughput of 40M iB/s per T iB and a maximum throughput of 500M iB/s per volume The st1 volumes are designed to deliver the expected throughput performance 99 percent of the time and has enough I/O credits to support a full volume scan at the burst rate The st1 volumes can’t be used as boot volumes You can see the throughput credits remaining in the burst bucket for st1 vol umes by viewing the Burst Balance metric in Amazon CloudWatch 31 Cold HDD (sc1) volumes provide the lowest cost per G iB of all EBS volume types These are ideal for infrequently accessed workloads with large cold datasets with large I/O sizes where the dominant performance attribute is throughput (MiB/s) Similarly to st1 sc1 volumes provide a burst model and can burst up to 80 MiB/s per TiB with a basel ine throughput of 12 M iB/s per T iB and a maximum throughput of 250 MB/s per volume The sc1 volumes are designed to deliver the expected throughput performance 99 percent of the time and have enough I/O credits to support a full volume scan at the burst r ate The sc1 volumes can’t be used as boot volumes You can see the throughput credits remaining in the burst bucket for s c1 volumes by viewing the Burst Balance metric in CloudWatch 32 Because all EBS volumes are network attached devices other 
Throughput Optimized HDD (st1) volumes are ideal for frequently accessed, throughput-intensive workloads with large datasets and large I/O sizes, where the dominant performance attribute is throughput (MiB/s), such as streaming workloads, big data, data warehouse, log processing, and ETL workloads. These volumes deliver performance in terms of throughput, measured in MiB/s, and include the ability to burst up to 250 MiB/s per TiB, with a baseline throughput of 40 MiB/s per TiB and a maximum throughput of 500 MiB/s per volume. The st1 volumes are designed to deliver the expected throughput performance 99 percent of the time and have enough I/O credits to support a full-volume scan at the burst rate. The st1 volumes can't be used as boot volumes. You can see the throughput credits remaining in the burst bucket for st1 volumes by viewing the Burst Balance metric in Amazon CloudWatch.31

Cold HDD (sc1) volumes provide the lowest cost per GiB of all EBS volume types. These are ideal for infrequently accessed workloads with large, cold datasets and large I/O sizes, where the dominant performance attribute is throughput (MiB/s). Similarly to st1, sc1 volumes provide a burst model and can burst up to 80 MiB/s per TiB, with a baseline throughput of 12 MiB/s per TiB and a maximum throughput of 250 MiB/s per volume. The sc1 volumes are designed to deliver the expected throughput performance 99 percent of the time and have enough I/O credits to support a full-volume scan at the burst rate. The sc1 volumes can't be used as boot volumes. You can see the throughput credits remaining in the burst bucket for sc1 volumes by viewing the Burst Balance metric in CloudWatch.32

Because all EBS volumes are network-attached devices, other network I/O performed by an EC2 instance, as well as the total load on the shared network, can affect the performance of individual EBS volumes. To enable your EC2 instances to maximize the performance of EBS volumes, you can launch selected EC2 instance types as EBS-optimized instances. Most of the latest-generation EC2 instances (m4, c4, x1, and p2) are EBS-optimized by default. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with speeds between 500 Mbps and 10,000 Mbps depending on the instance type. When attached to EBS-optimized instances, Provisioned IOPS volumes are designed to deliver within 10 percent of the provisioned IOPS performance 99.9 percent of the time within a given year.

Newly created EBS volumes receive their maximum performance the moment they are available, and they don't require initialization (formerly known as prewarming). However, you must initialize the storage blocks on volumes that were restored from snapshots before you can access those blocks.33

Using Amazon EC2 with Amazon EBS, you can take advantage of many of the same disk performance optimization techniques that you use with on-premises servers and storage. For example, by attaching multiple EBS volumes to a single EC2 instance, you can partition the total application I/O load by allocating one volume for database log data, one or more volumes for database file storage, and other volumes for file system data. Each separate EBS volume can be configured as EBS General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD), or Cold (HDD) as needed. Some of the best price/performance-balanced workloads take advantage of different volume types on a single EC2 instance; for example, Cassandra using General Purpose (SSD) volumes for data but Throughput Optimized (HDD) volumes for logs, or Hadoop using General Purpose (SSD) volumes for both data and logs. Alternatively, you can stripe your data across multiple similarly provisioned EBS volumes using RAID 0 (disk striping) or logical volume manager software, thus aggregating available IOPS, total volume throughput, and total volume size.

Durability and Availability

Amazon EBS volumes are designed to be highly available and reliable. EBS volume data is replicated across multiple servers in a single Availability Zone to prevent the loss of data from the failure of any single component. Taking snapshots of your EBS volumes increases the durability of the data stored on them. EBS snapshots are incremental, point-in-time backups containing only the data blocks changed since the last snapshot. EBS volumes are designed for an annual failure rate (AFR) of between 0.1 and 0.2 percent, where failure refers to a complete or partial loss of the volume, depending on the size and performance of the volume. This means that if you have 1,000 EBS volumes over the course of a year, you can expect unrecoverable failures with 1 or 2 of your volumes. This AFR makes EBS volumes 20 times more reliable than typical commodity disk drives, which fail with an AFR of around 4 percent. Despite these very low EBS AFR numbers, we still recommend that you create snapshots of your EBS volumes to improve the durability of your data. The Amazon EBS snapshot feature makes it easy to take application-consistent backups of your data. For more information on EBS durability, see the Amazon EBS Availability and Durability section of the Amazon EBS Product Details
page 34 To maximize both durability and availability of Amazon EBS data you should create snapshots of your EBS volumes frequently (For application consistent backups we recommend b riefly pausing any write operations to the volume or unmounting the volume while you issue the snapshot command You can then safely continue to use the volume while the snapshot is pending completion) All EBS volume types offer durable snapshot capabil ities and are designed for 99999 percent availability If your EBS volume does fail all snapshots of that volume remain intact and you can recreate your volume from the last snapshot point Because an EBS volume is created in a particular Availability Z one the volume will be unavailable if the Availability Zone itself is unavailable A snapshot of a volume however is available across all of the Availability Zones within a Region and you can use a snapshot to create one or more new EBS volumes in any Availability Zone in the region EBS snapshots can also be copied from one Region to another and can easily be shared with other user accounts Thus EBS snapshots provide an easy touse disk clone or disk image mechanism for backup sharing and disaster recovery Scalability and Elasticity Using the AWS Management Console or the Amazon EBS API you can easily and rapidly provision and release EBS volumes to scale in and out with your total storage demands The simplest approach is to create and attach a new EBS volume and begin using it together with your existing ones However if you need to expand the size of a single EBS volume you can effectively resize a volume using a snapshot: 1 Detach the original EBS volume 2 Create a snapshot of the original EBS volume’s data in Amazon S3 ArchivedAmazon Web Services – AWS Storage Services Overview Page 23 3 Create a new EBS volume from the snapshot but specify a larger size than the original volume 4 Attach the new larger volume to your EC2 instance in place of the original (In many cases an OS level utility must also be used to expand the file system) 5 Delete the original EBS volume Security IAM enables access control for your EBS volumes allowing you to specify who can access which EBS volumes EBS encryption enables data atrest and data inmotion security It offers seamless encryption of both EBS boot volumes and data volumes as well as snapshots eliminating the need to build and manage a secure key management infrastructure These encryption keys are Amazon managed or keys that you create and manage using t he AWS Key Management Service (AWS KMS) 35 Data inmotion security occurs on the servers that host EC2 instances providing encryption of data as it moves between EC2 instances and EBS volumes Access control plu s encryption offers a strong defense indepth security strategy for your data For more information see Amazon EBS Encryption in the Amazon EBS User Guide 36 Interfaces Amazon offers a REST management API for Amazon EBS as well as support for Amazon EBS operations within both the AWS SDKs and the AWS CLI The API actions and EBS operations are used to create delete describe attach and detach EBS volumes for your EC2 instances; to create delete and describe snapshots from Amazon EBS to Amazon S3; and to copy snapshots from one region to another If you prefer to work with a graphical user interface the AWS Management Console gives you all the capabilities of the API in a browser interface Regardless of how you create your EBS volume note that all storage is allocated at the time of volume creation and that you are charged for 
this allocated storage even if you don’t write data to it ArchivedAmazon Web Services – AWS Storage Services Overview Page 24 Amazon EBS doesn’t provide a d ata API Instead Amazon EBS presents a block device interface to the EC2 instance That is to the EC2 instance an EBS volume appears just like a local disk drive To write to and read data from EBS volumes you use the native file system I/O interfaces of your chosen operating system Cost Model As with other AWS services with Amazon EBS you pay only for what you provision in increments down to 1 GB In contrast hard disks come in fixed sizes and you pay for the entire size of the disk regardless of the amount you use or allocate Amazon EBS pricing has three components: provisioned storage I/O requests and snapshot storage Amazon EBS General Purpose (SSD) Throughput Optimized (HDD) and Cold (HDD) volumes are charged per GB month of provisioned s torage Amazon EBS Provisioned IOPS (SSD) volumes are charged per GB month of provisioned storage and per provisioned IOPS month For all volume types Amazon EBS snapshots are charged per GB month of data stored An Amazon EBS snapshot copy is charged fo r the data transferred between R egions and for the standard Amazon EBS snaps hot charges in the destination R egion It’s important to remember that for EBS volumes you are charged for provisioned (allocated) storage whether or not you actually use it For Amazon EBS snapshots you are charged only for storage actually used (consumed) Note that Amazon EBS snapshots are incremental so the storage used in any snapshot is generally much less than the storage consumed for an EBS volume Note that there is no ch arge for transferring information among the various AWS storage offerings (that is an EC2 instance transferring information with Amazon EBS Amazon S3 Amazon RDS and so on) as long as the storage off erings are within the same AWS R egion You can find pr icing information for Amazon EBS at the Amazon EBS pricing page 37 Amazon EC2 Instance Storage Amazon EC2 instance st ore volumes (also called ephemeral drives) provide temporary block level storage for many EC2 instance types 38 This storage consists of a preconfigured and pre attached block of disk storage on the same ArchivedAmazon Web Services – AWS Storage Services Overview Page 25 physical server that hosts the EC2 instance for which the block provides storage The amount of the disk storage provided varies by EC2 instance type In the EC2 instance families that provide instance storage larger instances tend to provide both more and larger instance store volumes Note that some instance types such as the micro instances (t1 t2) and the Compute optimized c4 instances use EBS storage only w ith no instance storage provided Note also that instances using Amazon EBS for the root device (in other words that boot from Amazon EBS) don’t expose the instance store volumes by default You can choose to expose the instance store volumes at instance l aunch time by specifying a block device mapping For more information see Block Device Mapping in the Amazon EC2 User Guide 39 AWS offers two EC2 inst ance families that are purposely built for storage centric workloads Performance specificat ions of the storage optimized (i2) and dense storage (d2) instance families are outlined in the following table SSDBacked Storage Optimized (i2) HDD Backed Dense Storage (d2) Use Cases NoSQL databases like Cassandra and MongoDB scale out transactional databases data warehousing Hadoop and cluster file systems Massively Parallel 
Processing (MPP) data warehousing MapReduce and Hadoop distributed computing distributed file systems network file systems log or data processing applications Read Performance 365000 Random IOPS 35 G iB/s* Write Performance 315000 Random IOPS 31 G iB/s* Instance Store Max Capacity 64 T iB SSD 48 TiB HDD Optimized For Very high random IOPS High disk throughput * 2MiB block size ArchivedAmazon Web Services – AWS Storage Services Overview Page 26 Usage Patterns In general EC2 local instance store volumes are ideal for temporary storage of information that is continually changing such as buffers caches scratch data and other temporary content or for data that is replicated across a fleet of instances such as a load balanced pool of web servers EC2 instance storage is wellsuited for this purpose It cons ists of the virtual machine’s boot device (for instance store AMIs only) plus one or more additional volumes that are dedicated to the EC2 instance (for both Amazon EBS AMIs and instance store AMIs) This storage can only be used from a single EC2 instanc e during that instance's lifetime Note that unlike EBS volumes instance store volumes cannot be detached or attached to another instance For high I/O and high storage use EC2 instance storage targeted to these use cases High I/O instances (the i2 family) provide instance store volumes backed by SSD and are ideally suited for many high performance database workloads Example applications include NoSQL databases like Cassandra and MongoDB clustered databases and online transaction processing (OLT P) systems High storage instances (the d2 family) support much higher storage density per EC2 instance and are ideally suited for applications that benefit from high sequential I/O performance across very large datasets Example applications include data warehouses Hadoop/MapReduce storage nodes and parallel file systems Note that applications using instance storage for persistent data generally provide data durability through replication or by periodically copying data to durable storage EC2 instance store volumes don’t suit all storage situations The following table presents some storage needs for which you should consider other AWS storage options Storage Need Solution AWS Services Persistent storage If you need persistent virtual disk storage si milar to a physical disk drive for files or other data that must persist longer than the lifetime of a single EC2 instance EBS volumes Amazon EFS file systems or Amazon S3 are more appropriate Amazon EC2 Amazon EBS Amazon EFS Amazon S3 Relational database storage In most cases relational databases require storage that persists beyond the lifetime of a single EC2 instance making EBS volumes the natural choice Amazon EC2 Amazon EBS ArchivedAmazon Web Services – AWS Storage Services Overview Page 27 Storage Need Solution AWS Services Shared storage Instance store volumes are dedicated to a single EC2 instance and can’t be shared with other systems or users If you need storage that can be detached from one instance and attached to a different instance or if you need the ability to share data easily Amazon EFS Amazon S3 or Amazon EBS are better choice s Amazon EFS Amazon S3 Amazon EBS Snapshots If you need the convenience long term durability availability and the ability to share point intime disk snapshots EBS volumes are a better choice Amazon EBS Performance The instance store volumes that are not SSD based in most EC2 instance families have performance characteristics similar to standard EBS volumes Because the 
EC2 instance virtual machine and the local instance store volumes are located on the same physical server interaction with this storage is very fast particularly for sequential acc ess To increase aggregate IOPS or to improve sequential disk throughput multiple instance store volumes can be grouped together using RAID 0 (disk striping) software Because the bandwidth of the disks is not limited by the network aggregate sequential throughput for multiple instance volumes can be higher than for the same number of EBS volumes Because of the way that EC2 virtualizes disks the first write operation to any location on an instance store volume performs more slowly than subsequent write s For most applications amortizing this cost over the lifetime of the instance is acceptable However if you require high disk performance we recommend that you prewarm your drives by writing once to every drive location before production use The i2 r3 and hi1 instance types use direct attached SSD backing that provides maximum performance at launch time without prewarming Additionally r3 and i2 instance store backed volumes support the TRIM command on Linux instances For these volumes you can us e TRIM to notify the SSD controller whenever you no longer need data that you've written This notification lets the controller free space which can reduce write amplification and increase performance ArchivedAmazon Web Services – AWS Storage Services Overview Page 28 The SSD instance store volumes in EC2 high I/O instan ces provide from tens of thousands to hundreds of thousands of low latency random 4 KB random IOPS Because of the I/O characteristics of SSD devices write performance can be variable For more information see High I/O Instances in the Amazon EC2 User Guide 40 The instance store volumes in EC2 high storage instances provide very high storage density and high sequential read and write performance For more information see High Storage Instances in the Amazon EC2 User Guide 41 Durability and Availability Amazon EC2 local instance store volumes are not intended to be used as durable disk storage Unlike Amazon EBS volume data data on instance store volumes persists only during the life of the associated EC2 instance This functionality means that data on instance store volumes is persistent across orderly instance reboots but if the EC2 instance is stopped and restarted terminates or fails all data on the instance sto re volumes is lost For more information on the lifecycle of an EC2 instance see Instance Lifecycle in the Amazon EC2 User Guide 42 You should not use local ins tance store volumes for any data that must persist over time such as permanent file or database storage without providing data persistence by replicating data or periodically copying data to durable storage such as Amazon EBS or Amazon S3 Note that this usage recommendation also applies to the special purpose SSD and high density instance store volumes in the high I/O and high storage instance types Scalability and Elasticity The number and storage capacity of Amazon EC2 local instance store volumes are fixed and defined by the instance type Although you can’t increase or decrease the number of instance store volumes on a single EC2 instance this storage is still scalable and elastic; you can scale the total amount of instance store up or down by incre asing or decreasing the number of running EC2 instances To achieve full storage elasticity include one of the other suitable storage options such as Amazon S3 Amazon EFS or Amazon EBS in your 
Amazon EC2 storage strategy ArchivedAmazon Web Services – AWS Storage Services Overview Page 29 Security IAM helps you secure ly control which users can perform operations such as launch and termination of EC2 instances in your account and instance store volumes can only be mounted and accessed by the EC2 instances they belong to Also when you stop or terminate an instance th e applications and data in its instance store are erased so no other instance can have access to the instance store in the future Access to an EC2 instance is controlled by the guest operating system If you are concerned about the privacy of sensitive d ata stored in an instance storage volume we recommend encrypting your data for extra protection You can do so by using your own encryption tools or by using third party encryption tools available on the AWS Marketplace 43 Interfaces There is no separate management API for EC2 instance store volumes Instead instance store volumes are specified using the block device mapping feature of the Amazon EC2 API and the AWS Management Console You cannot create or destroy instance store volumes but you can control whether or not they are exposed to the EC2 instance and what device name is mapped to for each volume There is also no separate data API for instance store volumes Just like EBS volumes insta nce store volumes present a block device interface to the EC2 instance To the EC2 instance an instance store volume appears just like a local disk drive To write to and read data from instance store volumes you use the native file system I/O interfaces of your chosen operating system Note that in some cases a local instance store volume device is attached to an EC2 instance upon launch but must be formatted with an appropriate file system and mounted before use Also keep careful track of your block device mappings There is no simple way for an application running on an EC2 instance to determine which block device is an instance store (ephemeral) volume and which is an EBS (persistent) volume ArchivedAmazon Web Services – AWS Storage Services Overview Page 30 Cost Model The cost of an EC2 instance includes any local instance store volumes if the instance type provides them Although there is no additional charge for data storage on local instance store volumes note that data transferred to and from Amazon EC2 instance store volumes from other Availability Zones or outside of an Amazon EC2 Region can incur data transfer charges; additional charges apply for use of any persistent storage such as Amazon S3 Amazon Glacier Amazon EBS volumes and Amazon EBS snapshots You can find pricing information for Amazon EC2 A mazon EBS and data transfer at the Amazon EC2 Pricing web page 44 AWS Storage Gateway AWS Storage Gateway connects an on premises software appliance wi th cloud based storage to provide seamless and secure storage integration between an organization’s on premises IT environment and the AWS storage infrastructure 45 The service enables you to securely store data in the AWS Cloud for scalable and cost effec tive storage AWS Storage Gateway supports industry standard storage protocols that work with your existing applications It provides lowlatency performance by maintaining frequently accessed data on premises while securely storing all of your data encryp ted in Amazon S3 or Amazon Glacier For disaster recovery scenarios AWS Storage Gateway together with Amazon EC2 can serve as a cloud hosted solution that mirrors your entire production environment You can download the AWS 
Storage Gateway software appl iance as a virtual machine (VM) image that you install on a host in your data center or as an EC2 instance Once you’ve installed your gateway and associated it with your AWS account through the AWS activation process you can use the AWS Management Consol e to create gateway cached volumes gateway stored volumes or a gateway virtual tape library (VTL) each of which can be mounted as an iSCSI device by your on premises applications With gateway cached volumes you can use Amazon S3 to hold your primary data while retaining some portion of it locally in a cache for frequently accessed data Gateway cached volumes minimize the need to scale your on premises storage infrastructure while still providing your applications with low latency access to their freq uently accessed data You can create storage volumes up to 32 ArchivedAmazon Web Services – AWS Storage Services Overview Page 31 TiB in size and mount them as iSCSI devices from your on premises application servers Each gateway configured for gateway cached volumes can support up to 20 volumes and total volume storage of 150 T iB Data written to these volumes is stored in Amazon S3 with only a cache of recently written and recently read data stored locally on your on premises storage hardware Gateway stored volumes store your primary data locally while asynchronously backing up that data to AWS These volumes provide your on premises applications with low latency access to their entire datasets while providing durable off site backups You can create storage volumes up to 1 T iB in size and mount them as iSCSI devices from your on premises application servers Each gateway configured for gateway stored volumes can support up to 12 volumes and total volume storage of 12 T iB Data written to your gateway stored volumes is stored on your on premises storage hardware and a synchronously backed up to Amazon S3 in the form of Amazon EBS snapshots A gateway VTL allows you to perform offline data archiving by presenting your existing backup application with an iSCSI based virtual tape library consisting of a virtual media chang er and virtual tape drives You can create virtual tapes in your VTL by using the AWS Management Console and you can size each virtual tape from 100 G iB to 25 T iB A VTL can hold up to 1500 virtual tapes with a maximum aggregate capacity of 150 T iB On ce the virtual tapes are created your backup application can discover them by using its standard media inventory procedure Once created tapes are available for immediate access and are stored in Amazon S3 Virtual tapes that you need to access frequentl y should be stored in a VTL Data that you don't need to retrieve frequently can be archived to your virtual tape shelf (VTS) which is stored in Amazon Glacier further reducing your storage costs Usage Patterns Organizations are using AWS Storage Gateway to support a number of use cases These use cases include corporate file sharing enabling existing on premises backup applications to store primary backups on Amazon S3 disaster recovery and mirroring data to cloud based compute resources and th en later archiving it to Amazon Glacier ArchivedAmazon Web Services – AWS Storage Services Overview Page 32 Performance Because the AWS Storage Gateway VM sits between your application Amazon S3 and underlying on premises storage the performance you experience depends upon a number of factors These factors include the speed and configuration of your underlying local disks the network bandwidth between your iSCSI 
initiator and gateway VM the amount of local storage allocated to the gateway VM and the bandwidth between the gateway VM and Amazon S3 For gateway cached volumes to provide low latency read access to your on premises applications it’s important that you provide enough local cache storage to store your recently accessed data The AWS Storage Gateway documentation provides guidance on how to optimize your environment setup for best performance including how to properly size your local storage 46 AWS Storage Gateway efficiently uses your Internet bandwidth to speed up the upload of your on premises application data to AWS AWS Storage Gateway only uploads data that has changed which minimizes the amount of data sent over the Internet To further increase throughput and reduce your network costs you can also use AWS Direct Connect to establish a dedicated network connection between your on premises gateway and AWS 47 Durability and Availability AWS Storage Gateway durably stores your on premises application data by uploading it to Amazon S3 or Amazon Glacier Both of these AWS services store data in multiple facilities and on multiple devices within each facility being designed to provide an average annual durability of 99999999999 percent (11 nines) They also perform regular systematic data integrity checks and are built to be automatically self healing Scalability and Elasticity In both gateway cached and gateway stored volume configurations AWS Storage Gateway stores data in Amazon S3 which has been designed to offer a very high level of scalability and elasticity automatically Unlike a typical file system that can encounter issues when storing large number of files in a directory Amazon S3 supports a virtually unlimited number of files in any bucke t Also unlike a disk drive that has a limit on the total amount of data that can be stored before you must partition the data across drives or servers an Amazon S3 bucket can ArchivedAmazon Web Services – AWS Storage Services Overview Page 33 store a virtually unlimited number of bytes You are able to store any number of objects and Amazon S3 will manage scaling and distributing redundant copies of your information onto other servers in other locations in the same region all using Amazon’s high performance infrastructure In a gateway VTL configuration AWS Storage Ga teway stores data in Amazon S3 or Amazon Glacier providing a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the operational burden of provisioning scaling and maintaining a physical tape infrastructure Securi ty IAM helps you provide security in controlling access to AWS Storage Gateway With IAM you can create multiple IAM users under your AWS account The AWS Storage Gateway API enables a list of actions each IAM user can perform on AWS Storage Gateway 48 The AWS Storage Gateway encrypts all data in transit to and from AWS by using SSL All volume and snapshot data stored in AWS using gateway stored or gateway cached volumes and all virtual tape data stored in AWS using a gateway VTL is encrypted at rest using AES 256 a secure symmetric key encryption standard using 256 bit encryption keys Storage Gateway supports authentication between your gateway and iSCS I initiators by using Challenge Handshake Authentication Protocol (CHAP) Interfaces The AWS Management Console can be used to download the AWS Storage Gateway VM on premises or onto an EC2 instance (an AMI that contains the gateway VM image) You can then select between a gateway cached 
gateway stored or gateway VTL configuration and activate your storage gateway by associating your gateway’s IP address with your AWS account All the detailed steps for AWS Storage Gateway deployment can be found in Getting Started in the AWS Storage Gateway User Guide 49 The integrated AWS CLI also provides a set of high level Linux like commands for common operations of the AWS Storage Gateway service ArchivedAmazon Web Services – AWS Storage Services Overview Page 34 You can also use the AWS SDKs to develop applications that interact with AWS Storage Gateway The AWS SDKs for Java NET JavaScript Nodejs Ruby PHP and Go wrap the underlying AWS Storage Gateway API to simplify your programming tasks Cost Model With AWS Storage Gateway you pay only for what you use AWS Storage Gateway has the following pricing components: gateway usage (per gateway per month) snapshot storage usage (per GB per month) volume storage usage (per GB per month) virtual tape shelf storage (per GB per month) virtual tape library storage (per GB per month) retrieval from virtual tape shelf (per GB) and data transfer out (per GB per month) You can find pricing information at the AWS Storage Gateway pricing page 50 AWS Snowball AWS Snowball accelerates moving large amounts of data into and out of AWS using secure Snowball appl iances 51 The Snowball appliance is purpose built for efficient data storage and transfer All AWS Regions have 80 TB Snowballs while US Regions have both 50 TB and 80 TB models The Snowball appliance is rugged enough to withstand an 85 G jolt At less than 50 pounds the appliance is light enough for one person to carry It is entirely self contained with a power cord one RJ45 1 GigE and two SFP+ 10 GigE network connections on the back and an E Ink display and control panel on the front Each Snowball appliance is water resistant and dustproof and serves as its own rugged shipping container AWS transfers your data directly onto and off of Snowball storage devices using Amazon’s high speed internal network and bypasses the Internet For datasets of significant size Snowball is often faster than Internet transfer and more cost effective than upgrading your connectivity AWS Snowball supports importing data into and exporting data from Amazon S3 buckets From there the data can be copied or moved to oth er AWS services such as Amazon EBS and Amazon Glacier as desired Usage Patterns Snowball is ideal for transferring anywhere from terabytes to many petabytes of data in and out of the AWS Cloud securely This is especially beneficial in cases ArchivedAmazon Web Services – AWS Storage Services Overview Page 35 where you don’t want to make expensive upgrades to your network infrastructure or in areas whe re high speed Internet connections are not available or cost prohibitive In general if loading your data over the Internet would take a week or more you should consider using Snowball Common use cases include cloud migration disaster recovery data ce nter decommission and content distribution When you decommission a data center many steps are involved to make sure valuable data is not lost and Snowball can help ensure data is securely and cost effectively transferred to AWS In a content distributi on scenario you might use Snowball appliances if you regularly receive or need to share large amounts of data with clients customers or business associates Snowball appliances can be sent directly from AWS to client or customer locations Snowball migh t not be the ideal solution if your data can be transferred over the 
Internet in less than one week Performance The Snowball appliance is purpose built for efficient data storage and transfer including a high speed 10 Gbps network connection designed to minimize data transfer times allowing you to transfer up to 80 TB of data from your data source to the appliance in 25 days plus shipping time In this case the end toend time to transfer the data into AWS is approximately a week including default s hipping and handling time to AWS data centers Copying 160 TB of data can be completed in the same amount of time by using two 80 TB Snowballs in parallel You can use the Snowball client to estimat e the time it takes to transfer your data (refer to the AWS Import/Export User Guide for more details) 52 In general you can improve your transfer speed from your data source to the Snowball appliance by reducing local network use eliminating unnecessary hops between the Snowball appliance and the workstation using a powerful computer as your workstation and combining smaller objects Parallelization can also help achieve maximum performance of your data transfer This could involve one or more of the following parallelization types: using multiple instances of the Snowball client on a single workstation with a single Snowball appliance; using multiple instances of the Snowball client on multiple workstations with a single ArchivedAmazon Web Services – AWS Storage Services Overview Page 36 Snowball appliance; and/or usi ng multiple instances of the Snowball client on multiple workstations with multiple Snowball appliances Durability and Availability Once the data is imported to AWS the durability and availability characteristics of the target storage applies Amazon S3 is designed for 99999999999 percent (11 nines) durability and 9999 percent availability Scalability and Elasticity Each AWS Snowball appliance is capable of storing 50 TB or 80 TB of data If you want to transfer more data than that you can use multipl e appliances For Amazon S3 individual files are loaded as objects and can range up to 5 TB in size but you can load any number of objects in Amazon S3 The aggregate total amount of data that can be imported is virtually unlimited Security You can integrate Snowball with IAM to control which actions a user can perform 53 You can give the IAM users on your AWS account access to all Snowball actions or to a subse t of them Similarly an IAM user that creates a Snowball job must have permissions to access the Amazon S3 buckets that will be used for the import operations For Snowball AWS KMS protects the encryption keys used to protect data on each Snowball appliance All data loaded onto a Snowball appliance is encrypted using 256 bit encryption Snowball is physically secured by using an industry standard Trusted Platform Module (TPM) that uses a dedicated processor designed to detect any unauthorized modifications to the hardware firmware or software Snowball is included in the AWS HIPAA compliance program so you can use Snowball to transfer large amounts of Protected Health Information (PHI) data into and out of AWS 54 ArchivedAmazon Web Services – AWS Storage Services Overview Page 37 Interfaces There are two ways to get started with Snowball You can create an import or export job using the AWS Snowball Management Console or you can use the Snowball Job Management API and integrate AWS Snowball as a p art of your data management solution The primary functions of the API are to create list and describe import and export jobs and it uses a simple standards based REST 
web services interface. For more details about using the Snowball Job Management API, see the API Reference documentation.55 You also have two ways to locally transfer data between a Snowball appliance and your on-premises data center. The Snowball client, available as a download from the AWS Import/Export Tools page, is a standalone terminal application that you run on your local workstation to do your data transfer.56 You use simple copy (cp) commands to transfer data, and handling errors and logs are written to your local workstation for troubleshooting and auditing. The second option to locally transfer data between a Snowball appliance and your on-premises data center is the Amazon S3 Adapter for Snowball, which is also available as a download from the AWS Import/Export Tools page. You can programmatically transfer data between your on-premises data center and a Snowball appliance using a subset of the Amazon S3 REST API commands. This allows you to have direct access to a Snowball appliance as if it were an Amazon S3 endpoint. Below is an example of how you would reference a Snowball appliance as an Amazon S3 endpoint when executing an AWS CLI S3 list command. By default, the adapter runs on port 8080, but a different port can be specified by changing the adapter.config file.
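In this sketch, the IP address stands in for the host running the Amazon S3 Adapter for Snowball and the bucket name is illustrative; substitute values from your own environment.

# List the buckets visible through the adapter (placeholder address, default port 8080)
aws s3 ls --endpoint-url http://192.0.2.10:8080

# List the contents of an illustrative bucket through the adapter
aws s3 ls s3://example-import-bucket --endpoint-url http://192.0.2.10:8080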
The following example steps you through how to implement a Snowball appliance to import your data into AWS by using the AWS Snowball Management Console.

1. To start, sign in to the AWS Snowball Management Console and create a job.

2. AWS then prepares a Snowball appliance for your job.

3. The Snowball appliance is shipped to you through a regional shipping carrier (UPS in all AWS Regions except India, which uses Amazon Logistics). You can find your tracking number and a link to the tracking website on the AWS Snowball Management Console.

4. A few days later, the regional shipping carrier delivers the Snowball appliance to the address you provided when you created the job.

5. Next, get ready to transfer your data by downloading your credentials, your job manifest, and the manifest's unlock code from the AWS Management Console, and by downloading the Snowball client. The Snowball client is the tool that you'll use to manage the flow of data from your on-premises data source to the Snowball appliance.

6. Install the Snowball client on the computer workstation that has your data source mounted on it.

7. Move the Snowball appliance into your data center, open it, and connect it to power and your local network.

8. Power on the Snowball appliance and start the Snowball client. You provide the IP address of the Snowball appliance, the path to your manifest, and the unlock code. The Snowball client decrypts the manifest and uses it to authenticate your access to the Snowball appliance.

9. Use the Snowball client to transfer the data that you want to import into Amazon S3 from your data source into the Snowball appliance.

10. After your data transfer is complete, power off the Snowball appliance and unplug its cables. The E Ink shipping label automatically updates to show the correct AWS facility to ship to. You can track job status by using Amazon SNS, text messages, or directly in the console.

11. The regional shipping carrier returns the Snowball appliance to AWS.

12. AWS receives the Snowball appliance and imports your data into Amazon S3.

On average, it takes about a day for AWS to begin importing your data into Amazon S3, and the import can take a few days. If there are any complications or issues, we contact you through email. Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance that follows the National Institute of Standards and Technology (NIST) 800-88 guidelines for media sanitization.

Cost Model

With Snowball, as with most other AWS services, you pay only for what you use. Snowball has three pricing components: service fee (per job), extra-day charges as required (the first 10 days of onsite usage are free), and data transfer. For the destination storage, the standard Amazon S3 storage pricing applies. You can find pricing information at the AWS Snowball Pricing page.57

Amazon CloudFront

Amazon CloudFront is a content delivery web service that speeds up the distribution of your website's dynamic, static, and streaming content by making it available from a global network of edge locations.58 When a user requests content that you're serving with Amazon CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so content is delivered with better performance than if the user had accessed the content from a data center farther away. If the content is already in the edge location with the lowest latency, Amazon CloudFront delivers it immediately. If the content is not currently in that edge location, Amazon CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. Amazon CloudFront caches content at edge locations for a period of time that you specify.

Amazon CloudFront supports all files that can be served over HTTP. These files include dynamic web pages, such as HTML or PHP pages, and any popular static files that are part of your web application, such as website images, audio, video, media files, or software downloads. For on-demand media files, you can also choose to stream your content using Real-Time Messaging Protocol (RTMP) delivery. Amazon CloudFront also supports delivery of live media over HTTP. Amazon CloudFront is optimized to work with other Amazon Web Services, such as Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. Amazon CloudFront also works seamlessly with any non-AWS origin servers that store the original, definitive versions of your files.

Usage Patterns

CloudFront is ideal for distribution of frequently accessed static content that benefits from edge delivery, such as popular website images, videos, media files, or software downloads. Amazon CloudFront can also be used to deliver dynamic web applications over HTTP. These applications can include static content, dynamic content, or a whole site with a mixture of the two. Amazon CloudFront is also commonly used to stream audio and video files to web browsers and mobile devices. To get a better understanding of your end-user usage patterns, you can use Amazon CloudFront reports.59

If you need to remove an object from Amazon CloudFront edge server caches before it expires, you can either invalidate the object or use object versioning to serve a different version of the object that has a different name.60 61 Additionally, it might be better to serve infrequently accessed data directly from the origin server, avoiding the additional cost of origin fetches for data that is not likely to be reused at the edge; however, origin fetches to Amazon S3 are free.

Performance

Amazon CloudFront is designed for low
latency and high bandwidth delivery of content Amazon CloudFront speeds up the distribution of your content by routing end users to the edge location that can best serve each end user’s request in a worldwide network of edge locations T ypically requests are routed to the nearest Amazon CloudFront edge location in terms of latency This approach dramatically reduces the number of networks that your users’ requests must pass through and improves performance Users get both lower latency —here latency is the time it takes to load the first byte of an object —and the higher sustained data transfer rates needed to deliver popular objects at scale Durability and Availability Because a CDN is an edge cache Amazon CloudFront does not provide dur able storage The origin server such as Amazon S3 or a web server running on Amazon EC2 provides the durable file storage needed Amazon CloudFront provides high availability by using a distributed global network of edge locations Origin requests from t he edge locations to AWS origin servers (for example Amazon EC2 Amazon S3 and so on) are carried over network paths that Amazon constantly monitors and optimizes for both availability and performance This edge network provides increased reliability and availability because there is no longer a central point of failure Copies of your files are now held in edge locations around the world Scalability and Elasticity Amazon CloudFront is designed to provide seamless scalability and elasticity You can easi ly start very small and grow to massive numbers of global ArchivedAmazon Web Services – AWS Storage Services Overview Page 41 connections With Amazon CloudFront you don’t need to worry about maintaining expensive web server capacity to meet the demand from potential traffic spikes for your content The service automatica lly responds as demand spikes and fluctuates for your content without any intervention from you Amazon CloudFront also uses multiple layers of caching at each edge location and collapses simultaneous requests for the same object before contacting your origin server These optimizations further reduce the need to scale your origin infrastructure as your website becomes more popular Security Amazon CloudFront is a very secure service to distribute your data It integrates with IAM so that you can create us ers for your AWS account and specify which Amazon CloudFront actions a user (or a group of users) can perform in your AWS account You can configure Amazon CloudFront to create log files that contain detailed information about every user request that Amazo n CloudFront receives These access logs are available for both web and RTMP distributions 62 Additionally Amazon CloudFront integrates with Amazon CloudWatch metrics so that you can monitor your website or application 63 Interfaces You can manage and configure Amazon CloudFront in several ways T he AWS Management Console provides an easy way to manage Amazon CloudFront and supports all features of the Amazon CloudFront API For example you can enable or disable distributions configure CNAMEs and enable end user logging using the console You ca n also use the Amazon CloudFront command line tools the native REST API or one of the supported SDKs There is no data API for Amazon CloudFront and no command to preload data Instead data is automatically pulled into Amazon CloudFront edge locations o n the first access of an object from that location Clients access content from CloudFront edge locations either using HTTP or HTTPs from locations across the 
Internet; these protocols are configurable as part of a given CloudFront distribution ArchivedAmazon Web Services – AWS Storage Services Overview Page 42 Cost Model With Amazon CloudFront there are no long term contracts or required minimum monthly commitments —you pay only for as much content as you actually deliver through the service Amazon CloudFront has two pricing components: regional data transfer out (p er GB) and requests (per 10000) As part of the Free Usage Tier new AWS customers don’t get charged for 50 GB data transfer out and 2000000 HTTP and HTTPS requests each month for one year Note that if you use an AWS service as the origin (for example Amazon S3 Amazon EC2 Elastic Load Balancing or others) data transferred from the origin to edge locations (ie Amazon CloudFront “origin fetches”) will be free of charge For web distributions data transfer out of Amazon CloudFront to your origin server will be billed at the “Regional Data Transfer Out of Origin” rates CloudFront provides three different price classes according to where your content needs to be distributed If you don’t need your content to be distributed globally but only within certain locations such as the US and Europe you can lower the prices you pay to deliver by choosing a price class that includes only these locations Although there are no long term contracts or required minimum monthly commitments CloudFront offers an o ptional reserved capacity plan that gives you the option to commit to a minimum monthly usage level for 12 months or longer and in turn receive a significant discount You can find pricing information at the Amazon CloudFront pricing page 64 Conclusion Cloud storage is a critic al component of cloud computing because it holds the information used by applications Big data analytics data warehouses Internet of Things databases and backup and archiv e applications all rely on some form of data storage architecture Cloud storage is typically more reliable scalable and secure than traditional on premises storage systems AWS offers a complete range of cloud storage services to support both applicati on and archival compliance requirements This whitepaper provides guidance for understanding the different storage services and features available in the AWS Cloud Usage pat terns performance durability ArchivedAmazon Web Services – AWS Storage Services Overview Page 43 and availability scalability and elasticity security interface and cost models are outlined and described for these cloud storage service s While t his gives you a better understanding of the features and characteristics of these cloud services it is crucial for you to understand your workloads and requirements then decide which storage service is best suited for your needs Contributors The following individuals contributed to this document: • Darryl S Osborne Solutions Architect Amazon Web Services • Shruti Worlikar Solutions Archi tect Amazon Web Services • Fabio Silva Solutions Architect Amazon Web Services ArchivedAmazon Web Services – AWS Storage Services Overview Page 44 References and Further Reading AWS Storage Services • Amazon S3 65 • Amazon Glacier 66 • Amazon EFS 67 • Amazon EBS 68 • Amazon EC2 Instance Store 69 • AWS Storage Gateway 70 • AWS Snowball 71 • Amazon CloudFront 72 Other Resources • AWS SDKs IDE Toolkits and Command Line Tools 73 • Amazon Web Services Simple Monthly Calculator 74 • Amazon Web Services Blog 75 • Amazon Web Services Forums 76 • AWS Free Usage Tier 77 • AWS Case Studies 78 Notes 1 https://awsamazoncom/s3/ 2 
https://docsawsamazoncom/AmazonS3/latest/dev/crrhtml 3 http://docsawsamazoncom/AmazonS3/latest/dev/uploadobjusingmpuhtml ArchivedAmazon Web Services – AWS Storage Services Overview Page 45 4 http://docsawsamazoncom/AmazonS3/latest/dev/access control overviewhtml#access control resources manage permissions basics 5 http://docsawsamazoncom/AmazonS3/latest/dev/serv sideencryptionhtml 6 http://docsawsamazoncom/AmazonS3/latest/dev/UsingClientSideEncryptio nhtml 7 http://docsawsamazoncom/AmazonS3/latest/dev/Versioninghtml#MultiFac torAuthenticationDelete 8 http://docsawsamazoncom/AmazonS3/latest/dev/ServerLogsh tml 9 http://awsamazoncom/sns/ 10 http://awsamazoncom/sqs/ 11 http://awsamazoncom/lambda/ 12 http://awsamazoncom/free/ 13 http://awsamazoncom/s3/pricing/ 14 http://awsamazoncom/glacier/ 15 http://docsawsamazoncom/amazonglacier/latest/dev/uploading archive mpuhtml 16 http://docsawsamazoncom/amazonglacier/latest/dev/downloading an archivehtml#downloading anarchive range 17 https://awsamazoncom/iam/ 18 http://awsamazoncom/cloudtrail/ 19 http://docsawsamazoncom/AmazonS3/latest/dev/object lifecycle mgmthtml 20 http://awsamazoncom/glacier/pricing/ 21 http://awsamazoncom/efs/ 22 http://docsawsamazoncom/efs/latest/ug/how itworkshtml 23 http://docsawsamazoncom/efs/latest/ug/monito ringcloudwatchhtml#efs metrics ArchivedAmazon Web Services – AWS Storage Services Overview Page 46 24 http://docsawsamazoncom/efs/latest/ug/mounting fshtml 25 http://docsawsamazoncom/efs/latest/ug/mounting fsmount cmd generalhtml 26 http://docsawsamazoncom/efs/latest/ug/security considerationshtml 27 http://aws amazoncom/efs/pricing/ 28 http://awsamazoncom/ebs/ 29 https://awsamazoncom/ebs/previous generation/ 30 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSVolumeTypesht ml#monitoring_burstbucket 31 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSVolumeTypesht ml#monitoring_burstbucket 32 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSVolumeTypesht ml#monitoring_burstbucket 33 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ebs initializ ehtml 34 https://awsamazoncom/ebs/details/ 35 https://awsamazoncom/kms/ 36 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSEncryptionhtml 37 http://awsamazoncom/ebs/pricing/ 38 http://docsawsamazoncom/AWSEC2/latest/UserGuide/InstanceStoragehtm l 39 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/block device mapping conceptshtml 40 http://docsawsamazoncom/AWSEC2/latest/UserGuide/i2 instanceshtml 41 http://docsawsamazoncom/AWSEC2/latest/UserGuide/high_storage_instan ceshtml ArchivedAmazon Web Services – AWS Storage Services Overview Page 47 42 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ec2 instance lifecyclehtml 43 https://awsamazoncom/marketplace 44 http://awsamazoncom/ec2/pricing/ 45 http://awsamazoncom/storagegateway/ 46 http://docsawsamazoncom/storagegateway/latest/userguide/Wh atIsStorage Gatewayhtml 47 http://awsamazoncom/directconnect/ 48 http://docsawsamazoncom/storagegateway/latest/userguide/AWSStorageGa tewayAPIhtml 49 http://docsawsamazoncom/storagegateway/latest/userguide/GettingStarted commonhtml 50 http://awsamazoncom/storagegateway/pricing/ 51 https://awsamazoncom/importexport/ 52 http://awsamazoncom/importexport/tools/ 53 http://docsawsamazoncom/AWSImportE xport/latest/DG/auth access controlhtml 54 https://awsamazoncom/about aws/whats new/2016/11/aws snowball now ahipaa eligible service/ 55 https://docsawsamazoncom/AWSImportExport/latest/ug/api referencehtml 56 https://awsamazoncom/importexport/tools/ 57 http://awsamazoncom/importexport/pricing/ 58 
http://awsamazoncom/cloudfront/pricing/ 59 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/repo rtshtml ArchivedAmazon Web Services – AWS Storage Services Overview Page 48 60 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/Inva lidationhtm l 61 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/Repl acingObjectshtml 62 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/Acce ssLogshtml 63 http://docsawsamazoncom/AmazonCloudFront/latest/DeveloperGuide/mon itoring using cloudwatchhtml 64 http://awsamazoncom/cloudfront/pricing/ 65 http://awsamazoncom/s3/ 66 http://awsamazoncom/glacier/ 67 http://awsamazoncom/efs/ 68 http://awsamazoncom /ebs/ 69 http://docsawsamazoncom/AWSEC2/latest/UserGuide/InstanceStoragehtm l 70 http://awsamazoncom/storagegateway/ 71 http://awsamazoncom/ snowball 72 http://awsamazoncom/cloudfront/ 73 http://awsamazoncom/tools/ 74 http://calculators3amazonawscom/indexhtml 75 https://awsamazoncom/blogs/aws/ 76 https://forumsawsamazoncom/indexjspa 77 http://awsamazoncom/free/ 78 http://awsamazoncom/solutions/case studies/ Archived
AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines April 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview 1 Security and the Shared Responsibility Model 1 Security IN the Cloud 2 Security OF the Cloud 3 AWS Complian ce Assurance Programs 4 AWS Artifact 6 AWS Regions 6 Hong Kong Insurance Authority Guideline on Outsourcing (GL14) 6 Prior Notification of Material Outsourcing 7 Outsourcing Policy 7 Outsourcing Agreement 9 Information Confidentiality 9 Monitoring and Control 12 Contingenc y Planning 13 Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8) 14 Next Steps 20 Additional Resources 21 Document Revisions 22 About this Guide This document provides information to assist Authorized Insurers (AIs) in Hong Kong regulated by the Hong Kong Insurance Authority (IA) as they accelerate their use of Amazon Web Services’ (AWS) Cloud services Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 1 Overview The Hong Kong Insurance Authority (IA) issues guidelines to provide the Hong Kong insurance industry with practical guidance to facilitate compliance with regulatory requirements The guideli nes relevant to the use of outsourced services instruct Authorized Insurers (AIs) to perform materiality assessments risk assessments perform due diligence reviews of service providers ensure controls are in place to preserve information confidentiality have sufficient monitoring and control oversight on the outsourcing arrangement and establish contingency arrangements The following sections provide considerations for AIs as they assess their responsibilities with regards to the following guidelines: • Guideline on Outsourcing (GL14) – This guideline sets out the IA’s supervisory approach to outsourcing and the major points that the IA recommends AIs to address when outsourcing their activities including the use of cloud services • Guideline on the Use of Internet for Insurance Activities (GL8) – This guideline outlines the specific points that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet based insurance activities For a full list of the IA guidelines see the Guidelines section of Legislative and Regulatory Framework on the IA website Security and the Shared Responsibility Model Cloud se curity is a shared responsibility At AWS we maintain a high bar for security OF the cloud through robust governance automation and testing and validates our approach through compliance with global and regional regulatory requirements and best practices Security IN the cloud is the responsibility of the customer What this means is that customers retain 
control of the security program they choose to implement to protect their own content platform applications systems and networks Customers shoul d carefully consider how they will manage the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations We recommend that cus tomers think about their security responsibilities on a service by service basis because the extent of their responsibilities may differ between services Amazon Web Services AWS User Guide to the Hong Kong In surance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 2 Figure 1 – Shared Responsibility Model Security IN the Cloud Customers are responsible for their security in the cloud For services such as Elastic Compute Cloud (EC2) the customer is responsible for managing the guest operating system (including installing updates and security patches) and other associated application softwa re as well as the configuration of the AWS provided security group firewall Customers can also use managed services such as databases directory and web application firewall services which provide customers the resources they need to perform specific tasks without having to launch and maintain virtual machines For example a customer can launch an Amazon Aurora database which Amazon Relational Database Service (RDS) manages to handle tasks such as provisioning patching backup recovery failure d etection and repair It is important to note that when using AWS services customers maintain control over their content and are responsible for managing critical content security requirements including: • The content that they choose to store on AWS • The AWS services that are used with the content • The country where their content is stored • The format and structure of their content and whether it is masked anonymized or encrypted Amazon Web Services AWS User Guide to the Hong Kong Insurance Auth ority on Outsourcing and Use of Internet for Insurance Activities Guidelines 3 • How their content is encrypted and where the keys are stored • Who has acce ss to their content and how those access rights are granted managed and revoked Because customers rather than AWS control these important factors customers retain responsibility for their choices Customers are responsible for the security of the content they put on AWS or that they connect to their AWS infrastructure such as the guest operating system applications on their compute instances and content stored and processed in AWS storage platforms databases or other services Security OF the Cloud For many services such as EC2 AWS operates manages and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In order to provide assuranc e about security of the AWS Cloud we continuously audit our environment AWS infrastructure and services are validated against multiple compliance standards and industry certifications across geographies and industries Customers can use the AWS complia nce certifications to validate the implementation and effectiveness of AWS security controls including internationally recognized security best practices and certifications The AWS compliance program is based on the following actions: • Validate that AWS s ervices and facilities across the globe maintain a ubiquitous control environment that is operating effectively The 
AWS control environment encompasses the people processes and technology necessary to establish and maintain an environment that supports t he operating effectiveness of the AWS control framework AWS has integrated applicable cloud specific controls identified by leading cloud computing industry bodies into the AWS control framework AWS monitors these industry groups to identify leading prac tices that can be implemented and to better assist customers with managing their control environment • Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements AWS engages with external certifyi ng bodies and independent auditors to provide customers with information regarding the policies processes and controls established and operated by AWS Customers can use this information to perform their control evaluation and verification procedures as required under the applicable compliance standard Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Out sourcing and Use of Internet for Insurance Activities Guidelines 4 • Monitor that AWS maintains compliance with global standards and best practices through the use of thousands of security control requirements AWS Compliance Assurance Programs In order to help customers establish operate and leverage the AWS security control environment AWS has developed a security assurance program that uses global privacy and data protection best practices These security protections and control processes are independently validated by multiple third party independent assessments The following are of particular importance to Hong Kong AIs: ISO 27001 – ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls foll owing the ISO 27002 best practice guidance The basis of this certification is the development and implementation of a rigorous security program which includes the development and implementation of an Information Security Management System that defines h ow AWS perpetually manages security in a holistic comprehensive manner For more information or to download the AWS ISO 27001 certification see the ISO 27001 Compliance webpage ISO 27017 – ISO 27017 provides guidance on the information security aspects of cloud computing recommending the implementation of cloud specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards This code of prac tice provides additional security controls implementation guidance specific to cloud service providers For more information or to download the AWS ISO 27017 certification see the ISO 27017 Compliance webpage ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementati on guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII) It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set For more information or to download the AWS ISO 27018 certification see the ISO 27018 Compliance webpage ISO 9001 ISO 9001 outlines a process oriented a pproach to documenting and reviewing the structure responsibilities and procedures required to achieve effective quality management within an organization The key to ongoing certification under this standard is 
establishing maintaining and improving the organizational structure responsibilities procedures processes and resources in a manner where AWS Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 5 products and services consistently satisfy ISO 9001 quality requirements For more information or to download the AWS ISO 9001 certification see th e ISO 9001 Compliance webpage PCI DSS Level 1 The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PC I Security Standards Council PCI DSS applies to all entities that store process or transmit cardholder data (CHD) and/or sensitive authentication data (SAD) including merchants processors acquirers issuers and service providers The PCI DSS is manda ted by the card brands and administered by the Payment Card Industry Security Standards Council For more information or to request the PCI DSS Attestation of Compliance and Responsibility Summary see the PCI DSS Compliance webpage SOC – AWS System & Organization Controls (SOC) Reports are independent third party a udit reports that demonstrate how AWS achieves key compliance controls and objectives The purpose of these reports is to help customers and their auditors understand the AWS controls established to support operations and compliance For more information see the SOC Compliance webpage There are three types of AWS SOC Reports: • SOC 1 : Provides information about the AWS control environment that may be relevant to a customer’s internal controls over financial reporting as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (ICOFR) • SOC 2 : Provides customers and their service users with a business need with an independent assessment of the AWS control environment relevant to system security availability and confidentiality • SOC 3 : Provides customers and their service users with a business need with an independent assessment of the AWS control environment relevant to system security availability and confidentiality without disclosing AWS internal information By tying together governance focused audit friendly service features with such certifications attestations and audit standards AWS Compliance enablers build on traditional programs helping customers to establish and operate in an AWS security control environment For more information about other AWS certifications and attestations see AWS Compliance Programs Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Inte rnet for Insurance Activities Guidelines 6 AWS Artifact Customers can review and download reports and details about more than 2600 security controls by using AWS Artifact the automated compliance reporting tool available in the AWS Management Console The AWS Artifact portal provides on demand access to AWS’s security and compliance documents including SOC reports PCI repo rts and certifications from accreditation bodies across geographies and compliance verticals AWS Regions The AWS Cloud infrastructure is built around AWS Regions and Availability Zones An AWS Region is a physical location in the world that is made up o f multiple Availability Zones Availability Zones consist of one or more discrete data centers that are housed in separate facilities each with redundant power networking and connectivity These Availability Zones offer customers the ability 
to operat e production applications and databases at higher availability fault tolerance and scalability than would be possible from a single data center For current information on AWS Regions and Availability Zones see https://awsamazoncom/about aws/global infrastructure/ Hong Kong Insurance Authority Guideline on Outsourcing (GL14) The Hong Kong Insurance Authority Guideline on Outsourcing (GL14) provides guidance and recommendations on prudent risk management practices for outsourcing including the use of cloud services by AIs AIs that use cloud services are expected to carry out due diligence evaluate and address risks and enter into appropriate outsourcing agreements Section 5 of the GL14 states that the AI’s materiality and risk assessments should include considerations such as a determination of the importance and criticality of the services to be outs ourced and the impact on the AI’s risk profile (in respect to financial operational legal and reputational risks and potential losses to customers) if the outsourced service is disrupted or falls short of acceptable standards AIs should be able to de monstrate their observance of the guidelines as required by the IA A full analysis of the GL14 is beyond the scope of this document However the following sections address the considerations in the GL14 that most frequently arise in interactions with AIs Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Ins urance Activities Guidelines 7 Prior Notification of Material Outsourcing Under Section 61 of the GL14 an AI is required to notify the IA when the AI is planning to enter into a new material outsourcing arrangement or significantly vary an existing one The notification includes the following requirements: • Unless otherwise justifiable by the AI the notification should be made at least 3 months before the day on which the new outsourcing arrangement is proposed to be entered into or the existing arrangement is proposed to be varied significantly • A detailed description of the proposed outsourcing arrangement to be entered into or the significant proposed change • Sufficient information to satisfy the IA that the AI has taken into account and properly addressed all of the essential iss ues set out in Section 5 of the GL14 Outsourcing Policy Section 58 of the GL14 sets out a list of factors that should be evaluated in the context of service provider due diligence when an AI is considering an outsourcing arrangement including the use of cloud services The following table includes considerations for each component of Section 58 Due Diligence Requirement Customer Considerations (a) reputation experience and quality of service Since 2006 AWS has provided flexible scalable and secure IT infrastructure to businesses of all sizes around the world AWS continues to grow and scale allowing us to provide new services that help millions of active customers (b) financial soundness in particular the ability to continue to provide t he expected level of service The financial statements of Amazoncom Inc include AWS’s sales and income permitting assessment of its financial position and ability to service its debts and/or liabilities These financial statements are available from the SEC or at Amazon’s Investor Relations website Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activ ities Guidelines 8 Due Diligence Requirement Customer Considerations (c) managerial skills 
technical and operational expertise and competence in particular the ability to deal with disruptions in business continuity AWS management has developed a strategic business plan which includes risk identification and the implementation of controls to mitigate or manage risks AWS management re ‐ evaluates the strategic business plan at least biannually This process requires ma nagement to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks The AWS Cloud operates a global infrastructure with multiple Availability Zones within multiple geographic AWS Regions around the world For more information see AWS Global Infrastructure AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality integrity and availability of customers’ systems and data Maintaining customer trust and confidence is of the utmost i mportance to AWS AWS performs a continuous risk assessment process to identify evaluate and mitigate risks across the company The process involves developing and implementing risk treatment plans to mitigate risks as necessary The AWS risk management team monitors and escalates risks on a continuous basis performing risk assessments on newly implemented controls at least every six months (d) any license registration permission or authorization required by law to perform the outsourced service While Hong Kong does not have specific licensing or certification requirements for operating cloud services AWS has multiple attestations for secure and compliant operation of its services Globally these include certification to ISO 27017 (guidelines for in formation security controls applicable to the provision and use of cloud services) and ISO 27018 (code of practice for protection of personally identifiable information (PII) in public clouds) For more information about our assurance programs see AWS Assurance Programs (e) extent of reliance on sub contractors and effectiveness in monitoring the work of sub contractors AWS creates and maintains written agreements with third parties (for example contractors or vendors) in accordance with the work or service to be provided and implements appropriate relationship management mechanisms in line with their relationship to the business Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidel ines 9 Due Diligence Requirement Customer Considerations (f) compatibility with the insurer’s corporate culture and future development strategies AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that the quality and security requirements are met with each release The AWS strategy for the design and development of services is to clearly define services in terms of customer use cases service performance marketing and distribution requirements production and testing and legal and regulatory requirements (g) familiarity with the insurance industry and capacity to keep pace with innovation in the market For a list of case studies from financial services customers that have deployed applications on the AWS Cloud see Financial Services Customer Stories For a list of financial services cloud solutions provided by AWS see Financial Services Cloud Solutions The AWS Cloud pla tform expands 
daily For a list of the latest AWS Cloud services and news see What's New with AWS Outsourcing Agreement An outsourcing agreement should be undertaken in the form of a legally binding written agreement Section 510 of the Guideline on Outsourcing (GL14) clarifies the matters that an AI should consider when entering into an outsourcing arrangement with a service provider including performance standards certain reporting or notification requirem ents and contingency plans AWS cust omers may have the option to enroll in an Enterprise Agreement with AWS Enterprise Agreements give customers the option to tailor agreements that best suit your organization’s needs For more information about AWS Ent erprise Agreements contact your AWS representative Information Confidentiality Under Sections 512 513 and 514 of the Guideline on Outsourcing (GL14) AIs need to ensure that the outsourcing arrangements comply with relevant laws and statutory requir ements on customer confidentiality The following table includes considerations for Sections 512 513 and 514 Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 10 Requirement Customer Considerations 512 The insurer should ensure that it and the service provider have proper safeguards in place to protect the integrity and confidentiality of the insurer’s information and customer data Data Protection – You choose how your data is secured AWS offers you strong encryption for your data in transit or at rest and AWS provides you with the option to m anage your own encryption keys If you want to tokenize data before it leaves your organization you can achieve this through a number of AWS partners that provide this Data Integrity – For access and system monitoring AWS Config provides you with an AWS resource inventory configuration history and configuration change notifications to enable security and governance Config rules enable you to create rules that automatically check the configuration of AWS resources recorded by AWS Config When your reso urces are created updated or deleted AWS Config streams these configuration changes to Amazon Simple Notification Service (Amazon SNS) which notifies you of all configuration changes AWS Config represents relationships between resources so that you c an assess how a change to one resource might impact other resources Data Segregation – Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own IP address range creation of subnets and configuration of route tables and network gateways Access Rights – AWS provides a number of ways for you to identify users and securely access your AWS Account A complete list of credentials supported by AWS can be found in the AWS Management Console by choosing your user name in the navigation bar and then choosing My Security Credentials AWS also pro vides additional security options that enable you to further protect your AWS Account and control access using the following: AWS Identity and Access Management (IAM) key management and rotation temporary security credentials and multi factor authentica tion (MFA) Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 11 Requirement Customer Considerations 
513 An authorized insurer should take into account any legal or contractual obligation to notify customers of the outsourcing arrangement and circumstances under which their data may be disclosed or lost In the event of the termination of th e outsourcing agreement the insurer should ensure that all customer data are either retrieved from the service provider or destroyed AWS provides you with the ability to delete your data Because you retain control and ownership of your data it is your responsibility to manage data retention to your own requirements If you decide to leave AWS you can manage access to your data and AWS services and resources including the ability to import and export data AWS provides services such as AWS Import/Expo rt to transfer large amounts of data into and out of AWS using physical storage appliances For more information see Cloud Storage with AWS Additionally AWS offers AWS Database Migration Service a web service that you can use to migrate a database from an AWS service to an on premises database In alignment with ISO 27001 standards when a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent your organization’s data from being exposed to unauthorized individuals AWS uses the techniques detailed in DoD 522022 M (“National Industrial Security Program Operating Manual “) or NIST 800 88 (“Guidelines for Media Sanitization” ) to destroy data as part of the decommissioning process If a hardware device is unable to be decommissioned using these procedures the device will be degaussed or physically destroyed in accordance with industry standard practices For more information see ISO 27001 standards Annex A domain 8 AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard For additional details see AWS Cloud Security Also see the Section 73 of the Customer Agreement which is available at AWS Customer Agreement Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 12 Requirement Customer Considerations 514 An authorized insurer should notify the IA forthwith of any unauthorized access or breach of confidentiality by the service provider or its sub contractor that affects the insurer or its customers AWS employees are trained on how to recognize suspected security incidents and where to report them When appropriate incidents are reported to relevant authorities AWS maintains the AWS security bulletin webpage located at https://awsamazoncom/security/security bulletins to notify customers of security and privacy events affecting AWS services Customers can subscribe to the Secu rity Bulletin RSS Feed to keep abreast of security announcements on the Security Bulletin webpage The customer support team maintains a Service Health Dashboard webpage located athttp://statusawsamazoncom/ to alert customers to any broadly impacting availability issues Customers are responsible for their security in the cloud It is important to note that when using AWS services customers maintain control over their content and are responsible for managing critical content security requirements inc luding who has access to their content and how those access rights are granted managed and revoked AWS customers should consider implementation of the following best practices to protect against and detect security breaches: • Use encryption to secure cus tomer data • 
Configure the AWS services to keep customer data secure AWS provides customers with information on how to secure their resources within the AWS service's documentation at http://docsawsamazoncom/ • Implement least privilege permissions for a ccess to your resources and customer data • Use monitoring tools like AWS CloudWatch to track when customer data is accessed and by whom Monitoring and Control Under Section 515 of the Guideline on Outsourcing (GL14) AIs should ensure that they have suf ficient and appropriate resources to monitor and control outsourcing arrangements at all times Section 516 further sets out that once an AI implements an outsourcing arrangement it should regularly review the effectiveness and adequacy of its controls i n monitoring the performance of the service provider AWS has implemented a formal documented incident response policy and program this can be reviewed in the SOC 2 report via AWS Artifact You can also see security Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 13 notifications on the AWS Security Bulletins website AWS provides you with various tools you can use to monitor your services including those already noted and others you can find on the AWS Marketplace Contingency Planning Under Sections 517 and 518 of the Guideline on Outsourcing (GL14) if an AI chooses to outsource service to a service provider they should put in place a contingency plan to ensure that the AI’s busine ss won’t be disrupted as a result of undesired contingencies of the service provider such as system failures The AI should also ensure that the service provider has its own contingency plan that covers daily operational and systems problems The AI shoul d have an adequate understanding of the service provider's contingency plan and consider the implications for its own contingency planning in the event that the outsourced service is interrupted due to undesired contingencies of the service provider AWS a nd regulated AIs share a common interest in maintaining operational resilience ie the ability to provide continuous service despite disruption Continuity of service especially for critical economic functions is a key prerequisite for financial stabi lity For more information about AWS operational resilience approaches see the AWS whitepaper Amazon Web Services’ Approach to Operational Resilience in the Fin ancial Sector & Beyond The AWS Business Continuity plan details the process that AWS follows in the case of an outage from detection to deactivation This plan has been developed to recover and reconstitute AWS using a three phased approach: Activation and Notification Phase Recovery Phase and Reconstitution Phase This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence maximizing the effectiveness of the recovery and reconstitution efforts and minimi zing system outage time due to errors and omissions For more information see the AWS whitepaper Amazon Web Services: Overview of Security Processes and the SOC 2 re port in the AWS Artifact console AWS provides you with the capability to implement a robust continuity plan including frequent server instance backups data redundancy replication and the flexibility to place instances and store data within multiple geo graphic Regions as well as across multiple Availability Zones within each Region For more information about disaster recovery approaches see Disaster Recovery Amazon Web 
Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 14 Hong Kong Insurance Authority Guid eline on the Use of Internet for Insurance Activities (GL8) The Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8) aims to draw attentio n to the special considerations that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet based insurance activities Sections 51 items (a) (g) of the Guideline on the Use of Internet for Insurance Activities (GL8) sets out a series of requirements regarding information security confidentiality integrity data protection payment systems security and related concerns for AIs to address when carrying out internet insurance activities AIs should take all pract icable steps to ensure the following: Requirement Customer Considerations (a) a comprehensive set of security policies and measures that keep up with the advancement in internet security technologies shall be in place AWS has established formal policies a nd procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality integrity and availability of your syste ms and data Maintaining customer trust and confidence is of the utmost importance to AWS AWS works to comply with applicable federal state and local laws statutes ordinances and regulations concerning security privacy and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer data Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 15 (b) mechanisms shall be in place to maintain the integrity of data stored in the system hardware whilst in transit and as displayed on the website AWS is designed to protect the confidentiality and integrity of transmitted data through the comparison of a cryptographic hash of data transmitted This is done to help ensure that the message is not corrupted or altered in transit Data that has been alte red or corrupted in transit is immediately rejected AWS provides many methods for you to securely handle your data: AWS enables you to open a secure encrypted channel to AWS servers using HTTPS (TLS/SSL) Amazon S3 provides a mechanism that enables you t o use MD5 checksums to validate that data sent to AWS is bitwise identical to what is received and that data sent by Amazon S3 is identical to what is received by the user When you choose to provide your own keys for encryption and decryption of Amazon S 3 objects (S3 SSE C) Amazon S3 does not store the encryption key that you provide Amazon S3 generates and stores a one way salted HMAC of your encryption key and that salted HMAC value is not logged Connections between your applications and Amazon RDS MySQL DB instances can be encrypted using TLS/SSL Amazon RDS generates a TLS/SSL certificate for each database instance which can be used to establish an encrypted connection using the default MySQL client When an encrypted connection is established dat a transferred between the database instance and your application is encrypted during transfer If you require data to be encrypted while at rest in the database your application must manage the encryption and decryption of data Additionally you can set up controls to have your 
database instances only accept encrypted connections for specific user accounts Data is encrypted with 256 bit keys when you enable AWS KMS to encrypt Amazon S3 objects Amazon EBS volumes Amazon RDS DB Instances Amazon Redshift Data Blocks AWS CloudTrail log files Amazon SES messages Amazon Workspaces volumes Amazon WorkMail messages and Amazon EMR S3 storage AWS offers you the ability to add an additional layer of security to data at rest in the cloud providing scalable and efficient encryption features Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 16 Requirement Customer Considerations This includes: • Data encryption capabilities available in AWS storage and database services such as Amazon EBS Amazon S3 Amazon Glacier Amazon RDS for Oracle Database Amazon RDS for SQL Server and Amazon Redshift • Flexible key management options including AWS Key Management Service (AWS KMS) that allow you to choose whether to have AWS manage the encryption keys or enable you to keep complete control over your keys • Dedicated hardware based cryptographi c key storage using AWS CloudHSM which enables you to satisfy compliance requirements In addition AWS provides APIs that you can use to integrate encryption and data protection with any of the services you develop or deploy in the AWS Cloud Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 17 Requirement Customer Considerations (c) approp riate backup procedures for the database and application software shall be implemented AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services Critical AWS system components including audit evidence and logging records are replicated across multiple Availability Zones and backups are maintained and monitored You retain control and ownership of your data When you store data in a specific Region it is not replic ated outside that Region It is your responsibility to replicate data across Regions if your business needs require this capability Amazon S3 supports data replication and versioning instead of automatic backups You can however back up data stored in Amazon S3 to other AWS Regions or to on premises backup systems Amazon S3 replicates each object across all Availability Zones within the respective Region Replication can provide data and service availability in the case of system failure but provides no protection against accidental deletion or data integrity compromise —it replicates changes across all Availability Zones where it stores copies Amazon S3 offers standard redundancy and reduced redundancy options which have different durability objectiv es and price points Each Amazon EBS volume is stored as a file and AWS creates two copies of the EBS volume for redundancy Both copies reside in the same Availability Zone however so while Amazon EBS replication can survive hardware failure it is not suitable as an availability tool for prolonged outages or disaster recovery purposes We recommend that you replicate data at the application level or create backups Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time If the volume is corrupt (for example due to system failure) or data from it is deleted you can restore the volume from snapshots Amazon EBS snapshots are AWS objects to which 
IAM users groups and roles can be assigned permiss ions so that only authorized users can access Amazon EBS backups Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 18 Requirement Customer Considerations (d) a client’s personal information (including password if any) shall be protected against loss; or unauthorized access use modification or disclosure etc You control your data With AWS you can do the following: • Determine where your data is stored including the type of storage and geographic Region of that storage • Choose the secured state of your data We offer you strong encryption for your content in transit or at rest and we provide you with the option to manage your own encryption keys • Manage access to your data and AWS services and resources through users groups permissions and credentials that you control (e) a client’s electronic signature if any shall be verified Amazon Partner Network (APN) Technology Partners provide software solutions (including electronic signature solutions) that are either hosted on or integrated with the AWS Cloud platform The AWS Partner Solutions Finder provides you with a centralized p lace to search discover and connect with trusted APN Technology and Consulting Partners based on your business needs For more information see AWS Partner Solutions Finder Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 19 Requirement Customer Considerations (f) the electronic payme nt system (eg credit card payment system) shall be secure AWS is a Payment Card Industry (PCI) compliant cloud service provider having been PCI DSS Certified since 2010 The most recent assessment validated that AWS successfully completed the PCI Data Security Standards 32 Level 1 Service Provider assessment and was found to be compliant for all the services outlined on AWS Services in Scope by Compliance Program The AWS PCI Complian ce Package which is available through AWS Artifact includes the AWS PCI DSS 32 Attestation of Compliance (AOC) and AWS 2016 PCI DSS 32 Responsibility Summary PCI compliance on AWS is a shared responsibility In accordance with the shared responsibili ty model all entities must manage their own PCI DSS compliance certification While for the portion of the PCI cardholder environment deployed in AWS your organization’s QSA can rely on AWS Attestation of Compliance (AOC) you are still required to satis fy all other PCI DSS requirements The AWS 2016 PCI DSS 32 Responsibility Summary provides you with guidance on what you are responsible for For more information about AWS PCI DSS Compliance see PCI DSS Level 1 Service Provider Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 20 Requirement Customer Considerations (g) a valid insurance contract shall not be cancelled accidentally maliciously or consequent upon careless computer handling Your data is validated for integrity and corrupted or tampered data is not written to storage Amazon S3 utilizes checksums int ernally to confirm the continued integrity of content in transit within the system and at rest Amazon S3 provides a facility for you to send checksums along with data transmitted to the service The service validates the checksum upon receipt of the data to determine that no corruption occurred in transit Regardless of whether a checksum 
is sent with an object to Amazon S3 the service utilizes checksums internally to confirm the continued integrity of content in transit within the system and at rest Whe n disk corruption or device failure is detected the system automatically attempts to restore normal levels of object storage redundancy External access to content stored in Amazon S3 is logged and the logs are retained for at least 90 days including relevant access request information such as the accessor IP address object and operation Next Steps Each organization’s cloud adoption journey is unique In order to successfully execute your adoption you need to understand your organization’s current state the target state and the transition required to achieve the target state Knowing this will help you set goals and create work streams that will enable staff to thrive in the cloud The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey Guidance and bestpractices prescribed within the framework can help you build a comprehensive approach to cloud computing across your organiza tion throughout your IT lifecycle The AWS CAF breaks down the complicated process of planning into manageable areas of focus Many organizations choose to apply the AWS CAF methodology with a facilitator led workshop To find more about such workshops p lease contact your AWS representative Alternatively AWS provides access to tools and resources for self service application of the AWS CAF methodology at AWS Cloud Adoption Framework Amazon Web Services AWS Us er Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 21 For AIs in Hong Kong next steps typically also include the following: • Contact your AWS representative to discuss how the AWS Partner Network and AWS Solution Architects Professional Services teams and Training instructors can assist with your cloud adoption journey If you do not have an AWS representative contact us at https://awsamazoncom/ contact us/ • Obtain and review a copy of the latest AWS SOC 1 & 2 reports PCI DSS Attestation of Compliance and Responsibility Summary and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console) • Consider the relevance and application of the CIS AWS Foundations Benchmark available here and here as appropriate for your cloud journey and use cases These industry accepted best practices published by the Center for Internet Security go beyond the high level security guidance already available providing AWS users with clear step bystep implementation and assessment recommendations • Dive deeper on other governance and risk management practices as necessary in light of your due diligence and risk assessment using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below • Speak to your AWS representative about an AWS Enterprise Agreement Additional Resources For additional information see: • AWS Cloud Security Whitepapers & Guides • AWS Compliance • AWS Cloud Security Services • AWS Best Practices for DDoS Resiliency • AWS Security Checklist • Cloud Adoption Framework Security Perspective • AWS Security Best Practices • AWS Risk & Compliance • Using AWS in the Context of Hong Kong Privacy Considerations Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 22 Document 
Revisions Date Description April 2020 Updates to Additional Resources section February 2020 Revision and updates October 2017 First publication
|
General
|
consultant
|
Best Practices
|
AWS_User_Guide_to_Financial_Services_Regulations__Guidelines_in_Hong_Kong__Monetary_Authority
|
AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals April 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitmen ts or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to i ts customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview 1 Security and the Shared Responsibility Model 1 Security IN the Cloud 2 Security OF the Cloud 3 AWS Compliance Assurance Programs 4 AWS Artifact 6 AWS Regions 6 HKMA Supervisory Policy Manual on Outsourcing (SA 2) 6 Outsourcing Notification 7 Assessment of Service Providers 7 Outsourcing Agreement 9 Information Confidentiality 9 Monitoring and Control 11 Contingency Planning 12 Access to Outsourced Data 12 HKMA Supervisory Policy Manual on General Principles for Technology Risk Management (TM G1) 13 Next Steps 16 Additional Resources 17 Document Rev isions 18 About this Guide This document provides information to assist Authorized Institutions (AIs) in Hong Kong regulated by the Hong Kong Monetary Authority (HKMA) as they accelerate their use of Amazon Web Services’ (AWS) Cloud services AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals 1 Overview The Hong Kong Monetary Authority (HKMA) issues guidelines to provide the Hong Kong banking industry with practical guidance to facilitate compliance with regulatory requirements The guidelines relevant to the use of outsourced services instruct Authorized Institutions (AIs) to perform risk assessments perform due diligence reviews of service providers ensure controls are in place to preserve information confidentiality have sufficient monitoring and control oversight on the outsourcing arrangement and establish contingency arrangements The following sections provide considerations for AIs as they assess their responsibili ties with regards to the following guidelines: • Supervisory Policy Manual on Outsourcing (SA 2) This Supervisory Policy Manual sets out the HKMA's supervisory approach to outsourcing and the major points which the HKMA recommends AIs to address when outsourcing their activities including the use of cloud services • Supervisory Policy Manual on General Principles for Technology Risk Management (TM G1) This Supervisory Policy Manual provides AIs with guidance on general principles which AIs are expected to consider in managing technology related risks Taken togeth er AIs can use this information to perform their due diligence and assess how to implement an appropriate information security risk management and governance program for their use of AWS For a list of the guidelines see the Regulatory Resources – Regulatory Guides section on the HKMA website Security and the Shared Responsibility Model Cloud security is a shared responsibility At AWS we maintain a high bar for secur ity OF the cloud through 
robust governance automation and testing and validates our approach through compliance with global and regional regulatory requirements and best practices Security IN the cloud is the responsibility of the customer What this means is that customers retain control of the security program they choose to implement to protect their own content platform applications systems and networks Customers should carefully consider how they will manage the services they choose as thei r responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations We AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals 2 recommend that customers think about their security responsibilities on a service by service basis because the extent of their responsibilities may differ between services Figure 1 – Shared Responsibility Model Security IN the Cloud Customers are responsible for their security in the cloud For services such as Amazon Elastic Compute Cloud ( Amazon EC2) the customer is responsible for managing the guest operating system (including installing updates and security patches) and other associated application software as well as the configuration of the AWS provided security group firewall Customers can also use managed services such as databases directory and web application firewall services which provide customers the resources they need to perform specific tasks without having to launch and main tain virtual machines For example a customer can launch an Amazon Aurora database which Amazon Relational Database Service ( Amazon RDS) manages to handle tasks such as provisioning patching backup recovery failure detection and repair It is impor tant to note that when using AWS services customers maintain control over their content and are responsible for managing critical content security requirements including: • The content that they choose to store on AWS • The AWS services that are used with t he content AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals 3 • The country where their content is stored • The format and structure of their content and whether it is masked anonymized or encrypted • How their content is encrypted and where the keys are stored • Who has access to their content and how those access rights are granted managed and revoked Because customers rather than AWS control these important factors customers retain responsibility for their choices Customers are responsible for the security of the content they put on AWS or that th ey connect to their AWS infrastructure such as the guest operating system applications on their compute instances and content stored and processed in AWS storage platforms databases or other services Security OF the Cloud For many services such as EC2 AWS operates manages and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In order to provide assurance about security of the AWS Cloud we continuously audit our environment AWS infrastructure and services are validated against multiple compliance standards and industry certifications across geographies and industries Customers can use the AWS compliance certifications to validate th e implementation and effectiveness of AWS security 
controls including internationally recognized security best practices and certifications The AWS compliance program is based on the following actions: • Validate that AWS services and facilities across the globe maintain a ubiquitous control environment that is operating effectively The AWS control environment encompasses the people processes and technology necessary to establish and maintain an environment that supports the operating effectiveness of the AWS control framework AWS has integrated applicable cloud specific controls identified by leading cloud computing industry bodies into the AWS control framework AWS monitors these industry groups to identify leading practices that can be implemented an d to better assist customers with managing their control environment AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals 4 • Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements AWS engages with external certifying bodies and independent auditor s to provide customers with information regarding the policies processes and controls established and operated by AWS Customers can use this information to perform their control evaluation and verification procedures as required under the applicable c ompliance standard • Monitor that AWS maintains compliance with global standards and best practices through the use of thousands of security control requirements AWS Compliance Assurance Programs In order to help customers establish operate and leverage the AWS security control environment AWS has developed a security assurance program that uses global privacy and data protection best practices These security protections and control processes are independently validated by multiple third party independ ent assessments The followings are of particular importance to Hong Kong AIs: ISO 27001 – ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls following the ISO 27002 best practice guidance The basis of this certification is the development and implementation of a rigorous security program which includes the development and implementation of an Information Security Management System that defines how AWS perpetually manages security in a holistic comprehensive manner For more information or to download the AWS ISO 27001 certification see the ISO 27001 Compliance webpage ISO 27017 – ISO 27017 provides guidance on the information security aspects of cloud computing recommending the implementation of cloud specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards This code of practice provides additional security controls impleme ntation guidance specific to cloud service providers For more information or to download the AWS ISO 27017 certification see the ISO 27017 Compliance webpage ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PI I) It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management 
Supervisory Policy Manuals 5 the existing ISO 27002 control set For more information or to download the AWS ISO 27018 certificati on see the ISO 27018 Compliance webpage ISO 9001 ISO 9001 outlines a process oriented approach to documenting and reviewing the structure responsibilities and procedures required to ac hieve effective quality management within an organization The key to ongoing certification under this standard is establishing maintaining and improving the organizational structure responsibilities procedures processes and resources in a manner wh ere AWS products and services consistently satisfy ISO 9001 quality requirements For more information or to download the AWS ISO 9001 certification see the ISO 9001 Compliance webpage PCI DSS Level 1 The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council PCI DSS applies to all entities that store process or transmit card holder data (CHD) and/or sensitive authentication data (SAD) including merchants processors acquirers issuers and service providers The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council F or more information or to request the PCI DSS Attestation of Compliance and Responsibility Summary see the PCI DSS Compliance webpage SOC – AWS System & Organization Controls (SOC) Reports are independent third party audit reports that demonstrate how AWS achieves key compliance controls and objectives The purpose of these reports is to help customers and their auditors understand the AWS controls established to support operations and compliance For more information see the SOC Compliance webpage There are three types of AWS SOC Reports: • SOC 1 : Provides information about the AWS control environment that may be relevant to a customer’s internal controls over financial reporting as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (ICOFR) • SOC 2 : Provides customers and their service users with a business need with an independent assessment of the AWS control environment relevant to system security availability and confidentiality • SOC 3 : Provides customers and their service users with a business need with an independent assessment of the AWS control environm ent relevant to system security availability and confidentiality without disclosing AWS internal information AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals 6 By tying together governance focused audit friendly service features with such certifications attestations and audit standards AWS Compliance enablers build on traditional programs helping customers to establish and operate in an AWS security control environment For more information about other AWS certifications and attestations see AWS Compliance Programs AWS Artifact Customers can review and download reports and details about more than 2600 security controls by using AWS Artifact the automated compliance reporting tool available in the AWS Management Console The AWS Artifact portal provides on demand access to AWS’s security and compliance documents including SOC reports PCI reports and certifications from accreditation bodies across geographies and compliance verticals AWS Regions The AWS Cloud infrastructure is built around AWS Regions and Availability Zones An AWS Region is a 
physical location in the world that is made up of multiple Availability Zones Availability Zones consist of one or more discrete data centers tha t are housed in separate facilities each with redundant power networking and connectivity These Availability Zones offer customers the ability to operate production applications and databases at higher availability fault tolerance and scalability th an would be possible from a single data center For current information on AWS Regions and Availability Zones see https://awsamazoncom/about aws/global infrastructure/ HKMA Superv isory Policy Manual on Outsourcing (SA 2) The HKMA Supervisory Policy Manual on Outsourcing (SA 2) provides guidance and recommendation s on prudent risk management practices for outsourcing including use of cloud services by AIs AIs that use the cloud are expected to carry out due diligence evaluate and address risks and enter into appropriate outsourcing agreements Section 22 of t he SA 2 states that the AI’s risk assessment should include a determination of the importance and criticality of the services to be outsourced the cost and benefit of the outsourcing and the impact on the AI’s risk profile (in respect of operational leg al AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals 7 and reputation risks) of the outsourcing AIs should be able to demonstrate their observance of the guidelines to the HKMA through the submission of the HKMA Risk Assessment Form on Technology related Outsourcing (including Cloud Computing) six weeks be fore target implementation date A full analysis of the SA 2 is beyond the scope of this document However the following sections address the considerations in the SA 2 that most frequently arise in interactions with AIs Outsourcing Notification Under Se ction 132 of the SA 2 AIs are required to notify the HKMA via a Notification Letter prior to implementing solutions which leverage public cloud services in respect of banking related business areas including in cases where the AI is outsourcing a banki ng activity to a service provider who is providing services using the public cloud In general a notification letter should be submitted to the HKMA 3 months prior to the commencement of the outsourcing activity The AI must affirm specific compliance w ith controls related to outsourcing and cloud operation together with general compliance with other relevant HKMA guidelines such as the Supervisory Policy Manual on General Principles for Technology Risk Management (TM G1) The HKMA expects AIs to full y comply with all relevant regulatory control requirements prior to launching any new outsourced services including when deploying on AWS cloud Assessment of Service Providers Sections 21 22 and 23 of the SA 2 set out a list of topics that should be evaluated in the course of due diligence when an AI is considering an outsourcing arrangement including use of cloud services The following table includes considerations for each component of Section 231 of the SA 2 Due Diligence Requirement Customer Considerations Financial soundness The financial statements of Amazoncom Inc include AWS’s sales and income permitting assessment of its financial position and ability to service its debts and/or liabilities These financial statements are ava ilable from the SEC or at Amazon’s Investor Relations website AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk 
Management Supervisory Policy Manuals 8 Due Diligence Requirement Customer Considerations Reputation Since 2006 AWS has provided flexible scalable and secure IT infrastructure to businesses of all sizes around the world AWS continues to grow and scale allowing us to provide new services that help millions of active customers Managerial skills AWS management has developed a strategic business plan which includes risk identification and the implementation of controls to mitigate or m anage risks AWS management re ‐ evaluates the strategic business plan at least biannually This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks Technic al capabilities operational capability and capacity The AWS Cloud operates a global infrastructure with multiple Availability Zones within multiple geographic AWS Regions around the world For more information see AWS Global Infrastructure AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy esta blishes guidelines for protecting the confidentiality integrity and availability of customers’ systems and data Maintaining customer trust and confidence is of the utmost importance to AWS AWS performs a continuous risk assessment process to identify evaluate and mitigate risks across the company The process involves developing and implementing risk treatment plans to mitigate risks as necessary The AWS risk management team monitors and escalates risks on a continuous basis performing risk assessme nts on newly implemented controls at least every six months Compatibility with the AI's corporate culture and future development strategies AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that the quality and security requirements are met with each release The AWS strategy for the design and development of services is to clearly define services in terms of customer use cases service performance marketing and distribution requirements produc tion and testing and legal and regulatory requirements AWS User Guide to the Hong Kong Monetary Authority on Outsourcing and General Principles for Technology Risk Management Supervisory Policy Manuals 9 Due Diligence Requirement Customer Considerations Familiarity with the banking industry and capacity to keep pace with innovation in the market For a list of case studies from financial services customers that have deployed applications on the AWS Cloud see Financial Services Customer Stories For a list of financial ser vices cloud solutions provided by AWS see Financial Services Cloud Solutions The AWS Cloud platform expands daily For a list of the latest AWS Cloud services and news see What's New with AWS Outsourcing Agreement Section 24 of the SA 2 clarifies that the type and level of services to be provided and the contractual liabilities and obligations of the service provider must be clearly set out in a serv ice agreement between the AI and their service provider HKMA expect AIs to regularly review their outsourcing agreements AWS customers may have the option to enroll in an Enterprise Agreement with AWS Enterprise Agreements give customers the option to tailor agreements that best suit your organization’s needs For more information about AWS Enterprise Agreements contact your AWS representative Information Confidentiality Under Section 
2.5 of the SA 2, AIs need to ensure that, as part of the outsourcing, they can continue to comply with local and regional data protection requirements. The following table lists what you should consider.

Requirement (Section 2.5.2): AIs should have controls in place to ensure that the requirements of customer data confidentiality are observed and proper safeguards are established to protect the integrity and confidentiality of customer information.
Customer considerations:
• Data Protection – You choose how your data is secured. AWS offers you strong encryption for your data in transit or at rest, and AWS provides you with the option to manage your own encryption keys. If you want to tokenize data before it leaves your organization, you can engage a number of AWS Partners with relevant expertise.
• Data Integrity – For access and system monitoring, AWS Config provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. AWS Config rules enable you to create rules that automatically check the configuration of AWS resources recorded by AWS Config. When your resources are created, updated, or deleted, AWS Config streams these configuration changes to Amazon Simple Notification Service (Amazon SNS), which notifies you of all configuration changes. AWS Config represents relationships between resources so that you can assess how a change to one resource might impact other resources.
• Data Segregation – Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
• Access Rights – AWS provides a number of ways for you to identify users and securely access your AWS account. A complete list of credentials supported by AWS can be found in the AWS Management Console by choosing your user name in the navigation bar and then choosing My Security Credentials. AWS also provides additional security options that enable you to further protect your AWS account and control access using the following: AWS Identity and Access Management (IAM), key management and rotation, temporary security credentials, and multi-factor authentication (MFA).

Requirement (Section 2.5.4): In the event of a termination of the outsourcing agreement, for whatever reason, AIs should ensure that all customer data is either retrieved from the service provider or destroyed.
Customer considerations: AWS provides you with the ability to delete your data. Because you retain control and ownership of your data, it is your responsibility to manage data retention to your own requirements. If you decide to leave AWS, you can manage access to your data and AWS services and resources, including the ability to import and export data. AWS provides services such as AWS Import/Export to transfer large amounts of data into and out of AWS using physical storage appliances. For more information, see Cloud Storage with AWS. Additionally, AWS offers AWS Database Migration Service, a web service that you can use to migrate a database from an AWS service to an on-premises database.
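To make the Section 2.5.4 exit considerations concrete, the following minimal sketch (using the AWS SDK for Python, Boto3) shows one way an AI could retrieve a copy of its objects from an Amazon S3 bucket and then delete them as part of a termination plan. The bucket name and local export path are hypothetical placeholders, and the sketch is illustrative only, not an AWS-prescribed exit procedure; a real exit plan should also cover other data stores, versioned buckets, and backups.

```python
"""
Illustrative exit sketch for SA 2 Section 2.5.4: retrieve, then delete, data
held in an Amazon S3 bucket. All names are hypothetical placeholders.
"""
import pathlib

import boto3

BUCKET = "example-ai-records"             # hypothetical bucket name
EXPORT_DIR = pathlib.Path("./s3-export")  # local destination for retrieved copies

s3 = boto3.client("s3")


def retrieve_then_delete(bucket: str, export_dir: pathlib.Path) -> None:
    export_dir.mkdir(parents=True, exist_ok=True)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            local_copy = export_dir / key.replace("/", "__")
            # Step 1 (retrieve): download a copy of the object before exit.
            s3.download_file(bucket, key, str(local_copy))
            # Step 2 (destroy): remove the object once the copy is confirmed.
            # On versioned buckets this only creates a delete marker; older
            # versions would also need to be removed (for example, through a
            # lifecycle rule) to destroy the data completely.
            s3.delete_object(Bucket=bucket, Key=key)


if __name__ == "__main__":
    retrieve_then_delete(BUCKET, EXPORT_DIR)
```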
In alignment with ISO 27001 standards, when a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent your organization's data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. If a hardware device is unable to be decommissioned using these procedures, the device will be degaussed or physically destroyed in accordance with industry-standard practices. For more information, see ISO 27001 standards, Annex A, domain 8. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For additional details, see AWS Cloud Security. Also see Section 7.3 of the Customer Agreement, which is available at AWS Customer Agreement.

Monitoring and Control

Under Section 2.6 of the SA 2, AIs need to ensure that they have sufficient and effective procedures for monitoring the performance of the service provider, the relationship with the service provider, and the risks associated with the outsourced activity. AWS has implemented a formal, documented incident response policy and program; this can be reviewed in the SOC 2 report via AWS Artifact. You can also see security notifications on the AWS Security Bulletins website. AWS provides you with various tools you can use to monitor your services, including those already noted and others you can find on the AWS Marketplace.

Contingency Planning

Under Section 2.7 of the SA 2, AIs should maintain contingency plans that take the following into consideration: the service provider's contingency plan, a breakdown in the systems of the service provider, and telecommunication problems in the host country. Section 2.7.2 of the SA 2 states that contingency arrangements in respect of daily operational and systems problems would normally be covered in the service provider's own contingency plan. AIs should ensure that they have an adequate understanding of their service provider's contingency plan and consider the implications for their own contingency planning in the event that the outsourced service is interrupted. AWS and regulated AIs share a common interest in maintaining operational resilience, that is, the ability to provide continuous service despite disruption. Continuity of service, especially for critical economic functions, is a key prerequisite for financial stability. For more information about AWS operational resilience approaches, see the AWS whitepaper Amazon Web Services' Approach to Operational Resilience in the Financial Sector & Beyond.

The AWS Business Continuity plan details the process that AWS follows in the case of an outage, from detection to deactivation. This plan has been developed to recover and reconstitute AWS using a three-phased approach: Activation and Notification Phase, Recovery Phase, and Reconstitution Phase. This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence, maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions. For more information, see the AWS whitepaper Amazon Web Services: Overview of Security Processes and the SOC 2 report in the AWS Artifact console.
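On the AI's side of contingency planning, a common building block is keeping restorable copies of data in a second AWS Region. The hedged sketch below (AWS SDK for Python, Boto3) copies an Amazon EBS snapshot from a primary Region to a recovery Region; the snapshot ID is a hypothetical placeholder and the Regions are examples only. In practice you would typically automate this with a service such as AWS Backup or Amazon Data Lifecycle Manager rather than an ad hoc script.

```python
"""
Illustrative contingency-planning step: copy an EBS snapshot to a second
AWS Region so the workload can be restored there if the primary Region is
disrupted. The snapshot ID and Regions are hypothetical examples.
"""
import boto3

SOURCE_REGION = "ap-east-1"              # primary Region (example)
RECOVERY_REGION = "ap-southeast-1"       # recovery Region (example)
SNAPSHOT_ID = "snap-0123456789abcdef0"   # hypothetical snapshot to protect

# Cross-Region snapshot copies are requested from the *destination* Region.
ec2 = boto3.client("ec2", region_name=RECOVERY_REGION)

response = ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Cross-Region copy for contingency planning",
)
print("Copy started; snapshot in recovery Region:", response["SnapshotId"])
```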
AWS provides you with the capability to implement a robust continuity plan, including frequent server instance backups, data redundancy, replication, and the flexibility to place instances and store data within multiple geographic Regions as well as across multiple Availability Zones within each Region. For more information about disaster recovery approaches, see Disaster Recovery.

Access to Outsourced Data

The SA 2 clarifies that an AI's outsourcing arrangements should not interfere with the ability of the AI to effectively manage its business activities or impede the HKMA in carrying out its supervisory functions and objectives. You retain ownership and control of your data when using AWS services. You have complete control over which services you use and whom you empower to access your content and services, including what credentials will be required. You control how you configure your environments and secure your data, including whether you encrypt your data (at rest and in transit), what other security features and tools you use, and how you use them. AWS does not change your configuration settings, as these settings are determined and controlled by you. You have the complete freedom to design your security architecture to meet your compliance needs. This is a key difference from traditional hosting solutions, where the provider decides on the architecture. AWS enables and empowers you to decide when and how security measures will be implemented in the cloud, in accordance with your business needs. For example, if a higher-availability architecture is required to protect your data, you may add redundant systems, backups, locations, network uplinks, and so on to create a more resilient, high-availability architecture. If restricted access to your data is required, AWS enables you to implement system-level access rights management controls and data-level encryption. For more information, see Using AWS in the Context of Hong Kong Privacy Considerations.

You can validate the security controls in place within the AWS environment through AWS certifications and reports, including the AWS Service Organization Control (SOC) 1, 2, and 3 reports, ISO 27001, 27017, and 27018 certifications, and PCI DSS compliance reports. These reports and certifications are produced by independent third-party auditors and attest to the design and operating effectiveness of AWS security controls. For more information about the AWS approach to audit and inspection, please contact your AWS representative.

HKMA Supervisory Policy Manual on General Principles for Technology Risk Management (TM G1)

The HKMA Supervisory Policy Manual on General Principles for Technology Risk Management (TM G1) sets out risk management principles and best practice standards to guide AIs in meeting their legal obligations. The HKMA expects AIs to have an effective technology risk management framework in place to ensure the adequacy of IT controls and the quality of their computer systems. AWS has produced a TM G1 Workbook that covers the six domains documented within the TM G1. For shared controls, where AWS is expected to provide information as part of the Shared Responsibility Model, AWS controls are mapped against the control requirements of the TM G1.
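As a brief customer-side illustration of the data-level encryption option mentioned above, before turning to the TM G1 controls table that follows, the hedged sketch below uses the AWS SDK for Python (Boto3) to write an object to Amazon S3 encrypted under a customer-managed AWS KMS key, so that the AI controls the key policy and rotation. The bucket name and key alias are hypothetical placeholders, and this is one possible pattern rather than a prescribed control.

```python
"""
Illustrative data-level encryption pattern: store an object in Amazon S3
encrypted with a customer-managed AWS KMS key. Names are hypothetical.
"""
import boto3

BUCKET = "example-ai-confidential"       # hypothetical bucket
KMS_KEY = "alias/example-ai-data-key"    # hypothetical customer-managed key

s3 = boto3.client("s3")

# Write: the object is encrypted server side under the customer-managed key.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2020-q1.csv",
    Body=b"account_id,balance\n...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KMS_KEY,
)

# Read: succeeds only for principals that are also allowed to use the KMS key
# (kms:Decrypt), in addition to having s3:GetObject permission on the bucket.
obj = s3.get_object(Bucket=BUCKET, Key="reports/2020-q1.csv")
print(obj["Body"].read())
```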
The following table shows the AWS response to guidelines Sections 2.1.1 and 3.3.2 of the TM G1.

ID 2.1.1 – Guideline: Achieving a consistent standard of sound practices for IT controls across an AI requires clear direction and commitment from the Board and senior management. In this connection, senior management, who may be assisted by a delegated subcommittee, is responsible for developing a set of IT control policies which establish the ground rules for IT controls. These policies should be formally approved by the Board or its designated committee and properly implemented among IT functions and business units. Responsibility: Customer Specific. Customer considerations: Not applicable.

ID 3.3.2 – Guideline: Proper segregation of duties within the security administration function, or other compensating controls (for example, peer reviews), should be in place to mitigate the risk of unauthorized activities being performed by the security administration function. Responsibility: Shared. Customer considerations: Identity & Access Management: Segregation of Duties. Privileged access to AWS systems by AWS employees is allocated based on least privilege, approved by an authorized individual prior to access provisioning, and assigned a different user ID than used for normal business use. Duties and areas of responsibility (for example, access request and approval; change management request and approval; change development, testing, and deployment) are segregated across different individuals to reduce opportunities for an unauthorized or unintentional modification or misuse of AWS systems. Customers retain the ability to manage segregation of duties for their AWS resources by using AWS Identity and Access Management (IAM). IAM enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

You can get a copy of the TM G1 Workbook by accessing AWS Artifact within the AWS Management Console. To use the TM G1 Workbook, you should review the AWS responses and then enrich them with your own organizational controls. Let's use the previous control statements as an example. Section 2.1.1 of the TM G1 discusses sound practices for IT controls oversight by the AI's board of directors and senior management. This is a principle that would only apply to you and is not specific to the cloud or to particular applications; this control can only be fulfilled by you, the AI. In contrast, Section 3.3.2 of the TM G1 is a shared control. This control requires formal procedures for administering the access rights to system resources and application systems. This is a shared control because AWS administers the access rights to the system resources AWS uses to operate the cloud services, and you administer the system resources that you create using our services. The Workbook also positions you to more clearly consider whether and how to add supplementary technology risk controls that are specific to your line of business, your application teams, or your particular needs. Note that it is important to appreciate the implications of the shared security responsibility model and understand which party is responsible for a particular control. Where AWS is responsible, the AI should identify which of the AWS Assurance reports, certifications, or attestations are used to establish or assess that the control is operating.
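To make the customer side of the Section 3.3.2 shared control more concrete, the hedged sketch below (AWS SDK for Python, Boto3) creates two disjoint IAM groups: one that can only review resources and one that can deploy approved changes. The group names are hypothetical and the managed policies shown are examples only; in practice you would attach least-privilege policies scoped to your own workloads and enforce disjoint membership through your joiner/mover/leaver process.

```python
"""
Illustrative segregation-of-duties pattern with AWS IAM: reviewers can only
read, deployers can roll out approved changes. Group names are hypothetical
and the managed policies are examples, not a recommended permission set.
"""
import boto3

iam = boto3.client("iam")

# Reviewers: read-only visibility, no ability to modify resources.
iam.create_group(GroupName="change-reviewers")
iam.attach_group_policy(
    GroupName="change-reviewers",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Deployers: permitted to roll out approved changes (scope this down in practice).
iam.create_group(GroupName="change-deployers")
iam.attach_group_policy(
    GroupName="change-deployers",
    PolicyArn="arn:aws:iam::aws:policy/AWSCloudFormationFullAccess",
)

# Keeping the two groups' membership disjoint is what enforces the separation;
# that check belongs in your access-recertification process or an AWS Config rule.
```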
Next Steps

Each organization's cloud adoption journey is unique. In order to successfully execute your adoption, you need to understand your organization's current state, the target state, and the transition required to achieve the target state. Knowing this will help you set goals and create work streams that will enable staff to thrive in the cloud. The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud computing across your organization, throughout your IT lifecycle. The AWS CAF breaks down the complicated process of planning into manageable areas of focus. Many organizations choose to apply the AWS CAF methodology with a facilitator-led workshop; to find out more about such workshops, please contact your AWS representative. Alternatively, AWS provides access to tools and resources for self-service application of the AWS CAF methodology at AWS Cloud Adoption Framework.

For AIs in Hong Kong, next steps typically also include the following:
• Contact your AWS representative to discuss how the AWS Partner Network, as well as AWS Solution Architects, Professional Services teams, and Training instructors, can assist with your cloud adoption journey. If you do not have an AWS representative, please contact us at https://aws.amazon.com/contact-us/
• Obtain and review a copy of the latest AWS SOC 1 & 2 reports, PCI DSS Attestation of Compliance and Responsibility Summary, and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console).
• Consider the relevance and application of the CIS AWS Foundations Benchmark, available here and here, as appropriate for your cloud journey and use cases. These industry-accepted best practices published by the Center for Internet Security go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment recommendations.
• Dive deeper on other governance and risk management practices as necessary, in light of your due diligence and risk assessment, using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below.
• Speak with your AWS representative to learn more about how AWS is helping Financial Services customers migrate their critical workloads to the cloud.

Additional Resources

For additional information, see:
• AWS Cloud Security Whitepapers & Guides
• AWS Compliance
• AWS Cloud Security Services
• AWS Best Practices for DDoS Resiliency
• AWS Security Checklist
• Cloud Adoption Framework Security Perspective
• AWS Security Best Practices
• AWS Risk & Compliance
• Using AWS in the Context of Hong Kong Privacy Considerations

Document Revisions
April 2020 – Updates to Additional Resources
February 2020 – Revision and updates
November 2017 – Style and content updates
August 2017 – First publication
|
General
|
consultant
|
Best Practices
|
AWS_User_Guide_to_Financial_Services_Regulations__Guidelines_in_Singapore
|
AWS User Guide to Financial Services Regulations & Guidelines in Singapore First Published July 2017 Updated January 3 2022 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 22 Amazon Web Services Inc or its affiliates All rights reserved Contents About this guide 1 Security of the cloud 4 AWS complian ce programs 5 AWS Artifact 7 AWS Regions 7 MAS Guidelines on Outsourcing 8 Assessment of service providers 8 Cloud computing 11 Outsourcing agreements 15 Audit and inspection 16 MAS Technology Risk Management Guidelines 17 Notice 655 on Cyber Hygiene 20 ABS Cloud Co mputing Implementation Guide 20 23 Key controls 23 Next steps 26 Conclusion 28 Additional resources 28 Contributors 30 Document revisions 30 Abstract This document provides information to help regulated financial institutions (FIs) operating in Singapore as they accelerate their use of Amazon Web Services (AWS) Cloud services Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 1 About this guide This document provides information to assist banks and financial services institutions in Singapore regulated by the Monetary Authority of Singapore (MAS) as they adopt and accelerate their use of the AWS Cloud This guide: • Describes the respective roles that the customer and AWS each play in managing and securing the cloud environment; • Provides an overview of the regulatory requirements and guidance that financial institutions can consider when using AWS; and • Provides additional resources that financial institutions can use to help them design and architect their AWS environment to be secure and meet regulatory expectations The Monetary Authority of Singapore (MAS) Guidelines on Outsourcing for finan cial institutions (FIs) acknowledge s that FIs can leverage cloud services to enhance their operations and reap the benefit of the scale standardization and security of the cloud The MAS Guidelines on Outsourcing instruct FIs to perform due diligence and apply sound governance and risk management practices to their use of cloud services The following sections provide considerations for FIs as they assess their responsibilities related to the following guidelines: • MAS Guidelines on Outsourcing – The Guide lines on Outsourcing provide expanded guidance to the industry on prudent risk management practices for outsourcing including cloud services • MAS Technology Risk Management (TRM) Guidelines – These include guidance for a high level of reliability availab ility and recoverability of critical IT systems and for FIs to implement IT controls to protect customer information from unauthorized access or disclosure • Notice 655 on Cyber Hygiene : – This notice sets out cyber security requirements on securing administrative accounts applying security patching establishing baseline security standards deploying network security devices implementing anti 
malware measures and strengthening user authentication Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 2 • Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide 20 – This guide is intended to assist FIs in further understanding approaches to due diligence vendor management and key controls that should be implemented in cloud outsourcing arrangements Taken together FIs can use this information for their due diligence and to assess how to implement an appropriate information security risk management and governance program for their use of AWS Security and the Shared Responsibility Mod el Before exploring the requirements included in the various guidelines it is important that FIs understand the AWS Shared Responsibility Model AWS Shared Security Responsibility Model AWS operates manages and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate The customer assumes responsibility and management of the guest operating system (including updates and security patches) other asso ciated application software as well as the configuration of the AWS provided security group firewall Customers should carefully consider the services they choose as their responsibilities vary depending on the services used the integration of those servi ces into their IT environment and applicable laws and regulations The nature of this shared Amazon Web Services AWS User Guide to Financial Services Regulations & Guidel ines in Singapore Page 3 responsibility also provides the flexibility and customer control that permits the deployment As shown in the preceding chart this differentiation of responsib ility is commonly referred to as Security “of” the Cloud versus Security “in” the Cloud Customers should carefully consider the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations When using AWS services customers maintain control over their content and are responsible for managing critical content security requirements including: • The content that customers choose to store on AWS • The AWS services that are used with the content • The country where the content is stored • The format and structure of that content and whether it is masked anonymized or encrypted • How the data is encrypted and where the keys are stored • Who has access to that content an d how those access rights are granted managed and revoked It is possible to enhance security and meet more stringent compliance requirements by leveraging technology such as host based firewalls host based intrusion detectio n and prevention and encryption AWS provides tools and information to assist customers in their efforts to account for and validate whether controls are operating effectively in their extended IT environment For more information refer to the AWS Compli ance Center at http://awsamazoncom/compliance For more information on the Shared Responsibility Model and its implications for the storage and processing of personal data and other content using AWS refer to Using AWS in the Context of Singapore Privacy Considerations Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 4 Security of the cloud To provide security of the cloud AWS environments are nearly continuously audited and the infrastructure and 
services are approved to operate under several compliance standards and industry certifications across geographies and verticals Customers can use these certifications to validate the implementation and effectiveness of AWS securit y controls including internationally recognized security best practices and certifications The AWS compliance program is based on the following actions: • Validate that AWS services and facilities across the globe maintain a ubiquitous control environment that is operating effectively The AWS control environment includes policies processes and control activities that leverage various aspects of the AWS overall co ntrol environment The collective control environment encompasses the people processes and technology necessary to establish and maintain an environment that supports the operating effectiveness of our control framework AWS has integrated applicable clo udspecific controls identified by leading cloud computing industry bodies into the AWS control framework AWS monitors these industry groups to identify leading practices that it can implement and to better assist customers with managing their control en vironment • Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements AWS engages with external certifying bodies and independent auditors to provide customers with considerable information regardi ng the policies processes and controls established and operated by AWS Customers can leverage this information to perform their control evaluation and verification procedures as required under the applicable compliance standard • Monitor that AWS mainta ins compliance with global standards and best practices through the use of thousands of security control requirements Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 5 AWS compliance programs AWS has obtained certifications and independent thirdparty attestations for a variety of industry specific w orkloads The following are of particular importance to FIs: • ISO 27001 – ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls following the ISO 27002 best practice guidance The b asis of this certification is the development and implementation of a rigorous security program which includes the development and implementation of an Information Security Management System which defines how AWS perpetually manages security in a holistic comprehensive manner For more information or to download the AWS ISO 27001 certification refer to https://awsamazoncom/compliance/iso 27001 faqs/ • ISO 27017 – ISO 27017 provides guidance on the information security aspects of cloud computing recommending the implementation of cloud specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards This code of practi ce provides additional information security controls and implementation guidance specific to cloud service providers For more information or to download the AWS ISO 27017 certification refer to https://awsamazoncom/compliance/iso 27017 faqs/ • ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementation guidance on I SO 27002 controls applicable to public cloud Personally Identifiable Information (PII) It also provides a set of additional controls and associated guidance intended 
to address public cloud PII protection requirements which is not addressed by the existi ng ISO 27002 control set For more information or to download the AWS ISO 27018 certification refer to https://awsamazoncom/compliance/iso 27018 faqs/ • ISO 9001 – ISO 9001 outlines a pro cessoriented approach to documenting and reviewing the structure responsibilities and procedures required to achieve effective quality management within an organization The key to the ongoing certification under this standard is establishing maintaini ng and improving the organizational structure responsibilities procedures processes and resources in a manner in which AWS products and services consistently satisfy ISO 9001 quality requirements For mor e information or to download the AWS ISO 9001 certification refer to https://awsamazoncom/compliance/iso 9001 faqs/ Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 6 • MTCS Level 3 – Multi Tier Cloud Security (MTCS) is an operational Singapore security management Standard (SPRING SS 584:2013) based on ISO 27001/02 Information Security Management System (ISMS) standards The key to the ongoing three year certification under this standa rd is the effective management of a rigorous security program and annual monitoring by an MTCS Certifying Body (CB) The Information Security Management System (ISMS) required under this standard defines how AWS perpetually manages security in a holistic comprehensive way For more information refer to https://awsamazoncom/compliance/aws multitiered cloud security standard certification/ • Outsourced Service Provider’s Audit Report (OSPAR) – The ABS Guidelines recommend that Singapore banks select outsourced service providers that meet the controls set out in the ABS Guidelines which can be demonstrated through an OSPAR An OSPAR attestation involves an external audit of the service provider’s controls against the criteria specified in the ABS Guidelines For more information refer to https://awsamazoncom/compliance/OSPAR/ • PCI DSS Level 1 – The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PCI Security Standards Council PCI DSS applies to all entities that store process or transmit cardholder data (CHD ) and/or sensitive authentication data (SAD) including merchants processors acquirers issuers and service providers The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council For more informati on or to request the PCI DSS Attestation of Compliance and Responsibility Summary refer to https://awsamazoncom/compliance/pci dsslevel1faqs/ • SOC – AWS Service Organization Con trol (SOC) Reports are independent third party examination reports that demonstrate how AWS achieves key compliance controls and objectives The purpose of these reports is to help customers and their auditors understand the AWS controls established to su pport operations and compliance For more information refer to https://awsamazoncom/compliance/soc faqs/ There are three types of AWS SOC Reports: o SOC 1 – Provides information about the AWS control environment that might be relevant to a customer’s internal controls over financial reporting as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (I COFR) Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 7 o SOC 2 – Provides 
customers and their service users that have a business need with an independent assessment of the AWS control environment that is relevant to system security availability and confidentiality o SOC 3 – Provides customers and their service users that have a business need with an independent assessment of the AWS control environment that is relevant to system security availability and confidentiality without disclosing AWS internal information For more information about the other certifications and attestations from AWS refer to the AWS Compliance Center at https://awsamazoncom/compliance/ For a description of general security controls and service specific security from AWS refer to AWS Overview of Security Processes AWS Artifact Customers can review and download reports and details about more than 2500 secur ity controls by using AWS Artifact the self service audit artifact retrieval portal available in the AWS Management Console The AWS Artifact portal provides on demand access to AWS security and compliance documents including Service Organization Control (SOC) reports Payment Card Industry (PCI) reports the AWS MAS Technology Risk Management Workbook and certifications from accreditation bodies across geographies and compliance verticals AWS Regions The AWS Cloud infrastructure is built around Regions and Availability Zones A Region is a physical location in the world with multiple Availability Zones Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity all housed in separate facilit ies These Availability Zones offer customers the ability to operate production applications and databases which are more highly available fault tolerant and scalable than would be possible from a single data center For additional information on AWS Reg ions and Availability Zones refer to https://awsamazoncom/about aws/global infrastructure/ Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 8 MAS Guidelines on Outsourcing The MAS Guidelines on Outsourcing provide guidance and reco mmendations on prudent risk management practices for outsourcing including the use of cloud services by FIs FIs that use the cloud are expected to carry out due diligence evaluate and address risks and enter into appropriate outsourcing agreements The Guidelines on Outsourcing expressly state that the extent and degree to which an FI implements the specific guidance therein should be commensurate with the nature of risks in and materiality of the outsourcing FIs should also demonstrate their observa nce of the guidelines to MAS through the submission of an outsourcing register to MAS annually or on request A full analysis of the Guidelines on Outsourcing is beyond the scope of this document However the following information includes the considera tions in the Guidelines that AWS most frequently encounters in interactions with Singapore’s FIs Assessment of service providers Section 543 of the Guidelines on Outsourcing includes a partial list of topics that should be evaluated in the course of due diligence when an FI is considering an outsourcing arrangement such as use of the cloud The following table includes considerations for each component of section 543 of the MAS Outsourcing G uidelines Table 1 – Considerations for section 543 of the MAS Outsourcing Guidelines Due diligence requirement AWS response 543 (a) Experience and capability to implement and support the outsourcing arrangement over the contracted period 
Since 2006 AWS has provided flexible scalable and secure IT infrastructure to businesses of all sizes around the world AWS continues to grow and scale which allows us to provide new services that help millions of active customers 543 (b) Financial stren gth and resources The financial statements of Amazoncom Inc include sales and income information from AWS permitting assessment of its financial position and the ability to service its debts and/or liabilities These financial statements are available from the SEC or at the Amazon Investor Relations website Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 9 Due diligence requirement AWS response 543 (c) Corporate governance business reputation and culture compliance and pending or potential litigation AWS has establi shed formal policies and procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality integrity and availability of customers’ systems and content Maintaining customer trust and confidence is of the utmost importance to AWS AWS performs a nearly continuous risk assessment process to identify evaluate and mitigate risks across the company The process involves developing and implementing risk treatment plans to mitigate risks as necessary The AWS risk management team monitors and escalates risks on a nearly continuous basis performing risk assessment s on newly implemented controls at least every six months For additional information see these AWS Audit Reports: SOC 2 PCI DSS ISO 27001 ISO 27017 Amazoncom has a Code of Business Conduct and Ethics available at the Amazon Investor Relations websi te which encompasses considerations such as compliance with laws conflicts of interest bribery discrimination and harassment health and safety recordkeeping and financial integrity Information on legal proceedings can be found within the Amazoncom Inc Form 10 K filing available at the Amazon Investor Relations website or the website of the US Securities and Exchange Commission 543 (d) Security and internal controls audit coverage reporting and monitoring environment AWS management re ‐evaluates the security program at least biannually This process includes risk assessment and implementation of appropriate measures designed to address those risks AWS has established a formal audit program that includes continual independ ent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment To learn more about each of the audit programs leveraged by AWS refer to the AWS Compliance Programs Compliance reports from these assessments are made available through AWS Artifact to customers to enable them to evaluate AWS The AWS Compliance reports identify the scope of AWS services and Regions assessed as well the assessor’s attestation of compliance Customers can also leverage reports and certifications available through AWS Artifact to evaluate vendor s or suppliers ac cording to their requirements Amazon Web Services AWS User Guide to Financial Services Regulati ons & Guidelines in Singapore Page 10 Due diligence requirement AWS response 543 (e) Risk management framework and capabilities including technology risk management and business continuity management in respect of the outsourcing arrangement AWS performs a nearly continuous risk assessment proces s to identify evaluate and mitigate 
risks across the company The process involves developing and implementing risk treatment plans to mitigate risks as necessary AWS monitors and escalates risks on a nearly continuous basis regularly performing risk as sessments on newly implemented controls 543 (f) Disaster recovery arrangements and disaster recovery track record The AWS Business Continuity plan details the process that AWS follows in the case of an outage from detection to deactivation This plan has been developed to recover and reconstitute AWS using a three phased approach: Activation and Notification Phase Recovery Phase and Reconstitution Phase This approach ensures that AWS performs system recovery and reconstitution efforts in a methodica l sequence maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions AWS maintains a ubiquitous security control environment across all Regions Each data center is built to physical environmental and security standards in an active active configuration employing an n+1 redundancy model designed to ensure system availability in the event of component failure Components (N) have at leas t one independent backup component (+1) so the backup component is active in the operation even if all other components are fully functional In order to reduce single points of failure this model is applied throughout AWS including network and data cen ter implementation All data centers are online and serving traffic; no data center is cold In case of failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites Customers are responsible for properly implementing contingency planning training and testing for their systems hosted on AWS AWS provides customers with the capability to implement a robust continuity plan including the utilization of frequent server instance back ups data redundancy repl ication and the flexibility to place instances and store data within multiple geographic Regions as well as across multiple Availability Zones within each Region Each Availability Zone is designed as an independent failure zone In the case of failure automated processes move customer data traffic away from the affected area This means that Availability Zones are typically physically separated within a metropolitan region and are in different flood plains Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 11 Due diligence requirement AWS response Customers use AWS to enable faster disaster rec overy of their critical IT systems without incurring the infrastructure expense of a second physical site The AWS Cloud supports many popular disaster recovery (DR) architectures from pilot light environments that are ready to scale up at a moment’s noti ce to hot standby environments that enable rapid failover 543 (g) Reliance on and success in dealing with subcontractors AWS has a program in place for selecting vendors and periodically evaluating vendor performance and compliance with contractual obligations AWS implements policies and controls to monitor access to resources that process or store customer content Vendors and third parties with restricted access that engage in business with Amazon are subject to confidentiality commitments as part of their agreements with Amazon To monitor subcontractor access year round refer to https://awsamazoncom/compliance/third party access/ 543 (h) Insurance coverage Amazon's Memora ndum of Insurance is available on the 
Amazon Investor Relations website 543 (i) External environment (such as the political economic social and legal environment of the jurisdiction in which the service provider operates); 543 (j) Ability to comply with applicable laws and regulations and track record in relation to its compliance with applicable laws and regulations AWS complies with applicable federal state and local laws stat utes ordinances and regulations concerning security privacy and data protection of AWS services which helps to minimize the risk of accidental or unauthorized access or disclosure of customer content AWS formally tracks and monitors its regulatory an d contractual agreements and obligations AWS has performed and maintains the following activities: • Identified applicable laws and regulations for each of the jurisdictions in which AWS operates • Documented and maintains all statutory regulatory and contractual requirements relevant to AWS Cloud computing The updated MAS Guidelines on Outsourcing include a chapter on cloud computing MAS notes that cloud services can potentially offer many advantages including the following: • Economie s of scale • Costsavings Amazon Web Services AWS U ser Guide to Financial Services Regulations & Guidelines in Singapore Page 12 • Access to quality system administration • Operations that adhere to uniform security standards and best practices • Flexibility and agility for institutions to scale up or pare down on computing resources quickly as usage requirements change • Enha nce s ystem resilience during location specific disasters or disruptions MAS also clarified that it considers cloud computing a form of outsourcing and that the types of risks arising from using the cloud to FIs are not distinct from th ose of other forms o f outsourcing arrangements FIs are still expected to perform the necessary due diligence and apply sound governance and risk management practices in a similar manner that the FI would for any other outsourcing arrangement Section 6 of the Guidelines on Outsourcing outlines a partial list of specific risks that should be evaluated and addressed by an FI that uses cloud services The following table includes considerations relevant to each risk mentioned in paragraph 67 of the Guidelines Table 2 — Consi derations relevant to paragraph 67 of the Outsourcing Guidelines Risk area AWS controls Data access confidentiality and integrity AWS gives customers ownership and control over their customer content by design through simple but powerful tools that allow customers to determine where to store their customer content secure their customer content in transit or at rest and manage acce ss to AWS services and resources for their users AWS implements responsible and sophisticated technical and physical controls designed to prevent unauthorized access to or disclosure of customer content AWS seeks to maintain data integrity through all ph ases including transmission storage and processing AWS treats all customer data and associated assets as highly confidential AWS services are content agnostic which means that they offer the same high level of security to all customers regardless of the type of content being stored AWS is vigilant about customers’ security and ha s implemented sophisticated technical and physical measures against unauthorized access AWS has no insight as to what type of content the customer chooses to store in AWS and the customer retains complete control of how they choose to classify their content where it is stored and how it is used and 
protected from disclosure Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 13 Risk area AWS controls Customer provided data is validated for integrity and corrupted or tampered data is not written to storage Amazon Simple Storage Service (Amazon S3) uses checksums internally to confirm the continued integrity of data in transit within the system and at rest Amazon S3 provides a facility for customers to sen d checksums with the data transmitted to the service The service validates the checksum upon receipt of the data to determine that no corruption occurred in transit Regardless of whether a checksum is sent with an object to Amazon S3 the service uses checksums internally to confirm the continued integrity of data in transit within the system and at rest When disk corruption or device failure is detected the system automatically attempts to restore normal levels of object storage redundancy External ac cess to data stored in Amazon S3 is logged and the logs are retained for at least 90 days including relevant access request information such as the data accessor IP address object and operation For more information see the following AWS Audit Reports : SOC 1 SOC 2 PCI DSS ISO 27001 ISO 27017 Sovereignty AWS customers choose the physical Region in which their data and servers are located AWS does not move customers’ content from the selected Regions without notifying the customer unless required to comply with the law or a binding order of a governmental body For more information refer to Using AWS in the context of Singapore Privacy Considerations Recoverability The Amazon infrastructure has a high level of availability and provides customers the features to deploy a resilient IT architecture AWS has designed its systems to tolerate system or hardware failures with minimal customer impact AWS prov ides customers with the flexibility to place instances and store data within multiple geographic Regions as well as across multiple Availability Zones within each Region Each Availability Zone is designed as an independent failure zone This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower risk flood plains (specific flood zone categorization varies by Region) In addition to discrete noninterruptable power supply (UPS) and onsite back up generation facilities they are each fed through different grids from independent utilities to further reduce single points of failure Availability Zones are all redundantly connected to multiple tier 1 transit providers Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 14 Risk area AWS controls Regulatory compliance AWS for mally tracks and monitors its regulatory and contractual agreements and obligations To do so AWS has performed and maintains the following activities: • Identifi ed applicable laws and regulations for each of the jurisdictions in which AWS operates • Document ed and maintains all statutory regulatory and contractual requirements relevant to AWS • Categorized records into types with details of retention periods and type of storage media through the Data Classification Policy • Informed and train ed personnel (emplo yees contractors third party users) that must be made aware of compliance policies to protect sensitive AWS information ( such as intellectual property rights and AWS records) through the Data Handling Policy • Monitors the use of AWS facilities for unauthorized 
activitie s with a process in place to enforce appropriate disciplinary action AWS maintains relationships with outside parties to monitor business and regulatory requirements Should a new security directive be issued AWS has documented plans in place to implement that directive with in designated time frames For more information see the following AWS Audit Reports: SOC 1 SOC 2 PCI DSS ISO 27001 ISO 27017 Auditing Enabling our customers to protect the confidentiality integrity and availability of systems and content is of the utmost importance to AWS as is maintaining customer trust and confidence To make sure these standards are met AWS has established a forma l audit program to validate the implementation and effectiveness of the AWS control environment The AWS audit program includes internal audits and thirdparty accreditation audits The objective of these audits is to evaluate the operating effectiveness o f the AWS control environment Internal audits are planned and performed periodically Audits by thirdparty accreditation are conducted to review the continued performance of AWS against standards based criteria and to identify general improvement opport unities Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapor e Page 15 Risk area AWS controls Compliance reports from these assessments are made available to customers to enable them to evaluate AWS The AWS Compliance reports identify the scope of AWS services and Regions assessed as well as the assessor’s attestation of compliance Customers can also leverage reports and certifications available through AWS Artifact to evaluate vendors or suppliers according to their requirements Some of our key audit programs and certifications are described in the AWS compliance programs section of this document For a full list of audits certifications and attestations refer to the AWS Compliance Center Segregation of customer data Customer environments are logically segregated to prevent users and customers from accessing resources not assigned to them Customers maintain full control over who has access to their data Services which provide virtualized operational environments to customers (for example EC2) are designed to ensure that customers are segregated from one another and prevent cross tenant privilege escalation and information disclosure via hypervisors and instance isolation Customers can also use Amazon Virtual Private Cloud (V PC) which gives them complete control over their virtual networking environment including resource placement connectivity and security The first step is to create your VPC Then you can add resources to it such as Amazon Elastic Compute Cloud (EC2) an d Amazon Relational Database Service (RDS) instances Finally you can define how your VPCs communicate with each other across accounts Availability Zones (AZs) or Regions Outsourcing agreements Section 55 of the Guidelines on Outsourcing clarifies th at contractual terms and conditions governing the use of the cloud should be defined in written agreements MAS expects such agreements to address at the least the scope of the outsourcing arrangement; performance operational internal control and risk management standards; confidentiality and security; business continuity management; monitoring and control; audit and inspection; notification of adverse developments; dispute resolution; default termination and early exit; sub contracting; and applicabl e laws AWS customers have the option to enroll in an Enterprise Agreement with AWS 
Enterprise Agreements give customers the option to tailor agreements that best suit their needs AWS also provides an introductory guide to help Singapore’s FIs assess Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 16 the AWS Enterprise Agreement against the Guidelines on Outsourcing For more information about AWS Enterprise Agreements contact your AWS representative Audit and inspection The Guidelines on Outsourcing clarify that a n FI’s outsourcing arrangements should not interfere with the ability of the FI to effectively manage its business activities or impede MAS in carrying out its supervisory functions and objectives Customers retain ownership and control of their content when they use AWS services and do not cede that ownership and control of their content to AWS Customers have complete control over which services they use and whom they allow to access their content and services including what credentials are required Customers control how they configure their environments and secure their content including whether they encrypt their content (at rest and in transit) and what other security features and tools they use and how they use them AWS does not change customer configuration settings because these settings are determined and controlled by the customer AWS customers have the complete freedom to design their security architecture to meet their compliance needs This is a key difference from traditional hosting solutions where the provider decides on the architecture AWS enables and empowers the customer to decide when and how security measures are implemented in the cloud in accordance with each customer ’s business needs For example if a higher availability architecture is required to protect customer content the customer can add redundant systems backups locations and network uplinks to create a more resilient highavailability architecture If restricted access to customer content is required AWS enables the customer to implement access rights management controls both on a systems level and through encryption on a d ata level For more information refer to Using AWS in the Context of Singapore Privacy Considerations The Guidelines on Outsourcing also require FIs to have access to audit reports and findings made on service providers whether produced by the service provider’s or its subcontractors’ internal or external auditors or by agents appointed by the service provider and its sub contractor in relation to the outsourcing agreement Customers can validate the security controls in place within the AWS environment through AWS certifications and reports including the AWS Service Organization Control (SOC) 1 2 and 3 reports ISO 27001 27017 and 27018 certifications and PCI DSS Amazon Web Services AWS User Guide to Financial Servi ces Regulations & Guidelines in Singapore Page 17 complianc e reports These reports and certifications are produced by independent thirdparty auditors and attest to the design and operating effectiveness of AWS security controls For more information about how AWS approach es audit s and inspection s and how these requirements may be addressed in an Enterprise Agreement with AWS contact your AWS representative MAS Technology Risk Management Guidelines The MAS Technology Risk Management (TRM) Guidelines define risk management principles and best practice standard s to guide FIs in the following: • Establishing a sound and robust technology risk management framework • Strengthening system security 
reliability resiliency and recoverability • Deploying strong authentication to protect customer data transactions and systems AWS has produced a MAS TRM Guidelines Workbook that maps AWS security and compliance controls ( OF the cloud) and best practice guidance provided by the AWS WellArchitected Framewo rk (IN the cloud) to the requirements within the MAS TRM Guidelines Where applicable under the AWS Shared Responsibility Model the workbook provides supporting details and references to assist FIs when they adapt the MAS TRM Guidelines for their workloads on AWS The WellArchitected Framework helps you un derstand the pros and cons of decisions you make while building systems on AWS By using the framework you learn architectural best practices for designing and operating reliable secure efficient and costeffective systems in the cloud It provides a w ay for you to consistently measure your architectures against best practices and identify areas for improvement The process for reviewing an architecture is a constructive conversation about architectural decisions and is not an audit mechanism AWS believes that having well architected systems greatly increases the likelihood of business success AWS Solutions Architects have years of experience architecting solutions across a wide variety of business verticals and use cases They have helped design and review thousands of customers’ architectures on AWS From this experience they have identified best practices and core strategies for architecting systems in the cloud Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 18 The AWS Well Architected Framework documents a set of foundational questions that allow you to understand whether a specific architecture aligns well with cloud best practices The Framework provides a consistent approach to evaluating systems against the qualities you expect from modern cloud based systems and the remediation that would be required to achieve those qualities As AWS continues to evolve and continue s to learn more from working with customers the definition of well architected will continue to be refined The Framework is intended for those in technology roles such as chief technology officers (CTOs) architects developers and operations team members It describes AWS best practices and strategies to use when designing and operating a cloud workload and provides link s to further implementation details and architectural patterns For more information refer to the AWS Well Architected page The following table excerpt shows an example of the response from AWS to guideline 915 in the TRM Guidelines: Table 3 — Response from AWS to guideline 915 in the TRM Guidelines Requirement Responsibility AWS supporting information Additional information 915 Multi factor authentication should be implemented for users with access to sensitive system functions to safeguard the systems and data from unauthori zed access AWS AWS Control Objective: Governance and Risk Management Shared Responsibility Model Security and compliance is a shared respon sibility between AWS and the customer AWS is responsible for the security and compliance 'of' the cloud and implements security controls to secure the underlying infrastructure that runs the AWS services and hosts and connects customer resources AWS cus tomers are responsible for security 'in' the cloud and should determine design and implement the security controls needed based on their security and compliance needs and AWS services they select 
The customer responsibility will be determined by the AWS services that a customer selects AWS provides customers with best practices on how to secure their resources within the AWS service's documentation at http://docsawsamazoncom/ Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 19 Requirement Responsibility AWS supporting information Additional information AWS AWS Control Objective: Identity and Access Management Administrative Access Amazon personnel with a business need to access the management plane are required to first use multi factor authentication to gain access to purpose built administration hosts These administrative hosts are systems that are specifically designed built configured and hardened to protect the management plane; such access is logged and audited When an employee no longer has a business need to access the management plane the p rivileges and access to these hosts and relevant systems are revoked 915 Multi factor authentication should be implemented for users with access to sensitive system functions to safeguard the systems and data from unauthori zed access Customer WellArchitected Question/Best Practice: SEC2 How do you manage authentication for people and machines? Use strong sign in mechanisms Enforce minimum password length and educate users to avoid common or re used passwords Enforce multi factor aut hentication (MFA) with software or hardware mechanisms to provide an additional layer FIs can create an AWS account at AWS Artifact and get a copy of the AWS MAS TRM Workbook from the AWS Artifact portal after logging in FIs should review responses from AWS in the AWS MAS TRM Workbook and enrich them with the FI’s own company wide cont rols For example section 3 of the MAS TRM Guidelines discusses the oversight of technology risk by the board of directors and senior management This is a principle that is likely to apply company wide is not specific to cloud or particular applications and can only be addressed by the FI The AWS MAS TRM Workbook also positions FIs to more clearly consider whether and how Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 20 to add extra or supplementary technology risk controls that are specific to line of businesses or application teams or the FI’s par ticular needs Notice 655 on Cyber Hygiene The Notice 655 on Cyber Hygiene applies to all banks in Singapore It sets out cyber security requirements on securing administrative accounts applying security patching establishing baseline security standards deploying network security devices implementing anti malware measures and strengthening user authentication AWS has produced the AWS Workbook for MAS Notice 655 on Cyber Hygiene that maps AWS security and compliance controls ( OF the cloud) and best prac tice guidance provided by the AWS Well Architected Framework (IN the cloud) to the requirements within the Notice 655 Where applicable under the AWS Shared Responsibility Model the workbook provides supporting details and references to assist FIs when they adapt the Notice 655 on Cyber Hygiene for their workloads on AWS The following table excerpt shows an example of the response from AWS to Cyber Hygiene Pr actice 43 in the Notice 655 for Cyber hygiene : Table 4 — Response from AWS to Cyber Hygiene Practice 43 Requirement Responsibility AWS supporting information Additional information 43 Security Standards IV Cyber Hygiene Practices AWS AWS Control Objective: Governance and Risk 
Management Baseline Requirements AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality integrity and availability of customers’ systems and content Maintaining customer trust and confidence is of the utmost importance to AWS AWS complies with applicable federal state and local laws statutes ordinances and regulations concerning securit y privacy and data protection of AWS services which helps to minimize the risk of accidental or unauthorized access or disclosure of customer content Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 21 Requirement Responsibility AWS supporting information Additional information 43 Security Standards IV Cyber Hygiene Practices AWS AWS Control Objective: Governance and Risk Management Security Control Framework AWS has developed and implemented a security control environment designed to protect the confidentiality integri ty and availability of customers’ systems and content AWS maintains a broad range of industry and geography specific compliance programs and is continually assessed by external certifying bodies and independent auditors to provide assurance the policies processes and controls established and operated by AWS are in alignment with these program standards and the highest open standards 43 Security Standards IV Cyber Hygiene Practices AWS AWS Control Objective: Governance and Risk Management Shar ed Responsibility Model Security and compliance is a shared responsibility between AWS and the customer AWS is responsible for the security and compliance 'of' the cloud and implements security controls to secure the underlying infrastructure that runs t he AWS services and hosts and connects customer resources AWS customers are responsible for security 'in' the cloud and should determine design and implement the security controls needed based on their security and compliance needs and AWS services they select The customer responsibility will be determined by the AWS services that a customer selects AWS provides customers with best practices on how to secure their resources within the AWS service's documentation at http://docsawsamazoncom/ AWS customers are responsible for all scanning penetration testing file integrity monitoring and intrusion detection for their Amazon EC2 and Amazon ECS instances and applications Refer to http://awsamazoncom/security/penetration testing for terms of service regarding penetration testing Penetration tests should include customer IP addresses and not AWS endpoints Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 22 Requirement Responsibility AWS supporting information Additional information AWS endpoints are tested as part of AWS compliance vulnerability scans Table 5 — Guidance provided by the AWS Well Architected Framework to the Cyber Hygiene Practice 43 Requirement Responsibility AWS Supporting information Additional information Learn more 43 Security Standards IV Cyber Hygiene Practices Customer WellArchitected Question / Best Practice: OPS 3 How do you reduce defects ease remediation and improve flow into production? 
Share design standards Share best practices across teams to increase awareness and maximize the benefits of development efforts Learn more 43 Security Standards IV Cyber Hygiene Practices Customer WellArchitected Question / Best Practice: SEC7 How do you protect your compute resources? Automate configuration management Enforce and validate secure configurations automatically by using a configuration management service or tool to reduce human error Learn more 43 Security Standards IV Cyber Hygiene Practices Customer WellArchitected Question / Best Practice: SEC6 How do you protect your networks? Automate configuration management Enforce and validate secure configurations automatically by using a configuration management service or tool to reduce human error Learn more FIs can create an AWS account at AWS Artifact and get a copy of the AWS Workbook for MAS Notice 655 on Cyber Hygiene from the AWS Artifact portal after logging in FIs should review responses fr om AWS in the AWS Workbook for MAS Notice 655 on Cyber Hygiene and enrich them with the FI’s own company wide controls Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 23 ABS Cloud Computing Implementation Guide 20 The Association of Banks in Singapore (ABS) has also published an implementation guide for banks that are entering into cloud outsourcing arrangements The ABS Cloud Computing Implementation Guide 20 includes recommendations that were discussed and agreed by members of the ABS Standing Committee for Cyber Security and are intended to assist banks in further understanding approaches to due diligence vendor management and key controls that should be implemented in cloud outsourcing arrangements Imp ortantly while the MAS Guidelines on Outsourcing and Technology Risk Management Guidelines are issued by the relevant regulator and provide guidance for a broad class of financial institutions the ABS Cloud Computing Implementation Guide 20 comprises a series of practical recommendations from the banking industry body Key controls The ABS Cloud Computing Implementation Guide recommends that a number of key controls be implemented when entering into a cloud outsourcing arrangement AWS has produced the AWS Workbook for ABS Cloud Computing Implementation Guide 20 that maps AWS security and compliance controls ( OF the cloud) and best practice guidance provided by the AWS Well Architect ed Framework (IN the cloud) to the requirements within the guide Where applicable under the AWS Shared Responsibility Model the workbook provides supporting details and references to assist FIs when they adapt the guide for their workloads on AWS The f ollowing table excerpt shows an example of the response from AWS to controls in section 4 C) Run the Cloud 1 Change Management – Considerations/Standard Workloads of the ABS Cloud Computing Implementation Guide 20 : Amazon Web Services AWS User Guide to Fin ancial Services Regulations & Guidelines in Singapore Page 24 Table 6 — Response from AWS to controls in section 4 C1 Requirement Responsibility AWS supporting information Additi onal information Learn more 1 Change management procedures should be mutually agreed between the CSP and the FI Such procedures should be formali zed and include change request and approval procedures as well as a reporting component Considerations for Standard Workloads AWS AWS Control Objective: Governance and Risk Management Shared Responsibility Model Security and compliance is a shared responsibility between AWS and the 
customer AWS is responsible for the security and complia nce 'of' the cloud and implements security controls to secure the underlying infrastructure that runs the AWS services and hosts and connects customer resources AWS customers are responsible for security 'in' the cloud and should determine design and im plement the security controls needed based on their security and compliance needs and AWS services they select The customer responsibility will be determined by the AWS services that a customer selects AWS provides customers with best practices on how to secure their resources within the AWS service's documentation at http://docsawsamazoncom/ AWS customers are responsible for all scanning penetration testing file integrity monitoring and intrusion detection for their Amazon EC2 and Amazon ECS insta nces and applications Refer to http://awsamazoncom/security/pen etration testing for terms of service regarding penetration testing Penetration tests should include customer IP addresses and not AWS endpoints AWS endpoints are tested as part of AWS com pliance vulnerability scans n/a Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 25 Requirement Responsibility AWS supporting information Additi onal information Learn more 1 Change management procedures should be mutually agreed between the CSP and the FI Such procedures should be formali zed and include change request and approval procedures as well as a reporting component Considerations for Standard Workloads AWS AWS Control Objective: OSPAR The Association of Banks in Singapore (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines) recommend that Singapore banks select out sourced service providers that meet the controls set out in the ABS Guidelines which can be demonstrated through an OSPAR Amazon Web Services (AWS) achieved the Outsourced Service Provider’s Audit Report (OSPAR) attestation An OSPAR attestation involve s an external audit of the service provider’s controls against the criteria specified in the ABS Guidelines The audit report can be downloaded on AWS Artifact n/ a Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 26 Requirement Responsibility AWS supporting information Additi onal information Learn more 1 Change management procedures should be mutually agreed between the CSP and the FI Such procedures should be formali zed and include change request and approval procedures as well as a reporting component Considerations for Standard Workloads Customer WellArchitected Question/ Best Practice: REL5 How do you implement change? 
Deploy changes in a planned manner Deployments and patching follow a documented process Learn more FIs can create an AWS account at AWS Artifact and get a copy of the AWS Workbook for ABS Cloud Computing Implementation Guide 20 from the AWS Artifact portal after logging in FIs shoul d review responses from the AWS Workbook for ABS Cloud Computing Implementation Guide 20 and enrich them with the FI’s own company wide controls Next steps Each organization’s cloud adoption journey is unique To successfully complete cloud adoption FIs need to understand their organization’s current state the target state and the transition required to achieve the target state Knowing this will help FIs set goals and create work streams that will enable a successful move to the cloud The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey Guidance and best practices prescri bed in the Framework can help FIs build a comprehensive Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 27 approach to cloud computing across their organization throughout their IT lifecycle The AWS CAF breaks down the complicated process of planning into manageable areas of focus Many organizations choose to apply the AWS CAF methodology with a facilitator led workshop To find more about such workshops contact your AWS representative Alternatively AWS provides access to tools and reso urces for self service application of the AWS CAF methodology at AWS Cloud Adoption Framework For FIs regulated by the Monetary Authority of Singapore (MAS) next steps typically also inclu de the following: • Contact your AWS representative to discuss how the AWS Partner Network as well as AWS solution architects Professional Services teams and training instructors can assist with your cloud adoption journey If you do not have an AWS repre sentative contact AWS • Obtain and review a copy of the latest AWS SOC 1 and 2 reports PCI DSS Attestation of Compliance and Responsibility Summary and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console) • Consider the relevance and application of the CIS Amazon Web Services Foundations as appropriate for your cloud journey and use cases These industry accepted best practices published by the Center for Internet Security go beyond the high level security guidance already available providing AWS users with clear step bystep implementation and assessment recommendations • Dive deeper on other governance and risk manag ement practices as necessary in light of your due diligence and risk assessment using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below • Speak with your AWS representative to learn more about how A WS is helping financial services customers migrate their critical workloads to the cloud • Review a copy of the AWS MAS TRM Workbook Notice 655 on Cyber Hygiene Workbook and ABS Cloud Computing Implementation Guide 20 Workbook from the AWS Artifact portal (accessible through the AWS Management Console) FIs should populate the workbook with additional controls that they have implemented or will implement Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 28 • Update and maintain your register of outsourcing arrangements as appropriate for submission to MAS at least annually or upon request Conclusion Providing highly 
secure and resilient infrastructure and services to customers is a top priority for AWS The AWS commitment to customers is focused on working to continuously earn customer trust and ensure custo mers maintain confidence in operating their workloads securely on AWS To achieve this AWS has integrated risk and compliance mechanisms that include: • The implementation of a wide array of security controls and automated tools • Nearly c ontinuous monitoring and assessment of security controls to help ensure AWS operational effectiveness and strict adherence to compliance regimes In addition AWS regularly undergoes independent third party audits to provide assurance that the control activities are operating as intended These audits along with the many certifications AWS has obtained provide an additional level of validation of the AWS control environment that benefit s customers Taken together with customer managed security controls these efforts allow AW S to securely innovate on behalf of customers and help customers improve their security posture when building on AWS Additional resources Set out below are additional resources to help financial institut ions think about security compliance and designing a secure and resilient AWS environment • AWS Compliance Quick Reference Guide — AWS has many compliance enabl ing features that you can use for your regulated workloads in the AWS Cloud These features can allow you to achieve a higher level of security at scale Cloud based compliance offers a lower cost of entry easier operations and improved agility by provid ing more oversight security control and central automation Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 29 • AWS Well Architected Framework — The Well Architected Framework has been developed to help cloud architects build the most s ecure high performing resilient and efficient infrastructure possible for their applications This framework provides a consistent approach for customers and partners to evaluate architectures and provides guidance to help implement designs that will scale application needs over time The Well Architected Framework consists of five pillars: Operational Excellence; Security; Reliability; Performance Efficiency; Cost Optimization • Global Financial Services Regulatory Principles — AWS has identified five common principles related to financial services regulation that customers should consider when using AWS Cloud services and specifically when applying the share d responsibility model to their regulatory requirements Customers can access a whitepaper on t hese principles under a nondisclosure agreement at AWS Artifact • NIST Cybersecurity Framework (CSF) — The AWS whitepaper NIST Cybersecurity Framework (CSF): Aligning to the NIST CSF in the AWS Cloud demonstrates how public and commercial sector organizations can assess the AWS environment against the NIST CSF and improve the security measures they implement and operate (security in the cloud) The whitepaper also provides a third party auditor letter attesting to the AWS Cloud offering’s conformance to NIST CSF risk management p ractices (security of the cloud) FIs can leverage NIST CSF and AWS resources to elevate their risk management frameworks • Using AWS in the Context of Common Privacy and Data Protection Considerations — This document provides information to assist customers who want to use AWS to store or process content containing personal data in the context of comm on privacy and data protection 
considerations It will help customers understand : o The way AWS services operate including how customers can address security and encrypt their content o The geographic locations where customers can choose to store content ; and other relevant considerations o The respective roles the customer and AWS each play in managing and securing content stored on AWS services Amazon Web Services AWS User Guide to Financial Services Regulations & Guidelines in Singapore Page 30 Contributors Contributors to this document include: • Bella Khabbaz Senior Corporate Counsel Amazon Web Services • Alvin Li Sr Security Strategist Amazon Web Services • Brandon Lim Principal FS Security Amazon Web Services • Daniel Wu Principal Public Policy Amazon Web Services • Genevieve Ding Public Policy Head SG & ASEAN Amazon Web Services • Melissa Yoong Public Policy Manager SG Amazon Web Services Document revisions Date Description January 3 2022 Third publication Updated MAS TRMG ABS Cloud Computing Implementation Guidelines 20 to reflect the updated AWS TRMG Workbook and new ABS Cloud Computing Implementation Guide 20 May 2019 Second publication Updated MAS TRM section to reflect the security in the cloud guidance provided by AWS Well Architected and the associated enhanced MAS TRM Guidance Workbook July 2017 First publication
|
General
|
consultant
|
Best Practices
|
AWS_User_Guide_to_Financial_Services_Regulations_and_Guidelines_in_Hong_Kong
|
AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines April 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview 1 Security and the Shared Responsibility Model 1 Security IN the Cloud 2 Security OF the Cloud 3 AWS Complian ce Assurance Programs 4 AWS Artifact 6 AWS Regions 6 Hong Kong Insurance Authority Guideline on Outsourcing (GL14) 6 Prior Notification of Material Outsourcing 7 Outsourcing Policy 7 Outsourcing Agreement 9 Information Confidentiality 9 Monitoring and Control 12 Contingenc y Planning 13 Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8) 14 Next Steps 20 Additional Resources 21 Document Revisions 22 About this Guide This document provides information to assist Authorized Insurers (AIs) in Hong Kong regulated by the Hong Kong Insurance Authority (IA) as they accelerate their use of Amazon Web Services’ (AWS) Cloud services Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 1 Overview The Hong Kong Insurance Authority (IA) issues guidelines to provide the Hong Kong insurance industry with practical guidance to facilitate compliance with regulatory requirements The guideli nes relevant to the use of outsourced services instruct Authorized Insurers (AIs) to perform materiality assessments risk assessments perform due diligence reviews of service providers ensure controls are in place to preserve information confidentiality have sufficient monitoring and control oversight on the outsourcing arrangement and establish contingency arrangements The following sections provide considerations for AIs as they assess their responsibilities with regards to the following guidelines: • Guideline on Outsourcing (GL14) – This guideline sets out the IA’s supervisory approach to outsourcing and the major points that the IA recommends AIs to address when outsourcing their activities including the use of cloud services • Guideline on the Use of Internet for Insurance Activities (GL8) – This guideline outlines the specific points that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet based insurance activities For a full list of the IA guidelines see the Guidelines section of Legislative and Regulatory Framework on the IA website Security and the Shared Responsibility Model Cloud se curity is a shared responsibility At AWS we maintain a high bar for security OF the cloud through robust governance automation and testing and validates our approach through compliance with global and regional regulatory requirements and best practices Security IN the cloud is the responsibility of the customer What this means is that customers retain 
control of the security program they choose to implement to protect their own content platform applications systems and networks Customers shoul d carefully consider how they will manage the services they choose as their responsibilities vary depending on the services they use the integration of those services into their IT environments and applicable laws and regulations We recommend that cus tomers think about their security responsibilities on a service by service basis because the extent of their responsibilities may differ between services Amazon Web Services AWS User Guide to the Hong Kong In surance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 2 Figure 1 – Shared Responsibility Model Security IN the Cloud Customers are responsible for their security in the cloud For services such as Elastic Compute Cloud (EC2) the customer is responsible for managing the guest operating system (including installing updates and security patches) and other associated application softwa re as well as the configuration of the AWS provided security group firewall Customers can also use managed services such as databases directory and web application firewall services which provide customers the resources they need to perform specific tasks without having to launch and maintain virtual machines For example a customer can launch an Amazon Aurora database which Amazon Relational Database Service (RDS) manages to handle tasks such as provisioning patching backup recovery failure d etection and repair It is important to note that when using AWS services customers maintain control over their content and are responsible for managing critical content security requirements including: • The content that they choose to store on AWS • The AWS services that are used with the content • The country where their content is stored • The format and structure of their content and whether it is masked anonymized or encrypted Amazon Web Services AWS User Guide to the Hong Kong Insurance Auth ority on Outsourcing and Use of Internet for Insurance Activities Guidelines 3 • How their content is encrypted and where the keys are stored • Who has acce ss to their content and how those access rights are granted managed and revoked Because customers rather than AWS control these important factors customers retain responsibility for their choices Customers are responsible for the security of the content they put on AWS or that they connect to their AWS infrastructure such as the guest operating system applications on their compute instances and content stored and processed in AWS storage platforms databases or other services Security OF the Cloud For many services such as EC2 AWS operates manages and controls the IT components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate In order to provide assuranc e about security of the AWS Cloud we continuously audit our environment AWS infrastructure and services are validated against multiple compliance standards and industry certifications across geographies and industries Customers can use the AWS complia nce certifications to validate the implementation and effectiveness of AWS security controls including internationally recognized security best practices and certifications The AWS compliance program is based on the following actions: • Validate that AWS s ervices and facilities across the globe maintain a ubiquitous control environment that is operating effectively The 
AWS control environment encompasses the people processes and technology necessary to establish and maintain an environment that supports t he operating effectiveness of the AWS control framework AWS has integrated applicable cloud specific controls identified by leading cloud computing industry bodies into the AWS control framework AWS monitors these industry groups to identify leading prac tices that can be implemented and to better assist customers with managing their control environment • Demonstrate the AWS compliance posture to help customers verify compliance with industry and government requirements AWS engages with external certifyi ng bodies and independent auditors to provide customers with information regarding the policies processes and controls established and operated by AWS Customers can use this information to perform their control evaluation and verification procedures as required under the applicable compliance standard Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Out sourcing and Use of Internet for Insurance Activities Guidelines 4 • Monitor that AWS maintains compliance with global standards and best practices through the use of thousands of security control requirements AWS Compliance Assurance Programs In order to help customers establish operate and leverage the AWS security control environment AWS has developed a security assurance program that uses global privacy and data protection best practices These security protections and control processes are independently validated by multiple third party independent assessments The following are of particular importance to Hong Kong AIs: ISO 27001 – ISO 27001 is a security management standard that specifies security management best practices and comprehensive security controls foll owing the ISO 27002 best practice guidance The basis of this certification is the development and implementation of a rigorous security program which includes the development and implementation of an Information Security Management System that defines h ow AWS perpetually manages security in a holistic comprehensive manner For more information or to download the AWS ISO 27001 certification see the ISO 27001 Compliance webpage ISO 27017 – ISO 27017 provides guidance on the information security aspects of cloud computing recommending the implementation of cloud specific information security controls that supplement the guidance of the ISO 27002 and ISO 27001 standards This code of prac tice provides additional security controls implementation guidance specific to cloud service providers For more information or to download the AWS ISO 27017 certification see the ISO 27017 Compliance webpage ISO 27018 – ISO 27018 is a code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementati on guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII) It also provides a set of additional controls and associated guidance intended to address public cloud PII protection requirements not addressed by the existing ISO 27002 control set For more information or to download the AWS ISO 27018 certification see the ISO 27018 Compliance webpage ISO 9001 ISO 9001 outlines a process oriented a pproach to documenting and reviewing the structure responsibilities and procedures required to achieve effective quality management within an organization The key to ongoing certification under this standard is 
establishing maintaining and improving the organizational structure responsibilities procedures processes and resources in a manner where AWS Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 5 products and services consistently satisfy ISO 9001 quality requirements For more information or to download the AWS ISO 9001 certification see th e ISO 9001 Compliance webpage PCI DSS Level 1 The Payment Card Industry Data Security Standard (also known as PCI DSS) is a proprietary information security standard administered by the PC I Security Standards Council PCI DSS applies to all entities that store process or transmit cardholder data (CHD) and/or sensitive authentication data (SAD) including merchants processors acquirers issuers and service providers The PCI DSS is manda ted by the card brands and administered by the Payment Card Industry Security Standards Council For more information or to request the PCI DSS Attestation of Compliance and Responsibility Summary see the PCI DSS Compliance webpage SOC – AWS System & Organization Controls (SOC) Reports are independent third party a udit reports that demonstrate how AWS achieves key compliance controls and objectives The purpose of these reports is to help customers and their auditors understand the AWS controls established to support operations and compliance For more information see the SOC Compliance webpage There are three types of AWS SOC Reports: • SOC 1 : Provides information about the AWS control environment that may be relevant to a customer’s internal controls over financial reporting as well as information for assessment and opinion of the effectiveness of internal controls over financial reporting (ICOFR) • SOC 2 : Provides customers and their service users with a business need with an independent assessment of the AWS control environment relevant to system security availability and confidentiality • SOC 3 : Provides customers and their service users with a business need with an independent assessment of the AWS control environment relevant to system security availability and confidentiality without disclosing AWS internal information By tying together governance focused audit friendly service features with such certifications attestations and audit standards AWS Compliance enablers build on traditional programs helping customers to establish and operate in an AWS security control environment For more information about other AWS certifications and attestations see AWS Compliance Programs Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Inte rnet for Insurance Activities Guidelines 6 AWS Artifact Customers can review and download reports and details about more than 2600 security controls by using AWS Artifact the automated compliance reporting tool available in the AWS Management Console The AWS Artifact portal provides on demand access to AWS’s security and compliance documents including SOC reports PCI repo rts and certifications from accreditation bodies across geographies and compliance verticals AWS Regions The AWS Cloud infrastructure is built around AWS Regions and Availability Zones An AWS Region is a physical location in the world that is made up o f multiple Availability Zones Availability Zones consist of one or more discrete data centers that are housed in separate facilities each with redundant power networking and connectivity These Availability Zones offer customers the ability 
to operat e production applications and databases at higher availability fault tolerance and scalability than would be possible from a single data center For current information on AWS Regions and Availability Zones see https://awsamazoncom/about aws/global infrastructure/ Hong Kong Insurance Authority Guideline on Outsourcing (GL14) The Hong Kong Insurance Authority Guideline on Outsourcing (GL14) provides guidance and recommendations on prudent risk management practices for outsourcing including the use of cloud services by AIs AIs that use cloud services are expected to carry out due diligence evaluate and address risks and enter into appropriate outsourcing agreements Section 5 of the GL14 states that the AI’s materiality and risk assessments should include considerations such as a determination of the importance and criticality of the services to be outs ourced and the impact on the AI’s risk profile (in respect to financial operational legal and reputational risks and potential losses to customers) if the outsourced service is disrupted or falls short of acceptable standards AIs should be able to de monstrate their observance of the guidelines as required by the IA A full analysis of the GL14 is beyond the scope of this document However the following sections address the considerations in the GL14 that most frequently arise in interactions with AIs Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Ins urance Activities Guidelines 7 Prior Notification of Material Outsourcing Under Section 61 of the GL14 an AI is required to notify the IA when the AI is planning to enter into a new material outsourcing arrangement or significantly vary an existing one The notification includes the following requirements: • Unless otherwise justifiable by the AI the notification should be made at least 3 months before the day on which the new outsourcing arrangement is proposed to be entered into or the existing arrangement is proposed to be varied significantly • A detailed description of the proposed outsourcing arrangement to be entered into or the significant proposed change • Sufficient information to satisfy the IA that the AI has taken into account and properly addressed all of the essential iss ues set out in Section 5 of the GL14 Outsourcing Policy Section 58 of the GL14 sets out a list of factors that should be evaluated in the context of service provider due diligence when an AI is considering an outsourcing arrangement including the use of cloud services The following table includes considerations for each component of Section 58 Due Diligence Requirement Customer Considerations (a) reputation experience and quality of service Since 2006 AWS has provided flexible scalable and secure IT infrastructure to businesses of all sizes around the world AWS continues to grow and scale allowing us to provide new services that help millions of active customers (b) financial soundness in particular the ability to continue to provide t he expected level of service The financial statements of Amazoncom Inc include AWS’s sales and income permitting assessment of its financial position and ability to service its debts and/or liabilities These financial statements are available from the SEC or at Amazon’s Investor Relations website Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activ ities Guidelines 8 Due Diligence Requirement Customer Considerations (c) managerial skills 
technical and operational expertise and competence in particular the ability to deal with disruptions in business continuity AWS management has developed a strategic business plan which includes risk identification and the implementation of controls to mitigate or manage risks AWS management re ‐ evaluates the strategic business plan at least biannually This process requires ma nagement to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks The AWS Cloud operates a global infrastructure with multiple Availability Zones within multiple geographic AWS Regions around the world For more information see AWS Global Infrastructure AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality integrity and availability of customers’ systems and data Maintaining customer trust and confidence is of the utmost i mportance to AWS AWS performs a continuous risk assessment process to identify evaluate and mitigate risks across the company The process involves developing and implementing risk treatment plans to mitigate risks as necessary The AWS risk management team monitors and escalates risks on a continuous basis performing risk assessments on newly implemented controls at least every six months (d) any license registration permission or authorization required by law to perform the outsourced service While Hong Kong does not have specific licensing or certification requirements for operating cloud services AWS has multiple attestations for secure and compliant operation of its services Globally these include certification to ISO 27017 (guidelines for in formation security controls applicable to the provision and use of cloud services) and ISO 27018 (code of practice for protection of personally identifiable information (PII) in public clouds) For more information about our assurance programs see AWS Assurance Programs (e) extent of reliance on sub contractors and effectiveness in monitoring the work of sub contractors AWS creates and maintains written agreements with third parties (for example contractors or vendors) in accordance with the work or service to be provided and implements appropriate relationship management mechanisms in line with their relationship to the business Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidel ines 9 Due Diligence Requirement Customer Considerations (f) compatibility with the insurer’s corporate culture and future development strategies AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that the quality and security requirements are met with each release The AWS strategy for the design and development of services is to clearly define services in terms of customer use cases service performance marketing and distribution requirements production and testing and legal and regulatory requirements (g) familiarity with the insurance industry and capacity to keep pace with innovation in the market For a list of case studies from financial services customers that have deployed applications on the AWS Cloud see Financial Services Customer Stories For a list of financial services cloud solutions provided by AWS see Financial Services Cloud Solutions The AWS Cloud pla tform expands 
daily For a list of the latest AWS Cloud services and news see What's New with AWS Outsourcing Agreement An outsourcing agreement should be undertaken in the form of a legally binding written agreement Section 510 of the Guideline on Outsourcing (GL14) clarifies the matters that an AI should consider when entering into an outsourcing arrangement with a service provider including performance standards certain reporting or notification requirem ents and contingency plans AWS cust omers may have the option to enroll in an Enterprise Agreement with AWS Enterprise Agreements give customers the option to tailor agreements that best suit your organization’s needs For more information about AWS Ent erprise Agreements contact your AWS representative Information Confidentiality Under Sections 512 513 and 514 of the Guideline on Outsourcing (GL14) AIs need to ensure that the outsourcing arrangements comply with relevant laws and statutory requir ements on customer confidentiality The following table includes considerations for Sections 512 513 and 514 Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 10 Requirement Customer Considerations 512 The insurer should ensure that it and the service provider have proper safeguards in place to protect the integrity and confidentiality of the insurer’s information and customer data Data Protection – You choose how your data is secured AWS offers you strong encryption for your data in transit or at rest and AWS provides you with the option to m anage your own encryption keys If you want to tokenize data before it leaves your organization you can achieve this through a number of AWS partners that provide this Data Integrity – For access and system monitoring AWS Config provides you with an AWS resource inventory configuration history and configuration change notifications to enable security and governance Config rules enable you to create rules that automatically check the configuration of AWS resources recorded by AWS Config When your reso urces are created updated or deleted AWS Config streams these configuration changes to Amazon Simple Notification Service (Amazon SNS) which notifies you of all configuration changes AWS Config represents relationships between resources so that you c an assess how a change to one resource might impact other resources Data Segregation – Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own IP address range creation of subnets and configuration of route tables and network gateways Access Rights – AWS provides a number of ways for you to identify users and securely access your AWS Account A complete list of credentials supported by AWS can be found in the AWS Management Console by choosing your user name in the navigation bar and then choosing My Security Credentials AWS also pro vides additional security options that enable you to further protect your AWS Account and control access using the following: AWS Identity and Access Management (IAM) key management and rotation temporary security credentials and multi factor authentica tion (MFA) Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 11 Requirement Customer Considerations 
513 An authorized insurer should take into account any legal or contractual obligation to notify customers of the outsourcing arrangement and circumstances under which their data may be disclosed or lost In the event of the termination of th e outsourcing agreement the insurer should ensure that all customer data are either retrieved from the service provider or destroyed AWS provides you with the ability to delete your data Because you retain control and ownership of your data it is your responsibility to manage data retention to your own requirements If you decide to leave AWS you can manage access to your data and AWS services and resources including the ability to import and export data AWS provides services such as AWS Import/Expo rt to transfer large amounts of data into and out of AWS using physical storage appliances For more information see Cloud Storage with AWS Additionally AWS offers AWS Database Migration Service a web service that you can use to migrate a database from an AWS service to an on premises database In alignment with ISO 27001 standards when a storage device has reached the end of its useful life AWS procedures include a decommissioning process that is designed to prevent your organization’s data from being exposed to unauthorized individuals AWS uses the techniques detailed in DoD 522022 M (“National Industrial Security Program Operating Manual “) or NIST 800 88 (“Guidelines for Media Sanitization” ) to destroy data as part of the decommissioning process If a hardware device is unable to be decommissioned using these procedures the device will be degaussed or physically destroyed in accordance with industry standard practices For more information see ISO 27001 standards Annex A domain 8 AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard For additional details see AWS Cloud Security Also see the Section 73 of the Customer Agreement which is available at AWS Customer Agreement Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 12 Requirement Customer Considerations 514 An authorized insurer should notify the IA forthwith of any unauthorized access or breach of confidentiality by the service provider or its sub contractor that affects the insurer or its customers AWS employees are trained on how to recognize suspected security incidents and where to report them When appropriate incidents are reported to relevant authorities AWS maintains the AWS security bulletin webpage located at https://awsamazoncom/security/security bulletins to notify customers of security and privacy events affecting AWS services Customers can subscribe to the Secu rity Bulletin RSS Feed to keep abreast of security announcements on the Security Bulletin webpage The customer support team maintains a Service Health Dashboard webpage located athttp://statusawsamazoncom/ to alert customers to any broadly impacting availability issues Customers are responsible for their security in the cloud It is important to note that when using AWS services customers maintain control over their content and are responsible for managing critical content security requirements inc luding who has access to their content and how those access rights are granted managed and revoked AWS customers should consider implementation of the following best practices to protect against and detect security breaches: • Use encryption to secure cus tomer data • 
Configure the AWS services to keep customer data secure AWS provides customers with information on how to secure their resources within the AWS service's documentation at http://docsawsamazoncom/ • Implement least privilege permissions for a ccess to your resources and customer data • Use monitoring tools like AWS CloudWatch to track when customer data is accessed and by whom Monitoring and Control Under Section 515 of the Guideline on Outsourcing (GL14) AIs should ensure that they have suf ficient and appropriate resources to monitor and control outsourcing arrangements at all times Section 516 further sets out that once an AI implements an outsourcing arrangement it should regularly review the effectiveness and adequacy of its controls i n monitoring the performance of the service provider AWS has implemented a formal documented incident response policy and program this can be reviewed in the SOC 2 report via AWS Artifact You can also see security Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 13 notifications on the AWS Security Bulletins website AWS provides you with various tools you can use to monitor your services including those already noted and others you can find on the AWS Marketplace Contingency Planning Under Sections 517 and 518 of the Guideline on Outsourcing (GL14) if an AI chooses to outsource service to a service provider they should put in place a contingency plan to ensure that the AI’s busine ss won’t be disrupted as a result of undesired contingencies of the service provider such as system failures The AI should also ensure that the service provider has its own contingency plan that covers daily operational and systems problems The AI shoul d have an adequate understanding of the service provider's contingency plan and consider the implications for its own contingency planning in the event that the outsourced service is interrupted due to undesired contingencies of the service provider AWS a nd regulated AIs share a common interest in maintaining operational resilience ie the ability to provide continuous service despite disruption Continuity of service especially for critical economic functions is a key prerequisite for financial stabi lity For more information about AWS operational resilience approaches see the AWS whitepaper Amazon Web Services’ Approach to Operational Resilience in the Fin ancial Sector & Beyond The AWS Business Continuity plan details the process that AWS follows in the case of an outage from detection to deactivation This plan has been developed to recover and reconstitute AWS using a three phased approach: Activation and Notification Phase Recovery Phase and Reconstitution Phase This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence maximizing the effectiveness of the recovery and reconstitution efforts and minimi zing system outage time due to errors and omissions For more information see the AWS whitepaper Amazon Web Services: Overview of Security Processes and the SOC 2 re port in the AWS Artifact console AWS provides you with the capability to implement a robust continuity plan including frequent server instance backups data redundancy replication and the flexibility to place instances and store data within multiple geo graphic Regions as well as across multiple Availability Zones within each Region For more information about disaster recovery approaches see Disaster Recovery Amazon Web 
Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 14 Hong Kong Insurance Authority Guid eline on the Use of Internet for Insurance Activities (GL8) The Hong Kong Insurance Authority Guideline on the Use of Internet for Insurance Activities (GL8) aims to draw attentio n to the special considerations that AIs (and other groups regulated by the IA) need to be aware of when engaging in internet based insurance activities Sections 51 items (a) (g) of the Guideline on the Use of Internet for Insurance Activities (GL8) sets out a series of requirements regarding information security confidentiality integrity data protection payment systems security and related concerns for AIs to address when carrying out internet insurance activities AIs should take all pract icable steps to ensure the following: Requirement Customer Considerations (a) a comprehensive set of security policies and measures that keep up with the advancement in internet security technologies shall be in place AWS has established formal policies a nd procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality integrity and availability of your syste ms and data Maintaining customer trust and confidence is of the utmost importance to AWS AWS works to comply with applicable federal state and local laws statutes ordinances and regulations concerning security privacy and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer data Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 15 (b) mechanisms shall be in place to maintain the integrity of data stored in the system hardware whilst in transit and as displayed on the website AWS is designed to protect the confidentiality and integrity of transmitted data through the comparison of a cryptographic hash of data transmitted This is done to help ensure that the message is not corrupted or altered in transit Data that has been alte red or corrupted in transit is immediately rejected AWS provides many methods for you to securely handle your data: AWS enables you to open a secure encrypted channel to AWS servers using HTTPS (TLS/SSL) Amazon S3 provides a mechanism that enables you t o use MD5 checksums to validate that data sent to AWS is bitwise identical to what is received and that data sent by Amazon S3 is identical to what is received by the user When you choose to provide your own keys for encryption and decryption of Amazon S 3 objects (S3 SSE C) Amazon S3 does not store the encryption key that you provide Amazon S3 generates and stores a one way salted HMAC of your encryption key and that salted HMAC value is not logged Connections between your applications and Amazon RDS MySQL DB instances can be encrypted using TLS/SSL Amazon RDS generates a TLS/SSL certificate for each database instance which can be used to establish an encrypted connection using the default MySQL client When an encrypted connection is established dat a transferred between the database instance and your application is encrypted during transfer If you require data to be encrypted while at rest in the database your application must manage the encryption and decryption of data Additionally you can set up controls to have your 
database instances only accept encrypted connections for specific user accounts Data is encrypted with 256 bit keys when you enable AWS KMS to encrypt Amazon S3 objects Amazon EBS volumes Amazon RDS DB Instances Amazon Redshift Data Blocks AWS CloudTrail log files Amazon SES messages Amazon Workspaces volumes Amazon WorkMail messages and Amazon EMR S3 storage AWS offers you the ability to add an additional layer of security to data at rest in the cloud providing scalable and efficient encryption features Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 16 Requirement Customer Considerations This includes: • Data encryption capabilities available in AWS storage and database services such as Amazon EBS Amazon S3 Amazon Glacier Amazon RDS for Oracle Database Amazon RDS for SQL Server and Amazon Redshift • Flexible key management options including AWS Key Management Service (AWS KMS) that allow you to choose whether to have AWS manage the encryption keys or enable you to keep complete control over your keys • Dedicated hardware based cryptographi c key storage using AWS CloudHSM which enables you to satisfy compliance requirements In addition AWS provides APIs that you can use to integrate encryption and data protection with any of the services you develop or deploy in the AWS Cloud Amazon Web Services AWS User Guide to the Hong Kong Insurance Authority on Outsourcing and Use of Internet for Insurance Activities Guidelines 17 Requirement Customer Considerations (c) approp riate backup procedures for the database and application software shall be implemented AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services Critical AWS system components including audit evidence and logging records are replicated across multiple Availability Zones and backups are maintained and monitored You retain control and ownership of your data When you store data in a specific Region it is not replic ated outside that Region It is your responsibility to replicate data across Regions if your business needs require this capability Amazon S3 supports data replication and versioning instead of automatic backups You can however back up data stored in Amazon S3 to other AWS Regions or to on premises backup systems Amazon S3 replicates each object across all Availability Zones within the respective Region Replication can provide data and service availability in the case of system failure but provides no protection against accidental deletion or data integrity compromise —it replicates changes across all Availability Zones where it stores copies Amazon S3 offers standard redundancy and reduced redundancy options which have different durability objectiv es and price points Each Amazon EBS volume is stored as a file and AWS creates two copies of the EBS volume for redundancy Both copies reside in the same Availability Zone however so while Amazon EBS replication can survive hardware failure it is not suitable as an availability tool for prolonged outages or disaster recovery purposes We recommend that you replicate data at the application level or create backups Amazon EBS provides snapshots that capture the data stored on an Amazon EBS volume at a specific point in time If the volume is corrupt (for example due to system failure) or data from it is deleted you can restore the volume from snapshots Amazon EBS snapshots are AWS objects to which 
IAM users, groups, and roles can be assigned permissions so that only authorized users can access Amazon EBS backups.

(d) A client's personal information (including password, if any) shall be protected against loss, or unauthorized access, use, modification, or disclosure, etc.

You control your data. With AWS, you can do the following:

• Determine where your data is stored, including the type of storage and geographic Region of that storage.
• Choose the secured state of your data. We offer you strong encryption for your content in transit or at rest, and we provide you with the option to manage your own encryption keys.
• Manage access to your data and AWS services and resources through users, groups, permissions, and credentials that you control.

(e) A client's electronic signature, if any, shall be verified.

Amazon Partner Network (APN) Technology Partners provide software solutions (including electronic signature solutions) that are either hosted on or integrated with the AWS Cloud platform. The AWS Partner Solutions Finder provides you with a centralized place to search, discover, and connect with trusted APN Technology and Consulting Partners based on your business needs. For more information, see AWS Partner Solutions Finder.

(f) The electronic payment system (e.g., credit card payment system) shall be secure.

AWS is a Payment Card Industry (PCI) compliant cloud service provider, having been PCI DSS certified since 2010. The most recent assessment validated that AWS successfully completed the PCI Data Security Standard (PCI DSS) 3.2 Level 1 Service Provider assessment and was found to be compliant for all the services outlined on AWS Services in Scope by Compliance Program. The AWS PCI Compliance Package, which is available through AWS Artifact, includes the AWS PCI DSS 3.2 Attestation of Compliance (AOC) and the AWS 2016 PCI DSS 3.2 Responsibility Summary. PCI compliance on AWS is a shared responsibility. In accordance with the shared responsibility model, all entities must manage their own PCI DSS compliance certification. While your organization's QSA can rely on the AWS Attestation of Compliance (AOC) for the portion of the PCI cardholder environment deployed in AWS, you are still required to satisfy all other PCI DSS requirements. The AWS 2016 PCI DSS 3.2 Responsibility Summary provides you with guidance on what you are responsible for. For more information about AWS PCI DSS compliance, see PCI DSS Level 1 Service Provider.

(g) A valid insurance contract shall not be cancelled accidentally, maliciously, or consequent upon careless computer handling.

Your data is validated for integrity, and corrupted or tampered data is not written to storage. Amazon S3 utilizes checksums internally to confirm the continued integrity of content in transit within the system and at rest. Amazon S3 also provides a facility for you to send checksums along with data transmitted to the service. The service validates the checksum upon receipt of the data to determine that no corruption occurred in transit. Regardless of whether a checksum is sent with an object to Amazon S3, the service utilizes checksums internally to confirm the continued integrity of content in transit within the system and at rest. When disk corruption or device failure is detected, the system automatically attempts to restore normal levels of object storage redundancy. External access to content stored in Amazon S3 is logged, and the logs are retained for at least 90 days, including relevant access request information such as the accessor IP address, object, and operation.
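As an illustration of the checksum facility described above, the following is a minimal boto3 sketch that sends an MD5 digest with an upload so that Amazon S3 rejects the object if it is corrupted in transit. The bucket name, object key, and payload are hypothetical, and the snippet assumes appropriate S3 permissions.

```python
import base64
import hashlib

import boto3

s3 = boto3.client("s3")

BUCKET = "example-insurer-policy-records"  # hypothetical bucket name
KEY = "policies/policy-12345.json"         # hypothetical object key

body = b'{"policyId": "12345", "status": "active"}'

# Compute an MD5 digest of the payload and send it as the Content-MD5 header.
# S3 recomputes the digest on receipt and rejects the request (BadDigest)
# if the object was corrupted in transit.
content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode("utf-8")

s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=body,
    ContentMD5=content_md5,
)
```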
Next Steps

Each organization's cloud adoption journey is unique. In order to successfully execute your adoption, you need to understand your organization's current state, the target state, and the transition required to achieve the target state. Knowing this will help you set goals and create work streams that will enable staff to thrive in the cloud.

The AWS Cloud Adoption Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey. Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud computing across your organization, throughout your IT lifecycle. The AWS CAF breaks down the complicated process of planning into manageable areas of focus. Many organizations choose to apply the AWS CAF methodology with a facilitator-led workshop. To find out more about such workshops, please contact your AWS representative. Alternatively, AWS provides access to tools and resources for self-service application of the AWS CAF methodology at AWS Cloud Adoption Framework.

For AIs in Hong Kong, next steps typically also include the following:

• Contact your AWS representative to discuss how the AWS Partner Network, AWS Solution Architects, Professional Services teams, and Training instructors can assist with your cloud adoption journey. If you do not have an AWS representative, contact us at https://aws.amazon.com/contact-us/
• Obtain and review a copy of the latest AWS SOC 1 & 2 reports, PCI DSS Attestation of Compliance and Responsibility Summary, and ISO 27001 certification from the AWS Artifact portal (accessible via the AWS Management Console).
• Consider the relevance and application of the CIS AWS Foundations Benchmark (available here and here), as appropriate for your cloud journey and use cases. These industry-accepted best practices published by the Center for Internet Security go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment recommendations.
• Dive deeper on other governance and risk management practices as necessary, in light of your due diligence and risk assessment, using the tools and resources referenced throughout this whitepaper and in the Additional Resources section below.
• Speak to your AWS representative about an AWS Enterprise Agreement.

Additional Resources

For additional information, see:

• AWS Cloud Security Whitepapers & Guides
• AWS Compliance
• AWS Cloud Security Services
• AWS Best Practices for DDoS Resiliency
• AWS Security Checklist
• Cloud Adoption Framework – Security Perspective
• AWS Security Best Practices
• AWS Risk & Compliance
• Using AWS in the Context of Hong Kong Privacy Considerations
Document Revisions

• April 2020 – Updates to Additional Resources section
• February 2020 – Revision and updates
• October 2017 – First publication
ArchivedCost Optimization Pillar AWS WellArchitected Framework July 2020 This paper has been archived The latest version is now available at: https://docsawsamazoncom/wellarchitected/latest/costoptimizationpillar/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are p rovided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2020 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Cost Optimization 2 Design Principles 2 Definition 3 Practice Cloud Financial Management 3 Functional Ownership 4 Finance and Technology Partnership 5 Cloud Budgets and Forecasts 6 CostAware Processes 7 CostAware Culture 8 Quantify Business Value Delivered Through Cost Optimization 8 Expenditure and Usage Awareness 10 Governance 10 Monitor Cost and Usage 14 Decommission Resources 17 Cost Effective Resources 18 Evaluate Cost When Selecting Services 18 Select the Correct Resource Type Size and Number 21 Select the Best Pricing Model 22 Plan for Data Transfer 28 Manage Demand and Supply Resources 30 Manage Demand 31 Dynamic Supply 31 Optimize Over Time 33 Review and Imp lement New Services 33 Conclusion 35 ArchivedContributors 35 Further Reading 36 Document Revisions 36 ArchivedAbstract This whitepaper focuses on the cost optimization pillar of the Amazon Web Services (AWS) WellArchitected Framework It provides guidance to help customers apply best practices in the design deliv ery and maintenance of AWS environments A cost optimized workload fully utilizes all resources achieves an outcome at the lowest possible price point and meets your functional requirements This whitepaper provides indepth guidance for building capabi lity within your organization designing your workload selecting your services configuring and operating the services and applying cost optimization techniques ArchivedAmazon Web Services Cost Optimization Pillar 1 Introduction The AWS Well Architected Framework helps you understand the decisions you make while building workloads on AWS The Framework provides architectural best practices for designing and operating reliable secure efficient and cost effective workloads in the cloud It demonstrates a way to consistently measure your architectures against best practices and identify areas for improvement We believe that having well architected workloads greatly increases the likelihood of business success The framework is based on five pillars: •Operational Excellence •Security •Reliability •Performance Efficiency •Cost Optimization This paper focuses on the cost optimization pillar and how to architect workloads with the most effective use of services and resources to achieve business outcomes at the lowest price point You’ll learn how to apply the best practices of the cost optimization pillar within your organization Cost optimization can be challenging in traditional on premises solutions because you must predict future c apacity and business needs while 
navigating complex procurement processes Adopting the practices in this paper will help your organization achieve the following goals: •Practice Cloud Financial Management •Expenditure and usage awareness •Cost effective reso urces •Manage demand and supply resources •Optimize over time This paper is intended for those in technology and finance roles such as chief technology officers (CTOs) chief financial officers (CFOs) architects developers financial controllers financia l planners business analysts and operations team ArchivedAmazon Web Services Cost Optimization Pillar 2 members This paper does not provide implementation details or architectural patterns however it does include references to appropriate resources Cost Optimization Cost optimization is a continual proce ss of refinement and improvement over the span of a workload’s lifecycle The practices in this paper help you build and operate cost aware workloads that achieve business outcomes while minimizing costs and allowing your organization to maximize its retur n on investment Design Principles Consider the following design principles for cost optimization: Implement cloud financial management : To achieve financial success and accelerate business value realization in the cloud you must invest in Cloud Financial Management Your organization must dedicate the necessary time and resources for building capability in this new domain of technology a nd usage management Similar to your Security or Operations capability you need to build capability through knowledge building programs resources and processes to help you become a cost efficient organization Adopt a consumption model : Pay only for t he computing resources you consume and increase or decrease usage depending on business requirements For example development and test environments are typically only used for eight hours a day during the work week You can stop these resources when they ’re not in use for a potential cost savings of 75% (40 hours versus 168 hours) Measure overall efficiency : Measure the business output of the workload and the costs associated with delivery Use this data to understand the gains you make from increasing o utput increasing functionality and reducing cost Stop spending money on undifferentiated heavy lifting : AWS does the heavy lifting of data center operations like racking stacking and powering servers It also removes the operational burden of managing operating systems and applications with managed services This allows you to focus on your customers and business projects rather than on IT infrastructure Analyze and attribute expenditure : The cloud makes it easier to accurately identify the cost and u sage of workloads which then allows transparent attribution of IT costs to revenue streams and individual workload owners This helps measure return on ArchivedAmazon Web Services Cost Optimization Pillar 3 investment (ROI) and gives workload owners an opportunity to optimize their resources and reduce costs Definition There are five focus areas for cost optimization in the cloud: • Practice Cloud Financial Management • Expenditure and usage awareness • Costeffective resources • Manage demand and supplying resources • Optimize over time Similar to the other pillars wi thin the Well Architected Framework there are trade offs to consider for cost optimization For example whether to optimize for speed tomarket or for cost In some cases it’s best to optimize for speed —going to market quickly shipping new features o r meeting a deadline —rather 
than investing in upfront cost optimization Design decisions are sometimes directed by haste rather than data and the temptation always exists to overcompensate rather than spend time benchmarking for the most costoptimal de ployment Overcompensation can lead to over provisioned and under optimized deployments However it may be a reasonable choice if you must “lift and shift” resources from your on premises environment to the cloud and then optimize afterwards Investing th e right amount of effort in a cost optimization strategy up front allows you to realize the economic benefits of the cloud more readily by ensuring a consistent adherence to best practices and avoiding unnecessary over provisioning The following sections provide techniques and best practices for the initial and ongoing implementation of Cloud Financial Management and cost optimization for your workloads Practi ce Cloud Financial Management Cloud Financial Management (CFM) enables organizations to realize b usiness value and financial success as they optimize their cost and usage and scale on AWS ArchivedAmazon Web Services Cost Optimization Pillar 4 The following are Cloud Financial Management best practices: • Functional ownership • Finance and technology partnership • Cloud budgets and forecasts • Costaware processes • Costaware culture • Quantif y business value delivered through cost optimization Functional Ownership Establish a cost optimization function: This function is responsible for establishing and maintaining a culture of cost awareness It c an be an existing individual a team within your organization or a new team of key finance technology and organization stakeholders from across the organization The function (individual or team) prioritizes and spends the required percentage of their time on cost management and cost optimization activities For a small organization the function might spend a smaller percentage of time compared to a full time function for a larger enterprise The function require a multi disciplined approach with capabi lities in project management data science financial analysis and software/infrastructure development The function is can improve efficiencies of workloads by executing cost optimizations (centralized approach) influencing technology teams to execute o ptimizations (decentralized) or a combination of both (hybrid) The function may be measured against their ability to execute and deliver against cost optimization goals (for example workload efficiency metrics) You must secure executive sponsorship for this function The sponsor is regarded as champion for cost efficient cloud consumption and provides escalation support for the function to ensure that cost optimization activities are treated with the level of priority defined by the organization Toget her the sponsor and function ensure that your organization consumes the cloud efficiently and continue to deliver business value ArchivedAmazon Web Services Cost Optimization Pillar 5 Finance and Technology Partnership Establish a partnership between finance and technology: Technology teams innovate faster in the cloud due to shortened approval procurement and infrastructure deployment cycles This can be an adjustment for finance organizations previously used to executing time consuming and resource intensive processes for procuring and deploying capital in data center and on premises environments and cost allocation only at project approval Establish a partnership between key finance and technology stakeholders to create a shared 
understanding of organizational goals and develop mechanisms to succeed financially in the variable spend model of cloud computing. Relevant teams within your organization must be involved in cost and usage discussions at all stages of your cloud journey, including:

• Financial leads: CFOs, financial controllers, financial planners, business analysts, procurement, sourcing, and accounts payable must understand the cloud model of consumption, purchasing options, and the monthly invoicing process. Due to the fundamental differences between the cloud (such as the rate of change in usage, pay-as-you-go pricing, tiered pricing, pricing models, and detailed billing and usage information) compared to on-premises operation, it is essential that the finance organization understands how cloud usage can impact business aspects, including procurement processes, incentive tracking, cost allocation, and financial statements.
• Technology leads: Technology leads (including product and application owners) must be aware of the financial requirements (for example, budget constraints) as well as business requirements (for example, service level agreements). This allows the workload to be implemented to achieve the desired goals of the organization.

The partnership of finance and technology provides the following benefits:

• Finance and technology teams have near real-time visibility into cost and usage.
• Finance and technology teams establish a standard operating procedure to handle cloud spend variance.
• Finance stakeholders act as strategic advisors with respect to how capital is used to purchase commitment discounts (for example, Reserved Instances or AWS Savings Plans) and how the cloud is used to grow the organization.
• Existing accounts payable and procurement processes are used with the cloud.
• Finance and technology teams collaborate on forecasting future AWS cost and usage to align and build organizational budgets.
• Better cross-organizational communication through a shared language and common understanding of financial concepts.

Additional stakeholders within your organization that should be involved in cost and usage discussions include:

• Business unit owners: Business unit owners must understand the cloud business model so that they can provide direction to both the business units and the entire company. This cloud knowledge is critical when there is a need to forecast growth and workload usage, and when assessing longer-term purchasing options, such as Reserved Instances or Savings Plans.
• Third parties: If your organization uses third parties (for example, consultants or tools), ensure that they are aligned to your financial goals and can demonstrate both alignment through their engagement models and a return on investment (ROI). Typically, third parties will contribute to reporting and analysis of any workloads that they manage, and they will provide cost analysis of any workloads that they design.

Cloud Budgets and Forecasts

Establish cloud budgets and forecasts: Customers use the cloud for efficiency, speed, and agility, which creates a highly variable amount of cost and usage. Costs can decrease with increases in workload efficiency, or as new workloads and features are deployed. Or, workloads will scale to serve more of your customers, which increases cloud usage and costs. Existing organizational budgeting processes must be modified to incorporate this variability. Adjust existing budgeting and forecasting processes to become more dynamic, using either a trend-based algorithm (using historical costs as inputs), business driver-based algorithms (for example, new product launches or regional expansion), or a combination of both trend and business drivers. You can use AWS Cost Explorer to forecast daily (up to 3 months) or monthly (up to 12 months) cloud costs based on machine learning algorithms applied to your historical costs (trend based).
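As a sketch of the trend-based approach, the following boto3 call asks Cost Explorer to forecast unblended cost over roughly the next three months at monthly granularity. It assumes Cost Explorer is enabled for the account and that the caller has the ce:GetCostForecast permission; the dates are computed at run time.

```python
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")  # Cost Explorer

# Forecast from today out to roughly three months ahead.
start = date.today()
end = start + timedelta(days=90)

response = ce.get_cost_forecast(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)

print("Forecast total:", response["Total"]["Amount"], response["Total"]["Unit"])
for period in response["ForecastResultsByTime"]:
    print(period["TimePeriod"]["Start"], period["MeanValue"])
```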
Cost-Aware Processes

Implement cost awareness in your organizational processes: Cost awareness must be implemented in new and existing organizational processes. It is recommended to reuse and modify existing processes where possible; this minimizes the impact to agility and velocity. The following recommendations will help implement cost awareness in your workload:

• Ensure that change management includes a cost measurement to quantify the financial impact of your changes. This helps proactively address cost-related concerns and highlight cost savings.
• Ensure that cost optimization is a core component of your operating capabilities. For example, you can leverage existing incident management processes to investigate and identify root causes for cost and usage anomalies (cost overages).
• Accelerate cost savings and business value realization through automation or tooling. When thinking about the cost of implementing, frame the conversation to include an ROI component to justify the investment of time or money.
• Extend existing training and development programs to include cost-aware training throughout your organization. It is recommended that this includes continuous training and certification. This will build an organization that is capable of self-managing cost and usage.

Report and notify on cost and usage optimization: You must regularly report on cost and usage optimization within your organization. You can implement dedicated sessions on cost optimization, or include cost optimization in your regular operational reporting cycles for your workloads. AWS Cost Explorer provides dashboards and reports. You can track your progress of cost and usage against configured budgets with AWS Budgets Reports. You can also use Amazon QuickSight with Cost and Usage Report (CUR) data to provide highly customized reporting with more granular data.

Implement notifications on cost and usage to ensure that changes in cost and usage can be acted upon quickly. AWS Budgets allows you to provide notifications against targets. We recommend configuring notifications on both increases and decreases, and in both cost and usage, for workloads.

Monitor cost and usage proactively: It is recommended to monitor cost and usage proactively within your organization, not just when there are exceptions or anomalies. Highly visible dashboards throughout your office or work environment ensure that key people have access to the information they need, and indicate the organization's focus on cost optimization. Visible dashboards enable you to actively promote successful outcomes and implement them throughout your organization.
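To make this kind of proactive monitoring concrete, the following is a minimal boto3 sketch that pulls the last 30 days of unblended cost grouped by service, the sort of data you might feed into a team dashboard. It assumes Cost Explorer is enabled and the caller has the ce:GetCostAndUsage permission.

```python
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")

# Look at the last 30 days of cost, grouped by service.
end = date.today()
start = end - timedelta(days=30)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print the ten services with the highest spend in each period.
for result in response["ResultsByTime"]:
    groups = sorted(
        result["Groups"],
        key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
        reverse=True,
    )
    for group in groups[:10]:
        metric = group["Metrics"]["UnblendedCost"]
        print(f"{group['Keys'][0]}: {float(metric['Amount']):.2f} {metric['Unit']}")
```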
Cost-Aware Culture

Create a cost-aware culture: Implement changes or programs across your organization to create a cost-aware culture. It is recommended to start small, and then, as your capabilities increase and your organization's use of the cloud increases, implement large and wide-ranging programs. A cost-aware culture allows you to scale cost optimization and Cloud Financial Management through best practices that are performed in an organic and decentralized manner across your organization. This creates high levels of capability across your organization with minimal effort, compared to a strict, top-down, centralized approach. Small changes in culture can have large impacts on the efficiency of your current and future workloads. Examples of this include:

• Gamifying cost and usage across your organization. This can be done through a publicly visible dashboard, or a report that compares normalized costs and usage across teams (for example, cost per workload, cost per transaction).
• Recognizing cost efficiency. Reward voluntary or unsolicited cost optimization accomplishments publicly or privately, and learn from mistakes to avoid repeating them in the future.
• Creating top-down organizational requirements for workloads to run at predefined budgets.

Keep up to date with new service releases: You may be able to implement new AWS services and features to increase cost efficiency in your workload. Regularly review the AWS News Blog, the AWS Cost Management blog, and What's New with AWS for information on new service and feature releases.

Quantify Business Value Delivered Through Cost Optimization

Quantify business value from cost optimization: In addition to reporting savings from cost optimization, it is recommended that you quantify the additional value delivered. Cost optimization benefits are typically quantified in terms of lower costs per business outcome. For example, you can quantify On-Demand Amazon Elastic Compute Cloud (Amazon EC2) cost savings when you purchase Savings Plans, which reduce cost and maintain workload output levels. You can quantify cost reductions in AWS spending when idle Amazon EC2 instances are terminated, or unattached Amazon Elastic Block Store (Amazon EBS) volumes are deleted.

Quantifying business value from cost optimization allows you to understand the entire set of benefits to your organization. Because cost optimization is a necessary investment, quantifying business value allows you to explain the return on investment to stakeholders. Quantifying business value can help you gain more buy-in from stakeholders on future cost optimization investments, and it provides a framework to measure the outcomes for your organization's cost optimization activities.

The benefits from cost optimization, however, go above and beyond cost reduction or avoidance. Consider capturing additional data to measure efficiency improvements and business value. Examples of improvement include:

• Executing cost optimization best practices: For example, resource lifecycle management reduces infrastructure and operational costs and creates time and unexpected budget for experimentation. This increases organization agility and uncovers new opportunities for revenue generation.
• Implementing automation: For example, Auto Scaling, which ensures elasticity at minimal effort and increases staff productivity by eliminating manual capacity planning work. For more details on operational resiliency, refer to the Well-Architected Reliability Pillar whitepaper.
• Forecasting future AWS costs: Forecasting enables finance stakeholders to set expectations with other internal and external organization stakeholders, and helps improve your organization's financial predictability. AWS Cost Explorer can be used to perform forecasting for your cost and usage.

Resources

Refer to the following resources to learn more about AWS best practices for budgeting and forecasting cloud spend:

• Reporting your budget metrics with budget
reports • Forecasting with AWS Cost Explorer • AWS Training ArchivedAmazon Web Services Cost Optimization Pillar 10 • AWS Certification • AWS Cloud Management Tools partners Expenditure and Usage Awareness Understanding your organization’s costs and drivers is critical for managing your cost and usage effecti vely and identifying cost reduction opportunities Organizations typically operate multiple workloads run by multiple teams These teams can be in different organization units each with its own revenue stream The capability to attribute resource costs t o the workloads individual organization or product owners drives efficient usage behavior and helps reduce waste Accurate cost and usage monitoring allows you to understand how profitable organization units and products are and allows you to make more informed decisions about where to allocate resources within your organization Awareness of usage at all levels in the organization is key to driving change as change in usage drives changes in cost Consider taking a multi faceted approach to becoming aw are of your usage and expenditures Your team must gather data analyze and then report Key factors to consider include: • Governance • Monitoring cost and usage • Decommissioning Governance In order to manage your costs in the cloud you must manage your usag e through the governance areas below: Develop Organizational Policies: The first step in performing governance is to use your organization’s requirements to develop policies for your cloud usage These policies define how your organization uses the cloud a nd how resources are managed Policies should cover all aspects of resources and workloads that relate to cost or usage including creation modification and decommission over the resource’s lifetime Policies should be simple so that they are easily unde rstood and can be implemented effectively throughout the organization Start with broad high level policies such as which geographic Region usage is allowed in or times of the day that resources should be running Gradually refine the policies for the v arious organizational units and ArchivedAmazon Web Services Cost Optimization Pillar 11 workloads Common policies include which services and features can be used (for example lower performance storage in test/development environments) and which types of resources can be used by different groups (for example the largest size of resource in a development account is medium) Develop goals and targets: Develop c ost and usage goals and targets for your organization Goals provide guidance and direction to your organization on expected outcomes Targets provide sp ecific measurable outcomes to be achieved An example of a goal is: platform usage should increase significantly with only a minor (non linear) increase in cost An example target is: a 20% increase in platform usage with less than a 5% increase in costs Another common goal is that workloads need to be more efficient every 6 months The accompanying target would be that the cost per output of the workload needs to decrease by 5% every 6 months A common goal for cloud workloads is to increase workload ef ficiency which is to decrease the cost per business outcome of the workload over time It is recommended to implement this goal for all workloads and also set a target such as a 5% increase in efficiency every 6 12 months This can be achieved in the clo ud through building capability in cost optimization and through the release of new services and service features Account structure: AWS has a one 
parent-to-many-children account structure that is commonly known as a master (the parent, formerly payer) account and member (the child, formerly linked) accounts. A best practice is to always have at least one master with one member account, regardless of your organization size or usage. All workload resources should reside only within member accounts.

There is no one-size-fits-all answer for how many AWS accounts you should have. Assess your current and future operational and cost models to ensure that the structure of your AWS accounts reflects your organization's goals. Some companies create multiple AWS accounts for business reasons, for example:

• Administrative and/or fiscal and billing isolation is required between organization units, cost centers, or specific workloads.
• AWS service limits are set to be specific to particular workloads.
• There is a requirement for isolation and separation between workloads and resources.

Within AWS Organizations, consolidated billing creates the construct between one or more member accounts and the master account. Member accounts allow you to isolate and distinguish your cost and usage by groups. A common practice is to have separate member accounts for each organization unit (such as finance, marketing, and sales), or for each environment lifecycle (such as development, testing, and production), or for each workload (workload a, b, and c), and then aggregate these linked accounts using consolidated billing.

Consolidated billing allows you to consolidate payment for multiple member AWS accounts under a single master account, while still providing visibility for each linked account's activity. As costs and usage are aggregated in the master account, this allows you to maximize your service volume discounts and maximize the use of your commitment discounts (Savings Plans and Reserved Instances) to achieve the highest discounts. AWS Control Tower can quickly set up and configure multiple AWS accounts, ensuring that governance is aligned with your organization's requirements.

Organizational Groups and Roles: After you develop policies, you can create logical groups and roles of users within your organization. This allows you to assign permissions and control usage. Begin with high-level groupings of people; typically this aligns with organizational units and job roles (for example, systems administrator in the IT department, or financial controller). The groups join people that do similar tasks and need similar access. Roles define what a group must do. For example, a systems administrator in IT requires access to create all resources, but an analytics team member only needs to create analytics resources.

Controls – Notifications: A common first step in implementing cost controls is to set up notifications when cost or usage events occur outside of the policies. This enables you to act quickly and verify whether corrective action is required, without restricting or negatively impacting workloads or new activity. After you know the workload and environment limits, you can enforce governance. In AWS, notifications are conducted with AWS Budgets, which allows you to define a monthly budget for your AWS costs, usage, and commitment discounts (Savings Plans and Reserved Instances). You can create budgets at an aggregate cost level (for example, all costs), or at a more granular level where you include only specific dimensions, such as linked accounts, services, tags, or Availability Zones. You can also attach email notifications to your budgets, which will trigger when current or forecasted costs or usage exceed a defined percentage threshold.
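As a sketch of such a notification, the following boto3 snippet creates a monthly cost budget with an email alert when forecasted spend crosses 80% of the limit. The budget name, amount, and email address are hypothetical placeholders, and the caller needs permission to manage AWS Budgets.

```python
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]
budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "workload-a-monthly-cost",  # hypothetical budget name
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Alert when forecasted cost exceeds 80% of the monthly limit.
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops-team@example.com"}
            ],
        }
    ],
)
```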
Controls – Enforcement: As a second step, you can enforce governance policies in AWS through AWS Identity and Access Management (IAM) and AWS Organizations Service Control Policies (SCP). IAM allows you to securely manage access to AWS services and resources. Using IAM, you can control who can create and manage AWS resources, the type of resources that can be created, and where they can be created. This minimizes the creation of resources that are not required. Use the roles and groups created previously, and assign IAM policies to enforce the correct usage. SCP offers central control over the maximum available permissions for all accounts in your organization, ensuring that your accounts stay within your access control guidelines. SCPs are available only in an organization that has all features enabled, and you can configure the SCPs to either deny or allow actions for member accounts by default. Refer to the Well-Architected Security Pillar whitepaper for more details on implementing access management.

Controls – Service Quotas: Governance can also be implemented through management of Service Quotas. By ensuring Service Quotas are set with minimum overhead and accurately maintained, you can minimize resource creation outside of your organization's requirements. To achieve this, you must understand how quickly your requirements can change, understand projects in progress (both creation and decommission of resources), and factor in how fast quota changes can be implemented. Service Quotas can be used to increase your quotas when required.

AWS Cost Management services are integrated with the AWS Identity and Access Management (IAM) service. You use the IAM service in conjunction with Cost Management services to control access to your financial data and to the AWS tools in the billing console.

Track workload lifecycle: Ensure that you track the entire lifecycle of the workload. This ensures that when workloads or workload components are no longer required, they can be decommissioned or modified. This is especially useful when you release new services or features. The existing workloads and components may appear to be in use, but should be decommissioned to redirect customers to the new service. Notice previous stages of workloads: after a workload is in production, previous environments can be decommissioned or greatly reduced in capacity until they are required again.

AWS provides a number of management and governance services you can use for entity lifecycle tracking. You can use AWS Config or AWS Systems Manager to provide a detailed inventory of your AWS resources and configuration. It is recommended that you integrate with your existing project or asset management systems to keep track of active projects and products within your organization. Combining your current system with the rich set of events and metrics provided by AWS allows you to build a view of significant lifecycle events and proactively manage resources to reduce unnecessary costs. Refer to the Well-Architected Operational Excellence Pillar whitepaper for more details on implementing entity lifecycle tracking.
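For example, the following minimal boto3 sketch lists the EC2 instances that AWS Config has discovered in a Region, which you could reconcile against your project or asset management system to spot resources with no active owner. It assumes an AWS Config recorder is already enabled in the account and Region.

```python
import boto3

config = boto3.client("config")

# List the EC2 instances that AWS Config has recorded in this Region,
# following nextToken until all pages have been read.
resources = []
kwargs = {"resourceType": "AWS::EC2::Instance"}
while True:
    page = config.list_discovered_resources(**kwargs)
    resources.extend(page["resourceIdentifiers"])
    token = page.get("nextToken")
    if not token:
        break
    kwargs["nextToken"] = token

for resource in resources:
    print(resource["resourceType"], resource["resourceId"])
```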
usage and features and the implementation of sufficient mechanisms to align cost and usage to your organization’s objectives The following are required areas for monitoring your cost and usage: Configure detailed data sources: Enable hourly granularity in Cost Explorer and create a Cost and Usage Report (CUR) These data sources provide the most accurate view of cost and usage across your entire organization The CUR provides daily or hourly usage granularity rates costs and usage at tributes for all chargeable AWS services All possible dimensions are in the CUR including: tagging location resource attributes and account IDs Configure your CUR with the following customizations: • Include resource IDs • Automatically refresh the CUR • Hourly granularity • Versioning: Overwrite existing report • Data integration: Athena (Parquet format and compression) Use AWS Glue to prepare the data for analysis and use Amazon Athena to perform data analysis using SQL to query the data You can also use Amazon QuickSight to build custom and complex visualizations and distribute them throughout your organization Identify cost attribution categories: Work with your finance team and other relevant stakeholders to understand the requirements of how costs must be allocated within your organization Workload costs must be allocated throughout the entire lifecycle includin g development testing production and decommissioning Understand how the costs ArchivedAmazon Web Services Cost Optimization Pillar 15 incurred for learning staff development and idea creation are attributed in the organization This can be helpful to correctly allocate accounts used for this purpose to training and development budgets instead of generic IT cost budgets Establish workload metrics: Understand how your workload ’s output is measured against business success Each workload typically has a small set of major outputs that indicate performance If you have a complex workload with many components then you can prioritize the list or define and track metrics for each component Work with your teams to understand which metrics to use This unit will be u sed to understand the efficiency of the workload or the cost for each business output Assign organization meaning to cost and usage: Implement tagging in AWS to add organizati on information to your resources which will then be added to your cost and usage information A tag is a key value pair — the key is defined and must be unique across your organization and the value is unique to a group of resources An example of a key value pair is the key is Environment with a value of Production All resources in the production environment will have this key value pair Tagging allows you categorize and track your costs with meaningful relevant organization information You can app ly tags that represent organization categories (such as cost centers application names projects or owners) and identify workloads and characteristics of workloads (such as test or production) to attribute your costs and usage throughout your organizat ion When you apply tags to your AWS resources (such as EC2 instances or Amazon S3 buckets) and activate the tags AWS adds this information to your Cost and Usage Reports You can run reports and perform analysis on tagged and untagged resources to allow greater compliance with internal cost management policies and ensure accurate attribution Creating and implementing an AWS tagging standard across your organization’s accounts enables you to manage and govern your AWS environments in a 
Use Tag Policies in AWS Organizations to define rules for how tags can be used on AWS resources in your accounts in AWS Organizations. Tag Policies allow you to easily adopt a standardized approach for tagging AWS resources. AWS Tag Editor allows you to add, delete, and manage tags on multiple resources.

AWS Cost Categories allows you to assign organization meaning to your costs without requiring tags on resources. You can map your cost and usage information to unique internal organization structures. You define category rules to map and categorize costs using billing dimensions, such as accounts and tags. This provides another level of management capability in addition to tagging. You can also map specific accounts and tags to multiple projects.

Configure billing and cost optimization tools: To modify usage and adjust costs, each person in your organization must have access to their cost and usage information. It is recommended that all workloads and teams have the following tooling configured when they use the cloud:
• Reports: Summarize all cost and usage information
• Notifications: Provide notifications when cost or usage is outside of defined limits
• Current state: Configure a dashboard showing current levels of cost and usage. The dashboard should be available in a highly visible place within the work environment (similar to an operations dashboard)
• Trending: Provide the capability to show the variability in cost and usage over the required period of time, with the required granularity
• Forecasts: Provide the capability to show estimated future costs
• Tracking: Show the current cost and usage against configured goals or targets
• Analysis: Provide the capability for team members to perform custom and deep analysis down to the hourly granularity, with all possible dimensions

You can use AWS native tooling, such as AWS Cost Explorer, AWS Budgets, and Amazon Athena with Amazon QuickSight, to provide this capability. You can also use third-party tooling; however, you must ensure that the costs of this tooling provide value to your organization.

Allocate costs based on workload metrics: Cost optimization is delivering business outcomes at the lowest price point, which can only be achieved by allocating workload costs by workload metrics (measured by workload efficiency). Monitor the defined workload metrics through log files or other application monitoring. Combine this data with the workload costs, which can be obtained by looking at costs with a specific tag value or account ID. It is recommended to perform this analysis at the hourly level. Your efficiency will typically change if you have some static cost components (for example, a backend database running 24/7) with a varying request rate (for example, usage that peaks from 9am to 5pm with few requests at night). Understanding the relationship between the static and variable costs will help you to focus your optimization activities.
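As a rough sketch of this kind of unit-cost analysis, the snippet below pulls daily cost for one application tag from Cost Explorer and divides it by a business output metric. The tag key and value are hypothetical and must be activated as cost allocation tags, and the daily output counts would come from your own application monitoring.

```python
import boto3

ce = boto3.client("ce")

# Daily cost for everything tagged Application=checkout (hypothetical tag and value).
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-06-01", "End": "2020-07-01"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "Application", "Values": ["checkout"]}},
)

# Business output per day (for example, orders processed), taken from your own
# monitoring -- shown here as a hypothetical, hard-coded sample.
orders_per_day = {"2020-06-01": 12000, "2020-06-02": 11500}

for day in response["ResultsByTime"]:
    date = day["TimePeriod"]["Start"]
    cost = float(day["Total"]["UnblendedCost"]["Amount"])
    if date in orders_per_day:
        print(f"{date}: ${cost / orders_per_day[date]:.4f} per order")
```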
Decommission Resources

After you manage a list of projects, employees, and technology resources over time, you will be able to identify which resources are no longer being used and which projects no longer have an owner.

Track resources over their lifetime: Decommission workload resources that are no longer required. A common example is resources used for testing: after testing has been completed, the resources can be removed. Tracking resources with tags (and running reports on those tags) will help you identify assets for decommission. Using tags is an effective way to track resources, by labeling the resource with its function, or a known date when it can be decommissioned. Reporting can then be run on these tags. An example value for feature tagging is "feature-X testing", which identifies the purpose of the resource in terms of the workload lifecycle.

Implement a decommissioning process: Implement a standardized process across your organization to identify and remove unused resources. The process should define the frequency at which searches are performed and the steps to remove the resource, to ensure that all organization requirements are met.

Decommission resources: The frequency and effort to search for unused resources should reflect the potential savings, so an account with a small cost should be analyzed less frequently than an account with larger costs. Searches and decommission events can be triggered by state changes in the workload, such as a product going end of life or being replaced. Searches and decommission events may also be triggered by external events, such as changes in market conditions or product termination.

Decommission resources automatically: Use automation to reduce or remove the associated costs of the decommissioning process. Designing your workload to perform automated decommissioning will reduce the overall workload costs during its lifetime. You can use AWS Auto Scaling to perform the decommissioning process. You can also implement custom code using the API or SDK to decommission workload resources automatically.

Resources

Refer to the following resources to learn more about AWS best practices for expenditure awareness:
• AWS Tagging Strategies
• Activating User-Defined Cost Allocation Tags
• AWS Billing and Cost Management
• Cost Management Blog
• Multiple Account Billing Strategy
• AWS SDKs and Tools
• Tagging Best Practices
• Well-Architected Labs: Cost Fundamentals
• Well-Architected Labs: Expenditure Awareness

Cost-Effective Resources

Using the appropriate services, resources, and configurations for your workloads is key to cost savings. Consider the following when creating cost-effective resources:
• Evaluate cost when selecting services
• Select the correct resource type, size, and number
• Select the best pricing model
• Plan for data transfer

You can use AWS Solutions Architects, AWS Solutions, AWS Reference Architectures, and APN Partners to help you choose an architecture based on what you have learned.

Evaluate Cost When Selecting Services

Identify organization requirements: When selecting services for your workload, it is key that you understand your organization priorities. Ensure that you have a balance between cost and other Well-Architected pillars, such as performance and reliability. A fully cost-optimized workload is the solution that is most aligned to your organization's requirements, not necessarily the lowest cost. Meet with all teams within your organization, such as product, business, technical, and finance, to collect information.

Analyze all workload components: Perform a thorough analysis on all components in your workload. Ensure that there is a balance between the cost of analysis and the potential savings in the workload over its lifecycle. You must find the current impact, and potential future impact, of the component. For example, if the cost of the proposed resource is $10 a month, and under forecasted loads would not exceed $15 a month, spending a day of effort to reduce costs by 50% ($5 a month) could exceed the potential benefit over the life of the system.
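When comparing candidate resources during this kind of analysis, the AWS Price List API can be queried programmatically. The sketch below assumes the pricing endpoint in us-east-1 and illustrative attribute values, and looks up the On-Demand Linux rate for a single instance type; the returned price list items are JSON strings whose nested structure is parsed defensively, since its exact shape is an assumption here.

```python
import json
import boto3

# The Price List API is available from the us-east-1 endpoint.
pricing = boto3.client("pricing", region_name="us-east-1")

response = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "m5.large"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
)

for item in response["PriceList"]:           # each item is a JSON string
    product = json.loads(item)
    for term in product.get("terms", {}).get("OnDemand", {}).values():
        for dimension in term.get("priceDimensions", {}).values():
            print(product["product"]["attributes"]["instanceType"],
                  dimension["pricePerUnit"].get("USD"), "per", dimension.get("unit"))
```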
Using a faster and more efficient data-based estimation will create the best overall outcome for this component. Workloads can change over time, and the right set of services may not be optimal if the workload architecture or usage changes. Analysis for the selection of services must incorporate current and future workload states and usage levels. Implementing a service for a future workload state or usage level may reduce overall costs by reducing or removing the effort required to make future changes. AWS Cost Explorer and the CUR can analyze the cost of a proof of concept (PoC) or running environment. You can also use the AWS Simple Monthly Calculator or the AWS Pricing Calculator to estimate workload costs.

Managed services: Managed services remove the operational and administrative burden of maintaining a service, which allows you to focus on innovation. Additionally, because managed services operate at cloud scale, they can offer a lower cost per transaction or service. Consider the time savings that will allow your team to focus on retiring technical debt, innovation, and value-adding features. For example, you might need to "lift and shift" your on-premises environment to the cloud as rapidly as possible, and optimize later. It is worth exploring the savings you could realize by using managed services that remove or reduce license costs.

Usually, managed services have attributes that you can set to ensure sufficient capacity. You must set and monitor these attributes so that your excess capacity is kept to a minimum and performance is maximized. You can modify the attributes of AWS managed services using the AWS Management Console or AWS APIs and SDKs to align resource needs with changing demand. For example, you can increase or decrease the number of nodes on an Amazon EMR cluster (or an Amazon Redshift cluster) to scale out or in.

You can also pack multiple instances onto an AWS resource to enable higher density usage. For example, you can provision multiple small databases on a single Amazon Relational Database Service (Amazon RDS) DB instance. As usage grows, you can migrate one of the databases to a dedicated RDS DB instance using a snapshot and restore process.

When provisioning workloads on managed services, you must understand the requirements of adjusting the service capacity. These requirements are typically time, effort, and any impact to normal workload operation. The provisioned resource must allow time for any changes to occur; provision the required overhead to allow this. The ongoing effort required to modify services can be reduced to virtually zero by using APIs and SDKs that are integrated with system and monitoring tools, such as Amazon CloudWatch.

Amazon Relational Database Service (Amazon RDS), Amazon Redshift, and Amazon ElastiCache provide managed database services. Amazon Athena, Amazon EMR, and Amazon Elasticsearch Service provide managed analytics services.

AWS Managed Services (AMS) is a service that operates AWS infrastructure on behalf of enterprise customers and partners. It provides a secure and compliant environment that you can deploy your workloads onto. AMS uses enterprise cloud operating models with automation to allow you to meet your organization requirements, move into the cloud faster, and reduce your ongoing management costs.
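Adjusting managed-service capacity as described above can be scripted. The sketch below shrinks the task instance group of an EMR cluster during a quiet period; the cluster ID and the target node count are placeholders.

```python
import boto3

emr = boto3.client("emr")
cluster_id = "j-EXAMPLECLUSTER"   # placeholder cluster ID

# Find the TASK instance group and reduce its size during a quiet period.
groups = emr.list_instance_groups(ClusterId=cluster_id)["InstanceGroups"]
task_group = next((g for g in groups if g["InstanceGroupType"] == "TASK"), None)

if task_group is not None:
    emr.modify_instance_groups(
        ClusterId=cluster_id,
        InstanceGroups=[{"InstanceGroupId": task_group["Id"], "InstanceCount": 2}],
    )
```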
Serverless or application-level services: You can use serverless or application-level services such as AWS Lambda, Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Email Service (Amazon SES). These services remove the need for you to manage a resource, and provide the functions of code execution, queuing, and message delivery. The other benefit is that they scale in performance and cost in line with usage, allowing efficient cost allocation and attribution. For more information on serverless, refer to the Well-Architected Serverless Applications Lens whitepaper.

Analyze the workload for different usage over time: As AWS releases new services and features, the optimal services for your workload may change. The effort required should reflect the potential benefits. Workload review frequency depends on your organization requirements. If it is a workload of significant cost, implementing new services sooner will maximize cost savings, so more frequent review can be advantageous. Another trigger for review is a change in usage patterns. Significant changes in usage can indicate that alternate services would be more optimal. For example, for higher data transfer rates, a Direct Connect service may be cheaper than a VPN and provide the required connectivity. Predict the potential impact of service changes, so you can monitor for these usage-level triggers and implement the most cost-effective services sooner.

Licensing costs: The cost of software licenses can be eliminated through the use of open source software. This can have a significant impact on workload costs as the size of the workload scales. Measure the benefits of licensed software against the total cost, to ensure that you have the most optimized workload. Model any changes in licensing and how they would impact your workload costs. If a vendor changes the cost of your database license, investigate how that impacts the overall efficiency of your workload. Consider historical pricing announcements from your vendors for trends of licensing changes across their products. Licensing costs may also scale independently of throughput or usage, such as licenses that scale by hardware (CPU-bound licenses). These licenses should be avoided because costs can rapidly increase without corresponding outcomes. You can use AWS License Manager to manage the software licenses in your workload. You can configure licensing rules and enforce the required conditions to help prevent licensing violations, and also reduce costs due to license overages.

Select the Correct Resource Type, Size, and Number

By selecting the best resource type, size, and number of resources, you meet the technical requirements with the lowest-cost resource. Right-sizing activities take into account all of the resources of a workload, all of the attributes of each individual resource, and the effort involved in the right-sizing operation. Right-sizing can be an iterative process, triggered by changes in usage patterns and external factors such as AWS price drops or new AWS resource types. Right-sizing can also be a one-off activity if the cost of the effort to right size outweighs the potential savings over the life of the workload.

In AWS, there are a number of different approaches:
• Perform cost modeling
• Select size based on metrics or data
• Select size automatically (based on metrics)

Cost modeling: Perform cost modeling for your workload and each of its components to understand the balance between resources, and find the correct size for each resource in the workload given a specific level of performance. Perform benchmark activities for the workload under different predicted loads and compare the costs.
The modeling effort should reflect the potential benefit; for example, time spent should be proportional to component cost or predicted savings. For best practices, refer to the Review section of the Performance Efficiency Pillar of the AWS Well-Architected Framework whitepaper.

AWS Compute Optimizer can assist with cost modeling for running workloads. It provides right-sizing recommendations for compute resources based on historical usage. This is the ideal data source for compute resources because it is a free service and it utilizes machine learning to make multiple recommendations depending on levels of risk. You can also use Amazon CloudWatch and CloudWatch Logs with custom logs as data sources for right-sizing operations for other services and workload components.

The following are recommendations for cost modeling data and metrics:
• The monitoring must accurately reflect the end-user experience. Select the correct granularity for the time period, and thoughtfully choose the maximum or 99th percentile instead of the average.
• Select the correct granularity for the time period of analysis that is required to cover any workload cycles. For example, if a two-week analysis is performed, you might be overlooking a monthly cycle of high utilization, which could lead to under-provisioning.

Metrics- or data-based selection: Select resource size or type based on workload and resource characteristics, for example compute, memory, throughput, or write intensive. This selection is typically made using cost modeling, a previous version of the workload (such as an on-premises version), documentation, or other sources of information about the workload (whitepapers, published solutions).

Automatic selection based on metrics: Create a feedback loop within the workload that uses active metrics from the running workload to make changes to that workload. You can use a managed service, such as AWS Auto Scaling, which you configure to perform the right-sizing operations for you. AWS also provides APIs, SDKs, and features that allow resources to be modified with minimal effort. You can program a workload to stop and start an EC2 instance to allow a change of instance size or instance type. This provides the benefits of right-sizing while removing almost all the operational cost required to make the change. Some AWS services have built-in automatic type or size selection, such as S3 Intelligent-Tiering, which automatically moves your data between two access tiers (frequent access and infrequent access) based on your usage patterns.
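A minimal sketch of the stop, resize, and start flow mentioned above is shown below. It assumes an EBS-backed instance (instance store-backed instances cannot be stopped), and the instance ID and target size are placeholders that would come from your cost modeling or Compute Optimizer recommendations.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# Stop the instance and wait until it is fully stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Apply the new size (illustrative value), then start the instance again.
ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "m5.large"})
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```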
Select the Best Pricing Model

Perform workload cost modeling: Consider the requirements of the workload components and understand the potential pricing models. Define the availability requirement of the component. Determine if there are multiple independent resources that perform the function in the workload, and what the workload requirements are over time. Compare the cost of the resources using the default On-Demand pricing model and other applicable models. Factor in any potential changes in resources or workload components.

Perform regular account-level analysis: Performing regular cost modeling ensures that opportunities to optimize across multiple workloads can be implemented. For example, if multiple workloads use On-Demand capacity, at an aggregate level the risk of change is lower, and implementing a commitment-based discount will achieve a lower overall cost. It is recommended to perform analysis in regular cycles of two weeks to one month. This allows you to make small adjustment purchases, so the coverage of your pricing models continues to evolve with your changing workloads and their components. Use the AWS Cost Explorer recommendations tool to find opportunities for commitment discounts. To find opportunities for Spot workloads, use an hourly view of your overall usage and look for regular periods of changing usage or elasticity.

Pricing models: AWS has multiple pricing models that allow you to pay for your resources in the most cost-effective way that suits your organization's needs. The following sections describe each purchasing model:
• On-Demand
• Spot
• Commitment discounts: Savings Plans
• Commitment discounts: Reserved Instances/Capacity
• Geographic selection
• Third-party agreements and pricing

On-Demand: This is the default, pay-as-you-go pricing model. When you use resources (for example, EC2 instances, or services such as DynamoDB on demand), you pay a flat rate, and you have no long-term commitments. You can increase or decrease the capacity of your resources or services based on the demands of your application. On-Demand has an hourly rate but, depending on the service, can be billed in increments of one second (for example, Amazon RDS or Linux EC2 instances). On-Demand is recommended for applications with short-term workloads (for example, a four-month project) that spike periodically, or for unpredictable workloads that can't be interrupted. On-Demand is also suitable for workloads, such as pre-production environments, that require uninterrupted runtimes but do not run long enough for a commitment discount (Savings Plans or Reserved Instances).

Spot: A Spot Instance is spare EC2 compute capacity available at discounts of up to 90% off On-Demand prices, with no long-term commitment required. With Spot Instances, you can significantly reduce the cost of running your applications, or scale your application's compute capacity for the same budget. Unlike On-Demand, Spot Instances can be interrupted with a two-minute warning if EC2 needs the capacity back or the Spot Instance price exceeds your configured price. On average, Spot Instances are interrupted less than 5% of the time. Spot is ideal when there is a queue or buffer in place, or where there are multiple resources working independently to process requests (for example, Hadoop data processing). Typically, these workloads are fault-tolerant, stateless, and flexible, such as batch processing, big data and analytics, containerized environments, and high performance computing (HPC). Non-critical workloads, such as test and development environments, are also candidates for Spot. Spot is also integrated into multiple AWS services, such as EC2 Auto Scaling groups (ASGs), Amazon EMR, Amazon ECS, and AWS Batch.

When a Spot Instance needs to be reclaimed, EC2 sends a two-minute warning via a Spot Instance interruption notice, delivered through CloudWatch Events as well as in the instance metadata. During that two-minute period, your application can use the time to save its state, drain running containers, upload final log files, or remove itself from a load balancer. At the end of the two minutes, you have the option to hibernate, stop, or terminate the Spot Instance.
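One way to react to the interruption notice is to poll the instance metadata from inside the instance. The sketch below assumes IMDSv1 is allowed (with IMDSv2 enforced you would first request a session token), and the clean-up hook is a hypothetical application function.

```python
import time
import urllib.error
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending():
    """Return True if EC2 has scheduled this Spot Instance for interruption."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
            return resp.status == 200    # the body describes the action and its time
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False                     # a 404 means no interruption is scheduled

def checkpoint_and_drain():
    """Hypothetical hook: save state, drain containers, leave the load balancer."""
    pass

while not interruption_pending():
    time.sleep(5)
checkpoint_and_drain()
```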
Consider the following best practices when adopting Spot Instances in your workloads:
• Set your maximum price as the On-Demand rate: This ensures that you will pay the current Spot rate (the cheapest available price) and will never pay more than the On-Demand rate. Current and historical rates are available via the console and API.
• Be flexible across as many instance types as possible: Be flexible in both the family and size of the instance type, to improve the likelihood of fulfilling your target capacity requirements, obtain the lowest possible cost, and minimize the impact of interruptions.
• Be flexible about where your workload will run: Available capacity can vary by Availability Zone. Flexibility improves the likelihood of fulfilling your target capacity by tapping into multiple spare capacity pools, and provides the lowest possible cost.
• Design for continuity: Design your workloads for statelessness and fault tolerance, so that if some of your EC2 capacity is interrupted, it will not impact the availability or performance of the workload.
• We recommend using Spot Instances in combination with On-Demand and Savings Plans/Reserved Instances to maximize workload cost optimization with performance.

Commitment discounts, Savings Plans: AWS provides a number of ways for you to reduce your costs by reserving or committing to use a certain amount of resources and receiving a discounted rate for those resources. A Savings Plan allows you to make an hourly spend commitment for one or three years and receive discounted pricing across your resources. Savings Plans provide discounts for AWS compute services such as EC2, Fargate, and Lambda. When you make the commitment, you pay that commitment amount every hour, and it is subtracted from your On-Demand usage at the discount rate. For example, suppose you commit to $50 an hour and have $150 an hour of On-Demand usage, and your specific usage has a Savings Plans discount rate of 50%. Your $50 commitment then covers $100 of On-Demand usage, so you pay $50 (the commitment) plus $50 for the remaining On-Demand usage.

Compute Savings Plans are the most flexible and provide a discount of up to 66%. They automatically apply across Availability Zones, instance size, instance family, operating system, tenancy, Region, and compute service. EC2 Instance Savings Plans have less flexibility but provide a higher discount rate (up to 72%). They automatically apply across Availability Zones, instance size, operating system, and tenancy within a specified instance family.

There are three payment options:
• No upfront payment: There is no upfront payment; you pay a reduced hourly rate each month for the total hours in the month.
• Partial upfront payment: Provides a higher discount rate than no upfront. Part of the usage is paid up front; you then pay a smaller reduced hourly rate each month for the total hours in the month.
• All upfront payment: Usage for the entire period is paid up front, and no other costs are incurred for the remainder of the term for usage that is covered by the commitment.

You can apply any combination of these three purchasing options across your workloads. Savings Plans apply first to the usage in the account they are purchased in, from the highest discount percentage to the lowest; they then apply to the consolidated usage across all other accounts, again from the highest discount percentage to the lowest. It is recommended to purchase all Savings Plans in an account with no usage or resources, such as the master account. This ensures that the Savings Plans apply to the highest discount rates across all of your usage, maximizing the discount amount.
Workloads and usage typically change over time. It is recommended to continually purchase small amounts of Savings Plans commitment over time. This ensures that you maintain high levels of coverage to maximize your discounts, and that your plans closely match your workload and organization requirements at all times.

Do not set a target coverage in your accounts, due to the variability of discount that is possible. Low coverage does not necessarily indicate high potential savings. You may have low coverage in your account, but if your usage is made up of small instances with a licensed operating system, the potential saving could be as low as a few percent. Instead, track and monitor the potential savings available in the Savings Plans recommendation tool. Frequently review the Savings Plans recommendations in Cost Explorer (perform regular analysis), and continue to purchase commitments until the estimated savings are below the required discount for the organization. For example, track and monitor that your potential discount remains below 20%; if it goes above that, a purchase should be made. Monitor the utilization and coverage, but only to detect changes. Do not aim for a specific utilization or coverage percentage, as these do not necessarily scale with savings. Ensure that a purchase of Savings Plans results in an increase in coverage, and if there are decreases in coverage or utilization, ensure they are quantified and understood. For example, you might migrate a workload resource to a newer instance type, which reduces utilization of an existing plan, but the performance benefit outweighs the reduction in savings.

Commitment discounts, Reserved Instances/Capacity: Similar to Savings Plans, Reserved Instances (RIs) offer discounts of up to 72% for a commitment to running a minimum amount of resources. Reserved Instances are available for Amazon RDS, Amazon Elasticsearch Service, Amazon ElastiCache, Amazon Redshift, and Amazon DynamoDB. Amazon CloudFront and AWS Elemental MediaConvert also provide discounts when you make minimum usage commitments. Reserved Instances are currently available for EC2; however, Savings Plans offer the same discount levels with increased flexibility and no management overhead.

Reserved Instances offer the same pricing options of no upfront, partial upfront, and all upfront, and the same terms of one or three years. Reserved Instances can be purchased in a Region or in a specific Availability Zone; they provide a capacity reservation when purchased in an Availability Zone. EC2 offers Convertible RIs; however, Savings Plans should be used for all EC2 instances due to increased flexibility and reduced operational costs. The same process and metrics should be used to track and make purchases of Reserved Instances. It is recommended not to track coverage of RIs across your accounts. It is also recommended that utilization percentage is not monitored or tracked; instead, view the utilization report in Cost Explorer and use the net savings column in the table. If the net savings is a significantly large negative amount, you must take action to remediate the unused RI.

EC2 Fleet: EC2 Fleet is a feature that allows you to define a target compute capacity and then specify the instance types and the balance of On-Demand and Spot for the fleet. EC2 Fleet automatically launches the lowest-price combination of resources to meet the defined capacity.
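The regular review of commitment-discount recommendations described above can be scripted against the Cost Explorer API. The following is a minimal sketch for Compute Savings Plans; the parameter values are illustrative, and the response fields are read defensively because their exact names are an assumption here.

```python
import boto3

ce = boto3.client("ce")

response = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",        # Compute Savings Plans
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

recommendation = response.get("SavingsPlansPurchaseRecommendation", {})
summary = recommendation.get("SavingsPlansPurchaseRecommendationSummary", {})
print("Hourly commitment to purchase:", summary.get("HourlyCommitmentToPurchase"))
print("Estimated monthly savings:", summary.get("EstimatedMonthlySavingsAmount"))
print("Estimated savings percentage:", summary.get("EstimatedSavingsPercentage"))
```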
Geographic selection: When you architect your solutions, a best practice is to seek to place computing resources closer to users, to provide lower latency and strong data sovereignty. For global audiences, you should use multiple locations to meet these needs. You should also select the geographic location that minimizes your costs.

The AWS Cloud infrastructure is built around Regions and Availability Zones. A Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Each AWS Region operates within local market conditions, and resource pricing is different in each Region. Choose a specific Region to operate a component of, or your entire, solution so that you can run at the lowest possible price globally. You can use the AWS Simple Monthly Calculator to estimate the costs of your workload in various Regions.

Third-party agreements and pricing: When you utilize third-party solutions or services in the cloud, it is important that the pricing structures are aligned to cost optimization outcomes. Pricing should scale with the outcomes and value it provides. An example of this is software that takes a percentage of the savings it provides: the more you save (the outcome), the more it charges. Agreements that scale with your bill are typically not aligned to cost optimization, unless they provide outcomes for every part of your specific bill. For example, a solution that provides recommendations for EC2 and charges a percentage of your entire bill will charge more if you use other services for which it provides no benefit. Another example is a managed service that is charged as a percentage of the cost of the resources that are managed: a larger instance size may not necessarily require more management effort, but will be charged more. Ensure that these service pricing arrangements include a cost optimization program or features in their service to drive efficiency.

Plan for Data Transfer

An advantage of the cloud is that networking is a managed service: there is no longer the need to manage and operate a fleet of switches, routers, and other associated network equipment. Networking resources in the cloud are consumed and paid for in the same way you pay for CPU and storage: you only pay for what you use. Efficient use of networking resources is required for cost optimization in the cloud.

Perform data transfer modeling: Understand where the data transfer occurs in your workload, the cost of the transfer, and its associated benefit. This allows you to make an informed decision to modify or accept the architectural decision. For example, you may have a Multi-Availability-Zone configuration where you replicate data between the Availability Zones. You model the cost of that structure and decide that it is an acceptable cost (similar to paying for compute and storage in both Availability Zones) to achieve the required reliability and resilience. Model the costs over different usage levels: workload usage can change over time, and different services may be more cost-effective at different levels. Use AWS Cost Explorer or the Cost and Usage Report (CUR) to understand and model your data transfer costs. Configure a proof of concept (PoC), or test your workload under a realistic simulated load, so you can model your costs at different workload demands.
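As a starting point for this modeling, the sketch below groups a month of cost by usage type in Cost Explorer and applies a rough client-side filter for data transfer usage types. The string match is a heuristic, and pagination (NextPageToken) is omitted for brevity.

```python
import boto3
from collections import defaultdict

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-06-01", "End": "2020-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

transfer_costs = defaultdict(float)
for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        usage_type = group["Keys"][0]
        if "DataTransfer" in usage_type:     # rough filter for transfer usage types
            transfer_costs[usage_type] += float(group["Metrics"]["UnblendedCost"]["Amount"])

for usage_type, cost in sorted(transfer_costs.items(), key=lambda kv: -kv[1]):
    print(f"{usage_type}: ${cost:.2f}")
```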
Optimize data transfer: Architecting for data transfer ensures that you minimize data transfer costs. This may involve using content delivery networks to locate data closer to users, or using dedicated network links from your premises to AWS. You can also use WAN optimization and application optimization to reduce the amount of data that is transferred between components.

Select services to reduce data transfer costs: Amazon CloudFront is a global content delivery network that delivers data with low latency and high transfer speeds. It caches data at edge locations across the world, which reduces the load on your resources. By using CloudFront, you can reduce the administrative effort in delivering content to large numbers of users globally, with minimum latency. AWS Direct Connect allows you to establish a dedicated network connection to AWS. This can reduce network costs, increase bandwidth, and provide a more consistent network experience than internet-based connections. AWS VPN allows you to establish a secure and private connection between your private network and the AWS global network. It is ideal for small offices or business partners because it provides quick and easy connectivity, and it is a fully managed and elastic service. VPC endpoints allow connectivity between AWS services over private networking, and can be used to reduce public data transfer and NAT gateway costs. Gateway VPC endpoints have no hourly charges and support Amazon S3 and Amazon DynamoDB. Interface VPC endpoints are provided by AWS PrivateLink and have an hourly fee and a per-GB usage cost.

Resources

Refer to the following resources to learn more about AWS best practices for cost-effective resources:
• AWS Managed Services: Enterprise Transformation Journey (video)
• Analyzing Your Costs with Cost Explorer
• Accessing Reserved Instance Recommendations
• Getting Started with Rightsizing Recommendations
• Spot Instances Best Practices
• Spot Fleets
• How Reserved Instances Work
• AWS Global Infrastructure
• Spot Instance Advisor
• Well-Architected Labs: Cost-Effective Resources

Manage Demand and Supply Resources

When you move to the cloud, you pay only for what you need. You can supply resources to match the workload demand at the time they're needed, eliminating the need for costly and wasteful overprovisioning. You can also modify the demand, using a throttle, buffer, or queue to smooth the demand and serve it with fewer resources. The economic benefits of just-in-time supply should be balanced against the need to provision for resource failures, high availability, and provision time. Depending on whether your demand is fixed or variable, plan to create metrics and automation that will ensure that management of your environment is minimal, even as you scale. When modifying the demand, you must know the acceptable and maximum delay that the workload can allow.

In AWS, you can use a number of different approaches for managing demand and supplying resources. The following sections describe how to use these approaches:
• Analyze the workload
• Manage demand
• Demand-based supply
• Time-based supply

Analyze the workload: Know the requirements of the workload. The organization requirements should indicate the workload response times for requests. The response time can be used to determine if the demand is managed, or if the supply of resources will change to meet the demand. The analysis should include the predictability and repeatability of the demand, the rate of change in demand, and the amount of change in demand. Ensure that the analysis is performed over a long enough period to incorporate any seasonal variance, such as end-of-month processing or holiday peaks.
Ensure that the analysis effort reflects the potential benefits of implementing scaling. Look at the expected total cost of the component, and any increases or decreases in usage and cost over the workload lifetime. You can use AWS Cost Explorer or Amazon QuickSight with the CUR or your application logs to perform a visual analysis of workload demand.

Manage Demand

Manage demand with throttling: If the source of the demand has retry capability, then you can implement throttling. Throttling tells the source that if it cannot service the request at the current time, it should try again later. The source will wait for a period of time and then retry the request. Implementing throttling has the advantage of limiting the maximum amount of resources and costs of the workload. In AWS, you can use Amazon API Gateway to implement throttling. Refer to the Well-Architected Reliability Pillar whitepaper for more details on implementing throttling.

Manage demand with a buffer: Similar to throttling, a buffer defers request processing, allowing applications that run at different rates to communicate effectively. A buffer-based approach uses a queue to accept messages (units of work) from producers. Messages are read by consumers and processed, allowing the messages to run at the rate that meets the consumers' business requirements. You don't have to worry about producers having to deal with throttling issues, such as data durability and backpressure (where producers slow down because their consumer is running slowly). In AWS, you can choose from multiple services to implement a buffering approach. Amazon SQS is a managed service that provides queues that allow a single consumer to read individual messages. Amazon Kinesis provides a stream that allows many consumers to read the same messages. When architecting with a buffer-based approach, ensure that you architect your workload to service the request in the required time, and that you are able to handle duplicate requests for work.
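A minimal sketch of the buffer-based pattern with Amazon SQS is shown below. The queue name is a placeholder and the message handler is a hypothetical function; note that standard queues can deliver a message more than once, so the handler should be idempotent.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="work-buffer")["QueueUrl"]   # placeholder name

# Producer: enqueue units of work as fast as they arrive.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"job_id": 42}')

def process(body):
    """Hypothetical, idempotent work handler."""
    print("processing", body)

# Consumer: drain the queue at a rate the downstream resources can sustain.
while True:
    messages = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,              # long polling reduces empty receives
    ).get("Messages", [])
    if not messages:
        break
    for message in messages:
        process(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```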
Dynamic Supply

Demand-based supply: Leverage the elasticity of the cloud to supply resources to meet changing demand. Take advantage of APIs or service features to programmatically vary the amount of cloud resources in your architecture dynamically. This allows you to scale components in your architecture, automatically increasing the number of resources during demand spikes to maintain performance, and decreasing capacity when demand subsides to reduce costs.

AWS Auto Scaling helps you adjust your capacity to maintain steady, predictable performance at the lowest possible cost. It is a fully managed, free service that integrates with Amazon EC2 instances and Spot Fleets, Amazon ECS, Amazon DynamoDB, and Amazon Aurora. Auto Scaling provides automatic resource discovery to help find resources in your workload that can be configured; it has built-in scaling strategies to optimize for performance, costs, or a balance between the two; and it provides predictive scaling to assist with regularly occurring spikes. Auto Scaling can implement manual, scheduled, or demand-based scaling. You can also use metrics and alarms from Amazon CloudWatch to trigger scaling events for your workload. Typical metrics are standard Amazon EC2 metrics, such as CPU utilization, network throughput, and ELB observed request/response latency. When possible, you should use a metric that is indicative of customer experience; typically, this is a custom metric that might originate from application code within your workload.

When architecting with a demand-based approach, keep in mind two key considerations. First, understand how quickly you must provision new resources. Second, understand that the size of the margin between supply and demand will shift. You must be ready to cope with the rate of change in demand, and also be ready for resource failures.

Elastic Load Balancing (ELB) helps you to scale by distributing demand across multiple resources. As you implement more resources, you add them to the load balancer to take on the demand. Elastic Load Balancing supports EC2 instances, containers, IP addresses, and Lambda functions.

Time-based supply: A time-based approach aligns resource capacity to demand that is predictable or well defined by time. This approach is typically not dependent upon the utilization levels of the resources. A time-based approach ensures that resources are available at the specific time they are required, and can be provided without any delays due to start-up procedures and system or consistency checks. Using a time-based approach, you can provide additional resources or increase capacity during busy periods.

You can use scheduled Auto Scaling to implement a time-based approach. Workloads can be scheduled to scale out or in at defined times (for example, the start of business hours), ensuring that resources are available when users or demand arrives. You can also leverage the AWS APIs and SDKs and AWS CloudFormation to automatically provision and decommission entire environments as you need them. This approach is well suited for development or test environments that run only during defined business hours or periods of time.

You can use APIs to scale the size of resources within an environment (vertical scaling). For example, you could scale up a production workload by changing the instance size or class. This can be achieved by stopping and starting the instance and selecting a different instance size or class. This technique can also be applied to other resources, such as Amazon EBS Elastic Volumes, which can be modified to increase size, adjust performance (IOPS), or change the volume type while in use.

When architecting with a time-based approach, keep in mind two key considerations. First, how consistent is the usage pattern? Second, what is the impact if the pattern changes?
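A small sketch of scheduled scaling for business hours is shown below; the Auto Scaling group name and capacities are placeholders, and the cron expressions are evaluated in UTC.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of business hours (08:00 UTC, Monday to Friday).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",                 # placeholder group name
    ScheduledActionName="business-hours-scale-out",
    Recurrence="0 8 * * 1-5",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=6,
)

# Scale back in for the evening (20:00 UTC).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="evening-scale-in",
    Recurrence="0 20 * * 1-5",
    MinSize=1,
    MaxSize=12,
    DesiredCapacity=1,
)
```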
You can increase the accuracy of predictions by monitoring your workloads and by using business intelligence. If you see significant changes in the usage pattern, you can adjust the times to ensure that coverage is provided.

Dynamic supply: You can use AWS Auto Scaling, or incorporate scaling in your code with the AWS APIs or SDKs. This reduces your overall workload costs by removing the operational cost of manually making changes to your environment, and it can be performed much faster. This will ensure that the workload resourcing best matches the demand at any time.

Resources

Refer to the following resources to learn more about AWS best practices for managing demand and supplying resources:
• API Gateway Throttling
• Getting Started with Amazon SQS
• Getting Started with Amazon EC2 Auto Scaling

Optimize Over Time

In AWS, you optimize over time by reviewing new services and implementing them in your workload.

Review and Implement New Services

As AWS releases new services and features, it is a best practice to review your existing architectural decisions to ensure that they remain cost-effective. As your requirements change, be aggressive in decommissioning resources, components, and workloads that you no longer require.

Consider the following to help you optimize over time:
• Develop a workload review process
• Review and implement services

Develop a workload review process: To ensure that you always have the most cost-efficient workload, you must regularly review the workload to know if there are opportunities to implement new services, features, and components. To ensure that you achieve overall lower costs, the process must be proportional to the potential amount of savings. For example, workloads that are 50% of your overall spend should be reviewed more regularly, and more thoroughly, than workloads that are 5% of your overall spend. Factor in any external factors or volatility: if the workload services a specific geography or market segment, and change in that area is predicted, more frequent reviews could lead to cost savings. Another factor in review is the effort to implement changes: if there are significant costs in testing and validating changes, reviews should be less frequent.

Factor in the long-term cost of maintaining outdated and legacy components and resources, and the inability to implement new features into them. The current cost of testing and validation may exceed the proposed benefit. However, over time, the cost of making the change may significantly increase as the gap between the workload and the current technologies increases, resulting in even larger costs. For example, the cost of moving to a new programming language may not currently be cost-effective. However, in five years' time, the cost of people skilled in that language may increase and, due to workload growth, you would be moving an even larger system to the new language, requiring even more effort than before.

Break down your workload into components, assign the cost of each component (an estimate is sufficient), and then list the factors (for example, effort and external markets) next to each component. Use these indicators to determine a review frequency for each workload. For example, you may have web servers as high cost, low change effort, and high external factors, resulting in a high frequency of review. A central database may be medium cost, high change effort, and low external factors, resulting in a medium frequency of review.
Review the workload and implement services: To realize the benefits of new AWS services and features, you must execute the review process on your workloads and implement new services and features as required. For example, you might review your workloads and replace the messaging component with Amazon Simple Email Service (Amazon SES). This removes the cost of operating and maintaining a fleet of instances, while providing all the functionality at a reduced cost.

Conclusion

Cost optimization and Cloud Financial Management are an ongoing effort. You should regularly work with your finance and technology teams, review your architectural approach, and update your component selection. AWS strives to help you minimize cost while you build highly resilient, responsive, and adaptive deployments. To truly optimize the cost of your deployment, take advantage of the tools, techniques, and best practices discussed in this paper.

Contributors

Contributors to this document include:
• Philip Fitzsimons, Sr Manager Well-Architected, Amazon Web Services
• Nathan Besh, Cost Lead Well-Architected, Amazon Web Services
• Levon Stepanian, BDM Cloud Financial Management, Amazon Web Services
• Keith Jarrett, Business Development Lead, Cost Optimization, Amazon Web Services
• PT Ng, Commercial Architect, Amazon Web Services
• Arthur Basbaum, Business Development Manager, Amazon Web Services
• Jarman Hauser, Commercial Architect, Amazon Web Services

Further Reading

For additional information, see:
• AWS Well-Architected Framework

Document Revisions

July 2020: Updated to incorporate Cloud Financial Management, new services, and integration with the AWS Well-Architected Tool
July 2018: Updated to reflect changes to AWS and incorporate learnings from reviews with customers
November 2017: Updated to reflect changes to AWS and incorporate learnings from reviews with customers
November 2016: First publication
|
General
|
consultant
|
Best Practices
|
AWS_WellArchitected_Framework__HPC_Lens
|
High Performance Computing Lens
AWS Well-Architected Framework
December 2019

This paper has been archived. The latest version is now available at:
https://docs.aws.amazon.com/wellarchitected/latest/high-performance-computing-lens/welcome.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
• Introduction
• Definitions
• General Design Principles
• Scenarios (Loosely Coupled Scenarios, Tightly Coupled Scenarios, Reference Architectures)
• The Five Pillars of the Well-Architected Framework (Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization)
• Conclusion
• Contributors
• Further Reading
• Document Revisions

Abstract
This document describes the High Performance Computing (HPC) Lens for the AWS Well-Architected Framework. The document covers common HPC scenarios and identifies key elements to ensure that your workloads are architected according to best practices.

Introduction
The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. Use the Framework to learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. The Framework provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. We believe that having well-architected systems greatly increases the likelihood of business success.

In this Lens, we focus on how to design, deploy, and architect your High Performance Computing (HPC) workloads on the AWS Cloud. HPC workloads run exceptionally well in the cloud. The natural ebb and flow and bursting characteristics of HPC workloads make them well suited for pay-as-you-go cloud infrastructure. The ability to fine-tune cloud resources and create cloud-native architectures naturally accelerates the turnaround of HPC workloads.

For brevity, we only cover details from the Well-Architected Framework that are specific to HPC workloads. We recommend that you consider best practices and questions from the AWS Well-Architected Framework whitepaper when designing your architecture. This paper is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. After reading this paper, you will understand AWS best practices and strategies to use when designing and operating HPC in a cloud environment.

Definitions
The AWS Well-Architected Framework is based on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization.
When architecting solutions, you make trade-offs between pillars based upon your business context. These business decisions can drive your engineering priorities. You might reduce cost at the expense of reliability in development environments or, for mission-critical solutions, you might optimize reliability with increased costs. Security and operational excellence are generally not traded off against other pillars.

Throughout this paper, we make the crucial distinction between loosely coupled (sometimes referred to as high throughput computing, or HTC, in the community) and tightly coupled workloads. We also cover server-based and serverless designs. Refer to the Scenarios section for a detailed discussion of these distinctions.

Some vocabulary of the AWS Cloud may differ from common HPC terminology. For example, HPC users may refer to a server as a "node," while AWS refers to a virtual server as an "instance." Where HPC users commonly speak of "jobs," AWS refers to them as "workloads." AWS documentation uses the term "vCPU" synonymously with a "thread" or "hyperthread" (or half of a physical core). Don't miss this factor of 2 when quantifying the performance or cost of an HPC application on AWS.

Cluster placement groups are an AWS method of grouping your compute instances for applications with the highest network requirements. A placement group is not a physical hardware element; it is simply a logical rule keeping all nodes within a low-latency radius of the network.

The AWS Cloud infrastructure is built around Regions and Availability Zones. A Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Depending on the characteristics of your HPC workload, you may want your cluster to span Availability Zones (increasing reliability) or stay within a single Availability Zone (emphasizing low latency).
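As a small illustration of the low-latency, single-Availability-Zone case, the sketch below creates a cluster placement group and launches instances into it; the AMI, instance type, and group name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a cluster placement group so instances land within a low-latency network radius.
ec2.create_placement_group(GroupName="hpc-tightly-coupled", Strategy="cluster")

# Launch a small set of instances into the group (AMI and instance type are placeholders).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-tightly-coupled"},
)
```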
General Design Principles

In traditional computing environments, architectural decisions are often implemented as static, one-time events, sometimes with no major software or hardware upgrades during a computing system's lifetime. As a project and its context evolve, these initial decisions may hinder the system's ability to meet changing business requirements.

It's different in the cloud. A cloud infrastructure can grow as the project grows, allowing for a continuously optimized capability. In the cloud, the capability to automate and test on demand lowers the risk of impact from infrastructure design changes. This allows systems to evolve over time so that projects can take advantage of innovations as a standard practice.

The Well-Architected Framework proposes a set of general design principles to facilitate good design in the cloud with high-performance computing:

• Dynamic architectures: Avoid frozen, static architectures and cost estimates that use a steady-state model. Your architecture must be dynamic: growing and shrinking to match your demands for HPC over time. Match your architecture design and cost analysis explicitly to the natural cycles of HPC activity. For example, a period of intense simulation efforts might be followed by a reduction in demand as the work moves from the design phase to the lab. Or, a long and steady data accumulation phase might be followed by a large-scale analysis and data reduction phase. Unlike many traditional supercomputing centers, the AWS Cloud helps you avoid long queues, lengthy quota applications, and restrictions on customization and software installation. Many HPC endeavors are intrinsically bursty and well matched to the cloud paradigms of elasticity and pay-as-you-go. The elasticity and pay-as-you-go model of AWS eliminates the painful choice between oversubscribed systems (waiting in queues) or idle systems (wasted money). Environments such as compute clusters can be "right sized" for a given need at any given time.

• Align the procurement model to the workload: AWS makes a range of compute procurement models available for the various HPC usage patterns. Selecting the correct model ensures that you are only paying for what you need (see the sketch after this list). For example, a research institute might run the same weather forecast application in different ways:
  o An academic research project investigates the role of a weather variable with a large number of parameter sweeps and ensembles. These simulations are not urgent and cost is a primary concern. They are a great match for Amazon EC2 Spot Instances. Spot Instances let you take advantage of Amazon EC2 unused capacity and are available at up to a 90% discount compared to On-Demand prices.
  o During the wildfire season, up-to-the-minute local wind forecasts ensure the safety of firefighters. Every minute of delay in the simulations decreases their chance of safe evacuation. On-Demand Instances must be used for these simulations to allow for the bursting of analyses and ensure that results are obtained without interruption.
  o Every morning, weather forecasts are run for television broadcasts in the afternoon. Scheduled Reserved Instances can be used to make sure that the needed capacity is available every day at the right time. Use of this pricing model provides a discount compared with On-Demand Instances.

• Start from the data: Before you begin designing your architecture, you must have a clear picture of the data. Consider data origin, size, velocity, and updates. A holistic optimization of performance and cost focuses on compute and includes data considerations. AWS has a strong offering of data and related services, including data visualization, which enables you to extract the most value from your data.

• Automate to simplify architectural experimentation: Automation through code allows you to create and replicate your systems at low cost and avoid the expense of manual effort. You can track changes to your code, audit their impact, and revert to previous versions when necessary. The ability to easily experiment with infrastructure allows you to optimize the architecture for performance and cost. AWS offers tools, such as AWS ParallelCluster, that help you get started with treating your HPC cloud infrastructure as code.

• Enable collaboration: HPC work often occurs in a collaborative context, sometimes spanning many countries around the world. Beyond immediate collaboration, methods and results are often shared with the wider HPC and scientific community. It's important to consider in advance which tools, code, and data may be shared, and with whom. The delivery methods should be part of this design process.
AWS Marketplace products and scripts Take full advantage of the AWS security and collaboration features that make AWS an excellent environment for ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 5 you and your collaborators to solve your HPC problems This help s your computing solutions and datasets achieve a greater impact by secure ly sharing within a selective group or public ly sharing with the broader community • Use c loud native d esigns : It is usually unnecessary and suboptimal to replicate your on prem ises environment when you migrate workloads to AWS The breadth and depth of AWS services enables HPC workloads to run in new ways using new design patterns and cloud native solution s For example each user or group can use a separate cluster which can independently scale d epending on the load Users can rely on a managed service like AWS Batch or serverless computing like AWS Lambda to manage the underlying infrastructure Cons ider not using a traditional cluster schedule r and instead use a scheduler only if your workload requires it In the cloud HPC clusters do not require permanence and can be ephemeral resources When you automate your cluster deployment you can terminate one cluster and launch a new one quickly with the same or different parameters This method creates environments as necessary • Test real world workloads : The only way to know how your production workload will perform in the cloud is to test it on the cloud Most HPC applications are complex and their memory CPU and network patterns often can’t be reduced to a simple test Also application requirements for infrastructure vary based on which application solvers (mathematical methods or algorithms) yo ur model s use the size and complexity of your models etc For this reason generic benchmarks aren’t reliable predictors of actual HPC production performance Similarly there is little value in testing an application with a small benchmark set or “toy p roblem ” With AWS you only pay for what you actually use ; therefore it is feasible to do a realistic proof ofconcept with your own representative models A major advantage of a cloud based platform is that a realistic full scale test can be done before migration • Balance time toresults and cost reduction : Analyze performance using the most meaningful parameters: time and cost Focus on cost o ptimiz ation should be used for workloads that are not time sensitive Spot Instances are usually the least expensive method for nontimecritical workloads For example i f a researcher has a large number of lab measurements that must be analyzed sometime before next year’s conference Spot Instances can help analyze the largest possible number of measurements within the fixed research budget Conversely for timecritical workloads such as emergency response modeling cost optimization can be traded for performance and instance type procurement model and cluster size should be chosen for lowest and most imm ediate ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 6 execution time If comparing platforms it’s important to take the entire time to solution into account including non compute aspects such as provisioning resources staging data or in more traditional environments time spent in job queues Scenarios HPC cases are typically complex computational problems that require parallel processing techniques To support the calculations a w ellarchitected HPC infrastructure is capable of sustained 
performance for the duration of the calculation s HPC work loads span traditional applications like genomics computational chemistry financial risk modeling computer aided engineering weather prediction and seismic imaging as well as emerging applications like machine learning deep learning and autonomous driving Still the traditional grids or HPC clusters that support these calculations are remarkably similar in architecture with select cluster attributes optimized for the specific workload In AWS the network storage type compute (instance) type an d even deployment method can be strategically chosen to optimize performance cost and usability for a particular workload HPC is divided into two categories based on the degree of interaction between the concurrently running parallel processes: loosely coupled and tightly coupled workloads Loosely coupled HPC cases are those where the multiple or parallel processes don’t strongly interact with each other in the course of the entire simulation Tightly coupled HPC cases are those where the parallel proce sses are simultaneously running and regularly exchanging information between each other at each iteration or step of the simulation With loosely coupled workloads the completion of an entire calculation or simulation often requires hundreds to millions o f parallel processes These processes occur in any order and at any speed through the course of the simulation This offers flexibility on the computing infrastructure required for loosely coupled simulations Tightly coupled workloads have processes that regularly exchange information at each iteration of the simulation Typically these tightly coupled simulations run on a homogenous cluster The total core or processor count can range from tens to thousands and occasionally to hundreds of thousands if the infrastructure allows The interactions of the processes during the simulation place extra demands on the infrastructure such as the compute nodes and network infrastructure ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 7 The infrastructure used to run the huge variety of loosely and tightly coupl ed applications is differentiated by its ability for process interactions across nodes There are fundamental aspects that apply to both scenarios and specific design considerations for each Consider the following fundamentals for both scenarios when sele cting an HPC infrastructure on AWS: • Network : Network requirements can range from cases with low requirements such as loosely coupled applications with minimal communication traffic to tightly coupled and massively parallel applications that require a per formant network with large bandwidth and low latency • Storage : HPC calculations use create and move data in unique ways Storage infrastructure must support these requirements during each step of the calculation Input data is frequently stored on startup more data is created and stored while running and output data is moved to a reservoir location upon run completion Factors to be considered include data size media type transfer speeds shared access and storage properties (for example durab ility and availability) It is helpful to use a shared file system between nodes For example using a Network File System (NFS) share such as Amazon Elastic File System (EFS) or a Lustre file system such as Amazon FSx for Lustre • Compute : The Amazon EC 2 instance type defines the hardware capabilities available for your HPC workload Hardware capabilities include the 
processor type core frequency processor features (for example vector extensions) memory tocore ratio and network performance On AWS an instance is considered to be the same as an HPC node These terms are used interchangeably in this whitepaper o AWS offers managed services with the ability to access compute without the need to choose the underlying EC2 instance type AWS Lambda and AW S Fargate are compute services that allow you to run workloads without having to provision and manage the underlying servers • Deployment : AWS provides many options for deploying HPC workloads Instances can be manually launched from the AWS Management Cons ole For an automated deployment a variety of Software Development Kits (SDKs) is available for coding end toend solutions in different programming languages A popular HPC deployment option combines bash shel l scripting with the AWS Command Line Interface (AWS CLI) ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 8 o AWS CloudFormation templates allow the specification of application tailored HPC clusters described as code so that they can be launched in minutes AWS ParallelCluster is open source software that coordinates the launch of a cluster through CloudFormation with already installed software (for example compilers and schedulers) for a traditional cluster experience o AWS provides managed deployment services for container based workloads such as Amazon EC2 Container Service (Amazon ECS) Amazon Elastic Kubernetes Service (Amazon EKS) AWS Fargate and AWS Batch o Additional software options are available from third party companies in the AWS Marketplace and the AWS Partner Network (APN) Cloud computing makes it easy to experiment with infrastructure components and architecture design AWS strongly encourages testing instance types EBS volume types deployment methods etc to find the best performance at the lowest cost Loosely Coupled Scenarios A loosely coupled workload entails the processing of a large number of smaller jobs Generally the smaller job runs on one node either consuming one process or multiple processes with shared memory parallelization (SMP) for parallelization within that node The parallel processes or the iterations in the simulation are post processed to create one solution or discovery from the simulation Loosely coupled applications are found in many areas including Monte Carlo simulations image processing genomics ana lysis and Electronic Design Automation (EDA) The loss of one node or job in a loosely coupled workload usually doesn’t delay the entire calculation The lost work can be picked up later or omitted altogether The nodes involved in the calculation can var y in specification and power A suitable architecture for a loosely coupled workload has the following considerations: • Network : Because parallel processes do not typically interact with each other the feasibility or performance of the workloads is not sen sitive to the bandwidth and latency capabilities of the network between instances Therefore clustered placement groups are not necessary for this scenario because they weaken the resiliency without providing a performance gain ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 9 • Storage : Loosely coupled w orkloads vary in storage requirements and are driven by the dataset size and desired performance for transferring reading and writing the data • Compute : Each application is different but in general the application’s memory 
tocompute ratio drives the underlying EC2 instance type Some applications are optimized to take advantage of graphics processing units (GPUs) or field programmable gate array (FPGA) accelerators on EC2 instances • Deployment : Loosely coupled simulations often run across many — sometimes millions — of compute cores that can be spread across Availability Zones without sacrificing performance Loosely coupled simulations can be deployed with end toend services and solutions suc h as AWS Batch and AWS ParallelCluster or through a combination of AWS services such as Amazon Simple Queue Service (Amazon SQS) Auto Scaling AWS Lambda and AWS Step Functions Tightly Coupled Scenarios Tightly coupled applications consist of parallel processes that are dependent on each other to carry out the calculation Unlike a loosely coupled computation all processes of a tightly coupled simulation iterate together and require communication with one another An iteration is defined as one step o f the overall simulation Tightly coupled calculations rely on tens to thousands of processes or cores over one to millions of iterations The failure of one node usually leads to the failure of the entire calculation To mitigate the risk of complete fail ure application level checkpointing regularly occurs during a computation to allow for the restarting of a simulation from a known state These simulations rely on a Message Passing Interface (MPI) for interprocess communication Shared Memory Parallelism via OpenMP can be used with MPI Examples of tightly coupled HPC workloads include : computational fluid dynamics weather prediction and reservoir simulation A suitable architecture for a tightly coupled HPC workload has the following considerations: • Network : The network requirements for tightly coupled calculations are demanding Slow communication between nodes results in the slowdown of the entire calculation The largest instance size enhanced networking and cluster placement groups are required fo r consistent networking performance These techniques minimize simulation runtimes and reduce overall costs Tightly coupled applications range in size A large problem size spread over a large ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 10 number of processes or cores usually parallelizes well Smal l cases with lower total computational requirements place the greatest demand on the network Certain Amazon EC2 instances use the Elastic Fabric Adapter (EFA) as a network interface that enables running applications that requir e high levels of internode communications at scale on AWS EFA’s custom built operating system bypass hardware interface enhances the performance of interinstance communications which is critical to scaling tightly coupled applications • Storage : Tightly cou pled workloads vary in storage requirements and are driven by the dataset size and desired performance for transferring reading and writing the data Temporary data storage or scratch space requires special consideration • Compute : EC2 instances are offer ed in a variety of configurations with varying core to memory ratios For parallel applications it is helpful to spread memory intensive parallel simulations across more compute nodes to lessen the memory percore requirements and to target the best perfo rming instance type Tightly coupled applications require a homogenous cluster built from similar compute nodes Targeting the largest instance size minimizes internode network latency while providing the maximum 
network performance when communicating betw een nodes • Deployment : A variety of deployment options are available End toend automation is achievable as is launching simulations in a “traditional” cluster environment Cloud scalability enables you to launch hundreds of large multi process cases at once so there is no need to wait in a queue Tightly coupled simulations can be deployed with end toend solutions such as AWS Batch and AWS ParallelCluster or through solutions based on AWS services such as CloudFormation or EC2 Fleet Reference Archite ctures Many architectures apply to both loosely coupled and tightly coupled workloads and may require slight modifications based on the scenario Traditional on premises clusters force a one sizefitsall approach to the cluster infrastructure However t he cloud offers a wide range of possibilities and allows for optimization of performance and cost In the cloud your configuration can range from a traditional cluster experience with a scheduler and login node to a cloud native architecture with the adva ntages of cost efficiencies obtainable with cloud native solutions Five reference architectures are below: ArchivedAmazon Web Services AWS Well Architected Framework — High Perfor mance Computing Lens 11 1 Traditional cluster environment 2 Batch based architecture 3 Queue based architecture 4 Hybrid deployment 5 Serverless workflow Traditional Cluster Environm ent Many users begin their cloud journey with an environment that is similar to traditional HPC environments Th e environment often involves a login node with a scheduler to launch jobs A common approach to traditional cluster provisioning is based on a n AWS CloudFormation template for a compute cluster combined with customization for a user’s specific tasks AWS ParallelCluster is an example of an end toend cluster provisioning capability based on AWS CloudFormation Although the complex description of the architecture is hidden inside the template typical configuration options allow the user to select the instance type scheduler or bootstrap actions such as installing applications or synchro nizing data The template can be constructed and executed to provide an HPC environment with the “look and feel” of conventional HPC clusters but with the added benefit of scalability The login node maintains the scheduler shared file system and runnin g environment Meanwhile an automatic scaling mechanism allows additional instances to spin up as jobs are submitted to a job queue As instances become idle they are automatically terminated A cluster can be deployed in a persistent configuration or treated as an ephemeral resource Persistent clusters are deployed with a login instance and a compute fleet that can either be a fixed sized or tied to an Auto Scaling group which increase s and decrease s the compute fleet depending on the number of submitted jobs Persistent clusters always have some infrastructure running Alternatively clusters can be treated as ephemeral where each workload runs on its own cluster Ephemeral clusters are enabl ed by automation For example a bash script is combined with the AWS CLI or a Python script with the AWS SDK provides end toend case automation For each case resources are provisioned and launched data is placed on the nodes jobs are run across mult iple nodes and the case output is either retrieved automatically or sent to Amazon S3 Upon completion of the job the infrastructure is terminated These clusters treat infrastructure as code optimize costs and allow for complete 
version control of infrastructure changes ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 12 Traditional cluster architectures can be used for loosely and tightly coupled workloads For best performance tightly coupled workloads must use a compute fleet in a clustered placement group with homogenous instance types Reference Architecture Figure 1: Traditional cluster deployed with AWS ParallelCluster Workflow steps: 1 User initiates the creation of a cluster through the AWS ParallelCluster CLI and specification in the configuration file 2 AWS CloudFormation builds the cluster architecture as described in the cluster template file where the user contributed a few custom settings (for example by editing a configuration file or using a web in terface) 3 AWS CloudFormation deploys the infrastructure from EBS snapshot s created with customized HPC software/applications that cluster instances can access through an NFS export ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 13 4 The u ser logs into the login instance and submits jobs to the scheduler ( for example SGE Slurm ) 5 The login instance emits metrics to CloudWatch based on the job queue size 6 CloudWatch triggers Auto Scaling events to increase the number of compute instances if the job queue size exceeds a threshold 7 Scheduled jobs are processe d by the compute fleet 8 [Optional] User initiates cluster deletion and termination of all resources Batch Based Architecture AWS Batch is a fully managed service that enables you to run large scale compute workloads in the cloud without provisioning resource s or manag ing schedulers 3 AWS Batch enables developers scientists and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS AWS Batch dynamically provisions the op timal quantity and type of compute resources (for example CPU or memory optimized instances) based on the volume and specified resource requirements of the batch jobs submitted It plans schedules and executes your batch computing workloads across the f ull range of AWS compute services and features such as Amazon EC2 4 and Spot Instances 5 Without the need to install and manage the batch computing software or server clusters necessary for running your jobs you can focus on analyzing results and gaining new insights With AWS Batch you package your application in a container specify your job’s dependencies and submit your batch jobs using the AWS Management Console the CLI or an SDK You can specify execution parameters and job dependencies and integrate with a br oad range of popular batch computing workflow engines and languages (for example Pegasus WMS Luigi and AWS Step Functions) AWS Batch provides default job queues and compute environment definitions that enable you to get started quickly An AWS Batch ba sed architecture can be used for both loosely and tightly coupled workloads Tightly coupled workloads should use Multi node Parallel Jobs in AWS Batch ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 14 Reference Architecture Figure 2: Example AWS Batch architecture Workflow steps: 1 User creates a job container uploads the container to the Amazon EC2 Container Registry (Amazon ECR ) or another container registry (for example DockerHub) and creates a job definition to AWS Batch 2 User submits jobs to a job queue in AWS Batch 3 AWS Batch pulls the image from the container registry and processes the jobs in 
the queue 4 Input and output data from each job is stored in an S3 bucket Queue Based Architecture Amazon SQS is a fully managed messag e queuing service that makes it easy to decouple preprocessing steps from compute steps and post processing steps 6 Building applications from individual components that perform discrete function s improves scalability and reliability Decoupling components is a best practice for designing modern applications Amazon SQS frequently lies at the heart of cloud native loosely coupled solutions Amazon SQS is often orchestrated with AWS CLI or AWS SDK scripted solutions for the deployment of applications from the desktop without users interacting with AWS ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Co mputing Lens 15 components directly A queue based architecture with SQS and EC2 requi res self managed compute infrastructure in contrast with a service managed deployment such as AWS Batch A queue based architecture is best for loosely coupled workloads and can quickly become complex if applied to tightly coupled workloads Reference Ar chitecture Figure 3: Amazon SQS deployed for a loosely coupled workload Workflow steps: 1 Multiple users submit jobs with the AWS CLI or SDK 2 The j obs are queued as messages in Amazon SQS 3 EC2 Instances poll the queue and start processing jobs 4 Amazon SQS emits metrics based on the number of messages (jobs) in the queue 5 An Amazon CloudWatch alarm is configured to notify Auto Scaling if the queue is longer than a specified length Auto Scaling increase s the number of EC2 instances 6 The EC2 instances pull so urce data and store result data in an S3 bucket ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 16 Hybrid Deployment Hybrid deployments are primarily considered by organizations that are invested in their onpremises infrastructure and also want to use AWS This approach allows organizations to augment on premises resources and creates an alternative path to AWS rather than an immediate full migration Hybrid scenarios vary from minimal coordination like workload separation to tightly integrated approaches like scheduler driven job placement For example an organization may separate their workloads and run all workloads of a certain type on AWS infra structure Alternatively organizations with a large investment in their on premises processes and infrastructure may desire a more seamless experience for their end users by managing AWS resources with their job scheduling software and potentially a job s ubmission portal Several job schedulers – commercial and open source – provide the capability to dynamically provision and deprovision AWS resources as necessary The underlying resource management relies on native AWS integrations (for example AWS CLI or API) and can allow for a highly customized environment depending on the scheduler Although job schedulers help manage AWS resources the scheduler is only one aspect of a successful deployment Critical factor s in successfully operating a hybrid scenario are data locality and data movement Some HPC workloads do not require or generate significant datasets; therefore data management is less of a concern However jobs that require large input data or that generate significant output data can become a bottleneck Techniques to address data management vary depending on organization For example one organization may have their end users manage the data transfer in their job submission scripts others might only run 
certain jobs in the location whe re a dataset resides a third organization might choose to duplicate data in both locations and yet another organization might choose to use a combination of several options Depending on the data management approach AWS provides several services to aid in a hybrid deployment For example AWS Direct Connect establishes a dedicated network connection between an on premises environment and AWS and AWS DataSync automatically moves data from on premises storage to Amazon S3 or Amazon Elastic File System Additional software options are available from third party companies in the AWS Marketplace and the AWS Partner Network (APN) Hybrid deployment architectures can be used for loosely and tightly coupled workloads However a single tightly coupled workload s hould reside either on premises or in AWS for best performance ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 17 Refe rence Architecture Figure 3: Example hybrid scheduler based deployment Workflow steps: 1 User submits the job to a scheduler ( for example Slurm) on an on premises login node 2 Scheduler e xecutes the job on either on premises compute or AWS infrastructure based on configuration 3 The jobs access shared storage based on their run location Serverless The loosely coupled cloud journey often leads to an environment that is entirely serverless meaning that you can concentrate on your applications and leave the server provisioning responsibility to managed services AWS Lambda can run code without the need to provision or manag e servers You pay only for the compute time you consume — there is no charge when your code is not running You upload your code and Lambda takes care of everything required to run and scale your code Lambda also has the capabilit y to automatically trigger events from other AWS services ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 18 Scalability is a second advantage of the serverless Lambda approach Although each worker may be modest in size – for example a compute core with some memory – the architecture can spawn thousands of concurrent Lambda workers thus reaching a large compute throughput capacity and earning the HPC label For example a large number of files can be analyzed by invocations of the same algorithm a large number of genomes can be analyzed in parallel or a large number of gene sites within a genome can be modeled The largest attainable scale and speed of scaling matter While server based architectures require time on the order of minutes to increase capacity in response to a request (even when using vir tual machines such as EC2 instances) serverless Lambda functions scale in seconds AWS Lambda enables HPC infrastructure that respond s immediately to an y unforeseen request s for compute intensive results and can fulfill a variable number of requests with out requiring any resources to be wastefully provisioned in advance In addition to compute there are other serverless architectures that aid HPC workflows AWS Step Functions let you coordinate multiple steps in a pipeline by stitching together different AWS services For example an automated genomics pipeline can be created with AWS Step Functions for coordination Amazon S3 for storage AWS Lambda for small tasks and AWS Batch for data processing Serverless architectures are best for loosely coupled workloads or as workflow coordination if combined with another HPC architecture Reference Architecture ArchivedAmazon Web 
Services AWS Well Architected Framework — High Performance Computing Lens 19 Figure 4: Example Lambda deployed loosely coupled workload Workflow steps: 1 The u ser uploads a file to an S3 bucket through the AWS CLI or SDK 2 The input file is saved with an incoming prefix ( for example input/) 3 An S3 event automatically triggers a Lambda function to process the incoming data 4 The output file is saved back to the S3 bucket with a n outgoing prefix (for example output/ ) ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 20 The Five Pillars of the Well Architected Framework This section describes HPC in the context of the five pillars of the WellArchitected Framework Each pillar discusses design principles definitions best practices evaluation questions consideratio ns key AWS services and useful links Operational Excellence Pillar The operational excellence pillar includes the ability to run and monitor systems to deliver business value and continually improve supporting processes and procedures Design Principles In the cloud a number of principles drive operational excellence In particular the following are emphasized for HPC workloads See also the design principles in the AWS Well Architected Framework whitepaper • Automate cluster operations : In the cloud you can define your entire workload as code and update it with code This enables you to automate repetitive processes or procedures You benefit from being able to consistently reproduce infrastructure and implement operational procedures This includes automating the job submission process and responses to events such as job start completion or failure In HPC it is common for users to expect to repeat multiple steps for every job including for example uploading case files submittin g a job to a scheduler and moving result files Automate these repetitive steps with scripts or by event driven code to maximize usability and minimize costs and failures • Use cloud native architectures where applicable : HPC architectures typically take o ne of two forms The first is a traditional cluster configuration with a login instance compute nodes and job scheduler The second a cloud native architecture with automated deployments and managed services A single workload can run for each (ephemeral ) cluster or use serverless capabilities Cloud native architectures can optimize operations with democratizing advanced technologies; however the best technology approach aligns with the desired environment for HPC users ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 21 Definition There are three best practice areas for operational excellence in the cloud: • Prepare • Operat e • Evolve For more information on the prepare operate and evolve areas see the AWS Well Architected Framework whitepaper Evolve is not described in this whitepaper Best Practices Prepare Review the corresponding section in the AWS Well Architected Framework whitepaper As you prepare to deploy your workload consider using specialized softw are packages (commercial or open source ) to gain visibility into system information and leverage this information to defin e architecture patterns for your workloads Use automation tools such as AWS ParallelCluster or AWS CloudFormation to define these a rchitectures in a way that is configurable with variables The Cloud provides multiple scheduling options One option is to use AWS Batch which is a fully managed batch processing service with support for both single 
node and multi-node tasks. Another option is to not use a scheduler at all; for example, you can create an ephemeral cluster to run a single job directly.

HPCOPS 1: How do you standardize architectures across clusters?

HPCOPS 2: How do you schedule jobs – traditional schedulers, AWS Batch, or no scheduler with ephemeral clusters?

Operate

Operations must be standardized and managed routinely. Focus on automation, small frequent changes, regular quality assurance testing, and defined mechanisms to track, audit, roll back, and review changes. Changes should not be large and infrequent, should not require scheduled downtime, and should not require manual execution. A wide range of logs and metrics based on key operational indicators for a workload must be collected and reviewed to ensure continuous operations.

AWS provides additional tools for handling HPC operations, ranging from monitoring assistance to deployment automation. For example, you can have Auto Scaling restart failed instances, use CloudWatch to monitor your cluster's load metrics, configure notifications for when jobs finish, or use a managed service (such as AWS Batch) to implement retry rules for failed jobs.

Cloud-native tools can greatly improve your application deployment and change management. Release management processes, whether manual or automated, must be based on small, incremental changes and tracked versions. You must be able to revert releases that introduce issues without causing operational impact. Use continuous integration and continuous deployment tools such as AWS CodePipeline and AWS CodeDeploy to automate change deployment. Track source code changes with version control tools such as AWS CodeCommit, and track infrastructure configurations with automation tools such as AWS CloudFormation templates.

HPCOPS 3: How are you evolving your workload while minimizing the impact of change?

HPCOPS 4: How do you monitor your workload to ensure that it is operating as expected?
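One concrete pattern behind this question is publishing cluster-level indicators, such as scheduler queue depth, as custom Amazon CloudWatch metrics that alarms and scaling policies can act on. The following is a minimal sketch, assuming a Slurm login node where the `squeue` command is available; the namespace, metric, and dimension names are illustrative, not an AWS or ParallelCluster convention.

```python
import subprocess

import boto3

cloudwatch = boto3.client("cloudwatch")


def pending_job_count() -> int:
    """Count pending jobs reported by the Slurm scheduler on this login node."""
    out = subprocess.run(
        ["squeue", "--noheader", "--states=PENDING", "--format=%i"],
        check=True,
        capture_output=True,
        text=True,
    )
    return len([line for line in out.stdout.splitlines() if line.strip()])


def publish_queue_depth(cluster_name: str) -> None:
    """Publish queue depth as a custom CloudWatch metric for alarms and scaling."""
    cloudwatch.put_metric_data(
        Namespace="HPC/Cluster",  # illustrative namespace
        MetricData=[
            {
                "MetricName": "PendingJobs",
                "Dimensions": [{"Name": "ClusterName", "Value": cluster_name}],
                "Value": pending_job_count(),
                "Unit": "Count",
            }
        ],
    )


if __name__ == "__main__":
    publish_queue_depth(cluster_name="my-hpc-cluster")  # illustrative cluster name
```

Run on a schedule (for example, from a cron entry on the login instance), a metric like this can drive a CloudWatch alarm that scales the compute fleet, mirroring the workflow described for the traditional cluster architecture.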
Using the cloud for HPC introduces new operational considerations While on premises clusters are fixed in size cloud clusters can scale to meet demand Cloudnative architectures for HPC also operate differently than on premises architectures For example they use different mechanisms for job submission and provisioning On Demand Instance resources as jobs arrive You must a dopt operational procedures that accommodate the elasticity of the c loud and the dynamic nature of cloud native architectures Evolve There are no best practices unique to HPC for the evolve practice area For more information see t he corresponding section in the AWS Well Architected Framework whitepaper ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 23 Security Pillar The security pillar includes the ability to protect information systems and assets while delivering business value through risk assessments and mitigation strategies Design Principles In the cloud there are a number of principles that help you strengthen your system ’s security The Design Principles from the AWS Well Architected Framework wh itepaper are recommended and do not vary for HPC workloads Definition There are five best practice areas for security in the cloud: • Identity and access management (IAM) • Detective controls • Infrastructure protection • Data protection • Incident response Before architecting any system you must establish security practices You must be able to control permissions identify security incidents protect your systems and services and maintain the confidentiality and integrity of data through data protection You should have a well defined and practiced process for responding to security incidents These tools and techniques are important because they support objectives such as preventing data loss and complying with regulatory obligations The AWS Shared Responsi bility Model enables organizations that adopt the cloud to achieve their security and compliance goals Because AWS physically secures the infrastructure that supports our cloud services you can focus on using services to accomplish your goals The AWS Cl oud provides access to security data and an automated approach to responding to security events All of the security best practice areas are vital and well documented in the AWS Well Architected Framework whitepaper The detective controls infrastructure protection and incident response areas are described in the AWS Well Architected Framework whitepaper They are not described in this whitepaper and do not require modification for HPC workloads ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 24 Best Practices Identity and Access Management (IAM) Identity and access management are key parts of an information security program They ensure that only authorized and authenticated users are able to access your resources For example you define principals (users groups services and roles that take action in your account) build out policies referencing these principals and impl ement strong credential management These privilege management elements form the core concepts of authentication and authorization Run HPC workloads autonomously and ephemerally to limit the exposure of sensitive data Autonomous deployments require minim al human access to instances which minimize s the exposure of the resources HPC data is produced within a limited time minimizing the possibility of potential unauthorized data access HPC SEC 1: How are you using managed 
services, autonomous methods, and ephemeral clusters to minimize human access to the workload infrastructure?

HPC architectures can use a variety of managed (for example, AWS Batch, AWS Lambda) and unmanaged compute services (for example, Amazon EC2). When architectures require direct access to the compute environment, such as connecting to an EC2 instance, users commonly connect through Secure Shell (SSH) and authenticate with an SSH key. This access model is typical in a Traditional Cluster scenario. All credentials, including SSH keys, must be appropriately protected and regularly rotated. Alternatively, AWS Systems Manager offers a fully managed capability (Session Manager) that provides an interactive browser-based shell and CLI experience. It provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager can also be accessed through any SSH client that supports ProxyCommand.

HPCSEC 2: What methods are you using to protect and manage your credentials?

Detective Controls

There are no best practices unique to HPC for the detective controls best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Infrastructure Protection

There are no best practices unique to HPC for the infrastructure protection best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Data Protection

Before architecting any system, you must establish foundational security practices. For example, data classification provides a way to categorize organizational data based on levels of sensitivity, and encryption protects data by rendering it unintelligible to unauthorized access. These tools and techniques are important because they support objectives such as preventing data loss and complying with regulatory obligations.

HPCSEC 3: How does your architecture address data requirements for storage availability and durability through the lifecycle of your results?
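One programmatic way to answer this question is to encode the data lifecycle in the storage layer itself; the categorization discussed next (retained results versus reproducible intermediate data) maps naturally onto an S3 lifecycle configuration. The sketch below is illustrative only; the bucket name, prefixes, and retention periods are assumptions you would replace based on your own data classification.

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "my-hpc-results-bucket"  # illustrative bucket name

lifecycle_configuration = {
    "Rules": [
        {
            # Intermediate results under scratch/ can be recreated, so expire them.
            "ID": "expire-intermediate-results",
            "Filter": {"Prefix": "scratch/"},
            "Status": "Enabled",
            "Expiration": {"Days": 14},
        },
        {
            # Final results under results/ are retained but read rarely after a
            # project phase ends, so transition them to a colder storage class.
            "ID": "archive-final-results",
            "Filter": {"Prefix": "results/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        },
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration=lifecycle_configuration,
)
```

Combined with the categorization described next, rules like these keep exposure to a minimum while important results move automatically to durable, cost-appropriate storage.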
In addition to the level of sensitivity and regulatory obligations, HPC data can also be categorized according to when and how the data will next be used. Final results are often retained, while intermediate results, which can be recreated if necessary, may not need to be retained. Careful evaluation and categorization of data allows for programmatic migration of important data to more resilient storage solutions, such as Amazon S3 and Amazon EFS. An understanding of data longevity, combined with programmatic handling of the data, offers the minimum exposure and maximum protection for a Well-Architected infrastructure.

Incident Response

There are no best practices unique to HPC for the incident response best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Reliability Pillar

The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Design Principles

In the cloud, a number of principles help you increase reliability. The following are emphasized for HPC workloads; for more information, refer to the design principles in the AWS Well-Architected Framework whitepaper.

• Scale horizontally to increase aggregate system availability: It is important to consider horizontal scaling options that might reduce the impact of a single failure on the overall system. For example, rather than having one large shared HPC cluster running multiple jobs, consider creating multiple clusters across the Amazon infrastructure to further isolate your risk of potential failures. Because infrastructure can be treated as code, you can horizontally scale resources inside a single cluster, and you can horizontally scale the number of clusters running individual cases.

• Stop guessing capacity: A set of HPC clusters can be provisioned to meet current needs and scaled manually or automatically to meet increases or decreases in demand. For example, terminate idle compute nodes when not in use, and run concurrent clusters for processing multiple computations rather than waiting in a queue.

• Manage change in automation: Automating changes to your infrastructure allows you to place a cluster infrastructure under version control and make exact duplicates of a previously created cluster. Changes made through automation must themselves be managed.

Definition

There are three best practice areas for reliability in the cloud:

• Foundations
• Change management
• Failure management

The change management area is described in the AWS Well-Architected Framework whitepaper.

Best Practices

Foundations

HPCREL 1: How do you manage AWS service limits for your accounts?
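Because large HPC deployments frequently run into default limits, as the next paragraph explains, it can help to review the relevant quotas programmatically before launching a large cluster. The following sketch uses the Service Quotas API to list applied Amazon EC2 quotas related to instance capacity; the name filter is illustrative, and quotas still at their default values may need to be read with the default-quota API instead.

```python
import boto3

quotas = boto3.client("service-quotas")


def ec2_capacity_quotas():
    """Yield applied EC2 quotas whose names mention On-Demand or Spot capacity."""
    token = None
    while True:
        kwargs = {"ServiceCode": "ec2"}
        if token:
            kwargs["NextToken"] = token
        page = quotas.list_service_quotas(**kwargs)
        for quota in page["Quotas"]:
            name = quota["QuotaName"]
            if "On-Demand" in name or "Spot" in name:
                yield quota["QuotaCode"], name, quota["Value"]
        token = page.get("NextToken")
        if not token:
            break
        # Note: quotas never adjusted from their defaults may only appear via
        # list_aws_default_service_quotas, so check both views if a quota is missing.


if __name__ == "__main__":
    for code, name, value in ec2_capacity_quotas():
        print(f"{code}: {name} = {value}")
```

If a quota is too low for the planned deployment, request an increase (through the Service Quotas console, AWS Support, or the quota-increase API) well before the workload is scheduled to run.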
AWS sets service limits (an upper limit on the number of each resource your team can request) to protect you from accidentally overprovisioning resources. HPC applications often require a large number of compute instances simultaneously, and the ability to scale horizontally is highly desirable for HPC workloads. However, scaling horizontally may require an increase to the AWS service limits before a large workload is deployed, whether to one large cluster or to many smaller clusters all at once. Service limits must often be increased from their default values to handle the requirements of a large deployment. Contact AWS Support to request an increase.

Change Management

There are no best practices unique to HPC for the change management best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Failure Management

Any complex system should expect occasional failures, and it is critical to become aware of these failures, respond to them, and prevent them from happening again. Failure scenarios can include the failure of a cluster to start up or the failure of a specific workload.

HPCREL 2: How does your application use checkpointing to recover from failures?

Failure tolerance can be improved in multiple ways. For long-running cases, incorporating regular checkpoints in your code allows you to continue from a partial state in the event of a failure. Checkpointing is a common feature of application-level failure management already built into many HPC applications. The most common approach is for applications to periodically write out intermediate results. These intermediate results offer potential insight into application errors and the ability to restart the case as needed while only partially losing the work.

Checkpointing is particularly useful on Spot Instances, which are highly cost-effective but potentially interruptible. In addition, some applications may benefit from changing the default Spot interruption behavior (for example, stopping or hibernating the instance rather than terminating it). It is important to consider the durability of the storage option when relying on checkpointing for failure management.

HPCREL 3: How have you planned for failure tolerance in your architecture?
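On Spot Instances, checkpointing is most effective when paired with a small watcher that detects the two-minute interruption notice and pushes the latest checkpoint to durable storage. The sketch below polls the instance metadata service (IMDSv2) for a Spot instance-action notice and copies a checkpoint directory to Amazon S3; the bucket name and checkpoint path are illustrative assumptions.

```python
import pathlib
import time
import urllib.error
import urllib.request

import boto3

METADATA = "http://169.254.169.254/latest"
CHECKPOINT_DIR = pathlib.Path("/scratch/checkpoints")  # illustrative path
BUCKET = "my-hpc-checkpoints"                          # illustrative bucket name

s3 = boto3.client("s3")


def imds_token() -> str:
    """Request a short-lived IMDSv2 session token."""
    req = urllib.request.Request(
        f"{METADATA}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()


def interruption_pending() -> bool:
    """Return True once EC2 has scheduled a Spot interruption for this instance."""
    req = urllib.request.Request(
        f"{METADATA}/meta-data/spot/instance-action",
        headers={"X-aws-ec2-metadata-token": imds_token()},
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no interruption scheduled yet
            return False
        raise


def upload_checkpoints() -> None:
    """Copy the latest checkpoint files to S3 before the instance is reclaimed."""
    for path in CHECKPOINT_DIR.glob("*"):
        if path.is_file():
            s3.upload_file(str(path), BUCKET, f"checkpoints/{path.name}")


if __name__ == "__main__":
    while not interruption_pending():
        time.sleep(5)
    upload_checkpoints()
```

The durability of the checkpoint target matters here: local instance store disappears with the instance, whereas Amazon S3 or a shared file system survives the interruption and allows the case to restart elsewhere.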
Failure tolerance can be improved when deploying to multiple Availability Zones The lowlatency requirements of tightly coupled HPC applications require that each individual case r eside within a single cluster placement group and Availability Zone Alternatively loosely coupled applications do not have such low latency requirements and can improve failure management with the ability to deploy to several Availability Zones Consider the tradeoff between the reliability and cost pillars when making this design decision Duplication of compute and storage infrastructure (for example a head node and attached storage) incurs additional cost and there may be data transfer charges for moving data to an Availability Zone or to another AWS Region For non urgent use cases it may be preferable to only move into another Availability Zone as part of a disaster recovery (DR) event Performance Efficiency Pillar The performance efficiency pillar focuses on the efficient use of computing resources to meet requirements and on maintaining that efficiency as demand c hanges and technologies evolve Design Principles When designing for HPC i n the cloud a number of principles help you achieve perform ance efficiency: • Design the cluster for the application : Traditional clusters are static and require that the application be designed for the cluster AWS offers the capability to design the cluster for the application A one sizefitsall model is no longer necessary with individual clusters for each application When running a variety of ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 29 applications on AWS a variety of architectures can be used to meet each application’s demands This allows for the best perf ormance w hile minimizing cost • Test performance with a meaningful use case : The best method to gauge an HPC application’s performance on a particular architecture is to run a meaningful demonstr ation of the application itself An inadvertently small or large demons tration case – one without the expected compute memory data tra nsfer or network traffic –will not provide a meaningful test of application performance on AWS Although system specific benchmarks offer an understanding of the underlying compute infrastr ucture performance they do not reflect how an application will perform in the aggregate The AWS payasyougo model makes a proof ofconcept quick and cost effective • Use cloud native architectures where applicable : In the cloud managed serverless and cloud native architectures remove the need for you to run and maintain servers to carry out traditional compute activities Cloud native components for HPC target compute storage job orchestration and organization of the data and metadata The variety of AWS services allow s each step in the workload process to be decoupled and optimized for a more performant capability • Experiment often : Virtual and automatable resources allow you to quickly carry out comparative testing using different ty pes of instances storage and configurations Definition There are four best practice areas for p erformance efficiency in the cloud: • Selection • Review • Monitoring • Tradeoff s The review monitoring and tradeoffs areas are described in the AWS Well Architected Framework whitepaper ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 30 Best Practices Selection The optimal solution for a particular system varies based on the kind of workload you have WellArchitected systems use 
multiple solutions and enable different f eatures to improve performance An HPC architecture can rely on one or more different architectural elements for example queued batch cluster comput e containers serverless and event driven Compute HPCPERF 1 : How do you select your compute solution? The optimal compute solution for a particular HPC architecture depends on the workload deployment method degree of automation usage patterns and configuration Different compute solutions may be chosen for each step of a process Selecting the wrong compute solution s for an architecture can lead to lower performance efficiency Instances are virtualized servers and come in diff erent families and sizes to offer a wide variety of capabilities Some instance families target specific workloads for example compute memory or GPU intensive workloads Other instances are general purpose Both the targeted workload and general purpose instance families are useful for HPC applications Instances of particular interest to HPC include the compute optimized family and accelerated instance types such as GPUs and FPGAs Some instance families provide variants within the family for addit ional capabilities For example an instance family may have a variant with local storage greater networking capabilities or a different processor These variants can be viewed in the Instance Type Matrix 7 and may improve the performance of some HPC workloads Within each instance family one or more instance sizes allow vertical scaling of resources Some applications require a larger instance type (for example 24xlarg e) while others run on smaller types (for example large ) depending on the number or processes sup ported by the application The optimum performance is obtained with the largest instance type when working with a tightly coupled workload The T series instance family is designed for applications with moderate CPU usage that can benefit from bursting beyond a baseline level of CPU performance Most HPC ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 31 applications are compute intensive and suffer a performance decline with the T series instance family Applications vary in their requirements (for example desired cores processor speed memory requirements storage needs and networking specifications) When selecting an instance family and type begin with the specific needs of the applica tion Instance types can be mixed and matched for applications requiring targeted instances for specific application components Containers are a method of operating system virtualization that is attractive for many HPC workloads particularly if the appli cations have already been containerized AWS services such as AWS Batch Amazon Elastic Container Service (ECS) and Amazon Elastic Container Service for Kubernetes (EKS) help deploy containerized applications Functions abstract the execution environment AWS Lambda allows you to execute code without deploying running or maintaining an instance Many AWS services emit events based on activity inside the service and often a Lambda function can be triggered off of service events For example a Lambda fu nction can be executed after an object is uploaded to Amazon S3 Many HPC users use Lambda to automatically execute code as part of their workflow There are several choices to make when launching your selected compute instance: • Operating system : A current operating system is critical to achieving the best performance and ensuring access to the most up todate libraries 
• Virtualization type : Newgeneration EC2 instances run on the AWS Nitro System The Nitro System delivers all the host hardware’s compute a nd memory resources to your instances resulting in better overall performance Dedicated Nitro Cards enable high speed networking highspeed EBS and I/O acceleration Instances do not hold back resources for management software The Nitro Hypervisor is a lightweight hypervisor that manages memory and CPU allocation and delivers performance that is indistinguishable from bare metal The Nitro System also makes bare metal instances available to run without the Nitro Hypervisor Launching a bare metal instance boots the underlying server which includes verifying all hardware and firmware components This means it can take longer before the bare metal instance becomes available to start your workload as compared to a virtualized instance The additiona l initialization time must be considered when operating in a dynamic HPC environment where resources launch and terminate based on demand ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 32 HPCPERF 2: How do you optimize the compute environment for your application? Underlying hardware features : In additio n to cho osing an AMI you can further optimize your environment by taking advantage of the hardware features of the underlying Intel processors There are four primary methods to consider when optimizing the underlying hardware: 1 Advanced processor features 2 Intel Hyper Threading Technology 3 Processor affinity 4 Processor state control HPC applications can benefit from these advanced processor features (for example Advanced Vector Extensions) and can increas e their calculation speeds by compiling the software for the Intel architecture 8 The compiler options for architecture specific instructions vary by compiler (check the usage guide for your compiler) AWS enables Intel Hyper Threading Technology commonl y referred to as “hyperthreading ” by default Hyperthreading improves performance for some applications by allowing one process per hyperthread (two processes per core) Most HPC applications benefit from disabling hyperthreading and therefore it tends to be the preferred environment for HPC applications Hyperthreading is easily disabled in Amazon EC2 Unless an application has been tested with hyperthreading enabled it is recommended that hyperthreading be disabled and that process es are launched and individually pinned to cores when running HPC applications CPU or processor affinity allows process pinning to easily happen Processor affinity can be controlled in a variety of ways For example i t can be configured at the operating system level (available in both Windows and Linux) set as a compiler flag within the threading library or specified as an MPI flag during execution The chosen method of controlling processor affinity depends on your workload and application AWS enable s you to tune the processor state control on certain instance types 9 You may consider altering the C state (idle states) and P state (operational states) settings to optimize your performance The default C state and P state settings provide maximum performance which is optimal for most workloads However if your application would benefit from re duced latency at the cost of higher single or dual core ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 33 frequencies or from consistent performance at lower frequencies as opposed to spiky Turbo Boost 
frequencies experiment with the C state or P state settings available on select instances There are many compute options available to optimize a compute environment Cloud deployment allows experimentation on every level from operating system to instance type to bare metal deployments Because static clusters are tuned before deployment time spent expe rimenting with cloud based clusters is vital to achieving the desired performance Storage HPCPERF 3: How do you select your storage solution? The optimal storage solution for a particular HPC architecture depends largely on the individual applications tar geted for that architecture Workload deployment method degree of automation and desired data lifecycle patterns are also factors AWS offers a wide range of storage options As with compute the best performance i s obtained when targeting the specific s torage needs of an application AWS does not require you to overprovision your storage for a “onesizefitsall” approach and large highspeed shared file systems are no t always required Optimizing the c ompute choice is important for optimizing HPC performance and m any HPC applications will not benefit from the fastest storage solution possible HPC deployments often require a shared or high performance file system that is accessed by the cluster compute nodes There are several architecture patterns you can use to implement these storage solutions from AWS Managed Services AWS Marketplace offerings APN Partner solutions and open source configurations deployed on EC2 instances In particular A mazon FSx for Lustre is a managed service that provides a cost effective and performant solution f or HPC architectures requiring a high performance parallel file system Shared file systems can also be created from Amazon Elastic File System (EFS) or EC2 instances with Amazon EBS volumes or instance store volumes Frequently a simple NFS mount is used to create a shared directory When selecting your storage solution you may select an EBS backed instance for either or both of your local and shared storage s EBS volumes are often the basis for an HPC storage solution Various types of EBS volumes are available including magnetic hard disk drives (HDDs) general purpose solid state drives (SSDs) and Provisioned IOPS SSDs for high IOPS solutions They differ in throughput IOPS performance and cost ArchivedAmazon Web Services AWS Well Architected Framework — High Performance Computing Lens 34 You can gain further performance enhancements by selecting an Amazon EBS optimized instance An EBS optimized instance uses an optimized configuration stack and provides additional dedicated capacity for Amazon E BS I/O This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other network traffic to and from your instance Choose an EBS optimized instance for more consistent performance and for HPC a pplications that rely on a low latency network or have intensive I/O data needs to EBS volumes To launch an EBS optimized instance choose an instance type that enables EBS optimization by default or choose an instance type that allows enabling EBS optim ization at launch Instance store volumes including nonvolatile memory express (NVMe) SSD volumes (only available on certain instance families) can be used for temporary block level storage Refer to the instance type matrix for EBS optimization and instance store volume support 10 When you select a storage solution ensure that it aligns with your access patterns to achieve 
It is easy to experiment with different storage types and configurations, and with HPC workloads the most expensive option is not always the best-performing solution.

Networking

HPCPERF 4: How do you select your network solution?

The optimal network solution for an HPC workload varies based on latency, bandwidth, and throughput requirements. Tightly coupled HPC applications often require the lowest possible latency for network connections between compute nodes. For moderately sized tightly coupled workloads, it is possible to select a large instance type with a large number of cores so that the application fits entirely within the instance without crossing the network at all. Alternatively, some applications are network bound and require high network performance; instances with higher network performance can be selected for these applications. The highest network performance is obtained with the largest instance type in a family. Refer to the instance type matrix for more details. [7]

Multiple instances with low latency between them are required for large tightly coupled applications. On AWS, this is achieved by launching compute nodes into a cluster placement group, which is a logical grouping of instances within an Availability Zone. A cluster placement group provides non-blocking and non-oversubscribed connectivity, including full bisection bandwidth between instances. Use cluster placement groups for latency-sensitive, tightly coupled applications spanning multiple instances.

In addition to cluster placement groups, tightly coupled applications benefit from an Elastic Fabric Adapter (EFA), a network device that can attach to your Amazon EC2 instance. EFA provides lower and more consistent latency and higher throughput than the TCP transport traditionally used in cloud-based HPC systems. It enables an OS-bypass access model through the Libfabric API that allows HPC applications to communicate directly with the network interface hardware. EFA enhances the performance of inter-instance communication, is optimized to work on the existing AWS network infrastructure, and is critical for scaling tightly coupled applications. [13]

If an application cannot take advantage of EFA's OS-bypass functionality, or an instance type does not support EFA, optimal network performance can be obtained by selecting an instance type that supports enhanced networking. Enhanced networking provides EC2 instances with higher networking performance and lower CPU utilization through the use of pass-through rather than hardware-emulated devices. This method allows EC2 instances to achieve higher bandwidth, higher packet-per-second processing, and lower inter-instance latency compared to traditional device virtualization. Enhanced networking is available on all current-generation instance types and requires an AMI with supported drivers. Although most current AMIs contain supported drivers, custom AMIs may require updated drivers. For more information on enabling enhanced networking and instance support, refer to the enhanced networking documentation. [11]

Loosely coupled workloads are generally not sensitive to very low-latency networking and do not require the use of a cluster placement group or the need to keep instances in the same Availability Zone or Region.
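The sketch below shows the two pieces described above: a cluster placement group and an instance launch that attaches an EFA interface. It assumes an EFA-capable instance type and an AMI with EFA drivers installed; the AMI, subnet, and security group IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Logical grouping that packs instances close together for low latency.
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

# Launch an EFA-capable node into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI with EFA drivers
    InstanceType="c5n.18xlarge",                 # EFA-capable instance type
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "hpc-cluster-pg"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)

Tightly coupled MPI jobs launched across nodes configured this way can communicate over Libfabric rather than plain TCP.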
Review

There are no best practices unique to HPC for the review best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Monitoring

There are no best practices unique to HPC for the monitoring best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Trade-offs

There are no best practices unique to HPC for the trade-offs best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Cost Optimization Pillar

The cost optimization pillar includes the continual process of refinement and improvement of an HPC system over its entire lifecycle. From the initial design of your first proof of concept to the ongoing operation of production workloads, adopting the practices in this paper enables you to build and operate cost-aware systems that achieve business outcomes and minimize costs, allowing your business to maximize its return on investment.

Design Principles

For HPC in the cloud, you can follow a number of principles to achieve cost optimization:

• Adopt a consumption model: Pay only for the computing resources that you consume. HPC workloads ebb and flow, providing the opportunity to reduce costs by increasing and decreasing resource capacity on an as-needed basis. For example, a low-level run-rate HPC capacity can be provisioned and reserved upfront to benefit from higher discounts, while burst requirements can be provisioned with Spot or On-Demand pricing and brought online only as needed.

• Optimize infrastructure costs for specific jobs: Many HPC workloads are part of a data processing pipeline that includes data transfer, pre-processing, computational calculations, post-processing, data transfer, and storage steps. In the cloud, rather than running on one large and expensive server, the computing platform is optimized at each step. For example, if a single step in a pipeline requires a large amount of memory, you only need to pay for a more expensive large-memory server for the memory-intensive application, while all other steps can run well on smaller and less expensive computing platforms. Costs are reduced by optimizing infrastructure for each step of a workload.

• Burst workloads in the most efficient way: Savings are obtained for HPC workloads through horizontal scaling in the cloud. When scaling horizontally, many jobs or iterations of an entire workload run simultaneously for less total elapsed time. Depending on the application, horizontal scaling can be cost neutral while offering indirect cost savings by delivering results in a fraction of the time.

• Make use of Spot pricing: Amazon EC2 Spot Instances offer spare compute capacity in AWS at steep discounts compared to On-Demand Instances. However, Spot Instances can be interrupted when EC2 needs to reclaim the capacity. Spot Instances are frequently the most cost-effective resource for flexible or fault-tolerant workloads, and the intermittent nature of HPC workloads makes them well suited to Spot Instances. The risk of Spot Instance interruption can be minimized by working with the Spot Advisor, and the interruption impact can be mitigated by changing the default interruption behavior and using Spot Fleet to manage your Spot Instances. The need to occasionally restart a workload is easily offset by the cost savings of Spot Instances (a minimal launch sketch follows this list).

• Assess the trade-off of cost versus time: Tightly coupled, massively parallel workloads are able to run on a wide range of core counts. For these applications, the run efficiency of a case typically falls off at higher core counts. A cost-versus-turnaround-time curve can be created if many cases of similar type and size will run. Curves are specific to both the case type and application, as scaling depends significantly on the ratio of computational to network requirements. Larger workloads are capable of scaling further than smaller workloads. With an understanding of the cost-versus-turnaround-time trade-off, time-sensitive workloads can run more quickly on more cores, while cost savings can be achieved by running on fewer cores at maximum efficiency. Workloads can fall somewhere in between when you want to balance time sensitivity and cost sensitivity.
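As referenced in the Spot pricing principle above, the following sketch requests a single Spot-priced compute node through the standard launch API. It is illustrative only: the AMI ID is a placeholder, and a production cluster would more likely use Spot Fleet or an Auto Scaling group to manage capacity and interruptions.

import boto3

ec2 = boto3.client("ec2")

# Request a one-time Spot Instance instead of On-Demand capacity.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder HPC AMI
    InstanceType="c5.24xlarge",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)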
Definition

There are four best practice areas for cost optimization in the cloud:

• Cost-effective resources
• Matching supply and demand
• Expenditure awareness
• Optimizing over time

The matching supply and demand, expenditure awareness, and optimizing over time areas are described in the AWS Well-Architected Framework whitepaper.

Best Practices

Cost-Effective Resources

HPCCOST 1: How have you evaluated available compute and storage options for your workload to optimize cost?
HPCCOST 2: How have you evaluated the trade-offs between job completion time and cost?

Using the appropriate instances, resources, and features for your system is key to cost management. The instance choice may increase or decrease the overall cost of running an HPC workload. For example, a tightly coupled HPC workload might take five hours to run on a cluster of several smaller servers, while a cluster of fewer, larger servers may cost double per hour but compute the result in one hour, saving money overall. The choice of storage can also impact cost. Consider the potential trade-off between job turnaround and cost optimization, and test workloads with different instance sizes and storage options to optimize cost.

AWS offers a variety of flexible and cost-effective pricing options to acquire instances from EC2 and other services in a way that best fits your needs. On-Demand Instances allow you to pay for compute capacity by the hour with no minimum commitments required. Reserved Instances allow you to reserve capacity and offer savings relative to On-Demand pricing. With Spot Instances, you can leverage unused Amazon EC2 capacity for additional savings relative to On-Demand pricing. A well-architected system uses the most cost-effective resources.

You can also reduce costs by using managed services for pre-processing and post-processing. For example, rather than maintaining servers to store and post-process completed run data, data can be stored on Amazon S3 and then post-processed with Amazon EMR or AWS Batch.

Many AWS services provide features that further reduce your costs. For example, Auto Scaling is integrated with EC2 to automatically launch and terminate instances based on workload demand. FSx for Lustre natively integrates with S3 and presents the entire contents of an S3 bucket as a Lustre file system. This allows you to optimize your storage costs by provisioning a minimal Lustre file system for your immediate workload while maintaining your long-term data in cost-effective S3 storage. S3 provides different storage classes so that you can use the most cost-effective class for your data; the Glacier and Glacier Deep Archive storage classes enable you to archive data at the lowest cost.
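To illustrate the storage-class point, the sketch below adds a lifecycle rule that moves completed run data to the Glacier storage class after 30 days. The bucket name and prefix are placeholders, and 30 days is an arbitrary example threshold.

import boto3

s3 = boto3.client("s3")

# Transition completed results to Glacier after 30 days to cut storage cost.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-hpc-results-bucket",        # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-completed-runs",
            "Filter": {"Prefix": "results/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }],
    },
)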
Experimenting with different instance types, storage requirements, and architectures can minimize costs while maintaining desirable performance.

Matching Supply and Demand

There are no best practices unique to HPC for the matching supply and demand best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Expenditure Awareness

There are no best practices unique to HPC for the expenditure awareness best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Optimizing Over Time

There are no best practices unique to HPC for the optimizing over time best practice area. Review the corresponding section in the AWS Well-Architected Framework whitepaper.

Conclusion

This lens provides architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems for High Performance Computing workloads in the cloud. We covered prototypical HPC architectures and overarching HPC design principles. We revisited the five Well-Architected pillars through the lens of HPC, providing you with a set of questions to help you review an existing or proposed HPC architecture. Applying the Framework to your architecture helps you build stable and efficient systems, allowing you to focus on running HPC applications and pushing the boundaries of your field.

Contributors

The following individuals and organizations contributed to this document:

• Aaron Bucher, HPC Specialist Solutions Architect, Amazon Web Services
• Omar Shorbaji, Global Solutions Architect, Amazon Web Services
• Linda Hedges, HPC Application Engineer, Amazon Web Services
• Nina Vogl, HPC Specialist Solutions Architect, Amazon Web Services
• Sean Smith, HPC Software Development Engineer, Amazon Web Services
• Kevin Jorissen, Solutions Architect – Climate and Weather, Amazon Web Services
• Philip Fitzsimons, Sr. Manager, Well-Architected, Amazon Web Services

Further Reading

For additional information, see the following:

• AWS Well-Architected Framework [12]
• https://aws.amazon.com/hpc
• https://d1.awsstatic.com/whitepapers/Intro_to_HPC_on_AWS.pdf
• https://d1.awsstatic.com/whitepapers/optimizing-electronic-design-automation-eda-workflows-on-aws.pdf
• https://aws.amazon.com/blogs/compute/real-world-aws-scalability/

Document Revisions

December 2019: Minor updates
November 2018: Minor updates
November 2017: Original publication

Notes

1. https://aws.amazon.com/well-architected
2. https://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
3. https://aws.amazon.com/batch/
4. https://aws.amazon.com/ec2/
5. https://aws.amazon.com/ec2/spot/
6. https://aws.amazon.com/message-queue
7. https://aws.amazon.com/ec2/instance-types/#instance-type-matrix
8. https://aws.amazon.com/intel/
9. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/processor_state_control.html
10. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html#ebs-optimization-support
11. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
12. https://aws.amazon.com/well-architected
13. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html
ArchivedAWS IoT Lens AWS Well Architected Framework December 2019 This paper has been archived The latest version is now available at: https://docsawsamazoncom/wellarchitected/latest/iotlens/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Definitions 1 Design and Manufacturing Layer 2 Edge Layer 2 Provisioning Layer 3 Communication Layer 4 Ingestion Layer 4 Analytics Layer 5 Application Layer 6 General Design Principles 8 Scenarios 9 Device Provisioning 9 Device Telemetry 11 Device Commands 12 Firmware Updates 14 The Pillars of the Well Architected Framework 16 Operational Excellence Pillar 16 Security Pillar 23 Reliability Pillar 37 Performance Efficiency Pillar 44 Cost Optimization Pillar 53 Conclusion 59 Contributors 59 Document Revisions 59 ArchivedAbstract This whitepaper describes the AWS IoT Lens for the AWS Well Architected Framework which enables customers to review and improve their cloud based architectures and better understand the business impact of their design decisions The document describes general design principles as well as specifi c best practices and guidance for the five pillars of the Well Architected Framework ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 1 Introduction The AWS Well Architected Framework helps you understand the pros and cons of the decisions you make when building systems on AWS Using the Framework allows you to learn architectural best practices for designing and operating reliable secure efficient and cost effective systems in the cloud The Framework provides a way for you to consistently measure your architectures against best practices and identify areas for improvement We believe that having wellarchitected systems greatly increases the likelihood of business success In this “Lens” we focus on how to design deploy and architect your IoT workloads (Internet of Things) in the AWS Cloud To implement a wellarchitected IoT application you must follow wellarchitected principles starting from the procurement of connected physical assets (things) to the eventual decommissioning of those same assets in a secure reliable and automated fashion In addition to AWS Cloud best practices this document also articulates the i mpact considerations and recommendations for connecting physical assets to the internet This document only cover s IoT specific workload details from the Well Architected Framework We recommend that you read the AWS Well Architected Framework whitepaper and consider the best practices and questions for other lenses This document is intended for those in technology roles such as chief technology officers (CTOs) architects developers embedded engineers and operations team members After reading this document you will understand AWS best practices and 
strategies for IoT applications Definitions The AWS Well Architected Framework is based on five pillar s — operational excellence security reliability performance efficiency and cost optimization When architecting technology solutions you must make informed tradeoffs between pillars based upon your business context For IoT workloads AWS provides mul tiple services that allow you to design robust architectures for your applications Internet of Things (IoT) applications are composed of many devices (or things) that securely connect and interact with complementary edge based and cloud based components t o deliver business value IoT applications gather process analyze and act on data generated by connected devices This section presents an overview of the AWS components that are ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 2 used throughout this document to architect IoT workloads There are seven distinct logical layers to consider when building an IoT workload: • Design and manufacturing layer • Edge layer • Provisioning layer • Communications layer • Ingestion layer • Analytics layer • Application layer Design and Manufacturing Layer The design and manufacturi ng layer consists of product conceptualization business and technical requirements gathering prototyping module and product layout and design component sourcing and manufacturing Decisions made in each phase impact the next logical layers of the IoT workload described below For example some IoT device creators prefer to have a common firmware image burned and tested by the contract manufacturer This decision will partly determine what steps are required during the Provisioning layer You may go a step further and burn a unique certificate and priva cy key to each device during manufacturing This decision can impact the Communications layer since the type of credential can impact the subsequent selection of network protocols If the credential never expires it can simplify the Communications and Provisioning layers at the possible expense of increased data loss risk due to compromise of the issuing Certificate Authority Edge Layer The edge layer of your Io T workload consi sts o f the physical hardware of your devices the embedded operating system that manages the processes on your device and the device firmware which is the software and instructions programmed onto your IoT devices The edge is responsible for sensing and acti ng on other peripheral devices Common use cases are reading sensors connected to an edge device or changing the state of a peripheral based on a user action suc h as turning on a light when a motion sensor is activated ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 3 AWS IoT Device SDKs simplify usin g AWS IoT Core with your devices and applications with an API tailored to your programming language or platform Amazon FreeRTOS is a real time operating system for microcontrollers that lets you program small low power edge devices while leveraging mem oryefficient secure embedded libraries AWS IoT Greengrass is a software component that extends the Linux Operations System of your IoT devices AWS IoT Greengrass allows you to run MQTT local routing between devices data caching AWS IoT shadow sync local AWS Lambda functions and machine learning algorithms Provisioning Layer The provisioning layer of your IoT workloads consists of the Public Key Infrastructure (PKI) used to create unique device identities and the application workflow that provides configuration data to the 
device The provisioning layer is also involved with o ngoing maintenance and eventual decommissioning of devices over time IoT applications need a robust and automated provisioning layer so that devices can be added and managed by your IoT application in a frictionless way When you provision IoT devices you must install a unique cryptographic credential onto them By using X509 certificates you can implement a provisioning layer that securely creates a trusted identity for your device that can be used to authenticate and authorize against your communicati on layer X509 certificates are issued by a trusted entity called a certificate authority (CA) While X509 certificates do consume resources on constrained devices due to memory and processing requirements they are an ideal identity mechanism due to the ir operational scalability and widespread support by standard network protocols AWS Certificate Manager Private CA helps you automate the process of managing the lifecycle of private certificates for IoT devices using APIs Private certificates such as X509 certificates provide a secure way to give a device a long term identity that can be created during provisioning and used to identify and authorize device permissions against your IoT application AWS IoT Just In Time Registration (JITR) enables you t o programmatically register devices to be used with managed IoT platforms such as AWS IoT Core With Just In Time Registration when devices are first connected to your AWS IoT Core endpoint you can automatically trigger a workflow that can determine the validity of the certificate identity and determine what permissions it should be granted ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 4 Communication Layer The Communication layer handles the connectivity message routing among remote devices and routing between devices and the cloud The Communicati on layer lets you establish how IoT messages are sent and received by devices and how devices represent and store their physical state in the cloud AWS IoT Core helps you build IoT applications by providing a managed message broker that supports the use of the MQTT protocol to publish and subscribe IoT messages between devices The AWS IoT Device Registry helps you manage and operate your things A thing is a representation of a specific device or logical entity in the cloud Things can also have custom defined static attributes that help you identify categorize and search for your assets once deployed With the AWS IoT Device Shadow Service you can create a data store that contains the current state of a particular device The Device Shadow Service m aintains a virtual representation of each of your devices you connect to AWS IoT as a distinct device shadow Each device's shadow is uniquely identified by the name of the corresponding thing With Amazon API Gateway your IoT applications can make HTTP r equests to control your IoT devices IoT applications require API interfaces for internal systems such as dashboards for remote technicians and external systems such as a home consumer mobile application With Amazon API Gateway you can create common A PI interfaces without provisioning and managing the underlying infrastructure Ingestion Layer A key business driver for IoT is the ability to aggregate all the disparate data streams created by your devices and transmit the data to your IoT application in a secure and reliable manner The ingestion layer plays a key role in collecting device data while decoupling the flow of data with the 
communication between devices With AWS IoT rules engine you can build IoT applications such that your devices can interact with AWS services AWS IoT rules are analyzed and actions are performed based on the MQTT topic stream a message is received on ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 5 Amazon Kinesis is a managed service for streaming data enabling you to get timely insights and react quickly to new i nformation from IoT devices Amazon Kinesis integrates directly with the AWS IoT rules engine creating a seamless way of bridging from a lightweight device protocol of a device using MQTT with your internal IoT applications that use other protocols Similar to Kinesis Amazon Simple Queue Service (Amazon SQS) should be used in your IoT application to decouple the communication layer from your application layer Amazon SQS enables an event driven scalable ingestion queue when your application needs to pro cess IoT applications once where message order is not required Analytics Layer One of the benefits of implementing IoT solutions is the ability to gain deep insights and data about what's happening in the local/edge environment A primary way of realizing contextual insights is by implementing solutions that can process and perform analytics on IoT data Storage Services IoT workloads are often designed to generate large quantities of data Ensure that this discrete data is transmitted processed and consumed securely while being stored durably Amazon S3 is object based storage engineered to store and retrieve any amount of data from anywhere on the internet With Amazon S3 you can build IoT applications that store large amounts of da ta for a variety of purposes: regulatory business evolution metrics longitudinal studies analytics machine learning and organizational enablement Amazon S3 gives you a broad range of flexibility in the way you manage data for not just for cost optimi zation and latency but also for access control and compliance Analytics and Machine Learning Services After your IoT data reaches a central storage location you can begin to unlock the full value of IoT by implementing analytics and machine learning on device behavior With analytics systems you can begin to operationalize improvements in your device firmware as well as your edge and cloud logic by making data driven decisions based on your analysis With analytics and machine learning IoT systems c an implement proactive strategies like predictive maintenance or anomaly detection to improve the efficiencies of the system ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 6 AWS IoT Analytics makes it easy to run sophisticated analytics on volumes on IoT data AWS IoT Analytics manages the underlying I oT data store while you build different materialized views of your data using your own analytical queries or Jupyter notebooks Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL Athena is se rverless so there is no infrastructure to manage and customers pay only for the queries that they run Amazon SageMaker is a fully managed platform that enables you to quickly build train and deploy machine learning models in the cloud and the edge la yer With Amazon SageMaker IoT architectures can develop a model of historical device telemetry in order to infer future behavior Application Layer AWS IoT provides several ways to ease the way cloud native applications consume data generated by IoT devices These connected 
capabilities include features from serverless computing relational databases to create materialized views of your IoT data and management applications to operate inspect secure and manage your IoT operations Management Appli cations The purpose of management applications is to create scalable ways to operate your devices once they are deployed in the field Common operational tasks such as inspecting the connectivity state of a device ensuring device credentials are configure d correctly and querying devices based on their current state must be in place before launch so that your system has the required visibility to troubleshoot applications AWS IoT Device Defender is a fully managed service that audits your device fleets detects abnormal device behavior alerts you to security issues and helps you investigate and mitigate commonly encountered IoT security issues AWS IoT Device Management eases the organizing monitoring and managing of IoT devices at scale At scale cu stomers are managing fleets of devices across multiple physical locations AWS IoT Device Management enables you to group devices for easier management You can also enable real time search indexing against the current state of your devices through Device Management Fleet Indexing Both Device Groups and Fleet Indexing can be used with Over the Air Updates (OTA) when determining which target devices must be updated ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 7 User Applications In addition to managed applications other internal and external systems need different segments of your IoT data for building different applications To support end consumer views business operational dashboards and the other net new applications you build over time you will need several other technologies that can receive the required information from your connectivity and ingestion layer and format them to be used by other systems Database Services – NoSQL and SQL While a data lake can function as a landing zone for all of your unformatted IoT generated data to support all the formatted views on top of your IoT data you need to complement your data lake with structured and semi structured data stores For these purposes you should leverage both NoSQL and SQL databases These types of databases enable you to create diff erent views of your IoT data for distinct end users of your application Amazon DynamoDB is a fast and flexible NoSQL database service for IoT data With IoT applications customers often require flexible data models with reliable performance and automatic scaling of throughput capacity With Amazon Aurora your IoT architecture can store structured data in a performant and cost effective open source database When your data needs to be accessible to other IoT applications for predefined SQL queries relati onal databases provide you another mechanism for decoupling the device stream of the ingestion layer from your eventual business applications which need to act on discrete segments of your data Compute Services Frequently IoT workloads require application code to be executed when the data is generated ingested or consumed/realized Regardless of when compute code needs to be executed serverless compute is a highly cost effective choice Serverless compute can be leveraged from the edge to the core and from core to applications and analytics AWS Lambda allows you to run code without provisioning or managing servers Due to the scale of ingestion for IoT workloads AWS Lambda is an ideal fit for running statel ess event driven 
IoT applications on a managed platform ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 8 General Design Principles The Well Architected Framework identifies the following set of design principles in order to facilitate good design in the cloud with IoT: • Decouple ingestion from process ing: In IoT applications the ingestion layer must be a highly scalable platform that can handle a high rate of streaming device data By decoupling the fast rate of ingestion from the processing portion of your application through the use of queues buffe rs and messaging services your IoT application can make several decisions without impacting devices such as the frequency it processes data or the type of data it is interested in • Design for offline behavior : Due to things like connectivity issues or misconfigured settings devices may go offline for much more extended periods of time than anticipated Design your embedded software to handle extended periods of offline connectivity and create metrics in the cloud to track devices that are not communicat ing on a regular timeframe • Design lean data at the edge and enrich in the cloud : Given the constrained nature of IoT devices the initial device schema will be optimized for storage on the physical device and efficient transmissions from the device to you r IoT application For this reason unformatted device data will often not be enriched with static application information that can be inferred from the cloud For these reasons as data is ingested into your application you should prefer to first enrich the data with human readable attributes deserialize or expand any fields that the device serialized and then format the data in a data store that is tuned to support your applications read requirements • Handle personalization : Devices that connect to th e edge or cloud via Wi Fi must receive the Access Point name and network password as one of the first steps performed when setting up the device This data is usually infeasible to write to the device during manufacturing since it’s sensitive and site spec ific or from the cloud since the device isn’t connected yet These factors frequently make personalization data distinct from the device client certificate and private key which are conceptually upstream and from cloud provided firmware and configuration updates which are conceptually downstream Supporting personalization can impact design and manufacturing since it may mean that the device itself requires a user interface for direct data input or the need to provide a smartphone application to connec t the device to the local network ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 9 • Ensure that devices regularly send status checks : Even if devices are regularly offline for extended periods of time ensure that the device firmware contains application logic that sets a regular interval to send device status information to your IoT application Devices must be active participants in ensuring that your application has the right level of visibility Sending this regularly occurring IoT message ensures that your IoT application gets an updated view of the overall status of a device and can create processes when a device does not communicate within its expected period of time Scenarios This section addresses common scenarios related to IoT applications with a focus on how each scenario impacts the archit ecture of your IoT workload These examples are not exhaustive but they encompass common patterns in IoT We present a 
background on each scenario general considerations for the design of the system and a reference architecture of how the scenarios should be implemented Device Provisioning In IoT device provisioning is comp osed of several sequential steps The most important aspect is that each device must be given a unique identity and then subsequently authenticated by your IoT application using that identity As such the first step to provisioning a device is to install an identity The decisions you make in device design and manufacturing determines i f the device has a production ready firmware image and/or unique client credential by the time it reaches the customer Your decisions determine whether there are additional provisioning time steps that must be performed before a production device identify can be installed Use X509 client certificates in IoT for your applications — they tend to be more secure and easier to manage at scale than static passwords In AWS IoT Core the device is registered using its certificate along with a unique thing ident ifier The registered device is then associated with an IoT policy An IoT policy gives you the ability to create fine grained permissions per device Fine grained permissions ensure that only one device has permissions to interact with its own MQTT topics and messages This registration process ensures that a device is recognized as an IoT asset and that the data it generates can be consumed through AWS IoT to the rest of the AWS ecosystem To provision a device you must enable automatic registration and associate ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 10 a provisioning template or an AWS Lambda function with the initial device provisioning event This registration mechanism relies on the device receiving a unique certificate during provisioning (which can happen either during or after manufacturi ng) which is used to authenticate to the IoT application in this case AWS IoT One advantage of this approach is that the device can be transferred to another entity and be reprovisioned allowing the registration process to be repeated with the new own er’s AWS IoT account details Figure 1: Registration Flow 1 Set up the manufacturing device identifier in a database 2 The device connects to API Gateway and requests registration from the CPM The request is validated 3 Lambda requests X509 certificates f rom your Private Certificate Authority (CA) 4 Your provisioning system registered your CA with AWS IoT Core 5 API Gateway passes the device credentials to the device 6 The device initiates the registration workflow with AWS IoT Core ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 11 Device Telemetry There are many uses cases (such as industrial IoT) where the value for IoT is in collecting telemetry on how a machine is performing For example this data can be used to enable predictive maintenance preventing costly unforeseen equipment failures Telemetry must be collected from the machine and uploaded to an IoT application Another benefit of sending telemetry is the ability of your cloud applications to use this data for analysis and to interpret optimizations that can be made to your firmware over time Telemetry data is read only that is collected and transmitted to the IoT application Since telemetry data is passive ensure the MQTT topic for telemetry messages does not overlap with any topics that relate to IoT commands For example a telemetry topi c could be data/device/sensortype where any MQTT topic that begins with 
“data” is considered a telemetry topic From a logical perspective we have defined several scenarios for capturing and interacting with device data telemetry Figure 2: Options for capturing telemetry ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 12 1 One publishing topic and one subscriber For example a smart light bulb that publishes its brightness level to a single topic where only a single application can subscribe 2 One publishing topic with variables and one subscriber For example a collection of smart bulbs publishing their brightness on similar but unique topics Each subscriber can listen to a unique publish message 3 Single publishing topic and multiple subscribers In this case a light sensor that publi shes its values to a topic that all the light bulbs in a house subscribe to 4 Multiple publishing topics and a single subscriber For example a collection of light bulbs with motion sensors The smart home system subscribes to all of the light bulb topics inclusive of motion sensors and creates a composite view of brightness and motion sensor data Device Commands When you are building an IoT application you need the ability to interact with your device through commands remotely A n example in the indus trial vertical is to use remote commands to request specific data from a piece of equipment A n example usage in the smart home vertical is to use remote commands to schedule an alarm system remotely With AWS IoT Core you can implement commands using MQT T topics or the AWS IoT Device Shadow to send commands to a device and receive an acknowledgment when a device has executed the command Use the Device Shadow over MQTT topics for implementing commands The Device Shadow has several benefits over using standard MQTT topics such as a clientToken to track the origin of a request version numbers for managing conflict resolution and the abilit y to store commands in the cloud in the event that a device is offline and unable to receive the command when it is issued The device’s shadow is commonly used in cases where a command needs to be persisted in the cloud even if the device is currently not online When the device is back online the device requests the latest shadow information and executes the command ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 13 Figure 3: Using a message broker to send commands to a device AWS IoT Device Shadow Service IoT solutions that use the Device Shadow ser vice in AWS IoT Core manage command requests in a reliable scalable and straightforward fashion The Device Shadow service follows a prescriptive approach to both the management of device related state and how the state changes are communicated This app roach describes how the Device Shadows service uses a JSON document to store a device's current state desired future state and the difference between current and desired states Figure 4: Using Device Shadow with devices 1 A device reports initial devi ce state by publishing that state as a message to the update topic deviceID/shadow/update 2 The Device Shadow reads the message from the topic and records the device state in a persistent data store ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 14 3 A device subscribes to the delta messaging topic deviceId/shadow/update/delta upon which device related state change messages will arrive 4 A component of the solution publishes a desired state message to the topic deviceID/shadow/update and the Device Shadow tracking this device records the desired 
device state in a persistent data store 5 The Device Shadow publishes a delta message to the topic deviceId/shadow/update/delta and the Message Broker sends the message to the device 6 A device receives the delta message and performs the desired state changes 7 A device publishes an acknowledgment message reflecting the new state to the update topic deviceID/shadow/update and the Device Shadow tracking this device records the new state in a persistent data store 8 The Device Shadow publishes a message to the deviceId/shad ow/update/accepted topic 9 A component of the solution can now request the updated state from the Device Shadow Firmware Updates All IoT solutions must allow device firmware updates Supporting firmware upgrades without human intervention is critical for s ecurity scalability and delivering new capabilities AWS IoT Device Management provides a secure and easy way for you to manage IoT deployments including executing and tracking the status of firmware updates AWS IoT Device Management uses the MQTT protocol with AWS IoT message broker and AWS IoT Jobs to send firmware update commands to devices as well as to receive the status of those firmware updates over time An IoT solution must implement firmware updates using AWS IoT Jobs shown in the followi ng diagram to deliver this functionality ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 15 Figure 5: Updating fir mware on devices 1 A device subscribes to the IoT job notification topic deviceId/jobs/notify next upon which IoT job notification messages will arrive 2 A device publishes a message to deviceId/jobs/start next to start the next job and get the next job its job document and other details including any state saved in statusDetails 3 The AWS IoT Jobs service retrieves the next job document for the specific device and sends this document on the subscribed topic deviceId/jobs/start next/accepted 4 A device performs the actions specified by the job document using the deviceId/jobs/jobId/update MQTT topic to report on the progress of the job 5 During the upgrade process a device downloads firmware using a presigned URL for Amazon S3 Use code signing to sign the firmware when uploading to Amazon S3 By code signing your firmware the end device can verify the authenticity of the firmware before installing Amazon FreeRTOS devices can downloa d the firmware image directly over MQTT to eliminate the need for a separate HTTPS connection 6 The device publishes an update status message to the job topic deviceId/jobs/jobId/update reporting success or failure 7 Because this job's execution status has c hanged to final state the next IoT job available for execution (if any) will change ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 16 The Pillars of the Well Architected Framework This section describes each of the pillars and includes definitions best practices questions considerations and essenti al AWS services that are relevant when architecting solutions for AWS IoT Operational Excellence Pillar The Operational Excellence pillar includes operational practices and procedures used to manage production workloads Operational excellence comprises how planned changes are executed as well as responses to unexpected operational events Change execution and responses should be automated All processes and procedures of operational excellence must be documented tested and regularly reviewed Design Principles In addition to the overall Well Architected Framework operational excellence design 
principles there are five design principles for operational excellence for IoT in the cloud: • Plan for device provisioning : Design your device provisioning proce ss to create your initial device identity in a secure location Implement a public key infrastructure (PKI) that is responsible for distributing unique certificates to IoT devices As described above selection of crypto hardware with a pre generated priva te key and certificate eliminates the operational cost of running a PKI Otherwise PKI can be done offline with a Hardware Security Module (HSM) during the manufacturing process or during device bootstrapping Use technologies that can manage the Certifi cate Authority (CA) and HSM in the cloud ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 17 • Implement device bootstrapping : Devices that support personalization by a technician (in the industrial vertical) or user (in the consumer vertical) can also undergo provisioning For example a smartphone applica tion that interacts with the device over Bluetooth LE and with the cloud over Wi Fi You must d esign the ability for devices to programmatically update their configuration information using a globally distributed bootstrap API A bootstrapping design ensur es that you can programmatically send the device new configuration settings through the cloud These changes should include settings such as which IoT endpoint to communicate with how frequently to send an overall status for the device and any updated se curity settings such as server certificates The process of bootstrapping goes beyond initial provisioning and plays a critical role in device operations by providing a programmatic way to update device configuration through the cloud • Document device com munication patterns : In an IoT application device behavior is documented by hand at the hardware level In the cloud an operations team must formulate how the behavior of a device will scale once deployed to a fleet of devices A cloud engineer should re view the device communication patterns and extrapolate the total expected inbound and outbound traffic of device data and determine the expected infrastructure necessary in the cloud to support the entire fleet of devices During operational planning thes e patterns should be measured using device and cloud side metrics to ensure that expected usage patterns are met in the system • Implement over the air (OTA) updates : In order to benefit from long term investments in hardware you must be able to continuously update the firmware on the devices with new capabilities In the cloud you can apply a robust firmware update process that allows you to target specific devices for firmware updates roll out changes over time track success and failures of updates and have the ability to roll back or put a stop to firmware changes based on KPIs • Implement functional testing on physical assets : IoT device hardware and firmware must undergo rigorous testing before being deployed in the field Acceptance and functional testing are critical for your path to production The goal of functional testing is to run your hardware components embedded firmware and device application software through rigorous testing scenarios such as intermittent or reduced connectivity or failure of peripheral sensors while profiling the performance of the hardware The tests ensure that your IoT device will perform as expected when deployed ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 18 Definition There are three best 
practice areas for operational excellence in the cloud: 1 Preparation 2 Operation 3 Evolution In addition to what is covered by the Well Architected Framework concerning process runbooks and game days there are specific areas you should review to drive operational excellence within IoT applications Best Practices Preparation For IoT applications the need to procure provision test and deploy hardware in various environments means that the preparati on for operational excellence must be expanded to cover aspects of your deployment that will primarily run on physical devices and will not run in the cloud Operational metrics must be defined to measure and improve business outcomes and then determine if devices should generate and send any of those metrics to your IoT application You also must plan for operational excellence by creating a streamlined process of functional testing that allows you to simulate how devices may behave in their various enviro nments It is essential that you ask how to ensure that your IoT workloads are resilient to failures how devices can self recover from issues without human intervention and how your cloud based IoT application will scale to meet the needs of an ever increasing load of connected hardware When using an IoT platform you have the opportunity to use additional components/tools for handling IoT operations These tools include services that allow you to monitor and inspect device behavior capture connectivi ty metrics provision devices using unique identities and perform long term analysis on top of device data ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 19 IOTOPS 1 What factors drive your operational priorities? IOTOPS 2 How do you ensure that you are ready to support the operations of devices of your IoT workload? IOTOPS 3 How are you ensuring that newly provisioned devices have the required operational prerequisites? 
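To make the provisioning prerequisites above concrete, the following minimal sketch registers a single device identity in AWS IoT Core: it creates a certificate, creates a thing entry, and attaches an existing policy. The thing and policy names are hypothetical and the policy is assumed to already exist; at fleet scale you would typically rely on just-in-time provisioning or registration rather than calling these APIs per device, as discussed next.

import boto3

iot = boto3.client("iot")

# Create an active X.509 certificate and key pair for the device.
cert = iot.create_keys_and_certificate(setAsActive=True)

# Register the device in the thing registry.
iot.create_thing(thingName="sensor-0001")   # hypothetical thing name

# Bind the certificate to the thing and scope its permissions
# with a pre-existing, least-privilege IoT policy.
iot.attach_thing_principal(
    thingName="sensor-0001",
    principal=cert["certificateArn"],
)
iot.attach_policy(
    policyName="sensor-least-privilege",     # hypothetical policy name
    target=cert["certificateArn"],
)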
Logical security for IoT and data centers is similar in that both involve predominantly machine tomachine authentication However they differ in that IoT devices are frequently deployed to environments that cannot be assumed to be physically secure IoT applications also commonly require sensitive data to traverse the internet Due to these considerations it is vital for you to have an architecture that determines how devices will securely gain an identity continuously prove their identity be seeded wi th the appropriate level of metadata be organized and categorized for monitoring and enabled with the right set of permissions For successful and scalable IoT applications the management processes should be automated data driven and based on previou s current and expected device behavior IoT applications must support incremental rollout and rollback strategies By having this as part of the operational efficiency plan you will be equipped to launch a fault tolerant efficient IoT application In AWS IoT you can use multiple features to provision your individual device identities signed by your CA to the cloud This path involves provisioning devices with identities and then using just intimeprovisioning (JITP) just intimeregistration (JITR) or Bring Your Own Certificate (BYOC) to securely register your device certificates to the cloud Using AWS services including Route 53 Amazon API Gateway Lambda and DynamoDB will create a simple API interface to extend the provisioning process with device bootstrapping Operate In IoT operational health goes beyond the operational health of the cloud application and extends to the ability to measure monitor troubleshoot and remediate devices that are part of your application but are remotely deplo yed in locations that may be difficult or impossible to troubleshoot locally This requirement of remote operations must be ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 20 considered at design and implementation time in order to ensure your ability to inspect analyze and act on metrics sent from these remote devices In IoT you must establish the right baseline metrics of behavior for your devices be able to aggregate and infer issues that are occurring across devices and have a robust remediation plan that is not only executed in the cloud but al so part of your device firmware You must implement a variety of device simulation canaries that continue to test common device interactions directly against your production system Device canaries assist in narrowing down the potential areas to investigat e when operational metrics are not met Device canaries can be used to raise preemptive alarms when the canary metrics fall below your expected SLA In AWS you can create an AWS IoT thing for each physical device in the device registry of AWS IoT Core By creating a thing in the registry you can associate metadata to devices group devices and configure security permissions for devices An AWS IoT thing should be used to store static data in the thing registry while storing dynamic device data in the thing’s associated device shadow A device's shadow is a JSON document that is used to store and retrieve state information for a device Along with creating a virtual representation of your device in the device registry as part of the operational proces s you must create thing types that encapsulate similar static attributes that define your IoT devices A thing type is analogous to the product classification for a device The combination of thing thing type and 
device shadow can act as your first entr y point for storing important metadata that will be used for IoT operations In AWS IoT thing groups allow you to manage devices by category Groups can also contain other groups — allowing you to build hierarchies With organizational structure in your IoT application you can quickly identify and act on related devices by device group Leveraging the cloud allows you to automate the addition or removal of devices from groups based on your business logic and the lifecycle of your devices In IoT your d evices create telemetry or diagnostic messages that are not stored in the registry or the device’s shadow Instead these messages are delivered to AWS IoT using a number of MQTT topics To make this data actionable use the AWS IoT rules engine to route er ror messages to your automated remediation process and add diagnostic information to IoT messages An example of how you would route a message that contained an error status code to a custom workflow is below The rules engine inspects the status of a mess age and if it is an error it starts the Step Function workflow to remediate the device based off the error message detail payload ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 21 { "sql": "SELECT * FROM 'command/iot/response WHERE code = 'eror'" "ruleDisabled": false "description": "Error Handling Workflow" "awsIotSqlVersion": "2016 0323" "actions": [{ "stepFunctions": { "executionNamePrefix": "errorExecution" "stateMachineName": "errorStateMachine" "roleArn": "arn:aws:iam::123456789012:role/aws_iot_step_functions" } }] } To support operational insights to your cloud application generate dashboards for all metrics collected from the device broker of AWS IoT Core These metrics are available through CloudWatch Metrics In addition CloudWatch Logs contain information such as total successful messages inbound messages outbound connectivity success and errors To augment your production device deployments implement IoT simula tions on Amazon Elastic Compute Cloud (Amazon EC2) as device canaries across several AWS Regions These device canaries are responsible for mirroring several of your business use cases such as simulating error conditions like long running transactions se nding telemetry and implementing control operations The device simulation framework must output extensive metrics including but not limited to successes errors latency and device ordering and then transmit all the metrics to your operations system In addition to custom dashboards AWS IoT provides fleet level and device level insights driven from the Thing Registry and Device Shadow service through search capabilities such as AWS IoT Fleet Indexing The ability to search across your fleet eases the operational overhead of diagnosing IoT issues whether they occur at the device level or fleet wide level ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 22 Evolve IOTOPS 4 How do you evolve your IoT application with minimum impact to downstream IoT devices? 
IoT solutions frequently involve a combinatio n of low power devices remote locations low bandwidth and intermittent network connectivity Each of those factors poses communications challenges including upgrading firmware Therefore it's important for you to incorporate and implement an IoT updat e process that minimizes the impact to downstream devices and operations In addition to reducing downstream impact devices must be resilient to common challenges that exist in local environments such as intermittent network connectivity and power loss Use a combination of grouping IoT devices for deployment and staggering firmware upgrades over a period of time Monitor the behavior of devices as they are updated in the field and proceed only after a percentage of devices have upgraded successfully Use AWS IoT Device Management for creating deployment groups of devices and delivering over the air updates (OTA) to specific device groups During upgrades continue to collect all of the CloudWatch Logs telemetry and IoT device job messages and combine that information with the KPIs used to measure overall application health and the performance of any long running canaries Before and after firmware updates perform a retrospective analysis of operations metrics with participants spanning the business t o determine opportunities and methods for improvement Services like AWS IoT Analytics and AWS IoT Device Defender are used to track anomalies in overall device behavior and to measure deviations in performance that may indicate an issue in the updated fi rmware Key AWS Services Several services can be used to drive operational excellence for your IoT application The AWS Device Qualification Program helps you select hardware components that have been designed and tested for AWS IoT interoperability Quali fied hardware can get you to market faster and reduce operational friction AWS IoT Core offers features used to manage the initial onboarding of a device AWS IoT Device Management reduces the operational overhead of performing fleet wide operations such as device grouping and searching In addition Amazon CloudWatch is used for monitoring IoT metrics collecting logs generating alerts and triggering responses Other services and features that support the three areas of operational excellence are as fo llows: ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 23 • Preparation : AWS IoT Core supports provisioning and onboarding your devices in the field including registering the device identity using just intime provisioning just intime registration or Bring Your Own Certificate Devices can then be associated with their metadata and dev ice state using the device registry and the Device Shadow • Operations : AWS IoT thing groups and Fleet Indexing allow you to quickly develop an organizational structure for your devices and search across the current metadata of your devices to perform recurring device operations Amazon CloudWatch allows you to monitor the operational health of your devices and your application • Responses : AWS IoT Jobs enables you to proactively push updates to one or more devices such as firmware updates or device configuration AWS IoT rules engine allows you to inspect IoT messages as they are received by AWS IoT Core and immedi ately respond to the data at the most granular level AWS IoT Analytics and AWS IoT Device Defender enable you to proactively trigger notifications or remediation based on real time analysis with AWS IoT Analytics and real time security and data 
threshol ds with Device Defender Security Pillar The Security pillar includes the ability to protect information systems and assets while delivering business value Design Principles In addition to the overall Well Architected Framework security design principle s there are specific design principles for IoT security: • Manage device security lifecycle holistically : Data security starts at the design phase and ends with the retirement and destruction of the hardware and data It is important to take an end toend approach to the security lifecycle of your IoT solution in order to maintain your competitive advantage and retain customer trust • Ensure least privilege permissions : Devices should all have fine grained access permissions that limit which topics a device can use for communication By restricting access one compromised device will have fewer opportunities to impact any other devices ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 24 • Secure device credentials at rest : Devices should securely store credential information at rest using mechanisms such as a dedicated crypto element or secure flash • Implement device identity lifecycle management : Devices maintain a device identity from creation through end of life A well designed identity system will keep track of a device’s identity track the validity of t he identity and proactively extend or revoke IoT permissions over time • Take a holistic view of data security : IoT deployments involving a large number of remotely deployed devices present a significant attack surface for data theft and privacy loss Use a model such as the Open Trusted Technology Provider Standard to systemically review your supply chain and solution design for risk and then apply appropriate mitigatio ns Definition There are five best practice areas for security in the cloud: 1 Identity and access management (IAM) 2 Detective controls 3 Infrastructure protection 4 Data protection 5 Incident response Infrastructure and data protection encompass the IoT device har dware as well as the end to end solution IoT implementations require expanding your security model to ensure that devices implement hardware security best practices and your IoT applications follow security best practices for factors such as adequately s coped device permissions and detective controls The security pillar focuses on protecting information and systems Key topics include confidentiality and integrity of data identifying and managing who can do what with privilege management protecting sy stems and establishing controls to detect security events ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 25 Best Practices Identity and Access Management (IAM) IoT devices are often a target because they are provisioned with a trusted identity may store or have access to strategic customer or business data (such as the firmware itself) may be remotely accessible over the internet and may be vulnerable to direct physical tampering To provide protection against unauthorized access you need to always begin with implementing security at the device leve l From a hardware perspective there are several mechanisms that you can implement to reduce the attack surface of tampering with sensitive information on the device such as: • Hardware crypto modules • Software supported solutions including secure flash • Physical function modules that cannot be cloned • Uptodate cryptographic libraries and standards including PKCS #11 and TLS 12 To secure device hardware you 
implement solutions such that private keys and sensitive identity are unique to and only stored on the device in a secure hardware location Implement hardware or software based modules that securely store and manage access to the private keys used to communicate with AWS IoT In addition to hardware security IoT devices must be given a valid identi ty which will be used for authentication and authorization in your IoT application During the lifetime of a device you will need to be able to manage certificate renewal and revocation To handle any changes to certificate information on a device you must first have the ability to update a device in the field The ability to perform firmware updates on hardware is a vital underpinning to a well architected IoT application Through OTA updates securely rotate device certificates before expiry including certificate authorities IOTSEC 1 How do you securely store device certificates and private keys for devices? IOTSEC 2 How do you associate AWS IoT identities with your devices? ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 26 For example with AWS IoT you first provision X509 certificate and then separately create the IoT permissions for connecting to IoT publishing and subscribing to messages and receiving updates This separation of identity and permissions provides flexibility in managing your device security During the c onfiguration of permissions you can ensure that any device has the right level of identity as well as the right level of access control by creating an IoT policy that restricts access to MQTT actions for each device Ensure that each device has its own un ique X509 certificate in AWS IoT and that devices should never share certificates (one certificate for one device rule) In addition to using a single certificate per device when using AWS IoT each device must have its own unique thing in the IoT regist ry and the thing name is used as the basis for the MQTT ClientID for MQTT connect By creating this association where a single certificate is paired with its own thing in AWS IoT Core you can ensure that one compromised certificate cannot inadvertently assume an identity of another device It also alleviates troubleshooting and remediation when the MQTT ClientID and the thing name match since you can correlate any ClientID log message to the thing that is associated with that particular piece of communic ation To support device identity updates use AWS IoT Jobs which is a managed platform for distributing OTA communication and binaries to your devices AWS IoT Jobs is used to define a set of remote operations that are sent to and executed on one or mor e devices connected to AWS IoT AWS IoT Jobs by default integrate several best practices including mutual authentication and authorization device tracking of update progress and fleet wide wide metrics for a given update Enable AWS IoT Device Defender audits to track device configuration device policies and checking for expiring certificates in an automated fashion For example Device Defender can run audits on a scheduled basis and trigger a notification for expiring certificates With the combinati on of receiving notifications of any revoked certificates or pending expiry certificates you can automatically schedule an OTA that can proactively rotate the certificate IOTSEC 3 How do you authenticate and authorize user access to your IoT application ? 
ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 27 Although many applications focus on the thing aspect of IoT in almost all verticals of IoT there is also a human component that needs the ability to communicate to and receive notifications from devices For example consumer IoT generally requires use rs to onboard their devices by associating them with an online account Industrial IoT typically entails the ability to analyze hardware telemetry in near real time In either case it's essential to determine how your application will identify authentica te and authorize users that require the ability to interact with particular devices Controlling user access to your IoT assets begins with identity Your IoT application must have in place a store (typically a database) that keeps track of a user's iden tity and also how a user authenticates using that identity The identity store may include additional user attributes that can be used at authorization time (for example user group membership) IoT device telemetry data is an example of a securable asset By treating it as such you can control the access each user has and audit individual user interactions When using AWS to authenticate and authorize IoT application users you have several options to implement your identity store and how that store main tains user attributes For your own applications use Amazon Cognito for your identity store Amazon Cognito provides a standard mechanism to express identity and to authenticate users in a way that can be directly consumed by your app and other AWS serv ices in order to make authorization decisions When using AWS IoT you can choose from several identity and authorization services including Amazon Cognito Identity Pools AWS IoT policies and AWS IoT custom authorizer For implementing the decoupled vie w of telemetry for your users use a mobile service such as AWS AppSync or Amazon API Gateway With both of these AWS services you can create an abstraction layer that decouples your IoT data stream from your user’s device data notification stream By cr eating a separate view of your data for your external users in an intermediary datastore for example Amazon DynamoDB or Amazon ElasticSearch Service you can use AWS AppSync to receive user specific notifications based only on the allowed data in your in termediary store In addition to using external data stores with AWS AppSync you can define user specific notification topics that can be used to push specific views of your IoT data to your external users If an external user needs to communicate directl y to an AWS IoT endpoint ensure that the user identity is either an authorized Amazon Cognito Federated Identity tha t is associated to an authorized Amazon Cognito role and a fine grained IoT policy or uses AWS IoT custom authorizer where the authorizat ion is managed by your own authorization service With either approach associate a fine grained policy to each user ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 28 that limits what the user can connect as publish to subscribe from and receive messages from concerning MQTT communication IOTSEC 4 H ow do you ensure that least privilege is applied to principals that communicates to your IoT application? 
After registering a device and establishing its identity it may be necessary to seed additional device information needed for monitoring metrics te lemetry or command and control Each resource requires its own assignment of access control rules By reducing the actions that a device or user can take against your application and ensuring that each resource is secured separately you limit the impact that can occur if any single identity or resource is used inadvertently In AWS IoT create fine grained permissions by using a consistent set of naming conventions in the IoT registry The first convention is to use the same unique identifier for a devic e as the MQTT ClientID and AWS IoT thing name By using the same unique identifier in all these locations you can easily create an initial set of IoT permissions that can apply to all of your devices using AWS IoT Thing Policy variables The second naming convention is to embed the unique identifier of the device into the device certificate Continuing with this approach store the unique identifier as the Com monName in the subject name of the certificate in order to use Certificate Policy Variables to bind IoT permissions to each unique device credential By using policy variables you can create a few IoT policies that can be applied to all of your device certificates while maintaining least privilege For example the IoT policy below would restrict any device to connect only using the unique identifier of the de vice (which is stored in the common name) as its MQTT ClientID and only if the certificate is attached to the device This policy also restricts a device to only publish on its individual shadow: { "Version": "2012 1017" "Statement": [{ "Effect": "Allow" "Action": ["iot:Connect"] "Resource": ["arn:aws:iot:us east 1:123456789012:client/${iot:CertificateSubjectCommonName}"] "Condition":{ "Bool":{ ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 29 "iot:ConnectionThingIsAttached": ["true"] } } } { "Effect":"Allow" "Action":["iot:Publish"] "Resource":["arn:aws:iot:us east 1:123456789012:topic/$aws/things/${iot:ConnectionThingThingName}/ shadow/update"] } ] } Attach your device identity (certificate or Amazon Cognito Federated Identity ) to the thing in the AWS IoT registry using AttachThingPrincipal Although these scenarios apply to a single device communicating with its own set of topics and device shadows there are scenarios where a single device needs to act upon the state or topics of other devices For example you may be operating an edge appli ance in an industrial setting creating a home gateway to manage coordinating automation in the home or allowing a user to gain access to a different set of devices based on their specific role For these use cases leverage a known entity such as a group identifier or the identity of the edge gateway as the prefix for all of the devices that communicate to the gateway By making all of the endpoint devices use the same prefix you can make use of wildcards "*" in your IoT policies This approach balanc es MQTT topic security with manageability { "Version": "2012 1017" "Statement": [ { "Effect":"Allow" "Action":["iot:Publish"] "Resource":["arn:aws:iot:us east 1:123456789012:topic/$aws/things/edgegateway123 */shadow/update"] } ] } ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 30 In the preceding example the IoT operator would associate the policy with the edge gateway with the identifier edgegateway123 The permissions in this policy would then allow the edge appliance 
to publish to other Device Shado ws that are managed by the edge gateway This is accomplished by enforcing that any connected devices to the gateway all have a thing name that is prefixed with the identifier of the gateway For example a downstream motion sensor would have the identifie r edgegateway123 motionsensor1 and therefore can now be managed by the edge gateway while still restricting permissions Detective Controls Due to the scale of data metrics and logs in IoT applications aggregating and monitoring is an essential part of a well architected IoT application Unauthorized users will probe for bugs in your IoT application and will look to take advantage of individual devices to gain further access into other devices applications and cloud resources In order to operate an entire IoT solution you will need to manage detective controls not only for an individual device but also for the entire fleet of devices in your application You will need to enable several levels of logging monitoring and alerting to detect issues at the device level as well as the fleet wide level In a well architected IoT application each layer of the IoT application generates metrics and logs At a minimum your architecture should have metrics and logs related to the physical device the connect ivity behavior of your device message input and output rates per device provisioning activities authorization attempts and internal routing events of device data from one application to another IOTSEC 5: How are you analyzing application logs and met rics across cloud and devices? In AWS IoT you can implement detective controls using AWS IoT Device Defender CloudWatch Logs and CloudWatch Metrics AWS IoT Device Defender processes logs and metrics related to device behavior and connectivity behavior s of your devices AWS IoT Device Defender also lets you continuously monitor security metrics from devices and AWS IoT Core for deviations from what you have defined as appropriate behavior for each device Set a default set of thresholds when device beh avior or connectivity behavior deviates from normal activity Augment Device Defender metrics with the Amazon CloudWatch Metrics Amazon CloudWatch Logs generated by AWS IoT Core and Amazon GuardDuty These ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 31 service level logs provide important insight in to activity about not only activities related to AWS IoT Platform services and AWS IoT Core protocol usage but also provide insight into the downstream applications running in AWS that are critical components of your end to end IoT application All Amazon CloudWatch Logs should be analyzed centrally to correlate log information across all sources IOTSEC 6: How are you managing invalid identities in your IoT application? 
Security identities are the focal point of device trust and authorization to your IoT application It's vital to be able to manage invalid identities such as certificates centrally An invalid certificate can be revoked expired or made inactive As part of a wellarchitected application you must have a process for capturing all invali d certificates and an automated response based on the state of the certificate trigger In addition to the ability of capturing the events of an invalid certificate your devices should also have a secondary means of establishing secure communications to your IoT platform By enabling a bootstrapping pattern as described previously where t wo forms of identity are used for a device you can create a reliable fallback mechanism for detecting invalid certificates and providing a mechanism for a device or an administrator to establish trusted secure communication for remediation A wellarchitected IoT application establishes a certificate revocation list (CRL) that track s all revoked device certificates or certificate authorities (CAs) Use your own trusted CA for on boarding devices and synchronize your CRL on a regular basis to your IoT appl ication Your IoT application must reject connections from identities that are no longer valid With AWS you do not need to manage your entire PKI on premises Use AWS Certificate Manager (ACM) Private Certificate Authority to host your CA in the cloud O r you can work with an APN Partner to add preconfigured secure elements to your IoT device hardware specification ACM has the capability to export revoked cert ificate s to a file in an S3 bucket That same file can be used to programmatically revoke certi ficates against AWS IoT Core Another state for certificates is to be near their expiry date but still valid The client certificate must be valid for at least the service lifetime of the device It ’s up to your IoT application to keep track of devices near their expiry date and perfor m an OTA process to update their certificate to a new one with a later expiry along with logging information about why the certificate rotation was required for audit purposes ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 32 Enable AWS IoT Device Defender audits related to the certificate and CA expiry Device Defender produce s an audit log of certificates that are set to expire within 30 days Use this list to programmatically update devices before certificates are no longer valid You may also choose to build your own expiry store to manage certificat e expiry dates and programmatically query identify and trigger an OTA for device certificate replacement or renewal Infrastructure Protection Design time is the ideal phase for considering security requirements for infrastructure protection across the e ntire lifecycle of your device and solution By considering your devices as an extension of your infrastructure you can take into account how the entire device lifecycle impacts your design for infrastructure protection From a cost standpoint changes ma de in the design phase are less expensive than changes made later From an effectiveness standpoint data loss mitigations implemented at design time are likely to be more comprehensive than mitigations retrofitted Therefore planning the device and solu tion security lifecycle at design time reduces business risk and provides an opportunity to perform upfront infrastructure security analysis before launch One way to approach the device security lifecycle is through supply chain analysis 
For example eve n a modestly sized IoT device manufacturer or solution integrator has a large number of suppliers that make up its supply chain whether directly or indirectly To maximize solution lifetime and reliability ensure that you are receiving authentic componen ts Software is also part of the supply chain The production firmware image for a device include s drivers and libraries from many sources including silicon partners open source aggregation sites such as GitHub and SourceForge previous first party produ cts and new code developed by internal engineering To understand the downstream maintenance and support for first party firmware and software you must analyze each software provider in the supply chain to determine if it offers support and how it delivers patches This analysis is especially important for connected devices: software bugs are inevitable and represent a risk to your customers because a vulnerable device can be exploited remotely Your IoT device manufacturer or solution engineering team must learn about and patch bugs in a timely manner to reduce these risks IOTSEC 7 How are you vetting your suppliers contract manufacturers and other outsource relationships? ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 33 IOTSEC 8 How are you planning the security lifecycle of your IoT devices? IOTSEC 9 How are you ensuring timely notification of security bugs in your third party firmware and software components? Although there is no cloud infrastructure to manage when using AWS IoT services there are integration points where AWS IoT Core inte racts on your behalf with other AWS services For example the AWS IoT rules engine consists of rules that are analyzed that can trigger downstream actions to other AWS services based on the MQTT topic stream Since AWS IoT communicates to your other AWS r esources you must ensure that the right service role permissions are configured for your application Data Protection Before architecting an IoT application data classification governance and controls must be designed and documented to reflect how the data can be persisted in the cloud and how data should be encrypted whether on a device or between the devices and the cloud Unlike traditional cloud applications data sensitivity and governance extend to the IoT devices that are deployed in remote loc ations outside of your network boundary These techniques are important because they support protecting personally identifiable data transmitted from devices and complying with regulatory obligations During the design process determine how hardware firm ware and data are handled at device end oflife Store long term historical data in the cloud Store a portion of current sensor readings locally on a device namely only the data required to perform local operations By only storing the minimum data requ ired on the device the risk of unintended access is limited In addition to reducing data storage locally there are other mitigations that must be implemented at the end of life of a device First the device should offer a reset option which can reset the hardware and firmware to a default factory version Second your IoT application can run periodic scans for the last logon time of every device Devices that have been offline for too long a period of time or are associated with inactive customer acco unts can be revoked Third encrypt sensitive data that must be persisted on the device using a key that is unique to that particular device IOTSEC 10: How do you classify manage and 
protect your data in transit and at rest? ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 34 All traffic to and from AW S IoT must be encrypted using Transport Layer Security (TLS) In AWS IoT security mechanisms protect data as it moves between AWS IoT and other devices or AWS services In addition to AWS IoT you must implement device level security to protect not only t he device’s private key but also the data collected and processed on the device For embedded development AWS has several services that abstract components of the application layer while incorporating AWS security best practices by default on the edge F or microcontrollers AWS recommends using Amazon FreeRTOS Amazon FreeRTOS extends the FreeRTOS kernel with libraries for Bluetooth LE TCP/IP and other protocols In addition Amazon FreeRTOS contains a set of security APIs that allow you to create embedded applications that securely communicate with AWS IoT For Linux based edge gateways AWS IoT Greengrass can be used to extend cloud functionality to the edge of your network AWS IoT Greengrass implements several security features including mutual X509 certificate based authentication with connected devices AWS IAM policies and roles to manage communication permissions between AWS IoT Greengrass and cloud applications and subscriptions which are used to determine how and if data can be routed between connected devices and Greengrass core Incident Response Being prepared for incident response in IoT requires planning on how you will deal with two types of incidents in your IoT workload The first incident is an attack against an individual IoT device in an attempt to disrupt the performance or impact the device’s behavior The second incident is a larger scale IoT event such as network outages and DDoS attack In both scenarios the architecture of your IoT application play s a large role in determining how quickly you will be able to diagnose incidents correlate the data across the incident and then subsequently apply runbooks to the affected dev ices in an automated reliable fashion For IoT applications follow the following best practices for incident responses: • IoT devices are organized in different groups based on device attributes such as location and hardware version • IoT devices are sear chable by dynamic attributes such as connectivity status firmware version application status and device health ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 35 • OTA updates can be staged for devices and deployed over a period of time Deployment rollouts are monitored and can be automatically aborted if devices fail to maintain the appropriate KPIs • Any update process is resilient to errors and devices can recover and roll back from a failed software update • Detailed logging metrics and device telemetry are available that contain contextual informa tion about how a device is currently performing and has performed over a period of time • Fleet wide metrics monitor the overall health of your fleet and alert when operational KPIs are not met for a period of time • Any individual device that deviates from expected behavior can be quarantined inspected and analyzed for potential compromise o f the firmware and applications IOTSEC 11: How do you prepare to respond to an incident that impacts a single device or a fleet of devices? 
Implement a strategy in which your InfoSec team can quickly identify the devices that need remediation Ensure that the InfoSec team has runbooks that consider firmware versioning and patching for device updates Create automated processes that proactively apply security patches to vulnerable devices as they come online At a minimum your security team should be able to detect an incident on a specific device based on the device logs and current device behavior After an incident is identified the next phase is to quarantine the application To implement this with AWS IoT services you can use AWS IoT Things Groups w ith more restrictive IoT policies along with enabling custom group logging for those devices This allows you to only enable features that relate to troubleshooting as well as gather more data to understand root cause and remediation Lastly after an inc ident has been resolved you must be able to deploy a firmware update to the device to return it to a known state Key AWS Services The essential AWS security services in IoT are the AWS IoT registry AWS IoT Device Defender AWS Identity and Access Manage ment (IAM) and Amazon Cognito In combination these services allow you to securely control access to IoT devices AWS ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 36 services and resources for your users The following services and features support the five areas of security: Design : The AWS Device Q ualification Program provides IoT endpoint and edge hardware that has been pre tested for interoperability with AWS IoT Tests include mutual authentication and OTA support for remote patching AWS Identity and Access Management (IAM): Device credentials (X509 certificates IAM Amazon Cognito identity pools and Amazon Cognito user pools or custom authorization tokens) enable you to securely control device and external user access to AWS resources AWS IoT policies add the ability to implement fine grained access to IoT devices A CM Private CA provides a cloud based approach to creating and managing device certificates Use AWS IoT thing groups to manage IoT permissions at the group level instead of individually Detective controls : AWS IoT Device Defender records device communication and cloud side metrics from AWS IoT Core AWS IoT Device Defender can automate security responses by sending notifications through Amazon Simple Notification Service (Amazon SNS) to internal systems or adm inistrators AWS CloudTrail logs administrative actions of your IoT application Amazon CloudWatch is a monitoring service with integration with AWS IoT Core and can trigger CloudWatch Events to automate security responses CloudWatch captures detailed log s related to connectivity and security events between IoT edge components and cloud services Infrastructure protection : AWS IoT Core is a cloud service that lets connected devices easily and securely interact with cloud applications and other devices The AWS IoT rules engine in AWS IoT Core uses IAM permissions to communicate with other downstream AWS services Data protection : AWS IoT includes encryption capabilities for devices over TLS to protect your data in transit AWS IoT integrates directly with s ervices such as Amazon S3 and Amazon DynamoDB which support encryption at rest In addition AWS Key Management Service (AWS KMS) supports the ability for you to create and control keys used for encryption On devices you can use AWS edge offerings such as Amazon FreeRTOS AWS IoT Greengrass or the AWS IoT Embedded C SDK to support secure 
communication Incident response : AWS IoT Device Defender allows you to create security profiles that can be used to detect deviations from normal device behavior and trigger automated responses including AWS Lambda AWS IoT Device Management should be ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 37 used to group devices that need remediation and then using AWS IoT Jobs to deploy fixes to devices Resources Refer to the following resources to learn more about our b est practices for security : Documentation and Blogs • IoT Security Identity • AWS IoT Device Defender • IoT Authentication Model • MQTT on port 443 • Detect Anomalies with Device Defender Whitepapers • MQTT Topic Design Reliability Pillar The reliability pillar focuses on the ability to prevent and quickly recover from failures to meet business and customer demand Key topics include foundational elements around setup cross project requi rements recovery planning and change management Design Principles In addition to the overall WellArchitected Framework design principles there are three design principles for reliability for IoT in the cloud: ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 38 • Simulate device behavior at production scale : Create a production scale test environment that closely mirrors your production deployment Leverage a multi step simulation plan that allows you to test your applications with a more significant load before your go live date During deve lopment ramp up your simulation tests over a period of time starting with 10% of overall traffic for a single test and incrementing over time ( that is 25% 50% then 100% of day one device traffic) During simulation tests monitor performance and review logs to ensure that the entire solution behaves as expected • Buffer message delivery from the IoT rules engine with streams or queues : Leverage managed services enable high throughput telemetry By injecting a queuing layer behind high throughput topics IoT applications can manage failures aggregate messaging and scale other downstream services • Design for failure and resiliency : It’s essential to plan for resiliency on the device itself Depending on your use case resiliency may entail robust retry l ogic for intermittent connectivity ability to roll back firmware updates ability to fail over to a different networking protocol or communicate locally for critical message delivery running redundant sensors or edge gateways to be resilient to hardware failures and the ability to perform a factory reset Definition There are three best practice areas for reliability in the cloud: 1 Foundations 2 Change management 3 Failure management To achieve reliability a system must have a well planned foundation and mo nitoring in place with mechanisms for handling changes in demand requirements or potentially defending an unauthorized denial of service attack The system should be designed to detect the failure and automatically heal itself Best Practices Foundation s IoT devices must continue to operate in some capacity in the face of network or cloud errors Design device firmware to handle intermittent connectivity or loss in connectivity in a way that is sensitive to memory and power constraints IoT cloud applica tions must ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 39 also be designed to handle remote devices that frequently transition between being online and offline to maintain data coherency and scale horizontally over time Monitor overall IoT 
utilization and create a mechanism to automatically increase c apacity to ensure that your application can manage peak IoT traffic To prevent devices from creating unnecessary peak traffic device firmware must be implemented that prevents the entire fleet of devices from attempting the same operations at the same ti me For example if an IoT application is composed of alarm systems and all the alarm systems send an activation event at 9am local time the IoT application is inundated with an immediate spike from your entire fleet Instead you should incorporat e a ran domization factor into those scheduled activities such as timed events and exponential back off to permit the IoT devices to more evenly distribute their peak traffic within a window of time The following questions focus on the considerations for reliab ility IOTREL 1 How do you handle AWS service limits for peaks in your IoT application? AWS IoT provides a set of soft and hard limits for different dimensions of usage AWS IoT outlines all of the data plane limits on the IoT limits page Data plane operations (for example MQTT Connect MQTT Publish and MQTT Subscribe) are the primary driver of your device connectivity Therefore it's important to review the IoT limits and ensure that your application adheres to any soft limits related to the data plane while not exceeding any hard limits that are imposed by the data plane The most important part of your IoT scaling approach is to ensure that you architect around any hard limits because exceeding limits that are not adjustable result s in application errors such as throttling and client errors Hard limits are related to throughput on a single IoT connection If you find your application exceeds a hard limit we recommend re designing your application to avoid those scenarios This can be done in several ways such as restructuring your MQTT topics or implementing cloud side logic to aggregate or filter messages before delivering the messages to the interested devices Soft limits in AWS IoT traditionally correlate to account level limits that are independent of a single device For any account level limits you should calculate your IoT usage for a single device and then multiply that usage by the number of devices to determine the base IoT limits that your application will require for yo ur initial product launch AWS ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 40 recommends that you have a ramp up period where your limit increases align closely to your current production peak usage with an additional buffer To ensure t hat the IoT application is not under provisioned : • Consult publishe d AWS IoT CloudWatch metrics for all of the limits • Monitor CloudWatch metrics in AWS IoT Core • Alert on CloudWatch throttle metrics which would signal if you need a limit increase • Set alarms for all thresholds in IoT including MQTT connect publish su bscribe receive and rule engine actions • Ensure that you request a limit increase in a timely fashion before reaching 100% capacity In addition to data plane limits the AWS IoT service has a control plane for administrative APIs The control plane manages the process of creating and storing IoT policies and principals creating the thing in the registry and associating IoT principals including certificates and Amazon Cognito federated identities Because bootstrapping and device registration is critical to the overall process it's important to plan control plane operations and limits Control plane API calls are based on throughput m 
easured in requests per second Control plane calls are normally in the order of magnitude of tens of requests per second It ’s important for you to work backward from peak expected registration usage to determine if any limit increases for control plane operations are needed Plan for sustained ramp up periods for onboarding devices so that the IoT limit increases align with regular day today data plane usage To protect against a burst in control plane requests your architecture should limit the access to these APIs to only authorized users or internal applications Implement back off and retry logic and queue inbound requests to control data rates to these APIs IOTREL 2 What is your strategy for managing ingestion and processing throughput of IoT da ta to other applications? Although IoT applications have communication that is only routed between other devices there will be messages that are processed and stored in your application In these cases the rest of your IoT application must be prepared to respond to incoming data All internal services that are dependent upon that data need a way to seamlessly scale the ingestion and processing of the data In a wellarchitected IoT application ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 41 internal systems are decoupled from the connectivity layer of the IoT platform through the ingestion layer The ingestion layer is composed of queues and streams that enable durable short term storage while allowing compute resources to process data indepe ndent of the rate of ingestion In order to optimize throughput use AWS IoT rules to route inbound device data to services such as Amazon Kinesis Data Streams Amazon Kinesis Data Firehose or Amazon Simple Queue Service before performing any compute oper ations Ensure that all the intermediate streaming points are provisioned to handle peak capacity This approach creates the queueing layer necessary for upstream applications to process data resiliently IOTREL 3 How do you handle device reliability when communicating with the cloud? IoT solution reliability must also encompass the device itself Devices are deployed in remote locations and deal with intermittent connectivity or loss in connectivity due to a variety of external factors that are out of y our IoT application ’s control For example if an ISP is interrupted for several hours how will the device behave and respond to these long periods of potential network outage? 
Implement a minimum set of embedded operations on the device to make it more r esilient to the nuances of managing connectivity and communication to AWS IoT Core Your IoT device must be able to operate without internet connectivity You must implement robust operations in your firmware provide the following capabilities : • Store impor tant messages durably offline and once reconnected send those messages to AWS IoT Core • Implement exponential retry and back off logic when connection attempts fail • If necessary have a separate failover network channel to deliver critical messages to AWS IoT This can include failing over from Wi Fi to standby cellular network or failing over to a wireless personal area network protocol (such as Bluetooth LE ) to send messages to a connected device or gateway • Have a method to set the current time usin g an NTP client or low drift real time clock A device should wait until it has synchronized its time before attempting a connection with AWS IoT Core If this isn’t possible the system provide s a way for a user to set the device’s time so that subsequent connections can succeed ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 42 • Send error codes and overall diagnostics messages to AWS IoT Core Change Management IOTREL 4 How do you roll out and roll back changes to your IoT application? It is important to implement the capability to revert to a previous version of your device firmware or your cloud application in the event of a failed rollout If your application is wellarchitected you will capture metrics from the device as well as metrics generated by AWS IoT Core and AWS IoT Device Defender You will also be alerted when your device canaries deviate from expected behavior after any cloud side changes Based on any deviations in your operational metrics you need the ability to: • Version all of the device firmware using Amazon S3 • Version the manifest or execution steps for your device firmware • Implement a known safe default firmware version for your devices to fall back to in the event of an error • Implement an update strategy us ing cryptographic code signing version checking and multiple non volatile storage partitions to deploy software images and rollback • Version all IoT rules engine configurations in CloudFormation • Version all downstream AWS Cloud resources using CloudFor mation • Implement a rollback strategy for reverting cloud side changes using CloudFormation and other infrastructure as code tools Treating your infrastructure as code on AWS allows you to automate monitoring and change management for your IoT application Version all of the device firmware artifacts and ensure that updates can be verified installed or rolled back when necessary Failure Management IOTREL 5 How does your IoT application withstand failure? 
Because IoT is an event driven workload your application code must be resilient to handling known and unknown errors that can occur as events are permeated through your application A wellarchitected IoT application has the ability to log and retry errors in data processing An IoT application will archive all data in its raw format By archiving ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 43 all data valid and invalid an architecture can more accurately restore data to a given point in time With the IoT rules engine an application can enable an IoT error action If a problem occurs when inv oking an action the rules engine will invoke the error action This allows you to captur e monitor alert and eventually retry messages that could not be delivered to their primary IoT action We recommend that an IoT error action is configured with a different AWS service from the primary action Use durable storage for error actions such as Amazon SQS or Amazon Kinesis Beginning with the rules engine your application logic should initially process messages from a queue and validate that the schema of that message is correct Your application logic should catch and log any known errors and optionally move those messages to their own DLQ for further analysis Have a catch all IoT rule that uses Amazon Kinesis Data Firehose and AWS IoT Analytics channels to transfer all raw and unformatted messages into long term storage in Amazon S3 AWS IoT Analytics data stores and Amazon Redshift for data warehousing IOTREL 6 How do you verify different levels of hardware failure modes for your physical assets? IoT implementations must allow for multiple types of failure at the device level Failures can be due to hardware software connectivity or unexpected adverse conditions One way to plan for thing failure is to deploy devices in pairs if possible or to deploy dual sensors across a fleet of devices deployed over the same coverage area (meshing) Regardless of the underlying cause for device failures if the device can communicate to your cloud application it should send diagnostic information about the hardware failure to AWS IoT Core using a diagnostics topic If the device loses connectivity because of the hardware failure use Fleet Indexing with connectivity status to track the change in connectivity status If the device is offline for extended per iods of time trigger an alert that the device may require remediation Key AWS Services Use Amazon CloudWatch to monitor runtime metrics and ensure reliability Other services and features that support the three areas of reliability are as follows: ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 44 Foundations : AWS IoT Core enables you to scale your IoT application without having to manage the underlying infrastructure You can scale AWS IoT Core by requesting account level limit increases Change management : AWS IoT Device Management enables you to update devices in the field while using Amazon S3 to version all firmware software and update manifests for devices AWS CloudFormation lets you document your IoT infrastructure as code and provision cloud resources using a Clou dFormation template Failure management : Amazon S3 allows you to durably archive telemetry from devices The AWS IoT rules engine Error action enables you to fall back to other AWS services when a primary AWS service is returning errors Resources Refer to the following resources to learn more about our best practices related to reliability : Documentation and Blogs • 
Using Device Time to Validate AWS IoT S erver Certificates • AWS IoT Core Limits • IoT E rror Action • Fleet Indexing • IoT Atlas Performance Efficiency Pillar The Performance Efficiency pillar focuses on using computing resources efficiently Key topics include selecting the right resource types and sizes based on workload requirements monitoring performance and making informed decisions to maintain efficiency as business and technology needs evolve The performance eff iciency pillar focuses on the efficient use of computing resources to meet the requirements and the maintenance of that efficiency as demand changes and technologies evolve Design Principles In addition to the overall WellArchitected Framework performan ce efficiency design principles there are three design principles for performance efficiency for IoT in the cloud: ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 45 • Use managed services : AWS provides several managed services across databases compute and storage which can assist your architecture in increasing the overall reliability and performance • Process data in batches : Decouple the connectivity portion of IoT applications from the ingestion and processing portion in IoT By decoupling the ingestion layer your IoT application can handle data in ag gregate and can scale more seamlessly by processing multiple IoT messages at once • Use event driven architectures : IoT systems publish events from devices and permeate those events to other subsystems in your IoT application Design mechanisms that cater t o event driven architectures such as leveraging queues message handling idempotency dead letter queues and state machines Definition There are four best practice areas for Performance Efficiency in the cloud: 1 Selection 2 Review 3 Monitoring 4 Tradeoffs Use a data driven approach when selecting a high performance architecture Gather data on all aspects of the architecture from the high level design to the selection and configuration of resource types By reviewing your choices on a cyclical basis you will ensure that you are taking a dvantage of the continually evolving AWS platform Monitoring ensures that you are aware of any deviation from expected performance and allow s you to act Your architecture can make tradeoffs to improve performance such as using compression or caching or relaxing consistency requirements Best Practices Selection WellArchitected IoT solutions are made up of multiple systems and components such as devices connectivity databases data processing and analytics In AWS there are several IoT services dat abase offerings and analytics solutions that enable you to quickly build solutions that are wellarchitected while allowing you to focus on business objectives AWS recommends that you leverage a mix of managed AWS services that ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 46 best fit your workload Th e following questions focus on these considerations for performance efficiency IOTPERF 1 How do you select the best performing IoT architecture? 
When you select the implementation for your architecture use a data driven approach based on the longterm v iew of your operation IoT applications align naturally to event driven architectures Your architecture will combine services that integrate with event driven patterns such as notifications publishing and subscribing to data stream processing and event driven compute In the following sections we look at the five main IoT resource types that you should consider (devices connectivity databases compute and analytics) Devices The optimal embedded software for a particular system will vary based on the hardware footprint of the device For example network security protocols while necessary for preserving data privacy and integrity can have a relatively large RAM footprint For intranet and internet connections use TLS with a combination of a strong cipher suite and minimal footprint AWS IoT supports Elliptic Curve Cryptography (ECC) for devices connecting to AWS IoT using TLS A secure software and hardware platform on device should take precedence during the selection criteria for your devices AWS also has a number of IoT partners that provide hardware solutions that can securely integrate to AWS IoT In addition to selecting the right hardware partner you may choose to use a number of software component s to run your application logic on the device including Amazon FreeRTOS and AWS IoT Greengrass IOTPERF 2 How do you select your hardware and operating system for IoT devices? IoT Connectivity Before firmware is developed to communicate to the cloud imp lement a secure scalable connectivity platform to support the longterm growth of your devices over time Based on the anticipated volume of devices an IoT platform must be able to scale the communication workflows between devices and the cloud whether that is simple ingestion of telemetry or command and response communication between devices ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 47 You can build your IoT application using AWS services such as EC2 but you take on the undifferentiated heavy lifting for building unique value into your IoT offe ring Therefore AWS recommends that you use AWS IoT Core for your IoT platform AWS IoT Core supports HTTP WebSockets and MQTT a lightweight communication protocol designed to tolerate intermittent connections minimize the code footprint on devices a nd reduce network bandwidth requirements IOTPERF 3 How do you select your primary IoT platform? 
Databases You will have multiple databases in your IoT application each selected for attributes such as the write frequency of data to the database the rea d frequency of data from the database and how the data is structured and queried There are other criteria to consider when selecting a database offering: • Volume of data and retention period • Intrinsic data organization and structure • Users and applicatio ns consuming the data (either raw or processed ) and their geographical location/dispersion • Advanced analytics needs such as machine learning or real time visualizations • Data synchronization across other teams organizations and business units • Security of the data at the row table and database levels • Interactions with other related data driven events such as enterprise applications drillthrough dashboards or systems of interaction AWS has several database offerings that support IoT solutions For structured data you should use Amazon Aurora a highly scalable relational interface to organizational data For semi structured data that requires low latency for queries and will be used by multiple consumers use DynamoDB a fully managed multi region multi master database that provides consistent single digit millisecond latency and offers built in security backup and resto re and in memory caching For storing raw unformatted event data use AWS IoT Analytics AWS IoT Analytics filters transforms and enriches IoT data before storing it in a time series data store for analysis Use Amazon SageMaker to build train and d eploy machine learning models ArchivedAmazon Web Services AWS Well Architected Framework — IoT Lens 48 based off of your IoT data in the cloud and on the edge using AWS IoT Services such as Greengrass Machine Learning Inference Consider storing your raw formatted time series data in a data warehouse solution such as Amazon R edshift Unformatted data can be imported to Amazon Redshift via Amazon S3 and Amazon Kinesis Data Firehose By archiving unformatted data in a scalable managed data storage solution you can begin to gain business insights explor e your data and identif y trends and patterns over time In addition to storing and leveraging the historical trends of your IoT data you must have a system that stores the current state of the device and provides the ability to query against the current state of all of your de vices This supports internal analytics and customer facing views into your IoT data The AWS IoT Shadow service is an effective mechanism to store a virtual representation of your device in the cloud AWS IoT device shadow is best suited for managing the current state of each device In addition for internal teams that need to query against the shadow for operational needs leverage the managed capabilities of Fleet Indexing which provide s a searchable index incorporating your IoT registry and shadow me tadata If there is a need to provide index based searching or filtering capability to a large number of external users such as for a consumer application dynamically archive the shadow state using a combination of the IoT rules engine Kinesis Data Fire hose and Amazon ElasticSearch Service to store your data in a format that allows fine grained query access for external users IOTPERF 4 How do you select the database for your IoT device state? 
Compute
IoT applications lend themselves to a high rate of ingestion that requires continuous processing over the stream of messages. Therefore, an architecture must choose compute services that support the steady enrichment of stream processing and the execution of business applications during, and prior to, data storage. The most common compute service used in IoT is AWS Lambda, which allows actions to be invoked when telemetry data reaches AWS IoT Core or AWS IoT Greengrass. AWS Lambda can be used at different points throughout an IoT application; the location where you elect to trigger your business logic with AWS Lambda is influenced by when you want to process a specific data event.

Amazon EC2 instances can also be used for a variety of IoT use cases. They can be used for managed relational database systems and for a variety of applications, such as web reporting, or to host existing on-premises solutions.

IOTPERF 5. How do you select your compute solutions for processing AWS IoT events?

Analytics
The primary business case for implementing IoT solutions is to respond more quickly to how devices are performing and being used in the field. By acting directly on incoming telemetry, businesses can make more informed decisions about which new products or features to prioritize, or how to operate workflows within their organization more efficiently. Analytics services must be selected in a way that gives you varying views on your data based on the type of analysis you are performing. AWS provides several services that align with different analytics workflows, including time-series analytics, real-time metrics, and archival and data lake use cases.

With IoT data, your application can generate time-series analytics on top of the streaming data messages. You can calculate metrics over time windows and then stream values to other AWS services. In addition, IoT applications that use AWS IoT Analytics can implement a managed data pipeline consisting of data transformation, enrichment, and filtering before storing data in a time-series data store. Additionally, with AWS IoT Analytics, visualizations and analytics can be performed natively using Amazon QuickSight and Jupyter notebooks.

Review

IOTPERF 6. How do you evolve your architecture based on the historical analysis of your IoT application?
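Returning to the compute guidance above: a common pattern is an AWS IoT rule that invokes an AWS Lambda function for each telemetry event. The sketch below assumes a hypothetical rule that forwards the device payload including device_id and timestamp fields, and a hypothetical DynamoDB table named DeviceTelemetry; adjust the names and key schema to your own resources.

```python
import json
import os
import boto3

# Hypothetical table name, supplied through the Lambda environment.
TABLE = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "DeviceTelemetry"))

def handler(event, context):
    """Invoked by an AWS IoT rule action; 'event' is the JSON payload the rule forwards."""
    device_id = event.get("device_id", "unknown")
    # Basic validation before persisting: drop readings without a timestamp.
    if "timestamp" not in event:
        print(json.dumps({"level": "WARN", "msg": "dropped reading without timestamp",
                          "device_id": device_id}))
        return {"stored": False}
    TABLE.put_item(Item={
        "device_id": device_id,                 # partition key (assumed schema)
        "timestamp": int(event["timestamp"]),   # sort key (assumed schema)
        "payload": json.dumps(event),
    })
    return {"stored": True}
```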
When building complex IoT solutions, you can devote a large percentage of your time to efforts that do not directly impact your business outcome, for example, managing IoT protocols, securing device identities, and transferring telemetry between devices and the cloud. Although these aspects of IoT are important, they do not directly lead to differentiating value. The pace of innovation in IoT can also be a challenge. AWS regularly releases new features and services based on the common challenges of IoT. Perform a regular review of your data to see if new AWS IoT services can solve a current gap in your architecture, or if they can replace components of your architecture that are not core business differentiators.

Leverage services built to aggregate your IoT data, store your data, and then later visualize your data to implement historical analysis. You can combine sending timestamp information from your IoT devices with services like AWS IoT Analytics and time-based indexing to archive your data with its associated timestamp information. Data in AWS IoT Analytics can be stored in your own Amazon S3 bucket along with additional IT or OT operational and efficiency logs from your devices. By combining this archival store of IoT data with visualization tools, you can make data-driven decisions about how new AWS services can provide additional value and measure how those services improve efficiency across your fleet.

Monitoring

IOTPERF 7. How are you running end-to-end simulation tests of your IoT application?

IoT applications can be simulated using production devices set up as test devices (with a specific test MQTT namespace) or by using simulated devices. All incoming data captured using the IoT rules engine is processed using the same workflows that are used for production. The frequency of end-to-end simulations must be driven by your specific release cycle or device adoption. You should test failure pathways (code that is only executed during a failure) to ensure that the solution is resilient to errors. You should also continuously run device canaries against your production and pre-production accounts. The device canaries act as key indicators of system performance during simulation tests. Outputs of the tests should be documented and remediation plans should be drafted. User acceptance tests should be performed.

IOTPERF 8. How are you using performance monitoring in your IoT implementation?
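As a concrete illustration of the canary and simulation guidance above, the sketch below publishes synthetic telemetry to a dedicated test topic through the AWS IoT data-plane API, so that production rules and downstream workflows can be exercised without real hardware. The topic name, thing name, and payload fields are placeholder assumptions.

```python
import json
import random
import time
import boto3

iot_data = boto3.client("iot-data")  # uses the account's AWS IoT data endpoint

def publish_canary(thing_name: str = "canary-device-01", iterations: int = 10):
    """Publish fake readings into a test MQTT namespace at a steady cadence."""
    for _ in range(iterations):
        payload = {
            "device_id": thing_name,
            "timestamp": int(time.time()),
            "temperature": round(random.uniform(20.0, 30.0), 2),  # synthetic sensor value
        }
        iot_data.publish(
            topic=f"test/telemetry/{thing_name}",  # hypothetical test namespace
            qos=1,
            payload=json.dumps(payload),
        )
        time.sleep(1)

if __name__ == "__main__":
    publish_canary()
```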
There are several key types of performance monitoring related to IoT deployments, including device performance, cloud performance, and storage/analytics. Create appropriate performance metrics using data collected from logs together with telemetry and command data. Start with basic performance tracking and build on the metrics as your business core competencies expand. Leverage CloudWatch Logs metric filters to transform your IoT application's standard output into custom metrics through regex (regular expression) pattern matching. Create CloudWatch alarms based on your application's custom metrics to gain quick insight into your IoT application's behavior. Set up fine-grained logs to track specific thing groups. During IoT solution development, enable DEBUG logging for a clear understanding of the progress of events as each IoT message passes from your devices through the message broker and the rules engine. In production, change the logging level to ERROR and WARN.

In addition to cloud instrumentation, you must run instrumentation on devices prior to deployment to ensure that devices make the most efficient use of their local resources and that firmware code does not lead to unwanted scenarios such as memory leaks. Deploy code that is highly optimized for constrained devices, and monitor the health of your devices using device diagnostic messages published to AWS IoT from your embedded application.

Tradeoffs
IoT solutions drive rich analytics capabilities across crucial enterprise functions such as operations, customer care, finance, sales, and marketing. At the same time, they can serve as efficient egress points for edge gateways. Careful consideration must be given to architecting highly efficient IoT implementations in which data and analytics are pushed to the cloud by devices, and machine learning algorithms are pulled down from the cloud onto the device gateways. Individual devices will be constrained by the throughput supported over a given network. The frequency with which data is exchanged must be balanced against the transport layer and the ability of the device to optionally store, aggregate, and then send data to the cloud. Send data from devices to the cloud at timing intervals that align to the time required by backend applications to process and act on the data. For example, if you need to see data at a one-second increment, your device must send data at an interval more frequent than one second. Conversely, if your application only reads data at an hourly rate, you can make a performance trade-off by aggregating data points at the edge and sending the data every half hour.

IOTPERF 9. How are you ensuring that data from your IoT devices is ready to be consumed by business and operational systems?

IOTPERF 10. How frequently is data transmitted from devices to your IoT application?
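Returning to the performance-monitoring guidance above, the following sketch publishes a custom per-fleet health metric and creates an alarm on it through the CloudWatch API. The namespace, metric name, dimension, and threshold are illustrative assumptions, not prescribed values.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom metric derived from device diagnostics (assumed namespace and dimension).
cloudwatch.put_metric_data(
    Namespace="MyIoTApp/Devices",
    MetricData=[{
        "MetricName": "HeartbeatAgeSeconds",
        "Dimensions": [{"Name": "ThingGroup", "Value": "field-sensors"}],
        "Value": 42.0,
        "Unit": "Seconds",
    }],
)

# Alarm when the average heartbeat age suggests devices have stopped reporting.
cloudwatch.put_metric_alarm(
    AlarmName="field-sensors-heartbeat-stale",
    Namespace="MyIoTApp/Devices",
    MetricName="HeartbeatAgeSeconds",
    Dimensions=[{"Name": "ThingGroup", "Value": "field-sensors"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=600,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="breaching",  # treat silence from the fleet as a problem too
)
```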
The speed with which enterprise applications, business, and operations need to gain visibility into IoT telemetry data determines the most efficient point at which to process IoT data. In network-constrained environments where the hardware is not limited, use edge solutions such as AWS IoT Greengrass to operate and process data offline from the cloud. In cases where both the network and the hardware are constrained, look for opportunities to compress message payloads by using binary formatting and by grouping similar messages together into a single request.

For visualizations, Amazon Kinesis Data Analytics enables you to quickly author SQL code that continuously reads, processes, and stores data in near real time. Using standard SQL queries on the streaming data allows you to construct applications that transform and provide insights into your data. With Kinesis Data Analytics, you can expose IoT data for streaming analytics.

Key AWS Services
The key AWS service for performance efficiency is Amazon CloudWatch, which integrates with several IoT services, including AWS IoT Core, AWS IoT Device Defender, AWS IoT Device Management, AWS Lambda, and Amazon DynamoDB. Amazon CloudWatch provides visibility into your application's overall performance and operational health. The following services also support performance efficiency:

Selection
• Devices: AWS hardware partners provide production-ready IoT devices that can be used as part of your IoT application. Amazon FreeRTOS is an operating system with software libraries for microcontrollers. AWS IoT Greengrass allows you to run local compute, messaging, data caching, sync, and ML at the edge.
• Connectivity: AWS IoT Core is a managed IoT platform that supports MQTT, a lightweight publish-and-subscribe protocol for device communication.
• Database: Amazon DynamoDB is a fully managed NoSQL data store that supports single-digit millisecond latency requests for quick retrieval of different views of your IoT data.
• Compute: AWS Lambda is an event-driven compute service that lets you run application code without provisioning servers. Lambda integrates natively with IoT events triggered from AWS IoT Core or upstream services such as Amazon Kinesis and Amazon SQS.
• Analytics: AWS IoT Analytics is a managed service that operationalizes device-level analytics while providing a time-series data store for your IoT telemetry.

Review: The AWS IoT Blog section of the AWS website is a resource for learning about what is newly launched as part of AWS IoT.

Monitoring: Amazon CloudWatch Metrics and Amazon CloudWatch Logs provide metrics, logs, filters, alarms, and notifications that you can integrate with your existing monitoring solution. These metrics can be augmented with device telemetry to monitor your application.

Tradeoff: AWS IoT Greengrass and Amazon Kinesis are services that allow you to aggregate and batch data at different locations of your IoT application, providing more efficient compute performance.

Resources
Refer to the following resources to learn more about our best practices related to performance efficiency.

Documentation and Blogs
• AWS Lambda Getting Started
• DynamoDB Getting Started
• AWS IoT Analytics User Guide
• Amazon FreeRTOS Getting Started
• AWS IoT Greengrass Getting Started
• AWS IoT Blog

Cost Optimization Pillar
The Cost Optimization pillar includes the continual process of refinement and improvement of a system over its entire lifecycle. From the initial design of your first proof of concept to the ongoing
operation of production workloads, adopting the practices in this paper will enable you to build and operate cost-aware systems that achieve business outcomes and minimize costs, allowing your business to maximize its return on investment.

Design Principles
In addition to the overall Well-Architected Framework cost optimization design principles, there is one design principle for cost optimization for IoT in the cloud:

• Manage manufacturing cost tradeoffs: Business partnering criteria, hardware component selection, firmware complexity, and distribution requirements all play a role in manufacturing cost. Minimizing that cost helps determine whether a product can be brought to market successfully over multiple product generations. However, taking shortcuts in the selection of your components and manufacturer can increase downstream costs. For example, partnering with a reputable manufacturer helps minimize downstream hardware failure and customer support cost. Selecting a dedicated crypto component can increase bill of materials (BOM) cost but reduce downstream manufacturing and provisioning complexity, since the part may already come with an onboard private key and certificate.

Definition
There are four best practice areas for Cost Optimization in the cloud:
1. Cost-effective resources
2. Matching supply and demand
3. Expenditure awareness
4. Optimizing over time

There are tradeoffs to consider. For example, do you want to optimize for speed to market or for cost? In some cases, it is best to optimize for speed, going to market quickly, shipping new features, or meeting a deadline, rather than investing in upfront cost optimization. Design decisions are sometimes guided by haste as opposed to empirical data, as the temptation always exists to overcompensate rather than spend time benchmarking for a cost-optimal deployment. This leads to over-provisioned and under-optimized deployments. The following sections provide techniques and strategic guidance for your deployment's initial and ongoing cost optimization.

Best Practices

Cost-Effective Resources
Given the scale of devices and data that can be generated by an IoT application, using the appropriate AWS services for your system is key to cost savings. In addition to the overall cost for your IoT solution, IoT architects often look at connectivity through the lens of BOM costs. For BOM calculations, you must predict and monitor the long-term costs of managing the connectivity to your IoT application throughout the lifetime of each device. Leveraging AWS services will help you calculate initial BOM costs, make use of cost-effective services that are event driven, and update your architecture to continue to lower your overall lifetime cost for connectivity.

The most straightforward way to increase the cost-effectiveness of your resources is to group IoT events into batches and process the data collectively. By processing events in groups, you lower the overall compute time for each individual message. Aggregation can help you save on compute resources and enables solutions where data is compressed and archived before being persisted. This strategy decreases the overall storage footprint without losing data or compromising the queryability of the data.

COST 1. How do you select an approach for batched, enriched, and aggregated data delivered from your IoT platform to other services?
AWS IoT is best suited for streaming data, for either immediate consumption or historical analysis. There are several ways to batch data from AWS IoT Core to other AWS services, and the differentiating factor is whether you batch raw data (as is) or enrich the data and then batch it. Enriching, transforming, and filtering IoT telemetry data during (or immediately after) ingestion is best performed by creating an AWS IoT rule that sends the data to Kinesis Data Streams, Kinesis Data Firehose, AWS IoT Analytics, or Amazon SQS. These services allow you to process multiple data events at once. When dealing with raw device data from this batch pipeline, you can use AWS IoT Analytics and Amazon Kinesis Data Firehose to transfer data to S3 buckets and Amazon Redshift. To lower storage costs in Amazon S3, an application can leverage lifecycle policies that archive data to lower-cost storage such as Amazon S3 Glacier.

Matching Supply and Demand
Optimally matching supply to demand delivers the lowest cost for a system. However, given the susceptibility of IoT workloads to data bursts, solutions must be dynamically scalable and must consider peak capacity when provisioning resources. With the event-driven flow of data, you can choose to automatically provision your AWS resources to match your peak capacity and then scale up and down during known low periods of traffic. The following question focuses on these considerations for cost optimization:

COST 2. How do you match the supply of resources with device demand?

Serverless technologies such as AWS Lambda and Amazon API Gateway help you create a more scalable and resilient architecture, and you pay only when your application uses those services. AWS IoT Core, AWS IoT Device Management, AWS IoT Device Defender, AWS IoT Greengrass, and AWS IoT Analytics are also managed services that are pay-per-usage and do not charge you for idle compute capacity. The benefit of managed services is that AWS manages the automatic provisioning of your resources. If you use managed services, you are responsible for monitoring and setting alerts for limit increases for AWS services. When architecting to match supply against demand, proactively plan your expected usage over time and the limits that you are most likely to exceed, and factor those limit increases into your future planning.

Optimizing Over Time
Evaluating new AWS features allows you to optimize cost by analyzing how your devices are performing and making changes to how your devices communicate with your IoT application. To optimize the cost of your solution through changes to device firmware, you should review the pricing components of AWS services such as AWS IoT, determine where you are below billing metering thresholds for a given service, and then weigh the tradeoffs between cost and performance.

COST 3. How do you optimize payload size between devices and your IoT platform?
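To illustrate the batching and archival pattern described above, the sketch below sends a group of enriched records to a Kinesis Data Firehose delivery stream (which buffers them into Amazon S3) and applies an S3 lifecycle rule that transitions aged objects to Amazon S3 Glacier. The stream name, bucket name, and prefix are placeholder assumptions.

```python
import json
import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

def deliver_batch(records, stream="iot-telemetry-stream"):
    """Send up to 500 records in a single call instead of one request per message."""
    return firehose.put_record_batch(
        DeliveryStreamName=stream,
        Records=[{"Data": (json.dumps(r) + "\n").encode()} for r in records[:500]],
    )

def archive_old_telemetry(bucket="iot-telemetry-archive"):
    """Move raw telemetry objects to Glacier after 90 days to lower storage cost."""
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [{
            "ID": "archive-raw-telemetry",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]},
    )
```

In production you would also inspect FailedPutCount in the put_record_batch response and retry any failed records.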
IoT applications must balance the networking throughput that end devices can realize with the most efficient way for your IoT application to process the data. We recommend that IoT deployments initially optimize data transfer based on the device constraints. Begin by sending discrete data events from the device to the cloud, making minimal use of batching multiple events in a single message. Later, if necessary, you can use serialization frameworks to compress the messages before sending them to your IoT platform.

From a cost perspective, the MQTT payload size is a critical cost optimization element for AWS IoT Core. An IoT message is billed in 5 KB increments, up to 128 KB. For this reason, each MQTT payload should be sized as close as possible to a 5 KB increment boundary without exceeding it. For example, a 6 KB payload is not as cost efficient as a 10 KB payload: both are billed as two 5 KB increments, so the cost of publishing each message is identical even though one carries more data. To take advantage of the payload size, look for opportunities to either compress data or aggregate data into messages:

• Shorten values while keeping them legible. If 5 digits of precision are sufficient, do not use 12 digits in the payload.
• If you do not require IoT rules engine payload inspection, you can use serialization frameworks to compress payloads to smaller sizes.
• Send data less frequently and aggregate messages together within the billable increments. For example, the data in a single 2 KB message sent every second can be delivered at a lower IoT message cost by aggregating two readings into one message sent every other second.

This approach has tradeoffs that should be considered before implementation. Adding complexity or delay in your devices may unexpectedly increase processing costs. A cost optimization exercise for IoT payloads should only happen after your solution has been in production and you can use a data-driven approach to determine the cost impact of changing the way data is sent to AWS IoT Core.

COST 4. How do you optimize the costs of storing the current state of your IoT device?
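The billing arithmetic behind that recommendation can be made explicit. The sketch below applies the 5 KB metering increment described above to compare per-hour billed increments for discrete versus aggregated sends; the message sizes and rates are illustrative assumptions.

```python
import math

INCREMENT_KB = 5  # AWS IoT Core meters each publish in 5 KB increments (up to 128 KB)

def billed_increments(payload_kb):
    """Number of 5 KB increments billed for a single publish."""
    return max(1, math.ceil(payload_kb / INCREMENT_KB))

def increments_per_hour(payload_kb, messages_per_hour):
    return billed_increments(payload_kb) * messages_per_hour

# One 2 KB reading every second vs. two readings aggregated into a 4 KB message every 2 seconds.
discrete = increments_per_hour(payload_kb=2, messages_per_hour=3600)    # 3600 increments
aggregated = increments_per_hour(payload_kb=4, messages_per_hour=1800)  # 1800 increments
print(discrete, aggregated)  # the aggregated strategy halves the metered increments
```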
Well-Architected IoT applications have a virtual representation of each device in the cloud. This virtual representation is composed of a managed data store or a specialized IoT application data store. In both cases, your end devices must be programmed in a way that efficiently transmits device state changes to your IoT application. For example, your device should only send its full device state if your firmware logic dictates that the full device state may be out of sync and would be best reconciled by sending all current settings. As individual state changes occur, the device should optimize the frequency with which it transmits those changes to the cloud.

In AWS IoT, device shadow and registry operations are metered in 1 KB increments, and billing is per million access/modify operations. The shadow stores the desired or actual state of each device, and the registry is used to name and manage devices. Cost optimization for device shadows and the registry focuses on managing how many operations are performed and the size of each operation. If your workload is cost sensitive to shadow and registry operations, you should look for ways to optimize shadow operations. For example, you could aggregate several reported fields into one shadow update message instead of sending each reported change independently. Grouping shadow updates together reduces the overall cost of the shadow by consolidating updates to the service.

Key AWS Services
The key AWS feature supporting cost optimization is cost allocation tags, which help you understand the costs of a system. The following services and features are important in the three areas of cost optimization:

• Cost-effective resources: Amazon Kinesis, AWS IoT Analytics, and Amazon S3 are AWS services that enable you to process multiple IoT messages in a single request in order to improve the cost-effectiveness of compute resources.
• Matching supply and demand: AWS IoT Core is a managed IoT platform for managing connectivity, device security to the cloud, messaging, routing, and device state.
• Optimizing over time: The AWS IoT Blog section of the AWS website is a resource for learning about what is newly launched as part of AWS IoT.

Resources
Refer to the following resources to learn more about AWS best practices for cost optimization.

Documentation and Blogs
• AWS IoT Blogs

Conclusion
The AWS Well-Architected Framework provides architectural best practices across the pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud for IoT applications. The framework provides a set of questions that you can use to review an existing or proposed IoT architecture, as well as a set of AWS best practices for each pillar. Using the framework in your architecture helps you produce stable and efficient systems, which allows you to focus on your functional requirements.

Contributors
The following individuals and organizations contributed to this document:
• Olawale Oladehin, Solutions Architect, Specialist, IoT
• Dan Griffin, Software Development Engineer, IoT
• Catalin Vieru, Solutions Architect, Specialist, IoT
• Brett Francis, Product Solutions Architect, IoT
• Craig Williams, Partner Solutions Architect, IoT
• Philip Fitzsimons, Sr. Manager, Well-Architected, Amazon Web Services

Document Revisions
• December 2019: Updated to include additional guidance on IoT SDK usage, bootstrapping, device
lifecycle management, and IoT.
• November 2018: First publication.
|
General
|
consultant
|
Best Practices
|
AWS_WellArchitected_Framework__Operational_Excellence_Pillar
|
Operational Excellence Pillar
AWS Well-Architected Framework
July 2020

This paper has been archived. The latest version is now available at:
https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/welcome.html

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
• Introduction
• Operational Excellence (Design Principles, Definition)
• Organization (Organization Priorities, Operating Model, Organizational Culture)
• Prepare (Design Telemetry, Improve Flow, Mitigate Deployment Risks, Operational Readiness)
• Operate (Understanding Workload Health, Understanding Operational Health, Responding to Events)
• Evolve (Learn, Share, and Improve)
• Conclusion
• Contributors
• Further Reading
• Document Revisions

Abstract
The focus of this paper is the operational excellence pillar of the AWS Well-Architected Framework. It provides guidance to help you apply best practices in the design, delivery, and maintenance of AWS workloads.

Introduction
The AWS Well-Architected Framework helps you understand the benefits and risks of decisions you make while building workloads on AWS. By using the Framework, you will learn operational and architectural best practices for designing and operating reliable, secure, efficient, and cost-effective workloads in the cloud. It provides a way to consistently measure your operations and architectures against best practices and identify areas for improvement. We believe that having Well-Architected workloads that are designed with operations in mind greatly increases the likelihood of business success.

The framework is based on five pillars:
• Operational Excellence
• Security
• Reliability
• Performance Efficiency
• Cost Optimization

This paper focuses on the operational excellence pillar and how to apply it as the foundation of your well-architected solutions. Operational excellence is challenging to achieve in environments where operations is perceived as a function isolated and distinct from the lines of business and development teams that it supports. By adopting the practices in this paper, you can build architectures that provide insight into their status, are enabled for effective and efficient operation and event response, and can continue to improve and support your business goals.

This paper is intended for those in technology roles, such as chief technology officers (CTOs), architects, developers, and operations team members. After reading this paper, you will understand AWS best practices and the strategies to use when designing cloud architectures for operational excellence. This paper does not provide implementation details or architectural patterns; however, it does include references to appropriate resources
for this information.

Operational Excellence
The operational excellence pillar includes how your organization supports your business objectives, your ability to run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value.

Design Principles
There are five design principles for operational excellence in the cloud:

• Perform operations as code: In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure, etc.) as code and update it with code. You can script your operations procedures and automate their execution by triggering them in response to events. By performing operations as code, you limit human error and enable consistent responses to events.
• Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly to increase the flow of beneficial changes into your workload. Make changes in small increments that can be reversed if they fail, to aid in the identification and resolution of issues introduced to your environment (without affecting customers when possible).
• Refine operations procedures frequently: As you use operations procedures, look for opportunities to improve them. As you evolve your workload, evolve your procedures appropriately. Set up regular game days to review and validate that all procedures are effective and that teams are familiar with them.
• Anticipate failure: Perform "pre-mortem" exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure they are effective and that teams are familiar with their execution. Set up regular game days to test workload and team responses to simulated events.
• Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.

Definition
Operational excellence in the cloud is composed of four areas:
• Organization
• Prepare
• Operate
• Evolve

Your organization's leadership defines business objectives. Your organization must understand requirements and priorities and use these to organize and conduct work to support the achievement of business outcomes. Your workload must emit the information necessary to support it. Implementing services to enable integration, deployment, and delivery of your workload will enable an increased flow of beneficial changes into production by automating repetitive processes.

There may be risks inherent in the operation of your workload. You must understand those risks and make an informed decision to enter production. Your teams must be able to support your workload. Business and operational metrics derived from desired business outcomes will enable you to understand the health of your workload and your operations activities, and to respond to incidents. Your priorities will change as your business needs and business environment change. Use these as a feedback loop to continually drive improvement for your organization and the operation of your workload.

Organization
You need to understand your organization's priorities, your organizational structure, and how your organization supports your team
members so that they can support your business outcomes. To enable operational excellence, you must understand the following:
• Organization Priorities
• Operating Model
• Organizational Culture

Organization Priorities
Your teams need to have a shared understanding of your entire workload, their role in it, and shared business goals in order to set the priorities that will enable business success. Well-defined priorities will maximize the benefits of your efforts. Review your priorities regularly so that they can be updated as needs change.

Evaluate external customer needs: Involve key stakeholders, including business, development, and operations teams, to determine where to focus efforts on external customer needs.

Evaluate internal customer needs: Involve key stakeholders, including business, development, and operations teams, to determine where to focus efforts on internal customer needs. Evaluating customer needs will ensure that you have a thorough understanding of the support that is required to achieve business outcomes. Use your established priorities to focus your improvement efforts where they will have the greatest impact (for example, developing team skills, improving workload performance, reducing costs, automating runbooks, or enhancing monitoring). Update your priorities as needs change.

Evaluate governance requirements: Ensure that you are aware of guidelines or obligations defined by your organization that may mandate or emphasize specific focus. Evaluate internal factors, such as organization policy, standards, and requirements. Validate that you have mechanisms to identify changes to governance. If no governance requirements are identified, ensure that you have applied due diligence to this determination.

Evaluate external compliance requirements: Ensure that you are aware of guidelines or obligations that may mandate or emphasize specific focus. Evaluate external factors, such as regulatory compliance requirements and industry standards. Validate that you have mechanisms to identify changes to compliance requirements. If no compliance requirements are identified, ensure that you have applied due diligence to this determination. If there are external regulatory or compliance requirements that apply to your organization, you should use the resources provided by AWS Cloud Compliance to help educate your teams so that they can determine the impact on your priorities.

Evaluate threat landscape: Evaluate threats to the business (for example, competition, business risk and liabilities, operational risks, and information security threats) and maintain current information in a risk registry. Include the impact of risks when determining where to focus efforts.

The Well-Architected Framework emphasizes learning, measuring, and improving. It provides a consistent approach for you to evaluate architectures and implement designs that will scale over time. AWS provides the AWS Well-Architected Tool to help you review your approach prior to development, the state of your workloads prior to production, and the state of your workloads in production. You can compare them to the latest AWS architectural best practices, monitor the overall status of your workloads, and gain insight into potential risks. Enterprise Support customers are eligible for a guided Well-Architected Review of their mission-critical workloads to measure their architectures against AWS best practices. They are also eligible for an Operations Review, designed
to help them identify gaps in their approach to operating in the cloud. The cross-team engagement of these reviews helps to establish a common understanding of your workloads and how team roles contribute to success. The needs identified through the review can help shape your priorities. AWS Trusted Advisor is a tool that provides access to a core set of checks that recommend optimizations that may help shape your priorities. Business and Enterprise Support customers receive access to additional checks focusing on security, reliability, performance, and cost optimization that can further help shape their priorities.

Evaluate tradeoffs: Evaluate the impact of tradeoffs between competing interests or alternative approaches to help make informed decisions when determining where to focus operations efforts or choosing a course of action. For example, accelerating speed to market for new features may be emphasized over cost optimization, or you may choose a relational database for non-relational data to simplify the effort to migrate a system, rather than migrating to a database optimized for your data type and updating your application. AWS can help you educate your teams about AWS and its services to increase their understanding of how their choices can have an impact on your workload. You should use the resources provided by AWS Support (AWS Knowledge Center, AWS Discussion Forums, and AWS Support Center) and AWS Documentation to educate your teams. Reach out to AWS Support through AWS Support Center for help with your AWS questions. AWS also shares best practices and patterns that we have learned through the operation of AWS in The Amazon Builders' Library. A wide variety of other useful information is available through the AWS Blog and The Official AWS Podcast.

Manage benefits and risks: Manage benefits and risks to make informed decisions when determining where to focus efforts. For example, it may be beneficial to deploy a workload with unresolved issues so that significant new features can be made available to customers. It may be possible to mitigate associated risks, or it may become unacceptable to allow a risk to remain, in which case you will take action to address the risk. You might find that you want to emphasize a small subset of your priorities at some point in time. Use a balanced approach over the long term to ensure the development of needed capabilities and the management of risk. Review your priorities regularly and update them as needs change.

Resources
Refer to the following resources to learn more about AWS best practices for organizational priorities.

Documentation
• AWS Trusted Advisor
• AWS Cloud Compliance
• AWS Well-Architected Framework
• AWS Business Support
• AWS Enterprise Support
• AWS Enterprise Support Entitlements
• AWS Support Cloud Operations Reviews
• AWS Cloud Adoption Framework

Operating Model
Your teams must understand their part in achieving business outcomes. Teams need to understand their roles in the success of other teams, the role of other teams in their success, and to have shared goals. Understanding responsibility, ownership, how decisions are made, and who has authority to make decisions will help focus efforts and maximize the benefits from your teams. The needs of a team will be shaped by their industry, their organization, the makeup of the team, and the characteristics of their workload. It is unreasonable to expect a single operating model to
be able to support all teams and their workloads. The number of operating models present in an organization is likely to increase with the number of development teams, and you may need to use a combination of operating models. Adopting standards and consuming services can simplify operations and limit the support burden in your operating model. The benefit of development efforts on shared standards is magnified by the number of teams who have adopted the standard and who will adopt new features. It is critical that mechanisms exist to request additions, changes, and exceptions to standards in support of the teams' activities; without this option, standards become a constraint on innovation. Requests should be approved where viable and determined to be appropriate after an evaluation of benefits and risks. A well-defined set of responsibilities will reduce the frequency of conflicting and redundant efforts. Business outcomes are easier to achieve when there is strong alignment and there are strong relationships between business, development, and operations teams.

Operating Model 2 by 2 Representations
These operating model 2 by 2 representations are illustrations to help you understand the relationships between teams in your environment. The diagrams focus on who does what and on the relationships between teams, but we also discuss governance and decision making in the context of these examples. Your teams may have responsibilities in multiple parts of multiple models, depending on the workloads they support. You may wish to break out more specialized discipline areas than the high-level ones described. There is the potential for endless variation on these models as you separate or aggregate activities, overlay teams, and provide more specific detail.

You may identify that you have overlapping or unrecognized capabilities across teams that can provide additional advantage or lead to efficiencies. You may also identify unsatisfied needs in your organization that you can plan to address. When evaluating organizational change, examine the tradeoffs between models, where your individual teams exist within the models (now and after the change), how your teams' relationships and responsibilities will change, and whether the benefits merit the impact on your organization. You can be successful using each of the following four operating models. Some models are more appropriate for specific use cases or at specific points in your development, and some of these models may provide advantages over the ones in use in your environment:

• Fully Separated Operating Model
• Separated Application Engineering and Operations (AEO) and Infrastructure Engineering and Operations (IEO) with Centralized Governance
• Separated AEO and IEO with Centralized Governance and a Service Provider
• Separated AEO and IEO with Decentralized Governance

Fully Separated Operating Model
In the following diagram, on the vertical axis we have applications and infrastructure. Applications refer to the workload serving a business outcome, which can be custom developed or purchased software. Infrastructure refers to the physical and virtual infrastructure and other software that supports that workload. On the horizontal axis we have engineering and operations. Engineering refers to the development, building, and testing of applications and infrastructure. Operations is the deployment, update, and ongoing support of applications and infrastructure.

In many organizations, this “
fully separated” model is present. The activities in each quadrant are performed by a separate team, and work is passed between teams through mechanisms such as work requests, work queues, tickets, or an IT service management (ITSM) system. The transition of tasks to or between teams increases complexity and creates bottlenecks and delays. Requests may be delayed until they become a priority. Defects identified late may require significant rework and may have to pass through the same teams and their functions once again. If there are incidents that require action by engineering teams, their responses are delayed by the hand-off activity. There is a higher risk of misalignment when business, development, and operations teams are organized around the activities or functions that are being performed. This can lead to teams focusing on their specific responsibilities instead of focusing on achieving business outcomes. Teams may be narrowly specialized, physically isolated, or logically isolated, hindering communication and collaboration.

Separated AEO and IEO with Centralized Governance
This “Separated AEO and IEO” model follows a “you build it, you run it” methodology. Your application engineers and developers perform both the engineering and the operation of their workloads. Similarly, your infrastructure engineers perform both the engineering and operation of the platforms they use to support application teams.

For this example, we are going to treat governance as centralized. Standards are distributed, provided, or shared to the application teams. You should use tools or services that enable you to centrally govern your environments across accounts, such as AWS Organizations. Services like AWS Control Tower expand this management capability, enabling you to define blueprints (supporting your operating models) for the setup of accounts, apply ongoing governance using AWS Organizations, and automate the provisioning of new accounts.

“You build it, you run it” does not mean that the application team is responsible for the full stack, tool chain, and platform. The platform engineering team provides a standardized set of services (for example, development tools, monitoring tools, backup and recovery tools, and network) to the application team. The platform team may also provide the application team access to approved cloud provider services, specific configurations of the same, or both. Mechanisms that provide a self-service capability for deploying approved services and configurations, such as AWS Service Catalog, can help limit delays associated with fulfillment requests while enforcing governance. The platform team enables full-stack visibility so that application teams can differentiate between issues with their application components and issues with the services and infrastructure components their applications consume. The platform team may also provide assistance configuring these services and guidance on how to improve the application teams' operations.

As discussed previously, it is critical that mechanisms exist for the application team to request additions, changes, and exceptions to standards in support of the teams' activities and innovation of their application. The Separated AEO and IEO model provides strong feedback loops to application teams. Day-to-day operation of a workload increases contact with customers, either through direct interaction or indirectly through support and feature requests. This heightened visibility allows
application teams to address issues more quickly. The deeper engagement and closer relationship provide insight into customer needs and enable more rapid innovation. All of this is also true for the platform team supporting the application teams. Adopted standards may be pre-approved for use, reducing the amount of review necessary to enter production. Consuming supported and tested standards provided by the platform team may reduce the frequency of issues with those services. Adoption of standards enables application teams to focus on differentiating their workloads.

Separated AEO and IEO with Centralized Governance and a Service Provider
This “Separated AEO and IEO” model follows a “you build it, you run it” methodology. Your application engineers and developers perform both the engineering and the operation of their workloads. Your organization may not have the existing skills or team members to support a dedicated platform engineering and operations team, or you may not want to make the investments of time and effort to do so. Alternatively, you may wish to have a platform team that is focused on creating capabilities that will differentiate your business, but you want to offload the undifferentiated day-to-day operations to an outsourcer. Managed services providers, such as AWS Managed Services, AWS Managed Services Partners, or Managed Services Providers in the AWS Partner Network, provide expertise implementing cloud environments and support your security and compliance requirements and business goals.

For this variation, we are going to treat governance as centralized and managed by the platform team, with account creation and policies managed with AWS Organizations and AWS Control Tower. This model does require you to modify your mechanisms to work with those of your service provider. It does not address the bottlenecks and delays created by the transition of tasks between teams, including your service provider, or the potential rework related to the late identification of defects. You gain the advantage of your provider's standards, best practices, processes, and expertise. You also gain the benefits of their ongoing development of their service offerings. Adding managed services to your operating model can save you time and resources, and lets you keep your internal teams lean and focused on strategic outcomes that will differentiate your business, rather than developing new skills and capabilities.

Separated AEO and IEO with Decentralized Governance
This “Separated AEO and IEO” model follows a “you build it, you run it” methodology. Your application engineers and developers perform both the engineering and the operation of their workloads. Similarly, your infrastructure engineers perform both the engineering and operation of the platforms they use to support application teams.

For this example, we are going to treat governance as decentralized. Standards are still distributed, provided, or shared to application teams by the platform team, but application teams are free to engineer and operate new platform capabilities in support of their workload. In this model there are fewer constraints on the application team, but that comes with a significant increase in responsibilities. Additional skills, and potentially team members, must be present to support the additional platform capabilities. The risk of significant rework is increased if skill sets are not adequate and defects are not recognized early. You
should enforce policies that are not specifically delegated to application teams. Use tools or services that enable you to centrally govern your environments across accounts, such as AWS Organizations. Services like AWS Control Tower expand this management capability, enabling you to define blueprints (supporting your operating models) for the setup of accounts, apply ongoing governance using AWS Organizations, and automate the provisioning of new accounts. It is beneficial to have mechanisms for the application team to request additions and changes to standards; they may be able to contribute new standards that can provide benefit to other application teams. The platform teams may decide that providing direct support for these additional capabilities is an effective support for business outcomes. This model limits constraints on innovation, but with significant skill and team member requirements. It addresses many of the bottlenecks and delays created by the transition of tasks between teams, while still promoting the development of effective relationships between teams and customers.

Relationships and Ownership
Your operating model defines the relationships between teams and supports identifiable ownership and responsibility.

Resources have identified owners: Understand who has ownership of each application, workload, platform, and infrastructure component, what business value is provided by that component, and why that ownership exists. Understanding the business value of these individual components and how they support business outcomes informs the processes and procedures applied against them.

Processes and procedures have identified owners: Understand who has ownership of the definition of individual processes and procedures, why those specific processes and procedures are used, and why that ownership exists. Understanding the reasons that specific processes and procedures are used enables identification of improvement opportunities.

Operations activities have identified owners responsible for their performance: Understand who has responsibility to perform specific activities on defined workloads and why that responsibility exists. Understanding responsibility for the performance of operations activities informs who will perform the action, validate the result, and provide feedback to the owner of the activity.

Team members know what they are responsible for: Understanding your role informs the prioritization of your tasks. This enables team members to recognize needs and respond appropriately.

Mechanisms exist to identify responsibility and ownership: Where no individual or team is identified, there are defined escalation paths to someone with the authority to assign ownership or to plan for that need to be addressed.

Mechanisms exist to request additions, changes, and exceptions: You are able to make requests to owners of processes, procedures, and resources. Make informed decisions to approve requests where viable and determined to be appropriate after an evaluation of benefits and risks.

Responsibilities between teams are predefined or negotiated: There are defined or negotiated agreements between teams describing how they work with and support each other (for example, response times, service level objectives, or service level agreements). Understanding the impact of the teams' work on business outcomes, and on the outcomes of other teams and organizations, informs the prioritization of their tasks and enables them to
respond appropriately. When responsibility and ownership are undefined or unknown, you are at risk both of not addressing necessary activities in a timely fashion and of redundant and potentially conflicting efforts emerging to address those needs.

Resources
Refer to the following resources to learn more about AWS best practices for operations design.

Videos
• AWS re:Invent 2019: [REPEAT 1] How to ensure configuration compliance (MGT303-R1)
• AWS re:Invent 2019: Automate everything: Options and best practices (MGT304)

Documentation
• AWS Managed Services
• AWS Organizations Features
• AWS Control Tower Features

Organizational Culture
Provide support for your team members so that they can be more effective in taking action and supporting your business outcomes.

Executive Sponsorship: Senior leadership clearly sets expectations for the organization and evaluates success. Senior leadership is the sponsor, advocate, and driver for the adoption of best practices and the evolution of the organization.

Team members are empowered to take action when outcomes are at risk: The workload owner has defined guidance and scope, empowering team members to respond when outcomes are at risk. Escalation mechanisms are used to get direction when events are outside of the defined scope.

Escalation is encouraged: Team members have mechanisms, and are encouraged, to escalate concerns to decision makers and stakeholders if they believe outcomes are at risk. Escalation should be performed early and often so that risks can be identified and prevented from causing incidents.

Communications are timely, clear, and actionable: Mechanisms exist, and are used, to provide timely notice to team members of known risks and planned events. Necessary context, details, and time (when possible) are provided to support determining whether action is necessary, what action is required, and how to take action in a timely manner. For example, provide notice of software vulnerabilities so that patching can be expedited, or provide notice of planned sales promotions so that a change freeze can be implemented to avoid the risk of service disruption. Planned events can be recorded in a change calendar or maintenance schedule so that team members can identify what activities are pending. On AWS, AWS Systems Manager Change Calendar can be used to record these details. It supports programmatic checks of calendar status to determine whether the calendar is open or closed to activity at a particular point in time. Operations activities may be planned around specific "approved" windows of time that are reserved for potentially disruptive activities. AWS Systems Manager Maintenance Windows allows you to schedule activities against instances and other supported resources to automate those activities and make them discoverable.

Experimentation is encouraged: Experimentation accelerates learning and keeps team members interested and engaged. An undesired result is a successful experiment that has identified a path that will not lead to success. Team members are not punished for successful experiments with undesired results. Experimentation is required for innovation to happen and to turn ideas into outcomes.

Team members are enabled and encouraged to maintain and grow their skill sets: Teams must grow their skill sets to adopt new technologies and to support changes in demand and responsibilities in support of your workloads. Growth of skills in new technologies is frequently a source of team member satisfaction and supports
innovation. Support your team members' pursuit and maintenance of industry certifications that validate and acknowledge their growing skills. Cross-train to promote knowledge transfer and reduce the risk of significant impact when you lose skilled and experienced team members with institutional knowledge. Provide dedicated, structured time for learning.

AWS provides resources, including the AWS Getting Started Resource Center, AWS Blogs, AWS Online Tech Talks, AWS Events and Webinars, and the AWS Well-Architected Labs, that provide guidance, examples, and detailed walkthroughs to educate your teams. AWS also shares best practices and patterns that we have learned through the operation of AWS in The Amazon Builders' Library, and a wide variety of other useful educational material through the AWS Blog and The Official AWS Podcast. You should take advantage of the education resources provided by AWS, such as the Well-Architected Labs, AWS Support (AWS Knowledge Center, AWS Discussion Forums, and AWS Support Center), and AWS Documentation, to educate your teams. Reach out to AWS Support through AWS Support Center for help with your AWS questions. AWS Training and Certification provides some free training through self-paced digital courses on AWS fundamentals. You can also register for instructor-led training to further support the development of your teams' AWS skills.

Resource teams appropriately: Maintain team member capacity and provide tools and resources to support your workload needs. Overtasking team members increases the risk of incidents resulting from human error. Investments in tools and resources (for example, providing automation for frequently executed activities) can scale the effectiveness of your team, enabling them to support additional activities.

Diverse opinions are encouraged and sought within and across teams: Leverage cross-organizational diversity to seek multiple unique perspectives. Use this perspective to increase innovation, challenge your assumptions, and reduce the risk of confirmation bias. Grow inclusion, diversity, and accessibility within your teams to gain beneficial perspectives.

Organizational culture has a direct impact on team member job satisfaction and retention. Enable the engagement and capabilities of your team members to enable the success of your business.

Resources
Refer to the following resources to learn more about AWS best practices for operations design.

Videos
• AWS re:Invent 2019: [REPEAT 1] How to ensure configuration compliance (MGT303-R1)
• AWS re:Invent 2019: Automate everything: Options and best practices (MGT304)

Documentation
• AWS Managed Services
• AWS Managed Services Service Description
• AWS Organizations Features
• AWS Control Tower Features

Prepare
To prepare for operational excellence, you have to understand your workloads and their expected behaviors. You will then be able to design them to provide insight into their status and to build the procedures to support them. To prepare for operational excellence, you need to perform the following:
• Design Telemetry
• Improve Flow
• Mitigate Deployment Risks
• Understand Operational Readiness

Design Telemetry
Design your workload so that it provides the information necessary for you to understand its internal state (for example, metrics, logs, events, and traces) across all components in support of observability and investigating issues. Iterate to
Iterate to develop the telemetry necessary to monitor the health of your workload, identify when outcomes are at risk, and enable effective responses. In AWS, you can emit and collect logs, metrics, and events from your applications and workload components to understand their internal state and health. You can integrate distributed tracing to track requests as they travel through your workload. Use this data to understand how your application and underlying components interact, and to analyze issues and performance. When instrumenting your workload, capture a broad set of information to enable situational awareness (for example, changes in state, user activity, privilege access, and utilization counters), knowing that you can use filters to select the most useful information over time.
Implement application telemetry: Instrument your application code to emit information about its internal state, status, and achievement of business outcomes, for example, queue depth, error messages, and response times. Use this information to determine when a response is required. You should install and configure the unified Amazon CloudWatch agent to send system-level and application logs and advanced metrics from your EC2 instances and physical servers to Amazon CloudWatch. Generate and publish custom metrics using the AWS CLI or the CloudWatch API, and ensure that you publish insightful business metrics as well as technical metrics to help you understand your customers' behaviors (see the sketch after these practices). You can send logs directly from your application to CloudWatch using the CloudWatch Logs API, or send events using the AWS SDK and Amazon EventBridge. Insert logging statements into your AWS Lambda code to automatically store them in CloudWatch Logs.
Implement and configure workload telemetry: Design and configure your workload to emit information about its internal state and current status, for example, API call volume, HTTP status codes, and scaling events. Use this information to help determine when a response is required. Use a service like Amazon CloudWatch to aggregate logs and metrics from workload components (for example, API logs from AWS CloudTrail, AWS Lambda metrics, Amazon VPC Flow Logs, and other services).
Implement user activity telemetry: Instrument your application code to emit information about user activity, for example, click streams or started, abandoned, and completed transactions. Use this information to understand how the application is used and its patterns of usage, and to determine when a response is required.
Implement dependency telemetry: Design and configure your workload to emit information about the status (for example, reachability or response time) of resources it depends on. Examples of external dependencies include external databases, DNS, and network connectivity. Use this information to determine when a response is required.
Implement transaction traceability: Implement your application code and configure your workload components to emit information about the flow of transactions across the workload. Use this information to determine when a response is required and to assist you in identifying the factors contributing to an issue. On AWS, you can use distributed tracing services such as AWS X-Ray to collect and record traces as transactions travel through your workload, generate maps to see how transactions flow across your workload and services, gain insight into the relationships between components, and identify and analyze issues in real time.
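To make the earlier point about publishing custom business metrics concrete, the following is a minimal sketch that calls the CloudWatch PutMetricData API through boto3. The namespace, metric names, and dimension values are hypothetical placeholders; substitute the outcomes that matter to your workload.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_order_metrics(orders_placed: int, abandoned_carts: int, environment: str) -> None:
    """Publish business metrics so dashboards and alarms can track outcomes, not just infrastructure."""
    cloudwatch.put_metric_data(
        Namespace="MyCompany/OrderService",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": "OrdersPlaced",
                "Dimensions": [{"Name": "Environment", "Value": environment}],
                "Value": orders_placed,
                "Unit": "Count",
            },
            {
                "MetricName": "AbandonedCarts",
                "Dimensions": [{"Name": "Environment", "Value": environment}],
                "Value": abandoned_carts,
                "Unit": "Count",
            },
        ],
    )

# Example usage, for instance at the end of a reporting interval
publish_order_metrics(orders_placed=42, abandoned_carts=7, environment="production")
```

A CloudWatch alarm can then watch these metrics to raise an alert when business outcomes, rather than only technical indicators, are at risk.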
Iterate and develop telemetry as workloads evolve to ensure that you continue to receive the information necessary to gain insight into the health of your workload.
Resources
Refer to the following resources to learn more about AWS best practices for operations design.
Videos
• AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormation (DEV313)
• AWS re:Invent 2016: DevOps on AWS: Accelerating Software Delivery with AWS Developer Tools (DEV201)
• AWS CodeStar: The Central Experience to Quickly Start Developing Applications on AWS
Documentation
• Accessing Amazon CloudWatch Logs for AWS Lambda
• Monitoring CloudTrail Log Files with Amazon CloudWatch Logs
• Publishing Flow Logs to CloudWatch Logs
• Enhancing workload observability using Amazon CloudWatch Embedded Metric Format
• Getting Started With Amazon CloudWatch
• Store and Monitor OS & Application Log Files with Amazon CloudWatch
• High-Resolution Custom Metrics and Alarms for Amazon CloudWatch
• Monitoring AWS Health Events with Amazon CloudWatch Events
• AWS CloudFormation Documentation
• AWS Developer Tools
• Set Up a CI/CD Pipeline on AWS
• AWS X-Ray
• AWS Tagging Strategies
Improve Flow
Adopt approaches that improve the flow of changes into production and that enable refactoring, fast feedback on quality, and bug fixing. These accelerate beneficial changes entering production, limit issues deployed, and enable rapid identification and remediation of issues introduced through deployment activities. In AWS, you can view your entire workload (applications, infrastructure, policy, governance, and operations) as code. It can all be defined in and updated using code. This means you can apply the same engineering discipline that you use for application code to every element of your stack.
Use version control: Use version control to enable tracking of changes and releases. Many AWS services offer version control capabilities. Use a revision or source control system like AWS CodeCommit to manage code and other artifacts, such as version-controlled AWS CloudFormation templates of your infrastructure.
Test and validate changes: Test and validate changes to help limit and detect errors. Automate testing to reduce errors caused by manual processes and reduce the level of effort to test. On AWS, you can create temporary parallel environments to lower the risk, effort, and cost of experimentation and testing. Automate the deployment of these environments using AWS CloudFormation to ensure consistent implementations of your temporary environments (see the sketch below).
Use configuration management systems: Use configuration management systems to make and track configuration changes. These systems reduce errors caused by manual processes and reduce the level of effort to deploy changes.
Use build and deployment management systems: Use build and deployment management systems. These systems reduce errors caused by manual processes and reduce the level of effort to deploy changes. In AWS, you can build Continuous Integration/Continuous Deployment (CI/CD) pipelines using services like the AWS Developer Tools (for example, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeDeploy, and AWS CodeStar).
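As a concrete illustration of automating a temporary, parallel test environment with AWS CloudFormation, the sketch below creates a short-lived stack from a template kept in version control and tears it down after tests complete. The stack name, template path, and parameter names are hypothetical; in practice this would typically run as a stage in your pipeline.

```python
import boto3

cfn = boto3.client("cloudformation")

def create_test_environment(stack_name: str, template_path: str, env_tag: str) -> None:
    """Create a short-lived, consistently configured environment for testing a change."""
    with open(template_path) as f:
        template_body = f.read()
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Parameters=[{"ParameterKey": "EnvironmentName", "ParameterValue": env_tag}],  # hypothetical parameter
        Capabilities=["CAPABILITY_NAMED_IAM"],
        Tags=[{"Key": "environment", "Value": env_tag}, {"Key": "temporary", "Value": "true"}],
    )
    # Wait until the environment is ready before running tests against it
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

def delete_test_environment(stack_name: str) -> None:
    """Tear the environment down once tests finish to avoid ongoing cost."""
    cfn.delete_stack(StackName=stack_name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)

# Example usage inside a test stage of a pipeline
create_test_environment("orders-test-pr-123", "infra/orders.yaml", "pr-123")
# ... run integration tests against the temporary stack ...
delete_test_environment("orders-test-pr-123")
```

Because the environment is defined entirely by the template, every test run exercises the same configuration that will later be deployed to production.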
Perform patch management: Perform patch management to gain features, address issues, and remain compliant with governance. Automate patch management to reduce errors caused by manual processes and reduce the level of effort to patch. Patch and vulnerability management are part of your benefit and risk management activities. It is preferable to have immutable infrastructures and deploy workloads in verified, known good states. Where that is not viable, patching in place is the remaining option. Updating machine images, container images, or Lambda custom runtimes and additional libraries to remove vulnerabilities is part of patch management. You should manage updates to Amazon Machine Images (AMIs) for Linux or Windows Server images using EC2 Image Builder. You can use Amazon Elastic Container Registry with your existing pipeline to manage Amazon ECS images and manage Amazon EKS images. AWS Lambda includes version management features. Patching should not be performed on production systems without first testing in a safe environment, and patches should only be applied if they support an operational or business outcome. On AWS, you can use AWS Systems Manager Patch Manager to automate the process of patching managed systems and schedule the activity using AWS Systems Manager Maintenance Windows.
Share design standards: Share best practices across teams to increase awareness and maximize the benefits of development efforts. On AWS, application, compute, infrastructure, and operations can be defined and managed using code methodologies. This allows for easy release, sharing, and adoption. Many AWS services and resources are designed to be shared across accounts, enabling you to share created assets and learnings across your teams. For example, you can share CodeCommit repositories, Lambda functions, Amazon S3 buckets, and AMIs with specific accounts. When you publish new resources or updates, use Amazon SNS to provide cross-account notifications; subscribers can use Lambda to get new versions. If shared standards are enforced in your organization, it's critical that mechanisms exist to request additions, changes, and exceptions to standards in support of teams' activities. Without this option, standards become a constraint on innovation.
Implement practices to improve code quality: Implement practices to improve code quality and minimize defects, for example, test-driven development, code reviews, and standards adoption.
Use multiple environments: Use multiple environments to experiment, develop, and test your workload. Use increasing levels of controls as environments approach production to gain confidence that your workload will operate as intended when deployed.
Make frequent, small, reversible changes: Frequent, small, and reversible changes reduce the scope and impact of a change. This eases troubleshooting, enables faster remediation, and provides the option to roll back a change.
Fully automate integration and deployment: Automate build, deployment, and testing of the workload. This reduces errors caused by manual processes and reduces the effort to deploy changes. Apply metadata using Resource Tags and AWS Resource Groups, following a consistent tagging strategy, to enable identification of your resources. Tag your resources for organization, cost accounting, access controls, and targeting the execution of automated operations activities.
Resources
Refer to the following resources to learn more about AWS best practices for operations design.
Videos
• AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormation (DEV313)
• AWS re:Invent 2016: DevOps on AWS: Accelerating Software Delivery with AWS Developer Tools
(DEV201) • AWS CodeStar: The Central Experience to Quickly Start Developing Applications on AWS Documentation • What Is AWS Resou rce Groups • Getting Started With Amazon CloudWatch • Store and Monitor OS & Application Log Files with Amazon Clo udWatch ArchivedAmazon Web Services Operational Excellence Pillar 24 • HighResolution Custom Metrics and Alarms for Amazon CloudWatch • Monitoring AWS Health Events with Amazon CloudWatch Events • AWS CloudFormation Documentation • AWS Developer Tools • Set Up a CI/CD Pipeline on AWS • AWS X Ray • AWS Tagging Strategies Mitigate Deployment Risks Adopt approaches that provide fast feedback on quality a nd enable rapid recovery from changes that do not have desired outcomes Using these practices mitigates the impact of issues introduced through the deployment of changes The design of your workload should include how it will be deployed updated and operated You will want to implement engineering practices that align with defect reduction and quick and safe fixes Plan for unsuccessful changes: Plan to revert to a known good state or remediate in the production environment if a change does not have the desired outcome This preparation reduces recovery time through faster responses Test and validate changes: Test changes and validate the results at all lifecycle stages to confirm new features and minimize the risk and impact of failed deployments On AWS you can create temporary parallel environments to lower the risk effort and cost of experimentation and testing Automate the deployment of these environments using AWS CloudFormation to ensure consistent implementations of your temporary environments Use deployment management systems: Use deployment management systems to track and implement change This reduces errors cause by manual processes and reduces the effort to deploy changes In AWS y ou can build Continuous Integration/Continuous Deployment (CI/CD) pipelines using services like the AWS Developer Tools (for example AWS CodeCommit AWS CodeBuild AWS CodePipeline AWS CodeDeploy and AWS CodeStar ) ArchivedAmazon Web Services Operational Excellence Pillar 25 Have a change calendar and track when significant business or operational activities or events are planned that may be impacted by implementation of change Adjust activities to manage risk ar ound those plans AWS Systems Manager Change Calendar provides a mechanism to document blocks of time as open or closed to changes and why an d share that information with other AWS accounts AWS Systems Manager Automation scripts can be configured to adhere to the change calendar state AWS Systems Manager Maintenance Windows can be used to schedule the performance of AWS SSM Run Command or Automation scripts AWS Lambda invocations or AWS Step Function activities at specified times Mark these activities in your change calendar so that they can be included in your evaluation Test using limited deployments: Test with limited deployments alongside existing systems to confirm desired o utcomes prior to full scale deployment For example use deployment canary testing or one box deployments Deploy using parallel environments: Implement changes onto parallel environments and then transition over to the new environment Maintain the prior environment until there is confirmation of successful deployment Doing so minimizes recovery time by enabling rollback to the previous environment Deploy frequent small reversible changes: Use frequent small and reversible changes to reduce the scop e of a change This results in easier 
troubleshooting and faster remediation with the option to roll back a change Fully automate integration and deployment: Automate build deployment and testing of the workload This reduces errors cause by manual proc esses and reduces the effort to deploy changes Automate testing and rollback: Automate testing of deployed environments to confirm desired outcomes Automate rollback to previous known good state when outcomes are not achieved to minimize recovery time an d reduce errors caused by manual processes Resources Refer to the following resources to learn more about AWS best practices for operations design Videos • AWS re:Invent 2016: Infrastructure Continuous Delivery Using AWS CloudFormation (DEV313) ArchivedAmazon Web Services Operational Excellence Pillar 26 • AWS re:Invent 2016: DevOps on AWS: Accelerating Software Delivery with AWS Developer Tools (DEV201) • AWS CodeSta r: The Central Experience to Quickly Start Developing Applications on AWS Documentation • Getting Started With Amazon CloudWatch • Store and Monitor OS & Application Log Files with Amazon CloudWatch • HighResolution Custom Metrics and Alarms for Amazon CloudWatch • Monitoring AWS Health Events with Amazon CloudWatch Events • AWS CloudFormation Documentation • AWS Developer Tools • Set Up a CI/CD Pipeline on AWS • AWS X Ray • AWS Tagging Strategies Operational Readiness Evaluate the operational readiness of your workload processes procedures and personnel to understand the operational risks related to your workload You sh ould use a consistent process (including manual or automated checklists) to know when you are ready to go live with your workload or a change This will also enable you to find any areas that you need to make plans to address You will have runbooks that d ocument your routine activities and playbooks that guide your processes for issue resolution Ensure personnel capability: Have a mechanism to validate that you have the appropriate number of trained personnel to provide support for operational needs Train personnel and adjust personnel capacity as necessary to maintain effective support You will need to have enough team members to cover all activities (including on call) Ensure that your teams have the necessary skills to be successful with t raining on your workload your operations tools and AWS ArchivedAmazon Web Services Operational Excellence Pillar 27 AWS provides resources including the AWS Getting Started Resource Center AWS Blogs AWS Online Tech Talks AWS Events and Webinars and the AWS Well Architected Labs that provide guidance examples and detailed walkthroughs to educate your teams Additionally AWS Training and Certification provides some free training through selfpaced digital courses on AWS fundamentals You can also register for instructor led training to further support the development of your teams’ AWS skills Ensure consistent review of operational readiness: Ensure you have a consistent review of your readiness to operate a workload Review s must include at a minimum the operational readiness of the teams and the workload and security requirements Implement review activities in code and trigger automated review in response to events where appropriate to ensure consistency speed of execution and redu ce errors caused by manual processes You should automate workload configuration testing by making baselines using AWS Config and checking your configurations using AWS Config rules You can evaluate security requirements and compliance using the services and features of AWS Security Hub 
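For example, an automated readiness check can query AWS Config rule compliance before a go-live decision. The sketch below is a minimal illustration using boto3; the rule names are hypothetical placeholders and would be replaced by the Config rules your organization actually enforces.

```python
import boto3

config = boto3.client("config")

# Hypothetical rule names; substitute the AWS Config rules your organization enforces.
REQUIRED_RULES = ["required-tags", "encrypted-volumes", "restricted-ssh"]

def readiness_check(rule_names) -> bool:
    """Return True only if every required AWS Config rule reports COMPLIANT."""
    response = config.describe_compliance_by_config_rule(ConfigRuleNames=rule_names)
    ready = True
    for item in response["ComplianceByConfigRules"]:
        compliance = item["Compliance"]["ComplianceType"]
        print(f"{item['ConfigRuleName']}: {compliance}")
        if compliance != "COMPLIANT":
            ready = False
    return ready

if readiness_check(REQUIRED_RULES):
    print("Configuration checks passed; proceed with operational readiness review.")
else:
    print("Configuration checks failed; address non-compliant resources before launch.")
```

A check like this can run as part of a deployment pipeline or be triggered by a change event, so the review happens consistently rather than relying on a manual checklist alone.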
These services will aid in d etermining if your workloads are aligned with best practices and standards Use runbooks to perform procedures: Runbooks are documented procedures to achieve specific outcomes Enable consistent and prompt responses to well understood events by documenting procedures in runbooks Implement runbooks as code and trigger the execution of runbooks in response to events where appropriate to ensure consistency speed responses and reduce errors caused by manual processes Use playbooks to identify issues: Playb ooks are documented processes to investigate issues Enable consistent and prompt responses to failure scenarios by documenting investigation processes in playbooks Implement playbooks as code and trigger playbook execution in response to events where app ropriate to ensure consistency speed responses and reduce errors caused by manual processes AWS allows you to treat your operations as code scripting your runbook and playbook activities to reduce the risk of human error You can use Resource Tags or Resource Groups with your scripts to selectively execute based on criteria you have d efined (for example environment owner role or version) You can use scripted procedures to enable automation by triggering the scripts in response to events By treating both your operations and workloads as code you can also script and automate the evaluation of your environments ArchivedAmazon Web Services Operational E xcellence Pillar 28 You should script procedures on your instances using AWS Systems Manager (SSM) Run Command use AWS Systems Manager Automation to script actions and create workflows on instances and other resources or use AWS Lambda serverless compute functions to script responses to events across AWS service APIs and your own custom interfaces You can also use AWS Step Functions to coordinate multiple AWS s ervices scripted into serverless workflows Automate your responses by triggering these s cripts using CloudWatch Events and route desired events to additional operations support systems using Amazon EventBridge You should test your procedures failure scenarios and the success of your responses (for ex ample by holding game days and testing prior to going live) to identify areas you need to plan to address On AWS you can create temporary parallel environments to lower the risk effort and cost of experimentation and testing Automate the deployment of these environments using AWS CloudFormation to ensure consistent implementations of your temporary environments Perform failure injection testing in safe environments where there will be acceptable or no customer impact and develop or revise appropriate responses Make informed decisions to deploy systems and changes: Evaluate the capabilities of the team to support the workload and the workload's compliance with governance Evaluate these against the benefits of deployment when determining whether to transition a system or change into production Understand the benefits and risks to make informed decisions Use “pre mortems” to anticipate failure and create procedures where appropriate When you make changes to the checklists you use to evaluate your workloads plan what you will do with live systems that no longer comply Resources Refer to the following resources to learn more about AWS best practices for operational readiness Documentation • AWS Lambda • AWS Systems Manager • AWS Config Rules – Dynamic Compliance Checking for Cloud Resources • How to track configurati on changes to CloudFormation stacks using AWS Config 
ArchivedAmazon Web Services Operational Excellence Pillar 29 • Amazon Inspector Update blog post • AWS Events and Webinars • AWS Training • AWS Well Architected Labs • AWS launches Tag Policies • Using AWS Systems Manager Change Calendar to prevent changes during critical events ArchivedAmazon Web Services Operational Excellence Pillar 30 Operate Success is the achievement of business outcomes as measured by the metrics you define By understanding the health of your workload and operations you can identify when organizational and business outcomes may become at risk or are at risk and respond appropriately To be successful you must be able to : • Understand Workload Health • Understand Operational Health • Respond to Events Understanding Workload Health Define capture and analyze workload metrics to gain visibility to workload events so that you can take appropriate action Your team should be able to understand the health of your workload easily You will want to use metrics based on workload outcomes to gain useful insights You should use these metrics to implement dashboards with business and technical viewpoints that will help team members make informed decisions AWS makes it eas y to bring togethe r and analyze your worklo ad logs so that you can generate metrics understand the health of your workload and gain insight from operations over time Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business outcomes (for example o rder rate customer retention rate and profit versus operating expense) and customer outcomes (for example customer satisfaction) Evaluate KPIs to determine workload success Define workload metrics: Define workload metrics to measure the achievement of KPIs (for example abandoned shopping carts orders placed cost price and allocated workload expense) Define workload metrics to measure the health of the workload (for example interface response time error rate requests made requests completed and utilization) Evaluate metrics to determine if the workload is achieving desired outcomes and to understand the health of the workload ArchivedAmazon Web Services Operational Excellence Pillar 31 You should send log data to a service like CloudWatch Logs and generate metrics from observations of necessary log content CloudWatch has specialized features like Amazon CloudWatch Insights for NET and SQL Server and Container Insights that can assist you by identifying and setting up key metrics logs and alarms across your specifically supported application resources and technology stack Collect and analyze workload metrics: Perform regular proactive reviews of metrics to identify trends and determine where appropriate responses are needed You should aggregate log data from your application workload components services and API calls to a service like CloudWatch Logs Generate metrics from observations of necessary log content to enable insight into the performance of operations activities In the AWS shared responsibilit y model portions of monitoring are delivered to you through the AWS Personal Health Dashboard This dashboard provide s alerts and remediation guidance when AWS is experiencing events that might affect you Customers with Business and Enterprise Support subscriptions also get access to the AWS Health API enabling integration to the ir event management systems On AWS you can export your log data to Amazon S3 or send logs directly to Amazon S3 for long term storage Using AWS Glue you can discover and prepare your log data in 
Amazon S3 for analytics storing a ssociated metadata in the AWS Glue Data Catalog Amazon Athena through its native integration with Glue can then be used to analyze your log data querying it using standard SQL Using a business intelligence tool like Amazon QuickSight you can visualize explore and analyze your data An alternative solution would be to use the Amazon Elasticsearch Service and Kibana to collect analyze and display logs on AWS across multiple accounts and AWS Regions Establish workload metrics baselines: Establish ba selines for metrics to provide expected values as the basis for comparison and identification of under and over performing components Identify thresholds for improvement investigation and intervention Learn expected patterns of activity for workload: Establish patterns of workload activity to identify anomalous behavior so that you can respond appropriately if necessary ArchivedAmazon Web Services Operational Excellence Pillar 32 CloudWatch through the CloudWatch Anomaly Detection feature applies statistical and machine learning algorithms to generate a range of expected values that represent normal metric behavior Alert when workload outcomes are at risk: Raise an alert when workload outcomes are at risk so that you can respond appropriately if necessary Ideally you have previously identified a metric threshold that you are able to alarm upon or an event that you can use to trigger an automated response You can also use CloudWatch Logs Insights to interactively search and analyze your log data using a purpose built query language CloudWatch Logs Insights automatically discovers fields in logs from AWS services and custom log events in JSON It scales with your log v olume and query complexity and gives you answers in seconds helping you to search for the contributing factors of an incident Alert when workload anomalies are detected: Raise an alert when workload anomalies are detected so that you can respond appropri ately if necessary Your analysis of your workload metrics over time may establish patterns of behavior that you can quantify sufficiently to define an event or raise an alarm in response Once trained the CloudWatch Anomaly Detection feature can be used to alarm on detected anomali es or can provide overlaid expected values onto a graph of metric data for ongoing comparison Validate the achievement of outcomes and the effectiveness of KPIs and metrics: Create a business level view of your workload operations to help you determine if you are satisfying needs and to identify areas that need improvement to reach business goals Validate the effectiveness of KPIs an d metric s and revise them if necessary AWS also has support for third party log analysis systems and business intelligence tools through the AWS service APIs and SDKs (for example Grafana Kibana and Logstash) Resources Refer to the following resources to learn more about AWS best practices for understanding workload health ArchivedAmazon Web Services Operational Excellence Pillar 33 Videos • AWS re:Invent 2015: Log Monitor and Analyze your IT with Amazon CloudWatch (DVO315) • AWS re:Invent 2016: Amazon CloudWatch Logs and AWS Lambda: A Match Made in Heaven (DEV301) Documentation • What Is Amazon CloudWatch Applicati on Insights for NET and SQL Server? 
• Store and Monitor OS & Application Log Files with Amazon CloudWatch • API & CloudFormation Support for Amazon CloudWatch Dashboards • AWS Answers: Centralized Logging Understanding Operational Health Define capture and analyze operations metrics to gain visibility to wo rkload events so that you can take appropriate action Your team should be able to understand the health of your operations easily You will want to use metrics based on operations outcomes to gain useful insights You should use these metrics to implement dashboards with business and technical viewpoints that will help team members make informed decisions AWS makes it easier to bring together and analyze your operations logs so that you can generate metrics know the status of your operations and gain in sight from operations over time Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business (for example new features delivered) and customer outcomes (for example customer support cases) Evaluate KPIs to d etermine operations success Define operations metrics: Define operations metrics to measure the achievement of KPIs (for example successful deployments and failed deployments) Define operations metrics to measure the health of operations activities (for example mean time to detect an incident (MTTD) and mean time to recovery (MTTR) from an incident) Evaluate metrics to determine if operations are achieving desired outcomes and to understand the health of your operations activities ArchivedAmazon Web Services Operational Excellence Pillar 34 Collect and analyze operations metrics: Perform regular proactive reviews of metrics to identify trends and determine where appropriate responses are needed You should aggregate log data from the execution of you r operations activities and operations API calls into a service like CloudWatch Logs Generate metrics from observations of necessary log content to gain insight into the performance of operations activities On AWS you can export your log data to Amazon S3 or send logs directly to Amazon S3 for long term storage Using AWS Glue you can discover and prepare your log data in Amazon S3 for analytics storing associated metadata in the AWS Glue Data Catalog Amazon Athena through its native integration with Glue can then be used to analyze your log data querying it using standard SQL Using a business intelligence tool like Amazon QuickSight you can visualize explore and analyze your data Establish operations metrics baselines: Establish baselines for metrics to provide expected values as the basis f or comparison and identification of under and over performing operations activities Learn expected patterns of activity for operations : Establish patterns of operations activities to identify anomalous activity s o that you can respond appropriately if necessary Alert when workload outcomes are at risk: Raise an alert when operations outcomes are at risk so that you can res pond appropriately if necessary Ideally you have previously identified a metric that you are able to alarm upon o r an event that you can use to trigger an automated response You can also use CloudWatch Logs Insights to interactively search and analyze your log data using a purpose built query language CloudWatch Logs Insights automatically discovers fields in logs from AWS services and custom log events in JSON It scales with your log volume and query complexity and gives you answers in seconds helping you to search for the contributing factors of an incident Alert when 
operations anomalies are detected: Raise an alert when operations anomalies are detecte d so that you can respond appropriately if necessary Your analysis of your operations metrics over time may established patterns of behavior that you can quantify sufficiently to define an event or raise an alarm in response ArchivedAmazon Web Services Operational Excellence Pillar 35 Once trained the CloudWatch Anomaly Detection feature can be used to alarm on detected anomalies or can provide overlaid expected values onto a graph of metric data for ongoing comparison Validate the achievement of outcomes and the effectiveness of KPIs and metrics: Create a business level view of your operations activities to help you determine if you are satisfying needs and to identify areas that need improvement to reach business goals Validate the effectiveness of KPIs and metric s and revise them if necessary AWS also has support for third party log analysis systems and business intelligence tools through the AWS service APIs and SDKs (for example Grafana Kibana and Logstash) Resources Refer to the following resources to learn more about AWS best practices for understanding operational health Videos • AWS re:Invent 2015: Log Monitor and Analyze your IT with Ama zon CloudWatch (DVO315) • AWS re:Invent 2016: Amazon CloudWatch Logs and AWS Lambda: A Match Made in Heaven (DEV301) Documentation • Store and Monitor OS & Application Log Files with Amazon CloudWatch • API & CloudFormation Support for Amazon CloudWatch Dashboards • AWS Answers: Centralized Logging Responding to Events You should anticipate operational events both planned (for example sales promotion s deployments and failure tests) and unplanned (for example surges in utilization and component failures) You should use your existing runbooks and playbooks to deliver consistent results when you respond to alerts Defined alerts should be owned by a role or a team that is accountable for the response and escalations You will also want to know the business impact of your system components and use this to target efforts when needed You should perform a root cause analysis (RCA) after events and then prevent recurrence of failures or document workarounds ArchivedAmazon Web Services Operational Excellence Pillar 36 AWS simplifies your event response by providing tools supporting all aspects of your workload and operations as code These tools allow you to script responses to operations events and trigger their e xecution in response to monitoring data In AWS you can improve recovery time by replacing failed components with known good versions rather than trying to repair them You can then carry out analysis on the failed resource out of band Use processes for event incident and problem management: Have processes to address observed events events that require intervention (incidents) and events that require intervention and either recur or cannot currently be resolved (problems) Use these processes to miti gate the impact of these events on the business and your customers by ensuring timely and appropriate responses On AWS you can use AWS Systems Manager OpsCenter as a central location to view investigate and resolve operational issues related to any AWS resource It aggregates operational issue s and provid es contextually relevant data to assist in incident response Have a process per alert: Have a well defined response (runbook or playbook) with a specifically identified owner for any event for which you raise an alert This ensures effective and prompt responses to 
operations events and prevents actionable events from being obscured by less valuable notificat ions Prioritize operational events based on business impact: Ensure that when multiple events require intervention those that are most significant to the business are addressed first For example impacts can include loss of life or injury financial los s or damage to reputation or trust Define escalation paths: Define escalation paths in your runbooks and playbooks including what triggers escalation and procedures for escalation Specifically identify owners for each action to ensure effective and pr ompt responses to operations events Identify when a human decision is required before an action is taken Work with decision makers to have that decision made in advance and the action preapproved so that MTTR is not extended waiting for a response Enable push notifications: Communicate directly with your users (for example with email or SMS) when the services they use are impacted and again when the services return to normal operating conditions to enable users to take appropriate action ArchivedAmazon Web Services Operational Excellence Pillar 37 Communicat e status through dashboards: Provide dashboards tailored to their target audiences (for example internal technical teams leadership and customers) to communicate the current operating status of the business and provide metrics of interest You can creat e dashboards using Amazon CloudWatch Dashboards on customizable home pages in the CloudWatch console Using business intelligence services like Amazon QuickSight you can create and publish interactive dashboards of your workload and operational health (for example order rates connected users and transaction times) Create Dashboards that present syst em and business level views of your metrics Automate responses to events: Automate responses to events to reduce errors caused by manual processes and to ensure prompt and consistent responses There are multiple ways to automate the execution of runbook and playbook actions on AWS To respond to an event from a state change in your AWS resources or from your own custom events you should create CloudWatch Events rules to trigger responses through CloudWatch targets (for example Lambda functions Amazon Simple Notification Service (Amazon SNS) topics Amazon ECS tasks and AWS Systems Manager Automation) To respond to a metric that crosses a thre shold for a resource (for example wait time) you should create CloudWatch alarms to perform one or more actions using Amazon EC2 actions Auto Scaling actions or to send a notification to an Amazon SNS topic If you need to perform custom actions in response to an alarm invoke Lambda through Amazon SNS notification Use Amazon SNS to publish event notifications and escalation messages to keep people informed AWS also supports third party systems through the AWS service APIs and SDKs There are a number of monitoring tools provided by APN P artners and third parties that allow for monitoring notifications and responses Some of these tools include New Relic Splun k Loggly SumoLogic and Datadog You should keep critical manual procedures available for use when automated procedures fail Resources Refer to the following resources to learn more about AWS best practices for responding to events ArchivedAmazon Web Services Operational Excellence Pillar 38 Video • AWS re:Invent 2016: Automating Security Event Response from Idea to Code to Execution (SEC313) Documentation • What is Amazon CloudWatch Events? 
• How to Automatically Tag Amazon EC2 Resources in Response to API Events • Amazon EC2 Systems Manager Automation is now an Amazon CloudWatch Events Target • EC2 Run Command is Now a CloudWatch Events Target • Automate remediation actions for Amazon EC2 notifications and beyond using EC2 Systems Manager Automation and AWS Health • HighResolution Custom Met rics and Alarms for Amazon CloudWatch ArchivedAmazon Web Services Operational Excellence Pillar 39 Evolve Evolution is the continuous cycle of improvement over time Implement frequent small incremental changes based on the lessons learned from your operations activities and evaluate their success at bringing about improvement To evolve your operations over time you must be able to : • Learn Share and Improve Learn Share and Improve It’s essential that you regularly provide time for analysis of operations activities analysis of failures experimentation and making improvements When things fail you will want to ensure that your team as well as your larger engineering community learns from those failures You should analyze fa ilures to identify lessons learned and plan improvements You will want to regularly review your lessons learned with other teams to validate your insights Have a process for continuous improvement: Regularly evaluate and prioritize opportunities for improvement to focus efforts where they can provide the greatest benefits Perform post incident analysis : Review customer impacting events and identify the contributing causes and preventative action items Use this information to develop mitigations to limit or prevent recurrence Develop procedures for prompt and effective responses Communicate contributing factors and corrective actions as appropriate tailored to target audiences Implement feedback loops: Include feedback loops in your procedures an d workloads to help you identify issues and areas that need improvement Perform Knowledge Management : Mechanisms exist for your team members to discover the information that they are looking for in a timely manner access it and identify that it’s curren t and complete Mechanisms are present to identify needed content content in need of refresh and content that should be archived so that it’s no longer referenced Define drivers for improvement: Identify drivers for improvement to help you evaluate and prioritize opportunities ArchivedAmazon Web Services Operational Excellence Pillar 40 On AWS you can aggregate the logs of all your operations activities workloads and infrastructure to create a detailed activity history You can then use AWS tools to analyze your operations and workload health over time (for ex ample identify trends correlate events and activities to outcomes and compare and contrast between environments and across systems) to reveal opportunities for improvement based on your drivers You should use CloudTrail to track API activity (through t he AWS Management Console CLI SDKs and APIs) to know what is happening across your accounts Track your AWS Developer Tools deployment activities with CloudTrail and CloudWatch This will add a detailed activity history of your deployments and their out comes to your CloudWatch Logs log data Export your log data to Amazon S3 for long term storage Using AWS Glue you discover and prepare your log data in Amazon S3 for analytics Use Amazon Athena through its native integration with Glue to analyze your log data Use a business intelligence tool like Amazon QuickSight to visualize explore and analyze your data Validate insights: 
Review your analysis results and responses with cross functional teams and business owners Use these reviews to establish common understanding identify additional impacts and determine courses of action Adjust responses as appropriate Perform operations metrics reviews: Regularly perform retrospective analysis of incidents and operations metrics with cross team participants including leadership from different areas of the business Use these reviews to identify opportunities for improvement potential courses of action and to share lessons learned Look for opportunities to improve in all of your environments (for example dev elopment test and production) Document and share lessons learned: Document and share lessons learned from the execution of operations activities so that you can use them internally and across teams You should share what your teams learn to increase the benefit across your organization You will want to share information and resources to prevent avoidable errors and ease development efforts This will allow you to focus on delivering desired features Use AWS Identity and Access Management (I AM) to define permissions enabling controlled access to the resources you wish to share within and across accounts You should then use version controlled AWS CodeCommit repositories to share application ArchivedAmazon Web Services Operational Excellence Pill ar 41 libraries scripted procedures procedure documentat ion and other system documentation Share your compute standards by sharing access to your AMIs and by authorizing the use of your Lambda functions across accounts You should also share your infrastructure standards as CloudFormation templates Through the AWS APIs and SDKs you can integrate external and third party tools and repositories (for example GitHub BitBucket and SourceForge) When sharing what you have learned and developed be careful to structure permissions to ensure the integrity of sha red repositories Allocate time to make improvements: Dedicate time and resources within your processes to make continuous incremental improvements possible On AWS you can create temporary duplicates of environments lowering the risk effort and cost o f experimentation and testing These duplicated environments can be used to test the conclusions from your analysis experiment and develop and test planned improvements Resources Refer to the following resources to learn more about AWS best practices fo r learning from experience Documentation • Querying Amazon VPC Flog Logs • Monitori ng Deployments with Amazon CloudWatch Tools • Analyzing VPC Flow Logs with Amazon Kinesis Data Firehose Amazon Athena and Amazon QuickSight • Share an AWS CodeCommit Repository • Use resource based policies to give other accounts and AWS services permission to use your Lambda resources • Sharing an AMI with Specific AWS Accounts • Using AWS Lambda with Amazon SNS ArchivedAmazon Web Services Operational Excellence Pillar 42 Conclusion Operational excellence is an ongoing and iterative effort Set up your organization for success by having shared goals Ensure that everyone understands their part in achieving business outcomes and how they impact the ability of others to succeed Provide support for your team members so that they can support your business outcomes Every operational event and failure should be treated as an opportunity to improve the operations of your architecture By understanding the needs of your workloads predefining runbooks for routine activities and playbooks to guide issue resolution using 
the operations as code features in AWS and maintaining situational awareness your operations will be better prepared and able to respon d more effe ctively when incidents occur Through focusing on incremental improvement based on priorities as they change and lessons learned from event response and retrospective analysis you will enable the success of your business by increasing the efficiency and effectiveness of your activities AWS strives to help you build and operate architectures that maximize efficiency while you build highly responsive and adaptive deployments To increase the operational excellen ce of your workloads you should use the bes t practices discussed in this paper Contributors • Brian Carlson Operations Lead Well Architected Amazon Web Services • Jon Steele Sr Technical Account Manager Amazon Web Services • Ryan King Technical Program Manager Amazon Web Services • Philip Fitzsimons Sr Manager Well Architected Amazon Web Services Further Reading For additional help consult the following sources: • AWS Well Architected Framework ArchivedAmazon Web Services Operational Excellence Pillar 43 Document Revisions Date Description July 2020 Updates to reflect new AWS services and features and latest best practices July 2018 Updates to reflect new AWS services and features and updated references November 2017 First publication
|
General
|
consultant
|
Best Practices
|
AWS_WellArchitected_Framework__Performance_Efficiency_Pillar
|
ArchivedPerformance Efficiency Pillar AWS WellArchitected Framework July 2020 This paper has been archived The latest version is now available at: https://docsawsamazoncom/wellarchitected/latest/performanceefficiencypillar/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Performance Efficiency 1 Design Principles 2 Definition 2 Selection 4 Performance Architecture Selection 4 Compute Architecture Selection 8 Storage Architecture Selection 14 Database Architecture Selection 17 Network Architecture Selection 21 Review 29 Evolve Your Workload to Take Advantage of New Releases 30 Monitoring 32 Monitor Your Resources to Ensure That They Are Performing as Expected 33 Trade offs 35 Using Trade offs to Improve Performance 36 Conclusion 38 Contributors 38 Further Reading 38 Document Revisions 39 ArchivedAbstract This whitepaper focuses on the performance efficiency pillar of the Amazon Web Services (AWS) WellArchitected Framework It provides guidance to help c ustomers apply best practices in the design delivery and maintenance of AWS environments The performance efficiency pillar addresses best practices for managing production environments This paper does not cover the design and management of non producti on environments and processes such as continuous integration or delivery ArchivedAmazon Web Services Perfor mance Efficiency Pillar 1 Introduction The AWS Well Architected Framework helps you understand the pros and cons of decisions you make while building workloads on AWS Using the Framework helps you learn architectural best pra ctices for designing and operating reliable secure efficient and cost effective workloads in the cloud The Framework provides a way for you to consistently measure your architectures against best practices and identify areas for improvement We believe that having well architected workloads greatly increases the likelihood of business success The framework is based on five pillars: • Operational Excellence • Security • Reliability • Performance Efficiency • Cost Optimization This paper focuses on applying the principles of the performance efficiency pillar to your workloads In traditional on premises environments achieving high and lasting performance is challenging Using the principles in this paper will help you build architectures on AWS tha t efficiently deliver sustained performance over time This paper is intended for those in technology roles such as chief technology officers (CTOs) architects developers and operations team members After reading this paper you’ll understand AWS best practices and strategies to use when designing a performant cloud architecture This paper does not provide implementation details or architectural patterns However it does include references to appropriate resources 
Performance Efficiency The performa nce efficiency pillar focuses on the efficient use of computing resources to meet requirements and how to maintain efficiency as demand changes and technologies evolve ArchivedAmazon Web Services Performance Efficiency Pillar 2 Design Principles The following design principles can help you achieve and maintain e fficient workloads in the cloud • Democratize advanced technologies: Make advanced technology implementation easier for your team by delegating complex tasks to your cloud vendor Rather than asking your IT team to learn about hosting and running a new tech nology consider consuming the technology as a service For example NoSQL databases media transcoding and machine learning are all technologies that require specialized expertise In the cloud these technologies become services that your team can consu me allowing your team to focus on product development rather than resource provisioning and management • Go global in minutes: Deploying your workload in multiple AWS Regions around the world allows you to provide lower latency and a better experience for your customers at minimal cost • Use serverless architectures: Serverless architectures remove the need for you to run and maintain physical servers for traditional compute activities For example serverless storage services can act as static websites (rem oving the need for web servers) and event services can host code This removes the operational burden of managing physical servers and can lower transactional costs because managed services operate at cloud scale • Experiment more often: With virtual and a utomatable resources you can quickly carry out comparative testing using different types of instances storage or configurations • Consider mechanical sympathy: Use the technology approach that aligns best with your goals For example consider data acces s patterns when you select database or storage approaches Definition Focus on the following areas to achieve performance efficiency in the cloud: • Selection • Review • Monitoring ArchivedAmazon Web Services Performance Efficiency Pillar 3 • Trade offs Take a data driven approach to building a high performance architecture Gather data on all aspects of the architecture from the high level design to the selection and configuration of resource types Reviewing your choices on a regular basis ensures th at you are taking advantage of the continually evolving AWS Cloud Monitoring ensures that you are aware of any deviance from expected performance Make trade offs in your architecture to improve performance such as using compression or caching or relaxi ng consistency requirements ArchivedAmazon Web Services Performance Efficiency Pillar 4 Selection The optimal solution for a particular workload varies and solutions often combine multiple approaches Well architected workloads use multiple solutions and enable different features to improve performance AWS res ources are available in many types and configurations which makes it easier to find an approach that closely matches your needs You can also find options that are not easily achievable with on premises infrastructure For example a managed service such as Amazon DynamoDB provides a fully managed NoSQL database with single digit millisecond latency at any scale Performance Architecture Selection Often multiple approaches are required to get optimal performance across a workload Wellarchitected system s use multiple solutions and enable different features to improve performance Use a data driven 
approach to select the patterns and implementation for your architecture and achieve a cost effective solution AWS Solutions Architects AWS Reference Architectures and AWS Partner Network (APN) partners can help you select an architecture based on industry knowledge but data obtained through benchmarking or load testing will be required to optimize your architecture Your architecture will likely combine a number of different architectural approaches (for example event driven ETL or pipeline) The implementation of your architecture will use the AWS servic es that are specific to the optimization of your architecture's performance In the following sections we discuss the four main resource types to consider (compute storage database and network) Understand the available services and resources: Learn abo ut and understand the wide range of services and resources available in the cloud Identify the relevant services and configuration options for your workload and understand how to achieve optimal performance If you are evaluating an existing workload yo u must generate an inventory of the various services resources it consumes Your inventory helps you evaluate which components can be replaced with managed services and newer technologies Define a process for architectural choices: Use internal experienc e and knowledge of the cloud or external resources such as published use cases relevant ArchivedAmazon Web Services Performance Efficiency Pillar 5 documentation or whitepapers to define a process to choose resources and services You should define a process that encourages experimentation and benchmarking with the services that could be used in your workload When you write critical user stories for your architecture you should include performance requirements such as specifying how quickly each critical story should execute For these critical stories you sh ould implement additional scripted user journeys to ensure that you have visibility into how these stories perform against your requirement s Factor cost requirements into decisions: Workloads often have cost requirements for operation Use internal cost c ontrols to select resource types and sizes based on predicted resource need Determine which workload components could be replaced with fully managed services such as managed databases in memory caches and other services Reducing your operational workload allows you to focus resources on business outcomes For cost requirement best practices refer to the CostEffective Resources section of the Cost Optimization Pillar whitepaper Use policies or reference architectures: Maximize p erformance and efficiency by evaluating internal policies and existing reference architectures and using your analysis to select services and configurations for your workload Use guidance from your cloud provider or an appropriate partner : Use cloud compa ny resources such as solutions architects professional services or an appropriate partner to guide your decisions These resources can help review and improve your architecture for optimal performance Reach out to AWS for assistance when you need addit ional guidance or product information AWS Solutions Architects and AWS P rofessional Services provide guidance for solution implementation APN P artners provide AWS expertise to help you unlock agility and innovation for your business Benchmark existing workloads: Benchmark the performance of an existing workload to understand how it performs on the cloud Use the data collected from benchmarks to drive architectural 
decisions Use benchmarking with synthetic tests to generate data about how your workload’s components perform Benchmarking is generally quicker to set up than load testing and ArchivedAmazon Web Services Performance Efficiency Pillar 6 is used to evaluate the technology for a particular compon ent Benchmarking is often used at the start of a new project when you lack a full solution to load test You can either build your own custom benchmark tests or you can use an industry standard test such as TPCDS to benchmark your data warehousing workloads Industry benchmarks are helpful when comparing environments Custom benchmarks are useful for targeting specific types of operations that you expect to make in your architecture When benchmarkin g it is important to pre warm your test environment to ensure valid results Run the same benchmark multiple times to ensure that you’ve captured any variance over time Because benchmarks are generally faster to run than load tests they can be used earlier in the deployment pipeline and provide faster feedback on performance deviations When you evaluate a significant change in a component or service a benchmark can be a quick way to see if you can justify the effort to make the change Using benchmarki ng in conjunction with load testing is important because load testing informs you about how your workload will perform in production Load test your workload: Deploy your latest workload architecture on the cloud using different resource types and sizes M onitor the deployment to capture performance metrics that identify bottlenecks or excess capacity Use this performance information to design or improve your architecture and resource selection Load testing uses your actual workload so you can see how you r solution performs in a production environment Load tests must be executed using synthetic or sanitized versions of production data (remove sensitive or identifying information) Use replayed or pre programmed user journeys through your workload at scale that exercise your entire architecture Automatically carry out load tests as part of your delivery pipeline and compare the results against pre defined KPIs and thresholds This ensures that you continue to achieve required performance Amazon CloudWatch can collect metrics across the resources in your architecture You can also collect and publish custom metrics to surface business or derived metrics Use CloudWatch to set alarms that in dicate when thresholds are breached and signal that a test is outside of the expected performance Using AWS services you can run production scale environments to test your architecture aggressively Since you only pay for the test environment when it is needed you can carry out full scale testing at a fraction of the cost of using an on ArchivedAmazon Web Services Performance Efficiency Pillar 7 premises environment Take advantage of the AWS Cloud to test your workload to see where it fails to scale or scales in a non linear way You can use Amazon EC2 Spot Instances to generate loads at low costs and discover bottlenecks before they are experienced in production When load tests take considerable time to execute parallelize them using multiple copies of your test environment Your costs will be similar but your testing time will be reduced (It costs the same to run one EC2 instance for 100 hours as it does to run 100 instances for one hour) You can also lower the costs of load testing by using Spot Instance s and selecting Regions that have lower costs than the Regions you use for 
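production. One way to wire load-test results into CloudWatch, as described above, is to publish a custom metric for each run and alarm when it drifts past an agreed threshold. The sketch below is a hedged example: the namespace, metric name, threshold, and the SNS topic ARN used as the alarm action are hypothetical placeholders you would supply.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish the p99 latency observed by a load-test run as a custom metric.
cloudwatch.put_metric_data(
    Namespace="LoadTests/Checkout",  # hypothetical namespace
    MetricData=[{
        "MetricName": "P99LatencyMs",
        "Value": 842.0,              # value produced by your test harness
        "Unit": "Milliseconds",
    }],
)

# Alarm when the metric stays above the agreed KPI for three consecutive periods.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-p99-latency-regression",
    Namespace="LoadTests/Checkout",
    MetricName="P99LatencyMs",
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=3,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:perf-alerts"],  # placeholder topic
)
```

Gated this way, the pipeline can fail a build on a performance regression before it ever reaches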
production The location of your load test clients should reflect the geographic spread of your end users Resources Refer to the following resources to learn more about AWS best pr actices for load testing Videos • Introducing The Amazon Builders’ Library (DOP328) Documentation • AWS Architecture Center • Amazon S3 Performance Optimization • Amazon EBS Volume Performance • AWS CodeDeploy • AWS CloudFormation • Load Testing CloudFront • AWS CloudWatch Dashboards ArchivedAmazon Web Services Performance Efficiency Pillar 8 Compute Architecture Selection The optimal compute choice for a particular workload can vary based on application design usage patterns and configuration settings Architectures may use different compute choices for various components and enable different features to improve performance Selecting the wrong compute choice for an architectu re can lead to lower performance efficiency Evaluate the available compute options: Understand the performance characteristics of the compute related options available to you Know how instances containers and functions work and what advantages or disadvantages they bring to your workload In AWS compute is available in three forms: instances containers and functions: Instances Instances are virtualized servers allowing you to change their capabilities with a button or an API cal l Because resource decisions in the cloud aren’t fixed you can experiment with different server types At AWS these virtual server instances come in different families and sizes and they offer a wide variety of capabilities including solid state drive s (SSDs) and graphics processing units (GPUs) Amazon Elastic Compute Cloud (Amazon EC2) virtual server instances come in different families and sizes They offer a wide variety of capabilities including solid state drives (SSDs) and graphics processing units (GPUs) When you launch an EC2 instance the instance type that you specify determines the hardware of the host computer used for your instance Each instance type offers different compute memory and stora ge capabilities Instance types are grouped in instance families based on these capabilities Use data to select the optimal EC2 instance type for your workload ensure that you have the correct networking and storage options and consider operating system settings that can improve the performance for your workload Containers Containers are a method of operating system virtualization that allow you to run an application and its dependencies in resource isolated processes When running conta iners on AWS you have two choices to make First choose whether or not you want to manage servers AWS Fargate is serverless compute for containers or Amazon EC2 can be used if you need control over the in stallation ArchivedAmazon Web Services Performance Efficiency Pillar 9 configuration and management of your compute environment Second choose which container orchestrator to use: Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS) Amazon Ela stic Container Service (Amazon ECS) is a fully managed container orchestration service that allows you to automatically execute and manage containers on a cluster of EC2 instances or serverless instances using AWS Fargate You can natively integrate Amazon ECS with other services such as Amazon Route 53 Secrets Manager AWS Identity and Access Management (IAM) and Amazon CloudWatc h Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service You can choose to run your EKS clusters using AWS Fargate 
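profiles, which shift node provisioning and patching to AWS. A hedged sketch of that option with the AWS SDK for Python is shown below; the cluster name, pod execution role ARN, subnets, and namespace selector are hypothetical values, and the cluster itself is assumed to already exist.

```python
import boto3

eks = boto3.client("eks")

# Attach a Fargate profile to an existing EKS cluster so that pods in the
# selected namespace are scheduled onto serverless Fargate capacity.
eks.create_fargate_profile(
    clusterName="orders-cluster",                    # hypothetical cluster
    fargateProfileName="orders-default-profile",
    podExecutionRoleArn="arn:aws:iam::111122223333:role/OrdersPodExecutionRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],  # private subnets of the cluster VPC
    selectors=[{"namespace": "orders"}],             # pods in this namespace use Fargate
)
```

Pods that match the selector are then launched on serverless Fargate capacity,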
removing the need to provision and manage servers EKS is deeply integrated with services such as Amazon CloudWatch Auto Scaling Groups AWS Identity and Access Management (IAM) and Amaz on Virtual Private Cloud (VPC) When using containers you must use data to select the optimal type for your workload — just as you use data to select your EC2 or AWS Fargate instance types Consider container configuration options such as memory CPU and tenancy configuration To enable network access between container services consider using a service mesh such as AWS App Mesh which standardizes how your services communicate Service mesh gives you end toend visibility and ensur es highavailability for your applications Functions Functions abstract the execution environment from the code you want to execute For example AWS Lambda allows you to execute code without running an instance You can use AWS Lambda to run code for any type of application or backend service with zero administration Simply upload your code and AWS Lambda will manage everything required to run and scale that code You can set up y our code to automatically trigger from other AWS services call it directly or use it with Amazon API Gateway Amazon API Gateway is a fully managed service that makes it easy for developers to create p ublish maintain monitor and secure APIs at any scale You can create an API that acts as a “front door” to your Lambda function API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls including traffic management authorization and access control monitoring and API version management ArchivedAmazon Web Services Performance Efficiency Pillar 10 To deliver optimal performance with AWS Lambda choose the amount of memory you want for your function You are allocated proportional CPU power and oth er resources For example choosing 256 MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128 MB of memory You can control the amount of time each function is allowed to run (up to a maximum of 300 seconds) Understand the available compute configuration options: Understand how various options complement your workload and which configuration options are best for your system Examples of these options include instance family sizes features (GPU I/O) function sizes container instances and single versus multi tenancy When selecting instance families and types you must also consider the configuration options available to meet your workload’s needs: • Graphics Processing Units (GPU) — Using general purpose computing on GPUs (GPGPU) you can build applications that benefit from the high degree of parallelism that GPUs provide by leveraging platforms (such as CUDA) in the development process If your workload requires 3D rendering or video compression GPUs enable hardware accelerated computation and encoding making your workload more efficient • Field Programmable Gate Arrays (FPGA) — Using FPGAs you can optimize your workloads by having custom hardware accelerated execution for your most demanding workloads You can define your algorithms by leveraging supported general programming languages such as C or Go or hardware oriented languages such as Verilog or VHDL • AWS Inferentia (Inf1) — Inf1 instances are built to support machine learning inference applications Using Inf1 instances customers can run large scale machine learning inference applications like image recognition speech recognition natural language processing 
personalization and fraud detection You can build a model in one of the popular machine learning frameworks such as TensorFlow PyTorch or MXNet and use GPU instances such as P3 or P3dn to train your model After your machine learning model is trained to meet your requirements you can deploy your model on Inf1 instances by using AWS Neuron a specialized software development kit (SDK) consisting of a compiler runtime and profiling tools that optimize the machine learning inference performance of Inferentia chips ArchivedAmazon Web Services Performance Efficiency Pillar 11 • Burstable inst ance families — Burstable instances are designed to provide moderate baseline performance and the capability to burst to significantly higher performance when required by your workload These instances are intended for workloads that do not use the ful l CPU often or consistently but occasionally need to burst They are well suited for general purpose workloads such as web servers developer environments and small databases These instances provide CPU credits that can be consumed when the instance mu st provide performance Credits accumulate when the instance doesn’t need them • Advanced computing features — Amazon EC2 gives you access to advanced computing features such as managing C state and P state registers and controlling turbo boost of processo rs Access to co processors allows cryptography operations offloading through AES NI or advanced computation through AVX extensions The AWS Nitro System is a combination of dedicated hardware and lightwe ight hypervisor enabling faster innovation and enhanced security Utilize AWS Nitro Systems when available to enable full consumption of the compute and memory resources of the host hardware Additionally dedicated Nitro Cards enable high speed networking high speed EBS and I/O acceleration Collect compute related metrics : One of the best ways to understand how your compute systems are performing is to record and track the true utilization of various resources This data can be used to make more accurat e determinations about resource requirements Workloads (such as those running on microservices architectures ) can generate large volumes of data in the form of metrics logs and events Determine if your existing monitoring and observability service can manage the data generated Amazon CloudWatch can be used to collect access and correlate this data on a sing le platform from across all your AWS resources applications and services running on AWS and onpremises servers so you can easily gain system wide visibility and quickly resolve issues Determine the required configuration by right sizing: Analyze the various performance characteristics of your workload and how these characteristics relate to memory network and CPU usage Use this data to choose resources that best match your workload's profile For example a memory intensive workload such as a database could be served best by the r family of instances However a bursting workload can benefit more from an elastic container system ArchivedAmazon Web Services Performance Efficiency Pillar 12 Use the available elasticity of resources: The cloud provides the flexibility to expand or reduce your resources dynami cally through a variety of mechanisms to meet changes in demand Combined with compute related metrics a workload can automatically respond to changes and utilize the optimal set of resources to achieve its goal Optimally matching supply to demand delive rs the lowest cost for a workload but you also must plan for 
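sufficient supply to allow for provisioning time and individual resource failures. One common way to keep supply tracking demand is a target tracking scaling policy on an Auto Scaling group. The sketch below is a minimal, hedged example: the group name, policy name, and 50 percent CPU target are hypothetical, and the Auto Scaling group is assumed to already exist.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU utilization of the group near 50% by adding or removing
# instances automatically as demand changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",  # hypothetical existing group
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```

Pair a policy like this with scheduled or predictive scaling if your demand has a known shape, and always exercise both the scale-up and the scale-down path.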
Demand can be fixed or variable, requiring metrics and automation to ensure that management does not become a burdensome and disproportionately large cost. With AWS, you can use a number of different approaches to match supply with demand. The Cost Optimization Pillar whitepaper describes how to use the following approaches to cost:

• Demand-based approach
• Buffer-based approach
• Time-based approach

You must ensure that workload deployments can handle both scale-up and scale-down events. Create test scenarios for scale-down events to ensure that the workload behaves as expected.

Reevaluate compute needs based on metrics: Use system-level metrics to identify the behavior and requirements of your workload over time. Evaluate your workload's needs by comparing the available resources with these requirements, and make changes to your compute environment to best match your workload's profile. For example, over time a system might be observed to be more memory intensive than initially thought, so moving to a different instance family or size could improve both performance and efficiency.

Resources

Refer to the following resources to learn more about AWS best practices for compute.

Videos
• Amazon EC2 foundations (CMP211-R2)
• Powering next-gen Amazon EC2: Deep dive into the Nitro system
• Deliver high performance ML inference with AWS Inferentia (CMP324-R1)
• Optimize performance and cost for your AWS compute (CMP323-R1)
• Better, faster, cheaper compute: Cost-optimizing Amazon EC2 (CMP202-R1)

Documentation
• Instances:
o Instance Types
o Processor State Control for Your EC2 Instance
• EKS Containers: EKS Worker Nodes
• ECS Containers: Amazon ECS Container Instances
• Functions: Lambda Function Configuration

Storage Architecture Selection

The optimal storage solution for a particular system varies based on the kind of access method (block, file, or object), patterns of access (random or sequential), required throughput, frequency of access (online, offline, archival), frequency of update (WORM, dynamic), and availability and durability constraints. Well-architected systems use multiple storage solutions and enable different features to improve performance.

In AWS, storage is virtualized and is available in a number of different types. This makes it easier to match your storage methods with your needs, and offers storage options that are not easily achievable with on-premises infrastructure. For example, Amazon S3 is designed for 11 nines of durability. You can also change from using magnetic hard disk drives (HDDs) to SSDs, and easily move virtual drives from one instance to another in seconds.

Performance can be measured by looking at throughput, input/output operations per second (IOPS), and latency. Understanding the relationship between those measurements will help you select the most appropriate storage solution.

| Storage | Services | Latency | Throughput | Shareable |
|---|---|---|---|---|
| Block | Amazon EBS, EC2 instance store | Lowest, consistent | Single | Mounted on EC2 instance, copies via snapshots |
| File system | Amazon EFS, Amazon FSx | Low, consistent | Multiple | Many clients |
| Object | Amazon S3 | Low-latency | Web scale | Many clients |
| Archival | Amazon S3 Glacier | Minutes to hours | High | No |

From a latency perspective, if your data is only accessed by one instance, then you should use block storage, such as Amazon EBS. Distributed file systems such as Amazon EFS generally have a
small latency overhead for each file operation so they should be used where multiple instances need access ArchivedAmazon Web Services Performance Efficiency Pillar 15 Amazon S3 has features than can reduce latency and increase throughput You can use cross region replication (CRR) to p rovide lower latency data access to different geographic regions From a throughput perspective Amazon EFS supports highly parallelized workloads (for example using concurrent operations from multiple threads and multiple EC2 instances) which enables hi gh levels of aggregate throughput and operations per second For Amazon EFS use a benchmark or load test to select the appropriate performance mode Understand storage characteristics and requirements: Understand the different characteristics (for example shareable file size cache size access patterns latency throughput and persistence of data) that are required to select the services that best fit your workload such as object storage block storage file storage or ins tance storage Determine the expected growth rate for your workload and choose a storage solution that will meet those rates Object and file storage solutions such as Amazon S3 and Amazon Elastic File System enable unlimited storage ; Amazon EBS have pr e determined storage sizes Elastic volumes allow you to dynamically increase capacity tune performance and change the type of any new or existing current generation volume with no downtime or performance impact but it requires OS filesystem changes Evaluate available configuration options: Evaluate the various characteristics and configuration options and how they relate to storage Understand where and how to use provisioned IOPS SSDs magnetic storage object storage archival storage or ephemera l storage to optimize storage space and performance for your workload Amazon EBS provides a range of options that allow you to optimize storage performance and cost for your workload These options are divided in to two major categories: SSD backed storage for transactional workloads such as databases and boot volumes (performance depends primarily on IOPS) and HDD backed storage for throughput intensive workloads such as MapReduce and log processing (performanc e depends primarily on MB/s) SSDbacked volumes include the highest performance provisioned IOPS SSD for latency sensitive transactional workloads and general purpose SSD that balance price and performance for a wide variety of transactional data Amazon S3 transfer acceleration enables fast transfer of files over long distances between your client and your S3 bucket Transfer acceleration leverages Amazon CloudFront globally distributed edge locations to route data over an optimized network ArchivedAmazon Web Services Performance Efficiency Pillar 16 path For a workload in an S3 bucket that has intensive GET requests use Amazon S3 with CloudFront When uploading large files use multi part uploads with multiple parts uploading at the same time to hel p maximize network throughput Amazon Elastic File System (Amazon EFS) provides a simple scalable fully managed elastic NFS file system for use with AWS Cloud services and on premises resources To support a wi de variety of cloud storage workloads Amazon EFS offers two performance modes : general purpose performance mode and max I/O performance mode There are also two throughput modes to choose from for your file system Bursting Throughput and Provisioned Th roughput To determine which settings to use for your workload see the Amazon EFS User Guide Amazon FSx 
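provides additional managed file system options, covered next. Before that, here is a hedged sketch of the multipart upload guidance above using the SDK's managed transfer layer; the bucket, key, and local file path are hypothetical, and the thresholds are illustrative rather than recommended values.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Upload large objects in parallel parts to make better use of available
# network throughput; objects below the threshold go up in a single call.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MiB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MiB parts
    max_concurrency=8,                     # parts uploaded in parallel
)

s3.upload_file(
    Filename="/data/exports/large-dataset.parquet",  # hypothetical local file
    Bucket="example-analytics-bucket",               # hypothetical bucket
    Key="exports/large-dataset.parquet",
    Config=config,
)
```

For shared file access rather than object access, Amazon FSx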
provides two file systems to choo se from: Amazon FSx for Windows File Server for enterprise workloads and Amazon FSx for Lustre for high performance workloads FSx is SSD backed and is designed to deliver fast predictable scalable and consistent performance Amazon FSx file systems deliver sustained high read and write speeds and consistent low latency data access You can choo se the throughput level you need to match your workload ’s needs Make decisions based on access patterns and metrics: Choose storage systems based on your workload's access patterns and configure them by determining how the workload accesses data Increase storage efficiency by choosing object storage over block storage Configure the storage options you choose to match your data access patterns How you access data impacts how the storage solution performs Select the storage solution that aligns best to y our access patterns or consider changing your access patterns to align with the storage solution to maximize performance Creating a RAID 0 (zero) array allows you to achieve a higher level of performance for a file system than what you can provision on a single volume Consider using RAID 0 when I/O performance is more important than fault tolerance For example you could use it with a heavily used database where data replication is already set up separately Select appropriate storage metrics for your w orkload across all of the storage options consumed for the workload When utilizing filesystems that use burst credits create alarms to let you know when you are approaching those credit limits You must create storage dashboards to show the overall workl oad storage health For storage systems that are a fixed sized such as Amazon EBS or Amazon FSx ensure that you are monitoring the amount of storage used versus the overall storage ArchivedAmazon Web Services Performance Efficiency Pillar 17 size and create automation if possible to increase the storage size when reaching a threshold Resources Refer to the following resources to learn more about AWS best practices for storage Videos • Deep dive on Amazon EBS (STG303 R1) • Optimize your storage performance with Amazon S3 (STG343) Documentation • Amazon EBS: o Amazon EC2 Storage o Amazon EBS Volume Types o I/O Characteristics • Amazon S3: Request Rate and Performance Considerations • Amazon Glacier: Amazon Glacier Documentation • Amazon EFS: Amazon EFS Performance • Amazon FSx: o Amazon FSx for Lustre Performance o Amazon FSx for Windows File Server Performance Database Architectu re Selection The optimal database solution for a system varies based on requirements for availability consistency partition tolerance latency durability scalability and query capability Many systems use different database solutions for various sub systems and enable different features to improve performance Selecting the wrong database solution and features for a system can lead to lower performance efficiency Understand data characteristics: Understand the different characteristics of data in your workload Determine if the workload requires transactions how it interacts with data and what its performance demands are Use this data to select the best ArchivedAmazon Web Services Performance Efficiency Pillar 18 performing database approach for your workload (for example relational databases NoSQL Key value document wide column graph time series or in memory storage ) You can choose from many purpose built database engines including relational key value document in memory graph time series and ledger 
databases By picking the best database to sol ve a specific problem (or a group of problems ) you can break away from restrictive one sizefitsall monolithic databases and focus on building applications to meet the needs of your customers Relational databases store data with predefined schemas and r elationships between them These databases are designed to support ACID (atomicity consistency isolation durability) transactions and maintain referential integrity and strong data consistency Many t raditional applications enterprise resource plannin g (ERP) customer relationship management (CRM) and ecommerce use relational databases to store their data You can run many of these database engines on Amazon EC2 or choose from one of the AWS managed database services : Amazon Aurora Amazon RDS and Amazon Redshift Keyvalue databases are optimized for common access patterns typically to store and retrieve large volumes of data These databases deliver quick response times even in extreme volumes of concurrent requests High traffic web apps e commerce systems and gaming applications ar e typical use cases for key value databases In AWS you can utilize Amazon DynamoDB a fully managed multi Region multi master durable database with built in security backup and restore and in memory c aching for internet scale applications Inmemory databases are used for applications that require real time access to data By storing data directly in memory these databases deliver microsecond latency to applications for whom millisecond latency is not enough You may use in memory databases for application caching session management gaming leaderboards and geospatial applications Amazon ElastiCache is a fully managed in memory data store compatibl e with Redis or Memcached A document database is designed to store semi structured data as JSON like documents These databases help de velopers build and update applications such as content management catalogs and user profiles quickly Amazon DocumentDB is a fast scalable highly available and fully managed document database service that supports MongoDB workloads ArchivedAmazon Web Services Performance Efficiency Pillar 19 A wide column store is a type of NoSQL database It uses tables rows and columns but unlike a relational database the names and format of the columns can vary from row to row in the same table You typically see a wide column store in high scale industrial apps for equipment maintenance fleet management and route optimization Amazon Managed Apache Cassandra Service is a wide column scalable highly available and managed Apa che Cassandra –compatible database service Graph databases are for applications that must navigate and query millions of relationships between highly connected graph datasets with millisecond latency at large scale Many companies use graph databases for f raud detection social networking and recommendation engines Amazon Neptune is a fast reliable fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets Time series databases efficiently collect synthesize and derive insights from data that changes over time IoT applications DevOps and industrial telemetry can utilize time series databases Amazon Timest ream is a fast scalable fully managed time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day Ledger databases provide a centralized and trusted authority to maintain a sca lable immutable and 
We see ledger databases used for systems of record, supply chain registrations, and even banking transactions. Amazon Quantum Ledger Database (QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks every application data change and maintains a complete and verifiable history of changes over time.

Evaluate the available options: Evaluate the services and storage options that are available as part of the selection process for your workload's storage mechanisms. Understand how and when to use a given service or system for data storage. Learn about available configuration options that can optimize database performance or efficiency, such as provisioned IOPS, memory and compute resources, and caching.

Database solutions generally have configuration options that allow you to optimize for the type of workload. Using benchmarking or load testing, identify database metrics that matter for your workload. Consider the configuration options for your selected database approach, such as storage optimization, database-level settings, memory, and cache.

Evaluate database caching options for your workload. The three most common types of database caches are the following:

• Database integrated caches: Some databases (such as Amazon Aurora) offer an integrated cache that is managed within the database engine and has built-in write-through capabilities.
• Local caches: A local cache stores your frequently used data within your application. This speeds up data retrieval and removes the network traffic associated with retrieving data, making data retrieval faster than other caching architectures.
• Remote caches: Remote caches are stored on dedicated servers and are typically built upon key/value NoSQL stores such as Redis and Memcached. They provide up to a million requests per second per cache node.

For Amazon DynamoDB workloads, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache that delivers fast read performance for your tables at scale. Using DAX, you can improve the read performance of your DynamoDB tables by up to 10 times, taking the time required for reads from milliseconds to microseconds, even at millions of requests per second.

Collect and record database performance metrics: Use tools, libraries, and systems that record performance measurements related to database performance. For example, measure transactions per second, slow queries, or system latency introduced when accessing the database. Use this data to understand the performance of your database systems. Instrument as many database activity metrics as you can gather from your workload. These metrics may need to be published directly from the workload or gathered from an application performance management service.

You can use AWS X-Ray to analyze and debug production distributed applications, such as those built using a microservices architecture. An X-Ray trace can include segments, which encapsulate all the data points for a single component. For example, when your application makes a call to a database in response to a request, it creates a segment for that request with a subsegment representing the database call and its result. The subsegment can contain data such as the query, the table used, a timestamp, and the error status. A brief sketch of this kind of instrumentation follows.
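The following is a hedged sketch using the AWS X-Ray SDK for Python; the segment and subsegment names, the annotation keys, and the run_query helper are hypothetical stand-ins for your own data access code, and the X-Ray daemon is assumed to be available to receive the trace.

```python
import time

from aws_xray_sdk.core import xray_recorder


def run_query(sql: str):
    """Placeholder for your real database call."""
    time.sleep(0.005)
    return []


# Wrap the database call in a subsegment so the trace records what was
# queried, which table was involved, and how long the call took.
xray_recorder.begin_segment("orders-api")
try:
    with xray_recorder.in_subsegment("orders-db-query") as subsegment:
        subsegment.put_annotation("table", "orders")
        subsegment.put_metadata("query", "SELECT * FROM orders WHERE id = ?")
        rows = run_query("SELECT * FROM orders WHERE id = ?")
finally:
    xray_recorder.end_segment()
```

Captured consistently, traces like this show which queries dominate latency. Once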
instrumented you should enable alarms for your database metrics that indicate when thresholds are breached ArchivedAmazon Web Services Performance Efficiency Pillar 21 Choose data storage based on access patterns: Use the access patterns of the workload to decide which services and technologies to use For example utilize a relational database for workloads that require transactions or a key value store that provides higher throughput but is e ventually consistent where applicable Optimize data storage based on access patterns and metrics: Use performance characteristics and access patterns that optimize how data is stored or queried to achieve the best possible performance Measure how optimiz ations such as indexing key distribution data warehouse design or caching strategies impact system performance or overall efficiency Resources Refer to the following resources to learn more about AWS best practices for databases Videos • AWS purpose built databases (DAT209 L) • Amazon Aurora storage demystified: How it all works (DAT309 R) • Amazon DynamoDB deep dive: Advanced design patterns (DAT403 R1) Documentation • AWS Database Caching • Cloud Databases with AWS • Amazon Aurora best practices • Amazon Redshift performance • Amazon Athena top 10 performance tips • Amazon Redshift Spectrum best practices • Amazon DynamoDB best practices • Amazon DynamoDB Accelerator Network Architecture Selection The opti mal network solution for a workload varies based on latency throughput requirements jitter and bandwidth Physical constraints such as user or on premises ArchivedAmazon Web Services Performance Efficiency Pillar 22 resources determine location options These constraints can be offset with edge locations or res ource placement On AWS networking is virtualized and is available in a number of different types and configurations This makes it easier to match your networking methods with your needs AWS offers product features (for example Enhanced Networking Ama zon EC2 networking optimized instances Amazon S3 transfer acceleration and dynamic Amazon CloudFront) to optimize network traffic AWS also offers networking features (for example Amazon Route 53 latency routing Amazon VPC endpoints AWS Direct Connect and AWS Global Accelerator ) to reduce network distance or jitter Understand how networking impacts performance: Analyze and understand how network related features impact workload performance For example network latency often impacts the user experien ce and not providing enough network capacity can bottleneck workload performance Since the network is between all application components it can have large positive and negative impacts on application performance and behavior There are also applications that are heavily dependent on network performance such as High Performance Computing (HPC) where deep network understanding is important to increase cluster performance You must determine the workload requirements for bandwidth latency jitter and thro ughput Evaluate available networking features: Evaluate networking features in the cloud that may increase performance Measure the impact of these features through testing metrics and analysis For example take advantage of network level features that are available to reduce latency network distance or jitter Many services commonly offer features to optimize network performance Consider product features such as EC2 instance network capability enhanced networking instance types Amazon EBS optimize d instances Amazon S3 transfer acceleration and dynamic 
CloudFront to optimize network traffic AWS Global Accelerator is a service that improves global application availability and performance using the AWS global network It optimizes the network path taking advantage of the vast congestion free AWS global network It provides static IP addresses that make it easy to move endpoints between Availability Zones or AWS Regions without needing to update your DNS configuration or change client facing applications ArchivedAmazon Web Services Performanc e Efficiency Pillar 23 Amazon S3 content acceleration is a feature that lets external users benefit from the networking optimizations of CloudFront to upload data to Amazon S3 This makes it easy to t ransfer large amounts of data from remote locations that don’t have dedicated connectivity to the AWS Cloud Newer EC2 instances can leverage enhanced networking N series EC2 instances such as M5n and M5dn leverage the fourth generation of custom Nitro card and Elastic Network Adapter (ENA) device to deliver up to 100 Gbps of network throughput to a single instance These instances offer 4x the network bandwidth and packet process compared to the base M5 instances and are ideal for network intensive appl ications Customers can also enable Elastic Fabric Adapter (EFA) on certain instance sizes of M5n and M5dn instances for low and consistent network latency Amazon Elastic Network Adapters (ENA) provide further optimization by delivering 20 Gbps of network capacity for your instances within a single placement group Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables you to run workloads requiring high levels of inter node communications at scale on AWS With EFA High Performance Computing (HPC) applications using the Message Passing Interface (MPI) and Machine Learning (ML) applications using NVIDIA Collective Communications Library (NCCL) can scale to thousands of CPUs or GPUs Amazon EBS optimized instances use an op timized configuration stack and provide additional dedicated capacity for Amazon EBS I/O This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance Latency based routing (LBR) for Amazon Route 53 helps you improve your workload’s performance for a global audience LBR works by routing your customers to the AWS endpoint (for EC2 instances Elastic IP addresses or ELB load balancers) that provides the fastest ex perience based on actual performance measurements of the different AWS Regions where your workload is running Amazon VPC endpoints provide reliable connectivity to AWS services (for example Amazon S3) without requiring an internet gateway or a Network Ad dress Translation (NAT) instance Choose appropriately sized dedicated connectivity or VPN for hybrid workloads : When there is a requirement for on premise communication ensure that you have adequate bandwidth for workload performance Based on bandwidth requirements a single dedicated connection or a single VPN might not be enough and you must enable traffic load balancing across multiple connections ArchivedAmazon Web Services Performance Efficiency Pillar 24 You must estimate the bandwidth and latency requirements for your hybrid workload These numbers will drive the sizing requirements for AWS Direct Connect or your VPN endpoints AWS Direct Connect provides dedicated connectivity to the AWS environment from 50 Mbps up to 10 Gbps This gives you managed and controlled latency and provisioned bandwidth so your 
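workload sees consistent network performance. The latency-based routing feature described above can be set up with a pair of records that share a name but carry different Region identifiers; in the hedged sketch below the hosted zone ID, domain name, and IP addresses are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")


def latency_record(region: str, ip_address: str) -> dict:
    """Build one latency-routed A record for the given Region."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "SetIdentifier": region,  # must be unique per record
            "Region": region,         # Region used for latency measurements
            "TTL": 60,
            "ResourceRecords": [{"Value": ip_address}],
        },
    }


# Route each user to whichever endpoint answers with the lowest latency.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            latency_record("us-east-1", "203.0.113.10"),
            latency_record("eu-west-1", "203.0.113.20"),
        ]
    },
)
```

For hybrid connectivity, AWS Direct Connect plays the equivalent role: with dedicated bandwidth in place, your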
workload can connect easily and in a performant way to other environments Using one of the AWS Direct Connect partners you can have end toend connectivity from multiple environments thus providi ng an extended network with consistent performance The AWS SitetoSite VPN is a managed VPN service for VPCs When a VPN connection is created AWS provides tunnels to two different VPN endpoints With AWS Transit Gateway you can simplify the connectivity between multiple VPCs and also connect to any VPC attached to AWS Transit Gateway with a single VPN connection AWS Transit Gateway also enables you to s cale beyond the 125Gbps IPsec VPN throughput limit by enabling equal cost multi path (ECMP) routing support over multiple VPN tunnels Leverage load balancing and encryption offloading: Distribute traffic across multiple resources or services to allow yo ur workload to take advantage of the elasticity that the cloud provides You can also use load balancing for offloading encryption termination to improve performance and to manage and route traffic effectively When implementing a scale out architecture wh ere you want to use multiple instances for service content you can leverage load balancers inside your Amazon VPC AWS provides multiple models for your applications in the ELB service Application Load Balancer is best suited for load balancing of HTTP a nd HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures including microservices and containers Network Load Balancer is best suited for load balancing of TCP traffic where extreme performance is required It is capable of handling millions of requests per second while maintaining ultra low latencies and it is optimized to handle sudden and volatile traffic patterns Elastic Load Balancing provides integrated certificate management and SSL/TLS decryption allowing you the flexibility to centrally manage the SSL settings of the load balancer and offload CPU intensive work from your workload ArchivedAmazon Web Services Performance Efficiency Pillar 25 Choose network protocols to optimize network traffic: Make decisions about protocols for communication between systems and networks based on the impact to the workload’s performance There is a relationship between latency and bandwidth to achieve throughput If your file transfer is using TC P higher latencies will reduce overall throughput There are approaches to fix this with TCP tuning and optimized transfer protocols some approaches use UDP Choose location based on network requirements: Use the cloud location options available to reduc e network latency or improve throughput Utilize AWS Regions Availability Zones placement groups and edge locations such as Outposts Local Zones and Wavelength to reduce network latency or improve throughput The AWS Cloud infrastructure is built ar ound Regions and Availability Zones A Region is a physical location in the world having multiple Availability Zones Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity housed in separate facilities These Availability Zones offer you the ability to operate production applications and databases that are more highly available fault tolerant and scalable than would be possible from a single data center Choose the appropriate Region or Regions for your deployment based on the following key elements: • Where your users are located : Choosing a Region close to your workload’s users ensures lower latency when they use 
the workload • Where your data is located : For data heavy applications t he major bottleneck in latency is data transfer Application code should execute as close to the data as possible • Other constraints : Consider constraints such as security and compliance Amazon EC2 provides placement groups for networking A placement gro up is a logical grouping of instances within a single Availability Zone Using placement groups with supported instance types and an Elastic Network Adapter (ENA) enables workloads to participate in a low latency 25 Gbps network Placement groups are reco mmended for workloads that benefit from low network latency high network throughput or both Using placement groups has the benefit of lowering jitter in network communications ArchivedAmazon Web Services Performance Efficiency Pillar 26 Latency sensitive services are delivered at the edge using a global network of edge locations These edge locations commonly provide services such as content delivery network (CDN) and domain name system (DNS) By having these services at the edge workloads can respond with low latency to requests for content or DNS resolution These services also provide geographic services such as geo targeting of content (providing different content based on the end users’ location) or latency based routing to direct end users to the nearest Region (minimum latency) Amazon CloudFront is a global CDN that can be used to accelerate both static content such as images scripts and videos as well as dynamic content such as APIs or web applications It relies on a global network of edge locations that will cache the content and provide high performance network connectivity to your users CloudFront also accelerates many other features such as content uploading and dynamic applications making it a performance addition to all applications serving tr affic over the internet Lambda@Edge is a feature of Amazon CloudFront that will let you run code closer to users of your workload which improves performance and reduces latency Amazon Route 53 is a hig hly available and scalable cloud DNS web service It’s designed to give developers and businesses an extremely reliable and cost effective way to route end users to internet applications by translating names like wwwexamplecom into numeric IP addresses like 19216821 that computers use to connect to each other Route 53 is fully compliant with IPv6 AWS Outposts is designed for workloads that need to remain on premises due to latency requirements whe re you want that workload to run seamlessly with the rest of your other workloads in AWS AWS Outposts are fully managed and configurable compute and storage racks built with AWS designed hardware that allow you to run compute and storage on premises whil e seamlessly connecting to AWS’s broad array of services in the cloud AWS Local Zones are a new type of AWS infrastructure designed to run workloads that require single digit millisecond latency like video rendering and graphics intensive virtual desktop applications Local Zones allow you to gain all the benefits of having compute and storage resources closer to end users AWS Wavelength is designed to deliver ultra low latency applications to 5G devices by extending AWS infrastructure services APIs and tools to 5G networks Wavelength embeds storage and compute inside telco providers 5G networks to help your 5G workloa d if it requires single digit millisecond latency such as IoT devices game streaming autonomous vehicles and live media production ArchivedAmazon Web Services 
Performance Efficiency Pillar 27 Use edge services to reduce latency and to enable content caching Ensure that you have configured cache control correct ly for both DNS and HTTP/HTTPS to gain the most benefit from these approaches Optimize network configuration based on metrics: Use collected and analyzed data to make informed decisions about optimizing your network configuration Measure the impact of th ose changes and use the impact measurements to make future decisions Enable VPC Flow logs for all VPC networks that are used by your workload VPC Flow Logs are a feature that allows you to capture information about the IP traffic going to and from networ k interfaces in your VPC VPC Flow Logs help you with a number of tasks such as troubleshooting why specific traffic is not reaching an instance which in turn help s you diagnose overly restrictive security group rules You can use flow logs as a security tool to monitor the traffic that is reaching your instance to profile your network traffic and to look for abnormal traffic behaviors Use networking metrics to make changes to networking configuration as the workload evolves Cloud based networks can b e quickly re built so evolving your network architecture over time is necessary to maintain performance efficiency Resources Refer to the following resources to learn more about AWS best practices for networking Videos • Connectivity to AWS and hybrid AWS network architectures (NET317 R1) • Optimizing Network Performance for Amazon EC2 Instances (CMP308 R1) Documentation • Transitioning to Latency Based Routing in Amazon Route 53 • Networking Products with AWS • EC2 o Amazon EBS – Optimized Instances o EC2 Enhanced Networking on Linux o EC2 Enhanced Networking on Windows o EC2 Placement Groups ArchivedAmazon Web Services Performance Efficiency Pillar 28 o Enabling Enhanced Networking with the Elastic Netw ork Adapter (ENA) on Linux Instances • VPC o Transit Gateway o VPC Endpoints o VPC Flow Logs • Elastic Load Balancers o Application Load Balancer o Network Load Balancer ArchivedAmazon Web Services Performance Efficiency Pillar 29 Review When architecting workloads there are finite options that you can choose from However over time new technologies and approaches become available that could improve the performance of your workload In the cloud it’s much easier to experiment with new features and services because your infrastructure is code To adopt a data driven approach to architecture you should implement a performance review process that considerer s the following: • Infrastructure as code: Define your infrastructure as code using approaches such as AWS CloudFormation templates The use of templates allows you to place your infrastructure into source control alongside your application code and configur ations This allows you to apply the same practices you use to develop software in your infrastructure so you can iterate rapidly • Deployment pipeline: Use a continuous integration/continuous deployment (CI/CD) pipeline (for example source code repository build systems deployment and testing automation) to deploy your infrastructure This enables you to deploy in a repeatable consistent and low cost fashion as you iterate • Welldefined metrics: Set up your metrics and monitor to capture key performanc e indicators (KPIs) We recommend that you use both technical and business metrics For website s or mobile apps key metrics are capturing time to first byte or rendering Other generally applicable metrics include thread count garbage collection 
rate an d wait states Business metrics such as the aggregate cumulative cost per request can alert you to ways to drive down costs Carefully consider how you plan to interpret metrics For example you could choose the maximum or 99th percentile instead of the average • Performance test automatically: As part of your deployment process automatically trigger performance tests after the quicker running tests have passed successfully The automation should create a new environment set up initial conditions such a s test data and then execute a series of benchmarks and load tests Results from these tests should be tied back to the build so you can track performance changes over time For long running tests you can make this part of the pipeline asynchronous from the rest of the build Alternatively you could execute performance tests overnight using Amazon EC2 Spot Instances ArchivedAmazon Web Services Performance Efficiency Pillar 30 • Load generation: You should create a series of test scripts that replicate synthetic or prerecorded user journeys These scripts should be idempotent and not coupled and you might need to include “pre warming” scripts to yield valid results As much as possible your test scripts should replicate the behavior of usage in production You can use software or software asaservice (SaaS) solut ions to generate the load Consider using AWS Marketplace solutions and Spot Instances — they can be cost effective ways to generate the load • Performance visibility: Key metrics should be visible to your team especially metrics against each build version This allows you to see any significant positive or negative trend over time You should also display metrics on the number of errors or exceptions to make sure you are testing a working system • Visualization: Use visualization techniques that make it clear where performance issues hot spots wait states or low utilization is occurring Overlay performance metrics over architecture diagrams — call graphs or code can help identify issues quickly This performance review process can be implemented as a simple extension of your existing deployment pipeline and then evolved over time as your testing requirements become more sophisticated For future architectures you can generalize your approach and reuse the same process and artifacts Architectures performing poorly is usually the result of a non existent or broken performance review process If your architecture is performing poorly implementing a performance review process will allow you to apply Deming’s plandocheck act (PDCA) cycle to drive iterative improvement Evolve Your Workload to Take Advantage of New Releases Take advantage of the continual innovation at AWS driven by customer need We release new Regions edge locations services and features regularly Any of these releases could positively improve the performance efficiency of your architecture Stay up todate on new resources and services : Evaluate ways to improve performance as new services design patterns and pro duct offerings become available Determine which of these could improve performance or increase the efficiency of the workload through ad hoc evaluation internal discussion or external analysis ArchivedAmazon Web Services Performance Efficiency Pillar 31 Define a process to evaluate updates new features and ser vices from AWS For example building proof ofconcepts that use new technologies or consulting with an internal group When trying new ideas or services run performance tests to measure the impact that they have on the 
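efficiency and performance of the workload. One lightweight way to make such tests part of a pipeline, as suggested in the review process above, is a gate that fails the build when a KPI drifts past its threshold. The sketch below is illustrative only: the results file format, KPI names, and limits are hypothetical choices, not values from this whitepaper.

```python
import json
import sys

# KPI thresholds agreed with the team; the values here are placeholders.
THRESHOLDS = {
    "p99_latency_ms": 250.0,
    "error_rate_percent": 0.5,
}


def main(results_path: str) -> int:
    """Fail the pipeline stage if any measured KPI exceeds its threshold."""
    with open(results_path) as handle:
        results = json.load(handle)  # e.g. produced by the load-test job

    failures = [
        f"{kpi}: measured {results[kpi]} > allowed {limit}"
        for kpi, limit in THRESHOLDS.items()
        if results.get(kpi, 0.0) > limit
    ]
    for failure in failures:
        print("PERFORMANCE REGRESSION:", failure)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```

Run against every build, a gate like this turns anecdotes into a measured record of each change's effect on the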
efficiency or performance of the wo rkload Take advantage of the flexibility that you have in AWS to test new ideas or technologies frequently with minimal cost or risk Define a process to improve workload performance: Define a process to evaluate new services design patterns resource ty pes and configurations as they become available For example run existing performance tests on new instance offerings to determine their potential to improve your workload Your workload's performance has a few key constraints Document these so that you know what kinds of innovation might improve the performance of your workload Use this information when learning about new services or technology as it becomes available to identify ways to alleviate constraints or bottlenecks Evolve workload performance over time: As an organization use the information gathered through the evaluation process to actively drive adoption of new services or resources when they become available Use the information you gather when evaluating new services or technologies to d rive change As your business or workload changes performance needs also change Use data gathered from your workload metrics to evaluate areas where you can get the biggest gains in efficiency or performance and proactively adopt new services and techno logies to keep up with demand Resources Refer to the following resources to learn more about AWS best practices for benchmarking Videos • Amazon Web Services YouTube Channel • AWS Online Tech Talks YouTube Channel • AWS Events YouTube Channel ArchivedAmazon Web Services Performance Efficiency Pillar 32 Monitoring After you implement your architecture you must monitor its performance so that you can remediate any issues before they impact your customers Monitoring metrics should be used to raise alarms when thresholds are breached Monitoring at AWS consists of five distinct phases which are explained in more detail in the Reliability Pillar whitepaper : 1 Generation – scope of monitoring metrics and thresholds 2 Aggregation – creating a complete view from multiple sour ces 3 Real time processing and alarming – recognizing and responding 4 Storage – data management and retention policies 5 Analytics – dashboards reporting and insights CloudWatch is a monitoring service for AWS Cloud resources and the workloads that run on AWS You can use CloudWatch to collect and track metrics collect and monitor log files and set alarms CloudWatch can monitor AWS resources such as EC2 instances and RDS DB instances as well as custom metrics generated by your workloads and services an d any log files your applications generate You can use CloudWatch to gain system wide visibility into resource utilization application performance and operational health You can use these insights to react quickly and keep your workload running smoothl y CloudWatch dashboards enable you to create reusable graphs of AWS resources and custom metrics so you can monitor operational status and identify issues at a glance Ensuring that you do not see false positives is key to an effective monitoring solu tion Automated triggers avoid human error and can reduce the time it takes to fix problems Plan for game days where simulations are conducted in the production environment to test your alarm solution and ensure that it correctly recognizes issues Moni toring solutions fall into two types: active monitoring (AM) and passive monitoring (PM) AM and PM complement each other to give you a full view of how your workload is performing Active monitoring simulates 
user activity in scripted user journeys across critical paths in your product AM should be continuously performed in order to test the performance and availability of a workload AM complements PM by being continuous lightweight ArchivedAmazon Web Services Performance Efficiency Pillar 33 and predictable It can be run across all environments (especially pre production environments) to identify problems or performance issues before they impact end users Passive monitoring is commonly used with web based workloads PM collects performance metrics from the browser (non webbased workloads can use a similar approach) You can collect metrics across all users (or a subset of users) geographies browsers and device types Use PM to understand the following issues: • User experience performance : PM provides you with metrics on what your users are experiencing which gives you a continuous view into how production is working as well as a view into the impact of changes over time • Geographic performance variability : If a workload has a global footprint and users access the workload from all around t he world using PM can enable you to spot a performance problem impacting users in a specific geography • The impact of API use : Modern workloads use internal APIs and third party APIs PM provides the visibility into the use of APIs so you can identify performance bottlenecks that originate not only from internal APIs but also from thirdparty API providers CloudWatch provides the ability to monitor and send notification alarms You can use automation to work around performance issues by triggering action s through Amazon Kinesis Amazon Simple Queue Service (Amazon SQS) and AWS Lambda Monitor Your Resources to Ensure That They Are Performing as Expected System performance can degrade over time Monitor system performance to identify degradation and reme diate internal or external factors such as the operating system or application load Record performance related metrics: Use a monitoring and observability service to record performance related metrics For example record database transactions slow queries I/O latency HTTP request throughput service latency or other key data Identify the performance metrics that matter for your workload and record them This data is an important part of being able to identify which components are impacting overall performance or efficiency of the workload ArchivedAmazon Web Services Performance Efficiency Pillar 34 Working back from the customer experience identify metrics that matter For each metric identify the target measurement approach and priority Use these to build alarms and notifications to proactively address performance related issues Analyze metrics when events or incidents occur : In response to (or during) an event or incident use monitoring dashboards or reports to understand and diagnose the impact These views provide insight into which portions of the workload are not performing as expected When you write critical user stories for your architecture include performance requirements such as specifying how quickly each critical story should execute For these critical stories implement additional scripted user journeys to ensure that you know how these stories perform against your requirement Establish Key Performance Indicators (KPIs) to measure workload performance : Identify the KPIs that indicate whether the workload is performing as intended For example an API based workload might use ov erall response latency as an indication of overall performance and an e 
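If your workload already writes query timings to CloudWatch Logs, CloudWatch Logs Insights can surface the slow-query measurements mentioned above without extra instrumentation. The sketch below is hedged: the log group name, the duration_ms field, and the one-hour window are hypothetical and depend on how your application structures its logs.

```python
import time

import boto3

logs = boto3.client("logs")

# Find the slowest database calls recorded over the last hour.
query_id = logs.start_query(
    logGroupName="/orders/app",  # hypothetical log group
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, query, duration_ms "
        "| filter ispresent(duration_ms) "
        "| sort duration_ms desc "
        "| limit 20"
    ),
)["queryId"]

# Poll until the query finishes, then print the slowest calls.
while True:
    response = logs.get_query_results(queryId=query_id)
    if response["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in response.get("results", []):
    print({field["field"]: field["value"] for field in row})
```

Publishing the results of a query like this as a custom metric closes the loop between logs and alarms.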
commerce site might choose to use the number of purchases as its KPI Document the performance experience required by customers including how customers will judge the performance of the workload Use these requirements to establish your key performance indicators (KPIs) which will indicate how the system is performing overall Use monitoring to generate alarm based notifications: Using the performance related key performance indicato rs (KPIs) that you defined use a monitoring system that generates alarms automatically when these measurements are outside expected boundaries Amazon CloudWatch can collect metrics across the resources in your architecture You can also collect and publ ish custom metrics to surface business or derived metrics Use CloudWatch or a 3rd party monitoring service to set alarms that indicate when thresholds are breached; the alarms signal that a metric is outside of the expected boundaries Review metrics at r egular intervals: As routine maintenance or in response to events or incidents review which metrics are collected Use these reviews to identify which metrics were key in addressing issues and which additional metrics if they were being tracked would h elp to identify address or prevent issues ArchivedAmazon Web Services Performance Efficiency Pillar 35 As part of responding to incidents or events evaluate which metrics were helpful in addressing the issue and which metrics could have helped that are not currently being tracked Use this to improve the quality of metrics you collect so that you can prevent or more quickly resolve future incidents Monitor and alarm proactively: Use key performance indicators (KPIs) combined with monitoring and alerting systems to proactively address performance related issues Use alarms to trigger automated actions to remediate issues where possible Escalate the alarm to those able to respond if automated response is not possible For example you may have a system that can predict expected key perf ormance indicators (KPI) values and alarm when they breach certain thresholds or a tool that can automatically halt or roll back deployments if KPIs are outside of expected values Implement processes that provide visibility into performance as your workl oad is running Build monitoring dashboards and establish baseline norms for performance expectations to determine if the workload is performing optimally Resources Refer to the following resources to learn more about AWS best practices for monitoring to promote performance efficiency Videos • Cut through the chaos: Gain operational visibility and insight (MGT301 R1) Documentation • XRay Documentation • CloudWatch Documentation Trade offs When you architect solutions think about trade offs to ensure an optimal approach Depending on your situation you could trade consistency durability and space for time or latency to deliver higher performance Using AWS you can go global in minutes and deploy resources in multiple locations across the globe to be closer to your end u sers You can also dynamically add read only replicas to information stores (such as database systems ) to reduce the load on the primary database ArchivedAmazon Web Services Performance Efficiency P illar 36 AWS offers caching solutions such as Amazon ElastiCache which provides an in memory data store or cache and Amazon CloudFront which caches copies of your static content closer to end users Amazon DynamoDB Accelerator (DAX) provides a readthrough/write through distributed caching tier in front of Amazon DynamoDB supporting the 
same API but providi ng sub millisecond latency for entities that are in the cache Using Trade offs to Improve Performanc e When architecting solutions actively considering trade offs enables you to select an optimal approach Often you can improve performance by trading con sistency durability and space for time and latency Trade offs can increase the complexity of your architecture and require load testing to ensure that a measurable benefit is obtained Understand the areas where performance is most critical: Understand and identify areas where increasing the performance of your workload will have a positive impact on efficiency or customer experience For example a website that has a large amount of customer interaction can benefit from using edge services to move conte nt delivery closer to customers Learn about design patterns and services: Research and understand the various design patterns and services that help improve workload performance As part of the analysis identify what you could trade to achieve higher per formance For example using a cache service can help to reduce the load placed on database systems; however it requires some engineering to implement safe caching or possible introduction of eventual consistency in some areas Learn which performance con figuration options are available to you and how they could impact the workload Optimizing the performance of your workload depends on understanding how these options interact with your architecture and the impact they will have on both measured performanc e and the performance perceived by users The Amazon Builders’ Library provides readers with a detailed description of how Amazon builds and operates technology These free articles are written by Am azon’s senior engineers and cover topics across architecture software delivery and operations For example you can see how Amazon automates software delivery to achieve over 150 million deployments a year or how Amazon’s engineers implement principles such as shuffle sharding to build resilient systems that are highly available and fault tolerant ArchivedAmazon Web Services Performance Efficiency Pillar 37 Identify how trade offs impact customers and efficiency: When evaluating performance related improvements determine which choices will impact your customers and workload efficiency For example if using a key value data store increases system performance it is important to evaluate how the eventually consistent nature of it will impact customers Identify areas of poor performance in your system through metr ics and monitoring Determine how you can make improvements what trade offs those improvements bring and how they impact the system and the user experience For example implementing caching data can help dramatically improve performance but requires a clear strategy for how and when to update or invalidate cached data to prevent incorrect system behavior Measure the impact of performance improvements: As changes are made to improve performance evaluate the collected metrics and data Use this informati on to determine impact that the performance improvement had on the workload the workload’s components and your customers This measurement helps you understand the improvements that result from the tradeoff and helps you determine if any negative sideeffects were introduced A well architected system uses a combination of performance related strategies Determine which strategy will have the largest positive impact on a given hotspot or bottleneck For example sharding data across 
multiple relational d atabase systems could improve overall throughput while retaining support for transactions and within each shard caching can help to reduce the load Use various performance related strategies: Where applicable utilize multiple strategies to improve perf ormance For example using strategies like caching data to prevent excessive network or database calls using read replicas for database engines to improve read rates sharding or compressing data where possible to reduce data volumes and buffering and s treaming of results as they are available to avoid blocking As you make changes to the workload collect and evaluate metrics to determine the impact of those changes Measure the impacts to the system and to the end user to understand how your trade offs impact your workload Use a systematic approach such as load testing to explore whether the tradeoff improves performance Resources Refer to the following resources to learn more about AWS best practices for caching ArchivedAmazon Web Services Performance Efficiency Pillar 38 Video • Introducing The Amazon Builders’ Library (DOP328) Documentation • Amazon Builders’ Library • Best Practices for Implementing Amazon ElastiCache Conclusion Achieving and maintaining performance efficiency requires a data driven approach You should actively consider access patterns and trade offs tha t will allow you to optimize for higher performance Using a review process based on benchmarks and load tests allows you to select the appropriate resource types and configurations Treating your infrastructure as code enables you to rapidly and safely ev olve your architecture while you use data to make fact based decisions about your architecture Putting in place a combination of active and passive monitoring ensures that the performance of your architecture does not degrade over time AWS strives to he lp you build architectures that perform efficiently while delivering business value Use the tools and techniques discussed in this paper to ensure success Contributors The following individuals and organizations contributed to this document: • Eric Pullen Performance Efficiency Lead Well Architected Amazon Web Services • Philip Fitzsimons Sr Manager Well Architected Amazon Web Service s • Julien Lépine Specialist SA Manager Amazon Web Services • Ronnen Slasky Solutions Architect Amazon Web Services Further Reading For additional help consult the following sources: ArchivedAmazon Web Services Performance Efficiency Pillar 39 •AWS Well Architected Framework Document Revisions Date Description July 2020 Major review and update of content July 2018 Minor update for grammatical issues November 2017 Refreshed the whitepaper to reflect changes in AWS November 2016 First publication
|
General
|
consultant
|
Best Practices
|
AWS_WellArchitected_Framework__Reliability_Pillar
|
ArchivedReliability Pillar AWS WellArchitected Framework This paper has been archived The latest version is now available at: https://docsawsamazoncom/wellarchitected/latest/reliabilitypillar/welcomehtmlArchivedReliability Pillar AWS WellArchitected Framework Reliability Pillar: AWS WellArchitected Framework Copyright © 2020 Amazon Web Services Inc and/or its affiliates All rights reserved Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's in any manner that is likely to cause confusion among customers or in any manner that disparages or discredits Amazon All other trademarks not owned by Amazon are the property of their respective owners who may or may not be affiliated with connected to or sponsored by AmazonArchivedReliability Pillar AWS WellArchitected Framework Table of Contents Abstract 1 Abstract 1 Introduction 2 Reliability 3 Design Principles 3 Definitions 3 Resiliency and the Components of Reliability 4 Availability 4 Disaster Recovery (DR) Objectives 7 Understanding Availability Needs 8 Foundations 9 Manage Service Quotas and Constraints 9 Resources 10 Plan your Network Topology 10 Resources 14 Workload Architecture 15 Design Your Workload Service Architecture 15 Resources 17 Design Interactions in a Distributed System to Prevent Failures 17 Resources 19 Design Interactions in a Distributed System to Mitigate or Withstand Failures 20 Resources 24 Change Management 25 Monitor Workload Resources 25 Resources 28 Design your Workload to Adapt to Changes in Demand 28 Resources 29 Implement Change 30 Additional deployment patterns to minimize risk: 32 Resources 32 Failure Management 34 Back up Data 34 Resources 35 Use Fault Isolation to Protect Your Workload 36 Resources 40 Design your Workload to Withstand Component Failures 41 Resources 43 Test Reliability 44 Resources 46 Plan for Disaster Recovery (DR) 47 Resources 49 Example Implementations for Availability Goals 50 Dependency Selection 50 SingleRegion Scenarios 50 2 9s (99%) Scenario 51 3 9s (999%) Scenario 52 4 9s (9999%) Scenario 54 MultiRegion Scenarios 56 3½ 9s (9995%) with a Recovery Time between 5 and 30 Minutes 56 5 9s (99999%) or Higher Scenario with a Recovery Time under 1 minute 59 Resources 61 Documentation 61 Labs 62 External Links 62 iiiArchivedReliability Pillar AWS WellArchitected Framework Books 62 Conclusion 63 Contributors 64 Further Reading 65 Document Revisions 66 Appendix A: DesignedFor Availability for Select AWS Services 68 ivArchivedReliability Pillar AWS WellArchitected Framework Abstract Reliability Pillar AWS Well Architected Framework Publication date: July 2020 (Document Revisions (p 66)) Abstract The focus of this paper is the reliability pillar of the AWS WellArchitected Framework It provides guidance to help customers apply best practices in the design delivery and maintenance of Amazon Web Services (AWS) environments 1ArchivedReliability Pillar AWS WellArchitected Framework Introduction The AWS WellArchitected Framework helps you understand the pros and cons of decisions you make while building workloads on AWS By using the Framework you will learn architectural best practices for designing and operating reliable secure efficient and costeffective workloads in the cloud It provides a way to consistently measure your architectures against best practices and identify areas for improvement We believe that having wellarchitected workload greatly increases the likelihood of business success The AWS WellArchitected Framework is based on 
five pillars: • Operational Excellence • Security • Reliability • Performance Efficiency • Cost Optimization This paper focuses on the reliability pillar and how to apply it to your solutions Achieving reliability can be challenging in traditional onpremises environments due to single points of failure lack of automation and lack of elasticity By adopting the practices in this paper you will build architectures that have strong foundations resilient architecture consistent change management and proven failure recovery processes This paper is intended for those in technology roles such as chief technology officers (CTOs) architects developers and operations team members After reading this paper you will understand AWS best practices and strategies to use when designing cloud architectures for reliability This paper includes highlevel implementation details and architectural patterns as well as references to additional resources 2ArchivedReliability Pillar AWS WellArchitected Framework Design Principles Reliability The reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to This includes the ability to operate and test the workload through its total lifecycle This paper provides indepth best practice guidance for implementing reliable workloads on AWS Topics •Design Principles (p 3) •Definitions (p 3) •Understanding Availability Needs (p 8) Design Principles In the cloud there are a number of principles that can help you increase reliability Keep these in mind as we discuss best practices: •Automatically recover from failure: By monitoring a workload for key performance indicators (KPIs) you can trigger automation when a threshold is breached These KPIs should be a measure of business value not of the technical aspects of the operation of the service This allows for automatic notification and tracking of failures and for automated recovery processes that work around or repair the failure With more sophisticated automation it’s possible to anticipate and remediate failures before they occur •Test recovery procedures: In an onpremises environment testing is often conducted to prove that the workload works in a particular scenario Testing is not typically used to validate recovery strategies In the cloud you can test how your workload fails and you can validate your recovery procedures You can use automation to simulate different failures or to recreate scenarios that led to failures before This approach exposes failure pathways that you can test and fix before a real failure scenario occurs thus reducing risk •Scale horizontally to increase aggregate workload availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload Distribute requests across multiple smaller resources to ensure that they don’t share a common point of failure •Stop guessing capacity: A common cause of failure in onpremises workloads is resource saturation when the demands placed on a workload exceed the capacity of that workload (this is often the objective of denial of service attacks) In the cloud you can monitor demand and workload utilization and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over or underprovisioning There are still limits but some quotas can be controlled and others can be managed (see Manage Service Quotas and Constraints (p 9)) •Manage change in automation : Changes to your infrastructure 
should be made using automation The changes that need to be managed include changes to the automation which then can be tracked and reviewed Definitions This whitepaper covers reliability in the cloud describing best practice for these four areas: • Foundations • Workload Architecture 3ArchivedReliability Pillar AWS WellArchitected Framework Resiliency and the Components of Reliability • Change Management • Failure Management To achieve reliability you must start with the foundations—an environment where service quotas and network topology accommodate the workload The workload architecture of the distributed system must be designed to prevent and mitigate failures The workload must handle changes in demand or requirements and it must be designed to detect failure and automatically heal itself Topics •Resiliency and the components of Reliability (p 4) •Availability (p 4) •Disaster Recovery (DR) Objectives (p 7) Resiliency and the components of Reliability Reliability of a workload in the cloud depends on several factors the primary of which is Resiliency: •Resiliency is the ability of a workload to recover from infrastructure or service disruptions dynamically acquire computing resources to meet demand and mitigate disruptions such as misconfigurations or transient network issues The other factors impacting workload reliability are: • Operational Excellence which includes automation of changes use of playbooks to respond to failures and Operational Readiness Reviews (ORRs) to confirm that applications are ready for production operations • Security which includes preventing harm to data or infrastructure from malicious actors which would impact availability For example encrypt backups to ensure that data is secure • Performance Efficiency which includes designing for maximum request rates and minimizing latencies for your workload • Cost Optimization which includes tradeoffs such as whether to spend more on EC2 instances to achieve static stability or to rely on automatic scaling when more capacity is needed Resiliency is the primary focus of this whitepaper The other four aspects are also important and they are covered by their respective pillars of the AWS WellArchitected Framework Many of the best practices here also address those aspects of reliability but the focus is on resiliency Availability Availability (also known as service availability) is both a commonly used metric to quantitatively measure resiliency as well as a target resiliency objective •Availability is the percentage of time that a workload is available for use Available for use means that it performs its agreed function successfully when required This percentage is calculated over a period of time such as a month year or trailing three years Applying the strictest possible interpretation availability is reduced anytime that the application isn’t operating normally including both scheduled and unscheduled interruptions We define availability as follows: 4ArchivedReliability Pillar AWS WellArchitected Framework Availability • Availability is a percentage uptime (such as 999%) over a period of time (commonly a month or year) • Common shorthand refers only to the “number of nines”; for example “five nines” translates to being 99999% available • Some customers choose to exclude scheduled service downtime (for example planned maintenance) from the Total Time in the formula However this is not advised as your users will likely want to use your service during these times Here is a table of common application availability design 
goals and the maximum length of time that interruptions can occur within a year while still meeting the goal. The table contains examples of the types of applications we commonly see at each availability tier. Throughout this document we refer to these values.

Availability      Maximum Unavailability (per year)   Application Categories
99% (p 51)        3 days 15 hours                     Batch processing, data extraction, transfer, and load jobs
99.9% (p 52)      8 hours 45 minutes                  Internal tools like knowledge management, project tracking
99.95% (p 56)     4 hours 22 minutes                  Online commerce, point of sale
99.99% (p 54)     52 minutes                          Video delivery, broadcast workloads
99.999% (p 59)    5 minutes                           ATM transactions, telecommunications workloads

Measuring availability based on requests
For your service it may be easier to count successful and failed requests instead of "time available for use". In this case the following calculation can be used:

Availability = Successful responses / Valid requests

This is often measured for one-minute or five-minute periods. Then a monthly uptime percentage (time-based availability measurement) can be calculated from the average of these periods. If no requests are received in a given period, it is counted as 100% available for that time.

Calculating availability with hard dependencies
Many systems have hard dependencies on other systems, where an interruption in a dependent system directly translates to an interruption of the invoking system. This is opposed to a soft dependency, where a failure of the dependent system is compensated for in the application. Where such hard dependencies occur, the invoking system's availability is the product of the dependent systems' availabilities. For example, if you have a system designed for 99.99% availability that has a hard dependency on two other independent systems that each are designed for 99.99% availability, the workload can theoretically achieve 99.97% availability:

Avail_invoking × Avail_dep1 × Avail_dep2 = Avail_workload
99.99% × 99.99% × 99.99% = 99.97%

It's therefore important to understand your dependencies and their availability design goals as you calculate your own.

Calculating availability with redundant components
When a system involves the use of independent, redundant components (for example, redundant resources in different Availability Zones), the theoretical availability is computed as 100% minus the product of the component failure rates. For example, if a system makes use of two independent components, each with an availability of 99.9%, the effective availability of this dependency is 99.9999%:

Avail_effective = Avail_MAX − ((100% − Avail_dependency) × (100% − Avail_dependency))
99.9999% = 100% − (0.1% × 0.1%)

Shortcut calculation: If the availabilities of all components in your calculation consist solely of the digit nine, then you can sum the count of the number of nines digits to get your answer. In the above example, two redundant, independent components with three nines of availability results in six nines.

Calculating dependency availability
Some dependencies provide guidance on their availability, including availability design goals for many AWS services (see Appendix A: Designed-For Availability for Select AWS Services (p 68)). But in cases where this isn't available (for example, a component where the manufacturer does not publish availability information), one way to estimate is to determine the Mean Time Between Failure (MTBF) and Mean Time to Recover (MTTR). An availability estimate can be established by:

Availability = MTBF / (MTBF + MTTR)

For example, if the MTBF is 150 days and the MTTR is 1 hour, the availability estimate is 99.97%.
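As a worked example of the arithmetic above, the following Python sketch reproduces the three calculations in this section: serial (hard) dependencies, independent redundant components, and an MTBF/MTTR estimate. The figures are the illustrative values used in the text, not measurements from a real workload.

```python
def serial(*availabilities):
    """Hard (serial) dependencies: multiply the availabilities together."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(availability, copies):
    """Independent redundant components: 1 minus the product of failure rates."""
    return 1.0 - (1.0 - availability) ** copies

def from_mtbf_mttr(mtbf_hours, mttr_hours):
    """Estimate availability when only MTBF and MTTR are known."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(f"{serial(0.9999, 0.9999, 0.9999):.4%}")   # ~99.97%  (three hard dependencies)
print(f"{parallel(0.999, 2):.6%}")               # 99.9999% (two redundant components)
print(f"{from_mtbf_mttr(150 * 24, 1):.4%}")      # ~99.97%  (MTBF 150 days, MTTR 1 hour)
```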
For additional details, see this document (Calculating Total System Availability), which can help you calculate your availability.

Costs for availability
Designing applications for higher levels of availability typically results in increased cost, so it's appropriate to identify the true availability needs before embarking on your application design. High levels of availability impose stricter requirements for testing and validation under exhaustive failure scenarios. They require automation for recovery from all manner of failures, and require that all aspects of system operations be similarly built and tested to the same standards. For example, the addition or removal of capacity, the deployment or rollback of updated software or configuration changes, or the migration of system data must be conducted to the desired availability goal. Compounding the costs for software development, at very high levels of availability, innovation suffers because of the need to move more slowly in deploying systems. The guidance, therefore, is to be thorough in applying the standards and considering the appropriate availability target for the entire lifecycle of operating the system.

Another way that costs escalate in systems that operate with higher availability design goals is in the selection of dependencies. At these higher goals, the set of software or services that can be chosen as dependencies diminishes based on which of these services have had the deep investments we previously described. As the availability design goal increases, it's typical to find fewer multipurpose services (such as a relational database) and more purpose-built services. This is because the latter are easier to evaluate, test, and automate, and have a reduced potential for surprise interactions with included but unused functionality.

Disaster Recovery (DR) Objectives
In addition to availability objectives, your resiliency strategy should also include Disaster Recovery (DR) objectives based on strategies to recover your workload in case of a disaster event. Disaster Recovery focuses on one-time recovery objectives in response to natural disasters, large-scale technical failures, or human threats such as attack or error. This is different than availability, which measures mean resiliency over a period of time in response to component failures, load spikes, or software bugs.

Recovery Time Objective (RTO): Defined by the organization. RTO is the maximum acceptable delay between the interruption of service and restoration of service. This determines what is considered an acceptable time window when service is unavailable.

Recovery Point Objective (RPO): Defined by the organization. RPO is the maximum acceptable amount of time since the last data recovery point. This determines what is considered an acceptable loss of data between the last recovery point and the interruption of service.

Figure: The relationship of RPO (Recovery Point Objective), RTO (Recovery Time Objective), and the disaster event.

RTO is similar to MTTR (Mean Time to Recovery) in that both measure the time between the start of an outage and workload recovery. However, MTTR is a mean value taken over several availability-impacting events over a period of time, while RTO is a target, or maximum value allowed, for a single availability-impacting event.

Understanding Availability Needs
It's common to initially think of an application's
availability as a single target for the application as a whole However upon closer inspection we frequently find that certain aspects of an application or service have different availability requirements For example some systems might prioritize the ability to receive and store new data ahead of retrieving existing data Other systems prioritize real time operations over operations that change a system’s configuration or environment Services might have very high availability requirements during certain hours of the day but can tolerate much longer periods of disruption outside of these hours These are a few of the ways that you can decompose a single application into constituent parts and evaluate the availability requirements for each The benefit of doing this is to focus your efforts (and expense) on availability according to specific needs rather than engineering the whole system to the strictest requirement Recommendation Critically evaluate the unique aspects to your applications and where appropriate differentiate the availability and disaster recovery design goals to reflect the needs of your business Within AWS we commonly divide services into the “data plane” and the “control plane” The data plane is responsible for delivering realtime service while control planes are used to configure the environment For example Amazon EC2 instances Amazon RDS databases and Amazon DynamoDB table read/write operations are all data plane operations In contrast launching new EC2 instances or RDS databases or adding or changing table metadata in DynamoDB are all considered control plane operations While high levels of availability are important for all of these capabilities the data planes typically have higher availability design goals than the control planes Therefore workloads with high availability requirements should avoid runtime dependency on control plan operations Many AWS customers take a similar approach to critically evaluating their applications and identifying subcomponents with different availability needs Availability design goals are then tailored to the different aspects and the appropriate work efforts are executed to engineer the system AWS has significant experience engineering applications with a range of availability design goals including services with 99999% or greater availability AWS Solution Architects (SAs) can help you design appropriately for your availability goals Involving AWS early in your design process improves our ability to help you meet your availability goals Planning for availability is not only done before your workload launches It’s also done continuously to refine your design as you gain operational experience learn from real world events and endure failures of different types You can then apply the appropriate work effort to improve upon your implementation The availability needs that are required for a workload must be aligned to the business need and criticality By first defining business criticality framework with defined RTO RPO and availability you can then assess each workload Such an approach requires that the people involved in implementation of the workload are knowledgeable of the framework and the impact their workload has on business needs 8ArchivedReliability Pillar AWS WellArchitected Framework Manage Service Quotas and Constraints Foundations Foundational requirements are those whose scope extends beyond a single workload or project Before architecting any system foundational requirements that influence reliability should be in place For 
example you must have sufficient network bandwidth to your data center In an onpremises environment these requirements can cause long lead times due to dependencies and therefore must be incorporated during initial planning With AWS however most of these foundational requirements are already incorporated or can be addressed as needed The cloud is designed to be nearly limitless so it’s the responsibility of AWS to satisfy the requirement for sufficient networking and compute capacity leaving you free to change resource size and allocations on demand The following sections explain best practices that focus on these considerations for reliability Topics •Manage Service Quotas and Constraints (p 9) •Plan your Network Topology (p 10) Manage Service Quotas and Constraints For cloudbased workload architectures there are service quotas (which are also referred to as service limits) These quotas exist to prevent accidentally provisioning more resources than you need and to limit request rates on API operations so as to protect services from abuse There are also resource constraints for example the rate that you can push bits down a fiberoptic cable or the amount of storage on a physical disk If you are using AWS Marketplace applications you must understand the limitations of those applications If you are using thirdparty web services or software as a service you must be aware of those limits also Aware of service quotas and constraints: You are aware of your default quotas and quota increase requests for your workload architecture You additionally know which resource constraints such as disk or network are potentially impactful Service Quotas is an AWS service that helps you manage your quotas for over 100 AWS services from one location Along with looking up the quota values you can also request and track quota increases from the Service Quotas console or via the AWS SDK AWS Trusted Advisor offers a service quotas check that displays your usage and quotas for some aspects of some services The default service quotas per service are also in the AWS documentation per respective service for example see Amazon VPC Quotas Rate limits on throttled APIs are set within the API Gateway itself by configuring a usage plan Other limits that are set as configuration on their respective services include Provisioned IOPS RDS storage allocated and EBS volume allocations Amazon Elastic Compute Cloud (Amazon EC2) has its own service limits dashboard that can help you manage your instance Amazon Elastic Block Store (Amazon EBS) and Elastic IP address limits If you have a use case where service quotas impact your application’s performance and they are not adjustable to your needs then contact AWS Support to see if there are mitigations Manage quotas across accounts and regions: If you are using multiple AWS accounts or AWS Regions ensure that you request the appropriate quotas in all environments in which your production workloads run 9ArchivedReliability Pillar AWS WellArchitected Framework Resources Service quotas are tracked per account Unless otherwise noted each quota is AWS Regionspecific In addition to the production environments also manage quotas in all applicable nonproduction environments so that testing and development are not hindered Accommodate fixed service quotas and constraints through architecture: Be aware of unchangeable service quotas and physical resources and architect to prevent these from impacting reliability Examples include network bandwidth AWS Lambda payload size throttle burst rate for 
API Gateway and concurrent user connections to an Amazon Redshift cluster Monitor and manage quotas : Evaluate your potential usage and increase your quotas appropriately allowing for planned growth in usage For supported services you can manage your quotas by configuring CloudWatch alarms to monitor usage and alert you to approaching quotas These alarms can be triggered from Service Quotas or from Trusted Advisor You can also use metric filters on CloudWatch Logs to search and extract patterns in logs to determine if usage is approaching quota thresholds Automate quota management : Implement tools to alert you when thresholds are being approached By using Service Quotas APIs you can automate quota increase requests If you integrate your Configuration Management Database (CMDB) or ticketing system with Service Quotas you can automate the tracking of quota increase requests and current quotas In addition to the AWS SDK Service Quotas offers automation using AWS command line tools Ensure that a sufficient gap exists between the current quotas and the maximum usage to accommodate failover: When a resource fails it may still be counted against quotas until it’s successfully terminated Ensure that your quotas cover the overlap of all failed resources with replacements before the failed resources are terminated You should consider an Availability Zone failure when calculating this gap Resources Video •AWS Live re:Inforce 2019 Service Quotas Documentation •What Is Service Quotas? •AWS Service Quotas (formerly referred to as service limits) •Amazon EC2 Service Limits •AWS Trusted Advisor Best Practice Checks (see the Service Limits section) •AWS Limit Monitor on AWS Answers •AWS Marketplace: CMDB products that help track limits •APN Partner: partners that can help with configuration management Plan your Network Topology Workloads often exist in multiple environments These include multiple cloud environments (both publicly accessible and private) and possibly your existing data center infrastructure Plans must 10ArchivedReliability Pillar AWS WellArchitected Framework Plan your Network Topology include network considerations such as intrasystem and intersystem connectivity public IP address management private IP address management and domain name resolution When architecting systems using IP addressbased networks you must plan network topology and addressing in anticipation of possible failures and to accommodate future growth and integration with other systems and their networks Amazon Virtual Private Cloud (Amazon VPC) lets you provision a private isolated section of the AWS Cloud where you can launch AWS resources in a virtual network Use highly available network connectivity for your workload public endpoints : These endpoints and the routing to them must be highly a vailable To achieve this use highly available DNS content delivery networks (CDNs) API Gateway load balancing or reverse proxies Amazon Route 53 AWS Global Accelerator Amazon CloudFront Amazon API Gateway and Elastic Load Balancing (ELB) all provide highly available public endpoints You might also choose to evaluate AWS Marketplace software appliances for load balancing and proxying Consumers of the service your workload provides whether they are endusers or other services make requests on these service endpoints Several AWS resources are available to enable you to provide highly available endpoints Elastic Load Balancing provides load balancing across Availability Zones performs Layer 4 (TCP) or Layer 7 (http/https) routing 
integrates with AWS WAF and integrates with AWS Auto Scaling to help create a selfhealing infrastructure and absorb increases in traffic while releasing resources when traffic decreases Amazon Route 53 is a scalable and highly available Domain Name System (DNS) service that connects user requests to infrastructure running in AWS–such as Amazon EC2 instances Elastic Load Balancing load balancers or Amazon S3 buckets–and can also be used to route users to infrastructure outside of AWS AWS Global Accelerator is a network layer service that you can use to direct traffic to optimal endpoints over the AWS global network Distributed Denial of Service (DDoS) attacks risk shutting out legitimate traffic and lowering availability for your users AWS Shield provides automatic protection against these attacks at no extra cost for AWS service endpoints on your workload You can augment these features with virtual appliances from APN Partners and the AWS Marketplace to meet your needs Provision redundant connectivity between private networks in the cloud and onpremises environments: Use multiple AWS Direct Connect (DX) connections or VPN tunnels between separately deployed private networks Use multiple DX locations for high availability If using multiple AWS Regions ensure redundancy in at least two of them You might want to evaluate AWS Marketplace appliances that terminate VPNs If you use AWS Marketplace appliances deploy redundant instances for high availability in different Availability Zones AWS Direct Connect is a cloud service that makes it easy to establish a dedicated network connection from your onpremises environment to AWS Using Direct Connect Gateway your onpremises data center can be connected to multiple AWS VPCs spread across multiple AWS Regions This redundancy addresses possible failures that impact connectivity resiliency: • How are you going to be resilient to failures in your topology? • What happens if you misconfigure something and remove connectivity? • Will you be able to handle an unexpected increase in traffic/use of your services? • Will you be able to absorb an attempted Distributed Denial of Service (DDoS) attack? 
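One way to keep watch over the redundant private connectivity described above is to check the tunnel telemetry that AWS exposes for Site-to-Site VPN connections. The following boto3 sketch, assuming default credentials and Region and offered purely as an operational illustration, flags any VPN connection that does not have both of its tunnels reporting UP, since a single healthy tunnel is itself a single point of failure for that connection.

```python
import boto3

ec2 = boto3.client("ec2")

# Inspect every Site-to-Site VPN connection visible to this account/Region.
for vpn in ec2.describe_vpn_connections()["VpnConnections"]:
    tunnels_up = [t for t in vpn.get("VgwTelemetry", []) if t["Status"] == "UP"]
    if len(tunnels_up) < 2:
        print(f"{vpn['VpnConnectionId']}: only {len(tunnels_up)} tunnel(s) UP - check redundancy")
    else:
        print(f"{vpn['VpnConnectionId']}: both tunnels UP")
```

A similar check against Direct Connect connection states can complement this for DX-based designs.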
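For the public endpoint guidance earlier in this section, Route 53 health checks combined with failover routing are one way to keep a highly available entry point. The sketch below uses boto3 to create a health check and a PRIMARY failover record; the hosted zone ID, domain name, and IP address are placeholders, and a matching SECONDARY record pointing at a standby endpoint is assumed but not shown.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000EXAMPLE"   # hypothetical hosted zone
PRIMARY_IP = "203.0.113.10"                # documentation-range address

# Health check Route 53 uses to decide whether the primary endpoint is serving.
health_check = route53.create_health_check(
    CallerReference="primary-endpoint-check-001",
    HealthCheckConfig={
        "IPAddress": PRIMARY_IP,
        "Port": 443,
        "Type": "HTTPS",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY failover record; traffic shifts to the SECONDARY record when this check fails.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": PRIMARY_IP}],
                "HealthCheckId": health_check["HealthCheck"]["Id"],
            },
        }]
    },
)
```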
large rather than too small to make it easier to manage your VPCs Prefer hubandspoke topologies over manytomany mesh: If more than two network address spaces (for example VPCs and onpremises networks) are connected via VPC peering AWS Direct Connect or VPN then use a hubandspoke model like those provided by AWS Transit Gateway If you have only two such networks you can simply connect them to each other but as the number of networks grows the complexity of such meshed connections becomes untenable AWS Transit Gateway provides an easy to maintain hubandspoke model allowing the routing of traffic across your multiple networks 12ArchivedReliability Pillar AWS WellArchitected Framework Plan your Network Topology Figure 1: Without AWS Transit Gateway: You need to peer each Amazon VPC to each other and to each onsite location using a VPN connection which can become complex as it scales Figure 2: With AWS Transit Gateway: You simply connect each Amazon VPC or VPN to the AWS Transit Gateway and it routes traffic to and from each VPC or VPN Enforce nonoverlapping private IP address ranges in all private address spaces where they are connected: The IP address ranges of each of your VPCs must not overlap when peered or connected via VPN You must similarly avoid IP address conflicts between a VPC and onpremises environments or with other cloud providers that you use You must also have a way to allocate private IP address ranges when needed An IP address management (IPAM) system can help with this Several IPAMs are available from the AWS Marketplace 13ArchivedReliability Pillar AWS WellArchitected Framework Resources Resources Videos •AWS re:Invent 2018: Advanced VPC Design and New Capabilities for Amazon VPC (NET303) •AWS re:Invent 2019: AWS Transit Gateway reference architectures for many VPCs (NET406R1) Documentation •What Is a Transit Gateway? •What Is Amazon VPC? •Working with Direct Connect Gateways •Using the Direct Connect Resiliency Toolkit to get started •Multiple data center HA network connectivity •What Is AWS Global Accelerator? 
•Using redundant SitetoSite VPN connections to provide failover •VPC Endpoints and VPC Endpoint Services (AWS PrivateLink) •Amazon Virtual Private Cloud Connectivity Options Whitepaper •AWS Marketplace for Network Infrastructure •APN Partner: partners that can help plan your networking 14ArchivedReliability Pillar AWS WellArchitected Framework Design Your Workload Service Architecture Workload Architecture A reliable workload starts with upfront design decisions for both software and infrastructure Your architecture choices will impact your workload behavior across all five WellArchitected pillars For reliability there are specific patterns you must follow The following sections explain best practices to use with these patterns for reliability Topics •Design Your Workload Service Architecture (p 15) •Design Interactions in a Distributed System to Prevent Failures (p 17) •Design Interactions in a Distributed System to Mitigate or Withstand Failures (p 20) Design Your Workload Service Architecture Build highly scalable and reliable workloads using a serviceoriented architecture (SOA) or a microservices architecture Serviceoriented architecture (SOA) is the practice of making software components reusable via service interfaces Microservices architecture goes further to make components smaller and simpler Serviceoriented architecture (SOA) interfaces use common communication standards so that they can be rapidly incorporated into new workloads SOA replaced the practice of building monolith architectures which consisted of interdependent indivisible units At AWS we have always used SOA but have now embraced building our systems using microservices While microservices have several attractive qualities the most important benefit for availability is that microservices are smaller and simpler They allow you to differentiate the availability required of different services and thereby focus investments more specifically to the microservices that have the greatest availability needs For example to deliver product information pages on Amazoncom (“detail pages”) hundreds of microservices are invoked to build discrete portions of the page While there are a few services that must be available to provide the price and the product details the vast majority of content on the page can simply be excluded if the service isn’t available Even such things as photos and reviews are not required to provide an experience where a customer can buy a product Choose how to segment your workload: Monolithic architecture should be avoided Instead you should choose between SOA and microservices When making each choice balance the benefits against the complexities—what is right for a new product racing to first launch is different than what a workload built to scale from the start needs The benefits of using smaller segments include greater agility organizational flexibility and scalability Complexities include possible increased latency more complex debugging and increased operational burden Even if you choose to start with a monolith architecture you must ensure that it’s modular and has the ability to ultimately evolve to SOA or microservices as your product scales with user adoption SOA and microservices offer respectively smaller segmentation which is preferred as a modern scalable and reliable architecture but there are tradeoffs to consider especially when deploying a microservice architecture One is that you now have a distributed compute architecture that can make it harder to achieve user latency requirements and 
there is additional complexity in debugging and tracing of user interactions AWS XRay can be used to assist you in solving this problem Another effect to consider is increased operational complexity as you proliferate the number of applications that you are managing which requires the deployment of multiple independency components 15ArchivedReliability Pillar AWS WellArchitected Framework Design Your Workload Service Architecture Figure 3: Monolithic architecture versus microservices architecture Build services focused on specific business domains and functionality: SOA builds services with well delineated functions defined by business needs Microservices use domain models and bounded context to limit this further so that each service does just one thing Focusing on specific functionality enables you to differentiate the reliability requirements of different services and target investments more specifically A concise business problem and small team associated with each service also enables easier organizational scaling In designing a microservice architecture it’s helpful to use DomainDriven Design (DDD) to model the business problem using entities For example for Amazoncom entities may include package delivery schedule price discount and currency Then the model is further divided into smaller models using Bounded Context where entities that share similar features and attributes are grouped together So using the Amazon example package delivery and schedule would be part of the shipping context while price discount and currency are part of the pricing context With the model divided into contexts a template for how to boundary microservices emerges Provide service contracts per API: Service contracts are documented agreements between teams on service integration and include a machinereadable API definition rate limits and performance expectations A versioning strategy allows clients to continue using the existing API and migrate their applications to the newer API when they are ready Deployment can happen anytime as long as the contract is not violated The service provider team can use the technology stack of their choice to satisfy the API contract Similarly the service consumer can use their own technology 16ArchivedReliability Pillar AWS WellArchitected Framework Resources Microservices take the concept of SOA to the point of creating services that have a minimal set of functionality Each service publishes an API and design goals limits and other considerations for using the service This establishes a “contract” with calling applications This accomplishes three main benefits: • The service has a concise business problem to be served and a small team that owns the business problem This allows for better organizational scaling • The team can deploy at any time as long as they meet their API and other “contract” requirements • The team can use any technology stack they want to as long as they meet their API and other “contract” requirements Amazon API Gateway is a fully managed service that makes it easy for developers to create publish maintain monitor and secure APIs at any scale It handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls including traffic management authorization and access control monitoring and API version management Using OpenAPI Specification (OAS) formerly known as the Swagger Specification you can define your API contract and import it into API Gateway With API Gateway you can then version and deploy the APIs 
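To make the contract-per-API idea concrete, the following sketch imports a trimmed, hypothetical OpenAPI definition into API Gateway with boto3 and publishes it as a versioned stage. A deployable definition would normally also carry x-amazon-apigateway-integration extensions for each operation; they are omitted here for brevity, so treat this as an outline of the flow rather than a complete deployment.

```python
import json
import boto3

apigateway = boto3.client("apigateway")

# Hypothetical, machine-readable service contract (OpenAPI 3.0).
contract = {
    "openapi": "3.0.1",
    "info": {"title": "orders-service", "version": "2020-07-01"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "parameters": [{
                    "name": "orderId", "in": "path",
                    "required": True, "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Order found"}},
            }
        }
    },
}

# Import the contract as a REST API, then publish it behind a versioned stage
# so existing consumers can keep calling the old version until they migrate.
api = apigateway.import_rest_api(body=json.dumps(contract).encode("utf-8"))
apigateway.create_deployment(restApiId=api["id"], stageName="v1")
```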
Resources Documentation •Amazon API Gateway: Configuring a REST API Using OpenAPI •Implementing Microservices on AWS •Microservices on AWS External Links •Microservices a definition of this new architectural term •Microservice TradeOffs •Bounded Context (a central pattern in DomainDriven Design) Design Interactions in a Distributed System to Prevent Failures Distributed systems rely on communications networks to interconnect components such as servers or services Your workload must operate reliably despite data loss or latency in these networks Components of the distributed system must operate in a way that does not negatively impact other components or the workload These best practices prevent failures and improve mean time between failures (MTBF) Identify which kind of distributed system is required: Hard realtime distributed systems require responses to be given synchronously and rapidly while soft realtime systems have a more generous time window of minutes or more for response Offline systems handle responses through batch or asynchronous processing Hard realtime distributed systems have the most stringent reliability requirements The most difficult challenges with distributed systems are for the hard realtime distributed systems also known as request/reply services What makes them difficult is that requests arrive unpredictably and responses must be given rapidly (for example the customer is actively waiting for the response) 17ArchivedReliability Pillar AWS WellArchitected Framework Design Interactions in a Distributed System to Prevent Failures Examples include frontend web servers the order pipeline credit card transactions every AWS API and telephony Implement loosely coupled dependencies: Dependencies such as queuing systems streaming systems workflows and load balancers are loosely coupled Loose coupling helps isolate behavior of a component from other components that depend on it increasing resiliency and agility If changes to one component force other components that rely on it to also change then they are tightly coupled Loose coupling breaks this dependency so that dependent components only need to know the versioned and published interface Implementing loose coupling between dependencies isolates a failure in one from impacting another Loose coupling enables you to add additional code or features to a component while minimizing risk to components that depend on it Also scalability is improved as you can scale out or even change underlying implementation of the dependency To further improve resiliency through loose coupling make component interactions asynchronous where possible This model is suitable for any interaction that does not need an immediate response and where an acknowledgment that a request has been registered will suffice It involves one component that generates events and another that consumes them The two components do not integrate through direct pointtopoint interaction but usually through an intermediate durable storage layer such as an SQS queue or a streaming data platform such as Amazon Kinesis or AWS Step Functions Figure 4: Dependencies such as queuing systems and load balancers are loosely coupled 18ArchivedReliability Pillar AWS WellArchitected Framework Resources Amazon SQS queues and Elastic Load Balancers are just two ways to add an intermediate layer for loose coupling Eventdriven architectures can also be built in the AWS Cloud using Amazon EventBridge which can abstract clients (event producers) from the services they rely on (event 
consumers). Amazon Simple Notification Service is an effective solution when you need high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing.

While queues offer several advantages, in most hard real-time systems requests older than a threshold time (often seconds) should be considered stale (the client has given up and is no longer waiting for a response) and not processed. This way, more recent (and likely still valid) requests can be processed instead.

Make all responses idempotent: An idempotent service promises that each request is completed exactly once, such that making multiple identical requests has the same effect as making a single request. An idempotent service makes it easier for a client to implement retries without fear that a request will be erroneously processed multiple times. To do this, clients can issue API requests with an idempotency token; the same token is used whenever the request is repeated. An idempotent service API uses the token to return a response identical to the response that was returned the first time that the request was completed.

In a distributed system, it's easy to perform an action at most once (client makes only one request), or at least once (keep requesting until the client gets confirmation of success). But it's hard to guarantee an action is idempotent, which means it's performed exactly once, such that making multiple identical requests has the same effect as making a single request. Using idempotency tokens in APIs, services can receive a mutating request one or more times without creating duplicate records or side effects.

Do constant work: Systems can fail when there are large, rapid changes in load. For example, a health check system that monitors the health of thousands of servers should send the same size payload (a full snapshot of the current state) each time. Whether no servers are failing, or all of them, the health check system is doing constant work with no large, rapid changes.

For example, if the health check system is monitoring 100,000 servers, the load on it is nominal under the normally light server failure rate. However, if a major event makes half of those servers unhealthy, then the health check system would be overwhelmed trying to update notification systems and communicate state to its clients. So instead the health check system should send the full snapshot of the current state each time. 100,000 server health states, each represented by a bit, would only be a 12.5 KB payload. Whether no servers are failing, or all of them are, the health check system is doing constant work, and large, rapid changes are not a threat to the system stability. This is actually how the control plane is designed for Amazon Route 53 health checks.
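As a small illustration of the idempotency tokens described above, the following sketch keys each mutating request on a client-supplied token and returns the stored response on a retry. The in-memory dictionary is a stand-in for a durable store (a real service would persist tokens, for example with a conditional database write), and the function and field names are hypothetical; AWS APIs such as Amazon EC2 RunInstances apply the same idea through a client token parameter.

```python
import uuid

# Illustrative in-memory token store; not durable across restarts.
_responses_by_token = {}

def create_order(idempotency_token, order):
    """Process a mutating request exactly once per token, returning the
    original response on any repeat of the same token."""
    if idempotency_token in _responses_by_token:
        return _responses_by_token[idempotency_token]   # identical response, no duplicate work
    response = {"order_id": str(uuid.uuid4()), "status": "ACCEPTED", "order": order}
    _responses_by_token[idempotency_token] = response
    return response

token = str(uuid.uuid4())                                # one token per logical request
first = create_order(token, {"sku": "example-sku", "qty": 1})
retry = create_order(token, {"sku": "example-sku", "qty": 1})   # e.g., a network retry
assert first == retry                                    # the retry does not create a second order
```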
Do constant work: Systems can fail when there are large, rapid changes in load. For example, a health check system that monitors the health of thousands of servers should send the same size payload (a full snapshot of the current state) each time. Whether no servers are failing or all of them are, the health check system is doing constant work with no large, rapid changes.

For example, if the health check system is monitoring 100,000 servers, the load on it is nominal under the normally light server failure rate. However, if a major event makes half of those servers unhealthy, then the health check system would be overwhelmed trying to update notification systems and communicate state to its clients. So instead the health check system should send the full snapshot of the current state each time. 100,000 server health states, each represented by a bit, would only be a 12.5 KB payload. Whether no servers are failing or all of them are, the health check system is doing constant work, and large, rapid changes are not a threat to the system's stability. This is actually how the control plane is designed for Amazon Route 53 health checks.

Resources

Videos
• AWS re:Invent 2019: Moving to event-driven architectures (SVS308)
• AWS re:Invent 2018: Close Loops & Opening Minds: How to Take Control of Systems Big & Small ARC337 (includes loose coupling, constant work, static stability)
• AWS New York Summit 2019: Intro to Event-driven Architectures and Amazon EventBridge (MAD205) (discusses EventBridge, SQS, SNS)

Documentation
• AWS Services That Publish CloudWatch Metrics
• What Is Amazon Simple Queue Service?
• Amazon EC2: Ensuring Idempotency
• The Amazon Builders' Library: Challenges with distributed systems
• Centralized Logging solution
• AWS Marketplace: products that can be used for monitoring and alerting
• APN Partner: partners that can help you with monitoring and logging

Design Interactions in a Distributed System to Mitigate or Withstand Failures

Distributed systems rely on communications networks to interconnect components (such as servers or services). Your workload must operate reliably despite data loss or latency over these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices enable workloads to withstand stresses or failures, more quickly recover from them, and mitigate the impact of such impairments. The result is improved mean time to recovery (MTTR).

Implement graceful degradation to transform applicable hard dependencies into soft dependencies: When a component's dependencies are unhealthy, the component itself can still function, although in a degraded manner. For example, when a dependency call fails, instead use a predetermined static response.

Consider a service B that is called by service A and in turn calls service C.

Figure 5: Service C fails when called from service B. Service B returns a degraded response to service A.

When service B calls service C, it received an error or timeout from it. Service B, lacking a response from service C (and the data it contains), instead returns what it can. This can be the last cached good value, or service B can substitute a predetermined static response for what it would have received from service C. It can then return a degraded response to its caller, service A. Without this static response, the failure in service C would cascade through service B to service A, resulting in a loss of availability.

As per the multiplicative factor in the availability equation for hard dependencies (see Calculating availability with hard dependencies (p 5)), any drop in the availability of C seriously impacts the effective availability of B. By returning the static response, service B mitigates the failure in C and, although degraded, makes service C's availability look like 100% (assuming it reliably returns the static response under error conditions). Note that the static response is a simple alternative to returning an error, and is not an attempt to recompute the response using different means. Such attempts at a completely different mechanism to try to achieve the same result are called fallback behavior, and are an antipattern to be avoided.

Another example of graceful degradation is the circuit breaker pattern. Retry strategies should be used when the failure is transient. When this is not the case and the operation is likely to fail, the circuit breaker pattern prevents the client from performing a request that is likely to fail. When requests are being processed normally, the circuit breaker is closed and requests flow through. When the remote system begins returning errors or exhibits high latency, the circuit breaker opens and the dependency is ignored, or results are replaced with more simply obtained but less comprehensive responses (which might simply be a response cache). Periodically, the system attempts to call the dependency to determine if it has recovered. When that occurs, the circuit breaker is closed.

Figure 6: Circuit breaker showing closed and open states

In addition to the closed and open states shown in the diagram, after a configurable period of time in the open state the circuit breaker can transition to half-open. In this state, it periodically attempts to call the service at a much lower rate than normal. This probe is used to check the health of the service. After a number of successes in the half-open state, the circuit breaker transitions to closed and normal requests resume. A sketch of this state machine follows.
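The following is a minimal, illustrative circuit breaker sketch, not a production library; the class name and thresholds are assumptions for the example, and a maintained library implementing this pattern is generally preferable to a hand-rolled version. Here, dependency is any zero-argument callable that invokes the remote service, and fallback returns the predetermined static (or cached) response described above.

import time

class CircuitBreaker:
    """Illustrative circuit breaker with closed, open, and half-open states."""

    def __init__(self, failure_threshold=5, open_seconds=30.0):
        self.failure_threshold = failure_threshold  # failures before opening
        self.open_seconds = open_seconds            # time to stay open before probing
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, dependency, fallback):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.open_seconds:
                self.state = "half-open"   # allow a probe request through
            else:
                return fallback()          # degraded static or cached response
        try:
            result = dependency()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            return fallback()
        # Success: reset the failure count and close the breaker.
        self.failures = 0
        self.state = "closed"
        return result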
Throttle requests: This is a mitigation pattern to respond to an unexpected increase in demand. Some requests are honored, but those over a defined limit are rejected and return a message indicating they have been throttled. The expectation on clients is that they will back off and abandon the request, or try again at a slower rate.

Your services should be designed to a known capacity of requests that each node or cell can process. This can be established through load testing. You then need to track the arrival rate of requests, and if the temporary arrival rate exceeds this limit, the appropriate response is to signal that the request has been throttled. This allows the user to retry, potentially to a different node/cell that might have available capacity. Amazon API Gateway provides methods for throttling requests. Amazon SQS and Amazon Kinesis can buffer requests, smoothing out the request rate, and alleviate the need for throttling for requests that can be addressed asynchronously.

Control and limit retry calls: Use exponential backoff to retry after progressively longer intervals. Introduce jitter to randomize those retry intervals, and limit the maximum number of retries.

Typical components in a distributed software system include servers, load balancers, databases, and DNS servers. In operation, and subject to failures, any of these can start generating errors. The default technique for dealing with errors is to implement retries on the client side. This technique increases the reliability and availability of the application. However, at scale—and if clients attempt to retry the failed operation as soon as an error occurs—the network can quickly become saturated with new and retried requests, each competing for network bandwidth. This can result in a retry storm, which will reduce the availability of the service. This pattern might continue until a full system failure occurs.

To avoid such scenarios, backoff algorithms such as the common exponential backoff should be used. Exponential backoff algorithms gradually decrease the rate at which retries are performed, thus avoiding network congestion. Many SDKs and software libraries, including those from AWS, implement a version of these algorithms. However, never assume a backoff algorithm exists—always test and verify this to be the case.

Simple backoff alone is not enough, because in distributed systems all clients may back off simultaneously, creating clusters of retry calls. Marc Brooker, in his blog post Exponential Backoff And Jitter, explains how to modify the wait() function in the exponential backoff to prevent clusters of retry calls. The solution is to add jitter in the wait() function. To avoid retrying for too long, implementations should cap the backoff to a maximum value. Finally, it's important to configure a maximum number of retries or elapsed time, after which retrying will simply fail. AWS SDKs implement this by default, and it can be configured. A sketch combining these elements is shown below.
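The following is a minimal sketch of capped exponential backoff with "full jitter", in the spirit of the wait() modification described above. The base delay, cap, and retry limit are illustrative assumptions; for AWS API calls, the SDK's built-in retry configuration should normally be used instead of hand-written retry loops.

import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry a callable with capped exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted: fail rather than retry forever
            # Exponential backoff capped at max_delay...
            backoff = min(max_delay, base_delay * (2 ** attempt))
            # ...with full jitter so clients do not retry in synchronized waves.
            time.sleep(random.uniform(0, backoff))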
For services lower in the stack, a maximum retry limit of zero or one will limit risk yet still be effective, as retries are delegated to services higher in the stack.

Fail fast and limit queues: If the workload is unable to respond successfully to a request, then fail fast. This allows the releasing of resources associated with a request, and permits the service to recover if it's running out of resources. If the workload is able to respond successfully but the rate of requests is too high, then use a queue to buffer requests instead. However, do not allow long queues that can result in serving stale requests that the client has already given up on.

This best practice applies to the server side, or receiver, of the request. Be aware that queues can be created at multiple levels of a system, and can seriously impede the ability to quickly recover, as older stale requests (that no longer need a response) are processed before newer requests in need of a response. Be aware of places where queues exist. They often hide in workflows or in work that's recorded to a database.

Set client timeouts: Set timeouts appropriately, verify them systematically, and do not rely on default values, as they are generally set too high. This best practice applies to the client side, or sender, of the request. Set both a connection timeout and a request timeout on any remote call, and generally on any call across processes. Many frameworks offer built-in timeout capabilities, but be careful, as many have default values that are infinite or too high. A value that is too high reduces the usefulness of the timeout because resources continue to be consumed while the client waits for the timeout to occur. A value that is too low can generate increased traffic on the backend and increased latency because too many requests are retried. In some cases, this can lead to complete outages because all requests are being retried. To learn more about how Amazon uses timeouts, retries, and backoff with jitter, refer to the Builders' Library: Timeouts, retries, and backoff with jitter.

Make services stateless where possible: Services should either not require state, or should offload state such that between different client requests there is no dependence on locally stored data on disk or in memory. This enables servers to be replaced at will without causing an availability impact. Amazon ElastiCache or Amazon DynamoDB are good destinations for offloaded state.

Figure 7: In this stateless web application, session state is offloaded to Amazon ElastiCache

When users or services interact with an application, they often perform a series of interactions that form a session. A session is unique data for users that persists between requests while they use the application. A stateless application is an application that does not need knowledge of previous interactions and does not store session information. Once designed to be stateless, you can then use serverless compute platforms, such as AWS Lambda or AWS Fargate. In addition to server replacement, another benefit of stateless applications is that they can scale horizontally, because any of the available compute resources (such as EC2 instances and AWS Lambda functions) can service any request.

Implement emergency levers: These are rapid processes that
may mitigate availability impact on your workload They can be operated in the absence of a root cause An ideal emergency lever reduces the cognitive burden on the resolvers to zero by providing fully deterministic activation and deactivation criteria Example levers include blocking all robot traffic or serving a static response Levers are often manual but they can also be automated Tips for implementing and using emergency levers: • When levers are activated do LESS not more • Keep it simple avoid bimodal behavior • Test your levers periodically These are examples of actions that are NOT emergency levers: • Add capacity • Call up service owners of clients that depend on your service and ask them to reduce calls • Making a change to code and releasing it 23ArchivedReliability Pillar AWS WellArchitected Framework Resources Resources Video •Retry backoff and jitter: AWS re:Invent 2019: Introducing The Amazon Builders’ Library (DOP328) Documentation •Error Retries and Exponential Backoff in AWS • Amazon API Gateway: Throttle API Requests for Better Throughput • The Amazon Builders' Library: Timeouts retries and backoff with jitter • The Amazon Builders' Library: Avoiding fallback in distributed systems • The Amazon Builders' Library: Avoiding insurmountable queue backlogs • The Amazon Builders' Library: Caching challenges and strategies Labs • WellArchitected lab: Level 300: Implementing Health Checks and Managing Dependencies to Improve Reliability External Links •CircuitBreaker (summarizes Circuit Breaker from “Release It!” book) Books • Michael Nygard “Release It! Design and Deploy ProductionReady Software” 24ArchivedReliability Pillar AWS WellArchitected Framework Monitor Workload Resources Change Management Changes to your workload or its environment must be anticipated and accommodated to achieve reliable operation of the workload Changes include those imposed on your workload such as spikes in demand as well as those from within such as feature deployments and security patches The following sections explain the best practices for change management Topics •Monitor Workload Resources (p 25) •Design your Workload to Adapt to Changes in Demand (p 28) •Implement Change (p 30) Monitor Workload Resources Logs and metrics are powerful tools to gain insight into the health of your workload You can configure your workload to monitor logs and metrics and send notifications when thresholds are crossed or significant events occur Monitoring enables your workload to recognize when lowperformance thresholds are crossed or failures occur so it can recover automatically in response Monitoring is critical to ensure that you are meeting your availability requirements Your monitoring needs to effectively detect failures The worst failure mode is the “silent” failure where the functionality is no longer working but there is no way to detect it except indirectly Your customers know before you do Alerting when you have problems is one of the primary reasons you monitor Your alerting should be decoupled from your systems as much as possible If your service interruption removes your ability to alert you will have a longer period of interruption At AWS we instrument our applications at multiple levels We record latency error rates and availability for each request for all dependencies and for key operations within the process We record metrics of successful operation as well This allows us to see impending problems before they happen We don’t just consider average latency We focus even more closely on latency 
outliers, like the 99.9th and 99.99th percentiles. This is because if one request out of 1,000 or 10,000 is slow, that is still a poor experience. Also, although your average may be acceptable, if one in 100 of your requests causes extreme latency, it will eventually become a problem as your traffic grows.

Monitoring at AWS consists of four distinct phases:
1. Generation — Monitor all components for the workload
2. Aggregation — Define and calculate metrics
3. Real-time processing and alarming — Send notifications and automate responses
4. Storage and Analytics

Generation — Monitor all components for the workload: Monitor the components of the workload with Amazon CloudWatch or third-party tools, and monitor AWS services with Personal Health Dashboard. All components of your workload should be monitored, including the front-end, business logic, and storage tiers. Define key metrics and how to extract them from logs if necessary, and create thresholds for corresponding alarm events.

Monitoring in the cloud offers new opportunities. Most cloud providers have developed customizable hooks and insights into multiple layers of your workload. AWS makes an abundance of monitoring and log information available for consumption, which can be used to define change-in-demand processes. The following is just a partial list of services and features that generate log and metric data:
• Amazon ECS, Amazon EC2, Elastic Load Balancing, AWS Auto Scaling, and Amazon EMR publish metrics for CPU, network I/O, and disk I/O averages.
• Amazon CloudWatch Logs can be enabled for Amazon Simple Storage Service (Amazon S3), Classic Load Balancers, and Application Load Balancers.
• VPC Flow Logs can be enabled to analyze network traffic into and out of a VPC.
• AWS CloudTrail logs AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, and command line tools.
• Amazon EventBridge delivers a real-time stream of system events that describes changes in AWS services.
• AWS provides tooling to collect operating system-level logs and stream them into CloudWatch Logs.
• Custom Amazon CloudWatch metrics can be used for metrics of any dimension.
• Amazon ECS and AWS Lambda stream log data to CloudWatch Logs.
• Amazon Machine Learning (Amazon ML), Amazon Rekognition, Amazon Lex, and Amazon Polly provide metrics for successful and unsuccessful requests.
• AWS IoT provides metrics for the number of rule executions, as well as specific success and failure metrics around the rules.
• Amazon API Gateway provides metrics for the number of requests, erroneous requests, and latency for your APIs.
• Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.

In addition, monitor all of your external endpoints from remote locations to ensure that they are independent of your base implementation. This active monitoring can be done with synthetic transactions (sometimes referred to as "user canaries", but not to be confused with canary deployments), which periodically execute some number of common tasks performed by consumers of the application. Keep these short in duration, and be sure not to overload your workload during testing. Amazon CloudWatch Synthetics enables you to create canaries to monitor your endpoints and APIs. You can also combine the synthetic canary client nodes with the AWS X-Ray console to pinpoint which synthetic canaries are experiencing issues with errors, faults, or throttling rates for the selected time frame. An illustrative threshold alarm of the kind described above is shown below.
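As a sketch of creating a threshold for an alarm event, the following example uses boto3 to alarm on elevated 5XX errors from an Application Load Balancer and notify an Amazon SNS topic. The load balancer dimension value and topic ARN are placeholders, and the metric, threshold, and period should be chosen to match your own workload.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the load balancer returns an elevated number of 5XX errors.
# The dimension value and SNS topic ARN below are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="web-frontend-elevated-5xx",
    AlarmDescription="Investigate elevated 5XX responses from the web tier",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_ELB_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"}],
    Statistic="Sum",
    Period=60,                # evaluate one-minute windows
    EvaluationPeriods=3,      # three consecutive breaching periods
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)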
Aggregation — Define and calculate metrics: Store log data and apply filters where necessary to calculate metrics, such as counts of a specific log event, or latency calculated from log event timestamps.

Amazon CloudWatch and Amazon S3 serve as the primary aggregation and storage layers. For some services, like AWS Auto Scaling and Elastic Load Balancing, default metrics are provided "out of the box" for CPU load or average request latency across a cluster or instance. For streaming services, like VPC Flow Logs and AWS CloudTrail, event data is forwarded to CloudWatch Logs, and you need to define and apply metrics filters to extract metrics from the event data. This gives you time series data, which can serve as inputs to CloudWatch alarms that you define to trigger alerts.

Real-time processing and alarming — Send notifications: Organizations that need to know when significant events occur should receive notifications. Alerts can also be sent to Amazon Simple Notification Service (Amazon SNS) topics, and then pushed to any number of subscribers. For example, Amazon SNS can forward alerts to an email alias so that technical staff can respond.

Real-time processing and alarming — Automate responses: Use automation to take action when an event is detected, for example, to replace failed components. Alerts can trigger AWS Auto Scaling events, so that clusters react to changes in demand. Alerts can be sent to Amazon Simple Queue Service (Amazon SQS), which can serve as an integration point for third-party ticket systems. AWS Lambda can also subscribe to alerts, providing users an asynchronous serverless model that reacts to change dynamically. AWS Config continuously monitors and records your AWS resource configurations, and can trigger AWS Systems Manager Automation to remediate issues.

Storage and Analytics: Collect log files and metrics histories, and analyze these for broader trends and workload insights. Amazon CloudWatch Logs Insights supports a simple yet powerful query language that you can use to analyze log data. Amazon CloudWatch Logs also supports subscriptions that allow data to flow seamlessly to Amazon S3, where you can use Amazon Athena to query the data. It supports queries on a large array of formats. For more information, see Supported SerDes and Data Formats in the Amazon Athena User Guide. For analysis of huge log file sets, you can run an Amazon EMR cluster to run petabyte-scale analyses.

There are a number of tools provided by partners and third parties that allow for aggregation, processing, storage, and analytics. These tools include New Relic, Splunk, Loggly, Logstash, CloudHealth, and Nagios. However, outside generation of system and application logs is unique to each cloud provider, and often unique to each service.

An often-overlooked part of the monitoring process is data management. You need to determine the retention requirements for monitoring data, and then apply lifecycle policies accordingly. Amazon S3 supports lifecycle management at the S3 bucket level. This lifecycle management can be applied differently to different paths in the bucket. Toward the end of the lifecycle, you can transition data to Amazon S3 Glacier for long-term storage, and then expire it after the end of the retention period is reached. The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.

Conduct reviews regularly: Frequently review how
workload monitoring is implemented and update it based on significant events and changes Effective monitoring is driven by key business metrics Ensure these metrics are accommodated in your workload as business priorities change Auditing your monitoring helps ensure that you know when an application is meeting its availability goals Root Cause Analysis requires the ability to discover what happened when failures occur AWS provides services that allow you to track the state of your services during an incident: •Amazon CloudWatch Logs: You can store your logs in this service and inspect their contents •Amazon CloudWatch Logs Insights: Is a fully managed service that enables you to run analyze massive logs in seconds It gives you fast interactive queries and visualizations •AWS Config: You can see what AWS infrastructure was in use at different points in time •AWS CloudTrail: You can see which AWS APIs were invoked at what time and by what principal At AWS we conduct a weekly meeting to review operational performance and to share learnings between teams Because there are so many teams in AWS we created The Wheel to randomly pick a workload to review Establishing a regular cadence for operational performance reviews and knowledge sharing enhances your ability to achieve higher performance from your operational teams Monitor endtoend tracing of requests through your system: Use AWS XRay or thirdparty tools so that developers can more easily analyze and debug distributed systems to understand how their applications and its underlying services are performing 27ArchivedReliability Pillar AWS WellArchitected Framework Resources Resources Documentation •Using Amazon CloudWatch Metrics •Using Canaries (Amazon CloudWatch Synthetics) •Amazon CloudWatch Logs Insights Sample Queries •AWS Systems Manager Automation •What is AWS XRay? 
• Debugging with Amazon CloudWatch Synthetics and AWS X-Ray
• The Amazon Builders' Library: Instrumenting distributed systems for operational visibility

Design your Workload to Adapt to Changes in Demand

A scalable workload provides elasticity to add or remove resources automatically, so that they closely match the current demand at any given point in time.

Use automation when obtaining or scaling resources: When replacing impaired resources or scaling your workload, automate the process by using managed AWS services, such as Amazon S3 and AWS Auto Scaling. You can also use third-party tools and AWS SDKs to automate scaling. Managed AWS services include Amazon S3, Amazon CloudFront, AWS Auto Scaling, AWS Lambda, Amazon DynamoDB, AWS Fargate, and Amazon Route 53.

AWS Auto Scaling lets you detect and replace impaired instances. It also lets you build scaling plans for resources, including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. When scaling EC2 instances or Amazon ECS containers hosted on EC2 instances, ensure that you use multiple Availability Zones (preferably at least three) and add or remove capacity to maintain balance across these Availability Zones.

When using AWS Lambda, functions scale automatically. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code, up to the allocated concurrency. You need to ensure that the necessary concurrency is configured on the specific Lambda function and in your Service Quotas.

Amazon S3 automatically scales to handle high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing reads. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second.

Configure and use Amazon CloudFront or a trusted content delivery network. A content delivery network (CDN) can provide faster end-user response times and can serve requests for content that may cause unnecessary scaling of your workloads.

Obtain resources upon detection of impairment to a workload: Scale resources reactively when necessary, if availability is impacted, so as to restore workload availability. You first must configure health checks and the criteria on these checks to indicate when availability is impacted by lack of resources. Then, either notify the appropriate personnel to manually scale the resource, or trigger automation to automatically scale it.

Scale can be manually adjusted for your workload, for example, by changing the number of EC2 instances in an Auto Scaling group, or by modifying the throughput of a DynamoDB table through the console or AWS CLI. However, automation should be used whenever possible (see Use automation when scaling a workload up or down).

Obtain resources upon detection that more resources are needed for a workload: Scale resources proactively to meet demand and avoid availability impact. Many AWS services automatically scale to meet demand (see Use automation when scaling a workload up or down). If using EC2 instances or Amazon ECS clusters, you can configure automatic scaling of these to occur based on usage metrics that correspond to demand for your workload. For Amazon EC2, average CPU utilization, load balancer request count, or network bandwidth can be used to scale out (or scale in) EC2 instances. For Amazon ECS, average CPU utilization, load balancer request count, and memory utilization can be used to scale out (or scale in) ECS tasks. Using target tracking scaling on AWS, the autoscaler acts like a household thermostat, adding or removing resources to maintain the target value (for example, 70% CPU utilization) that you specify. AWS Auto Scaling can also do predictive scaling, which uses machine learning to analyze each resource's historical workload and regularly forecast the future load for the next two days. A sketch of a target tracking policy is shown below.
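The following is a minimal sketch of attaching a target tracking scaling policy to an Auto Scaling group so that it holds average CPU utilization near 70%. The group name is a placeholder, and the target value should reflect your own load testing.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking policy: the group adds or removes instances to hold
# average CPU utilization around the target value, like a thermostat.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-frontend-asg",   # placeholder group name
    PolicyName="keep-cpu-near-70-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)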
Little's Law helps calculate how many instances of compute (EC2 instances, concurrent Lambda functions, etc.) you need:

L = λW

where L = number of instances (or mean concurrency in the system), λ = mean rate at which requests arrive (requests/sec), and W = mean time that each request spends in the system (sec).

For example, at 100 requests per second, if each request takes 0.5 seconds to process, you will need 50 instances to keep up with demand.

Load test your workload: Adopt a load testing methodology to measure if scaling activity will meet workload requirements. It's important to perform sustained load testing. Load tests should discover the breaking point and test the performance of your workload. AWS makes it easy to set up temporary testing environments that model the scale of your production workload. In the cloud, you can create a production-scale test environment on demand, complete your testing, and then decommission the resources. Because you only pay for the test environment when it's running, you can simulate your live environment for a fraction of the cost of testing on premises.

Load testing in production should also be considered as part of game days, where the production system is stressed during hours of lower customer usage, with all personnel on hand to interpret results and address any problems that arise.

Resources

Documentation
• AWS Auto Scaling: How Scaling Plans Work
• What Is Amazon EC2 Auto Scaling?
• Managing Throughput Capacity Automatically with DynamoDB Auto Scaling
• What is Amazon CloudFront?
•Distributed Load Testing on AWS: simulate thousands of connected users 29ArchivedReliability Pillar AWS WellArchitected Framework Implement Change •AWS Marketplace: products that can be used with auto scaling •APN Partner: partners that can help you create automated compute solutions External Links •Telling Stories About Little's Law Implement Change Controlled changes are necessary to deploy new functionality and to ensure that the workloads and the operating environment are running known properly patched software If these changes are uncontrolled then it makes it difficult to predict the effect of these changes or to address issues that arise because of them Use runbooks for standard activities such as deployment: Runbooks are the predefined steps to achieve specific outcomes Use runbooks to perform standard activities whether done manually or automatically Examples include deploying a workload patching it or making DNS modifications For example put processes in place to ensure rollback safety during deployments Ensuring that you can roll back a deployment without any disruption for your customers is critical in making a service reliable For runbook procedures start with a valid effective manual process implement it in code and trigger automated execution where appropriate Even for sophisticated workloads that are highly automated runbooks are still useful for running game days (p 46) or meeting rigorous reporting and auditing requirements Note that playbooks are used in response to specific incidents and runbooks are used to achieve specific outcomes Often runbooks are for routine activities while playbooks are used for responding to non routine events Integrate functional testing as part of your deployment: Functional tests are run as part of automated deployment If success criteria are not met the pipeline is halted or rolled back These tests are run in a preproduction environment which is staged prior to production in the pipeline Ideally this is done as part of a deployment pipeline Integrate resiliency testing as part of your deployment: Resiliency tests (as part of chaos engineering) are run as part of the automated deployment pipeline in a preproduction environment These tests are staged and run in the pipeline prior to production They should also be run in production but as part of Game Days (p 46) Deploy using immutable infrastructure: This is a model that mandates that no updates security patches or configuration changes happen inplace on production systems When a change is needed the architecture is built onto new infrastructure and deployed into production The most common implementation of the immutable infrastructure paradigm is the immutable server This means that if a server needs an update or a fix new servers are deployed instead of updating the ones already in use So instead of logging into the server via SSH and updating the software version every change in the application starts with a software push to the code repository for example git push Since changes are not allowed in immutable infrastructure you can be sure about the state of the deployed system Immutable infrastructures are inherently more consistent reliable and predictable and they simplify many aspects of software development and operations 30ArchivedReliability Pillar AWS WellArchitected Framework Implement Change Use a canary or blue/green deployment when deploying applications in immutable infrastructures Canary deployment is the practice of directing a small number of your customers to the new 
version usually running on a single service instance (the canary) You then deeply scrutinize any behavior changes or errors that are generated You can remove traffic from the canary if you encounter critical problems and send the users back to the previous version If the deployment is successful you can continue to deploy at your desired velocity while monitoring the changes for errors until you are fully deployed AWS CodeDeploy can be configured with a deployment configuration that will enable a canary deployment Blue/green deployment is similar to the canary deployment except that a full fleet of the application is deployed in parallel You alternate your deployments across the two stacks (blue and green) Once again you can send traffic to the new version and fall back to the old version if you see problems with the deployment Commonly all traffic is switched at once however you can also use fractions of your traffic to each version to dial up the adoption of the new version using the weighted DNS routing capabilities of Amazon Route 53 AWS CodeDeploy and AWS Elastic Beanstalk can be configured with a deployment configuration that will enable a blue/green deployment Figure 8: Blue/green deployment with AWS Elastic Beanstalk and Amazon Route 53 Benefits of immutable infrastructure: •Reduction in configuration drifts: By frequently replacing servers from a base known and version controlled configuration the infrastructure is reset to a known state avoiding configuration drifts •Simplified deployments: Deployments are simplified because they don’t need to support upgrades Upgrades are just new deployments •Reliable atomic deployments: Deployments either complete successfully or nothing changes It gives more trust in the deployment process •Safer deployments with fast rollback and recovery processes: Deployments are safer because the previous working version is not changed You can roll back to it if errors are detected •Consistent testing and debugging environments: Since all servers use the same image there are no differences between environments One build is deployed to multiple environments It also prevents inconsistent environments and simplifies testing and debugging •Increased scalability: Since servers use a base image are consistent and repeatable automatic scaling is trivial •Simplified toolchain: The toolchain is simplified since you can get rid of configuration management tools managing production software upgrades No extra tools or agents are installed on servers Changes are made to the base image tested and rolledout 31ArchivedReliability Pillar AWS WellArchitected Framework Additional deployment patterns to minimize risk: •Increased security: By denying all changes to servers you can disable SSH on instances and remove keys This reduces the attack vector improving your organization’s security posture Deploy changes with automation: Deployments and patching are automated to eliminate negative impact Making changes to production systems is one of the largest risk areas for many organizations We consider deployments a firstclass problem to be solved alongside the business problems that the software addresses Today this means the use of automation wherever practical in operations including testing and deploying changes adding or removing capacity and migrating data AWS CodePipeline lets you manage the steps required to release your workload This includes a deployment state using AWS CodeDeploy to automate deployment of application code to Amazon EC2 instances onpremises instances 
serverless Lambda functions or Amazon ECS services Recommendation Although conventional wisdom suggests that you keep humans in the loop for the most difficult operational procedures we suggest that you automate the most difficult procedures for that very reason Additional deployment patterns to minimize risk: Feature flags (also known as feature toggles) are configuration options on an application You can deploy the software with a feature turned off so that your customers don’t see the feature You can then turn on the feature as you’d do for a canary deployment or you can set the change pace to 100% to see the effect If the deployment has problems you can simply turn the feature back off without rolling back Fault isolated zonal deployment: One of the most important rules AWS has established for its own deployments is to avoid touching multiple Availability Zones within a Region at the same time This is critical to ensuring that Availability Zones are independent for purposes of our availability calculations We recommend that you use similar considerations in your deployments Operational Readiness Reviews (ORRs) AWS finds it useful to perform operational readiness reviews that evaluate the completeness of the testing ability to monitor and importantly the ability to audit the applications performance to its SLAs and provide data in the event of an interruption or other operational anomaly A formal ORR is conducted prior to initial production deployment AWS will repeat ORRs periodically (once per year or before critical performance periods) to ensure that there has not been “drift” from operational expectations For more information on operational readiness see the Operational Excellence pillar of the AWS WellArchitected Framework Recommendation Conduct an Operational Readiness Review (ORR) for applications prior to initial production use and periodically thereafter Resources Videos •AWS Summit 2019: CI/CD on AWS 32ArchivedReliability Pillar AWS WellArchitected Framework Resources Documentation •What Is AWS CodePipeline? •What Is CodeDeploy? 
• Overview of a Blue/Green Deployment
• Deploying Serverless Applications Gradually
• The Amazon Builders' Library: Ensuring rollback safety during deployments
• The Amazon Builders' Library: Going faster with continuous delivery
• AWS Marketplace: products that can be used to automate your deployments
• APN Partner: partners that can help you create automated deployment solutions

Labs
• Well-Architected lab: Level 300: Testing for Resiliency of EC2, RDS, and S3

External Links
• CanaryRelease

Failure Management

Failures are a given and everything will eventually fail over time: from routers to hard disks, from operating systems to memory units corrupting TCP packets, from transient errors to permanent failures. This is a given, whether you are using the highest-quality hardware or lowest cost components.
Werner Vogels, CTO, Amazon.com

Low-level hardware component failures are something to be dealt with every day in an on-premises data center. In the cloud, however, you should be protected against most of these types of failures. For example, Amazon EBS volumes are placed in a specific Availability Zone, where they are automatically replicated to protect you from the failure of a single component. All EBS volumes are designed for 99.999% availability. Amazon S3 objects are stored across a minimum of three Availability Zones, providing 99.999999999% durability of objects over a given year. Regardless of your cloud provider, there is the potential for failures to impact your workload. Therefore, you must take steps to implement resiliency if you need your workload to be reliable.

A prerequisite to applying the best practices discussed here is that you must ensure that the people designing, implementing, and operating your workloads are aware of business objectives and the reliability goals to achieve these. These people must be aware of and trained for these reliability requirements.

The following sections explain the best practices for managing failures to prevent impact on your workload.

Topics
• Back up Data (p 34)
• Use Fault Isolation to Protect Your Workload (p 36)
• Design your Workload to Withstand Component Failures (p 41)
• Test Reliability (p 44)
• Plan for Disaster Recovery (DR) (p 47)

Back up Data

Back up data, applications, and configuration to meet requirements for recovery time objectives (RTO) and recovery point objectives (RPO).

Identify and back up all data that needs to be backed up, or reproduce the data from sources: Amazon S3 can be used as a backup destination for multiple data sources. AWS services like Amazon EBS, Amazon RDS, and Amazon DynamoDB have built-in capabilities to create backups, or third-party backup software can be used. Alternatively, if the data can be reproduced from other sources to meet RPO, you may not require a backup.

On-premises data can be backed up to the AWS Cloud using Amazon S3 buckets and AWS Storage Gateway. Backup data can be archived using Amazon S3 Glacier or S3 Glacier Deep Archive for affordable, non-time-sensitive cloud storage.

If you have loaded data from Amazon S3 to a data warehouse (like Amazon Redshift) or MapReduce cluster (like Amazon EMR) to do analysis on that data, this may be an example of data that can be reproduced from other sources. As long as the results of these analyses are either stored somewhere or reproducible, you would not suffer a data loss from a failure in the data warehouse or MapReduce cluster. Other examples that can be
reproduced from sources include caches (like Amazon ElastiCache) or RDS read replicas Secure and encrypt backup: Detect access using authentication and authorization like AWS Identity and Access Management (IAM) and detect data integrity compromise by using encryption Amazon S3 supports several methods of encryption of your data at rest Using serverside encryption Amazon S3 accepts your objects as unencrypted data and then encrypts them before persisting them Using clientside encryption your workload application is responsible for encrypting the data before it is sent to S3 Both methods allow you to either use AWS Key Management Service (AWS KMS) to create and store the data key or you may provide your own key (which you are then responsible for) Using AWS KMS you can set policies using IAM on who can and cannot access your data keys and decrypted data For Amazon RDS if you have chosen to encrypt your databases then your backups are encrypted also DynamoDB backups are always encrypted Perform data backup automatically: Configure backups to be made automatically based on a periodic schedule or by changes in the dataset RDS instances EBS volumes DynamoDB tables and S3 objects can all be configured for automatic backup AWS Marketplace solutions or thirdparty solutions can also be used Amazon Data Lifecycle Manager can be used to automate EBS snapshots Amazon RDS and Amazon DynamoDB enable continuous backup with Point in Time Recovery Amazon S3 versioning once enabled is automatic For a centralized view of your backup automation and history AWS Backup provides a fully managed policybased backup solution It centralizes and automates the back up of data across multiple AWS services in the cloud as well as on premises using the AWS Storage Gateway In additional to versioning Amazon S3 features replication The entire S3 bucket can be automatically replicated to another bucket in a different AWS Region Perform periodic recovery of the data to verify backup integrity and processes: Validate that your backup process implementation meets your recovery time objective (RTO) and recovery point objective (RPO) by performing a recovery test Using AWS you can stand up a testing environment and restore your backups there to assess RTO and RPO capabilities and run tests on data content and integrity Additionally Amazon RDS and Amazon DynamoDB allow pointintime recovery (PITR) Using continuous backup you are able to restore your dataset to the state it was in at a specified date and time Resources Videos •AWS re:Invent 2019: Deep dive on AWS Backup ft Rackspace (STG341) Documentation •What Is AWS Backup? 
•Amazon S3: Protecting Data Using Encryption •Encryption for Backups in AWS 35ArchivedReliability Pillar AWS WellArchitected Framework Use Fault Isolation to Protect Your Workload •Ondemand backup and restore for DynamoDB •EFStoEFS backup •AWS Marketplace: products that can be used for backup •APN Partner: partners that can help with backup Labs • WellArchitected lab: Level 200: Testing Backup and Restore of Data • WellArchitected lab: Level 200: Implementing BiDirectional CrossRegion Replication (CRR) for Amazon Simple Storage Service (Amazon S3) Use Fault Isolation to Protect Your Workload Fault isolated boundaries limit the effect of a failure within a workload to a limited number of components Components outside of the boundary are unaffected by the failure Using multiple fault isolated boundaries you can limit the impact on your workload Deploy the workload to multiple locations: Distribute workload data and resources across multiple Availability Zones or where necessary across AWS Regions These locations can be as diverse as required One of the bedrock principles for service design in AWS is the avoidance of single points of failure in underlying physical infrastructure This motivates us to build software and systems that use multiple Availability Zones and are resilient to failure of a single zone Similarly systems are built to be resilient to failure of a single compute node single storage volume or single instance of a database When building a system that relies on redundant components it’s important to ensure that the components operate independently and in the case of AWS Regions autonomously The benefits achieved from theoretical availability calculations with redundant components are only valid if this holds true Availability Zones (AZs) AWS Regions are composed of multiple Availability Zones that are designed to be independent of each other Each Availability Zone is separated by a meaningful physical distance from other zones to avoid correlated failure scenarios due to environmental hazards like fires floods and tornadoes Each Availability Zone also has independent physical infrastructure: dedicated connections to utility power standalone backup power sources independent mechanical services and independent network connectivity within and beyond the Availability Zone This design limits faults in any of these systems to just the one affected AZ Despite being geographically separated Availability Zones are located in the same regional area which enables highthroughput lowlatency networking The entire AWS region (across all Availability Zones consisting of multiple physically independent data centers) can be treated as a single logical deployment target for your workload including the ability to synchronously replicate data (for example between databases) This allows you to use Availability Zones in an active/active or active/standby configuration Availability Zones are independent and therefore workload availability is increased when the workload is architected to use multiple zones Some AWS services (including the Amazon EC2 instance data plane) are deployed as strictly zonal services where they have shared fate with the Availability Zone they are in Amazon EC2 instance in the other AZs will however be unaffected and continue to function Similarly if a failure in an Availability Zone causes an Amazon Aurora database to fail a readreplica Aurora instance in an unaffected AZ can be automatically promoted to primary Regional AWS services like Amazon DynamoDB on the other hand 
internally use multiple Availability Zones in an active/active configuration to achieve the availability design goals for that service without you needing to configure AZ placement 36ArchivedReliability Pillar AWS WellArchitected Framework Use Fault Isolation to Protect Your Workload Figure 9: Multitier architecture deployed across three Availability Zones Note that Amazon S3 and Amazon DynamoDB are always MultiAZ automatically The ELB also is deployed to all three zones While AWS control planes typically provide the ability to manage resources within the entire Region (multiple Availability Zones) certain control planes (including Amazon EC2 and Amazon EBS) have the ability to filter results to a single Availability Zone When this is done the request is processed only in the specified Availability Zone reducing exposure to disruption in other Availability Zones In this AWS CLI example it illustrates getting Amazon EC2 instance information from only the useast2c Availability Zone: aws ec2 describeinstances filters Name=availabilityzoneValues=useast2c AWS Local Zones AWS Local Zones act similarly to Availability Zones within their respective AWS Region in that they can be selected as a placement location for zonal AWS resources like subnets and EC2 instances What makes them special is that they are located not in the associated AWS Region but near large population industry and IT centers where no AWS Region exists today Yet they still retain highbandwidth secure connection between local workloads in the local zone and those running in the AWS Region You should use AWS Local Zones to deploy workloads closer to your users for lowlatency requirements Amazon Global Edge Network Amazon Global Edge Network consists of edge locations in cities around the world Amazon CloudFront uses this network to deliver content to end users with lower latency AWS Global Accelerator enables you to create your workload endpoints in these edge locations to provide onboarding to the AWS global network close to your users Amazon API Gateway enables edgeoptimized API endpoints using a CloudFront distribution to facilitate client access through the closest edge location AWS Regions AWS Regions are designed to be autonomous therefore to use a multiregion approach you would deploy dedicated copies of services to each Region A multiregion approach is common for disaster recovery strategies to meet recovery objectives when oneoff largescale events occur See Plan for Disaster Recovery (DR) (p 47) for more information on these strategies Here however we focus instead on availability which seeks to deliver a mean 37ArchivedReliability Pillar AWS WellArchitected Framework Use Fault Isolation to Protect Your Workload uptime objective over time For highavailability objectives a multiregion architecture will generally be designed to be active/active where each service copy (in their respective regions) is active (serving requests) Recommendation Availability goals for most workloads can be satisfied using a MultiAZ strategy within a single AWS Region Consider multiregion architectures only when workloads have extreme availability requirements or other business goals that require a multiregion architecture AWS provides customers the capabilities to operate services crossregion For example AWS provides continuous asynchronous data replication of data using Amazon Simple Storage Service (Amazon S3) Replication Amazon RDS Read Replicas (including Aurora Read Replicas) and Amazon DynamoDB Global Tables With continuous replication 
versions of your data are available for immediate use in each of your active Regions.

Using AWS CloudFormation, you can define your infrastructure and deploy it consistently across AWS accounts and across AWS Regions. AWS CloudFormation StackSets extends this functionality by enabling you to create, update, or delete AWS CloudFormation stacks across multiple accounts and Regions with a single operation. For Amazon EC2 instance deployments, an AMI (Amazon Machine Image) is used to supply information such as hardware configuration and installed software. You can implement an Amazon EC2 Image Builder pipeline that creates the AMIs you need and copies these to your active Regions. This ensures that these "Golden AMIs" have everything you need to deploy and scale out your workload in each new Region.

To route traffic, both Amazon Route 53 and AWS Global Accelerator enable the definition of policies that determine which users go to which active regional endpoint. With Global Accelerator, you set a traffic dial to control the percentage of traffic that is directed to each application endpoint. Route 53 supports this percentage approach, and also multiple other available policies, including geoproximity and latency-based ones. Global Accelerator automatically leverages the extensive network of AWS edge servers to onboard traffic to the AWS network backbone as soon as possible, resulting in lower request latencies.

All of these capabilities operate so as to preserve each Region's autonomy. There are very few exceptions to this approach, including our services that provide global edge delivery (such as Amazon CloudFront and Amazon Route 53), along with the control plane for the AWS Identity and Access Management (IAM) service. The vast majority of services operate entirely within a single Region.

On-premises data center

For workloads that run in an on-premises data center, architect a hybrid experience when possible. AWS Direct Connect provides a dedicated network connection from your premises to AWS, enabling you to run in both environments.

Another option is to run AWS infrastructure and services on premises using AWS Outposts. AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to your data center. The same hardware infrastructure used in the AWS Cloud is installed in your data center. AWS Outposts are then connected to the nearest AWS Region. You can then use AWS Outposts to support your workloads that have low latency or local data processing requirements.

Automate recovery for components constrained to a single location: If components of the workload can only run in a single Availability Zone or on-premises data center, you must implement the capability to do a complete rebuild of the workload within defined recovery objectives.

If the best practice to deploy the workload to multiple locations is not possible due to technological constraints, you must implement an alternate path to resiliency. You must automate the ability to recreate necessary infrastructure, redeploy applications, and recreate necessary data for these cases.

For example, Amazon EMR launches all nodes for a given cluster in the same Availability Zone, because running a cluster in the same zone improves performance of the job flows as it provides a higher data access rate. If this component is required for workload resilience, then you must have a way to redeploy the cluster and its data. Also, for Amazon EMR, you should provision
redundancy in ways other than using MultiAZ You can provision multiple master nodes Using EMR File System (EMRFS) data in EMR can be stored in Amazon S3 which in turn can be replicated across multiple Availability Zones or AWS Regions Similarly for Amazon Redshift by default it provisions your cluster in a randomly selected Availability Zone within the AWS Region that you select All the cluster nodes are provisioned in the same zone Use bulkhead architectures to limit scope of impact: Like the bulkheads on a ship this pattern ensures that a failure is contained to a small subset of requests/users so that the number of impaired requests is limited and most can continue without error Bulkheads for data are often called partitions while bulkheads for services are known as cells In a cellbased architecture each cell is a complete independent instance of the service and has a fixed maximum size As load increases workloads grow by adding more cells A partition key is used on incoming traffic to determine which cell will process the request Any failure is contained to the single cell it occurs in so that the number of impaired requests is limited as other cells continue without error It is important to identify the proper partition key to minimize crosscell interactions and avoid the need to involve complex mapping services in each request Services that require complex mapping end up merely shifting the problem to the mapping services while services that require crosscell interactions create dependencies between cells (and thus reduce the assumed availability improvements of doing so) Figure 10: Cellbased architecture In his AWS blog post Colm MacCarthaigh explains how Amazon Route 53 uses the concept of shuffle sharding to isolate customer requests into shards A shard in this case consists of two or more cells Based on partition key traffic from a customer (or resources or whatever you want to isolate) is routed to its assigned shard In the case of eight cells with two cells per shard and customers divided among the four shards 25% of customers would experience impact in the event of a problem 39ArchivedReliability Pillar AWS WellArchitected Framework Resources Figure 11: Service divided into four traditional shards of two cells each With shuffle sharding you create virtual shards of two cells each and assign your customers to one of those virtual shards When a problem happens you can still lose a quarter of the whole service but the way that customers or resources are assigned means that the scope of impact with shuffle sharding is considerably smaller than 25% With eight cells there are 28 unique combinations of two cells which means that there are 28 possible shuffle shards (virtual shards) If you have hundreds or thousands of customers and assign each customer to a shuffle shard then the scope of impact due to a problem is just 1/28th That’s seven times better than regular sharding Figure 12: Service divided into 28 shuffle shards (virtual shards) of two cells each (only two shuffle shards out of the 28 possible are shown) A shard can be used for servers queues or other resources in addition to cells Resources Videos •AWS re:Invent 2018: Architecture Patterns for MultiRegion ActiveActive Applications (ARC209R2) •Shufflesharding: AWS re:Invent 2019: Introducing The Amazon Builders’ Library (DOP328) •AWS re:Invent 2018: How AWS Minimizes the Blast Radius of Failures (ARC338) •AWS re:Invent 2019: Innovation and operation of the AWS global network infrastructure (NET339) 40ArchivedReliability 
Pillar AWS WellArchitected Framework Design your Workload to Withstand Component Failures Documentation •What is AWS Outposts? •Global Tables: MultiRegion Replication with DynamoDB •AWS Local Zones FAQ • AWS Global Infrastructure •Regions Availability Zones and Local Zones • The Amazon Builders' Library: Workload isolation using shufflesharding Design your Workload to Withstand Component Failures Workloads with a requirement for high availability and low mean time to recovery (MTTR) must be architected for resiliency Monitor all components of the workload to detect failures: Continuously monitor the health of your workload so that you and your automated systems are aware of degradation or complete failure as soon as they occur Monitor for key performance indicators (KPIs) based on business value All recovery and healing mechanisms must start with the ability to detect problems quickly Technical failures should be detected first so that they can be resolved However availability is based on the ability of your workload to deliver business value so this needs to be a key measure of your detection and remediation strategy Failover to healthy resources: Ensure that if a resource failure occurs that healthy resources can continue to serve requests For location failures (such as Availability Zone or AWS Region) ensure you have systems in place to failover to healthy resources in unimpaired locations This is easier for individual resource failures (such as an EC2 instance) or impairment of an Availability Zone in a multiAZ workload as AWS services such as Elastic Load Balancing and AWS Auto Scaling help distribute load across resources and Availability Zones For multiregion workloads this is more complicated For example crossregion read replicas enable you to deploy your data to multiple AWS Regions but you still must promote the read replica to primary and point your traffic at it in the event of a primary location failure Amazon Route 53 and AWS Global Accelerator can also help route traffic across AWS Regions If your workload is using AWS services such as Amazon S3 or Amazon DynamoDB then they are automatically deployed to multiple Availability Zones In case of failure the AWS control plane automatically routes traffic to healthy locations for you For Amazon RDS you must choose MultiAZ as a configuration option and then on failure AWS automatically directs traffic to the healthy instance For Amazon EC2 instances or Amazon ECS tasks you choose which Availability Zones to deploy to Elastic Load Balancing then provides the solution to detect instances in unhealthy zones and route traffic to the healthy ones Elastic Load Balancing can even route traffic to components in your onpremises data center For MultiRegion approaches (which might also include onpremises data centers) Amazon Route 53 provides a way to define internet domains and assign routing policies that can include health checks to ensure that traffic is routed to healthy regions Alternately AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your application then routes to endpoints in AWS Regions of your choosing using the AWS global network instead of the internet for better performance and reliability 41ArchivedReliability Pillar AWS WellArchitected Framework Design your Workload to Withstand Component Failures AWS approaches the design of our services with fault recovery in mind We design services to minimize the time to recover from failures and impact on data Our services primarily use data stores 
that acknowledge requests only after they are durably stored across multiple replicas These services and resources include Amazon Aurora Amazon Relational Database Service (Amazon RDS) MultiAZ DB instances Amazon S3 Amazon DynamoDB Amazon Simple Queue Service (Amazon SQS) and Amazon Elastic File System (Amazon EFS) They are constructed to use cellbased isolation and use the fault isolation provided by Availability Zones We use automation extensively in our operational procedures We also optimize our replaceandrestart functionality to recover quickly from interruptions Automate healing on all layers: Upon detection of a failure use automated capabilities to perform actions to remediate Ability to restart is an important tool to remediate failures As discussed previously for distributed systems a best practice is to make services stateless where possible This prevents loss of data or availability on restart In the cloud you can (and generally should) replace the entire resource (for example EC2 instance or Lambda function) as part of the restart The restart itself is a simple and reliable way to recover from failure Many different types of failures occur in workloads Failures can occur in hardware software communications and operations Rather than constructing novel mechanisms to trap identify and correct each of the different types of failures map many different categories of failures to the same recovery strategy An instance might fail due to hardware failure an operating system bug memory leak or other causes Rather than building custom remediation for each situation treat any of them as an instance failure Terminate the instance and allow AWS Auto Scaling to replace it Later carry out the analysis on the failed resource out of band Another example is the ability to restart a network request Apply the same recovery approach to both a network timeout and a dependency failure where the dependency returns an error Both events have a similar effect on the system so rather than attempting to make either event a “special case” apply a similar strategy of limited retry with exponential backoff and jitter Ability to restart is a recovery mechanism featured in Recovery Oriented Computing (ROC) and high availability cluster architectures Amazon EventBridge can be used to monitor and filter for events such as CloudWatch Alarms or changes in state in other AWS services Based on event information it can then trigger AWS Lambda (or other targets) to execute custom remediation logic on your workload Amazon EC2 Auto Scaling can be configured to check for EC2 instance health If the instance is in any state other than running or if the system status is impaired Amazon EC2 Auto Scaling considers the instance to be unhealthy and launches a replacement instance If using AWS OpsWorks you can configure Auto Healing of EC2 instances at the layer level For largescale replacements (such as the loss of an entire Availability Zone) static stability is preferred for high availability instead of trying to obtain multiple new resources at once Use static stability to prevent bimodal behavior: Bimodal behavior is when your workload exhibits different behavior under normal and failure modes for example relying on launching new instances if an Availability Zone fails You should instead build systems that are statically stable and operate in only one mode In this case provision enough instances in each zone to handle workload load if one zone were removed and then use Elastic Load Balancing or Amazon Route 53 health checks 
to shift load away from the impaired instances Static stability for compute deployment (such as EC2 instances or containers) will result in the highest reliability This must be weighed against cost concerns It’s less expensive to provision less compute capacity and rely on launching new instances in the case of a failure But for largescale failures (such as an Availability Zone failure) this approach is less effective because it relies on reacting to impairments as they happen rather than being prepared for those impairments before they happen Your solution should weigh reliability versus the cost needs for your workload By using more Availability Zones the amount of additional compute you need for static stability decreases 42ArchivedReliability Pillar AWS WellArchitected Framework Resources Figure 13: After traffic has shifted use AWS Auto Scaling to asynchronously replace instances from the failed zone and launch them in the healthy zones Another example of bimodal behavior would be a network timeout that could cause a system to attempt to refresh the configuration state of the entire system This would add unexpected load to another component and might cause it to fail triggering other unexpected consequences This negative feedback loop impacts availability of your workload Instead you should build systems that are statically stable and operate in only one mode A statically stable design would be to do constant work (p 19) and always refresh the configuration state on a fixed cadence When a call fails the workload uses the previously cached value and triggers an alarm Another example of bimodal behavior is allowing clients to bypass your workload cache when failures occur This might seem to be a solution that accommodates client needs but should not be allowed because it significantly changes the demands on your workload and is likely to result in failures Send notifications when events impact availability: Notifications are sent upon the detection of significant events even if the issue caused by the event was automatically resolved Automated healing enables your workload to be reliable However it can also obscure underlying problems that need to be addressed Implement appropriate monitoring and events so that you can detect patterns of problems including those addressed by auto healing so that you can resolve root cause issues Amazon CloudWatch Alarms can be triggered based on failures that occur They can also trigger based on automated healing actions executed CloudWatch Alarms can be configured to send emails or to log incidents in thirdparty incident tracking systems using Amazon SNS integration Resources Videos •Static stability in AWS: AWS re:Invent 2019: Introducing The Amazon Builders’ Library (DOP328) Documentation • AWS OpsWorks: Using Auto Healing to Replace Failed Instances •What Is Amazon EventBridge? •Amazon Route 53: Choosing a Routing Policy 43ArchivedReliability Pillar AWS WellArchitected Framework Test Reliability •What Is AWS Global Accelerator? 
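Relatedly, the static stability guidance above (provision enough capacity in each zone to absorb the loss of one zone) reduces to simple arithmetic. The following is a minimal sketch with assumed instance counts; it only illustrates why using more Availability Zones shrinks the extra capacity you must hold:

import math

def static_stability_capacity(peak_instances, az_count):
    """Instances per AZ (and in total) so that losing one AZ still leaves enough
    capacity to serve peak load, per the static stability guidance above."""
    if az_count < 2:
        raise ValueError("Static stability across zones needs at least two AZs")
    per_az = math.ceil(peak_instances / (az_count - 1))
    return per_az, per_az * az_count

# Example with an assumed workload that needs 30 instances at peak.
for azs in (2, 3, 4):
    per_az, total = static_stability_capacity(30, azs)
    print(f"{azs} AZs: {per_az} instances per AZ, {total} total "
          f"({total - 30} instances of headroom)")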
• The Amazon Builders' Library: Static stability using Availability Zones • The Amazon Builders' Library: Implementing health checks •AWS Marketplace: products that can be used for fault tolerance •APN Partner: partners that can help with automation of your fault tolerance Labs • WellArchitected lab: Level 300: Implementing Health Checks and Managing Dependencies to Improve Reliability External Links •The Berkeley/Stanford RecoveryOriented Computing (ROC) Project Test Reliability After you have designed your workload to be resilient to the stresses of production testing is the only way to ensure that it will operate as designed and deliver the resiliency you expect Test to validate that your workload meets functional and nonfunctional requirements because bugs or performance bottlenecks can impact the reliability of your workload Test the resiliency of your workload to help you find latent bugs that only surface in production Exercise these tests regularly Use playbooks to investigate failures: Enable consistent and prompt responses to failure scenarios that are not well understood by documenting the investigation process in playbooks Playbooks are the predefined steps performed to identify the factors contributing to a failure scenario The results from any process step are used to determine the next steps to take until the issue is identified or escalated The playbook is proactive planning that you must do so as to be able to take reactive actions effectively When failure scenarios not covered by the playbook are encountered in production first address the issue (put out the fire) Then go back and look at the steps you took to address the issue and use these to add a new entry in the playbook Note that playbooks are used in response to specific incidents while runbooks are used to achieve specific outcomes Often runbooks are used for routine activities and playbooks are used to respond to nonroutine events Perform postincident analysis: Review customerimpacting events and identify the contributing factors and preventative action items Use this information to develop mitigations to limit or prevent recurrence Develop procedures for prompt and effective responses Communicate contributing factors and corrective actions as appropriate tailored to target audiences Assess why existing testing did not find the issue Add tests for this case if tests do not already exist Test functional requirements: These include unit tests and integration tests that validate required functionality You achieve the best outcomes when these tests are run automatically as part of build and deployment actions For instance using AWS CodePipeline developers commit changes to a source repository where CodePipeline automatically detects the changes Those changes are built and tests are run After the tests are complete the built code is deployed to staging servers for testing From the staging server CodePipeline runs more tests such as integration or load tests Upon the successful completion of those tests CodePipeline deploys the tested and approved code to production instances 44ArchivedReliability Pillar AWS WellArchitected Framework Test Reliability Additionally experience shows that synthetic transaction testing (also known as “canary testing” but not to be confused with canary deployments) that can run and simulate customer behavior is among the most important testing processes Run these tests constantly against your workload endpoints from diverse remote locations Amazon CloudWatch Synthetics enables you to create 
canaries to monitor your endpoints and APIs.

Test scaling and performance requirements: This includes load testing to validate that the workload meets scaling and performance requirements. In the cloud, you can create a production-scale test environment on demand for your workload. If you run these tests on scaled-down infrastructure, you must scale your observed results to what you think will happen in production. Load and performance testing can also be done in production if you are careful not to impact actual users, and tag your test data so it does not commingle with real user data and corrupt usage statistics or production reports. With testing, ensure that your base resources, scaling settings, service quotas, and resiliency design operate as expected under load.

Test resiliency using chaos engineering: Run tests that inject failures regularly into pre-production and production environments. Hypothesize how your workload will react to the failure, then compare your hypothesis to the testing results and iterate if they do not match. Ensure that production testing does not impact users. In the cloud, you can test how your workload fails, and you can validate your recovery procedures. You can use automation to simulate different failures or to recreate scenarios that led to failures before. This exposes failure pathways that you can test and fix before a real failure scenario occurs, thus reducing risk.

Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system's capability to withstand turbulent conditions in production. – Principles of Chaos Engineering

In pre-production and testing environments, chaos engineering should be done regularly and be part of your CI/CD cycle. In production, teams must take care not to disrupt availability, and should use game days as a way to control the risk of chaos engineering in production.

The testing effort should be commensurate with your availability goals. Testing to ensure that you can meet your availability goals is the only way you can have confidence that you will meet those goals. Test for component failures that you have designed your workload to be resilient against. These include loss of EC2 instances, failure of the primary Amazon RDS database instance, and Availability Zone outages. Test for external dependency unavailability. Your workload's resiliency to transient failures of dependencies should be tested for durations that may last from less than a second to hours. Other modes of degradation might cause reduced functionality and slow responses, often resulting in a brownout of your services. Common sources of this degradation are increased latency on critical services and unreliable network communication (dropped packets). You want to use the ability to inject such failures into your system, including networking effects such as latency and dropped messages, and DNS failures such as being unable to resolve a name or not being able to establish connections to dependent services.

There are several third-party options for injecting failures. These include open source options such as Netflix Chaos Monkey, The Chaos ToolKit, and Shopify Toxiproxy, as well as commercial options like Gremlin. We advise that initial investigations of how to implement chaos engineering use self-authored scripts. This enables engineering teams to become comfortable with how chaos is introduced into their workloads. For examples of these, see Testing for Resiliency of EC2, RDS, and S3, using multiple languages such as Bash, Python, Java, and PowerShell.
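In that spirit, here is a minimal self-authored chaos sketch in Python (boto3) that terminates one random in-service instance in an Auto Scaling group so you can observe whether the group detects the loss and self-heals. The group name is a placeholder, this is not the referenced lab's implementation, and you would run something like this against a non-production workload unless a game-day process explicitly covers production:

import random
import boto3

ASG_NAME = "my-test-web-asg"  # placeholder Auto Scaling group name, not from this paper

def terminate_random_instance(asg_name):
    """Terminate one random in-service instance from the given Auto Scaling group."""
    autoscaling = boto3.client("autoscaling")
    ec2 = boto3.client("ec2")

    groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
    instances = [
        i["InstanceId"]
        for g in groups["AutoScalingGroups"]
        for i in g["Instances"]
        if i["LifecycleState"] == "InService"
    ]
    if not instances:
        print("No in-service instances found; nothing to terminate.")
        return None

    victim = random.choice(instances)
    print(f"Injecting failure: terminating {victim}")
    # The Auto Scaling group should notice the missing instance and launch a replacement.
    ec2.terminate_instances(InstanceIds=[victim])
    return victim

if __name__ == "__main__":
    terminate_random_instance(ASG_NAME)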
You should also implement Injecting Chaos to Amazon EC2 using AWS Systems Manager, which enables you to simulate brownouts and high CPU conditions using AWS Systems Manager Documents.

Conduct game days regularly: Use game days to regularly exercise your procedures for responding to events and failures as close to production as possible (including in production environments), with the people who will be involved in actual failure scenarios. Game days enforce measures to ensure that production events do not impact users. Game days simulate a failure or event to test systems, processes, and team responses. The purpose is to actually perform the actions the team would perform as if an exceptional event happened. This will help you understand where improvements can be made, and can help develop organizational experience in dealing with events. These should be conducted regularly so that your team builds "muscle memory" on how to respond.

After your design for resiliency is in place and has been tested in non-production environments, a game day is the way to ensure that everything works as planned in production. A game day, especially the first one, is an "all hands on deck" activity where engineers and operations are all informed when it will happen and what will occur. Runbooks are in place. Simulated events are executed, including possible failure events in the production systems, in the prescribed manner, and impact is assessed. If all systems operate as designed, detection and self-healing will occur with little to no impact. However, if negative impact is observed, the test is rolled back and the workload issues are remedied, manually if necessary (using the runbook). Since game days often take place in production, all precautions should be taken to ensure that there is no impact on availability to your customers.

Resources

Videos
•AWS re:Invent 2019: Improving resiliency with chaos engineering (DOP309-R1)

Documentation
•Continuous Delivery and Continuous Integration
•Using Canaries (Amazon CloudWatch Synthetics)
•Use CodePipeline with AWS CodeBuild to test code and run builds
•Automate your operational playbooks with AWS Systems Manager
•AWS Marketplace: products that can be used for continuous integration
•APN Partner: partners that can help with implementation of a continuous integration pipeline

Labs
•Well-Architected lab: Level 300: Testing for Resiliency of EC2, RDS, and S3

External Links
•Principles of Chaos Engineering
•Resilience Engineering: Learning to Embrace Failure
•Apache JMeter

Books
•Casey Rosenthal and Nora Jones, "Chaos Engineering" (April 2020)

Plan for Disaster Recovery (DR)

Having backups and redundant workload components in place is the start of your DR strategy. RTO and RPO are your objectives (p. 7) for restoration of your workload. Set these based on business needs. Implement a strategy to meet these objectives, considering locations and function of workload resources and data. The probability of disruption and cost of recovery are also key factors that help to inform the business value of providing disaster recovery for a workload.

Both Availability and Disaster Recovery rely on the same best practices, such as monitoring for failures, deploying to multiple locations, and automatic failover. However, Availability focuses on components of the workload, while Disaster Recovery focuses on discrete copies of the entire workload. Disaster Recovery has different objectives from
Availability focusing on time to recovery after a disaster Define recovery objectives for downtime and data loss: The workload has a recovery time objective (RTO) and recovery point objective (RPO) Recovery Time Objective (RTO) is defined by the organization RTO is the maximum acceptable delay between the interruption of service and restoration of service This determines what is considered an acceptable time window when service is unavailable Recovery Point Objective (RPO) is defined by the organization RPO is the maximum acceptable amount of time since the last data recovery point This determines what is considered an acceptable loss of data between the last recovery point and the interruption of service Use defined recovery strategies to meet the recovery objectives: A disaster recovery (DR) strategy has been defined to meet your workload objectives Choose a strategy such as: backup and restore; standby (active/passive); or active/active When architecting a multiregion disaster recovery strategy for your workload you should choose one of the following multiregion strategies They are listed in increasing order of complexity and decreasing order of RTO and RPO DR Region refers to an AWS Region other than the one primary used for your workload (or any AWS Region if your workload is on premises) Some workloads have regulatory data residency requirements If this applies to your workload in a locality that currently has only one AWS region then you can use the Availability Zones within that region as discrete locations instead of AWS regions •Backup and restore (RPO in hours RTO in 24 hours or less): Back up your data and applications using pointintime backups into the DR Region Restore this data when necessary to recover from a disaster •Pilot light (RPO in minutes RTO in hours): Replicate your data from one region to another and provision a copy of your core workload infrastructure Resources required to support data replication and backup such as databases and object storage are always on Other elements such as application 47ArchivedReliability Pillar AWS WellArchitected Framework Plan for Disaster Recovery (DR) servers are loaded with application code and configurations but are switched off and are only used during testing or when Disaster Recovery failover is invoked •Warm standby (RPO in seconds RTO in minutes): Maintain a scaleddown but fully functional version of your workload always running in the DR Region Businesscritical systems are fully duplicated and are always on but with a scaled down fleet When the time comes for recovery the system is scaled up quickly to handle the production load The more scaledup the Warm Standby is the lower RTO and control plane reliance will be When scaled up to full scale this is known as a Hot Standby •Multiregion (multisite) activeactive (RPO near zero RTO potentially zero): Your workload is deployed to and actively serving traffic from multiple AWS Regions This strategy requires you to synchronize data across Regions Possible conflicts caused by writes to the same record in two different regional replicas must be avoided or handled Data replication is useful for data synchronization and will protect you against some types of disaster but it will not protect you against data corruption or destruction unless your solution also includes options for pointintime recovery Use services like Amazon Route 53 or AWS Global Accelerator to route your user traffic to where your workload is healthy For more details on AWS services you can use for activeactive 
architectures, see the AWS Regions section of Use Fault Isolation to Protect Your Workload (p. 36).

Recommendation: The difference between Pilot Light and Warm Standby can sometimes be difficult to understand. Both include an environment in your DR Region with copies of your primary Region assets. The distinction is that Pilot Light cannot process requests without additional action taken first, while Warm Standby can handle traffic (at reduced capacity levels) immediately. Pilot Light will require you to turn on servers, possibly deploy additional (non-core) infrastructure, and scale up, while Warm Standby only requires you to scale up (everything is already deployed and running). Choose between these based on your RTO and RPO needs.

Test disaster recovery implementation to validate the implementation: Regularly test failover to DR to ensure that RTO and RPO are met. A pattern to avoid is developing recovery paths that are rarely executed. For example, you might have a secondary data store that is used for read-only queries. When you write to a data store and the primary fails, you might want to fail over to the secondary data store. If you don't frequently test this failover, you might find that your assumptions about the capabilities of the secondary data store are incorrect. The capacity of the secondary, which might have been sufficient when you last tested, may no longer be able to tolerate the load under this scenario. Our experience has shown that the only error recovery that works is the path you test frequently. This is why having a small number of recovery paths is best. You can establish recovery patterns and regularly test them. If you have a complex or critical recovery path, you still need to regularly execute that failure in production to convince yourself that the recovery path works. In the example we just discussed, you should fail over to the standby regularly, regardless of need.

Manage configuration drift at the DR site or region: Ensure that your infrastructure, data, and configuration are as needed at the DR site or region. For example, check that AMIs and service quotas are up to date. AWS Config continuously monitors and records your AWS resource configurations. It can detect drift and trigger AWS Systems Manager Automation to fix it and raise alarms. AWS CloudFormation can additionally detect drift in stacks you have deployed (see the sketch after the resource links below).

Automate recovery: Use AWS or third-party tools to automate system recovery and route traffic to the DR site or region. Based on configured health checks, AWS services such as Elastic Load Balancing and AWS Auto Scaling can distribute load to healthy Availability Zones, while services such as Amazon Route 53 and AWS Global Accelerator can route load to healthy AWS Regions. For workloads on existing physical or virtual data centers or private clouds, CloudEndure Disaster Recovery, available through AWS Marketplace, enables organizations to set up an automated disaster recovery strategy to AWS. CloudEndure also supports cross-region / cross-AZ disaster recovery in AWS.

Resources

Videos
•AWS re:Invent 2019: Backup-and-restore and disaster-recovery solutions with AWS (STG208)

Documentation
•What Is AWS Backup?
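As referenced in the configuration drift guidance above, here is a minimal sketch in Python (boto3) that starts a CloudFormation drift detection run for a stack in a DR Region and reports whether the stack has drifted. The stack name and Region are placeholder assumptions, and a real check would also inspect per-resource drift details:

import time
import boto3

STACK_NAME = "dr-region-workload-stack"  # placeholder stack name in the DR Region

def check_stack_drift(stack_name, region):
    """Start drift detection for a stack and report its overall drift status."""
    cfn = boto3.client("cloudformation", region_name=region)

    detection_id = cfn.detect_stack_drift(StackName=stack_name)["StackDriftDetectionId"]

    # Drift detection is asynchronous; poll until the run completes.
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(5)

    drift_status = status.get("StackDriftStatus", "UNKNOWN")
    print(f"{stack_name} in {region}: {drift_status}")
    return drift_status

if __name__ == "__main__":
    check_stack_drift(STACK_NAME, region="us-west-2")  # placeholder DR Region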
•Remediating Noncompliant AWS Resources by AWS Config Rules •AWS Systems Manager Automation • AWS CloudFormation: Detect Drift on an Entire CloudFormation Stack •Amazon RDS: Crossregion backup copy •RDS: Replicating a Read Replica Across Regions •S3: CrossRegion Replication •Route 53: Configuring DNS Failover •CloudEndure Disaster Recovery • How do I implement an Infrastructure Configuration Management solution on AWS? •CloudEndure Disaster Recovery to AWS •AWS Marketplace: products that can be used for disaster recovery •APN Partner: partners that can help with disaster recovery 49ArchivedReliability Pillar AWS WellArchitected Framework Dependency Selection Example Implementations for Availability Goals In this section we’ll review workload designs using the deployment of a typical web application that consists of a reverse proxy static content on Amazon S3 an application server and a SQL database for persistent storage of data For each availability target we provide an example implementation This workload could instead use containers or AWS Lambda for compute and NoSQL (such as Amazon DynamoDB) for the database but the approaches are similar In each scenario we demonstrate how to meet availability goals through workload design for these topics: Topic For more information see this section Monitor resources Monitor Workload Resources (p 25) Adapt to changes in demand Design your Workload to Adapt to Changes in Demand (p 28) Implement change Implement Change (p 30) Back up data Back up Data (p 34) Architect for resiliency Use Fault Isolation to Protect Your Workload (p 36) Design your Workload to Withstand Component Failures (p 41) Test resiliency Test Reliability (p 44) Plan for disaster recovery (DR) Plan for Disaster Recovery (DR) (p 47) Dependency Selection We have chosen to use Amazon EC2 for our applications We will show how using Amazon RDS and multiple Availability Zones improves the availability of our applications We will use Amazon Route 53 for DNS When we use multiple Availability Zones we will use Elastic Load Balancing Amazon S3 is used for backups and static content As we design for higher reliability we must use services with higher availability themselves See Appendix A: DesignedFor Availability for Select AWS Services (p 68) for the design goals for the respective AWS services SingleRegion Scenarios Topics •2 9s (99%) Scenario (p 51) •3 9s (999%) Scenario (p 52) •4 9s (9999%) Scenario (p 54) 50ArchivedReliability Pillar AWS WellArchitected Framework 2 9s (99%) Scenario 2 9s (99%) Scenario These workloads are helpful to the business but it’s only an inconvenience if they are unavailable This type of workload can be internal tooling internal knowledge management or project tracking Or these can be actual customerfacing workloads but served from an experimental service with a feature toggle that can hide the service if needed These workloads can be deployed with one Region and one Availability Zone Monitor resources We will have simple monitoring indicating whether the service home page is returning an HTTP 200 OK status When problems occur our playbook will indicate that logging from the instance will be used to establish root cause Adapt to changes in demand We will have playbooks for common hardware failures urgent software updates and other disruptive changes Implement change We will use AWS CloudFormation to define our infrastructure as code and specifically to speed up reconstruction in the event of a failure Software updates are manually performed using a runbook with 
downtime required for the installation and restart of the service If a problem happens during deployment the runbook describes how to roll back to the previous version Any corrections of the error are done using analysis of logs by the operations and development teams and the correction is deployed after the fix is prioritized and completed Back up data We will use a vendor or purpose built backup solution to send encrypted backup data to Amazon S3 using a runbook We will test that the backups work by restoring the data and ensuring the ability to use it on a regular basis using a runbook We configure versioning on our Amazon S3 objects and remove permissions for deletion of the backups We use an Amazon S3 bucket lifecycle policy to archive or permanently delete according to our requirements Architect for resiliency Workloads are deployed with one Region and one Availability Zone We deploy the application including the database to a single instance Test resiliency The deployment pipeline of new software is scheduled with some unit testing but mostly whitebox/ blackbox testing of the assembled workload Plan for disaster recovery (DR) During failures we wait for the failure to finish optionally routing requests to a static website using DNS modification via a runbook The recovery time for this will be determined by the speed at which the infrastructure can be deployed and the database restored to the most recent backup This deployment can either be into the same Availability Zone or into a different Availability Zone in the event of an Availability Zone failure using a runbook 51ArchivedReliability Pillar AWS WellArchitected Framework 3 9s (999%) Scenario Availability design goal We take 30 minutes to understand and decide to execute recovery deploy the whole stack in AWS CloudFormation in 10 minutes assume that we deploy to a new Availability Zone and assume that the database can be restored in 30 minutes This implies that it takes about 70 minutes to recover from a failure Assuming one failure per quarter our estimated impact time for the year is 280 minutes or four hours and 40 minutes This means that the upper limit on availability is 999% The actual availability also depends on the real rate of failure the duration of failure and how quickly each failure actually recovers For this architecture we require the application to be offline for updates (estimating 24 hours per year: four hours per change six times per year) plus actual events So referring to the table on application availability earlier in the whitepaper we see that our availability design goal is 99% Summary Topic Implementation Monitor resources Site health check only; no alerting Adapt to changes in demand Vertical scaling via redeployment Implement change Runbook for deploy and rollback Back up data Runbook for backup and restore Architect for resiliency Complete rebuild; restore from backup Test resiliency Complete rebuild; restore from backup Plan for disaster recovery (DR) Encrypted backups restore to different Availability Zone if needed 3 9s (999%) Scenario The next availability goal is for applications for which it’s important to be highly available but they can tolerate short periods of unavailability This type of workload is typically used for internal operations that have an effect on employees when they are down This type of workload can also be customer facing but are not high revenue for the business and can tolerate a longer recovery time or recovery point Such workloads include administrative applications for 
account or information management We can improve availability for workloads by using two Availability Zones for our deployment and by separating the applications to separate tiers Monitor resources Monitoring will be expanded to alert on the availability of the website over all by checking for an HTTP 200 OK status on the home page In addition there will be alerting on every replacement of a web server and when the database fails over We will also monitor the static content on Amazon S3 for availability and alert if it becomes unavailable Logging will be aggregated for ease of management and to help in root cause analysis Adapt to changes in demand Automatic scaling is configured to monitor CPU utilization on EC2 instances and add or remove instances to maintain the CPU target at 70% but with no fewer than one EC2 instance per Availability 52ArchivedReliability Pillar AWS WellArchitected Framework 3 9s (999%) Scenario Zone If load patterns on our RDS instance indicate that scale up is needed we will change the instance type during a maintenance window Implement change The infrastructure deployment technologies remain the same as the previous scenario Delivery of new software is on a fixed schedule of every two to four weeks Software updates will be automated not using canary or blue/green deployment patterns but rather using replace in place The decision to roll back will be made using the runbook We will have playbooks for establishing root cause of problems After the root cause has been identified the correction for the error will be identified by a combination of the operations and development teams The correction will be deployed after the fix is developed Back up data Backup and restore can be done using Amazon RDS It will be executed regularly using a runbook to ensure that we can meet recovery requirements Architect for resiliency We can improve availability for applications by using two Availability Zones for our deployment and by separating the applications to separate tiers We will use services that work across multiple Availability Zones such as Elastic Load Balancing Auto Scaling and Amazon RDS MultiAZ with encrypted storage via AWS Key Management Service This will ensure tolerance to failures on the resource level and on the Availability Zone level The load balancer will only route traffic to healthy application instances The health check needs to be at the data plane/application layer indicating the capability of the application on the instance This check should not be against the control plane A health check URL for the web application will be present and configured for use by the load balancer and Auto Scaling so that instances that fail are removed and replaced Amazon RDS will manage the active database engine to be available in the second Availability Zone if the instance fails in the primary Availability Zone then repair to restore to the same resiliency After we have separated the tiers we can use distributed system resiliency patterns to increase the reliability of the application so that it can still be available even when the database is temporarily unavailable during an Availability Zone failover Test resiliency We do functional testing same as in the previous scenario We do not test the selfhealing capabilities of ELB automatic scaling or RDS failover We will have playbooks for common database problems securityrelated incidents and failed deployments Plan for disaster recovery (DR) Runbooks exist for total workload recovery and common reporting Recovery uses 
backups stored in the same region as the workload Availability design goal We assume that at least some failures will require a manual decision to execute recovery However with the greater automation in this scenario we assume that only two events per year will require this decision We take 30 minutes to decide to execute recovery and assume that recovery is completed within 30 minutes This implies 60 minutes to recover from failure Assuming two incidents per year our estimated impact time for the year is 120 minutes 53ArchivedReliability Pillar AWS WellArchitected Framework 4 9s (9999%) Scenario This means that the upper limit on availability is 9995% The actual availability also depends on the real rate of failure the duration of the failure and how quickly each failure actually recovers For this architecture we require the application to be briefly offline for updates but these updates are automated We estimate 150 minutes per year for this: 15 minutes per change 10 times per year This adds up to 270 minutes per year when the service is not available so our availability design goal is 999% Summary Topic Implementation Monitor resources Site health check only; alerts sent when down Adapt to changes in demand ELB for web and automatic scaling application tier; resizing MultiAZ RDS Implement change Automated deploy in place and runbook for rollback Back up data Automated backups via RDS to meet RPO and runbook for restoring Architect for resiliency Automatic scaling to provide selfhealing web and application tier; RDS is MultiAZ Test resiliency ELB and application are selfhealing; RDS is Multi AZ; no explicit testing Plan for disaster recovery (DR) Encrypted backups via RDS to same AWS Region 4 9s (9999%) Scenario This availability goal for applications requires the application to be highly available and tolerant to component failures The application must be able to absorb failures without needing to get additional resources This availability goal is for mission critical applications that are main or significant revenue drivers for a corporation such as an ecommerce site a business to business web service or a high traffic content/media site We can improve availability further by using an architecture that will be statically stable within the Region This availability goal doesn’t require a control plane change in behavior of our workload to tolerate failure For example there should be enough capacity to withstand the loss of one Availability Zone We should not require updates to Amazon Route 53 DNS We should not need to create any new infrastructure whether it’s creating or modifying an S3 bucket creating new IAM policies (or modifications of policies) or modifying Amazon ECS task configurations Monitor resources Monitoring will include success metrics as well as alerting when problems occur In addition there will be alerting on every replacement of a failed web server when the database fails over and when an AZ fails Adapt to changes in demand We will use Amazon Aurora as our RDS which enables automatic scaling of read replicas For these applications engineering for read availability over write availability of primary content is also a key architecture decision Aurora can also automatically grow storage as needed in 10 GB increments up to 64 TB 54ArchivedReliability Pillar AWS WellArchitected Framework 4 9s (9999%) Scenario Implement change We will deploy updates using canary or blue/green deployments into each isolation zone separately The deployments are fully automated including a roll 
back if KPIs indicate a problem Runbooks will exist for rigorous reporting requirements and performance tracking If successful operations are trending toward failure to meet performance or availability goals a playbook will be used to establish what is causing the trend Playbooks will exist for undiscovered failure modes and security incidents Playbooks will also exist for establishing the root cause of failures We will also engage with AWS Support for Infrastructure Event Management offering The team that builds and operates the website will identify the correction of error of any unexpected failure and prioritize the fix to be deployed after it is implemented Back up data Backup and restore can be done using Amazon RDS It will be executed regularly using a runbook to ensure that we can meet recovery requirements Architect for resiliency We recommend three Availability Zones for this approach Using a three Availability Zone deployment each AZ has static capacity of 50% of peak Two Availability Zones could be used but the cost of the statically stable capacity would be more because both zones would have to have 100% of peak capacity We will add Amazon CloudFront to provide geographic caching as well as request reduction on our application’s data plane We will use Amazon Aurora as our RDS and deploy read replicas in all three zones The application will be built using the software/application resiliency patterns in all layers Test resiliency The deployment pipeline will have a full test suite including performance load and failure injection testing We will practice our failure recovery procedures constantly through game days using runbooks to ensure that we can perform the tasks and not deviate from the procedures The team that builds the website also operates the website Plan for disaster recovery (DR) Runbooks exist for total workload recovery and common reporting Recovery uses backups stored in the same region as the workload Restore procedures are regularly exercised as part of game days Availability design goal We assume that at least some failures will require a manual decision to execute recovery however with greater automation in this scenario we assume that only two events per year will require this decision and the recovery actions will be rapid We take 10 minutes to decide to execute recovery and assume that recovery is completed within five minutes This implies 15 minutes to recover from failure Assuming two failures per year our estimated impact time for the year is 30 minutes This means that the upper limit on availability is 9999% The actual availability will also depend on the real rate of failure the duration of the failure and how quickly each failure actually recovers For this architecture we assume that the application is online continuously through updates Based on this our availability design goal is 9999% 55ArchivedReliability Pillar AWS WellArchitected Framework MultiRegion Scenarios Summary Topic Implementation Monitor resources Health checks at all layers and on KPIs; alerts sent when configured alarms are tripped; alerting on all failures Operational meetings are rigorous to detect trends and manage to design goals Adapt to changes in demand ELB for web and automatic scaling application tier; automatic scaling storage and read replicas in multiple zones for Aurora RDS Implement change Automated deploy via canary or blue/green and automated rollback when KPIs or alerts indicate undetected problems in application Deployments are made by isolation zone Back up data 
Automated backups via RDS to meet RPO and automated restoration that is practiced regularly in a game day Architect for resiliency Implemented fault isolation zones for the application; auto scaling to provide selfhealing web and application tier; RDS is MultiAZ Test resiliency Component and isolation zone fault testing is in pipeline and practiced with operational staff regularly in a game day; playbooks exist for diagnosing unknown problems; and a Root Cause Analysis process exists Plan for disaster recovery (DR) Encrypted backups via RDS to same AWS Region that is practiced in a game day MultiRegion Scenarios Implementing our application in multiple AWS Regions will increase the cost of operation partly because we isolate regions to maintain their autonomy It should be a very thoughtful decision to pursue this path That said regions provide a strong isolation boundary and we take great pains to avoid correlated failures across regions Using multiple regions will give you greater control over your recovery time in the event of a hard dependency failure on a regional AWS service In this section we’ll discuss various implementation patterns and their typical availability Topics •3½ 9s (9995%) with a Recovery Time between 5 and 30 Minutes (p 56) •5 9s (99999%) or Higher Scenario with a Recovery Time under 1 minute (p 59) 3½ 9s (9995%) with a Recovery Time between 5 and 30 Minutes This availability goal for applications requires extremely short downtime and very little data loss during specific times Applications with this availability goal include applications in the areas of: banking 56ArchivedReliability Pillar AWS WellArchitected Framework 3½ 9s (9995%) with a Recovery Time between 5 and 30 Minutes investing emergency services and data capture These applications have very short recovery times and recovery points We can improve recovery time further by using a Warm Standby approach across two AWS Regions We will deploy the entire workload to both Regions with our passive site scaled down and all data kept eventually consistent Both deployments will be statically stable within their respective regions The applications should be built using the distributed system resiliency patterns We’ll need to create a lightweight routing component that monitors for workload health and can be configured to route traffic to the passive region if necessary Monitor resources There will be alerting on every replacement of a web server when the database fails over and when the Region fails over We will also monitor the static content on Amazon S3 for availability and alert if it becomes unavailable Logging will be aggregated for ease of management and to help in root cause analysis in each Region The routing component monitors both our application health and any regional hard dependencies we have Adapt to changes in demand Same as the 4 9s scenario Implement change Delivery of new software is on a fixed schedule of every two to four weeks Software updates will be automated using canary or blue/green deployment patterns Runbooks exist for when Region failover occurs for common customer issues that occur during those events and for common reporting We will have playbooks for common database problems securityrelated incidents failed deployments unexpected customer issues on Region failover and establishing root cause of problems After the root cause has been identified the correction of error will be identified by a combination of the operations and development teams and deployed when the fix is developed We 
will also engage with AWS Support for Infrastructure Event Management Back up data Like the 4 9s scenario we automatic RDS backups and use S3 versioning Data is automatically and asynchronously replicated from the Aurora RDS cluster in the active region to crossregion read replicas in the passive region S3 crossregion replication is used to automatically and asynchronously move data from the active to the passive region Architect for resiliency Same as the 4 9s scenario plus regional failover is possible This is managed manually During failover we will route requests to a static website using DNS failover until recovery in the second Region Test resiliency Same as the 4 9s scenario plus we will validate the architecture through game days using runbooks Also RCA correction is prioritized above feature releases for immediate implementation and deployment 57ArchivedReliability Pillar AWS WellArchitected Framework 3½ 9s (9995%) with a Recovery Time between 5 and 30 Minutes Plan for disaster recovery (DR) Regional failover is manually managed All data is asynchronously replicated Infrastructure in the warm standby is scaled out This can be automated using a workflow executed on AWS Step Functions AWS Systems Manager (SSM) can also help with this automation as you can create SSM documents that update Auto Scaling groups and resize instances Availability design goal We assume that at least some failures will require a manual decision to execute recovery however with good automation in this scenario we assume that only two events per year will require this decision We take 20 minutes to decide to execute recovery and assume that recovery is completed within 10 minutes This implies that it takes 30 minutes to recover from failure Assuming two failures per year our estimated impact time for the year is 60 minutes This means that the upper limit on availability is 9995% The actual availability will also depend on the real rate of failure the duration of the failure and how quickly each failure actually recovers For this architecture we assume that the application is online continuously through updates Based on this our availability design goal is 9995% Summary Topic Implementation Monitor resources Health checks at all layers including DNS health at AWS Region level and on KPIs; alerts sent when configured alarms are tripped; alerting on all failures Operational meetings are rigorous to detect trends and manage to design goals Adapt to changes in demand ELB for web and automatic scaling application tier; automatic scaling storage and read replicas in multiple zones in the active and passive regions for Aurora RDS Data and infrastructure synchronized between AWS Regions for static stability Implement change Automated deploy via canary or blue/green and automated rollback when KPIs or alerts indicate undetected problems in application deployments are made to one isolation zone in one AWS Region at a time Back up data Automated backups in each AWS Region via RDS to meet RPO and automated restoration that is practiced regularly in a game day Aurora RDS and S3 data is automatically and asynchronously replicated from active to passive region Architect for resiliency Automatic scaling to provide selfhealing web and application tier; RDS is MultiAZ; regional failover is managed manually with static site presented while failing over Test resiliency Component and isolation zone fault testing is in pipeline and practiced with operational staff regularly in a game day; playbooks exist for 58ArchivedReliability 
Pillar AWS WellArchitected Framework 5 9s (99999%) or Higher Scenario with a Recovery Time under 1 minute Topic Implementation diagnosing unknown problems; and a Root Cause Analysis process exists with communication paths for what the problem was and how it was corrected or prevented RCA correction is prioritized above feature releases for immediate implementation and deployment Plan for disaster recovery (DR) Warm Standby deployed in another region Infrastructure is scaled out using workflows executed using AWS Step Functions or AWS Systems Manager Documents Encrypted backups via RDS Crossregion read replicas between two AWS Regions Crossregion replication of static assets in Amazon S3 Restoration is to the current active AWS Region is practiced in a game day and is coordinated with AWS 5 9s (99999%) or Higher Scenario with a Recovery Time under 1 minute This availability goal for applications requires almost no downtime or data loss for specific times Applications that could have this availability goal include for example certain banking investing finance government and critical business applications that are the core business of an extremely largerevenue generating business The desire is to have strongly consistent data stores and complete redundancy at all layers We have selected a SQLbased data store However in some scenarios we will find it difficult to achieve a very small RPO If you can partition your data it’s possible to have no data loss This might require you to add application logic and latency to ensure that you have consistent data between geographic locations as well as the capability to move or copy data between partitions Performing this partitioning might be easier if you use a NoSQL database We can improve availability further by using an ActiveActive approach across multiple AWS Regions The workload will be deployed in all desired Regions that are statically stable across regions (so the remaining regions can handle load with the loss of one region) A routing layer directs traffic to geographic locations that are healthy and automatically changes the destination when a location is unhealthy as well as temporarily stopping the data replication layers Amazon Route 53 offers 10second interval health checks and also offers TTL on your record sets as low as one second Monitor resources Same as the 3½ 9s scenario plus alerting when a Region is detected as unhealthy and traffic is routed away from it Adapt to changes in demand Same as the 3½ 9s scenario Implement change The deployment pipeline will have a full test suite including performance load and failure injection testing We will deploy updates using canary or blue/green deployments to one isolation zone at a time in one Region before starting at the other During the deployment the old versions will still be kept 59ArchivedReliability Pillar AWS WellArchitected Framework 5 9s (99999%) or Higher Scenario with a Recovery Time under 1 minute running on instances to facilitate a faster rollback These are fully automated including a rollback if KPIs indicate a problem Monitoring will include success metrics as well as alerting when problems occur Runbooks will exist for rigorous reporting requirements and performance tracking If successful operations are trending towards failure to meet performance or availability goals a playbook will be used to establish what is causing the trend Playbooks will exist for undiscovered failure modes and security incidents Playbooks will also exist for establishing root cause of failures The 
team that builds the website also operates the website That team will identify the correction of error of any unexpected failure and prioritize the fix to be deployed after it’s implemented We will also engage with AWS Support for Infrastructure Event Management Back up data Same as the 3½ 9s scenario Architect for resiliency The applications should be built using the software/application resiliency patterns It’s possible that many other routing layers may be required to implement the needed availability The complexity of this additional implementation should not be underestimated The application will be implemented in deployment fault isolation zones and partitioned and deployed such that even a Region wideevent will not affect all customers Test resiliency We will validate the architecture through game days using runbooks to ensure that we can perform the tasks and not deviate from the procedures Plan for disaster recovery (DR) ActiveActive multiregion deployment with complete workload infrastructure and data in multiple regions Using a read local write global strategy one region is the primary database for all writes and data is replicated for reads to other regions If the primary DB region fails a new DB will need to be promoted Read local write global has users assigned to a home region where DB writes are handled This lets users read or write from any region but requires complex logic to manage potential data conflicts across writes in different regions When a region is detected as unhealthy the routing layer automatically routes traffic to the remaining healthy regions No manual intervention is required Data stores must be replicated between the Regions in a manner that can resolve potential conflicts Tools and automated processes will need to be created to copy or move data between the partitions for latency reasons and to balance requests or amounts of data in each partition Remediation of the data conflict resolution will also require additional operational runbooks Availability design goal We assume that heavy investments are made to automate all recovery and that recovery can be completed within one minute We assume no manually triggered recoveries but up to one automated recovery action per quarter This implies four minutes per year to recover We assume that the application is online continuously through updates Based on this our availability design goal is 99999% Summary 60ArchivedReliability Pillar AWS WellArchitected Framework Resources Topic Implementation Monitor resources Health checks at all layers including DNS health at AWS Region level and on KPIs; alerts sent when configured alarms are tripped; alerting on all failures Operational meetings are rigorous to detect trends and manage to design goals Adapt to changes in demand ELB for web and automatic scaling application tier; automatic scaling storage and read replicas in multiple zones in the active and passive regions for Aurora RDS Data and infrastructure synchronized between AWS Regions for static stability Implement change Automated deploy via canary or blue/green and automated rollback when KPIs or alerts indicate undetected problems in application deployments are made to one isolation zone in one AWS Region at a time Back up data Automated backups in each AWS Region via RDS to meet RPO and automated restoration that is practiced regularly in a game day Aurora RDS and S3 data is automatically and asynchronously replicated from active to passive region Architect for resiliency Implemented fault isolation zones for 
Summary

Monitor resources: Health checks at all layers, including DNS health at the AWS Region level, and on KPIs; alerts sent when configured alarms are tripped; alerting on all failures. Operational meetings are rigorous, to detect trends and manage to design goals.
Adapt to changes in demand: ELB for the web tier and automatic scaling for the application tier; automatic scaling of storage and read replicas in multiple zones in the active and passive Regions for Aurora RDS. Data and infrastructure are synchronized between AWS Regions for static stability.
Implement change: Automated deployment via canary or blue/green, with automated rollback when KPIs or alerts indicate undetected problems in the application; deployments are made to one isolation zone in one AWS Region at a time.
Back up data: Automated backups in each AWS Region via RDS to meet the RPO, and automated restoration that is practiced regularly in a game day. Aurora RDS and S3 data is automatically and asynchronously replicated from the active to the passive Region.
Architect for resiliency: Implemented fault isolation zones for the application; auto scaling to provide self-healing web and application tiers; RDS is Multi-AZ; regional failover is automated.
Test resiliency: Component and isolation zone fault testing is in the pipeline and practiced with operational staff regularly in a game day; playbooks exist for diagnosing unknown problems; and a root cause analysis (RCA) process exists, with communication paths for what the problem was and how it was corrected or prevented. RCA correction is prioritized above feature releases for immediate implementation and deployment.
Plan for disaster recovery (DR): Active-active deployment across at least two Regions. Infrastructure is fully scaled and statically stable across Regions. Data is partitioned and synchronized across Regions. Encrypted backups via RDS. Region failure is practiced in a game day and is coordinated with AWS. During restoration, a new database primary may need to be promoted.

Resources

Documentation
• The Amazon Builders' Library: How Amazon builds and operates software

Labs
• AWS Architecture Center Labs
• AWS Well-Architected Reliability Labs

External Links
• Adaptive Queuing Pattern: Fail at Scale
• Calculating Total System Availability

Books
• Robert S. Hanmer, "Patterns for Fault Tolerant Software"
• Andrew Tanenbaum and Marten van Steen, "Distributed Systems: Principles and Paradigms"

Conclusion

Whether you are new to the topics of availability and reliability, or a seasoned veteran seeking insights to maximize your mission-critical workload's availability, we hope this whitepaper has triggered your thinking, offered a new idea, or introduced a new line of questioning. We hope this leads to a deeper understanding of the right level of availability based on the needs of your business, and of how to design the reliability to achieve it. We encourage you to take advantage of the design, operational, and recovery-oriented recommendations offered here, as well as the knowledge and experience of our AWS Solutions Architects. We'd love to hear from you, especially about your success stories achieving high levels of availability on AWS. Contact your account team or use Contact Us on our website.

Contributors

Contributors to this document include:
• Seth Eliot, Principal Reliability Solutions Architect, Well-Architected, Amazon Web Services
• Adrian Hornsby, Principal Technical Evangelist, Architecture, Amazon Web Services
• Philip Fitzsimons, Sr. Manager, Well-Architected, Amazon Web Services
• Rodney Lester, Principal Solutions Architect, Well-Architected, Amazon Web Services
• Kevin Miller, Director, Software Development, Amazon Web Services
• Shannon Richards, Sr. Technical Program Manager, Amazon Web Services

Further Reading

For additional information, see:
• AWS Well-Architected Framework

Document Revisions

To be notified about updates to this whitepaper, subscribe to the RSS feed.
Whitepaper updated (December 7, 2020): Updated Appendix A with the availability design goals for Amazon SQS, Amazon SNS, and Amazon MQ; reordered rows in the table for easier lookup; improved the explanation of the differences between availability and disaster recovery and how they both contribute to resiliency; expanded coverage of multi-Region architectures (for availability) and multi-Region strategies (for disaster recovery); updated the referenced book to the latest version; expanded the availability calculations to include a request-based calculation and shortcut calculations; improved the description of game days.
Minor update (October 27, 2020): Updated Appendix A to update the availability design goal for AWS Lambda.
Minor update (July 24, 2020): Updated Appendix A to add the availability design goal for AWS Global Accelerator.
Updates for new Framework (July 8, 2020): Substantial updates and new/revised content, including: added a "Workload Architecture" best practices section; reorganized best practices into Change Management and Failure Management sections; updated Resources to include the latest AWS resources and services, such as AWS Global Accelerator, AWS Service Quotas, and AWS Transit Gateway; added/updated definitions for reliability, availability, and resiliency; better aligned the whitepaper with the AWS Well-Architected Tool (questions and best practices) used for Well-Architected reviews; reordered the design principles, moving "Automatically recover from failure" before "Test recovery procedures"; updated diagrams and formats for equations; removed the Key Services sections and instead integrated references to key AWS services into the best practices.
Minor update (October 1, 2019): Fixed a broken link.
Whitepaper updated (April 1, 2019): Appendix A updated.
Whitepaper updated (September 1, 2018): Added specific AWS Direct Connect networking recommendations and additional service design goals.
Whitepaper updated (June 1, 2018): Added Design Principles and Limit Management sections; updated links; removed ambiguity of upstream/downstream terminology; added explicit references to the remaining Reliability Pillar topics in the availability scenarios.
Whitepaper updated (March 1, 2018): Changed the DynamoDB cross-Region solution to DynamoDB Global Tables; added service design goals.
Minor updates (December 1, 2017): Minor correction to the availability calculation to include application availability.
Whitepaper updated (November 1, 2017): Updated to provide guidance on high availability designs, including concepts, best practices, and example implementations.
Initial publication (November 1, 2016): Reliability Pillar AWS Well-Architected Framework published.

Appendix A: Designed-For Availability for Select AWS Services

Below we provide the availability that select AWS services were designed to achieve. These values do not represent a Service Level Agreement or guarantee; rather, they provide insight into the design goals of each service. In certain cases, we differentiate portions of a service where there is a meaningful difference in the availability design goal. This list is not comprehensive for all AWS services, and we expect to periodically update it with information about additional services. Amazon CloudFront, Amazon Route 53, AWS Global Accelerator, and the AWS Identity and Access Management control plane are global services, and their component availability goals are stated accordingly. Other services provide service within an AWS Region, and their availability goals are stated accordingly. Many services operate within an Availability Zone, separate from those in other Availability Zones; in these cases, we provide the availability design goal for a single AZ and for when any two (or more) Availability Zones are used. Note: the numbers in the following table do not refer to durability (long-term retention of data); they are availability numbers (access to data or functions).
Availability design goals by service and component:

Amazon API Gateway: Control Plane 99.95%; Data Plane 99.99%
Amazon Aurora: Control Plane 99.95%; Single-AZ Data Plane 99.95%; Multi-AZ Data Plane 99.99%
Amazon CloudFront: Control Plane 99.9%; Data Plane (content delivery) 99.99%
Amazon CloudSearch: Control Plane 99.95%; Data Plane 99.95%
Amazon CloudWatch: CW Metrics (service) 99.99%; CW Events (service) 99.99%; CW Logs (service) 99.95%
Amazon DynamoDB: Service (standard) 99.99%; Service (Global Tables) 99.999%
Amazon Elastic Block Store: Control Plane 99.95%; Data Plane (volume availability) 99.999%
Amazon Elastic Compute Cloud (Amazon EC2): Control Plane 99.95%; Single-AZ Data Plane 99.95%; Multi-AZ Data Plane 99.99%
Amazon Elastic Container Service (Amazon ECS): Control Plane 99.9%; EC2 Container Registry 99.99%; EC2 Container Service 99.99%
Amazon Elastic File System: Control Plane 99.95%; Data Plane 99.99%
Amazon ElastiCache: Service 99.99%
Amazon Elasticsearch Service: Control Plane 99.95%; Data Plane 99.95%
Amazon EMR: Control Plane 99.95%
Amazon Kinesis Data Firehose: Service 99.9%
Amazon Kinesis Data Streams: Service 99.99%
Amazon Kinesis Video Streams: Service 99.9%
Amazon MQ: Data Plane 99.95%; Control Plane 99.95%
Amazon Neptune: Service 99.9%
Amazon Redshift: Control Plane 99.95%; Data Plane 99.95%
Amazon Rekognition: Service 99.98%
Amazon Relational Database Service (Amazon RDS): Control Plane 99.95%; Single-AZ Data Plane 99.95%; Multi-AZ Data Plane 99.99%
Amazon Route 53: Control Plane 99.95%; Data Plane (query resolution) 100.00%
Amazon SageMaker: Data Plane (Model Hosting) 99.99%; Control Plane 99.95%
Amazon Simple Notification Service (Amazon SNS): Data Plane 99.99%; Control Plane 99.9%
Amazon Simple Queue Service (Amazon SQS): Data Plane 99.98%; Control Plane 99.9%
Amazon Simple Storage Service (Amazon S3): Service (Standard) 99.99%
Amazon S3 Glacier: Service 99.9%
AWS Auto Scaling: Control Plane 99.9%; Data Plane 99.99%
AWS Batch: Control Plane 99.9%; Data Plane 99.95%
AWS CloudFormation: Service 99.95%
AWS CloudHSM: Control Plane 99.9%; Single-AZ Data Plane 99.9%; Multi-AZ Data Plane 99.99%
AWS CloudTrail: Control Plane (config) 99.9%; Data Plane (data events) 99.99%; Data Plane (management events) 99.999%
AWS Config: Service 99.95%
AWS Data Pipeline: Service 99.99%
AWS Database Migration Service (AWS DMS): Control Plane 99.9%; Data Plane 99.95%
AWS Direct Connect: Control Plane 99.9%
AWS Global Accelerator: Control Plane 99.9%; Data Plane 99.995%
AWS Glue: Service 99.99%
AWS Identity and Access Management: Control Plane 99.9%; Data Plane (authentication) 99.995%
AWS IoT Core: Service 99.9%
AWS IoT Device Management: Service 99.9%
AWS IoT Greengrass: Service 99.9%
AWS Key Management Service (AWS KMS): Control Plane 99.99%; Data Plane 99.995%
Single Location Data Plane 99.9%; Multi Location Data Plane 99.99%
AWS Lambda: Function Invocation 99.99%
AWS Secrets Manager: Service 99.9%
AWS Shield: Control Plane 99.5%; Data Plane (detection) 99.0%; Data Plane (mitigation) 99.9%
AWS Storage Gateway: Control Plane 99.95%; Data Plane 99.95%
AWS X-Ray: Control Plane (console) 99.9%; Data Plane 99.95%
Elastic Load Balancing: Control Plane 99.95%; Data Plane 99.99%
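As a worked illustration of how these goals compose (this sketch is not from the whitepaper, and the chosen dependencies are only an assumed example), multiplying the design goals of hard dependencies that are invoked in series gives a theoretical ceiling for a workload's availability:

```python
# Illustrative composite availability for a request path with hard (serial) dependencies.
# Values are design goals taken from the table above; real workloads differ.
dependencies = {
    "Elastic Load Balancing (Data Plane)": 0.9999,
    "Amazon EC2 (Multi-AZ Data Plane)": 0.9999,
    "Amazon Aurora (Multi-AZ Data Plane)": 0.9999,
}

composite = 1.0
for component, availability in dependencies.items():
    composite *= availability

print(f"Theoretical availability ceiling: {composite:.4%}")  # roughly 99.97%
```

Redundant components operating in parallel improve on this serial estimate, which is why the availability scenarios above lean on Multi-AZ and multi-Region redundancy.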
|
General
|
consultant
|
Best Practices
|
AWS_WellArchitected_Framework__Security_Pillar
|
ArchivedSecurity Pillar AWS Well Architected Framework July 2020 This paper has been archived The latest version is now available at: https://docsawsamazoncom/wellarchitected/latest/securitypillar/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Security 2 Design Principles 2 Definition 3 Operating Your Workload Securely 3 AWS Account Management and Separation 5 Identity and Access Management 7 Identity Management 7 Permissions Management 11 Detection 15 Configure 15 Investigate 18 Infrastructure Protect ion 19 Protecting Networks 20 Protecting Compute 23 Data Protection 27 Data Classification 27 Protecting Dat a at Rest 29 Protecting Data in Transit 32 Incident Response 34 Design Goals of Cloud Response 34 Educate 35 Prepare 36 Simulate 38 Iterate 39 Conclusion 40 ArchivedContributors 40 Further Reading 41 Document Revisions 41 ArchivedAbstract The focus of this paper is the security pillar of the WellArchitected Framework It provides guidance to help you apply best practices current recommendations in the design delivery and maintenance of secure AWS workloads ArchivedAmazon Web Services Security Pillar 1 Introduction The AWS Well Architected Framework helps you understand trade offs for decisions you make while building workloads on AWS By using the Framework you will learn current architectural best practices for designing and operating reliable secure efficient and cost effective workloads in the cloud It provides a way fo r you to consistently measure your workload against best practices and identify areas for improvement We believe that having well architected workload s greatly increases the likelihood of business success The framework is based on five pillars: • Operation al Excellence • Security • Reliability • Performance Efficiency • Cost Optimization This paper focuses on the security pillar This will help you meet your business and regulatory requirements by following current AWS recommendations It’s intended for those in technology roles such as chief technology officers (CTOs) chief information security officers (CSOs/CISOs) architects developers and operations team members After reading this paper you will understand AWS current recommendations and strategies to use when designing cloud architectures with security in mind This paper doesn ’t provide implementation details or architectural patterns but does include references to appropriate resources for this information By adopting the practices in this paper you can build architectures that protect your data and systems control access and respond automatically to security events ArchivedAmazon Web Services Security P illar 2 Security The security pillar describes how to take advantage of cloud technologies to protect data systems and assets in a way that can 
improve your security posture This paper provides in depth best practice guidance for architecting secure workloads on AWS Design Principles In the cloud there are a number of principles that can help you strengthen your workload security: • Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources Centralize identity management and aim to eliminate reliance on long term static credentials • Enable traceability: Monitor alert and audit act ions and changes to your environment in real time Integrate log and metric collection with systems to automatically investigate and take action • Apply security at all layers: Apply a defense in depth approach with multiple security controls Apply to all layers (for example edge of network VPC load balancing every instance and compute service operating system application and code) • Automate security best practices: Automated software based security mechanisms improve your ability to securely scale more rapidly and cost effectively Create secure architectures including the implementation of controls that are defined and managed as code in version controlled templates • Protect data in transit and at rest : Classify your data into sensitivity le vels and use mechanisms such as encryption tokenization and access control where appropriate • Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data This reduces the risk of mishandling or modification and human error when handling sensitive data • Prepare for security events: Prepare for an incident by having incident management and investigation policy and processes that align to your organizational requirements Run incident response simulations and use tools with automation to increase your speed for detection investigatio n and recovery ArchivedAmazon Web Services Security Pillar 3 Definition Security in the cloud is composed of five areas: 1 Identity and access management 2 Detection 3 Infrastructure protection 4 Data protection 5 Incident response Security and Compliance is a shared responsibility between AWS and you the customer This shared model can help re duce your operational burden You should carefully examine the services you choose as your responsibilities vary depending on the services used the integration of those services into your IT environment and applicable laws and regulations The nature of this shared responsibility also provides the flexibility and control that permits the deployment Operating Your Workload Securely To operate your workload securely you must apply overarching best practices to every area of security Take requirements and processes that you have defined in operational excellence at an organizational and workload level and apply them to all areas Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives Automating security processes testing and validation allow you to scale your security operations Identify and prioritize risks using a threat model: Use a threat model to identify and maintain an up todate register of potential threats Prioritize your threats and adapt your security controls to prevent detect and respond Revisit and maintain this in the context of the evolving security landscape Identify and validate control objectives: Based on yo ur compliance requirements and risks 
identified from your threat model, derive and validate the control objectives and controls that you need to apply to your workload. Ongoing validation of control objectives and controls helps you measure the effectiveness of risk mitigation.

Keep up to date with security threats: Recognize attack vectors by staying up to date with the latest security threats, to help you define and implement appropriate controls.

Keep up to date with security recommendations: Stay up to date with both AWS and industry security recommendations to evolve the security posture of your workload.

Evaluate and implement new security services and features regularly: Evaluate and implement security services and features from AWS and APN Partners that allow you to evolve the security posture of your workload.

Automate testing and validation of security controls in pipelines: Establish secure baselines and templates for security mechanisms that are tested and validated as part of your build pipelines and processes. Use tools and automation to test and validate all security controls continuously. For example, scan items such as machine images and infrastructure-as-code templates for security vulnerabilities, irregularities, and drift from an established baseline at each stage. Reducing the number of security misconfigurations introduced into a production environment is critical: the more quality control and reduction of defects you can perform in the build process, the better. Design continuous integration and continuous deployment (CI/CD) pipelines to test for security issues whenever possible. CI/CD pipelines offer the opportunity to enhance security at each stage of build and delivery. CI/CD security tooling must also be kept updated to mitigate evolving threats.

Resources

Refer to the following resources to learn more about operating your workload securely.

Videos
• Security Best Practices the Well-Architected Way
• Enable AWS adoption at scale with automation and governance
• AWS Security Hub: Manage Security Alerts & Automate Compliance
• Automate your security on AWS

Documentation
• Overview of Security Processes
• Security Bulletins
• Security Blog
• What's New with AWS
• AWS Security Audit Guidelines
• Set Up a CI/CD Pipeline on AWS

AWS Account Management and Separation

We recommend that you organize workloads in separate accounts and group accounts based on function, compliance requirements, or a common set of controls, rather than mirroring your organization's reporting structure. In AWS, accounts are a hard-boundary, zero-trust container for your resources. For example, account-level separation is strongly recommended for isolating production workloads from development and test workloads.

Separate workloads using accounts: Start with security and infrastructure in mind to enable your organization to set common guardrails as your workloads grow. This approach provides boundaries and controls between workloads. Account-level separation is strongly recommended for isolating production environments from development and test environments, or for providing a strong logical boundary between workloads that process data of different sensitivity levels, as defined by external compliance requirements (such as PCI DSS or HIPAA), and workloads that don't.

Secure AWS accounts: There are a number of aspects to securing your AWS accounts, including securing (and not using) the root user and keeping the contact information up to date. You can use AWS Organizations to centrally manage and govern your accounts as you grow and scale your workloads. AWS Organizations helps you manage accounts, set controls, and configure services across your accounts.

Manage accounts centrally: AWS Organizations automates AWS account creation and management, and control of those accounts after they are created. When you create an account through AWS Organizations, it is important to consider the email address you use, as this will be the root user that allows the password to be reset. Organizations allows you to group accounts into organizational units (OUs), which can represent different environments based on the workload's requirements and purpose.

Set controls centrally: Control what your AWS accounts can do by only allowing specific services, Regions, and service actions at the appropriate level. AWS Organizations allows you to use service control policies (SCPs) to apply permission guardrails at the organization, organizational unit, or account level, which apply to all AWS Identity and Access Management (IAM) users and roles. For example, you can apply an SCP that restricts users from launching resources in Regions that you have not explicitly allowed.
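As a minimal, illustrative sketch of such a Region guardrail (not an excerpt from the whitepaper; the Region list, policy name, and target OU ID are placeholder assumptions, and the global-service exemptions a production SCP would normally include are omitted), you might create and attach an SCP with boto3 like this:

```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny all actions in any Region other than the explicitly allowed ones.
# A production SCP would typically exempt global services (for example IAM and CloudFront).
region_guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["eu-west-1", "eu-central-1"]}
        },
    }],
}

policy = organizations.create_policy(
    Name="allowed-regions-guardrail",                 # placeholder name
    Description="Deny requests outside approved Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(region_guardrail),
)

# Attach the SCP to an organizational unit (placeholder OU ID).
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid111-exampleouid111",
)
```

Because SCPs apply to every IAM principal in the targeted accounts, a guardrail like this is enforced even for account administrators.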
AWS Control Tower offers a simplified way to set up and govern multiple accounts. It automates the setup of accounts in your AWS Organization, automates provisioning, applies guardrails (which include prevention and detection), and provides you with a dashboard for visibility.

Configure services and resources centrally: AWS Organizations helps you configure AWS services that apply to all of your accounts. For example, you can configure central logging of all actions performed across your organization using AWS CloudTrail, and prevent member accounts from disabling logging. You can also centrally aggregate data for rules that you've defined using AWS Config, enabling you to audit your workloads for compliance and react quickly to changes. AWS CloudFormation StackSets allow you to centrally manage AWS CloudFormation stacks across accounts and OUs in your organization. This allows you to automatically provision a new account to meet your security requirements.

Resources

Refer to the following resources to learn more about AWS recommendations for deploying and managing multiple AWS accounts.

Videos
• Managing and governing multi-account AWS environments using AWS Organizations
• AXA: Scaling adoption with a Global Landing Zone
• Using AWS Control Tower to Govern Multi-Account AWS Environments

Documentation
• Establishing your best practice AWS environment
• AWS Organizations
• AWS Control Tower
• Working with AWS CloudFormation StackSets
• How to use service control policies to set permission guardrails across accounts in your AWS Organization

Hands on
• Lab: AWS Account and Root User

Identity and Access Management

To use AWS services, you must grant your users and applications access to resources in your AWS accounts. As you run more workloads on AWS, you need robust identity management and permissions in place to ensure that the right people have access to the right resources under the right conditions. AWS offers a large selection of capabilities to help you manage your human and machine identities and their permissions. The best practices for these capabilities fall into two main areas:

• Identity management
• Permissions management

Identity Management

There are two types of identities you need to manage when operating secure AWS
workloads • Human Identities : The administrators developers operators and consumers of your applications require an identity to access your AWS environments and applications These can be members of your organization or external users with whom you collaborate and who interact with your AWS resources via a web browser client application mobile app or interactive command line tools • Machine Identities : Your workload applications operational tools and components require an identity to make requests to AWS services for example to read data These identities include machines running in your AWS environment such as Amazon EC2 instances or AWS Lambda functions You can also manage machine identities for external parties who need access Additionally you might also have machines outside of AWS that need access to your AWS environment Rely on a centralized identity provider: For workforce identities rely on an identity provider that enables you to manage identities in a centralized place This makes it easier to manage access across multiple applications and services because you are creat ing manag ing and revok ing access from a single location For example if someone leaves your organization you can revoke access for all applications and services (including AWS ) from one location This reduces the need for multiple credentials and provides an opportunity to integrate with existing human resources (HR) processes ArchivedAmazon Web Services Security Pillar 8 For federation with individual AWS accounts you can use centralized identities for AWS with a SAML 20 based provider with AWS IAM You can use any provider —whether hosted by you in AWS external to AWS or supplied by the AWS Partner Network (APN) —that is compatible with the SAML 20 protocol You can use federation between your AWS account and your chosen provider to grant a user or application access to call AWS API operations by using a SAML assertion to get temporary security credentials Web based single sign on is also supported allowing users to sign in to the AWS Management Console from your sign i n portal For federation to multiple accounts in your AWS Organization you can configure your identity source in AWS Single Sign On (AWS SSO) and specify where your users and groups are stored Once configured your identity provider is your source of truth and information can be synchronized using the System for Cross domain Identity Management (SC IM) v20 protocol You can then look up users or groups and grant them single sign on access to AWS accounts cloud applications or both AWS SSO integrates with AWS Organizations which enables you to configure your identity provider once and then grant access to existing and new accounts managed in your organization AWS SSO provides you with a default store which you can use to manage your users and groups If yo u choose to use the AWS SSO store create your users and groups and assign their level of access to your AWS accounts and applications keeping in mind the best practice of least privilege Alternatively you can choose to Connect to Your External Identity Provider using SAML 20 or Connec t to Your Microsoft AD Directory using AWS Directory Service Once configured you can sign into the AWS Management Console command line interface or the AWS mobile app by authenticating through your central identity provider For managing end users or consumers of your workloads such as a mobile app you can use Amazon Cognito It provides authentication authorization and user management for your web and mobile apps Your 
users can sign in directly with a user name and password or through a third party such as Amazon Apple Facebook or Google Leverage user groups and attributes: As the number of users you manage grows you will need to determine ways to organize them so that you can manage them at scale Place users with common security requirements in groups defined by your identity provider and put mechanisms in place to ensure that user attributes that may be used for access control ( for example department or location) are correct and updated Use these groups and attributes to control access rather than individual users This allows you to manage access centrally by changing a user’s group membership or ArchivedAmazon Web Services Security Pillar 9 attributes once with a permission set rather than updating many individual policies when a user’s access needs change You can use AWS SSO to manage user groups and attributes AWS SSO supports most commonly used attributes whether they are entered manually during user creation or automatically provi sioned using a synchronization engine such as defined in the System for Cross Domain Identity Management (SCIM) specification Use strong sign in mechanisms: Enforce minimum password length and educate your users to avoid common or reused passwords Enfo rce multi factor authentication (MFA) with software or hardware mechanisms to provide an additional layer of verification For example when using AWS SSO as the identity source configure the “context aware” or “always on” setting for MFA and allow users to enroll their own MFA devices to accelerate adoption When using an external identity provider (IdP) configure your IdP for MFA Use temporary credentials: Require ide ntities to dynamically acquire temporary credentials For workforce identities use AWS SSO or federation with IAM to access AWS accounts For machine ident ities such as EC2 instances or Lambda functions require the use of IAM roles instead of IAM users with long term access keys For human identities using the AWS Management Console require users to acquire temporary credentials and federate into AWS Yo u can do this using the AWS SSO user portal or configuring federation with IAM For users requ iring CLI access ensure that they use AWS CLI v2 which supports di rect integration with AWS Single Sign On (AWS SSO) Users can create CLI profiles that are linked to AWS SSO accounts and roles The CLI automatically retrieves AWS credentials from AWS SSO and refreshes them on your behalf This eliminates the need to cop y and paste temporary AWS credentials from the AWS SSO console For SDK users should rely on AWS STS to assume roles to receive temporary credentials In certain cases temporary credentials might not be practical You should be aware of the risks of stor ing access keys rotate these often and require MFA as a condition when possible For cases where you need to grant consumers access to your AWS resource s use Amazon Cognito identity pools and assign them a set of temporary limited privilege credentials to access your AWS resources The permissions for each us er are controlled through IAM roles that you create You can define rules to choose the role for each user based on claims in the user's ID token You can define a default role for authenticated users You can also define a separate IAM role with limited permissions for guest users who are not authenticated ArchivedAmazon Web Services Security Pillar 10 For machine identities you should rely on IAM roles to grant access to AWS For EC2 instances you can use 
roles for Amazon EC2 You can attach an IAM role to your EC2 instance to enable your applications running on Amazon EC2 to use temporary security credentials that AWS crea tes distributes and rotates automatically For accessing EC2 instances using keys or passwords AWS Systems Manager is a more secure way to access a nd manage your instances using a pre installed agent without the stored secret Additionally other AWS services such as AWS Lambda enable you to configure an IAM service role to grant the service permissions to perform AWS actions using temporary creden tials Audit and rotate credentials periodically: Periodic validation preferably through an automated tool is necessary to verify that the correct controls are enforced For human identities you should require users to change their passwords periodicall y and retire access keys in favor of temporary credentials We also recommend that you continuously monitor MFA settings in your identity provider You can set up AWS Config Rules to monitor these settings For machine identities you should rely on temporary credentials using IAM roles For situations where this is not possible frequent auditing and rotating access keys is necessary Store and use secrets secure ly: For credentials that are not IAM related such as database login s use a service that is designed to handle management of secrets such as AWS Secrets Manager AWS Secrets Manager makes it easy to manage rotat e and securely store encrypted secrets using supported services Calls to access the secrets are logged in CloudTrail for auditing purposes and IAM permissions can grant least privilege access to them Resources Refer to the following resources to learn more about AWS best practices for protecting your AWS credentials Videos • Mastering identity at every layer of the cake • Managing user permissions at scale with AWS SSO • Best Practices for Managing Retrieving & Rotating Secrets at Scale Documentation • The AWS Account Root User ArchivedAmazon Web Services Security Pillar 11 • AWS Account Root User Credentials vs IAM User Creden tials • IAM Best Practices • Setting an Account Password Pol icy for IAM Users • Getting Started with AWS Secrets Manager • Using Instance Profiles • Temporary Security Credentials • Identity Providers and Federation Permissions Management Manage permissions to control access to people and machine identities that require access to AWS and your workloads Permissions control who can access what and under what conditions Set permissions to specific human and machine identities to grant acces s to specific service actions on specific resources Additionally specify conditions that must be true for access to be granted For example you can allow developers to create new Lambda functions but only in a specific Region When managing your AWS en vironments at scale adhere to the following best practices to ensur e that identities only have the access they need and nothing more Define permission guardrails for your organization: As you grow and manage additional workloads in AWS you should separa te these workloads using accounts and manage those accounts using AWS Organizations We recommend that you establish common permission guardrails that restrict access to all identities in your organization For example you can restrict access to specific AWS Regions or prevent your team from deleting common resources such as an IAM role used by your central security team You can get started by implementing example service control policies such as preventing users from 
disabling key services You can use AWS Organizations to group accounts and set common controls on each group of accounts To set these common controls you can use services in tegrated with AWS Organizations Specifically you can use service control policies (SCPs) to r estrict access to group of accounts SCPs use the IAM policy language and enable you to establish controls that all IAM principals (users and roles) adhere to You can restrict access to specific service actions resources and based on specific condition t o meet the access control needs of your organization If necessary you can define exceptions ArchivedAmazon Web Services Security Pillar 12 to your guardrails For example you can restrict service actions for all IAM entities in the account except for a specific administrator role Grant least privil ege access: Establishing a principle of least privilege ensures that identities are only permitted to perform the most minimal set of functions necessary to fulfill a specific task while balancing usability and efficiency Operating on this principle limits unintended access and help s ensure that you can audit who has access to which resources In AWS identities have no permissions by default wi th the exception of the root user which should only be used for a few specific tasks You use policies to explicitly grant permissions attached to IAM or resou rce entities such as an IAM role used by federated identities or machines or resources ( for example S3 buckets) When you create and attach a policy you can specify the service actions resources and conditions that must be true for AWS to allow acces s AWS supports a variety of conditions to help you scope down access For example using the PrincipalOrgID condition key the identifier of the AWS Or ganizations is verified so access can be granted within your AWS Organization You can also control requests that AWS services make on your behalf like AWS CloudFormation creating an AWS Lambda function by using the CalledVia condition key This enables you to set granular permissions for your human and machine identities across AWS AWS also has capabilities that enable you to scale your permissions management and adhere to least privilege Permissions Boundaries : You can use permission boundaries to set the maximum permissions that an administrator can set This enables you to delegate the abili ty to create and manage permissions to developers such as the creation of an IAM role but limit the permissions they can grant so that they cannot escalate their privilege using what they have created Attribute based access control (ABAC) : AWS enables you to grant permissions based on attributes In AWS these are called tags Tags can be attached to IAM principals (users or roles) and to AWS resources Using IAM policies administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal For example as an administrator you can use a single IAM policy that grants developers in your organization access to AWS resources that match the develop ers’ project tags As the team of developers adds resources to projects permissions are automatically applied based on attributes As a result no policy update is required for each new resource ArchivedAmazon Web Services Security Pillar 13 Analyze public and cross account access: In AWS you can gr ant access to resources in another account You grant direct cross account access using policies attached to resources ( for example S3 bucket policies) or by allowing an identity to 
assume an IAM role in another account When using resource policies you want to ensure you grant access to identities in your organization and are intentional about when you make a resource public Making a resource public should be used sparingly as this action allows anyone to access the resource IAM Access Analyzer uses mathematical methods ( that is provable security ) to identity all access paths to a resource from outside of its account It reviews resource policies continuously and reports findings of public and cross account access to make it eas y for you to analyze potentially broad access Share resources securely: As you manage workloads using separate accounts there will be cases w here you need to share resources between those accounts We recommend that you share resources using AWS Resource Access Manager ( AWS RAM) This service enables you to easily and securely share AWS resources with in your AWS Organization and Organizational Units Using AWS RAM access to shared resources is automatically granted or revoked as accounts are moved in and out of the Organization or Organization Unit with which they are shared This helps you ensure that resources are only shared with the accounts that you intend Reduce permissions continuously: Sometime s when teams and projects are just getting started you might choose to grant broad access to inspire innovation and agility We recommend that you evaluate access continuously and restrict access to only the permissions required and achieve least privilege AWS provides access analysis capabilities to help you identify unuse d access To help you identify unused users and roles AWS analyzes access activity and provides access key and role last used information You can use the last accessed timestamp to identify unused users and roles and remove them Moreover you can review service and action last accessed information to identify and tighten permissions for specific users and roles For example you can use last accessed information to identify the specific S3 actions that your application role requires and restrict access to only those These feature are available in the console and programmatically to enable you to incorporate them into your infrastructure workflows and automated tools Establish e mergency access process: You should have a process that allows emergency access to your workload in particular your AWS accounts in the unlikely event of an automated process or pipeline issue This process could include a combination of different capabi lities for example an emergency AWS cross account ArchivedAmazon Web Services Security Pillar 14 role for access or a specific process for administrators to follow to validate and approve an emergency request Resources Refer to the following resources to learn more about current AWS best practices for finegrained authorization Videos • Become an IAM Policy Master in 60 Minutes or Less • Separation of Duties Least Privilege Delegation & CI/CD Documentation • Grant least privilege • Working with Policies • Delegating Permissions to Administer IAM Users Groups and Credentials • IAM Access Analyze r • Remove unnecessary credentials • Assuming a role in the CLI with MFA • Permissions Boundaries • Attribute based access control (ABAC) Hands on • Lab: IAM Permission Boundaries Delegating Role Creation • Lab: IAM Tag Based Access Control for EC2 • Lab: Lambda Cross Account IAM Role Assumption ArchivedAmazon Web Services Security Pillar 15 Detection Detection enables you to identify a potential security 
misconfiguration threat or unexpec ted behavior It’s an essential part of the security lifecycle and can be used to support a quality process a legal or compliance obligation and for threat identification and response efforts There are different types of detection mechanisms For exampl e logs from your workload can be analyzed for exploits that are being used You should regularly review the detection mechanisms related to your workload to ensure that you are meeting internal and external policies and requirements Automated alerting an d notifications should be based on defined conditions to enable your teams or tools to investigate These mechanisms are important reactive factors that can help your organization identify and understand the scope of anomalous activity In AWS there are a number of approaches you can use when addressing detective mechanisms The following sections describe how to use these approaches: • Configure • Investigate Configure Configure service and application logging : A foundational practice is to establish a set of detection mechanisms at the account level This base set of mechanisms is aimed at recording and detecting a wide range of actions on all resources in your account They allow you to build out a comprehensive detective capability with options that include automated remediation and partner integrations to add functionality In AWS services in this base set include: • AWS CloudTrail provides event history of your AWS account activity including actions taken through the AWS Management Console AWS SDKs command line tools and other AWS services • AWS Config monitors and records your AWS resource configurations and allows you to automate the evaluation and remedia tion against desired configurations • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads ArchivedAmazon Web Services Security Pillar 16 • AWS Security Hub provides a single place that aggregates organizes and prioritizes your security alerts or findings from multiple AWS services and optional third party products to give you a comprehens ive view of security alerts and compliance status Building on the foundation at the account level many core AWS services for example Amazon Virtual Private Cloud (VPC) provide service level logging features VPC Flow Logs enable you to capture information about the IP traffic going to and from network interfaces that can provide valuable in sight into connectivity history and trigger automated actions based on anomalous behavior For EC2 instances and application based logging that doesn’t originate from AWS services logs can be stored and analyzed using Amazon CloudWatch Logs An agent collects the logs from the operating system and the applications that are running and automatically stores them Once the logs are available in CloudWatch Logs you can process them in real time or dive into analysis using Insights Equally important to collecting and aggregating logs is the ability to extract meaningful insight from the great volumes of log and event data generated by complex architectures See the Monitoring section of The Reliability Pillar whitepaper for more detail Logs can themselves contain data that is considered sensitive –either when application data has erroneously found its way into log files that the CloudWatch Logs agent is capturing or when cross region logging is configured for log aggregation and there are legislative considerations about shipping 
certain kinds of information acr oss borders One approach is to use Lambda functions triggered on events when logs are delivered to filter and redact log data before forwarding into a central logging location such as an S3 bucket The unredacted logs can be retained in a local bucket until a “reasonable time” has passed (as determined by legislation and your legal team) at which point an S3 lifecycle rule can automatically delete them Logs can further be protected in Amazon S3 by using S3 Object Lock where you can store objects using a write once readmany (WORM) model Analyze logs findings and metrics centrally : Security operations teams rely on the collection of logs and the use of search tools to d iscover potential events of interest which might indicate unauthorized activity or unintentional change However simply analyzing collected data and manually processing information is insufficient to keep up with the volume of information flowing from co mplex architectures Analysis and reporting alone don’t facilitate the assignment of the right resources to work an event in a timely fashion ArchivedAmazon Web Services Security Pillar 17 A best practice for building a mature security operations team is to deeply integrate the flow of security events and findings into a notification and workflow system such as a ticketing system a bug/issue system or other security information and event management (SIEM) system This takes the workflow out of email and static reports and allows you to route escalate and manage events or findings Many organizations are also integrating security alerts into their chat/collaboration and developer producti vity platforms For organizations embarking on automation an API driven low latency ticketing system offers considerable flexibility when planning “what to automate first” This best practice applies not only to security events generated from log messag es depicting user activity or network events but also from changes detected in the infrastructure itself The ability to detect change determine whether a change was appropriate and then route that information to the correct remediation workflow is esse ntial in maintaining and validating a secure architecture in the context of changes where the nature of their undesirability is sufficiently subtle that their execution cannot currently be prevented with a combination of IAM and Organizations configuratio n GuardDuty and Security Hub provide aggregation deduplication and analysis mechanisms for log records that are also made available to you via other AWS services Specifically GuardDuty ingests aggregates and analyses information from the VPC DNS service and inf ormation which you can otherwise see via CloudTrail and VPC Flow Logs Security Hub can ingest aggregate and analyze output from GuardDuty AWS Config Amazon Inspector Macie AWS Firewall Manager and a significant number of thirdparty security produc ts available in the AWS Marketplace and if built accordingly your own code Both GuardDuty and Security Hub have a Master Member model that can aggregate findings and insights across multiple accounts and Security Hub is often used by customers who have an on premises SIEM as an AWS side log and alert preprocessor and aggregator from which they can then ingest Amazon EventBridge via a Lambda based processor and forwarder Resources Refer to the following resources to learn more about current AWS recomme ndations for capturing and analyzing logs Videos • Threat management in the cloud: Amazon GuardDuty & AWS Security Hub 
• Centrally Monitoring Resource Configuration & Co mpliance ArchivedAmazon Web Services Security Pillar 18 Documentation • Setting up Amazon GuardDuty • AWS Security Hu b • Getting started: Amazon CloudWatch Logs • Amazon EventBridge • Configuring Athena to analyze CloudTrail logs • Amazon CloudWatch • AWS Config • Creating a trail in CloudTrail • Centralize logging solution Hands on • Lab: Enable Security Hub • Lab: Automated Deployment of Detective Controls • Lab: Amazon GuardDuty hands on Investigate Implement actionab le security events : For each detective mechanism you have you should also have a process in the form of a runbook or playbook to investigate For example when you enable Amazon GuardDuty it generates different findings You shou ld have a runbook entry for each finding type for example if a trojan is discovered your runbook has simple instructions that instruct someone to investigate and remed iate Automate response to events : In AWS investigating events of interest and information on potentially unexpected changes into an automated workflow can be achieved using Amazon EventBridge This serv ice provides a scalable rules engine designed to broker both native AWS event formats (such as CloudTrail events) as well as custom events you can generate from your application Amazon EventBridge also allows you to route events to a workflow system for those building incident response systems (Step Functions) or to a central Security Account or to a bucket for further analysis ArchivedAmazon Web Services Security Pil lar 19 Detecting change and routing this information to the correct workflow can also be accomplished using AWS Config rules AWS Con fig detects changes to in scope services (though with higher latency than Amazon EventBridge) and generates events that can be parsed using AWS Config rules for rollback enforcement of compliance policy and forwarding of information to systems such as c hange management platforms and operational ticketing systems As well as writing your own Lambda functions to respond to AWS Config events you can also take advantage of the AWS Config Rules Developme nt Kit and a library of open source AWS Config Rules Resources Refer to the following resources to learn more about current AWS best practices for integrating auditing controls with notification and workflow Videos • Amazon Detective • Remediating Amazon GuardDuty and AWS Security Hub Findings • Best Practices for Managing Security Operations on AWS • Achieving Continuous Compliance using AWS Config Documentation • Amazon Detective • Amazon EventBridge • AWS Config Rules • AWS Config Rules Repository (open source) • AWS Config Rules Development Kit Hands on • Solution: RealTime Insights on AWS Account Activity • Solution: Centralized Logging Infrastructure Protection Infrastructure protection encompasses control methodologies such as defense in depth that are necessary to meet best practices and organizational or regulatory ArchivedAmazon Web Services Security Pillar 20 obligations Use of these methodologies is critical for successful ongoing operations in the clo ud Infrastructure protection is a key part of an information security program It ensures that systems and services within your workload are protected against unintended and unauthorized access and potential vulnerabilities For example you’ll define tr ust boundaries (for example network and account boundaries) system security configuration and maintenance (for example hardening minimization and patching) operating system authentication 
and authorizations (for example users keys and access levels ) and other appropriate policy enforcement points (for example web application firewalls and/or API gateways) In AWS there are a number of approaches to infrastructure protection The following sections describe how to use these approaches: • Protecting networks • Protecting compute Protecting Networks The careful planning and management of your network design forms the foundation of how you provide isolation and boundaries for resources within your workload Because many resources in your workload operate in a VPC and inherit the security properties it’s critical that the design is supported with inspection and protection mechanisms backed by automation Likewise for workloads that operate outside a VPC using purely edge services and/or serverless the b est practices apply in a more simplified approach Refer to the AWS Well Architected Serverless Applications Lens for specific guidance on serverless secur ity Create network layers: Components such as EC2 instances RDS database clusters and Lambda functions that share reachability requirements can be segmented into layers formed by subnets For example a n RDS database cluster in a VPC with no need for in ternet access should be placed in subnets with no route to or from the internet This layered approach for the control s mitigate s the impact of a single layer misconfiguration which could allow unintended access For AWS Lambda you can run your functions in your VPC to take advance of VPCbased controls For network connectivity that can include thousands of VPCs AWS accounts and on premises networks you should use AWS Transit Gateway It acts as a hub that controls how traffic is routed among all the connected networks which act like spokes Traffic ArchivedAmazon Web Services Security Pillar 21 between an Amazon VPC and AWS Transit Gateway remains on the AWS private network which reduces external threat vectors such as distributed denial of service (DDoS) attacks and common exploits such as SQL injection cross site scripting cross site request forgery or abuse of broken authentication code AWS Transit Gateway interregion peering also encrypts inter region traffic with no single point of failure or bandwidth bottleneck Control traffic a t all layers: When architecting your network topology you should examine the connectivity requirements of each component For example if a component requires internet accessib ility (inbound and outbound) connectivity to VPCs edge services and external data centers A VPC allows you to define your network topology that spans an AWS Region with a private IPv4 address range that you set or an IPv6 address range AWS selects You should a pply multiple controls with a defense in depth approach for both in bound and outbound traffic including the use of security groups (stateful inspection firewall) Network ACLs subnets and route tables Within a VPC you can create subnets in an Availability Zone Each subnet can have an associated route table that defin es routing rules for managing the paths that traffic takes within the subnet You can define an internet routable subnet by having a route that goes to an internet or NAT gateway attached to the VPC or through another VPC When an instance RDS database or other service is launched within a VPC it has its own security group per network interface This firewall is outside the operating system layer and can be used to define rules for allowed inbound and outbound traffic You can also define relationships between 
security groups For example instances within a database tier security group only accept traffic from instances within the application tier by reference to the security groups applied to the instances involved Unless you are using non TCP proto cols it should n’t be necessary to have an EC2 instance directly accessible by the internet (even with ports restricted by security groups) without a load balancer or CloudFront This helps protect it from unintended access through an operating system or application issue A subnet can also have a network ACL attached to it which acts as a stateless firewall You should configure the network ACL to narrow the scope of traffic allowed between layers note that you need to define both inbound and outbound rules While some AWS services require components to access the internet to make API calls (this being where AWS API endp oints are located ) others use endpoints within your VPCs Many AWS services including Amazon S3 and DynamoDB support VPC endpoints and this technology has been general ized in AWS PrivateLink For VPC ArchivedAmazon Web Services Security Pillar 22 assets that need to make outbound connections to the internet these can be made outbound only (one way) through an AWS managed NAT gateway outbound only internet gateway or web proxies that you create and manage Impleme nt inspection and protection: Inspect and filter your traffic at each layer For components transacting over HTTP based protocols a web application firewall can help protect from common attacks AWS WAF is a web a pplication firewall that lets you monitor and block HTTP(s) requests that match your configurable rules that are forwarded to an Amazon API Gateway API Amazon CloudFront or an Application Load Balancer To get started with AWS WAF you can use AWS Managed Rules in combination with your own or use existing partner integrations For managing AWS WAF AWS Shield Advanced protections and Amazon VPC security groups across AWS Organizations you can use AWS Firewall Manager It allows you to centrally configure and manage firewall rules across your accounts and applications mak ing it easier to scale enforcement of common rules It also enables you to rapidly respond to attacks using AWS Shield Advanced or solutions that can automatically block unwanted requests to your web applications Automate network protection: Automate protection mechanisms to provide a self defending network based on threat intelligence and anomaly detection For example intrusion detection and prevention tools that can adapt to current threats and reduce their impact A web application firewall is an example of where you can automate network protection for example by using the AWS WAF Security Automations solution (https://githubcom/awslabs/aws wafsecurity automations ) to automatically b lock requests originating from IP addresses associated with known threat actors Resources Refer to the following resources to learn more about AWS best practices for protecting networks Video • AWS Transit Gatew ay reference architectures for many VPCs • Application Acceleration and Protection with Amazon CloudFront AWS WAF and AWS Shield • DDoS Attack Detection at Scale ArchivedAmazon Web Services Security Pillar 23 Docume ntation • Amazon VPC Documentation • Getting started with AWS WAF • Network Access Control Lists • Security Groups for Your VPC • Recommended Network ACL Rules for Your VPC • AWS Firewall Manager • AWS PrivateLink • VPC Endpoints • Amazon Inspector Hands on • Lab: Automated Deployment of VPC • Lab: Automated 
Deployment of Web Application Firewall Protecting Compute Perform vulnerability management : Frequently scan and patch for vulnerabilities in your code dependencies and in your infrastructure to help protect against new threats Using a build and deployment pipeline you can automate many parts of vulnerability management : • Using thirdparty st atic code analysis tools to identify common security issues such as unchecked function input bounds as well as more recent CVEs You can use Amazon CodeGuru for languages supported • Using thirdparty depend ency checking tools to determine whether libraries your code links against are the latest versions are themselves free of CVEs and have licensing conditions that meet your software policy requirements ArchivedAmazon Web Services Security Pillar 24 • Using Amazon Inspector you can perform configurati on assessments against your instances for known common vulnerabilities and exposures (CVEs) assess against security benchmarks and fully automate the notification of defects Amazon Inspector runs on production instances or in a build pipeline and it notifies developers and engineers when findings are present You can access findings programmatically and direct your team to backlogs and bug tracking systems EC2 Image Builder can be used to maintain s erver images (AMIs) with automated patching AWS provided security policy enforcement and other customizations • When using containers implement ECR Image Scanning in your build pipeline and on a regular basis against your image repository to look for CVEs in your containers • While Amazon Inspector and other tools are effective at identifying configurations and any CVEs that are present other methods are required to test your workload at the application level Fuzzing is a well known method of finding bugs using automation to inject malformed data into input fields and other areas of your application A number o f these functions can be performed using AWS services products in the AWS Marketplace or open source tooling Reduce attack surface: Reduce your attack surface by hardening operating systems minimizing components libraries and externally consumable se rvices in use To reduce your attack surface you need a threat model to identify the entry points and potential threats that could be encountered A common practice in reducing attack surface is to start at reducing unused components whether they are operating system p ackages applications etc (for EC2 based workloads) or external software modules in your code (for all workloads) Many hardening and security configuration guides exist for common operating systems and server software for example from the Center for Internet Security that you can use as a starting point and iterate Enable people to perform actions at a distance: Removing the ability for interactive access reduces the risk of human error and the potential for manual configuration or management For example use a change management workflow to manage EC2 instances using tools such as AWS Systems Manager instead of allowing direct access or via a bastion host AWS Systems Manager can automate a variety of maint enance and deployment tasks using features including automation workflows documents (playbooks) and the run command AWS CloudFormation stacks build from pipelines ArchivedAmazon Web Services Security Pillar 25 and can automate your infrastructure deployment and management tasks without using the AWS Management Console or APIs directly Implement managed services: Implement services that manage 
resources such as Amazon RDS AWS Lambda and Amazon ECS to reduce your security maintenance tasks as part of the shared responsibility model For example Amazon RDS helps you set up operate and scale a relational database automates administration tasks such as hardware provisioning database setup patching and backups This means you have mo re free time to focus on securing your application in other ways described in the AWS Well Architected Framework AWS Lambda lets you run code without provisioning or managing servers so you only need to focus on the connectivity invocation and security at the code level –not the infrastructure or operating system Validate software integrity : Implement mechanisms (eg code signing) to validate that the software code and libraries used in the workload are from trusted sources and have not been tampered with For example you should verify the code signing certificate of binaries and scripts to confirm the author and ensure it has not been tampered with since created by the author Additionally a checksum of software that you download compared to that of the checksum from the provider can help ensure it has not been tampered with Automate compute protection: Automate your protective compute mechanisms including vulnerability management reduction in attack surface and management of resources The au tomation will help you invest time in securing other aspects of your workload and reduce the risk of human error Resources Refer to the following resources to learn more about AWS best practices for protecting compute Video • Security best practices for the Amazon EC2 instance metadata service • Securing Your Block Storage on AWS • Securing Serverless and Container Services • Running high security workloads on Amazon EKS • Architecting Security through Policy Guardrails in Amazon EKS ArchivedAmazon Web Services Security Pillar 26 Documentation • Security Overview of AWS Lambda • Security in Amazon EC2 • AWS Systems Manager • Amazon Inspector • Writing your own AWS Systems Manager documents • Replacing a Bastion Host with Amazon EC2 Systems Manager Hands on • Lab: Automated Deployment of EC2 Web Application ArchivedAmazon Web Services Security Pillar 27 Data Protection Before architecting any workload foundational practices that influence security should be in place For example data classification provides a way to categorize data based on levels of sensitivity and encryption protects data by way of render ing it unintelligible to unauthorized access These methods are important because they support objectives such as preventing mishandling or complying with regulatory obligations In AWS there are a number of different approaches you can use when addressin g data protection The following section describes how to use these approaches: • Data classification • Protecting data at rest • Protecting data in transit Data Classification Data classification provides a way to categorize organizational data based on critica lity and sensitivity in order to help you determine appropriate protecti on and retention controls Identify the data within your workload : You need to understand the type and classiciation of data your workload is processing the associated business proce sses data owner applicable legal and compliance requirements where it’s stored and the resulting controls that are needed to be enforced This may include classifications to indicate if the data is intended to be publicly available if the data is inte rnal use only such as customer personally identifiable 
information (PII) or if the data is for more restricted access such as intellectual property legally privileged or marked sensititve and more By carefully managing an appropriate data classificatio n system along with each workload’s level of protection requirements you can map the controls and level of access/protection appropriate for the data For example public content is available for anyone to access but important content is encrypted and s tored in a protected manner that requires authorized access to a key for decrypting the content Define data protection controls: By using resource tags separate AWS accounts per sensitivity (and potentially also per caveat / enclave / community of intere st) IAM policies Organizations SCPs AWS KMS and AWS CloudHSM you can define and implement your policies for data classification and protection with encryption For example if you have a project with S3 buckets that contain highly critical data or EC2 ArchivedAmazon Web Services Security Pillar 28 instances that process confidential data they can be tagged with a “Project=ABC” tag Only your immediate team knows what the project code means and it provides a way to use attribute based access control You can define levels of access to the AWS KMS encryption keys through key policies and grants to ensure that only appropriate services have access to the sensitive content through a secure mechanism If you are making authorization decisions based on tags you should make sure that the permissions on the tags are defined appropriately using tag policies in AWS Organizations Define data lifecycle management: Your defined lifecycle strategy should be based on sensitivity level as well as legal and organization requirements Aspects including the duration for which you retain data data destruction processes data access management data transformation and data sharing should be considered When choosing a data classification methodology balance usability versus access You should also accommodate the mu ltiple levels of access and nuances for implementing a secure but still usable approach for each level Always use a defense in depth approach and reduce human access to data and mechanisms for transforming deleting or copying data For example requir e users to strongly authenticate to an application and give the application rather than the users the requisite access permission to perform “action at a distance” In addition ensure that users come from a trusted network path and require access to th e decryption keys Use tools such as dashboards and automated reporting to give users information from the data rather than giving them direct access to the data Automate identification and classification: Automat ing the identification and classificatio n of data can help you implement the correct controls Using automation for this instead of direct access from a person reduce s the risk of human error and exposure You should evaluate using a tool such as Amazon Macie that uses machine learning to automatically discover classify and protect sensitive data in AWS Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved Resources Refer to the following resources to learn more about data classification Documentation • Data Classification Whitepaper • Tagging Your Amazon EC2 Resources ArchivedAmazon Web Services Security Pillar 29 • Amazon S3 Object Tagging 
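To make the tag-driven, attribute-based access control pattern described in the data classification guidance above more concrete, the following sketch (written in Python with boto3) labels an S3 bucket and a KMS key with a classification tag and creates an IAM policy that only permits use of keys carrying that tag. This is a minimal illustration under stated assumptions, not a prescribed implementation: the bucket name, tag key and value, and policy name are hypothetical placeholders you would replace with your own classification scheme.

```python
"""
Illustrative sketch: label a bucket and a KMS key with a data
classification tag, then create an IAM policy that only permits use of
keys carrying that tag (attribute-based access control). The bucket
name, tag key/value, and policy name are hypothetical placeholders.
"""
import json
import boto3

TAG_KEY = "DataClassification"   # assumed tag key
TAG_VALUE = "Confidential"       # assumed classification level

s3 = boto3.client("s3")
kms = boto3.client("kms")
iam = boto3.client("iam")

# 1. Tag the bucket that holds the sensitive data set.
s3.put_bucket_tagging(
    Bucket="example-sensitive-data-bucket",  # placeholder bucket name
    Tagging={"TagSet": [{"Key": TAG_KEY, "Value": TAG_VALUE}]},
)

# 2. Create a KMS key carrying the same classification tag.
key = kms.create_key(
    Description="Example key for confidential project data",
    Tags=[{"TagKey": TAG_KEY, "TagValue": TAG_VALUE}],
)
print("Created key:", key["KeyMetadata"]["KeyId"])

# 3. Attribute-based policy: decryption is allowed only when the key's
#    resource tag matches the expected classification value.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DecryptConfidentialKeysOnly",
        "Effect": "Allow",
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {f"aws:ResourceTag/{TAG_KEY}": TAG_VALUE}
        },
    }],
}
iam.create_policy(
    PolicyName="ConfidentialDataAccessExample",  # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)
```

Because authorization in this sketch hinges on tag values, pair it with tag policies in AWS Organizations (and restrictions on the tagging APIs themselves) so that only approved principals can create or change the classification tags.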
Protecting Data at Rest Data at rest represents any data that you persist in non volatile storage for any duration in your workload This includes block stor age object storage databases archives IoT devices and any other storage medium on which data is persisted Protecting your data at rest reduces the risk of unauthorized access when encryption and appropriate access controls are implemented Encryptio n and tokenization are two important but distinct data protection schemes Tokenization is a process that allows you to define a token to represent an otherwise sensitive piece of information (for example a token to represent a customer’s credit card numb er) A token must be meaningless on its own and must not be derived from the data it is tokenizing –therefore a cryptographic digest is not usable as a token By carefully planning your tokenization approach you can provide additional protection for your content and you can ensure that you meet your compliance requirements For example you can reduce the compliance scope of a credit card processing system if you leverage a token instead of a credit card number Encryption is a way of transforming content in a manner that makes it unreadable without a secret key necessary to decrypt the content back into plaintext Both tokenization and encryption can be used to secure and protect information as appropriate Further masking is a techni que that allows part of a piece of data to be redacted to a point where the remaining data is not considered sensitive For example PCIDSS allows the last four digits of a card number to be retained outside the compliance scope boundary for indexing Implement secure key management: By defining an encryption approach that includes the storage rotation and access control of keys you can help provide protection for your content against unauthorized users and against unnecessary exposure to authorized use rs AWS KMS helps you manage encryption keys and integrates with many AWS services This service provides durable secure and redundant storage for your master keys You can define your key aliases as well as key level policies The policies help you define key administrators as well as key users Additionally AWS CloudHSM is a cloud based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud It helps you meet corporate contractual and regulatory compliance requirements for data security by using FIPS 140 2 Level 3 validated HSMs ArchivedAmazon Web Services Security Pillar 30 Enforce encryption at rest: You should ensure that the only way to store data is by using encr yption AWS KMS integrates seamlessly with many AWS services to make it easier for you to encrypt all your data at rest For example in Amazon S3 you can set default encry ption on a bucket so that all new objects are automatically encrypted Additionally Amazon EC2 supports the enforcement of encryption by setting a default encryption option for an entire Region Enforce access control: Different controls including access (using least privilege ) backups (see Reliability whitepaper) isolation and versioning can all help protect your data at rest Access to your data should be audited using detective mechanisms covered earlier in this paper including CloudTrail and service level log such as S3 access logs You should inventory what data is publicly accessible and plan for how you can reduce the amount of d ata available over time Amazon S3 Glacier Vault Lock and S3 Object Lock are capabilities providing 
mandatory access control —once a vault policy is locked with the compliance option not even the root user can change it until the lock expires The mechanis m meets the Books and Records Management requirements of the SEC CFTC and FINRA For more details see this whitepaper Audit the use of encryption keys: Ensure that you understand and audit the use of encryption keys to validate that the access control mechanisms on the keys are appropriately implemented For example any AWS service using an AWS KMS key logs each use in A WS CloudTrail You can then query AWS CloudTrail by using a tool such as Amazon CloudWatch Insights to ensure that all uses of your keys are valid Use mechanisms to keep people away from data: Keep all users away from directly accessing sensitive data and systems under normal operational circumstances For example use a change management workflow to manage EC2 instances using tools instead of allowing direct access or a bastion host This can be achieved using AWS Systems Manager Automation which uses automation documents that contain steps you use to perform tasks These documents can be stored in source control be peer reviewed before running and tested thoroughly to minimize risk compared to shell access Business users could have a dashboard instead of direct access to a data store to run q ueries Where CI/CD pipelines are not used determine which controls and processes are required to adequately provide a normally disabled break glass access mechanism Automate data at rest protection: Use automated tools to validate and enforce data at rest controls continuously for example verify that there are only encrypted storage resources You can automate validation that all EBS volumes are encrypted using AWS Config Rules AWS Security Hub can also verify a number of different controls through ArchivedAmazon Web Services Security Pillar 31 automated check s against security standards Additionally your AWS Config Rules can automatically remediate noncompliant resources Resources Refer to the following resources to learn more about AWS best practices for protecting data at rest Video • How Encryption Works in AWS • Securing Your Block Storage on AWS • Achieving security goals with AWS CloudHSM • Best Practices for Implementing AWS Key Management Service • A Deep Dive into AWS Encryption Services Documentation • Protecting Amazon S3 Data Using Encryption • Amazon EBS Encryption • Encrypting Amazon RDS Resources • Protecting Data Using Encryption • How AWS services use AWS KMS • Amazon EBS Encryption • AWS Key Management Service • AWS CloudHSM • AWS KMS Cryptographic Details Whitepaper • Using Key Policies in AWS KMS • Using Bucket Policies and User Policies • AWS Crypto Tools ArchivedAmazon Web Services Security Pillar 32 Protecting Data in Transit Data in transit is any data that is sent from one system to another This includes communication between resources within your workload as well as communicati on between other services and your end users By providing the appropriate level of protection for your data in transit you protect the confidentiality and integrity of your workload’s data Implement secure key and certificate management: Store encrypti on keys and certificates securely and rotate them at appropriate time intervals with strict access control The best way to accomplish this is to use a managed service such as AWS Certificate Manage r (ACM) It lets you easily provision manage and deploy public and private Transport Layer Security (TLS) certificates for use with AWS 
services and your internal connected resources TLS certificates are used to secure network communications and esta blish the identity of websites over the internet as well as resources on private networks ACM integrates with AWS resources such as Elastic Load Balancers Amazon CloudFront distributions and APIs on API Gateway also handl ing automatic certificate rene wals If you use ACM to deploy a private root CA both certificates and private keys can be provided by it for use in EC2 instances containers etc Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendation s to help you meet your organizational legal and compliance requirements AWS services provide HTTPS endpoints using TLS for communication thus providing encryption in transit when communicating with the AWS APIs Insecure protocols such as HTTP can be audited and blocked in a VPC through the use of security groups HTTP requests can also be automatically redirected to HTTPS in Amazon CloudFront or on an Application Load Balancer You have full control over your computing resources to implement encryption in transit across yo ur services Additionally you can use VPN connectivity into your VPC from an external network to facilitate encryption of traffic Third party solutions are available in the AWS Marketplace if you have special requirements Authenticate network communica tions: Using network protocols that support authentication allows for trust to be established between the parties This adds to the encryption used in the protocol to reduce the risk of communications being altered or intercepted Common protocols that imp lement authentication include Transport Layer Security (TLS) which is used in many AWS services and IPsec which is used in AWS Virtual Private Network (AWS VPN) ArchivedAmazon Web Services Security Pillar 33 Automate detection of unintended data access: Use tools such as Amazon GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level for example to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol In addition to Amazon GuardDuty Amazon VPC Flow Logs which capture network traffic information can be used with Amazon EventBridge to trigger detection of abnormal connecti ons–both successful and denied S3 Access Analyzer can help assess what data is accessible to who in your S3 buckets Resources Refer to the follow ing resources to learn more about AWS best practices for protecting data in transit Video • How can I add certificates for websites to the ELB using AWS Certificate Manager • Deep Dive on AWS Certificate Manager Private CA Documentation • AWS Certificate Manager • HTTPS Listeners for Your Application Load Balancer • AWS VPN • API Gateway Edge Optimized ArchivedAmazon Web Services Security Pillar 34 Incident Response Even with extremely mature preventive and detective controls your organization should still implement mechanisms to respond to and mitigate the potential impact of security incidents Your preparation strongly affects the ability of your teams to operate effectively during an incident to isolate and contain issues and to restore operations to a known good state Putting in place the tools and access ahead of a security incident then routinely practicing incident response through game days helps ensure that you can recover while minimizing business disruption Design Goals of Cloud Response Although the general processes and 
mechanisms of incident response such as those defined in the NIST SP 800 61 Computer Security Incident Handling Guide remain true we encourage you to evaluate these specific design goals that are relevant to responding to security incidents in a cloud environment: • Establish response objectives : Work with your stakeholders legal counsel and organizational leadership to determine the goal of responding to an incident Some common goals include containing and mitigating the issue recovering the affected resources preserving data for forensics and attribution • Document plans : Create plans to help you respond to communicate during and recover from an incident • Respond using the cloud : Implement your response patterns where the event and data occurs • Know what you have and what you need : Prese rve logs snapshots and other evidence by copying them to a centralized security cloud account Use tags metadata and mechanisms that enforce retention policies For example you might choose to use the Linux dd command or a Windows equivalent to make a complete copy of the data for investigative purposes • Use redeployment mechanisms : If a security anomaly can be attributed to a misconfiguration the remediation might be as simple as removing the variance by redeploying the resources with the proper con figuration When possible make your response mechanisms safe to execute more than once and in environments in an unknown state ArchivedAmazon Web Services Security Pillar 35 • Automate where possible : As you see issues or incidents repeat build mechanisms that programmatically triage and respond to c ommon situations Use human responses for unique new and sensitive incidents • Choose scalable solutions : Strive to match the scalability of your organization's approach to cloud computing and reduce the time between detection and response • Learn and i mprove your process : When you identify gaps in your process tools or people implement plans to fix them Simulations are safe methods to find gaps and improve processes In AWS there are a number of different approaches you can use when addressing incident response The following section describes how to use these approaches: • Educate your security operations and incident response staff about cloud technologies and how your organization intends to use them • Prepare your incident response team to detect and respond to incidents in the cloud enabl e detective capabilities and ensur e appropriate access to the necessary tools and cloud services Additionally prepare the necessary runbooks both manual and automated to ensure reliable and consistent respo nses Work with other teams to establish expected baseline operations and use that knowledge to identify deviations from those normal operations • Simulate both expected and unexpected security events within your cloud environment to understand the effect iveness of your preparation • Iterate on the outcome of your simulation to improve the scale of your response posture reduce time to value and further reduce risk Educate Automated processes enable organizations to spend more time focusing on measures to increase the security of their workloads Automated incident response also makes humans available to correlate events practice simulations devise new response procedures perform research develop new skills and test or build new tools Desp ite increased automation your team specialists and responders within a security organization still require continuous education Beyond general cloud experience you need to 
significantly invest in your people to be successful Your organization can ben efit by providing additional training to your staff to learn programming skills development processes (including version control systems ArchivedAmazon Web Services Security Pilla r 36 and deployment practices) and infrastructure automation The best way to learn is hands on through running incident response game days This allows for experts in your team to hone the tools and techniques while teaching others Prepare During an incident your incident response teams must have access to various tools and the workload resources involved in the incident Make sure that your teams have appropriate preprovisioned access to perform their duties before an event occurs All tools access and plans should be documented and tested before an event occurs to make sure that they can provide a timely response Identify key personnel and external resources: When you define your approach to incident response in the cloud in unison wi th other teams (such as your legal counsel leadership business stakeholders AWS Support Services and others) you must identify key personnel stakeholders and relevant contacts To reduce dependency and decrease response time make sure that your team specialist security teams and responders are educated about the services that you use and have opportunities to practice hands on We encourage you to identify external AWS security partners that can provide you with outside expertise and a different p erspective to augment your response capabilities Your trusted security partners can help you identify potential risks or threats that you might not be familiar with Develop incident management plans: Create plans to help you respond to communicate durin g and recover from an incident For example you can start at incident response plan with the most likely scenarios for your workload and organization Include how you would communicate and escalate both internally and externally Create incident response plans in the form of playbooks starting with the most likely scenarios for your workload and organization These might be events that are currently generated If you need a starting p lace you should look at AWS Trusted Advisor and Amazon GuardDuty findings Use a simple format such as markdown so it’s easily maintained but ensure that important commands or code snippets are included s o they can be executed without having to lookup other documentation Start simple and iterate Work closely with your security experts and partners to identify the tasks required to ensure that the processes are possible Define the manual descriptions of the processes you perform After this test the processes and iterate on the runbook pattern to improve the core logic of your response Determine what the exceptions are and what the alternative resolutions are for those scenarios For ArchivedAmazon Web Services Security Pillar 37 example in a deve lopment environment you might want to terminate a misconfigured Amazon EC2 instance But if the same event occurred in a production environment instead of terminating the instance you might stop the instance and verify with stakeholders that critical d ata will not be lost and that termination is acceptable Include how you would communicate and escalate both internally and externally When you are comfortable with the manual response to the process automate it to reduce the time to resolution Preprov ision access: Ensure that incident responders have the correct access pre provisioned into AWS and 
other relevant systems to reduce the time for investigation through to recovery Determining how to get access for the right people during an incident delays the time it takes to respond and can introduce other security weaknesses if access is shared or not properly provisioned while under pressure You must know what level of access your team members require (for example what kinds of actions they are likel y to take) and you must provision access in advance Access in the form of roles or users created specifically to respond to a security incident are often privileged in order to provide sufficient access Therefore use of these user accounts should be res tricted they should not be used for daily activities and usage alerted on Predeploy tools: Ensure that security personnel have the right tools pre deployed into AWS to reduce the time for investigation through to recovery To automate security engine ering and operations functions you can use a comprehensive set of APIs and tools from AWS You can fully automate identity management network security data protection and monitoring capabilities and deliver them using popular software development metho ds that you already have in place When you build security automation your system can monitor review and initiate a response rather than having people monitor your security position and manually react to events If your incident response teams continue to respond to alerts in the same way they risk alert fatigue Over time the team can become desensitized to alerts and can either make mistakes handling ordinary situations or miss unusual alerts Automation helps avoid alert fatigue by using functions that process the repetitive and ordinary alerts leaving humans to handle the sensitive and unique incidents You can improve manual processes by programmatically automating steps in the process After you define the remediation pattern to an event you c an decompose that pattern into actionable logic and write the code to perform that logic Responders can then execute that code to remediate the issue Over time you can automate more and more steps and ultimately automatically handle whole classes of c ommon incidents ArchivedAmazon Web Services Security Pillar 38 For tools that execute within the operating system of your EC2 instance you should evaluate using the AWS Systems Manager Run Command which enables you to remotely and securely administrate instances using an agent that you install on yo ur Amazon EC2 instance operating system It requires the AWS Systems Manager Agent (SSM Agent) which is installed by default on many Amazon Machine Images (AMIs) Be aware though that once an instance has been compromised no responses from tools or age nts running on it should be considered trustworthy Prepare forensic capabilities: Identify and prepare forensic investigation capabilities that are suitable including external specialists tools and automation Some of your incident response activities might include analyzing disk images file systems RAM dumps or other artifacts that are involved in an incident Build a customized forensic workstation that they can use to mount copies of any affected data volumes As forensic investigation techniques require specialist training you might need to engage external specialists Simulate Run game days: Game days also known as simulations or exercises are internal events that provide a structured opportunity to practice your incident management plans and procedures during a realistic scenario Game days are fundamentally about 
being prepared and iteratively improving your response capabilities Some of the reasons you might find value in performing game day activities include: • Validating readiness • Developing confidence – learning from simulations and training staff • Following compliance or contractual obligations • Generating artifacts for accreditation • Being agile – incremental improvement • Becoming faster and improving tools • Refining communication and escalation • Developing comfort with the rare and the unexpected For these reasons the value derived from participating in a SIRS activity increases an organization's effectiveness during stressful events Developing a SIRS act ivity that is both realistic and beneficial can be a difficult exercise Although testing your procedures or automation that handles well understood events has certain advantages it is just as ArchivedAmazon Web Services Security Pillar 39 valuable to participate in creative SIRS activities to test yo urself against the unexpected and continuously improve Iterate Automate containment and recovery capability: Automate containment and recovery of an incident to reduce response times and organizational impact Once you create and practice the processes an d tools from your playbooks you can deconstruct the logic into a code based solution which can be used as a tool by many responders to automate the response and remove variance or guess work by your responders This can speed up the lifecycle of a respon se The next goal is to enable this code to be fully automated by being invoked by the alerts or events themselves rather than by a human responder to create an event driven response With an event driven response system a detective mechanism triggers a responsive mechanism to automatically remediate the event You can use event driven response capabilities to reduce the time tovalue between detective mechanisms and responsive mechanisms To create this event driven architecture you can use AWS Lambda which is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you For example assume that you have an AWS account with the AWS CloudTrail service enabled If AWS CloudTrail is ever disabled (through the cloudtrail:StopLogging API call) you can use Amazon EventBridge to monitor for the specific cloudtrail:StopLogging event and invoke an AWS Lambda function to call cloudtrail:StartLogging to restart logging Resources Refer to the following resources to learn more about current AWS best practices for incident response Videos • Prepare for & respond to security incidents in your AWS environment • Automating Incident Response and Forensics • DIY guide to runbooks incident reports and incident response ArchivedAmazon Web Services Security Pillar 40 Documentation • AWS Incident Response Guide • AWS Step Functions • Amazon EventBridge • CloudEndure Disaster Recovery Hands on • Lab: Incident Response with AWS Console and CLI • Lab: Incident Response Playbook with Jupyter AWS IAM • Blog: Orchestrating a security incident response with AWS Step Functions Conclusion Security is an ongoing effort When incidents occur they should be treated as opportunities to improve the security of the architecture Having strong identity controls automating responses to security events protecting infrast ructure at multiple levels and managing well classified data with encryption provides defense in depth that every organization should implement This effort is easier thanks to the programmatic 
functions and AWS features and services discussed in this paper. AWS strives to help you build and operate architectures that protect information systems and assets while delivering business value.

Contributors

The following individuals and organizations contributed to this document:

• Ben Potter, Principal Security Lead, Well-Architected, Amazon Web Services
• Bill Shinn, Senior Principal, Office of the CISO, Amazon Web Services
• Brigid Johnson, Senior Software Development Manager, AWS Identity, Amazon Web Services
• Byron Pogson, Senior Solution Architect, Amazon Web Services
• Darran Boyd, Principal Security Solutions Architect, Financial Services, Amazon Web Services
• Dave Walker, Principal Specialist Solutions Architect, Security and Compliance, Amazon Web Services
• Paul Hawkins, Senior Security Strategist, Amazon Web Services
• Sam Elmalak, Senior Technology Leader, Amazon Web Services

Further Reading

For additional help, please consult the following source:

• AWS Well-Architected Framework Whitepaper

Document Revisions

July 2020: Updated guidance on account, identity, and permissions management.
April 2020: Updated to expand advice in every area; new best practices, services, and features.
July 2018: Updates to reflect new AWS services and features, and updated references.
May 2017: Updated the System Security Configuration and Maintenance section to reflect new AWS services and features.
November 2016: First publication.
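As a concrete illustration of the event-driven incident response pattern described in the Incident Response section of this pillar (re-enabling AWS CloudTrail if it is ever stopped), the following Python/boto3 sketch creates an EventBridge rule that matches the StopLogging API call and shows a Lambda handler that turns logging back on. The rule name, function ARN, and account ID are hypothetical placeholders, and the permission that allows EventBridge to invoke the function is intentionally omitted; treat this as a starting point rather than a complete solution.

```python
"""
Minimal sketch of an event-driven response: an EventBridge rule matches
the CloudTrail StopLogging API call and invokes a Lambda function that
re-enables logging. Rule name, function ARN, and account ID are
hypothetical; EventBridge-to-Lambda invoke permission is not shown.
"""
import json
import boto3

events = boto3.client("events")

# EventBridge rule matching StopLogging calls recorded by CloudTrail.
events.put_rule(
    Name="detect-cloudtrail-stoplogging",  # placeholder rule name
    EventPattern=json.dumps({
        "source": ["aws.cloudtrail"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["cloudtrail.amazonaws.com"],
            "eventName": ["StopLogging"],
        },
    }),
)
events.put_targets(
    Rule="detect-cloudtrail-stoplogging",
    Targets=[{
        "Id": "restart-trail-logging",
        # Placeholder function ARN; grant EventBridge invoke permission separately.
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:RestartTrailLogging",
    }],
)


def handler(event, context):
    """Lambda handler: re-enable the trail named in the StopLogging call."""
    trail_name = event.get("detail", {}).get("requestParameters", {}).get("name")
    if not trail_name:
        return {"status": "ignored", "reason": "no trail name in event"}

    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.start_logging(Name=trail_name)
    return {"status": "remediated", "trail": trail_name}
```

When the rule fires, the function restores logging within seconds; in practice you would also notify responders (for example, through Amazon SNS) and use service control policies to limit which principals can call StopLogging in the first place.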
ArchivedServerless Application Lens AWS Well Architected Framework December 2019 This paper has been archived The latest version is now available at: https://docsawsamazoncom/wellarchitected/latest/serverlessapplicationslens/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Definitions 1 Compute Layer 2 Data Layer 2 Messaging and Streaming Layer 3 User Management and Identity Layer 3 Edge Layer 4 Systems Monitoring and Deployment 4 Deployment Approaches 4 General Design Principles 7 Scenarios 8 RESTful Microservices 8 Alexa Skills 10 Mobile Backend 14 Stream Processing 18 Web Application 20 The Pillars of the Well Architected Framework 22 Operational Excellence Pillar 23 Security Pillar 33 Reliability Pillar 43 Performance Efficiency Pillar 51 Cost Optimization Pillar 62 Conclusion 72 Contributors 72 Further Reading 73 Document Revisions 73 Archived ArchivedAbstract This document describes the Serverless Applications Lens for the AWS Well Architected Framework The document covers common serverless applications scenarios and identif ies key elements to ensure that your workloads are architected according to best practices ArchivedAmazon Web Services Serverless Application Lens 1 Introduction The AWS Well Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS 1 By using the Framework you will learn architectural best practices for designing and operating reliable secure efficient and cost effective systems in the cl oud It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement We believe that having well architected systems greatly increases the likelihood of business success In this “Lens ” we foc us on how to design deploy and architect your serverless application workloads in the AWS Cloud For brevity we have only covered details from the WellArchitected Framework that are specific to serverless workloads You should still consider best pract ices and q uestions that have not been included in this document when designing your architecture We recommend that you read the AWS WellArchitected Framework whitepaper 2 This document is intended for those in technology roles such as chief technology officers (CTOs) architects developers and operations team members After reading this document you will understand AWS best practices and strategies to use when designing architectures for serverless applications Definitions The AWS Well Architected Framework is based on five pillars : operational excellence security reliability performance efficiency and cost optimization For serverless workloads AWS provides multiple core components (serverless and non serverless) that allow you to design robust architectures for your 
serverless applications In this section we will present an overview of the services that will be used throughout this document There are s even areas you should consider when building a serverless workload: • Compute layer • Data layer • Messaging and streaming layer • User management and identity layer • Edge layer ArchivedAmazon Web Services Serverless Application Lens 2 • Systems monitoring and deployment • Deployment approaches Compute Layer The compute layer of your workload manages requests from external systems controlling access and ensuring requests are appropriately authorized It contains the runtime environment that your business logic will be deployed and executed by AWS Lambd a lets you run stateless serverless application s on a manag ed platform that supports micro services architectures deployment and management of execution at the function layer With Amazon API Gateway you can run a fully managed REST API that integrates with Lambda to execute your business logic and includes traffic management authorization and access control monitoring and API versioning AWS Step Functions orchestrates serverless workflows including coordination state and function chaining as well as combining long running executions not supported within Lambda execution limits by breaking into multiple steps or by calling workers running on Amazon Elastic Compute Cloud (Amazon EC2) instances or on premises Data Layer The data layer of your workload m anages persistent storage from within a system It provides a secure mechanism to store the states that your business logic will need It provides a mechanism to trigger events in response to data changes Amazon DynamoDB helps you build serverless applications by providing a managed NoSQL database for persistent storage Combined with DynamoDB Streams you can respond in near real time to changes in your DynamoDB table by invoking Lambda functions DynamoDB Accelerator (DAX) adds a highly available in memory cache for DynamoDB that delivers up to 10x performance improvement from milliseconds to microseconds With Amazon Simple Storage Service (Amazon S3) you can build serverless web applications and websites by providing a highly available key value store from which static assets can be served via a Content Delivery Network (CDN) such as Amazon Cloud Front ArchivedAmazon Web Services Serverless Application Lens 3 Amazon Elasticsearch Service (Amazon ES) makes it easy to depl oy secure operate and scale Elasticsearch for log analytics full text search application monitoring and more Amazon ES is a fully managed service that provides both a search engine and analytics tools AWS AppSync is a managed GraphQL service with r ealtime and offline capabilities as well as enterprise grade security controls that make developing applications simple AWS AppSync provides a data driven API and consistent programming language for applications and devices to connect to services such a s DynamoDB Amazon ES and Amazon S3 Messaging and Streaming Layer The messaging layer of your workload manages communications between components The streaming layer manages real time analysis and processing of streaming data Amazon Simple Notification Service (Amazon SNS) provides a fully managed messaging service for pub/sub patterns using asynchronous event notifications and mobile push notifications for microservices distributed systems and serverless applications Amazon Kinesis makes it easy to c ollect process and analyze real time streaming data With Amazon Kinesis Data Analytics you can run 
standard SQL or build entire streaming applications using SQL Amazon Kinesis Data Firehose captures transforms and loads streaming data into Kinesis Data Analytics Amazon S3 Amazon Redshift and Amazon ES enabling near realtime analytics with existing business intelligence tools User Management and Identity Layer The user management and identity layer of your workload provides identity authenticat ion and authorization for both external and internal customers of your workload’s interfaces With Amazon Cognito you can easily add user sign up sign in and data synchronization to serverless applications Amazon Cognito user pools provide built in signin screens and federation with Facebook Google Amazon and Security Assertion Markup Language (SAML) Amazon Cognito Federated Identities lets you securely provide scoped access to AWS resources that are part of your serverless arc hitecture ArchivedAmazon Web Services Serverless Application Lens 4 Edge Layer The edge layer of your workload manages the presentation layer and connectivity to external customers It provides an efficient delivery method to external customers residing in distinct geographical locations Amazon CloudFront provi des a CDN that secur ely delivers web application content and data with low latency and high transfer speeds Systems Monitoring and Deployment The system monitoring layer of your workload manages system visibility through metrics and creates contextual awa reness of how it operates and behaves over time The deployment layer defines how your workload changes are promoted through a release management process With Amazon CloudWatch you can access system metrics on all the AWS services you use consolidate sy stem and application level logs and create business key performance indicators (KPIs) as custom metrics for your specific needs It provides dashboards and alerts that can trigger automated actions on the platform AWS X Ray lets you analyze and debug ser verless applications by providing distributed tracing and service maps to easily identify performance bottlenecks by visualizing a request end toend AWS Serverless Application Model (AWS SAM) is an extension of AWS CloudFormation that is used to package test and deploy serverless applications The AWS SAM CLI can also enable faster debugging cycles when developing Lambda functions locally Deployment Approaches A best practice for deployments in a microservice architecture is to ensure that a change doe s not break the service contract of the consumer If the API owner makes a change t hat breaks the service contract and the consumer is not prepared for it failures can occur Being aware of which consumers are using your APIs is the first step to ensure that deployments are safe Collecting metadata on consumers and their usage allows you to make data driven decisions about the impact of changes API Keys are an effective way ArchivedAmazon Web Services Serverless Application Lens 5 to capture metadata about the API consumer/clients and often used as a form of co ntact if a breaking change is made to an API Some customers who want to take a risk adverse approach to breaking changes may choose to clone the API and route customers to a different subdomain ( for example v2my servicecom) to ensure that existing consumers aren’t impacted While this approach enables new deployments with a new service contract the tradeoff is that the overhead of maintaining dual APIs (and subsequent backend infrastructure) require s additional overhead The table shows th e different approaches to 
deployment: Deployment Consumer Impact Rollback Event Model Factors Deployment Speed Allatonce All at once Redeploy older version Any event model at low concurrency rate Immediate Blue/Green All at once with some level of production environment testing beforehand Revert traffic to previous environment Better for async and sync event models at medium concurrency workloads Minutes to hours of validation and then immediate to customers Canar y/Linear 1–10% typical initial traffic shift then phased increases or all at once Revert 100% of traffic to previous deployment Better for high concurrency workloads Minutes to hours Allatonce Deployments Allatonce deployments involve making changes on top of the existing configuration An advantage to this style of deployment is that backend changes to data stores such as a relational database require a much smaller level of effort to reconcile transactions during the change cycle W hile this type of deployment style is low effort and can be made with little impact in low concurrency models it adds risk when it comes to rollback ArchivedAmazon Web Services Serverless Application Lens 6 and usually causes downtime An example scenario to use this deployment model is for development environme nts where the user impact is minimal Blue /Green Deployments Another traffic shifting pattern is enabling blue/green deployments This near zero downtime release enables traffic to shift to the new live environment (green) while still keeping the old prod uction environment (blue) warm in case a rollback is necessary Since API Gateway allows you to define what percentage of traffic is shifted to a particular environment; this style of deployment can be an effective technique Since blue/green deployments a re designed to reduce downtime many customers adopt this pattern for production changes Serverless architectures that follow the best practice of statelessness and idempotency are amenable to this deployment style because there is no affinity to the unde rlying infrastructure You should bias these deployments toward smaller incremental changes so that you can easily roll back to a working environment if necessary You need the right indicators in place to know if a rollback is required As a best practice we recommend customers using CloudWatch high resolution metrics which can monitor in 1 second intervals and quickly capture downward trends Used with CloudWatch alarms you can enable an expedited rollback to occur CloudWatch metrics can be captured on API Gateway Step Functions Lambda (including custom metrics) and DynamoDB Canary Deployments Canary deployments are an ever increasing way for you to leverage the new release of a software in a controlled environment and enabling rapid deployment cy cles Canary deployments involve deploying a small number of requests to the new change to analyze impact to a small number of your users Since you no longer need to worry about provisioning and scaling the underlying infrastructure of the new deployment the AWS Cloud ha s helped facilitate this adoption With Canary deployments in API Gateway you can deploy a change to your backend endpoint ( for example Lambda) while still maintaining the same API Gateway HTTP endpoint for consumers In addition you can also control what percentage of traffic is routed to new deployment and for a controlled traffic cutover A practical scenario for a canary deployment might be a new website You can monitor the clickthrough rates on a small number of end users before shifting all traffic to the new 
deployment ArchivedAmazon Web Services Serverless Application Lens 7 Lambda Version Control Like all software maintaining versioning enables the quick visibility of previously functioning code as well as the ability to revert back to a previous version if a new deployment is unsuccessful Lambda allows you to publish one or more immutable versions for individual Lambda functions; such that previous versions cannot be changed Each Lambda function version has a unique Amazon Resource Name (ARN) and new version changes are auditable as they are recorded in CloudTrail As a best practice in production customers should enable versioning to best leverage a reliable architecture To sim plify deployment operations and reduce the risk of error Lambda Aliases enable different variations of your Lambda function in your development workflow such as development beta and production An example of this is when an API Gateway integration with Lambda points to the ARN of a production alias The production alias will point to a Lambda version The value of this technique is that it enables a safe deployment when promoting a new version to the live environment because the Lambda Alias within the caller configuration remains static thus less changes to make General Design Principles The Well Architected Framework identifies a set of general design principles to facilitate good design in the cloud for serverless applications : • Speedy simple singular : Functions are concise short single purpose and their environment may live up to the ir request lifecycle Transactions are efficiently cost aware and thus faster executions are preferred • Think concurrent requests not total requests : Serverless applications take advantage of the concurrency model and tradeoffs at the design level are evaluated based on concurrency • Share nothing : Function runtime environment and underlying infrastructure are short lived therefore local resources such as temporary storage is not guaranteed State can be manipulated within a state machine execution lifecy cle and persistent storage is preferred for highly durable requirements • Assume no hardware affinity : Underlying infrastructure may change Leverage code or dependencies that are hardware agnostic as CPU flags for example may not be available consistent ly ArchivedAmazon Web Services Serverless Application Lens 8 • Orchestrate your application with state machines not functions : Chaining Lambda executions within the code to orchestrate the workflow of your application results in a monolithic and tightly coupled application Instead use a state machine to orchest rate transactions and communication flows • Use events to trigger transactions : Events such as writing a new Amazon S3 object or an update to a database allow for transaction execution in response to business functionalities This asynchronous event behavio r is often consumer agnostic and drives just intime processing to ensure lean service design • Design for failures and duplicates : Operations triggered from requests/events must be idempotent as failures can occur and a given request/event can be delivered more than once Include appropriate retries for downstream calls Scenarios In this section we cover the f ive key scenarios that are common in many serverless applications and how they influence the design and architecture of your serverless application workload s on AWS We will present the assumptions we made for each of these scenarios the common drivers for the design and a reference architecture of how these 
scenarios should be implemented RESTful Microservice s When building a microservice you’re thinking about how a business context can be delivered as a re usable service for your consumers The specific implementation will be tailored to individual use cases but there are several common themes across microservice s to ensure that your im plementation is secure resilient and constructed to give the best experience for your customers Building serverless microservices on AWS enables you to not only take advantage of the serverless capabilities themselves but also to use other AWS services and features as well as the ecosystem of AWS and AWS Partner Network (APN) tools Serverless technologies are built on top of fault tolerant infrastructure enabling you to build reliable services for your mission critical workloads The ecosystem of too ling enables you to streamline the build automate tasks orchestrate dependencies and monitor and govern your microservices Lastly AWS serverless tools are pay asyou go enabling you to grow the service with your business and keep your cost s down during entry phases and non peak times ArchivedAmazon Web Services Serverless Application Lens 9 Characteristics: • You want a secure easy tooperate framework that is simple to replicate and has high levels of resiliency and availability • You want to log utilization and access patterns to continually improve your backend to support customer usage • You are seeking to leverage managed services as much as possible for your platforms which reduces the heavy lifting associated with managing common platforms including security and scalability Reference Architecture Figure 1: Reference architecture for RESTful microservices 1 Customers leverage your microservices by making HTTP API calls Ideally your consumers should have a tightly bound service contract to your API to achieve consistent expectations of service levels and change control 2 Amazon API Gateway hosts RESTful HTTP requests and responses to customers In this scenario API Gateway provides built in authorization throttling security fault tolerance request/response mapping and performance optimizations 3 AWS Lambda contains the business logic to process incoming API calls and leverage DynamoDB as a p ersistent storage 4 Amazon DynamoDB persistently stores microservices data and scales based on demand Since microservices are often designed to do one thing well a schemaless NoSQL data store is regularly incorporated Configuration notes: AWS Lambda ClientAmazon API Gateway Amazon DynamoDB 1 2 3 4 ConsumerArchivedAmazon Web Services Serverless Application Lens 10 • Leverage API Ga teway logging to understand visibility of microservices consumer access behaviors This information is visible in Amazon CloudWatch Logs and can be quickly viewed through Log Pivots analyzed in CloudWatch Logs Insights or fed into other searchable engines such as Amazon ES or Amazon S3 (with Amazon Athena) The information delivered gives key visibility such as: o Understanding common customer locations which may change geographically based on the proximity of your backend o Understanding how customer input requests may have an impact on how you partition your database o Understanding the semantics of abnormal behavior which can be a security flag o Understanding errors latency and cache hits/misses to optimize configuration This model provides a framework tha t is easy to deploy and maintain and a secure environment that will scale as your needs grow Alexa Skills The Alexa Skills Kit gives 
developers the ability to extend Alexa's capabilities by building natural and engaging voice and visual experiences Succe ssful skills are habit forming where users routinely come back because it offers something unique it provides value in new novel and frictionless ways The biggest cause of frustration from users is when the skill doesn’t act how they expect it to and it might take multiple interactions before accomplishing what they need It’s essential to start by designing a voice interaction model and working backwards from that since some users may say too little too much or possibly something you aren’t expect ing The voice design process involves creating scripting and planning for expected as well as unexpected utterances ArchivedAmazon Web Services Serverless Application Lens 11 Figure 2: Alexa Skill example design script With a basic script in mind you can use the following techniques before start building a skill: • Outline the shortest route to completion o The shortest route to completion is generally when the user gives all information and slots at once an account is already linked if relevant and other prerequisites are satisfied in a single invocation of the skill • Outline alternate paths and decision trees o Often what the user says doesn’t include all information necessary to complete the request In the flow identify alternate pathways and use r decisions • Outline behind thescenes decisions the system logic will have to make o Identify behind thescenes system decisions for example with new or returning users A background system check might change the flow a user follows • Outline how the skill will help the user o Include clear directions in the help for what users can do with the skill Based on the complexity of the skill the help might provide one simple response or many responses • Outline the account linking process if present ArchivedAmazon Web Services Serverless Application Lens 12 o Determine the information that is required for account linking You also need to identif y how the skill will respond when account linking hasn’t been completed Characteristics: • You want to create a complete serverless architecture without managing any instance s or server s • You want your content to be decoupled from your skill as much as possible • You are looking to provide engaging voice experiences exposed as an API to optimize development across wide ranging Alexa devices Regions and languages • You want elasticity that scale s up and down to meet the demands of users and handles unexpected usage patterns Reference Architecture Figure 3: Reference architecture for an Alexa Skill 1 Alexa users interact with Alexa skills by speaking to Alexa enabled devices using voice as t he primary method of interaction ArchivedAmazon Web Services Serverless Application Lens 13 2 Alexa enabled devices listen for a wake word and activate as soon as one is recognized Supported wake words are Alexa Computer and Echo 3 The Alexa Service performs common Speech Language Understanding (SLU) processing o n behalf of your Alexa Skill including Automated Speech Recognition (ASR) Natural Language Understanding (NLU) and Text to Speech (TTS) conversion 4 Alexa Skills Kit (ASK) is a collection of self service APIs tools documentation and code examples that make it fast and easy for you to add skills to Alexa ASK is a trusted AWS Lambda trigger allowing for seamless integration 5 Alexa Custom Skill gives you control over the user experience allowing you to build a custom interaction model It is the most 
flexible type of skill, but also the most complex.
6. A Lambda function using the Alexa Skills Kit allows you to seamlessly build skills while avoiding unneeded complexity. Using it, you can process the different types of requests sent from the Alexa Service and build speech responses.
7. A DynamoDB database can provide a NoSQL data store that elastically scales with the usage of your skill. It is commonly used by skills for persisting user state and sessions.
8. Alexa Smart Home Skill allows you to control devices such as lights, thermostats, and smart TVs using the Smart Home API. Smart Home skills are simpler to build than custom skills because they don't give you control over the interaction model.
9. A Lambda function is used to respond to device discovery and control requests from the Alexa Service. Developers use it to control a wide range of devices, including entertainment devices, cameras, lighting, thermostats, locks, and many more.
10. AWS Internet of Things (IoT) allows developers to securely connect their devices to AWS and to control the interaction between their Alexa skill and their devices.
11. An Alexa-enabled Smart Home can have an unlimited number of IoT-connected devices receiving and responding to directives from an Alexa skill.
12. Amazon S3 stores your skill's static assets, including images, content, and media. Its contents are securely served using CloudFront.
13. Amazon CloudFront provides a Content Delivery Network (CDN) that serves content faster to geographically distributed users and includes security mechanisms for static assets in Amazon S3.
14. Account Linking is needed when your skill must authenticate with another system. This action associates the Alexa user with a specific user in the other system.
Configuration notes:
• Validate Smart Home request and response payloads against the JSON schema for all possible Alexa Smart Home messages sent by a skill to Alexa.
• Ensure that your Lambda function timeout is less than eight seconds and that it can handle requests within that timeframe. (The Alexa Service timeout is 8 seconds.)
• Follow best practices 7 when creating your DynamoDB tables. Use on-demand tables when you are not certain how much read/write capacity you need; otherwise, choose provisioned capacity with automatic scaling enabled. For skills that are read-heavy, DynamoDB Accelerator (DAX) can greatly improve response times.
• Account linking can provide user information that may be stored in an external system. Use that information to provide a contextual and personalized experience for your user. Alexa has guidelines on Account Linking to provide frictionless experiences.
• Use the skill beta testing tool to collect early feedback during skill development, and use skill versioning to reduce the impact on skills that are already live.
• Use the ASK CLI to automate skill development and deployment.
Mobile Backend
Users increasingly expect their mobile applications to have a fast, consistent, and feature-rich user experience. At the same time, mobile user patterns are dynamic, with unpredictable peak usage, and often have a global footprint. The growing demand from mobile users means that applications need a rich set of mobile services that work together seamlessly without sacrificing control and flexibility of the backend infrastructure. Certain capabilities across mobile applications are expected by default:
• Ability to query, mutate, and subscribe to database changes
•
Offline persistence of data and bandwidth optimizations when connected • Search filtering and discovery of data in applications • Analytics of user behavior • Targeted messaging through multiple channels (Push Notifications SMS Email) • Rich content such as images and videos • Data synchronization across multiple devices and multiple users • FineGrained authorization controls for viewing and manipulating data Building a serverless mobile backend on AWS enables you to provide these capabilities while automaticall y managing scalability elasticity and availability in an efficient and cost effective way Characteristics: • You want to control application data behavior from the client and explicitly select what data you want from the API • You want your business logic to be decoupled from your mobile application as much as possible • You are looking to provide business functionalities as an API to optimize development across multiple platforms • You are seeking to leverage managed services to reduce undifferentiated heavy lifting of maintaining mobile backend infrastructure while providing high levels of scalability and availability • You want to optimize your mobile backend costs based upon actual user demand versus paying for idle resources Reference Architecture ArchivedAmazon Web Services Serverless Application Lens 16 Figure 2: Reference architecture for a mobile backend 1 Amazon Cognito is used for user management and as an identity provider for your mobile application Additionally it allows mobile users to leverage existing social identities s uch as Facebook Twitter Google+ and Amazon to sign in 2 Mobile users interact with the mobile application backend by performing GraphQL operations against AWS AppSync and AWS service APIs (for example Amazon S3 and Amazon Cognito) 3 Amazon S3 stores mobi le application static assets including certain mobile user data such as profile images Its contents are securely served via CloudFront 4 AWS AppSync hosts GraphQL HTTP requests and responses to mobile users In this scenario data from AWS AppSync is real time when devices are connected and data is available offline as well Data sources for this scenario are Amazon DynamoDB Amazon Elasticsearch Serv ice or AWS Lambda functions 5 Amazon Elasticsearch Service acts as a main search engine for your mobile application as well as analytics 6 DynamoDB provides persistent storage for your mobile application including mechanisms to expire unwanted data from in active mobile users through a Time to Live (TTL) feature ArchivedAmazon Web Services Serverless Application Lens 17 7 A Lambda function handles interaction with other thirdparty services or calling other AWS services for custom flows which can be part of the GraphQL response to clients 8 DynamoDB Streams captures itemlevel changes and enables a Lambda function to update additional data sources 9 A Lambda function manages streaming data between DynamoDB and Amazon ES allowing customers to combine data sources logical GraphQL types and operations 10 Amazon Pinpoint captures analytics from clients including user sessions and custom metrics for application insights 11 Amazon Pinpoint delivers messages to all users/devices or a targeted subset based on analytics that have been gathered Messages can be customized and sent using p ush notifications email or SMS channels Configuration notes: • Performance test 3 your Lambda functions with different memory and timeout settings to ensure that you’re usin g the most appropriate resources for the job • Follow best 
practices 4 when creating your DynamoDB tables and consider having AWS AppSync automatically provis ion them from a GraphQL schema which will use a well distributed hash key and create indexes for your operations Make certain to calculate your read/write capacity and table partitioning to ensure reasonable response times • Use the AWS AppSync server side data caching to optimize your application experience as a ll subsequent query requests to your API will be returned from the cache which means data sources won’t b e contacted directly unless the TTL expires • Follow best practices 5 when managing Amazon ES Domains Additionally Amazon ES provides an extensive guide 6 on designing concerning sharding and access patterns that also apply here • Use the finegrained access controls o f AWS AppSync configured in resolvers to filter GraphQL requests down to the peruser or group level if necessary This can be applied to AWS Identity and Access Management ( IAM) or Amazon Cognito User Pools authorization with AWS AppSync ArchivedAmazon Web Services Serverless Application Lens 18 • Use AWS Amplif y and Amplify CLI to compose and integrate your application with multiple AWS services Amplify Console also takes care of deploying and managing stacks For low latency requirements where near tonone business logic is required Amazon Cognito Federated I dentity can provide scoped credentials so that your mobile application can talk directly to an AWS service for example when uploading a user’s profile picture retrieve metadata files from Amazon S3 scoped to a user etc Stream Processing Ingesting and processing real time streaming data requires scalability and low latency to support a variety of applications such as activity tracking transaction order processing clickstream analysis data cleansing metrics generation log filtering i ndexing social media analysis and IoT device data telemetry and metering These applications are often spiky and process thousands of events per second Using AWS Lambda and Amazon Kinesis you can build a serverless stream process that automatically sca les without provisioning or managing servers Data processed by AWS Lambda can be stored in DynamoDB and analyzed later Characteristics: • You want to create a complete serverless architecture without managing any instance or server for processing streaming data • You want to use the Amazon Kinesis Producer Library (KPL) to take care of data ingestion from a data producer perspective Reference Architecture Here we are presenting a scenario for common stream processing which is a reference architecture for a nalyzing social media data ArchivedAmazon Web Services Serverless Application Lens 19 Figure 3: Reference architecture for stream processing 1 Data producers use the Amazon Kinesis Producer Library (KPL) to send social media streaming data to a Kinesis stream Amazon Kinesis Agent and custom data producers that leverage the Kinesis API can also be used 2 An Amazon Kinesis stream collects processes and analyzes realtime streaming data produced by data producers Data ingested into the stream can be processed by a consumer which in th is case is Lambda 3 AWS Lambda acts as a consumer of the stream that receives an array of the ingested data as a single event/invocation Further processing is carried out by the Lambda function The transformed data is then stored in a persistent storage which in this case is DynamoDB 4 Amazon DynamoDB provides a fast and flexible NoSQL database service including triggers that can integrate with AWS Lambda 
to make such data available elsewhere 5 Business users leverage a reporting interface on top of Dynam oDB to gather insights out of social media trend data Configuration notes: • Follow best practices 7 when re sharding Kinesis streams to accommodate a higher ingestion rate Concurrency for stream processing is dictated by the number of shards and by the parallelization factor Therefore adjust it according to your throughput requirements • Consider reviewing the Streaming Data Solutions whitepaper 8 for batch processing analytics on streams and other useful patterns ArchivedAmazon Web Services Serverless Application Lens 20 • When not using KPL make certain to take into account partial failures for non atomic operations such as PutRecords since the Kinesis API returns both successfully and unsuccessfully processed records 9 upon ingestion time • Duplicated r ecords 10 may occur and you mu st leverage both retries and idempotency within your application for both consumers and producers • Consider using Kinesis Data Firehose over Lambda when ingested data needs to be continuously loaded into Amazon S3 Amazon Redshift or Amazon ES • Consider using Kinesis Data Analytics over Lambda when standard SQL could be used to query streaming data and load only its results into Amazon S3 Amazon Redshift Amazon ES or Ki nesis Streams • Follow best practices for AWS Lambda stream based invocation 11 since that covers the effects on batch size concurrency per shard and monitor ing stream processing in more detail • Use Lambda maximum retry attempts maximum record age bisect batch on function error and on failure destination error controls to build more resilient stream processing applications Web Application Web applications typically have demanding requirements to ensure a consistent secure and reliable user experience To ensure high availability global availability and the ability to scale to thousands or potentially millions of users you often had to reserve substantial excess capacity to handle web requests at their highest anticipated demand This often required managing fleets of servers and additional infrastructure components which in turn led to significant capital expenditures and long lead times for capacity provisioning Using serverless computing on AWS you can deploy your entire web application stack without performing the undiffer entiated heavy lifting of managing servers guessing at provisioning capacity or paying for idle resources Additionally you do not have to compromise on security reliability or performance Characteristics: • You want a scalable web application that can go global in minutes with a high level of resiliency and availability • You want a consistent user experience with adequate response times ArchivedAmazon Web Services Serverless Application Lens 21 • You are seeking to leverage managed services as much as possible for your platforms to limit the heavy lifting assoc iated with managing common platforms • You want to optimize your costs based on actual user demand versus paying for idle resources • You want to create a framework that is easy to set up and operate and that you can extend with limited impact later Refere nce Architecture Figure 4: Reference architecture for a web application 1 Consumers of this web application m ight be geographically concentrated or distributed worldwide Leveraging Amazon CloudFront not only provides a better performance experience for these consumers through caching and optimal origin routing but also limits redundant calls to your backend 
2 Amazon S3 hosts web application static assets and is securely served through CloudFront 3 An Amazon Cognito user p ool provides user management and identity provider feature s for your web application ArchivedAmazon Web Services Serverless Application Lens 22 4 In many scenarios a s static content from Amazon S3 is downloaded by the consumer dynamic content needs to be sent to or received by your application For example when a user submits data through a form Amazon API Gateway serves as the secure endpoint to make these calls and return responses displayed through your web application 5 An AWS Lambda function provides create read update and d elete (CRUD) operations on top of DynamoDB for your web application 6 Amazon DynamoDB can provide the backend NoSQL data store to elastically scale with the traffic of your web application Configuration Notes: • Follow best practices for deploying your serverless web application frontend on AWS More information can be found in the operational excellence pillar • For single page web applications use AWS Amplify Console to manage atomic deployments cache expirati on custom domain and user interface (UI) testing • Refer to the security pillar for recommendations on authentication and authorization • Refer to the RESTful Microservices scenario for recommendations on web application backend • For web applications that offer personalized services you can leverage API Gateway usage plans 12 as we ll as Amazon Cognito user pools to scope what different sets of users have access to For example a premium user can have higher throughput for API calls access to additional APIs additional storage etc • Refer to the Mobile Back end scenario if your application use s search capabilities that are not covered in this scenario The Pillars of the Well Architected Framework This section describes each of the pillars and includes definitions best practices questions consi derations and key AWS services that are relevant when architecting solutions for serverless applications For brevity we have only selected the questions from the Well Architected Framework that are specific to serverless workloads Questions that have n ot been included in this ArchivedAmazon Web Services Serverless Application Lens 23 document should still be considered when designing your architecture We recommend that you read the AWS Well Architected Framework whitepaper Operational Excellence Pillar The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures Definition There are three be st practice areas for operational excellence in the cloud: • Prepare • Operate • Evolve In addition to what is covered by the Well Architected Framework concerning processes runbooks and game days there are specific areas you should look into to drive operati onal excellence within serverless applications Best Practices Prepare There are no operational practices unique to serverless applications that belong to this subsection Operate OPS 1 : How do you understand the health of your Serverless application? 
Metrics and Alerts
It's important to understand Amazon CloudWatch metrics and dimensions for every AWS service you intend to use, so that you can put a plan in place to assess their behavior and add custom metrics where you see fit. Amazon CloudWatch provides automated cross-service and per-service dashboards to help you understand key metrics for the AWS services that you use. For custom metrics, use the Amazon CloudWatch Embedded Metric Format to log a batch of metrics that will be processed asynchronously by CloudWatch without impacting the performance of your serverless application.
The following guidelines can be used, whether you are creating a dashboard or formulating a plan for new and existing applications, when it comes to metrics:
• Business Metrics
o Business KPIs measure your application performance against business goals. They are important for knowing when something is critically affecting your overall business, revenue-wise or otherwise.
o Examples: orders placed, debit/credit card operations, flights purchased, etc.
• Customer Experience Metrics
o Customer experience data dictates not only the overall effectiveness of the UI/UX, but also whether changes or anomalies are affecting the customer experience in a particular section of your application. These are often measured in percentiles to prevent outliers from hiding the real impact over time and how it is spread across your customer base.
o Examples: perceived latency, time it takes to add an item to a basket or to check out, page load times, etc.
• System Metrics
o Vendor and application metrics are important for underpinning the root causes behind the previous categories. They also tell you whether your systems are healthy, at risk, or already affecting your customers.
o Examples: percentage of HTTP errors/successes, memory utilization, function duration/errors/throttling, queue length, stream records length, integration latency, etc.
• Operational Metrics
o Operational metrics are equally important for understanding the sustainability and maintenance of a given system, and they are crucial for pinpointing how stability has progressed or degraded over time.
o Examples: number of tickets (successful and unsuccessful resolutions, etc.), number of times people on call were paged, availability, CI/CD pipeline stats (successful/failed deployments, feedback time, cycle and lead time, etc.).
CloudWatch alarms should be configured at both individual and aggregated levels. An individual-level example is alarming on the Duration metric from Lambda, or on IntegrationLatency from API Gateway when invoked through an API, since different parts of the application likely have different profiles. In this instance, you can quickly identify a bad deployment that makes a function execute for much longer than usual.
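To make the individual-level case concrete, the following is a minimal sketch, using boto3, of an alarm on a single function's p99 duration. The function name, threshold, and SNS topic ARN are examples only; adapt them to your own workload and alerting setup.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if the p99 duration of one function exceeds 3 seconds
# for 5 consecutive 1-minute periods. Names and values are examples.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-function-p99-duration",
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": "checkout-function"}],
    ExtendedStatistic="p99",
    Period=60,
    EvaluationPeriods=5,
    Threshold=3000.0,  # milliseconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],
)

The same pattern applies to the aggregate-level metrics listed below.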
Aggregate-level examples include alarming on, but are not limited to, the following metrics:
• AWS Lambda: Duration, Errors, Throttling, and ConcurrentExecutions. For stream-based invocations, alert on IteratorAge. For asynchronous invocations, alert on DeadLetterErrors.
• Amazon API Gateway: IntegrationLatency, Latency, 5XXError
• Application Load Balancer: HTTPCode_ELB_5XX_Count, RejectedConnectionCount, HTTPCode_Target_5XX_Count, UnHealthyHostCount, LambdaInternalError, LambdaUserError
• AWS AppSync: 5XX and Latency
• Amazon SQS: ApproximateAgeOfOldestMessage
• Amazon Kinesis Data Streams: ReadProvisionedThroughputExceeded, WriteProvisionedThroughputExceeded, GetRecords.IteratorAgeMilliseconds, PutRecord.Success, PutRecords.Success (if using the Kinesis Producer Library), and GetRecords.Success
• Amazon SNS: NumberOfNotificationsFailed, NumberOfNotificationsFilteredOut-InvalidAttributes
• Amazon SES: Rejects, Bounces, Complaints, Rendering Failures
• AWS Step Functions: ExecutionThrottled, ExecutionsFailed, ExecutionsTimedOut
• Amazon EventBridge: FailedInvocations, ThrottledRules
• Amazon S3: 5xxErrors, TotalRequestLatency
• Amazon DynamoDB: ReadThrottleEvents, WriteThrottleEvents, SystemErrors, ThrottledRequests, UserErrors
Centralized and structured logging
Standardize your application logging to emit operational information about transactions, correlation identifiers, request identifiers across components, and business outcomes. Use this information to answer arbitrary questions about the state of your workload. Below is an example of structured logging using JSON as the output:

{
  "timestamp": "2019-11-26 18:17:33,774",
  "level": "INFO",
  "location": "cancel.cancel_booking:45",
  "service": "booking",
  "lambda_function_name": "test",
  "lambda_function_memory_size": "128",
  "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
  "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
  "cold_start": "true",
  "message": {
    "operation": "update_item",
    "details": {
      "Attributes": {
        "status": "CANCELLED"
      },
      "ResponseMetadata": {
        "RequestId": "G7S3SCFDEMEINPG6AOC6CL5IDNVV4KQNSO5AEMVJF66Q9ASUAAJG",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
          "server": "Server",
          "date": "Thu, 26 Nov 2019 18:17:33 GMT",
          "content-type": "application/x-amz-json-1.0",
          "content-length": "43",
          "connection": "keep-alive",
          "x-amzn-requestid": "G7S3SCFDEMEINPG6AOC6CL5IDNVV4KQNSO5AEMVJF66Q9ASUAAJG",
          "x-amz-crc32": "1848747586"
        },
        "RetryAttempts": 0
      }
    }
  }
}

Centralized logging helps you search and analyze your serverless application logs. Structured logging makes it easier to derive queries to answer arbitrary questions about the health of your application. As your system grows and more logging is ingested, consider using appropriate logging levels and a sampling mechanism to log a small percentage of logs in DEBUG mode.
Distributed Tracing
Similar to non-serverless applications, anomalies can occur at larger scale in distributed systems. Due to the nature of serverless architectures, it's fundamental to have distributed tracing. Making changes to your serverless application entails many of the same principles of deployment, change, and release management used in traditional workloads. However, there are subtle changes in how you use existing tools to accomplish these principles.
Active tracing with AWS X-Ray should be enabled to provide distributed tracing capabilities, as well as to enable visual service maps for faster troubleshooting. X-Ray helps you identify performance degradation and quickly understand anomalies, including latency distributions.
Figure 7: AWS X-Ray Service Map visualizing 2 services
Service maps are helpful for understanding integration points that need attention and resiliency practices. For integration calls, retries, backoffs, and possibly circuit breakers are necessary to prevent faults from propagating to downstream services.
Another example is networking anomalies. You should not rely on default timeout and retry settings. Instead, tune them to fail fast if a socket read/write timeout happens, since the default can be seconds, if not minutes, in certain clients. X-Ray also provides two powerful features that can
improve the efficiency on identifying anomalies within applications: Annotations and Subsegments Subsegments are he lpful to understand how application logic is constructed and what external dependencies it has to talk to Annotations are key value pairs with string number or Boolean values that are automatically indexed by AWS X Ray Combined they can help you quic kly identify performance stat istics on specific operations and business transactions for example how long it takes to query a database or how long it takes to process pictures with large crowds Figure 8: AWS X Ray Trace with subsegments beginning with ## ArchivedAmazon Web Services Serverless Application Lens 29 Figure 9: AWS X Ray Traces grouped by custom annotations OPS 2 : How do you approach application lifecycle management ? Prototyping Use infrastructure as code to create temporary environments for new features that you want to prototype a nd tear them down as you complete them You can use dedicated accounts per team or per developer depending on the size of the team and the level of automation within the organization Temporary environments allow for higher fidelity when working with mana ged services and increase levels of control to help you gain confidence that your workload integrates and operates as intended For configuration management use environment variables for infrequent changes such as logging level and database connection strings Use AWS System Manager Parameter Store for dynamic configuration such as feature toggles and store sensitive data using AWS Secrets Manager Testing Testing is commonly done through unit integration and acceptance tests Developing robust testing strategies allows you to emulate your serverless application under different loads and conditions Unit tests shouldn’t be different from non serverless applications and therefore can run locally without any changes ArchivedAmazon Web Services Serverless Application Lens 30 Integration tests shou ldn’t mock services you can’t control since they might change and provide unexpected results These tests are better performed when using real services because they can provide the same environment a serverless application would use when processing reques ts in production Acceptance or end toend tests should be performed without any changes because the primary goal is to simulate the end users’ action s through the available external interface Therefore there is no unique recommendation to be aware of he re In general Lambda and thirdparty tools that are available in the AWS Marketplace can be used as a test harness in the context of performance testing Here are some considerations during performance testing to be aware of: • Metrics such as invoked max memory used and init duration are available in CloudWatch Logs For more information read the performance pillar section • If your Lambda function runs inside Amazon Virtual Private Cloud (VPC) pay attention to available IP address space inside your subnet • Creating modularized code as separate functions outside of the handler enables more unit testable functions • Establishing externalized connection code (such as a connection pool to a relati onal database) referenced in the Lambda function’s static constructor/initialization code (that is global scope outside the handler) will ensure that external connection thresholds aren’t reached if the Lambda execution environment is reused • Use DynamoD B ondemand table unless your performance tests exceed current limits in your account • Take into account any 
other service limits that m ight be used within your serverless application under performance testing Deploying Use infrastructure as code and ver sion control to enable tracking of changes and releases Isolate development and production stages in separate environments This reduces errors caused by manual processes and helps increase levels of control to help you gain confidence that your workload operates as intended Use a serverless framework to model prototype build package and deploy serverless applications such as AWS SAM or Serverless Framework With infrastructure as code ArchivedAmazon Web Services Serverless Application Lens 31 and a framework you can parametrize your serverless application and its dependencies to ease deployment across isolated stages and across AWS accounts For example a CI/CD pipeline Beta stage can create the following resources in a beta AWS account and equally for the respective stages you may want to have in differen t accounts too (Gamma Dev Prod): OrderAPIBeta OrderServiceBeta OrderStateMachineBeta OrderBucketBeta OrderTableBeta Figure 10: CI/CD Pipeline for multiple accounts When deploying to production favor safe deployments over all atonce systems as new changes will gradually shift over time towards the end user in a canary or linear deployment Use CodeDeploy hooks ( BeforeAllowTraffic AfterAllowTraffic ) and alarms to gain more control over deployment validation rollback and any customiz ation you may need for your application ArchivedAmazon Web Services Serverless Application Lens 32 You can also combine the use of synthetic traffic custom metrics and alert s as part of a rollout deployment Th ese help you proactively detect errors with new changes that otherwise would have impacted your customer experience Evolve There are no operational practices unique to serverless applications that belong to this subsection Key AWS Services Key AWS services for oper ational excellence include AWS Systems Manager Parameter Store AWS SAM Cloud Watch AWS CodePipeline AWS XRay Lambda and API Gateway Resources Refer to the following resources to learn more about our best practices for operational excellence Documen tation & Blogs • API Gateway stage variables 13 • Lambda environment variables 14 • AWS SAM CLI 15 Figure 5: AWS CodeDeploy Lambda deployment and Hooks ArchivedAmazon Web Services Serverless A pplication Lens 33 • XRay latency distribution 16 • Troubleshooting Lambda based applications with X Ray 17 • System Manager (SSM) Parameter Store 18 • Contin uous Deployment for Serverless applications blog post 19 • SamFarm: CI/CD example 20 • Serverless Application example using CI/CD • Serverless Application example automating Alerts and Dashboard • CloudWatch Embedded Metric Format library for Python • CloudWatch Embedded Metric Format library for Nodejs • Example library to implement tracing structured logging and custom metrics • General AWS Limits • Stackery: Multi Account Best Practices Whitepaper • Practicing Continuous Integ ration /Continuous Delivery on AWS 21 Third Party Tools • Serverless Developer Tools page including third party frameworks/tools 22 • Stelligent: CodePipeline Dashboard for operational metrics Security Pillar The security pillar includes the ability to protect information systems and assets while delivering business value through risk assessm ents and mitigation strateg ies Definition There are five best practice areas for security in the cloud: • Identity and access management • Detective controls • Infrastructure protection 
ArchivedAmazon Web Services Serverless Application Lens 34 • Data protection • Incident response Serverless addresses some of today’s biggest security concerns as it removes infrastructure management tasks such as operating system patching updating binaries etc Although the attack surface is reduced compared to non serverless architectures the Op en Web Application Security Project ( OWASP ) and application security best practices still apply The questions in this section are designed to help you address specific ways an attacker could try to gain access to or exploit misconfigured permissions which could lead to abuse The practices described in this section strong ly influence the security of your entire cloud platform and so they should be validated carefully and also reviewed frequently The incident response category will not be described in thi s document because the practices from the AWS WellArchitected Framework still apply Best Practices Identity and Access Management SEC 1 : How do you control access to your Serverless API ? APIs are often targeted by attackers because of the operations that they can perform and the valuable data they can obtain There are various security best practices to defend against these attacks From an authentication/authorization perspective there are currently four mechanisms to authorize an API call within API Gateway: • AWS_IAM authorization • Amazon Cognito user pools • API Gateway Lambda authorizer • Resource policies Primarily you want to understand if and how any of these mechanisms are implemente d For consumers who currently are located within your AWS environment or have the means to retrieve AWS Identity and Access Management (IAM) temporary ArchivedAmazon Web Services Serverless Application Lens 35 credentials to access your environment you can use AWS_IAM authorization and add least privileged permi ssions to the respective IAM role to securely invoke your API The following diagram illustrat es using AWS_IAM authorization in this context: Figure 10: AWS_IAM authorization If you have an existing Identity Provider (IdP) you can use an API Gateway Lam bda authorizer to invoke a Lambda function to authenticate/validate a given user against your IdP You can use a Lambda authorizer for custom validation logic based on identity metadata A Lambda authorizer can send additional information derived from a bearer token or request context values to your backend service For example the authorizer can return a map containing user IDs user names and scope By using Lambda authorizers your backend does not need to map authorization tokens to user centric data allowing you to limit the exposure of such information to just the authorization function ArchivedAmazon Web Services Serverless Application Lens 36 Figure 6: API Gateway Lambda authorizer If you don’t have an IdP you can leverage Amazon Cognito user pools to either provide builtin user management or integrate with external identity providers such as Faceboo k Twitter Google+ and Amazon This is commonly seen in the mobile backend scenario where users authenticate by using existing accounts in social media platforms whil e being able to register/sign in with their email address/username This approach also provides granular authorization through OAuth Scopes ArchivedAmazon Web Services Serverless Application Lens 37 Figure 7: Amazon Cognito user pools API Gateway API Keys is not a security mechanism and should not be used for authorization unless it ’s a public API It should be used primarily to track a 
consumer's usage across your API, and it can be used in addition to the authorizers previously mentioned in this section. When using Lambda authorizers, we strictly advise against passing credentials or any sort of sensitive data via query string parameters or headers; otherwise, you may open your system up to abuse.
Amazon API Gateway resource policies are JSON policy documents that can be attached to an API to control whether a specified AWS principal can invoke the API. This mechanism allows you to restrict API invocations by:
• Users from a specified AWS account or any AWS IAM identity
• Specified source IP address ranges or CIDR blocks
• Specified virtual private clouds (VPCs) or VPC endpoints (in any account)
With resource policies, you can restrict common scenarios, such as only allowing requests coming from known clients within a specific IP range, or from another AWS account. If you plan to restrict requests coming from private IP addresses, it's recommended to use API Gateway private endpoints instead.
Figure 14: Amazon API Gateway Resource Policy based on IP CIDR
With private endpoints, API Gateway restricts access to services and resources inside your VPC, or those connected via Direct Connect to your own data centers. Combining private endpoints and resource policies, an API can be limited to specific resource invocations within a specific private IP range. This combination is mostly used for internal microservices, whether they are in the same or another account.
When it comes to large deployments and multiple AWS accounts, organizations can leverage cross-account Lambda authorizers in API Gateway to reduce maintenance and centralize security practices. For example, API Gateway has the ability to use Amazon Cognito user pools in a separate account. Lambda authorizers can also be created and managed in a separate account and then reused across multiple APIs managed by API Gateway. Both scenarios are common for deployments with multiple microservices that need to standardize authorization practices across APIs.
Figure 15: API Gateway Cross Account Authorizers
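Whichever account the authorizer lives in, the handler follows the same request/response contract. The following is a minimal sketch of a token-based Lambda authorizer; validate_token is a placeholder for your own IdP verification logic (for example, checking a JWT's signature, issuer, audience, and expiry), and the claim values shown are illustrative only.

# Minimal sketch of an API Gateway Lambda (TOKEN) authorizer.
def validate_token(token):
    # Placeholder: verify the bearer token against your IdP and
    # return a dict of claims on success, or None on failure.
    if not token:
        return None
    return {"sub": "user-123", "scope": "orders/read"}

def handler(event, context):
    claims = validate_token(event.get("authorizationToken", ""))
    effect = "Allow" if claims else "Deny"

    return {
        "principalId": claims["sub"] if claims else "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                # methodArn scopes the policy to the API, stage, and method invoked
                "Resource": event["methodArn"],
            }],
        },
        # Optional context is forwarded to the backend integration, so the
        # backend receives identity data without parsing the original token.
        "context": {"userId": claims["sub"], "scope": claims["scope"]} if claims else {},
    }

The context map returned here is what allows the backend to limit exposure of token handling to the authorization function, as described earlier in this section.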
SEC 2: How are you managing the security boundaries of your Serverless Application?
With Lambda functions, it's recommended that you follow least-privileged access and only allow the access needed to perform a given operation. Attaching a role with more permissions than necessary can open up your systems for abuse. In the security context, having smaller functions that perform scoped activities contributes to a more well-architected serverless application. Regarding IAM roles, sharing an IAM role across more than one Lambda function will likely violate least-privileged access.
Detective Controls
Log management is an important part of a well-architected design, for reasons ranging from security and forensics to regulatory or legal requirements. It is equally important that you track vulnerabilities in application dependencies, because attackers can exploit known vulnerabilities found in dependencies regardless of which programming language is used. For application dependency vulnerability scans, there are several commercial and open-source solutions, such as OWASP Dependency Check, that can integrate within your CI/CD pipeline. It's important to include all your dependencies, including AWS SDKs, as part of your version control software repository.
Infrastructure Protection
For scenarios where your serverless application needs to interact with other components deployed in a virtual private cloud (VPC), or with applications residing on premises, it's important to ensure that networking boundaries are considered. Lambda functions can be configured to access resources within a VPC. Control traffic at all layers, as described in the AWS Well-Architected Framework. For workloads that require outbound traffic filtering for compliance reasons, proxies can be used in the same manner that they are applied in non-serverless architectures. Enforcing networking boundaries solely at the application code level, by instructing the code which resources it may access, is not recommended due to separation of concerns.
For service-to-service communication, favor dynamic authentication, such as temporary credentials with AWS IAM, over static keys. API Gateway and AWS AppSync both support IAM authorization, which makes it ideal for protecting communication to and from AWS services.
Data Protection
Consider enabling API Gateway access logs, and selectively choose only what you need, since the logs might contain sensitive data depending on your serverless application design. For this reason, we recommend that you encrypt any sensitive data traversing your serverless application. API Gateway and AWS AppSync employ TLS across all communications, clients, and integrations. Although HTTP payloads are encrypted in transit, request paths and query strings that are part of a URL might not be. Therefore, sensitive data can be accidentally exposed via CloudWatch Logs if it is sent to standard output. Additionally, malformed or intercepted input can be used as an attack vector, either to gain access to a system or to cause a malfunction. Sensitive data should be protected at all times, in as many layers as possible, as discussed in detail in the AWS Well-Architected Framework; the recommendations in that whitepaper still apply here.
With regard to API Gateway, sensitive data should be either encrypted at the client side before being included in an HTTP request, or sent as a payload as part of an HTTP POST request. That also includes encrypting any headers that might contain sensitive data prior to making a given request. Concerning Lambda
functions or any integrations that API Gateway may be configured with sensitive data should be encrypted before any processing or data manipulation This will prevent data le akage if such data gets exposed in persistent storage or by standard output that is streamed and persisted by Cloud Watch Logs In the scenarios described earlier in this document Lambda function s would persist encrypted data in either DynamoDB Amazon ES or Amazon S3 along with encryption at rest We strictly advise against s ending logging and storing unencrypted sensitive data either as part of HTTP request path/query strings or in standard output of a Lambda function Enabling logging in API Gateway where sensitive data is unencrypted is also discouraged As mentioned in the Detective Controls subsection you should consult your compliance team before enabling API Gateway logging in such cases SEC 3: How do you implement Application Security in your workload ? Review security awareness documents authored by AWS Security bulletins and industry threat intelligence as covered in the AWS WellArchitected Framework OWASP guidelines for application security still appl y Validate and sanitize inbound events and perform a security code review as you normally would for non serverless applications For API Gateway set up basic request validation as a first step to ensure that the request adheres to the configured JSON Schema request model as well as any required parameter s in the URI query string or headers Application specific deep validation should be implemented whether that is as a separate Lambda function library framework or service Store y our secrets such as database passwords or API keys in a secrets manager that allows for rotation secure and audited access Secrets Manager allow s finegrained policies for secrets including auditing Key AWS Services Key AWS services for security are A mazon Cognito IAM Lambda Cloud Watch Logs AWS CloudTrail AWS CodePipeline Amazon S3 Amazon ES DynamoDB and Amazon Virtual Private Cloud (Amazon VPC) ArchivedAmazon Web Services Serverless Application Lens 42 Resources Refer to the following resources to learn more about our best practices for security Documentation & Blogs • IAM role for Lambda function with Amazon S3 example 23 • API Gateway Request Validation 24 • API Gateway Lambda Authorizers 25 • Securing API Access with Amazon Cognito Federated Identities Amazon Cognito User Pools and Amaz on API Gateway 26 • Configuring VPC Access for AWS Lambda 27 • Filtering VPC outbound traffic with Squid Proxies 28 • Using A WS Secrets Manager with Lambda • Auditing Secrets with AWS Secrets Manager • OWASP Input validation cheat sheet • AWS Serverless Security Workshop Whitepapers • OWASP Secure Coding Best Practices 29 • AWS Security Best Practices 30 Partner Solutions • PureSec Serverless Security • Twistlock Serverless Security 31 • Protego Serverless Security • Snyk – Commercial Vulnerability DB and Dependency Che ck32 • Using Hashicorp Vault with Lambda & API Gateway Third Party Tools • OWASP Vulnerability Dependency Check 33 ArchivedAmazon Web Services Serverless Application Lens 43 Reliability Pillar The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions dynamically acquire computing resources to meet demand and mitigate disruptions such as misconfigurations or transient network issues Definition There are thr ee best practice areas for reliability in the cloud: • Foundations • Change management • Failure management To achieve 
reliability a system must have a well planned foundation and monitoring in place with mechanisms for handling changes in demand requirements or potentially defending an unauthorized denial of service attack The system should be designed to detect failure and ideally automatically heal itself Best Practices Foundations REL 1 : How are you regulating inbound request rates ? Throttling In a microservices architecture API consumers may be in separate teams or even outside the organization This creates a vulnerability due to unknown access patterns as well as the risk of consumer credentials being compromised The service API can potentially be affected if the number of requests exceeds what the processing logic/backend can handle Additionally events that trigger new transactions such as an update in a database row or new objects being added to a n S3 bucket as part of the API will trigger additional executions throu ghout a serverless application Throttling should be enabled at the API level to enforce access patterns established by a service contract Defining a request access pattern strategy is fundamental to ArchivedAmazon Web Services Serverless Application Lens 44 establish ing how a consumer should use a service whether that is at the resou rce or global level Returning the appropriate HTTP status codes within your API (such as a 429 for throttling) help s consumers plan for throttled access by implementing back off and retries accordingly For more granular throttling and metering usage issuing API keys to consumers with usage plans in addition to global throttling enables API Gateway to enforce quota and access patterns in unexpected behavior API keys also simplif y the process for administr ators to cut off access if an individual consumer is making suspicious requests A common way to capture API keys is through a developer portal This provides you as the service provider with additional metadata associated with the consumers and requests You may capture the application contact information and business area/purpose and store this data in a durable data store such as DynamoDB This gives you additional validation of your consumers and provides traceability of logging with identities so that you can contact consumers for breaking change upgrades/issues As discussed in the security pillar API keys are not a security mechanism to authorize requests and therefore should only be used with one of the available authorization options available within API Gateway Concurrency controls are sometimes necessary to protect specific workloads against service failure as they may not scale as rapidly as Lambda Concurrency controls enable you to control the allocation of how many concurrent invocations of a particular Lambda function are set at the individual Lambda function level Lambda invocations that exceed the concurrency set of an individual function will be throttled by the AWS Lambda Service and the result will vary depending on their event source – Synchronous invocations return HTTP 429 error Asynchronous invocations will be queued and retried while Stream based event sources will retry up to their record expiration time ArchivedAmazon Web Services Serverless Application Lens 45 Figure 16: AWS Lambda concurrency controls Controlling concurrency is particularly useful for the following scenarios: • Sensitive backend or integrated systems that may have scaling limitations • Database Connection Pool restrictions such as a relational database which may impose concurrent limits • Critical Path 
Services: Higher-priority Lambda functions, such as authorization, versus lower-priority functions (for example, back office) competing against limits in the same account
• Ability to disable a Lambda function (concurrency = 0) in the event of anomalies
• Limiting desired execution concurrency to protect against Distributed Denial of Service (DDoS) attacks
Concurrency controls for Lambda functions also limit their ability to scale beyond the concurrency set, and they draw from your account's reserved concurrency pool. For asynchronous processing, use Kinesis Data Streams to effectively control concurrency with a single shard, as opposed to Lambda function concurrency control. This gives you the flexibility to increase the number of shards or the parallelization factor to increase the concurrency of your Lambda function.
Figure 8: Concurrency controls for synchronous and asynchronous requests
REL 2: How are you building resiliency into your serverless application?
Asynchronous Calls and Events
Asynchronous calls reduce the latency of HTTP responses. Multiple synchronous calls, as well as long-running wait cycles, may result in timeouts and "locked" code that prevents retry logic. Event-driven architectures enable streamlining asynchronous executions of code, thus limiting consumer wait cycles. These architectures are commonly implemented asynchronously using queues, streams, pub/sub, webhooks, state machines, and event rule managers across multiple components that perform a business functionality.
User experience is decoupled with asynchronous calls. Instead of blocking the entire experience until the overall execution is completed, frontend systems receive a reference/job ID as part of their initial request and subscribe to real-time changes or, in legacy systems, use an additional API to poll its status. This decoupling allows the frontend to be more efficient by using event loops, parallel or concurrency techniques while making such requests, and by lazily loading parts of the application when a response is partially or completely available. The frontend becomes a key element in asynchronous calls as it becomes more robust with custom retries and caching. It can halt an in-flight request if no response has been received within an acceptable SLA, whether that is caused by an anomaly, a transient condition, networking issues, or degraded environments.
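The following is a minimal sketch of this decoupling pattern with two Lambda handlers: one accepts the request, enqueues it, and immediately returns a job ID; the other processes the work from the queue. The queue URL and the jobs table name are hypothetical examples.

import json
import uuid
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/orders-queue"  # example
jobs_table = dynamodb.Table("jobs")  # example table tracking job status

def submit_order(event, context):
    """API-facing handler: accept the request and return a job ID immediately."""
    job_id = str(uuid.uuid4())
    jobs_table.put_item(Item={"job_id": job_id, "status": "PENDING"})
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"job_id": job_id, "order": json.loads(event["body"])}),
    )
    # The frontend polls or subscribes for the job status instead of waiting.
    return {"statusCode": 202, "body": json.dumps({"job_id": job_id})}

def process_order(event, context):
    """SQS-triggered worker: performs the long-running work asynchronously."""
    for record in event["Records"]:
        message = json.loads(record["body"])
        # ... perform the actual business transaction here ...
        jobs_table.update_item(
            Key={"job_id": message["job_id"]},
            UpdateExpression="SET #s = :s",
            ExpressionAttributeNames={"#s": "status"},  # status is a reserved word
            ExpressionAttributeValues={":s": "COMPLETED"},
        )

A message that repeatedly fails in this flow can be captured in a dead-letter queue, which ties into the failure management guidance later in this section.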
Alternatively, when synchronous calls are necessary, it's recommended, at a minimum, to ensure that the total execution time doesn't exceed the API Gateway or AWS AppSync maximum timeout. Use an external service (for example, AWS Step Functions) to coordinate business transactions across multiple services, to control state, and to handle the errors that occur along the request lifecycle.
Change Management
This is covered in the AWS Well-Architected Framework, and specific information on serverless can be found in the operational excellence pillar.
Failure Management
Certain parts of a serverless application are dictated by asynchronous calls to various components in an event-driven fashion, such as by pub/sub and other patterns. When asynchronous calls fail, they should be captured and retried whenever possible; otherwise, data loss can occur, resulting in a degraded customer experience.
For Lambda functions, build retry logic into your Lambda queries to ensure that spiky workloads don't overwhelm your backend. Use structured logging, as covered in the operational excellence pillar, to log retries, including contextual information about errors, as they can be captured as a custom metric. Use Lambda Destinations to send contextual information about errors, stack traces, and retries into dedicated dead letter queues (DLQs), such as SNS topics and SQS queues. You also want a plan to poll these queues with a separate mechanism and re-drive failed events back to their intended service.
AWS SDKs provide back-off and retry mechanisms by default when talking to other AWS services, and these are sufficient in most cases; however, review and tune them to suit your needs, especially HTTP keep-alive, connection, and socket timeouts. Whenever possible, use Step Functions to minimize the amount of custom try/catch, backoff, and retry logic within your serverless applications. For more information, see the cost optimization pillar section. Use Step Functions integrations to save failed state executions and their state into a DLQ.
Figure 9: Step Functions state machine with DLQ step
Partial failures can occur in non-atomic operations such as PutRecords (Kinesis) and BatchWriteItem (DynamoDB), since these calls return successfully even when only some records have been ingested. Always inspect the response when using such operations and programmatically deal with partial failures. When consuming from Kinesis or DynamoDB streams, use Lambda error handling controls, such as maximum record age, maximum retry attempts, DLQ on failure, and bisect batch on function error, to build additional resiliency into your application.
For synchronous parts that are transaction-based and depend on certain guarantees and requirements, rolling back failed transactions, as described by the Saga pattern 34, can also be achieved by using Step Functions state machines, which will decouple and simplify the logic of your application.
Figure 10: Saga pattern in Step Functions by Yan Cui
Limits
In addition to what is covered in the Well-Architected Framework, consider reviewing limits for burst and spiky use cases. For example, API Gateway and Lambda have different limits for steady and burst request rates. Use scaling layers and asynchronous patterns when possible, and perform load tests to ensure that your current account limits can sustain your actual customer demand.
Key AWS Services
Key AWS services for reliability are AWS Marketplace, Trusted Advisor, CloudWatch Logs, CloudWatch, API Gateway, Lambda, X-Ray, Step Functions, Amazon SQS, and Amazon SNS.
Resources
Refer to the following resources to learn more about our best practices for reliability.
Documentation & Blogs
• Limits in Lambda 35
• Limits in API Gateway 36
• Limits in Kinesis Streams 37
• Limits in DynamoDB 38
• Limits in Step Functions 39
• Error handling patterns 40
• Serverless testing with Lambda 41
• Monitoring Lambda Functions Logs 42
• Versioning Lambda 43
• Stages in API Gateway 44
• API Retries in AWS 45
• Step Functions error handling 46
• X-Ray 47
• Lambda DLQ 48
• Error handling patterns with API Gateway and Lambda 49
• Step Functions Wait state 50
• Saga pattern 51
• Applying Saga pattern via Step Functions 52
• Serverless Application Repository App – DLQ Redriver
• Troubleshooting retry and timeout issues with AWS SDK
• Lambda resiliency controls for stream processing
• Lambda Destinations
• Serverless Application Repository App – Event Replay
• Serverless Application Repository App – Event Storage and Backup
Whitepapers
•
Limits

In addition to what is covered in the Well-Architected Framework, consider reviewing limits for burst and spiky use cases. For example, API Gateway and Lambda have different limits for steady and burst request rates. Use scaling layers and asynchronous patterns when possible, and perform load tests to ensure that your current account limits can sustain your actual customer demand.

Key AWS Services

Key AWS services for reliability are AWS Marketplace, Trusted Advisor, CloudWatch Logs, CloudWatch, API Gateway, Lambda, X-Ray, Step Functions, Amazon SQS, and Amazon SNS.

Resources

Refer to the following resources to learn more about our best practices for reliability.

Documentation & Blogs
• Limits in Lambda 35
• Limits in API Gateway 36
• Limits in Kinesis Streams 37
• Limits in DynamoDB 38
• Limits in Step Functions 39
• Error handling patterns 40
• Serverless testing with Lambda 41
• Monitoring Lambda Functions Logs 42
• Versioning Lambda 43
• Stages in API Gateway 44
• API Retries in AWS 45
• Step Functions error handling 46
• X-Ray 47
• Lambda DLQ 48
• Error handling patterns with API Gateway and Lambda 49
• Step Functions Wait state 50
• Saga pattern 51
• Applying Saga pattern via Step Functions 52
• Serverless Application Repository App – DLQ Redriver
• Troubleshooting retry and timeout issues with AWS SDK
• Lambda resiliency controls for stream processing
• Lambda Destinations
• Serverless Application Repository App – Event Replay
• Serverless Application Repository App – Event Storage and Backup

Whitepapers
• Microservices on AWS 53

Performance Efficiency Pillar

The performance efficiency pillar focuses on the efficient use of computing resources to meet requirements, and on maintaining that efficiency as demand changes and technologies evolve.

Definition

Performance efficiency in the cloud is composed of four areas:
• Selection
• Review
• Monitoring
• Tradeoffs

Take a data-driven approach to selecting a high-performance architecture. Gather data on all aspects of the architecture, from the high-level design to the selection and configuration of resource types. By reviewing your choices on a cyclical basis, you ensure that you are taking advantage of the continually evolving AWS Cloud. Monitoring ensures that you are aware of any deviation from expected performance and can take action on it. Finally, you can make tradeoffs in your architecture to improve performance, such as using compression or caching, or by relaxing consistency requirements.

PER 1: How have you optimized the performance of your serverless application?

Selection

Run performance tests on your serverless application using steady and burst rates. Using the results, try tuning capacity units, and load test again after changes to help you select the best configuration:
• Lambda: Test different memory settings, as CPU, network, and storage IOPS are allocated proportionally (a tuning sketch follows this subsection).
• API Gateway: Use edge-optimized endpoints for geographically dispersed customers. Use Regional endpoints for regional customers and when using other AWS services within the same Region.
• DynamoDB: Use on-demand mode for unpredictable application traffic; otherwise, use provisioned mode for consistent traffic.
• Kinesis: Use enhanced fan-out for a dedicated input/output channel per consumer in multiple-consumer scenarios. Use an extended batch window for low-volume transactions with Lambda.

Configure VPC access for your Lambda functions only when necessary. Set up a NAT gateway if your VPC-enabled Lambda function needs access to the internet. As covered in the Well-Architected Framework, configure your NAT gateway across multiple Availability Zones for high availability and performance.

API Gateway edge-optimized APIs provide a fully managed CloudFront distribution to optimize access for geographically dispersed consumers. API requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time.

Figure 11: Edge-optimized API Gateway deployment

An API Gateway Regional endpoint doesn't provide a CloudFront distribution and enables HTTP/2 by default, which helps reduce overall latency when requests originate from the same Region. Regional endpoints also allow you to associate your own Amazon CloudFront distribution or an existing CDN.

Figure 21: Regional endpoint API Gateway deployment

This comparison can help you decide whether to deploy an edge-optimized API or a Regional API endpoint:
• Edge-optimized API: choose when the API is accessed across Regions; includes an API Gateway-managed CloudFront distribution.
• Regional API endpoint: choose when the API is accessed within the same Region; provides the lowest request latency when the API is accessed from the same Region in which it is deployed, and gives you the ability to associate your own CloudFront distribution.

This decision tree can help you decide when to deploy your Lambda function in a VPC.

Figure 12: Decision tree for deploying a Lambda function in a VPC
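As referenced in the Lambda bullet of the Selection list above, the following is a minimal sketch (not the AWS Lambda Power Tuning tool itself) of sweeping Lambda memory settings with boto3 and comparing client-observed latency. The function name, memory values, and payload are assumptions for illustration.

import json
import time

import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "my-service-handler"      # hypothetical function
MEMORY_SIZES = [256, 512, 1024, 2048]     # MB settings to compare
TEST_PAYLOAD = {"sample": "event"}        # hypothetical test event

for memory in MEMORY_SIZES:
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME, MemorySize=memory
    )
    # Configuration updates are asynchronous; wait until the function is ready again.
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)

    start = time.monotonic()
    lambda_client.invoke(
        FunctionName=FUNCTION_NAME,
        Payload=json.dumps(TEST_PAYLOAD).encode("utf-8"),
    )
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"{memory} MB -> {elapsed_ms:.0f} ms (client-observed, single sample)")

A single client-side sample is only indicative; in practice, run many invocations per setting and read the billed duration from the REPORT line in CloudWatch Logs, or use the Lambda Power Tuning project listed in the performance Resources section.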
Optimize

As a serverless architecture grows organically, certain mechanisms are commonly used across a variety of workload profiles. Even after performance testing, design tradeoffs should be considered to increase your application's performance, always keeping your SLA and requirements in mind.

API Gateway and AWS AppSync caching can be enabled to improve performance for applicable operations. DAX can improve read responses significantly, and Global and Local Secondary Indexes prevent DynamoDB full-table scan operations. These details and resources were described in the Mobile Backend scenario.

API Gateway content encoding allows API clients to request that the payload be compressed before being sent back in the response to an API request. This reduces the number of bytes sent from API Gateway to API clients and decreases the time it takes to transfer the data. You can enable content encoding in the API definition, and you can also set the minimum response size that triggers compression. By default, APIs do not have content encoding support enabled.

Set your function timeout a few seconds higher than the average execution time to account for any transient issues in downstream services used in the communication path. This also applies when working with Step Functions activities, tasks, and SQS message visibility. Choosing a default memory setting and timeout in AWS Lambda may have an undesired effect on performance, cost, and operational procedures. Setting the timeout much higher than the average execution time may cause functions to run for longer when code malfunctions, resulting in higher costs and possibly reaching concurrency limits, depending on how such functions are invoked. Setting a timeout equal to one successful function execution may cause a serverless application to abruptly halt an execution should a transient networking issue or an abnormality in downstream services occur. Setting a timeout without performing load testing, and more importantly without considering upstream services, may result in errors whenever any part reaches its timeout first.

Follow best practices for working with Lambda functions 54, such as container reuse, minimizing the deployment package size to its runtime necessities, and minimizing the complexity of your dependencies, including frameworks that may not be optimized for fast startup. The 99th-percentile latency (P99) should always be taken into account, so that a slow path does not impact the application SLA agreed with other teams.

For Lambda functions in a VPC, avoid DNS resolution of public host names of underlying resources in your VPC. For example, if your Lambda function accesses an Amazon RDS DB instance in your VPC, launch the instance with the not-publicly-accessible option.

After a Lambda function has executed, AWS Lambda maintains the execution context for some arbitrary time in anticipation of another invocation. This allows you to use the global scope for one-off expensive operations, for example establishing a database connection or any initialization logic. In subsequent invocations, you can verify whether the connection is still valid and reuse it.
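The following is a minimal sketch of the execution-context reuse described above, assuming a hypothetical DynamoDB table; the same pattern applies to database connections, SDK clients, and other expensive initialization.

import boto3

# Created once per execution environment (cold start) and reused on warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sessions")  # hypothetical table name

# Expensive one-off initialization (configuration, secrets, reference data) also
# belongs here rather than inside the handler.
CONFIG = {"feature_flag": True}


def handler(event, context):
    # Only per-request work happens inside the handler.
    item = table.get_item(Key={"session_id": event["session_id"]}).get("Item")
    return {"found": item is not None, "flags": CONFIG}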
Asynchronous Transactions

Because your customers expect more modern and interactive user interfaces, you can no longer sustain complex workflows using synchronous transactions. The more service interaction you need, the more you end up chaining calls, which increases the risk to service stability as well as response time. Modern UI frameworks such as AngularJS, Vue.js, and React, together with asynchronous transactions and cloud-native workflows, provide a sustainable approach to meeting customer demand, and they help you decouple components and focus on processes and business domains instead. These asynchronous transactions (often described as an event-driven architecture) kick off downstream, choreographed events in the cloud instead of constraining clients to lock-and-wait (I/O blocking) for a response.

Asynchronous workflows handle a variety of use cases including, but not limited to, data ingestion, ETL operations, and order/request fulfillment. In these use cases, data is processed as it arrives and is retrieved as it changes. We outline best practices for two common asynchronous workflows, where you can learn a few optimization patterns for integration and asynchronous processing.

Serverless Data Processing

In a serverless data processing workflow, data is ingested from clients into Kinesis (using the Kinesis agent, SDK, or API) and arrives in Amazon S3. New objects kick off a Lambda function that is automatically executed. This function is commonly used to transform or partition data for further processing, with the output possibly stored in other destinations, such as DynamoDB or another S3 bucket, where the data is in its final format. Because you may have different transformations for different data types, we recommend granularly splitting the transformations into different Lambda functions for optimal performance. With this approach, you have the flexibility to run data transformations in parallel, gaining both speed and cost benefits.

Figure 23: Asynchronous data ingestion

Kinesis Data Firehose offers native data transformations that can be used as an alternative to Lambda where no additional logic is necessary, for example transforming records from Apache or system log formats to CSV or JSON, or from JSON to Parquet or ORC.

Serverless Event Submission with Status Updates

Suppose you have an e-commerce site and a customer submits an order that kicks off an inventory deduction and shipment process, or an enterprise application submits a large query that may take minutes to respond. The processes required to complete this common transaction may require multiple service calls that take a couple of minutes to complete. Within those calls, you want to safeguard against potential failures by adding retries and exponential backoff. However, that can cause a suboptimal user experience for whoever is waiting for the transaction to complete.

For long and complex workflows like this, you can integrate API Gateway or AWS AppSync with Step Functions so that, upon a new authorized request, the business workflow is started. Step Functions responds immediately with an execution ID to the caller (mobile app SDK, web service, and so on). For legacy systems, you can use the execution ID to poll Step Functions for the business workflow status via another REST API. With WebSockets, whether you're using REST or GraphQL, you can receive the business workflow status in real time by providing updates at every step of the workflow.

Figure 24: Asynchronous workflow with Step Functions state machines
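A hedged sketch of this pattern is shown below, assuming a hypothetical state machine ARN: the API-facing code starts an execution and immediately returns its identifier, and a status API for legacy clients polls DescribeExecution. Production implementations would more commonly use an API Gateway service integration or WebSockets, as discussed in this section.

import json

import boto3

sfn = boto3.client("stepfunctions")

STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:OrderWorkflow"  # hypothetical
)


def submit_order(order):
    """Called behind API Gateway: start the workflow and return a reference immediately."""
    response = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps(order),
    )
    return {"executionArn": response["executionArn"]}


def get_order_status(execution_arn):
    """Called by a status-polling REST API for legacy clients."""
    execution = sfn.describe_execution(executionArn=execution_arn)
    return {"status": execution["status"]}  # RUNNING, SUCCEEDED, FAILED, ...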
Another common scenario is integrating API Gateway directly with SQS or Kinesis as a scaling layer. A Lambda function would only be necessary if additional business information or a custom request ID format is expected from the caller.

Figure 25: Asynchronous workflow using a queue as a scaling layer

In this second example, SQS serves multiple purposes:
1. Storing the request record durably is important, because the client can confidently proceed through the workflow knowing that the request will eventually be processed.
2. Upon a burst of events that may temporarily overwhelm the backend, the request can be polled for processing when resources become available.

Compared with this, the first example works without a queue because Step Functions stores the data durably, removing the need for a separate queue or state-tracking data source. In both examples, the best practice is to pursue an asynchronous workflow after the client submits the request, and to avoid treating the resulting response as blocking code if completion can take several minutes.

With WebSockets, AWS AppSync provides this capability out of the box via GraphQL subscriptions. With subscriptions, an authorized client can listen for the data mutations they're interested in. This is ideal for data that is streaming or may yield more than a single response. With AWS AppSync, as status updates change in DynamoDB, clients can automatically subscribe and receive updates as they occur, and it's the perfect pattern for when data drives the user interface.

Figure 26: Asynchronous updates via WebSockets with AWS AppSync and GraphQL

Webhooks can be implemented with SNS topic HTTP subscriptions. Consumers can host an HTTP endpoint that SNS calls back via a POST method upon an event (for example, a data file arriving in Amazon S3). This pattern is ideal when the clients are configurable, such as another microservice, which could host an endpoint. Alternatively, Step Functions supports callbacks, where a state machine blocks until it receives a response for a given task.

Figure 27: Asynchronous notification via Webhook with SNS

Lastly, polling can be costly from both a cost and a resource perspective, because multiple clients constantly poll an API for status. If polling is the only option due to environment constraints, it's a best practice to establish SLAs with the clients to limit the number of "empty polls."

Figure 28: Client polling for updates on a recently made transaction

For example, if a large data warehouse query takes an average of two minutes to respond, the client should poll the API only after two minutes, with exponential backoff if the data is not yet available. There are two common patterns to ensure that clients aren't polling more frequently than expected: throttling, and a timestamp indicating when it is safe to poll again. For timestamps, the system being polled can return an extra field with a timestamp or time period indicating when it is safe for the consumer to poll once again. This approach follows an optimistic scenario in which the consumer respects and uses this value wisely; in the event of abuse, you can also employ throttling for a more complete implementation.
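From the client side, the polling guidance above might look like the following sketch, which assumes a hypothetical status endpoint that returns a retry_after_seconds hint; the URL, field names, and limits are illustrative only.

import json
import time
import urllib.request

STATUS_URL = "https://api.example.com/jobs/{job_id}"  # hypothetical endpoint
MAX_WAIT_SECONDS = 600


def wait_for_result(job_id, initial_delay=120):
    """Poll a long-running job, honoring the server's retry-after hint with backoff."""
    delay = initial_delay  # for example, the known average completion time
    waited = 0
    while waited < MAX_WAIT_SECONDS:
        time.sleep(delay)
        waited += delay
        with urllib.request.urlopen(STATUS_URL.format(job_id=job_id)) as resp:
            body = json.loads(resp.read())
        if body.get("status") == "COMPLETE":
            return body["result"]
        # Prefer the server-provided hint; otherwise back off exponentially
        # so impatient clients don't generate a stream of "empty polls".
        delay = body.get("retry_after_seconds", min(delay * 2, 120))
    raise TimeoutError(f"Job {job_id} did not complete within {MAX_WAIT_SECONDS} seconds")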
Review

See the AWS Well-Architected Framework whitepaper for best practices in the review area for performance efficiency that apply to serverless applications.

Monitoring

See the AWS Well-Architected Framework whitepaper for best practices in the monitoring area for performance efficiency that apply to serverless applications.

Tradeoffs

See the AWS Well-Architected Framework whitepaper for best practices in the tradeoffs area for performance efficiency that apply to serverless applications.

Key AWS Services

Key AWS services for performance efficiency are DynamoDB Accelerator, API Gateway, Step Functions, NAT gateway, Amazon VPC, and Lambda.

Resources

Refer to the following resources to learn more about our best practices for performance efficiency.

Documentation & Blogs
• AWS Lambda FAQs 55
• Best Practices for Working with AWS Lambda Functions 56
• AWS Lambda: How It Works 57
• Understanding Container Reuse in AWS Lambda 58
• Configuring a Lambda Function to Access Resources in an Amazon VPC 59
• Enable API Caching to Enhance Responsiveness 60
• DynamoDB: Global Secondary Indexes 61
• Amazon DynamoDB Accelerator (DAX) 62
• Developer Guide: Kinesis Streams 63
• Java SDK: Performance improvement configuration
• Node.js SDK: Enabling HTTP Keep-Alive
• Node.js SDK: Improving Imports
• Using Amazon SQS queues and AWS Lambda for high throughput
• Increasing stream processing performance with enhanced fan-out
• Lambda Power Tuning
• When to use Amazon DynamoDB on-demand and provisioned mode
• Analyzing Log Data with Amazon CloudWatch Logs Insights
• Integrating multiple data sources with AWS AppSync
• Step Functions Service Integrations
• Caching patterns
• Caching Serverless Applications
• Best Practices for Amazon Athena and AWS Glue

Cost Optimization Pillar

The cost optimization pillar includes the continual process of refinement and improvement of a system over its entire lifecycle. From the initial design of your first proof of concept to the ongoing operation of production workloads, adopting the practices in this document will enable you to build and operate cost-aware systems that achieve business outcomes and minimize costs, allowing your business to maximize its return on investment.

Definition

There are four best practice areas for cost optimization in the cloud:
• Cost-effective resources
• Matching supply and demand
• Expenditure awareness
• Optimizing over time

As with the other pillars, there are tradeoffs to consider. For example, do you want to optimize for speed to market or for cost? In some cases, it's best to optimize for speed (going to market quickly, shipping new features, or simply meeting a deadline) rather than investing in upfront cost optimization. Design decisions are sometimes guided by haste as opposed to empirical data, as the temptation always exists to overcompensate "just in case" rather than spend time benchmarking for the most cost-optimal deployment. This often leads to drastically over-provisioned and under-optimized deployments. The following sections provide techniques and strategic guidance for the initial and ongoing cost optimization of your deployment.

Generally, serverless architectures tend to reduce costs because some of the services, such as AWS Lambda, don't cost anything while they're idle. However, following certain best practices and making tradeoffs will help you reduce the cost of these solutions even more.

Best Practices

COST 1: How do you optimize your costs?
Cost-Effective Resources

Serverless architectures are easier to manage in terms of correct resource allocation. Due to the pay-per-value pricing model and scaling based on demand, serverless effectively reduces the capacity planning effort. As covered in the operational excellence and performance pillars, optimizing your serverless application has a direct impact on the value it produces and on its cost. Because Lambda proportionally allocates CPU, network, and storage IOPS based on memory, the faster the execution, the cheaper and more valuable your function becomes, due to the 100 ms billing increment.

Matching Supply and Demand

The AWS serverless architecture is designed to scale based on demand, and as such there are no additional applicable practices to follow in this area.

Expenditure Awareness

As covered in the AWS Well-Architected Framework, the increased flexibility and agility that the cloud enables encourages innovation and fast-paced development and deployment. It eliminates the manual processes and time associated with provisioning on-premises infrastructure, including identifying hardware specifications, negotiating price quotations, managing purchase orders, scheduling shipments, and then deploying the resources.

As your serverless architecture grows, the number of Lambda functions, APIs, stages, and other assets will multiply. Most of these architectures need to be budgeted and forecasted in terms of costs and resource management, and tagging can help you here. You can allocate costs from your AWS bill to individual functions and APIs and obtain a granular view of your costs per project in AWS Cost Explorer. A good implementation is to programmatically share the same key-value tag across assets that belong to the same project and to create custom reports based on the tags that you have created. This will help you not only allocate your costs but also identify which resources belong to which projects.

Optimizing Over Time

See the AWS Well-Architected Framework whitepaper for best practices in the optimizing over time area for cost optimization that apply to serverless applications.

Logging Ingestion and Storage

AWS Lambda uses CloudWatch Logs to store the output of executions, which is used to identify and troubleshoot problems as well as to monitor the serverless application. This affects cost in the CloudWatch Logs service in two dimensions: ingestion and storage. Set appropriate logging levels and remove unnecessary logging information to optimize log ingestion. Use environment variables to control the application logging level, and sample logging in DEBUG mode, so that you have additional insight when necessary. Set log retention periods for new and existing CloudWatch Logs groups. For log archival, export the logs and choose cost-effective storage classes that best suit your needs.
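As a small illustration of the retention guidance above, the following hedged sketch applies a retention policy to Lambda log groups that do not yet have one; the 30-day value and the /aws/lambda/ prefix filter are example choices, not recommendations for every workload.

import boto3

logs = boto3.client("logs")

RETENTION_DAYS = 30  # example value; must be one of the retention periods CloudWatch Logs accepts

paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate(logGroupNamePrefix="/aws/lambda/"):
    for group in page["logGroups"]:
        # Only touch groups that would otherwise keep logs forever.
        if "retentionInDays" not in group:
            logs.put_retention_policy(
                logGroupName=group["logGroupName"],
                retentionInDays=RETENTION_DAYS,
            )
            print(f"Set {RETENTION_DAYS}-day retention on {group['logGroupName']}")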
Direct Integrations

If your Lambda function is not performing custom logic while integrating with other AWS services, chances are that it may be unnecessary. API Gateway, AWS AppSync, Step Functions, EventBridge, and Lambda Destinations can directly integrate with a number of services and provide you more value with less operational overhead. Most public serverless applications provide an API with an implementation-agnostic contract, as described in the Microservices scenario. An example scenario where a direct integration is a better fit is ingesting clickstream data through a REST API.

Figure 13: Sending data to Amazon S3 using Kinesis Data Firehose

In this scenario, API Gateway executes a Lambda function that simply ingests the incoming record into Kinesis Data Firehose, which subsequently batches records before storing them in an S3 bucket. Because no additional logic is necessary in this example, we can use an API Gateway service proxy to integrate directly with Kinesis Data Firehose.

Figure 14: Reducing the cost of sending data to Amazon S3 by implementing an AWS service proxy

With this approach, we remove the cost of Lambda and its unnecessary invocations by implementing the AWS service proxy within API Gateway. As a tradeoff, this might introduce some extra complexity if multiple shards are necessary to meet the ingestion rate. If the workload is latency sensitive, you can stream data directly to Kinesis Data Firehose, provided the client has the correct credentials, at the expense of abstraction, contract, and API features.

Figure 15: Reducing the cost of sending data to Amazon S3 by streaming directly using the Kinesis Data Firehose SDK

For scenarios where you need to connect with internal resources within your VPC or on premises, and no custom logic is required, use API Gateway private integration.

Figure 32: Amazon API Gateway private integration over Lambda in a VPC to access private resources

With this approach, API Gateway sends each incoming request to an internal Network Load Balancer that you own in your VPC, which can forward the traffic to any backend, either in the same VPC or on premises via IP address. This approach has both cost and performance benefits, as you don't need an additional hop to send requests to a private backend, with the added benefits of authorization, throttling, and caching mechanisms.

Another scenario is a fan-out pattern where Amazon SNS broadcasts messages to all of its subscribers. Without filtering, this approach requires additional application logic to filter events and avoid unnecessary Lambda invocations.

Figure 33: Amazon SNS without message attribute filtering

SNS can filter events based on message attributes and deliver each message more efficiently, only to the correct subscriber.

Figure 34: Amazon SNS with message attribute filtering

Another example is long-running processing tasks where you may need to wait for task completion before proceeding to the next step. This wait state may be implemented within the Lambda code; however, it is far more efficient either to transform the work into asynchronous processing using events or to implement the waiting state using Step Functions. For example, in the following image we poll an AWS Batch job and review its state every 30 seconds to see whether it has finished. Instead of coding this wait within the Lambda function, we implement a poll (GetJobStatus) + wait (Wait30Seconds) + decider (CheckJobStatus) loop.

Figure 16: Implementing a wait state with AWS Step Functions

Implementing a wait state with Step Functions won't incur any further cost, because the Step Functions pricing model is based on transitions between states, not on the time spent within a state.

Figure 17: Step Functions service integration synchronous wait

Depending on the integration you are waiting on, Step Functions can wait synchronously before moving to the next task, saving you an additional transition.
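The following is a hedged sketch of what the poll + wait + decider state machine shown in Figure 16 might look like, expressed as an Amazon States Language definition built as a Python dictionary; the Lambda ARNs, state names, and status values are hypothetical.

import json

# Hypothetical Lambda functions that submit a job and check its status
SUBMIT_ARN = "arn:aws:lambda:us-east-1:123456789012:function:SubmitJob"
STATUS_ARN = "arn:aws:lambda:us-east-1:123456789012:function:GetJobStatus"

definition = {
    "Comment": "Poll an AWS Batch job instead of waiting inside a Lambda function",
    "StartAt": "SubmitJob",
    "States": {
        "SubmitJob": {"Type": "Task", "Resource": SUBMIT_ARN, "Next": "Wait30Seconds"},
        "Wait30Seconds": {"Type": "Wait", "Seconds": 30, "Next": "GetJobStatus"},
        "GetJobStatus": {"Type": "Task", "Resource": STATUS_ARN, "Next": "CheckJobStatus"},
        "CheckJobStatus": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.status", "StringEquals": "FAILED", "Next": "JobFailed"},
                {"Variable": "$.status", "StringEquals": "SUCCEEDED", "Next": "JobSucceeded"},
            ],
            "Default": "Wait30Seconds",  # not finished yet: wait and poll again
        },
        "JobFailed": {"Type": "Fail", "Cause": "AWS Batch job failed"},
        "JobSucceeded": {"Type": "Succeed"},
    },
}

print(json.dumps(definition, indent=2))  # pass this to states:CreateStateMachine

Because Wait and Choice states are billed per state transition rather than per elapsed second, the 30-second polling loop adds only transitions, which is the cost behavior described above.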
Code Optimization

As covered in the performance pillar, optimizing your serverless application can effectively improve the value it produces per execution. The use of global variables to maintain connections to your data stores or other services and resources increases performance and reduces execution time, which also reduces cost. For more information, see the performance pillar section.

An example where the use of managed service features can improve the value per execution is retrieving and filtering objects from Amazon S3, since fetching large objects from Amazon S3 requires higher memory for Lambda functions.

Figure 37: Lambda function retrieving the full S3 object

In the previous diagram, we can see that when retrieving large objects from Amazon S3, we might increase the memory consumption of the Lambda function and increase the execution time (so the function can transform, iterate over, or collect the required data), even though in some cases only part of the information is needed. In the figure, this is represented with three columns in red (data not required) and one column in green (data required). Using Athena SQL queries to gather only the granular information needed for your execution reduces both the retrieval time and the size of the object on which you perform transformations.

Figure 38: Lambda with Athena object retrieval

In the next diagram, we can see that by querying Athena to get the specific data, we reduce the size of the object retrieved, and as an extra benefit we can reuse that content, since Athena saves its query results in an S3 bucket and the Lambda function can be invoked asynchronously as the results land in Amazon S3.

A similar approach can be taken with S3 Select. S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. As in the previous example with Athena, retrieving a smaller object from Amazon S3 reduces execution time and the memory used by the Lambda function. In the comparison shown in Figure 18, downloading and processing the full objects took about 200 seconds, while the S3 Select version took about 95 seconds.

Download and process all keys (about 200 seconds):

# Download and process all keys
for key in src_keys:
    response = s3_client.get_object(Bucket=src_bucket, Key=key)
    contents = response['Body'].read()
    for line in contents.split('\n')[:-1]:
        line_count += 1
        try:
            data = line.split(',')
            srcIp = data[0][:8]
            ...

Select only the IP address and keys with S3 Select (about 95 seconds):

# Select IP Address and Keys
for key in src_keys:
    response = s3_client.select_object_content(
        Bucket=src_bucket, Key=key,
        expression="SELECT SUBSTR(obj._1, 1, 8), obj._2 FROM s3object as obj")
        # abbreviated call, as shown in the original figure
    contents = response['Body'].read()
    for line in contents:
        line_count += 1
        try:
            ...

Figure 18: Lambda performance statistics using Amazon S3 vs S3 Select

Resources

Refer to the following resources to learn more about our best practices for cost optimization.

Documentation & Blogs
• CloudWatch Logs Retention 64
• Exporting CloudWatch Logs to Amazon S3 65
• Streaming CloudWatch Logs to Amazon ES 66
• Defining wait states in Step Functions state machines 67
• Coca-Cola Vending Pass State Machine Powered by Step Functions 68
• Building high-throughput genomics batch workflows on AWS 69
• Simplify your Pub/Sub Messaging with Amazon SNS Message Filtering
• S3 Select and Glacier Select
• Lambda Reference Architecture for MapReduce
• Serverless Application Repository App – Auto-set CloudWatch Logs group retention
• Ten resources every Serverless Architect should know

Whitepaper
• Optimizing Enterprise Economics with Serverless Architectures 70

Conclusion

While serverless applications take the undifferentiated heavy lifting off developers, there are still important principles to apply. For reliability, by regularly testing failure pathways you will
be more likely to catch errors before they reach production For performance starting backward from customer expectation will allow you to design for optimal experience There are a number of AWS tools to help optimize performance as well For c ost optimization you can reduc e unnecessary waste within your serverless application by sizing resources in accordance with traffic demand and improve value by optimizing your application For operations your architecture should strive t oward automation in responding to events Finally a secure application will protect your organization’s sensitive information assets and meet any compliance requirements at every layer The landscape of serverless applications is continuing to evolve wi th the ecosystem of tooling and processes growing and maturing As this occurs we will continue to update this paper to help you ensure that your serverless application s are well architected Contributors The following individuals and organizations contributed to this document: • Adam Westrich: Sr Solutions Architect Amazon Web Services • Mark Bunch: Enterprise Solutions Architect Amazon Web Services • Ignacio Garcia Alonso: Solutions Architect Amazon Web Services ArchivedAmazon Web Services Serverless Application Lens 73 • Heitor Lessa: Principal S erverless Lead Well Architected Amazon Web Services • Philip Fitzsimons: Sr Manager Well Architected Amazon Web Services • Dave Walker: Principal Specialist Solutions Architect Amazon Web Services • Richard Threlkeld: Sr Product Manager Mobile Amazon Web Services • Julian Hambleton Jones: Sr Solutions Architect Amazon Web Services Further Reading For additional information see the following: • AWS Well Architected Framework 71 Document Revisions Date Description December 2019 Updates throughout for new features and evolution of best practice November 2018 New scenarios for Alexa and Mobile and updates throughout to reflect new features and evolution of best practice November 2017 Initial publication 1 https://awsamazoncom/well architected 2 http://d0aws staticcom/whitepapers/architecture/AWS_Well Architected_Frameworkpdf 3 https://githubcom/alexcasalboni/aws lambda power tuning 4 http://docsawsamazoncom/amazondynamodb/latest/developerguide/BestPracticesh tml Notes ArchivedAmazon Web Services Serverless Application Lens 74 5 http://docsawsamazoncom/elasticsearch service/latest/developerguide/es managedomainshtml 6 https://wwwelasticco/guide/en/elasticsearch/guide/current/scalehtml 7 http://docsawsamazoncom/streams/latest/dev/kinesis record processor scalinghtml 8 https://d0awsstaticcom/whitepapers/whitepaper streaming datasolutions onaws withamazon kinesispdf 9 http://do csawsamazoncom/kinesis/latest/APIReference/API_PutRecordshtml 10 http://docsawsamazoncom/streams/latest/dev/kinesis record processor duplicatesht ml 11 http://docsawsamazoncom/lambda/latest/dg/best practiceshtml#stream events 12 http://docsawsamazoncom/apigateway/latest/developerguide/api gateway api usage planshtml 13 http://docsawsamazonc om/apigateway/latest/developerguide/stage variableshtml 14 http://docsawsamazoncom/lambda/latest/dg/env_variableshtml 15 https://githubcom/awslabs/serverless application model 16 https://awsamazoncom/blogs/aws/latency distribution graph inawsxray/ 17 http://docsawsamazoncom/lambda/latest/dg/lambda xrayhtml 18 http://docsawsamazoncom/systems manager/latest/userguide/systems manager paramstorehtml 19 https://awsamazoncom/blogs/compute/continuous deployment forserverless applications/ 20 https://githubcom/awslabs/aws serverless 
samfarm 21 https://d0awsstaticcom/whitepapers/DevOps/practicing continuous integration continuous delivery onAWSpdf 22 https://awsamazoncom/serverless/developer tools/ 23 http://docsawsamazoncom/lambda/latest/dg/with s3example create iamrolehtml 24 http://docsawsamazoncom/apigateway/latest/developerguide/api gateway method request validationhtml ArchivedAmazon Web Services Serverless Application Lens 75 25 http://docsawsamazoncom/apigateway/latest/developerguide/use custom authorizerhtml 26 https://awsamazoncom/blogs/compute/secure apiaccess withamazon cognito federated identities amazon cognito userpools andamazon apigateway/ 27 http://docsawsamazoncom/lambda/latest/dg/vpchtml 28 https://awsamazoncom/pt/articles/using squid proxy instances forwebservice access inamazon vpcanother example withawscodedeploy andamazon cloudwatch/ 29 https://wwwowasporg/images/0/08/OWASP_SCP_Quick_Reference_Guide_v2pdf 30 https://d0awsstaticcom/whitepapers/Security/AWS_Security_Best_Practicespdf 31 https://wwwtwistlockcom/products/serverless security/ 32 https://snykio/ 33 https: //wwwowasporg/indexphp/OWASP_Dependency_Check 34 http://theburningmonkcom/2017/07/applying thesaga pattern withawslambda and stepfunctio ns/ 35 http://docsawsamazoncom/lambda/latest/dg/limitshtml 36 http://docsawsamazoncom/apigateway/latest/developerguide/limitshtml#api gateway limits 37 http://docsawsamazoncom/streams/latest/dev/service sizes andlimitshtml 38 http://docsawsamazoncom/amazondynamodb/lat est/developerguide/Limitshtml 39 http://docsawsamazoncom/step functions/latest/dg/limitshtml 40 https://awsamazoncom/blogs/compute/error handling patterns inamazon api gateway andawslambda/ 41 https://awsamazoncom/bl ogs/compute/serverless testing withawslambda/ 42 http://docsawsamazoncom/lambda/latest/dg/monitoring functions logshtml 43 http://docsawsamazoncom/lambda/latest/dg/versioning aliaseshtml 44 http://docsawsamazoncom/apigateway/latest/developerguide/stageshtml 45 http://docsawsamazoncom/general/lat est/gr/api retrieshtml ArchivedAmazon Web Services Serverless Application Lens 76 46 http://docsawsamazoncom/step functions/latest/dg/tutorial handling error conditionshtml#using state machine error conditions step4 47 http://docsawsamazoncom/xray/latest/devguide/xray services lambdahtml 48 http://docsawsamazoncom/lambda/latest/dg/dlqhtml 49 https://awsamazoncom/blogs/compute/e rrorhandling patterns inamazon api gateway andawslambda/ 50 http://docsawsamazoncom/step functions/latest/dg/amazon states language wait state html 51 http://microservicesio/patterns/data/sagahtml 52 http://theburningmon kcom/2017/07/applying thesaga pattern withawslambda and stepfunctions/ 53 https://d0awsstaticcom/whitepapers/microservices onawspdf 54 http://docsawsamazoncom/lambda/latest/dg/best practiceshtml 55 https://awsamazoncom/lambda/faqs/ 56 http://docsawsamazoncom/lambda/latest/dg/best practiceshtml 57 http://docsawsamazoncom /lambda/latest/dg/lambda introductionhtml 58 https://awsamazoncom/blogs/compute/container reuse inlambda/ 59 http://docsawsamazoncom/lambda/latest/dg/vpchtml 60 http://docsawsamazoncom/apigateway/latest/developerguide/api gateway cachinghtml 61 http://docsawsamazoncom/amazondynamodb/latest/developerguide/GSIhtml 62 https://awsamazoncom/dynamodb/dax/ 63 http://docsawsamazoncom/streams/latest/dev/amazon kinesis streamshtml 64 http://docsawsamazoncom/AmazonCloudWatch/latest/logs/SettingLogRetentionhtm l 65 http://docsawsamazoncom/AmazonCloudWatch/latest/logs/S3ExportTasksConsole html ArchivedAmazon Web Services Serverless 
Application Lens 77 66 http://docsawsamazoncom/AmazonCloudWatch/latest/logs/CWL_ES_Streamhtml 67 http://docsawsamazoncom/step functions/latest/dg/ama zonstates language wait statehtml 68 https://awsamazoncom/blogs/aws/things gobetter withstepfunctions/ 69 https://awsamazoncom/blogs/compute/buildin ghighthroughput genomics batch workflows onawsworkflow layer part4of4/ 70 https://d0awsstaticcom/whitepapers/optimizing enterprise economics serverless architecturespdf 71 https://awsamazoncom/well architected
|
General
|
consultant
|
Best Practices
|
AWS_WellArchitected_Framework
|
ArchivedAWS WellArchitected Framework July 2020 This whitepaper describes the AWS WellArchitected Framework It provides guidance to help cus tomers apply best practices in the design delivery and maintenance of AWS environments We address general design principles as well as specific best practices and guidance in five conceptual areas that we define as the pillars of the WellArchitected FrameworkThis paper has been archived The latest version is available at: https://docsawsamazoncom/wellarchitected/latest/framework/welcomehtmlArchivedAWS WellArchitected Framework Notices Customers are responsible for making their own independent assessment of the in formation in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Copyright © 2020 Amazon Web Services Inc or its affiliatesArchivedAWS WellArchitected Framework Introduction 1 Definitions 2 On Architecture 3 General Design Principles 5 The Five Pillars of the Framework 6 Operational Excellence 6 Security 15 Reliability 22 Performance Efficiency 28 Cost Optimization 36 The Review Process 43 Conclusion 45 Contributors 46 Further Reading 47 Document Revisions 48 Appendix: Questions and Best Practices 49 Operational Excellence 49 Security 60 Reliability 69 Performance Efficiency 80 Cost Optimization 88 iiiArchivedAWS WellArchitected Framework Introduction The AWS WellArchitected Framework helps you understand the pros and cons of de cisions you make while building systems on AWS By using the Framework you will learn architectural best practices for designing and operating reliable secure effi cient and costeffective systems in the cloud It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement The process for reviewing an architecture is a constructive conversation about archi tectural decisions and is not an audit mechanism We believe that having wellarchi tected systems greatly increases the likelihood of business success AWS Solutions Architects have years of experience architecting solutions across a wide variety of business verticals and use cases We have helped design and review thou sands of customers’ architectures on AWS From this experience we have identified best practices and core strategies for architecting systems in the cloud The AWS WellArchitected Framework documents a set of foundational questions that allow you to understand if a specific architecture aligns well with cloud best practices The framework provides a consistent approach to evaluating systems against the qualities you expect from modern cloudbased systems and the remedi ation that would be required to achieve those qualities As AWS continues to evolve and we continue to learn more from working with our customers we will continue to refine the definition of wellarchitected This framework is intended for those in technology roles such as chief technology of ficers (CTOs) architects developers and operations team members It describes AWS best practices and 
strategies to use when designing and operating a cloud workload and provides links to further implementation details and architectural patterns For more information see the AWS WellArchitected homepage AWS also provides a service for reviewing your workloads at no charge The AWS WellArchitected Tool (AWS WA Tool) is a service in the cloud that provides a consis tent process for you to review and measure your architecture using the AWS WellAr chitected Framework The AWS WA Tool provides recommendations for making your workloads more reliable secure efficient and costeffective To help you apply best practices we have created AWS WellArchitected Labs which provides you with a repository of code and documentation to give you handson ex perience implementing best practices We also have teamed up with select AWS Part ner Network (APN) Partners who are members of the AWS WellArchitected Partner program These APN Partners have deep AWS knowledge and can help you review and improve your workloads 1ArchivedAWS WellArchitected Framework Definitions Every day experts at AWS assist customers in architecting systems to take advantage of best practices in the cloud We work with you on making architectural tradeoffs as your designs evolve As you deploy these systems into live environments we learn how well these systems perform and the consequences of those tradeoffs Based on what we have learned we have created the AWS WellArchitected Frame work which provides a consistent set of best practices for customers and partners to evaluate architectures and provides a set of questions you can use to evaluate how well an architecture is aligned to AWS best practices The AWS WellArchitected Framework is based on five pillars — operational excel lence security reliability performance efficiency and cost optimization Table 1 The pillars of the AWS WellArchitected Framework Name Description Operational Excellence The ability to support development and run workloads effectively gain insight into their operations and to continuously improve supporting processes and proce dures to deliver business value Security The security pillar encompasses the ability to protect data systems and assets to take advantage of cloud technologies to improve your security Reliability The reliability pillar encompasses the ability of a work load to perform its intended function correctly and con sistently when it’s expected to This includes the ability to operate and test the workload through its total life cycle This paper provides indepth best practice guid ance for implementing reliable workloads on AWS Performance Efficiency The ability to use computing resources efficiently to meet system requirements and to maintain that effi ciency as demand changes and technologies evolve Cost Optimization The ability to run systems to deliver business value at the lowest price point In the AWS WellArchitected Framework we use these terms: • A component is the code configuration and AWS Resources that together deliver against a requirement A component is often the unit of technical ownership and is decoupled from other components 2ArchivedAWS WellArchitected Framework • The term workload is used to identify a set of components that together deliver business value A workload is usually the level of detail that business and technolo gy leaders communicate about • We think about architecture as being how components work together in a work load How components communicate and interact is often the focus of architecture diagrams •Milestones mark key 
changes in your architecture as it evolves throughout the product lifecycle (design testing go live and in production) • Within an organization the technology portfolio is the collection of workloads that are required for the business to operate When architecting workloads you make tradeoffs between pillars based on your business context These business decisions can drive your engineering priorities You might optimize to reduce cost at the expense of reliability in development environ ments or for missioncritical solutions you might optimize reliability with increased costs In ecommerce solutions performance can affect revenue and customer propen sity to buy Security and operational excellence are generally not tradedoff against the other pillars On Architecture In onpremises environments customers often have a central team for technology ar chitecture that acts as an overlay to other product or feature teams to ensure they are following best practice Technology architecture teams typically include a set of roles such as: Technical Architect (infrastructure) Solutions Architect (software) Data Ar chitect Networking Architect and Security Architect Often these teams use TOGAF or the Zachman Framework as part of an enterprise architecture capability At AWS we prefer to distribute capabilities into teams rather than having a central ized team with that capability There are risks when you choose to distribute decision making authority for example ensure that teams are meeting internal standards We mitigate these risks in two ways First we have practices 1 that focus on enabling each team to have that capability and we put in place experts who ensure that teams raise the bar on the standards they need to meet Second we implement mechanisms 2 that carry out automated checks to ensure standards are being met This distributed approach is supported by the Amazon leadership principles and establishes a culture 1Ways of doing things process standards and accepted norms 2 “Good intentions never work you need good mechanisms to make anything happen” Jeff Bezos This means replacing humans best efforts with mechanisms (often automated) that check for compliance with rules or process 3ArchivedAWS WellArchitected Framework across all roles that works back 3 from the customer Customerobsessed teams build products in response to a customer need For architecture this means that we expect every team to have the capability to cre ate architectures and to follow best practices To help new teams gain these capa bilities or existing teams to raise their bar we enable access to a virtual communi ty of principal engineers who can review their designs and help them understand what AWS best practices are The principal engineering community works to make best practices visible and accessible One way they do this for example is through lunchtime talks that focus on applying best practices to real examples These talks are recorded and can be used as part of onboarding materials for new team members AWS best practices emerge from our experience running thousands of systems at in ternet scale We prefer to use data to define best practice but we also use subject matter experts like principal engineers to set them As principal engineers see new best practices emerge they work as a community to ensure that teams follow them In time these best practices are formalized into our internal review processes as well as into mechanisms that enforce compliance The WellArchitected Framework is the customerfacing implementation of our 
internal review process where we have cod ified our principal engineering thinking across field roles like Solutions Architecture and internal engineering teams The WellArchitected Framework is a scalable mecha nism that lets you take advantage of these learnings By following the approach of a principal engineering community with distributed ownership of architecture we believe that a WellArchitected enterprise architecture can emerge that is driven by customer need Technology leaders (such as a CTOs or development managers) carrying out WellArchitected reviews across all your work loads will allow you to better understand the risks in your technology portfolio Using this approach you can identify themes across teams that your organization could ad dress by mechanisms training or lunchtime talks where your principal engineers can share their thinking on specific areas with multiple teams 3Working backward is a fundamental part of our innovation process We start with the customer and what they want and let that define and guide our efforts 4ArchivedAWS WellArchitected Framework General Design Principles The WellArchitected Framework identifies a set of general design principles to facili tate good design in the cloud: •Stop guessing your capacity needs : If you make a poor capacity decision when de ploying a workload you might end up sitting on expensive idle resources or deal ing with the performance implications of limited capacity With cloud computing these problems can go away You can use as much or as little capacity as you need and scale up and down automatically •Test systems at production scale : In the cloud you can create a productionscale test environment on demand complete your testing and then decommission the resources Because you only pay for the test environment when it's running you can simulate your live environment for a fraction of the cost of testing on premises •Automate to make architectural experimentation easier: Automation allows you to create and replicate your workloads at low cost and avoid the expense of manu al effort You can track changes to your automation audit the impact and revert to previous parameters when necessary •Allow for evolutionary architectures: Allow for evolutionary architectures In a tra ditional environment architectural decisions are often implemented as static one time events with a few major versions of a system during its lifetime As a business and its context continue to evolve these initial decisions might hinder the system's ability to deliver changing business requirements In the cloud the capability to au tomate and test on demand lowers the risk of impact from design changes This al lows systems to evolve over time so that businesses can take advantage of innova tions as a standard practice •Drive architectures using data : In the cloud you can collect data on how your ar chitectural choices affect the behavior of your workload This lets you make fact based decisions on how to improve your workload Your cloud infrastructure is code so you can use that data to inform your architecture choices and improve ments over time •Improve through game days : Test how your architecture and processes perform by regularly scheduling game days to simulate events in production This will help you understand where improvements can be made and can help develop organizational experience in dealing with events 5ArchivedAWS WellArchitected Framework The Five Pillars of the Framework Creating a software system is a lot like constructing a building If 
the foundation is not solid structural problems can undermine the integrity and function of the build ing When architecting technology solutions if you neglect the five pillars of opera tional excellence security reliability performance efficiency and cost optimization it can become challenging to build a system that delivers on your expectations and re quirements Incorporating these pillars into your architecture will help you produce stable and efficient systems This will allow you to focus on the other aspects of de sign such as functional requirements Operational Excellence The Operational Excellence pillar includes the ability to support development and run workloads effectively gain insight into their operations and to continuously improve supporting processes and procedures to deliver business value The operational excellence pillar provides an overview of design principles best prac tices and questions You can find prescriptive guidance on implementation in the Op erational Excellence Pillar whitepaper Design Principles There are five design principles for operational excellence in the cloud: •Perform operations as code : In the cloud you can apply the same engineering dis cipline that you use for application code to your entire environment You can define your entire workload (applications infrastructure) as code and update it with code You can implement your operations procedures as code and automate their execu tion by triggering them in response to events By performing operations as code you limit human error and enable consistent responses to events •Make frequent small reversible changes : Design workloads to allow components to be updated regularly Make changes in small increments that can be reversed if they fail (without affecting customers when possible) •Refine operations procedures frequently: As you use operations procedures look for opportunities to improve them As you evolve your workload evolve your proce dures appropriately Set up regular game days to review and validate that all proce dures are effective and that teams are familiar with them •Anticipate failure : Perform “premortem” exercises to identify potential sources of failure so that they can be removed or mitigated Test your failure scenarios and validate your understanding of their impact Test your response procedures to en 6ArchivedAWS WellArchitected Framework sure that they are effective and that teams are familiar with their execution Set up regular game days to test workloads and team responses to simulated events •Learn from all operational failures: Drive improvement through lessons learned from all operational events and failures Share what is learned across teams and through the entire organization Definition There are four best practice areas for operational excellence in the cloud: •Organization •Prepare •Operate •Evolve Your organization’s leadership defines business objectives Your organization must understand requirements and priorities and use these to organize and conduct work to support the achievement of business outcomes Your workload must emit the in formation necessary to support it Implementing services to enable integration de ployment and delivery of your workload will enable an increased flow of beneficial changes into production by automating repetitive processes There may be risks inherent in the operation of your workload You must understand those risks and make an informed decision to enter production Your teams must be able to support your workload Business and operational 
metrics derived from de sired business outcomes will enable you to understand the health of your workload your operations activities and respond to incidents Your priorities will change as your business needs and business environment changes Use these as a feedback loop to continually drive improvement for your organization and the operation of your work load Best Practices Organization Your teams need to have a shared understanding of your entire workload their role in it and shared business goals to set the priorities that will enable business success Welldefined priorities will maximize the benefits of your efforts Evaluate internal and external customer needs involving key stakeholders including business devel opment and operations teams to determine where to focus efforts Evaluating cus tomer needs will ensure that you have a thorough understanding of the support that 7ArchivedAWS WellArchitected Framework is required to achieve business outcomes Ensure that you are aware of guidelines or obligations defined by your organizational governance and external factors such as regulatory compliance requirements and industry standards that may mandate or emphasize specific focus Validate that you have mechanisms to identify changes to internal governance and external compliance requirements If no requirements are identified ensure that you have applied due diligence to this determination Review your priorities regularly so that they can be updated as needs change Evaluate threats to the business (for example business risk and liabilities and infor mation security threats) and maintain this information in a risk registry Evaluate the impact of risks and tradeoffs between competing interests or alternative approaches For example accelerating speed to market for new features may be emphasized over cost optimization or you may choose a relational database for nonrelational data to simplify the effort to migrate a system without refactoring Manage benefits and risks to make informed decisions when determining where to focus efforts Some risks or choices may be acceptable for a time it may be possible to mitigate associated risks or it may become unacceptable to allow a risk to remain in which case you will take action to address the risk Your teams must understand their part in achieving business outcomes Teams need to understand their roles in the success of other teams the role of other teams in their success and have shared goals Understanding responsibility ownership how decisions are made and who has authority to make decisions will help focus efforts and maximize the benefits from your teams The needs of a team will be shaped by the customer they support their organization the makeup of the team and the char acteristics of their workload It's unreasonable to expect a single operating model to be able to support all teams and their workloads in your organization Ensure that there are identified owners for each application workload platform and infrastructure component and that each process and procedure has an identified owner responsible for its definition and owners responsible for their performance Having understanding of the business value of each component process and pro cedure of why those resources are in place or activities are performed and why that ownership exists will inform the actions of your team members Clearly define the re sponsibilities of team members so that they may act appropriately and have mech anisms to identify responsibility and ownership Have mechanisms to 
request additions, changes, and exceptions so that you do not constrain innovation. Define agreements between teams describing how they work together to support each other and your business outcomes.
Provide support for your team members so that they can be more effective in taking action and supporting your business outcomes. Engaged senior leadership should set expectations and measure success. They should be the sponsor, advocate, and driver for the adoption of best practices and evolution of the organization. Empower team members to take action when outcomes are at risk to minimize impact, and encourage them to escalate to decision makers and stakeholders when they believe there is a risk, so that it can be addressed and incidents avoided. Provide timely, clear, and actionable communications of known risks and planned events so that team members can take timely and appropriate action.
Encourage experimentation to accelerate learning and keep team members interested and engaged. Teams must grow their skill sets to adopt new technologies and to support changes in demand and responsibilities. Support and encourage this by providing dedicated, structured time for learning. Ensure that your team members have the resources, both tools and team members, to be successful and scale to support your business outcomes. Leverage cross-organizational diversity to seek multiple unique perspectives. Use this perspective to increase innovation, challenge your assumptions, and reduce the risk of confirmation bias. Grow inclusion, diversity, and accessibility within your teams to gain beneficial perspectives.
If there are external regulatory or compliance requirements that apply to your organization, you should use the resources provided by AWS Cloud Compliance to help educate your teams so that they can determine the impact on your priorities. The Well-Architected Framework emphasizes learning, measuring, and improving. It provides a consistent approach for you to evaluate architectures and implement designs that will scale over time. AWS provides the AWS Well-Architected Tool to help you review your approach prior to development, the state of your workloads prior to production, and the state of your workloads in production. You can compare workloads to the latest AWS architectural best practices, monitor their overall status, and gain insight into potential risks. AWS Trusted Advisor is a tool that provides access to a core set of checks that recommend optimizations that may help shape your priorities. Business and Enterprise Support customers receive access to additional checks focusing on security, reliability, performance, and cost optimization that can further help shape their priorities.
AWS can help you educate your teams about AWS and its services to increase their understanding of how their choices can have an impact on your workload. You should use the resources provided by AWS Support (AWS Knowledge Center, AWS Discussion Forums, and AWS Support Center) and AWS Documentation to educate your teams. Reach out to AWS Support through AWS Support Center for help with your AWS questions. AWS also shares best practices and patterns that we have learned through the operation of AWS in The Amazon Builders' Library. A wide variety of other useful information is available through the AWS Blog and The Official AWS Podcast. AWS Training and Certification provides some free training through self-paced digital courses on AWS fundamentals. You can also register for instructor-led training to further support the
development of your teams’ AWS skills You should use tools or services that enable you to centrally govern your environ ments across accounts such as AWS Organizations to help manage your operating models Services like AWS Control Tower expand this management capability by en abling you to define blueprints (supporting your operating models) for the setup of accounts apply ongoing governance using AWS Organizations and automate provi 9ArchivedAWS WellArchitected Framework sioning of new accounts Managed Services providers such as AWS Managed Services AWS Managed Services Partners or Managed Services Providers in the AWS Partner Network provide expertise implementing cloud environments and support your se curity and compliance requirements and business goals Adding Managed Services to your operating model can save you time and resources and lets you keep your inter nal teams lean and focused on strategic outcomes that will differentiate your busi ness rather than developing new skills and capabilities The following questions focus on these considerations for operational excellence (For a list of operational excellence questions and best practices see the Appendix) OPS 1: How do you determine what your priorities are? Everyone needs to understand their part in enabling business success Have shared goals in order to set priorities for resources This will maximize the benefits of your efforts OPS 2: How do you structure your organization to support your business outcomes? Your teams must understand their part in achieving business outcomes Teams need to un derstand their roles in the success of other teams the role of other teams in their success and have shared goals Understanding responsibility ownership how decisions are made and who has authority to make decisions will help focus efforts and maximize the benefits from your teams OPS 3: How does your organizational culture support your business outcomes? 
Provide support for your team members so that they can be more effective in taking action and supporting your business outcome.
You might find that you want to emphasize a small subset of your priorities at some point in time. Use a balanced approach over the long term to ensure the development of needed capabilities and management of risk. Review your priorities regularly and update them as needs change. When responsibility and ownership are undefined or unknown, you are at risk of both not performing necessary action in a timely fashion and of redundant and potentially conflicting efforts emerging to address those needs. Organizational culture has a direct impact on team member job satisfaction and retention. Enable the engagement and capabilities of your team members to enable the success of your business. Experimentation is required for innovation to happen and turn ideas into outcomes. Recognize that an undesired result is a successful experiment that has identified a path that will not lead to success.
Prepare
To prepare for operational excellence, you have to understand your workloads and their expected behaviors. You will then be able to design them to provide insight into their status and build the procedures to support them.
Design your workload so that it provides the information necessary for you to understand its internal state (for example, metrics, logs, events, and traces) across all components, in support of observability and investigating issues. Iterate to develop the telemetry necessary to monitor the health of your workload, identify when outcomes are at risk, and enable effective responses. When instrumenting your workload, capture a broad set of information to enable situational awareness (for example, changes in state, user activity, privilege access, utilization counters), knowing that you can use filters to select the most useful information over time.
Adopt approaches that improve the flow of changes into production and that enable refactoring, fast feedback on quality, and bug fixing. These accelerate beneficial changes entering production, limit issues deployed, and enable rapid identification and remediation of issues introduced through deployment activities or discovered in your environments.
Adopt approaches that provide fast feedback on quality and enable rapid recovery from changes that do not have desired outcomes. Using these practices mitigates the impact of issues introduced through the deployment of changes. Plan for unsuccessful changes so that you are able to respond faster if necessary, and test and validate the changes you make. Be aware of planned activities in your environments so that you can manage the risk of changes impacting planned activities. Emphasize frequent, small, reversible changes to limit the scope of change. This results in easier troubleshooting and faster remediation, with the option to roll back a change. It also means you are able to get the benefit of valuable changes more frequently.
Evaluate the operational readiness of your workload, processes, procedures, and personnel to understand the operational risks related to your workload. You should use a consistent process (including manual or automated checklists) to know when you are ready to go live with your workload or a change. This will also enable you to find any areas that you need to make plans to address. Have runbooks that document your routine activities and playbooks that guide your processes for issue resolution. Understand the benefits and risks to make informed
decisions to allow changes to enter production AWS enables you to view your entire workload (applications infrastructure policy governance and operations) as code This means you can apply the same engineering discipline that you use for application code to every element of your stack and share these across teams or organizations to magnify the benefits of development efforts Use operations as code in the cloud and the ability to safely experiment to develop your workload your operations procedures and practice failure Using AWS CloudFor mation enables you to have consistent templated sandbox development test and production environments with increasing levels of operations control 11ArchivedAWS WellArchitected Framework The following questions focus on these considerations for operational excellence OPS 4: How do you design your workload so that you can understand its state? Design your workload so that it provides the information necessary across all components (for example metrics logs and traces) for you to understand its internal state This enables you to provide effective responses when appropriate OPS 5: How do you reduce defects ease remediation and improve flow into production? Adopt approaches that improve flow of changes into production that enable refactoring fast feedback on quality and bug fixing These accelerate beneficial changes entering pro duction limit issues deployed and enable rapid identification and remediation of issues in troduced through deployment activities OPS 6: How do you mitigate deployment risks? Adopt approaches that provide fast feedback on quality and enable rapid recovery from changes that do not have desired outcomes Using these practices mitigates the impact of is sues introduced through the deployment of changes OPS 7: How do you know that you are ready to support a workload? 
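As one illustration of automating part of a readiness checklist, the following minimal sketch in Python with the AWS SDK (boto3) verifies that a workload's resources carry a required ownership tag and that at least one CloudWatch alarm watches the workload. The tag key and value ("workload" / "retail-api"), the "owner" tag, and the alarm name prefix are hypothetical conventions for this example only, not requirements of the framework.

import boto3

# Hypothetical conventions: resources are tagged workload=retail-api
# and alarms for this workload are named "retail-api-*".
WORKLOAD_TAG_KEY = "workload"
WORKLOAD_TAG_VALUE = "retail-api"
ALARM_PREFIX = "retail-api-"

tagging = boto3.client("resourcegroupstaggingapi")
cloudwatch = boto3.client("cloudwatch")

# 1. Every resource belonging to the workload should carry an "owner" tag.
resources = tagging.get_resources(
    TagFilters=[{"Key": WORKLOAD_TAG_KEY, "Values": [WORKLOAD_TAG_VALUE]}]
)["ResourceTagMappingList"]
missing_owner = [
    r["ResourceARN"]
    for r in resources
    if not any(t["Key"] == "owner" for t in r["Tags"])
]

# 2. The workload should have at least one alarm defined for it.
alarms = cloudwatch.describe_alarms(AlarmNamePrefix=ALARM_PREFIX)["MetricAlarms"]

print(f"Resources found: {len(resources)}")
print(f"Resources missing an 'owner' tag: {len(missing_owner)}")
print(f"Alarms defined: {len(alarms)}")
if missing_owner or not alarms:
    print("Readiness check failed; review before go-live.")

A real readiness review would cover far more (runbooks, rollback plans, on-call coverage); the point of the sketch is that such checks can be expressed and run as code rather than tracked by hand.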
Evaluate the operational readiness of your workload processes and procedures and person nel to understand the operational risks related to your workload Invest in implementing operations activities as code to maximize the productivity of operations personnel minimize error rates and enable automated responses Use “premortems” to anticipate failure and create procedures where appropriate Apply metadata using Resource Tags and AWS Resource Groups following a consistent tag ging strategy to enable identification of your resources Tag your resources for orga nization cost accounting access controls and targeting the execution of automated operations activities Adopt deployment practices that take advantage of the elastic ity of the cloud to facilitate development activities and predeployment of systems for faster implementations When you make changes to the checklists you use to eval uate your workloads plan what you will do with live systems that no longer comply Operate Successful operation of a workload is measured by the achievement of business and customer outcomes Define expected outcomes determine how success will be mea sured and identify metrics that will be used in those calculations to determine if your workload and operations are successful Operational health includes both the health of the workload and the health and success of the operations activities performed in support of the workload (for example deployment and incident response) Establish metrics baselines for improvement investigation and intervention collect and an alyze your metrics and then validate your understanding of operations success and how it changes over time Use collected metrics to determine if you are satisfying cus tomer and business needs and identify areas for improvement 12ArchivedAWS WellArchitected Framework Efficient and effective management of operational events is required to achieve op erational excellence This applies to both planned and unplanned operational events Use established runbooks for wellunderstood events and use playbooks to aid in investigation and resolution of issues Prioritize responses to events based on their business and customer impact Ensure that if an alert is raised in response to an event there is an associated process to be executed with a specifically identified owner Define in advance the personnel required to resolve an event and include es calation triggers to engage additional personnel as it becomes necessary based on urgency and impact Identify and engage individuals with the authority to make a de cision on courses of action where there will be a business impact from an event re sponse not previously addressed Communicate the operational status of workloads through dashboards and notifica tions that are tailored to the target audience (for example customer business devel opers operations) so that they may take appropriate action so that their expectations are managed and so that they are informed when normal operations resume In AWS you can generate dashboard views of your metrics collected from workloads and natively from AWS You can leverage CloudWatch or thirdparty applications to aggregate and present business workload and operations level views of opera tions activities AWS provides workload insights through logging capabilities including AWS XRay CloudWatch CloudTrail and VPC Flow Logs enabling the identification of workload issues in support of root cause analysis and remediation The following questions focus on these considerations for operational 
excellence OPS 8: How do you understand the health of your workload? Define capture and analyze workload metrics to gain visibility to workload events so that you can take appropriate action OPS 9: How do you understand the health of your operations? Define capture and analyze operations metrics to gain visibility to operations events so that you can take appropriate action OPS 10: How do you manage workload and operations events? Prepare and validate procedures for responding to events to minimize their disruption to your workload All of the metrics you collect should be aligned to a business need and the outcomes they support Develop scripted responses to wellunderstood events and automate their performance in response to recognizing the event Evolve You must learn share and continuously improve to sustain operational excellence Dedicate work cycles to making continuous incremental improvements Perform post 13ArchivedAWS WellArchitected Framework incident analysis of all customer impacting events Identify the contributing factors and preventative action to limit or prevent recurrence Communicate contributing factors with affected communities as appropriate Regularly evaluate and prioritize opportunities for improvement (for example feature requests issue remediation and compliance requirements) including both the workload and operations procedures Include feedback loops within your procedures to rapidly identify areas for improve ment and capture learnings from the execution of operations Share lessons learned across teams to share the benefits of those lessons Analyze trends within lessons learned and perform crossteam retrospective analysis of op erations metrics to identify opportunities and methods for improvement Implement changes intended to bring about improvement and evaluate the results to determine success On AWS you can export your log data to Amazon S3 or send logs directly to Amazon S3 for longterm storage Using AWS Glue you can discover and prepare your log da ta in Amazon S3 for analytics and store associated metadata in the AWS Glue Data Catalog Amazon Athena through its native integration with AWS Glue can then be used to analyze your log data querying it using standard SQL Using a business intel ligence tool like Amazon QuickSight you can visualize explore and analyze your da ta Discovering trends and events of interest that may drive improvement The following questions focus on these considerations for operational excellence OPS 11: How do you evolve operations? 
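To make the log-analysis loop described above concrete, the following minimal sketch in Python with the AWS SDK (boto3) submits a standard SQL query to Amazon Athena against a log table already cataloged by AWS Glue and prints the results. The database name ("operations_logs"), table name ("app_access_logs"), and S3 output location are hypothetical placeholders that would need to match your own Data Catalog and bucket.

import time
import boto3

athena = boto3.client("athena")

QUERY = """
    SELECT status, COUNT(*) AS occurrences
    FROM app_access_logs
    WHERE status >= 500
    GROUP BY status
    ORDER BY occurrences DESC
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "operations_logs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (a sketch; production code would bound retries).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row is the column header
        status, count = (field.get("VarCharValue", "") for field in row["Data"])
        print(f"HTTP {status}: {count} occurrences")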
Dedicate time and resources for continuous incremental improvement to evolve the effectiveness and efficiency of your operations.
Successful evolution of operations is founded in: frequent small improvements; providing safe environments and time to experiment, develop, and test improvements; and environments in which learning from failures is encouraged. Operations support for sandbox, development, test, and production environments, with increasing levels of operational controls, facilitates development and increases the predictability of successful results from changes deployed into production.
Resources
Refer to the following resources to learn more about our best practices for Operational Excellence.
Documentation
•DevOps and AWS
Whitepaper
•Operational Excellence Pillar
Video
•DevOps at Amazon
Security
The Security pillar encompasses the ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security.
The security pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.
Design Principles
There are seven design principles for security in the cloud:
•Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources. Centralize identity management, and aim to eliminate reliance on long-term static credentials.
•Enable traceability: Monitor, alert, and audit actions and changes to your environment in real time. Integrate log and metric collection with systems to automatically investigate and take action.
•Apply security at all layers: Apply a defense in depth approach with multiple security controls. Apply to all layers (for example, edge of network, VPC, load balancing, every instance and compute service, operating system, application, and code).
•Automate security best practices: Automated software-based security mechanisms improve your ability to securely scale more rapidly and cost-effectively. Create secure architectures, including the implementation of controls that are defined and managed as code in version-controlled templates.
•Protect data in transit and at rest: Classify your data into sensitivity levels and use mechanisms, such as encryption, tokenization, and access control, where appropriate.
•Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data. This reduces the risk of mishandling or modification and human error when handling sensitive data.
•Prepare for security events: Prepare for an incident by having incident management and investigation policy and processes that align to your organizational requirements. Run incident response simulations and use tools with automation to increase your speed for detection, investigation, and recovery.
Definition
There are six best practice areas for security in the cloud:
•Security
•Identity and Access Management
•Detection
•Infrastructure Protection
•Data Protection
•Incident Response
Before you architect any workload, you need to put in place practices that influence security. You will want to control who can do what. In addition, you want to be able to identify security incidents, protect your systems and services, and maintain the confidentiality and integrity of data through data protection. You should
have a wellde fined and practiced process for responding to security incidents These tools and tech niques are important because they support objectives such as preventing financial loss or complying with regulatory obligations The AWS Shared Responsibility Model enables organizations that adopt the cloud to achieve their security and compliance goals Because AWS physically secures the infra structure that supports our cloud services as an AWS customer you can focus on us ing services to accomplish your goals The AWS Cloud also provides greater access to security data and an automated approach to responding to security events Best Practices Security To operate your workload securely you must apply overarching best practices to every area of security Take requirements and processes that you have defined in op erational excellence at an organizational and workload level and apply them to all ar eas Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives Automating security processes testing and validation allow you to scale your security operations 16ArchivedAWS WellArchitected Framework The following questions focus on these considerations for security (For a list of secu rity questions and best practices see the Appendix) SEC 1: How do you securely operate your workload? To operate your workload securely you must apply overarching best practices to every area of security Take requirements and processes that you have defined in operational excellence at an organizational and workload level and apply them to all areas Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives Automating security processes testing and validation allow you to scale your security operations In AWS segregating different workloads by account based on their function and compliance or data sensitivity requirements is a recommended approach Identity and Access Management Identity and access management are key parts of an information security program ensuring that only authorized and authenticated users and components are able to access your resources and only in a manner that you intend For example you should define principals (that is accounts users roles and services that can perform ac tions in your account) build out policies aligned with these principals and implement strong credential management These privilegemanagement elements form the core of authentication and authorization In AWS privilege management is primarily supported by the AWS Identity and Ac cess Management (IAM) service which allows you to control user and programmat ic access to AWS services and resources You should apply granular policies which as sign permissions to a user group role or resource You also have the ability to require strong password practices such as complexity level avoiding reuse and enforcing multifactor authentication (MFA) You can use federation with your existing directory service For workloads that require systems to have access to AWS IAM enables secure access through roles instance profiles identity federation and temporary credentials 17ArchivedAWS WellArchitected Framework The following questions focus on these considerations for security SEC 2: How do you manage identities for people and machines? 
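As a small illustration of avoiding long-term static credentials, the following minimal sketch in Python with the AWS SDK (boto3) requests temporary credentials from the AWS Security Token Service by assuming an IAM role, then uses them for a scoped session. The role ARN, account number, and session name are hypothetical placeholders; the permissions available to the session come from the role, not from the caller's long-term keys.

import boto3

sts = boto3.client("sts")

# Hypothetical read-only audit role; credentials expire automatically.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/example-audit-readonly",
    RoleSessionName="audit-session",
    DurationSeconds=3600,
)
credentials = response["Credentials"]

# Build a session from the temporary, limited-privilege credentials.
session = boto3.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)

# Any client created from this session acts only within the role's permissions.
s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])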
There are two types of identities you need to manage when approaching operating secure AWS workloads Understanding the type of identity you need to manage and grant access helps you ensure the right identities have access to the right resources under the right con ditions Human Identities: Your administrators developers operators and end users require an identity to access your AWS environments and applications These are members of your organization or external users with whom you collaborate and who interact with your AWS resources via a web browser client application or interactive commandline tools Machine Identities: Your service applications operational tools and workloads require an identity to make requests to AWS services for example to read data These identities include machines running in your AWS environment such as Amazon EC2 instances or AWS Lambda functions You may also manage machine identities for external parties who need access Additionally you may also have machines outside of AWS that need access to your AWS environment SEC 3: How do you manage permissions for people and machines? Manage permissions to control access to people and machine identities that require access to AWS and your workload Permissions control who can access what and under what condi tions Credentials must not be shared between any user or system User access should be granted using a leastprivilege approach with best practices including password re quirements and MFA enforced Programmatic access including API calls to AWS ser vices should be performed using temporary and limitedprivilege credentials such as those issued by the AWS Security Token Service AWS provides resources that can help you with Identity and access management To help learn best practices explore our handson labs on managing credentials & au thentication controlling human access and controlling programmatic access Detection You can use detective controls to identify a potential security threat or incident They are an essential part of governance frameworks and can be used to support a quality process a legal or compliance obligation and for threat identification and response efforts There are different types of detective controls For example conducting an in ventory of assets and their detailed attributes promotes more effective decision mak ing (and lifecycle controls) to help establish operational baselines You can also use internal auditing an examination of controls related to information systems to en sure that practices meet policies and requirements and that you have set the correct automated alerting notifications based on defined conditions These controls are im portant reactive factors that can help your organization identify and understand the scope of anomalous activity In AWS you can implement detective controls by processing logs events and mon itoring that allows for auditing automated analysis and alarming CloudTrail logs 18ArchivedAWS WellArchitected Framework AWS API calls and CloudWatch provide monitoring of metrics with alarming and AWS Config provides configuration history Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behav ior to help you protect your AWS accounts and workloads Servicelevel logs are also available for example you can use Amazon Simple Storage Service (Amazon S3) to log access requests The following questions focus on these considerations for security SEC 4: How do you detect and investigate security events? 
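The following minimal sketch in Python with the AWS SDK (boto3) illustrates the kind of detective query described in this area: it lists recent console sign-in events from CloudTrail and counts high-severity GuardDuty findings. It assumes GuardDuty is already enabled in the account; the event name filter and the severity threshold are illustrative choices, not framework requirements.

from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
guardduty = boto3.client("guardduty")

# Recent console sign-ins recorded by CloudTrail (last 24 hours).
start = datetime.now(timezone.utc) - timedelta(hours=24)
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=start,
    MaxResults=50,
)["Events"]
for event in events:
    print(event["EventTime"], event.get("Username", "unknown"))

# High-severity GuardDuty findings, assuming a detector already exists.
detectors = guardduty.list_detectors()["DetectorIds"]
if detectors:
    findings = guardduty.list_findings(
        DetectorId=detectors[0],
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]
    print(f"High-severity findings: {len(findings)}")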
Capture and analyze events from logs and metrics to gain visibility Take action on security events and potential threats to help secure your workload Log management is important to a WellArchitected workload for reasons ranging from security or forensics to regulatory or legal requirements It is critical that you an alyze logs and respond to them so that you can identify potential security incidents AWS provides functionality that makes log management easier to implement by giv ing you the ability to define a dataretention lifecycle or define where data will be preserved archived or eventually deleted This makes predictable and reliable data handling simpler and more cost effective Infrastructure Protection Infrastructure protection encompasses control methodologies such as defense in depth necessary to meet best practices and organizational or regulatory obligations Use of these methodologies is critical for successful ongoing operations in either the cloud or onpremises In AWS you can implement stateful and stateless packet inspection either by using AWSnative technologies or by using partner products and services available through the AWS Marketplace You should use Amazon Virtual Private Cloud (Amazon VPC) to create a private secured and scalable environment in which you can define your topology—including gateways routing tables and public and private subnets The following questions focus on these considerations for security SEC 5: How do you protect your network resources? Any workload that has some form of network connectivity whether it’s the internet or a pri vate network requires multiple layers of defense to help protect from external and internal networkbased threats SEC 6: How do you protect your compute resources? Compute resources in your workload require multiple layers of defense to help protect from external and internal threats Compute resources include EC2 instances containers AWS Lambda functions database services IoT devices and more 19ArchivedAWS WellArchitected Framework Multiple layers of defense are advisable in any type of environment In the case of in frastructure protection many of the concepts and methods are valid across cloud and onpremises models Enforcing boundary protection monitoring points of ingress and egress and comprehensive logging monitoring and alerting are all essential to an ef fective information security plan AWS customers are able to tailor or harden the configuration of an Amazon Elastic Compute Cloud (Amazon EC2) Amazon EC2 Container Service (Amazon ECS) contain er or AWS Elastic Beanstalk instance and persist this configuration to an immutable Amazon Machine Image (AMI) Then whether triggered by Auto Scaling or launched manually all new virtual servers (instances) launched with this AMI receive the hard ened configuration Data Protection Before architecting any system foundational practices that influence security should be in place For example data classification provides a way to categorize organiza tional data based on levels of sensitivity and encryption protects data by way of ren dering it unintelligible to unauthorized access These tools and techniques are impor tant because they support objectives such as preventing financial loss or complying with regulatory obligations In AWS the following practices facilitate protection of data: • As an AWS customer you maintain full control over your data • AWS makes it easier for you to encrypt your data and manage keys including regu lar key rotation which can be easily automated by AWS or 
maintained by you • Detailed logging that contains important content such as file access and changes is available • AWS has designed storage systems for exceptional resiliency For example Amazon S3 Standard S3 Standard–IA S3 One ZoneIA and Amazon Glacier are all designed to provide 99999999999% durability of objects over a given year This durability level corresponds to an average annual expected loss of 0000000001% of objects • Versioning which can be part of a larger data lifecycle management process can protect against accidental overwrites deletes and similar harm • AWS never initiates the movement of data between Regions Content placed in a Region will remain in that Region unless you explicitly enable a feature or leverage a service that provides that functionality 20ArchivedAWS WellArchitected Framework The following questions focus on these considerations for security SEC 7: How do you classify your data? Classification provides a way to categorize data based on criticality and sensitivity in order to help you determine appropriate protection and retention controls SEC 8: How do you protect your data at rest? Protect your data at rest by implementing multiple controls to reduce the risk of unautho rized access or mishandling SEC 9: How do you protect your data in transit? Protect your data in transit by implementing multiple controls to reduce the risk of unautho rized access or loss AWS provides multiple means for encrypting data at rest and in transit We build fea tures into our services that make it easier to encrypt your data For example we have implemented serverside encryption (SSE) for Amazon S3 to make it easier for you to store your data in an encrypted form You can also arrange for the entire HTTPS en cryption and decryption process (generally known as SSL termination) to be handled by Elastic Load Balancing (ELB) Incident Response Even with extremely mature preventive and detective controls your organization should still put processes in place to respond to and mitigate the potential impact of security incidents The architecture of your workload strongly affects the ability of your teams to operate effectively during an incident to isolate or contain systems and to restore operations to a known good state Putting in place the tools and ac cess ahead of a security incident then routinely practicing incident response through game days will help you ensure that your architecture can accommodate timely in vestigation and recovery In AWS the following practices facilitate effective incident response: • Detailed logging is available that contains important content such as file access and changes • Events can be automatically processed and trigger tools that automate responses through the use of AWS APIs • You can preprovision tooling and a “clean room” using AWS CloudFormation This allows you to carry out forensics in a safe isolated environment 21ArchivedAWS WellArchitected Framework The following questions focus on these considerations for security SEC 10: How do you anticipate respond to and recover from incidents? 
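As one illustration of the isolation and evidence-capture automation discussed in this area, the following minimal sketch in Python with the AWS SDK (boto3) moves a suspect EC2 instance into a restrictive "isolation" security group and snapshots its volumes so state is preserved for forensics. The instance ID and security group ID are hypothetical placeholders; an isolation group with no inbound or outbound rules would need to exist beforehand.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical identifiers for this example.
INSTANCE_ID = "i-0123456789abcdef0"
ISOLATION_SG = "sg-0abc1234def567890"  # security group with no ingress/egress rules

# 1. Swap the instance's security groups so it can no longer communicate.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[ISOLATION_SG])

# 2. Snapshot every attached EBS volume to preserve evidence.
reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
instance = reservations[0]["Instances"][0]
for mapping in instance.get("BlockDeviceMappings", []):
    volume_id = mapping["Ebs"]["VolumeId"]
    snapshot = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"Forensic snapshot of {volume_id} from {INSTANCE_ID}",
    )
    # Tag the snapshot so the investigation can find it later.
    ec2.create_tags(
        Resources=[snapshot["SnapshotId"]],
        Tags=[{"Key": "incident", "Value": "under-investigation"}],
    )
    print("Created snapshot", snapshot["SnapshotId"])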
Preparation is critical to timely and effective investigation, response to, and recovery from security incidents to help minimize disruption to your organization.
Ensure that you have a way to quickly grant access for your security team, and automate the isolation of instances as well as the capturing of data and state for forensics.
Resources
Refer to the following resources to learn more about our best practices for Security.
Documentation
•AWS Cloud Security
•AWS Compliance
•AWS Security Blog
Whitepaper
•Security Pillar
•AWS Security Overview
•AWS Security Best Practices
•AWS Risk and Compliance
Video
•AWS Security State of the Union
•Shared Responsibility Overview
Reliability
The Reliability pillar encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle. This paper provides in-depth, best practice guidance for implementing reliable workloads on AWS.
The reliability pillar provides an overview of design principles, best practices, and questions. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.
Design Principles
There are five design principles for reliability in the cloud:
•Automatically recover from failure: By monitoring a workload for key performance indicators (KPIs), you can trigger automation when a threshold is breached. These KPIs should be a measure of business value, not of the technical aspects of the operation of the service. This allows for automatic notification and tracking of failures, and for automated recovery processes that work around or repair the failure. With more sophisticated automation, it’s possible to anticipate and remediate failures before they occur.
•Test recovery procedures: In an on-premises environment, testing is often conducted to prove that the workload works in a particular scenario. Testing is not typically used to validate recovery strategies. In the cloud, you can test how your workload fails, and you can validate your recovery procedures. You can use automation to simulate different failures or to recreate scenarios that led to failures before. This approach exposes failure pathways that you can test and fix before a real failure scenario occurs, thus reducing risk.
•Scale horizontally to increase aggregate workload availability: Replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload. Distribute requests across multiple, smaller resources to ensure that they don’t share a common point of failure.
•Stop guessing capacity: A common cause of failure in on-premises workloads is resource saturation, when the demands placed on a workload exceed the capacity of that workload (this is often the objective of denial of service attacks). In the cloud, you can monitor demand and workload utilization, and automate the addition or removal of resources to maintain the optimal level to satisfy demand without over- or under-provisioning. There are still limits, but some quotas can be controlled and others can be managed (see Manage Service Quotas and Constraints).
•Manage change in automation: Changes to your infrastructure should be made using automation. The changes that need to be managed include changes to the automation, which then can be tracked and reviewed.
Definition
There are four best practice areas for reliability in the cloud:
•Foundations
•Workload
Architecture •Change Management 23ArchivedAWS WellArchitected Framework •Failure Management To achieve reliability you must start with the foundations — an environment where service quotas and network topology accommodate the workload The workload ar chitecture of the distributed system must be designed to prevent and mitigate fail ures The workload must handle changes in demand or requirements and it must be designed to detect failure and automatically heal itself Best Practices Foundations Foundational requirements are those whose scope extends beyond a single workload or project Before architecting any system foundational requirements that influence reliability should be in place For example you must have sufficient network band width to your data center With AWS most of these foundational requirements are already incorporated or can be addressed as needed The cloud is designed to be nearly limitless so it’s the re sponsibility of AWS to satisfy the requirement for sufficient networking and compute capacity leaving you free to change resource size and allocations on demand The following questions focus on these considerations for reliability (For a list of reli ability questions and best practices see the Appendix) REL 1: How do you manage service quotas and constraints? For cloudbased workload architectures there are service quotas (which are also referred to as service limits) These quotas exist to prevent accidentally provisioning more resources than you need and to limit request rates on API operations so as to protect services from abuse There are also resource constraints for example the rate that you can push bits down a fiberoptic cable or the amount of storage on a physical disk REL 2: How do you plan your network topology? Workloads often exist in multiple environments These include multiple cloud environments (both publicly accessible and private) and possibly your existing data center infrastructure Plans must include network considerations such as intra and intersystem connectivity pub lic IP address management private IP address management and domain name resolution For cloudbased workload architectures there are service quotas (which are also re ferred to as service limits) These quotas exist to prevent accidentally provisioning more resources than you need and to limit request rates on API operations to protect services from abuse Workloads often exist in multiple environments You must mon itor and manage these quotas for all workload environments These include multiple cloud environments (both publicly accessible and private) and may include your exist ing data center infrastructure Plans must include network considerations such as in trasystem and intersystem connectivity public IP address management private IP ad dress management and domain name resolution 24ArchivedAWS WellArchitected Framework Workload Architecture A reliable workload starts with upfront design decisions for both software and infra structure Your architecture choices will impact your workload behavior across all five WellArchitected pillars For reliability there are specific patterns you must follow With AWS workload developers have their choice of languages and technologies to use AWS SDKs take the complexity out of coding by providing languagespecific APIs for AWS services These SDKs plus the choice of languages allow developers to im plement the reliability best practices listed here Developers can also read about and learn from how Amazon builds and operates software in The Amazon Builders' Li 
brary The following questions focus on these considerations for reliability REL 3: How do you design your workload service architecture? Build highly scalable and reliable workloads using a serviceoriented architecture (SOA) or a microservices architecture Serviceoriented architecture (SOA) is the practice of making soft ware components reusable via service interfaces Microservices architecture goes further to make components smaller and simpler REL 4: How do you design interactions in a distributed system to prevent failures? Distributed systems rely on communications networks to interconnect components such as servers or services Your workload must operate reliably despite data loss or latency in these networks Components of the distributed system must operate in a way that does not neg atively impact other components or the workload These best practices prevent failures and improve mean time between failures (MTBF) REL 5: How do you design interactions in a distributed system to mitigate or withstand failures? Distributed systems rely on communications networks to interconnect components (such as servers or services) Your workload must operate reliably despite data loss or latency over these networks Components of the distributed system must operate in a way that does not negatively impact other components or the workload These best practices enable workloads to withstand stresses or failures more quickly recover from them and mitigate the impact of such impairments The result is improved mean time to recovery (MTTR) Distributed systems rely on communications networks to interconnect components such as servers or services Your workload must operate reliably despite data loss or latency in these networks Components of the distributed system must operate in a way that does not negatively impact other components or the workload Change Management Changes to your workload or its environment must be anticipated and accommodat ed to achieve reliable operation of the workload Changes include those imposed on your workload such as spikes in demand as well as those from within such as feature deployments and security patches 25ArchivedAWS WellArchitected Framework Using AWS you can monitor the behavior of a workload and automate the response to KPIs For example your workload can add additional servers as a workload gains more users You can control who has permission to make workload changes and audit the history of these changes The following questions focus on these considerations for reliability REL 6: How do you monitor workload resources? Logs and metrics are powerful tools to gain insight into the health of your workload You can configure your workload to monitor logs and metrics and send notifications when thresholds are crossed or significant events occur Monitoring enables your workload to recognize when lowperformance thresholds are crossed or failures occur so it can recover automatically in response REL 7: How do you design your workload to adapt to changes in demand? A scalable workload provides elasticity to add or remove resources automatically so that they closely match the current demand at any given point in time REL 8: How do you implement change? 
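As a small illustration of the monitoring and elasticity practices described above, the following minimal sketch in Python with the AWS SDK (boto3) attaches a target tracking scaling policy to an Auto Scaling group and adds a CloudWatch alarm that notifies an SNS topic when CPU stays high. The group name, topic ARN, and thresholds are hypothetical placeholders chosen for the example.

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "example-web-asg"                                  # hypothetical group
SNS_TOPIC = "arn:aws:sns:us-east-1:111122223333:ops-alerts"   # hypothetical topic

# Keep average CPU near 60% by adding or removing instances automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)

# Alert operators if CPU stays above 85% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"{ASG_NAME}-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC],
)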
Controlled changes are necessary to deploy new functionality and to ensure that the work loads and the operating environment are running known software and can be patched or re placed in a predictable manner If these changes are uncontrolled then it makes it difficult to predict the effect of these changes or to address issues that arise because of them When you architect a workload to automatically add and remove resources in re sponse to changes in demand this not only increases reliability but also ensures that business success doesn't become a burden With monitoring in place your team will be automatically alerted when KPIs deviate from expected norms Automatic logging of changes to your environment allows you to audit and quickly identify actions that might have impacted reliability Controls on change management ensure that you can enforce the rules that deliver the reliability you need Failure Management In any system of reasonable complexity it is expected that failures will occur Reliabil ity requires that your workload be aware of failures as they occur and take action to avoid impact on availability Workloads must be able to both withstand failures and automatically repair issues With AWS you can take advantage of automation to react to monitoring data For ex ample when a particular metric crosses a threshold you can trigger an automated ac tion to remedy the problem Also rather than trying to diagnose and fix a failed re source that is part of your production environment you can replace it with a new one and carry out the analysis on the failed resource out of band Since the cloud enables you to stand up temporary versions of a whole system at low cost you can use auto mated testing to verify full recovery processes 26ArchivedAWS WellArchitected Framework The following questions focus on these considerations for reliability REL 9: How do you back up data? Back up data applications and configuration to meet your requirements for recovery time objectives (RTO) and recovery point objectives (RPO) REL 10: How do you use fault isolation to protect your workload? Fault isolated boundaries limit the effect of a failure within a workload to a limited number of components Components outside of the boundary are unaffected by the failure Using multiple fault isolated boundaries you can limit the impact on your workload REL 11: How do you design your workload to withstand component failures? Workloads with a requirement for high availability and low mean time to recovery (MTTR) must be architected for resiliency REL 12: How do you test reliability? After you have designed your workload to be resilient to the stresses of production testing is the only way to ensure that it will operate as designed and deliver the resiliency you expect REL 13: How do you plan for disaster recovery (DR)? 
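One small piece of a disaster recovery strategy is sketched below in Python with the AWS SDK (boto3): take an on-demand snapshot of an EBS volume and copy it to a second Region so a copy survives a regional impairment. The volume ID and Region names are hypothetical; your RTO and RPO targets would determine how often something like this runs and how restores are tested.

import boto3

SOURCE_REGION = "us-east-1"          # hypothetical primary Region
DR_REGION = "us-west-2"              # hypothetical recovery Region
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical EBS volume

ec2 = boto3.client("ec2", region_name=SOURCE_REGION)
ec2_dr = boto3.client("ec2", region_name=DR_REGION)

# 1. Take an on-demand snapshot of the volume.
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID, Description="Scheduled DR snapshot"
)
snapshot_id = snapshot["SnapshotId"]

# 2. Wait for the snapshot to complete before copying it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# 3. Copy the snapshot into the recovery Region.
copy = ec2_dr.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=snapshot_id,
    Description=f"DR copy of {snapshot_id}",
)
print("Copy started:", copy["SnapshotId"])

Restores from the copied snapshot should be exercised regularly; a backup that has never been restored is an untested assumption.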
Having backups and redundant workload components in place is the start of your DR strate gy RTO and RPO are your objectives for restoration of availability Set these based on busi ness needs Implement a strategy to meet these objectives considering locations and func tion of workload resources and data Regularly back up your data and test your backup files to ensure that you can recov er from both logical and physical errors A key to managing failure is the frequent and automated testing of workloads to cause failure and then observe how they recov er Do this on a regular schedule and ensure that such testing is also triggered after significant workload changes Actively track KPIs such as the recovery time objective (RTO) and recovery point objective (RPO) to assess a workload's resiliency (especial ly under failuretesting scenarios) Tracking KPIs will help you identify and mitigate single points of failure The objective is to thoroughly test your workloadrecovery processes so that you are confident that you can recover all your data and continue to serve your customers even in the face of sustained problems Your recovery processes should be as well exercised as your normal production processes Resources Refer to the following resources to learn more about our best practices for Reliability Documentation •AWS Documentation •AWS Global Infrastructure •AWS Auto Scaling: How Scaling Plans Work 27ArchivedAWS WellArchitected Framework •What Is AWS Backup? Whitepaper •Reliability Pillar: AWS WellArchitected •Implementing Microservices on AWS Performance Efficiency The Performance Efficiency pillar includes the ability to use computing resources ef ficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve The performance efficiency pillar provides an overview of design principles best prac tices and questions You can find prescriptive guidance on implementation in the Per formance Efficiency Pillar whitepaper Design Principles There are five design principles for performance efficiency in the cloud: •Democratize advanced technologies : Make advanced technology implementation easier for your team by delegating complex tasks to your cloud vendor Rather than asking your IT team to learn about hosting and running a new technology consid er consuming the technology as a service For example NoSQL databases media transcoding and machine learning are all technologies that require specialized ex pertise In the cloud these technologies become services that your team can con sume allowing your team to focus on product development rather than resource provisioning and management •Go global in minutes: Deploying your workload in multiple AWS Regions around the world allows you to provide lower latency and a better experience for your cus tomers at minimal cost •Use serverless architectures: Serverless architectures remove the need for you to run and maintain physical servers for traditional compute activities For example serverless storage services can act as static websites (removing the need for web servers) and event services can host code This removes the operational burden of managing physical servers and can lower transactional costs because managed ser vices operate at cloud scale •Experiment more often: With virtual and automatable resources you can quickly carry out comparative testing using different types of instances storage or config urations 28ArchivedAWS WellArchitected Framework •Consider mechanical sympathy: Understand how cloud services 
are consumed and always use the technology approach that aligns best with your workload goals For example consider data access patterns when you select database or storage ap proaches Definition There are four best practice areas for performance efficiency in the cloud: •Selection •Review •Monitoring •Tradeoffs Take a datadriven approach to building a highperformance architecture Gather data on all aspects of the architecture from the highlevel design to the selection and con figuration of resource types Reviewing your choices on a regular basis ensures that you are taking advantage of the continually evolving AWS Cloud Monitoring ensures that you are aware of any de viance from expected performance Make tradeoffs in your architecture to improve performance such as using compression or caching or relaxing consistency require ments Best Practices Selection The optimal solution for a particular workload varies and solutions often combine multiple approaches Wellarchitected workloads use multiple solutions and enable different features to improve performance AWS resources are available in many types and configurations which makes it easier to find an approach that closely matches your workload needs You can also find op tions that are not easily achievable with onpremises infrastructure For example a managed service such as Amazon DynamoDB provides a fully managed NoSQL data base with singledigit millisecond latency at any scale 29ArchivedAWS WellArchitected Framework The following questions focus on these considerations for performance efficiency (For a list of performance efficiency questions and best practices see the Appendix) PERF 1: How do you select the best performing architecture? Often multiple approaches are required for optimal performance across a workload Well architected systems use multiple solutions and features to improve performance Use a datadriven approach to select the patterns and implementation for your archi tecture and achieve a cost effective solution AWS Solutions Architects AWS Refer ence Architectures and AWS Partner Network (APN) partners can help you select an architecture based on industry knowledge but data obtained through benchmarking or load testing will be required to optimize your architecture Your architecture will likely combine a number of different architectural approach es (for example eventdriven ETL or pipeline) The implementation of your architec ture will use the AWS services that are specific to the optimization of your architec ture's performance In the following sections we discuss the four main resource types to consider (compute storage database and network) Compute Selecting compute resources that meet your requirements performance needs and provide great efficiency of cost and effort will enable you to accomplish more with the same number of resources When evaluating compute options be aware of your requirements for workload performance and cost requirements and use this to make informed decisions In AWS compute is available in three forms: instances containers and functions: •Instances are virtualized servers allowing you to change their capabilities with a button or an API call Because resource decisions in the cloud aren’t fixed you can experiment with different server types At AWS these virtual server instances come in different families and sizes and they offer a wide variety of capabilities includ ing solidstate drives (SSDs) and graphics processing units (GPUs) •Containers are a method of operating system virtualization that allow 
you to run an application and its dependencies in resourceisolated processes AWS Fargate is serverless compute for containers or Amazon EC2 can be used if you need con trol over the installation configuration and management of your compute environ ment You can also choose from multiple container orchestration platforms: Ama zon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS) •Functions abstract the execution environment from the code you want to execute For example AWS Lambda allows you to execute code without running an instance 30ArchivedAWS WellArchitected Framework The following questions focus on these considerations for performance efficiency PERF 2: How do you select your compute solution? The optimal compute solution for a workload varies based on application design usage pat terns and configuration settings Architectures can use different compute solutions for vari ous components and enable different features to improve performance Selecting the wrong compute solution for an architecture can lead to lower performance efficiency When architecting your use of compute you should take advantage of the elasticity mechanisms available to ensure you have sufficient capacity to sustain performance as demand changes Storage Cloud storage is a critical component of cloud computing holding the information used by your workload Cloud storage is typically more reliable scalable and secure than traditional onpremises storage systems Select from object block and file stor age services as well as cloud data migration options for your workload In AWS storage is available in three forms: object block and file: •Object Storage provides a scalable durable platform to make data accessible from any internet location for usergenerated content active archive serverless com puting Big Data storage or backup and recovery Amazon Simple Storage Ser vice (Amazon S3) is an object storage service that offers industryleading scal ability data availability security and performance Amazon S3 is designed for 99999999999% (11 9's) of durability and stores data for millions of applications for companies all around the world •Block Storage provides highly available consistent lowlatency block storage for each virtual host and is analogous to directattached storage (DAS) or a Stor age Area Network (SAN) Amazon Elastic Block Store (Amazon EBS) is designed for workloads that require persistent storage accessible by EC2 instances that helps you tune applications with the right storage capacity performance and cost •File Storage provides access to a shared file system across multiple systems File storage solutions like Amazon Elastic File System (EFS) are ideal for use cases such as large content repositories development environments media stores or user home directories Amazon FSx makes it easy and cost effective to launch and run popular file systems so you can leverage the rich feature sets and fast performance of widely used open source and commerciallylicensed file systems 31ArchivedAWS WellArchitected Framework The following questions focus on these considerations for performance efficiency PERF 3: How do you select your storage solution? 
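As one illustration of matching storage to access patterns, the following minimal sketch in Python with the AWS SDK (boto3) writes an infrequently read object directly to the S3 Standard-IA storage class and configures a lifecycle rule that moves older log objects to colder storage and eventually expires them. The bucket name, key prefix, and transition schedule are hypothetical choices for the example.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-workload-data"  # hypothetical bucket

# Store a rarely read report directly in the Standard-IA storage class.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2020/annual-summary.csv",
    Body=b"...",
    StorageClass="STANDARD_IA",
)

# Age log objects into colder storage tiers and expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)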
The optimal storage solution for a system varies based on the kind of access method (block file or object) patterns of access (random or sequential) required throughput frequency of access (online offline archival) frequency of update (WORM dynamic) and availability and durability constraints Wellarchitected systems use multiple storage solutions and enable different features to improve performance and use resources efficiently When you select a storage solution ensuring that it aligns with your access patterns will be critical to achieving the performance you want Database The cloud offers purposebuilt database services that address different problems pre sented by your workload You can choose from many purposebuilt database engines including relational keyvalue document inmemory graph time series and ledger databases By picking the best database to solve a specific problem (or a group of problems) you can break away from restrictive onesizefitsall monolithic databases and focus on building applications to meet the performance needs of your customers In AWS you can choose from multiple purposebuilt database engines including re lational keyvalue document inmemory graph time series and ledger databas es With AWS databases you don’t need to worry about database management tasks such as server provisioning patching setup configuration backups or recovery AWS continuously monitors your clusters to keep your workloads up and running with self healing storage and automated scaling so that you can focus on higher value applica tion development The following questions focus on these considerations for performance efficiency PERF 4: How do you select your database solution? The optimal database solution for a system varies based on requirements for availability consistency partition tolerance latency durability scalability and query capability Many systems use different database solutions for various subsystems and enable different fea tures to improve performance Selecting the wrong database solution and features for a sys tem can lead to lower performance efficiency Your workload's database approach has a significant impact on performance efficien cy It's often an area that is chosen according to organizational defaults rather than through a datadriven approach As with storage it is critical to consider the access patterns of your workload and also to consider if other nondatabase solutions could solve the problem more efficiently (such as using graph time series or inmemory storage database) 32ArchivedAWS WellArchitected Framework Network Since the network is between all workload components it can have great impacts both positive and negative on workload performance and behavior There are also workloads that are heavily dependent on network performance such as High Perfor mance Computing (HPC) where deep network understanding is important to increase cluster performance You must determine the workload requirements for bandwidth latency jitter and throughput On AWS networking is virtualized and is available in a number of different types and configurations This makes it easier to match your networking methods with your needs AWS offers product features (for example Enhanced Networking Amazon EBSoptimized instances Amazon S3 transfer acceleration and dynamic Amazon CloudFront) to optimize network traffic AWS also offers networking features (for ex ample Amazon Route 53 latency routing Amazon VPC endpoints AWS Direct Con nect and AWS Global Accelerator) to reduce network distance or jitter The 
following questions focus on these considerations for performance efficiency.

PERF 5: How do you configure your networking solution?
The optimal network solution for a workload varies based on latency, throughput requirements, jitter, and bandwidth. Physical constraints, such as user or on-premises resources, determine location options. These constraints can be offset with edge locations or resource placement.

You must consider location when deploying your network. You can choose to place resources close to where they will be used to reduce distance. Use networking metrics to make changes to networking configuration as the workload evolves. By taking advantage of Regions, placement groups, and edge services, you can significantly improve performance. Cloud-based networks can be quickly rebuilt or modified, so evolving your network architecture over time is necessary to maintain performance efficiency.

Review

Cloud technologies are rapidly evolving, and you must ensure that workload components are using the latest technologies and approaches to continually improve performance. You must continually evaluate and consider changes to your workload components to ensure you are meeting its performance and cost objectives. New technologies, such as machine learning and artificial intelligence (AI), can allow you to reimagine customer experiences and innovate across all of your business workloads.

Take advantage of the continual innovation at AWS driven by customer need. We release new Regions, edge locations, services, and features regularly. Any of these releases could positively improve the performance efficiency of your architecture.

The following questions focus on these considerations for performance efficiency.

PERF 6: How do you evolve your workload to take advantage of new releases?
When architecting workloads, there are finite options that you can choose from. However, over time, new technologies and approaches become available that could improve the performance of your workload.

Architectures performing poorly are usually the result of a nonexistent or broken performance review process. If your architecture is performing poorly, implementing a performance review process will allow you to apply Deming's plan-do-check-act (PDCA) cycle to drive iterative improvement.

Monitoring

After you implement your workload, you must monitor its performance so that you can remediate any issues before they impact your customers. Monitoring metrics should be used to raise alarms when thresholds are breached, as sketched in the example below.

Amazon CloudWatch is a monitoring and observability service that provides you with data and actionable insights to monitor your workload, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events from workloads that run on AWS and on-premises servers. AWS X-Ray helps developers analyze and debug production distributed applications. With AWS X-Ray, you can glean insights into how your application is performing, discover root causes, and identify performance bottlenecks. You can use these insights to react quickly and keep your workload running smoothly.

The following questions focus on these considerations for performance efficiency.

PERF 7: How do you monitor your resources to ensure they are performing?
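As a minimal sketch, assuming boto3 and hypothetical metric, namespace, and SNS topic names, the following example raises a CloudWatch alarm when average request latency breaches a threshold, in line with the monitoring guidance above:

```python
# Minimal sketch: alarm when average latency exceeds a threshold for three
# consecutive minutes. Alarm, namespace, metric, and topic names are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-api-high-latency",      # hypothetical alarm name
    Namespace="ExampleApp",                   # hypothetical custom namespace
    MetricName="RequestLatency",              # hypothetical custom metric (ms)
    Statistic="Average",
    Period=60,                                # evaluate 1-minute datapoints
    EvaluationPeriods=3,                      # 3 consecutive breaches
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[
        "arn:aws:sns:us-east-1:111122223333:example-ops-alerts"  # hypothetical topic
    ],
)
```

Routing the alarm to an SNS topic, or to an automated remediation action, keeps responses consistent and prompt.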
System performance can degrade over time Monitor system performance to identify degra dation and remediate internal or external factors such as the operating system or applica tion load Ensuring that you do not see false positives is key to an effective monitoring solution Automated triggers avoid human error and can reduce the time it takes to fix prob lems Plan for game days where simulations are conducted in the production environ ment to test your alarm solution and ensure that it correctly recognizes issues Tradeoffs When you architect solutions think about tradeoffs to ensure an optimal approach Depending on your situation you could trade consistency durability and space for time or latency to deliver higher performance 34ArchivedAWS WellArchitected Framework Using AWS you can go global in minutes and deploy resources in multiple locations across the globe to be closer to your end users You can also dynamically add read only replicas to information stores (such as database systems) to reduce the load on the primary database The following questions focus on these considerations for performance efficiency PERF 8: How do you use tradeoffs to improve performance? When architecting solutions determining tradeoffs enables you to select an optimal ap proach Often you can improve performance by trading consistency durability and space for time and latency As you make changes to the workload collect and evaluate metrics to determine the impact of those changes Measure the impacts to the system and to the enduser to understand how your tradeoffs impact your workload Use a systematic approach such as load testing to explore whether the tradeoff improves performance Resources Refer to the following resources to learn more about our best practices for Perfor mance Efficiency Documentation •Amazon S3 Performance Optimization •Amazon EBS Volume Performance Whitepaper •Performance Efficiency Pillar Video •AWS re:Invent 2019: Amazon EC2 foundations (CMP211R2) •AWS re:Invent 2019: Leadership session: Storage state of the union (STG201L) •AWS re:Invent 2019: Leadership session: AWS purposebuilt databases (DAT209L) •AWS re:Invent 2019: Connectivity to AWS and hybrid AWS network architectures (NET317R1) •AWS re:Invent 2019: Powering nextgen Amazon EC2: Deep dive into the Nitro sys tem (CMP303R2) •AWS re:Invent 2019: Scaling up to your first 10 million users (ARC211R) 35ArchivedAWS WellArchitected Framework Cost Optimization The Cost Optimization pillar includes the ability to run systems to deliver business value at the lowest price point The cost optimization pillar provides an overview of design principles best practices and questions You can find prescriptive guidance on implementation in the Cost Op timization Pillar whitepaper Design Principles There are five design principles for cost optimization in the cloud: •Implement Cloud Financial Management : To achieve financial success and accel erate business value realization in the cloud you need to invest in Cloud Financial Management /Cost Optimization Your organization needs to dedicate time and re sources to build capability in this new domain of technology and usage manage ment Similar to your Security or Operational Excellence capability you need to build capability through knowledge building programs resources and processes to become a costefficient organization •Adopt a consumption model : Pay only for the computing resources that you re quire and increase or decrease usage depending on business requirements not by using elaborate forecasting 
For example development and test environments are typically only used for eight hours a day during the work week You can stop these resources when they are not in use for a potential cost savings of 75% (40 hours versus 168 hours) •Measure overall efficiency : Measure the business output of the workload and the costs associated with delivering it Use this measure to know the gains you make from increasing output and reducing costs •Stop spending money on undifferentiated heavy lifting : AWS does the heavy lift ing of data center operations like racking stacking and powering servers It also removes the operational burden of managing operating systems and applications with managed services This allows you to focus on your customers and business projects rather than on IT infrastructure •Analyze and attribute expenditure : The cloud makes it easier to accurately identify the usage and cost of systems which then allows transparent attribution of IT costs to individual workload owners This helps measure return on investment (ROI) and gives workload owners an opportunity to optimize their resources and reduce costs Definition There are five best practice areas for cost optimization in the cloud: 36ArchivedAWS WellArchitected Framework •Practice Cloud Financial Management •Expenditure and usage awareness •Costeffective resources •Manage demand and supply resources •Optimize over time As with the other pillars within the WellArchitected Framework there are trade offs to consider for example whether to optimize for speedtomarket or for cost In some cases it’s best to optimize for speed—going to market quickly shipping new features or simply meeting a deadline—rather than investing in upfront cost opti mization Design decisions are sometimes directed by haste rather than data and the temptation always exists to overcompensate “just in case” rather than spend time benchmarking for the most costoptimal deployment This might lead to overprovi sioned and underoptimized deployments However this is a reasonable choice when you need to “lift and shift” resources from your onpremises environment to the cloud and then optimize afterwards Investing the right amount of effort in a cost op timization strategy up front allows you to realize the economic benefits of the cloud more readily by ensuring a consistent adherence to best practices and avoiding un necessary over provisioning The following sections provide techniques and best prac tices for both the initial and ongoing implementation of Cloud Financial Management and cost optimization of your workloads Best Practices Practice Cloud Financial Management With the adoption of cloud technology teams innovate faster due to shortened ap proval procurement and infrastructure deployment cycles A new approach to finan cial management in the cloud is required to realize business value and financial suc cess This approach is Cloud Financial Management and builds capability across your organization by implementing organizational wide knowledge building programs re sources and processes Many organizations are composed of many different units with different priorities The ability to align your organization to an agreed set of financial objectives and pro vide your organization the mechanisms to meet them will create a more efficient or ganization A capable organization will innovate and build faster be more agile and adjust to any internal or external factors In AWS you can use Cost Explorer and optionally Amazon Athena and Amazon Quick Sight with the Cost and Usage 
Report (CUR) to provide cost and usage awareness throughout your organization AWS Budgets provides proactive notifications for cost 37ArchivedAWS WellArchitected Framework and usage The AWS blogs provide information on new services and features to en sure you keep up to date with new service releases The following questions focus on these considerations for cost optimization (For a list of cost optimization questions and best practices see the Appendix) COST 1: How do you implement cloud financial management? Implementing Cloud Financial Management enables organizations to realize business value and financial success as they optimize their cost and usage and scale on AWS When building a cost optimization function use members and supplement the team with experts in CFM and CO Existing team members will understand how the organi zation currently functions and how to rapidly implement improvements Also consid er including people with supplementary or specialist skill sets such as analytics and project management When implementing cost awareness in your organization improve or build on exist ing programs and processes It is much faster to add to what exists than to build new processes and programs This will result in achieving outcomes much faster Expenditure and usage awareness The increased flexibility and agility that the cloud enables encourages innovation and fastpaced development and deployment It eliminates the manual processes and time associated with provisioning onpremises infrastructure including identifying hardware specifications negotiating price quotations managing purchase orders scheduling shipments and then deploying the resources However the ease of use and virtually unlimited ondemand capacity requires a new way of thinking about ex penditures Many businesses are composed of multiple systems run by various teams The capa bility to attribute resource costs to the individual organization or product owners dri ves efficient usage behavior and helps reduce waste Accurate cost attribution allows you to know which products are truly profitable and allows you to make more in formed decisions about where to allocate budget In AWS you create an account structure with AWS Organizations or AWS Control Tower which provides separation and assists in allocation of your costs and usage You can also use resource tagging to apply business and organization information to your usage and cost Use AWS Cost Explorer for visibility into your cost and usage or create customized dashboards and analytics with Amazon Athena and Amazon Quick Sight Controlling your cost and usage is done by notifications through AWS Budgets and controls using AWS Identity and Access Management (IAM) and Service Quotas 38ArchivedAWS WellArchitected Framework The following questions focus on these considerations for cost optimization COST 2: How do you govern usage? Establish policies and mechanisms to ensure that appropriate costs are incurred while objec tives are achieved By employing a checksandbalances approach you can innovate without overspending COST 3: How do you monitor usage and cost? Establish policies and procedures to monitor and appropriately allocate your costs This al lows you to measure and improve the cost efficiency of this workload COST 4: How do you decommission resources? 
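As a minimal sketch, assuming boto3 and hypothetical instance IDs and tag values, the following example applies the kind of resource tagging described above so that usage and cost can be attributed to an owner and cost center:

```python
# Minimal sketch: apply cost-allocation tags so usage can be attributed.
# The instance ID and tag values are hypothetical.
import boto3

ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],        # hypothetical instance ID
    Tags=[
        {"Key": "CostCenter", "Value": "analytics-1234"},
        {"Key": "Owner", "Value": "reporting-team"},
        {"Key": "Workload", "Value": "nightly-etl"},
    ],
)
```

Once these tag keys are activated as cost allocation tags in the billing console, they appear in Cost Explorer and the Cost and Usage Report, which supports the attribution described above.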
Implement change control and resource management from project inception to end-of-life. This ensures you shut down or terminate unused resources to reduce waste.

You can use cost allocation tags to categorize and track your AWS usage and costs. When you apply tags to your AWS resources (such as EC2 instances or S3 buckets), AWS generates a cost and usage report with your usage and your tags. You can apply tags that represent organization categories (such as cost centers, workload names, or owners) to organize your costs across multiple services.

Ensure you use the right level of detail and granularity in cost and usage reporting and monitoring. For high-level insights and trends, use daily granularity with AWS Cost Explorer. For deeper analysis and inspection, use hourly granularity in AWS Cost Explorer, or Amazon Athena and Amazon QuickSight with the Cost and Usage Report (CUR).

Combining tagged resources with entity lifecycle tracking (employees, projects) makes it possible to identify orphaned resources or projects that are no longer generating value to the organization and should be decommissioned. You can set up billing alerts to notify you of predicted overspending.

Cost-effective resources

Using the appropriate instances and resources for your workload is key to cost savings. For example, a reporting process might take five hours to run on a smaller server but one hour to run on a larger server that is twice as expensive. Both servers give you the same outcome, but the smaller server incurs more cost over time.

A well-architected workload uses the most cost-effective resources, which can have a significant and positive economic impact. You also have the opportunity to use managed services to reduce costs. For example, rather than maintaining servers to deliver email, you can use a service that charges on a per-message basis.

AWS offers a variety of flexible and cost-effective pricing options to acquire instances from Amazon EC2 and other services in a way that best fits your needs. On-Demand Instances allow you to pay for compute capacity by the hour with no minimum commitments required. Savings Plans and Reserved Instances offer savings of up to 75% off On-Demand pricing. With Spot Instances, you can leverage unused Amazon EC2 capacity for savings of up to 90% off On-Demand pricing. Spot Instances are appropriate where the system can tolerate using a fleet of servers where individual servers can come and go dynamically, such as stateless web servers, batch processing, or HPC and big data workloads (see the sketch after the questions below).

Appropriate service selection can also reduce usage and costs, such as using CloudFront to minimize data transfer, or eliminate costs entirely, such as using Amazon Aurora on RDS to remove expensive database licensing costs.

The following questions focus on these considerations for cost optimization.

COST 5: How do you evaluate cost when you select services?
Amazon EC2, Amazon EBS, and Amazon S3 are building-block AWS services. Managed services such as Amazon RDS and Amazon DynamoDB are higher-level, or application-level, AWS services. By selecting the appropriate building blocks and managed services, you can optimize this workload for cost. For example, using managed services, you can reduce or remove much of your administrative and operational overhead, freeing you to work on applications and business-related activities.

COST 6: How do you meet cost targets when you select resource type, size, and number?
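As a minimal sketch, assuming boto3 and a placeholder AMI ID and instance type, the following example launches interruption-tolerant workers as Spot Instances, one way to use the pricing options discussed above:

```python
# Minimal sketch: request interruption-tolerant capacity as Spot Instances.
# The AMI ID, instance type, and counts are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder AMI
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=4,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print([i["InstanceId"] for i in response["Instances"]])
```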
Ensure that you choose the appropriate resource size and number of resources for the task at hand You minimize waste by selecting the most cost effective type size and number COST 7: How do you use pricing models to reduce cost? Use the pricing model that is most appropriate for your resources to minimize expense COST 8: How do you plan for data transfer charges? Ensure that you plan and monitor data transfer charges so that you can make architectural decisions to minimize costs A small yet effective architectural change can drastically reduce your operational costs over time By factoring in cost during service selection and using tools such as Cost Explorer and AWS Trusted Advisor to regularly review your AWS usage you can actively monitor your utilization and adjust your deployments accordingly Manage demand and supply resources When you move to the cloud you pay only for what you need You can supply re sources to match the workload demand at the time they’re needed this eliminates the need for costly and wasteful over provisioning You can also modify the demand using a throttle buffer or queue to smooth the demand and serve it with less re sources resulting in a lower cost or process it at a later time with a batch service In AWS you can automatically provision resources to match the workload demand Auto Scaling using demand or timebased approaches allow you to add and remove 40ArchivedAWS WellArchitected Framework resources as needed If you can anticipate changes in demand you can save more money and ensure your resources match your workload needs You can use Amazon API Gateway to implement throttling or Amazon SQS to implementing a queue in your workload These will both allow you to modify the demand on your workload components The following questions focus on these considerations for cost optimization COST 9: How do you manage demand and supply resources? For a workload that has balanced spend and performance ensure that everything you pay for is used and avoid significantly underutilizing instances A skewed utilization metric in ei ther direction has an adverse impact on your organization in either operational costs (de graded performance due to overutilization) or wasted AWS expenditures (due to overpro visioning) When designing to modify demand and supply resources actively think about the patterns of usage the time it takes to provision new resources and the predictabili ty of the demand pattern When managing demand ensure you have a correctly sized queue or buffer and that you are responding to workload demand in the required amount of time Optimize over time As AWS releases new services and features it's a best practice to review your existing architectural decisions to ensure they continue to be the most cost effective As your requirements change be aggressive in decommissioning resources entire services and systems that you no longer require Implementing new features or resource types can optimize your workload incremen tally while minimizing the effort required to implement the change This provides continual improvements in efficiency over time and ensures you remain on the most updated technology to reduce operating costs You can also replace or add new com ponents to the workload with new services This can provide significant increases in efficiency so it's essential to regularly review your workload and implement new ser vices and features The following questions focus on these considerations for cost optimization COST 10: How do you evaluate new services? 
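As a minimal sketch, assuming boto3 and a hypothetical Auto Scaling group name, the following example creates a target tracking scaling policy so that supply follows demand, as described above:

```python
# Minimal sketch: keep average CPU near 50% by adding and removing instances
# as demand changes. The Auto Scaling group name is hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-web-asg",    # hypothetical group
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```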
As AWS releases new services and features it's a best practice to review your existing archi tectural decisions to ensure they continue to be the most cost effective When regularly reviewing your deployments assess how newer services can help save you money For example Amazon Aurora on RDS can reduce costs for relational data bases Using serverless such as Lambda can remove the need to operate and manage instances to run code 41ArchivedAWS WellArchitected Framework Resources Refer to the following resources to learn more about our best practices for Cost Opti mization Documentation •AWS Documentation Whitepaper •Cost Optimization Pillar 42ArchivedAWS WellArchitected Framework The Review Process The review of architectures needs to be done in a consistent manner with a blame free approach that encourages diving deep It should be a light weight process (hours not days) that is a conversation and not an audit The purpose of reviewing an archi tecture is to identify any critical issues that might need addressing or areas that could be improved The outcome of the review is a set of actions that should improve the experience of a customer using the workload As discussed in the “On Architecture” section you will want each team member to take responsibility for the quality of its architecture We recommend that the team members who build an architecture use the WellArchitected Framework to contin ually review their architecture rather than holding a formal review meeting A con tinuous approach allows your team members to update answers as the architecture evolves and improve the architecture as you deliver features The AWS WellArchitected Framework is aligned to the way that AWS reviews systems and services internally It is premised on a set of design principles that influences ar chitectural approach and questions that ensure that people don’t neglect areas that often featured in Root Cause Analysis (RCA) Whenever there is a significant issue with an internal system AWS service or customer we look at the RCA to see if we could improve the review processes we use Reviews should be applied at key milestones in the product lifecycle early on in the design phase to avoid oneway doors 1 that are difficult to change and then before the golive date After you go into production your workload will continue to evolve as you add new features and change technology implementations The architecture of a workload changes over time You will need to follow good hygiene practices to stop its architectural characteristics from degrading as you evolve it As you make sig nificant architecture changes you should follow a set of hygiene processes including a WellArchitected review If you want to use the review as a onetime snapshot or independent measurement you will want to ensure that you have all the right people in the conversation Often we find that reviews are the first time that a team truly understands what they have implemented An approach that works well when reviewing another team's workload is to have a series of informal conversations about their architecture where you can glean the answers to most questions You can then follow up with one or two meet ings where you can gain clarity or dive deep on areas of ambiguity or perceived risk Here are some suggested items to facilitate your meetings: • A meeting room with whiteboards 1Many decisions are reversible twoway doors Those decisions can use a light weight process Oneway doors are hard or impossible to reverse and require more inspection before making them 
43ArchivedAWS WellArchitected Framework • Print outs of any diagrams or design notes • Action list of questions that require outofband research to answer (for example “did we enable encryption or not?” ) After you have done a review you should have a list of issues that you can prioritize based on your business context You will also want to take into account the impact of those issues on the daytoday work of your team If you address these issues early you could free up time to work on creating business value rather than solving recur ring problems As you address issues you can update your review to see how the ar chitecture is improving While the value of a review is clear after you have done one you may find that a new team might be resistant at first Here are some objections that can be handled through educating the team on the benefits of a review: • “We are too busy!” (Often said when the team is getting ready for a big launch) • If you are getting ready for a big launch you will want it to go smoothly The re view will allow you to understand any problems you might have missed • We recommend that you carry out reviews early in the product lifecycle to uncov er risks and develop a mitigation plan aligned with the feature delivery roadmap • “We don’t have time to do anything with the results!” (Often said when there is an immovable event such as the Super Bowl that they are targeting) • These events can’t be moved Do you really want to go into it without knowing the risks in your architecture? Even if you don’t address all of these issues you can still have playbooks for handling them if they materialize • “We don’t want others to know the secrets of our solution implementation!” • If you point the team at the questions in the WellArchitected Framework they will see that none of the questions reveal any commercial or technical propriety information As you carry out multiple reviews with teams in your organization you might identify thematic issues For example you might see that a group of teams has clusters of is sues in a particular pillar or topic You will want to look at all your reviews in a holis tic manner and identify any mechanisms training or principal engineering talks that could help address those thematic issues 44ArchivedAWS WellArchitected Framework Conclusion The AWS WellArchitected Framework provides architectural best practices across the five pillars for designing and operating reliable secure efficient and costeffective systems in the cloud The Framework provides a set of questions that allows you to review an existing or proposed architecture It also provides a set of AWS best prac tices for each pillar Using the Framework in your architecture will help you produce stable and efficient systems which allow you to focus on your functional require ments 45ArchivedAWS WellArchitected Framework Contributors The following individuals and organizations contributed to this document: • Rodney Lester: Senior Manager WellArchitected Amazon Web Services • Brian Carlson: Operations Lead WellArchitected Amazon Web Services • Ben Potter: Security Lead WellArchitected Amazon Web Services • Eric Pullen: Performance Lead WellArchitected Amazon Web Services • Seth Eliot: Reliability Lead WellArchitected Amazon Web Services • Nathan Besh: Cost Lead WellArchitected Amazon Web Services • Jon Steele: Sr Technical Account Manager Amazon Web Services • Ryan King: Technical Program Manager Amazon Web Services • Erin Rifkin: Senior Product Manager Amazon Web Services • Max Ramsay: Principal 
Security Solutions Architect, Amazon Web Services
• Scott Paddock: Security Solutions Architect, Amazon Web Services
• Callum Hughes: Solutions Architect, Amazon Web Services

Further Reading

AWS Cloud Compliance
AWS Well-Architected Partner program
AWS Well-Architected Tool
AWS Well-Architected homepage
Cost Optimization Pillar whitepaper
Operational Excellence Pillar whitepaper
Performance Efficiency Pillar whitepaper
Reliability Pillar whitepaper
Security Pillar whitepaper
The Amazon Builders' Library

Document Revisions

Table 2: Major revisions

July 2020: Review and rewrite of most questions and answers.
July 2019: Addition of AWS Well-Architected Tool, links to AWS Well-Architected Labs and AWS Well-Architected Partners, and minor fixes to enable multiple language versions of the framework.
November 2018: Review and rewrite of most questions and answers to ensure questions focus on one topic at a time. This caused some previous questions to be split into multiple questions. Added common terms to definitions (workload, component, etc.). Changed presentation of questions in the main body to include descriptive text.
June 2018: Updates to simplify question text, standardize answers, and improve readability.
November 2017: Operational Excellence moved to front of pillars and rewritten so it frames other pillars. Refreshed other pillars to reflect evolution of AWS.
November 2016: Updated the Framework to include the operational excellence pillar, and revised and updated the other pillars to reduce duplication and incorporate learnings from carrying out reviews with thousands of customers.
November 2015: Updated the Appendix with current Amazon CloudWatch Logs information.
October 2015: Original publication.

Appendix: Questions and Best Practices

Operational Excellence

Organization

OPS 1 How do you determine what your priorities are?
Everyone needs to understand their part in enabling business success Have shared goals in order to set priorities for resources This will maximize the benefits of your efforts Best Practices: •Evaluate external customer needs: Involve key stakeholders including business devel opment and operations teams to determine where to focus efforts on external customer needs This will ensure that you have a thorough understanding of the operations support that is required to achieve your desired business outcomes •Evaluate internal customer needs : Involve key stakeholders including business devel opment and operations teams when determining where to focus efforts on internal cus tomer needs This will ensure that you have a thorough understanding of the operations support that is required to achieve business outcomes •Evaluate governance requirements: Ensure that you are aware of guidelines or obliga tions defined by your organization that may mandate or emphasize specific focus Eval uate internal factors such as organization policy standards and requirements Validate that you have mechanisms to identify changes to governance If no governance require ments are identified ensure that you have applied due diligence to this determination •Evaluate compliance requirements : Evaluate external factors such as regulatory compli ance requirements and industry standards to ensure that you are aware of guidelines or obligations that may mandate or emphasize specific focus If no compliance requirements are identified ensure that you apply due diligence to this determination •Evaluate threat landscape : Evaluate threats to the business (for example competition business risk and liabilities operational risks and information security threats) and main tain current information in a risk registry Include the impact of risks when determining where to focus efforts •Evaluate tradeoffs : Evaluate the impact of tradeoffs between competing interests or al ternative approaches to help make informed decisions when determining where to focus efforts or choosing a course of action For example accelerating speed to market for new features may be emphasized over cost optimization or you may choose a relational data base for nonrelational data to simplify the effort to migrate a system rather than migrat ing to a database optimized for your data type and updating your application •Manage benefits and risks : Manage benefits and risks to make informed decisions when determining where to focus efforts For example it may be beneficial to deploy a work load with unresolved issues so that significant new features can be made available to cus tomers It may be possible to mitigate associated risks or it may become unacceptable to allow a risk to remain in which case you will take action to address the risk 49ArchivedAWS WellArchitected Framework OPS 2 How do you structure your organization to support your business outcomes? 
Your teams must understand their part in achieving business outcomes Teams need to un derstand their roles in the success of other teams the role of other teams in their success and have shared goals Understanding responsibility ownership how decisions are made and who has authority to make decisions will help focus efforts and maximize the benefits from your teams Best Practices: •Resources have identified owners : Understand who has ownership of each application workload platform and infrastructure component what business value is provided by that component and why that ownership exists Understanding the business value of these in dividual components and how they support business outcomes informs the processes and procedures applied against them •Processes and procedures have identified owners: Understand who has ownership of the definition of individual processes and procedures why those specific process and proce dures are used and why that ownership exists Understanding the reasons that specific processes and procedures are used enables identification of improvement opportunities •Operations activities have identified owners responsible for their performance: Under stand who has responsibility to perform specific activities on defined workloads and why that responsibility exists Understanding who has responsibility to perform activities in forms who will conduct the activity validate the result and provide feedback to the owner of the activity •Team members know what they are responsible for: Understanding the responsibilities of your role and how you contribute to business outcomes informs the prioritization of your tasks and why your role is important This enables team members to recognize needs and respond appropriately •Mechanisms exist to identify responsibility and ownership: Where no individual or team is identified there are defined escalation paths to someone with the authority to assign ownership or plan for that need to be addressed •Mechanisms exist to request additions changes and exceptions : You are able to make requests to owners of processes procedures and resources Make informed decisions to approve requests where viable and determined to be appropriate after an evaluation of benefits and risks •Responsibilities between teams are predefined or negotiated: There are defined or ne gotiated agreements between teams describing how they work with and support each oth er (for example response times service level objectives or service level agreements) Un derstanding the impact of the teams’ work on business outcomes and the outcomes of other teams and organizations informs the prioritization of their tasks and enables them to respond appropriately 50ArchivedAWS WellArchitected Framework OPS 3 How does your organizational culture support your business outcomes? 
Provide support for your team members so that they can be more effective in taking action and supporting your business outcome Best Practices: •Executive Sponsorship : Senior leadership clearly sets expectations for the organization and evaluates success Senior leadership is the sponsor advocate and driver for the adop tion of best practices and evolution of the organization •Team members are empowered to take action when outcomes are at risk: The workload owner has defined guidance and scope empowering team members to respond when out comes are at risk Escalation mechanisms are used to get direction when events are outside of the defined scope •Escalation is encouraged : Team members have mechanisms and are encouraged to esca late concerns to decision makers and stakeholders if they believe outcomes are at risk Es calation should be performed early and often so that risks can be identified and prevent ed from causing incidents •Communications are timely clear and actionable: Mechanisms exist and are used to pro vide timely notice to team members of known risks and planned events Necessary con text details and time (when possible) are provided to support determining if action is nec essary what action is required and to take action in a timely manner For example provid ing notice of software vulnerabilities so that patching can be expedited or providing no tice of planned sales promotions so that a change freeze can be implemented to avoid the risk of service disruption •Experimentation is encouraged: Experimentation accelerates learning and keeps team members interested and engaged An undesired result is a successful experiment that has identified a path that will not lead to success Team members are not punished for suc cessful experiments with undesired results Experimentation is required for innovation to happen and turn ideas into outcomes •Team members are enabled and encouraged to maintain and grow their skill sets : Teams must grow their skill sets to adopt new technologies and to support changes in de mand and responsibilities in support of your workloads Growth of skills in new technolo gies is frequently a source of team member satisfaction and supports innovation Support your team members’ pursuit and maintenance of industry certifications that validate and acknowledge their growing skills Cross train to promote knowledge transfer and reduce the risk of significant impact when you lose skilled and experienced team members with institutional knowledge Provide dedicated structured time for learning •Resource teams appropriately: Maintain team member capacity and provide tools and resources to support your workload needs Overtasking team members increases the risk of incidents resulting from human error Investments in tools and resources (for example providing automation for frequently executed activities) can scale the effectiveness of your team enabling them to support additional activities •Diverse opinions are encouraged and sought within and across teams: Leverage cross organizational diversity to seek multiple unique perspectives Use this perspective to in crease innovation challenge your assumptions and reduce the risk of confirmation bias Grow inclusion diversity and accessibility within your teams to gain beneficial perspec tives51ArchivedAWS WellArchitected Framework Prepare OPS 4 How do you design your workload so that you can understand its state? 
Design your workload so that it provides the information necessary across all components (for example metrics logs and traces) for you to understand its internal state This enables you to provide effective responses when appropriate Best Practices: •Implement application telemetry : Instrument your application code to emit informa tion about its internal state status and achievement of business outcomes For example queue depth error messages and response times Use this information to determine when a response is required •Implement and configure workload telemetry : Design and configure your workload to emit information about its internal state and current status For example API call volume HTTP status codes and scaling events Use this information to help determine when a re sponse is required •Implement user activity telemetry: Instrument your application code to emit informa tion about user activity for example click streams or started abandoned and completed transactions Use this information to help understand how the application is used patterns of usage and to determine when a response is required •Implement dependency telemetry : Design and configure your workload to emit informa tion about the status (for example reachability or response time) of resources it depends on Examples of external dependencies can include external databases DNS and network connectivity Use this information to determine when a response is required •Implement transaction traceability: Implement your application code and configure your workload components to emit information about the flow of transactions across the work load Use this information to determine when a response is required and to assist you in identifying the factors contributing to an issue 52ArchivedAWS WellArchitected Framework OPS 5 How do you reduce defects ease remediation and improve flow into production? 
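As a minimal sketch, assuming boto3 and hypothetical namespace, metric, and dimension values, the following example emits application telemetry (queue depth) as a custom CloudWatch metric, in the spirit of the practices above:

```python
# Minimal sketch: emit application telemetry (queue depth) as a custom metric
# so it can be analyzed and alarmed on. Names and values are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

def emit_queue_depth(depth):
    cloudwatch.put_metric_data(
        Namespace="ExampleApp/Orders",
        MetricData=[
            {
                "MetricName": "QueueDepth",
                "Dimensions": [{"Name": "Stage", "Value": "prod"}],
                "Value": float(depth),
                "Unit": "Count",
            }
        ],
    )

# emit_queue_depth(42)  # called from the application's processing loop
```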
Adopt approaches that improve flow of changes into production that enable refactoring fast feedback on quality and bug fixing These accelerate beneficial changes entering pro duction limit issues deployed and enable rapid identification and remediation of issues in troduced through deployment activities Best Practices: •Use version control : Use version control to enable tracking of changes and releases •Test and validate changes: Test and validate changes to help limit and detect errors Au tomate testing to reduce errors caused by manual processes and reduce the level of effort to test •Use configuration management systems : Use configuration management systems to make and track configuration changes These systems reduce errors caused by manual processes and reduce the level of effort to deploy changes •Use build and deployment management systems: Use build and deployment manage ment systems These systems reduce errors caused by manual processes and reduce the level of effort to deploy changes •Perform patch management : Perform patch management to gain features address issues and remain compliant with governance Automate patch management to reduce errors caused by manual processes and reduce the level of effort to patch •Share design standards: Share best practices across teams to increase awareness and maximize the benefits of development efforts •Implement practices to improve code quality : Implement practices to improve code qual ity and minimize defects For example testdriven development code reviews and stan dards adoption •Use multiple environments : Use multiple environments to experiment develop and test your workload Use increasing levels of controls as environments approach production to gain confidence your workload will operate as intended when deployed •Make frequent small reversible changes: Frequent small and reversible changes reduce the scope and impact of a change This eases troubleshooting enables faster remediation and provides the option to roll back a change •Fully automate integration and deployment : Automate build deployment and testing of the workload This reduces errors caused by manual processes and reduces the effort to deploy changes 53ArchivedAWS WellArchitected Framework OPS 6 How do you mitigate deployment risks? 
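As a minimal sketch, assuming pytest and a hypothetical application function, the following example shows the kind of automated check that can run in a build pipeline to test and validate changes before deployment:

```python
# Minimal sketch: automated checks that run in the build pipeline (for example,
# with "pytest") so defects are caught before changes reach production.
# The function under test is hypothetical.

def calculate_order_total(items):
    """Hypothetical application function under test."""
    return round(sum(i["price"] * i["quantity"] for i in items), 2)

def test_order_total_sums_line_items():
    items = [{"price": 2.50, "quantity": 2}, {"price": 1.25, "quantity": 4}]
    assert calculate_order_total(items) == 10.00

def test_empty_order_totals_zero():
    assert calculate_order_total([]) == 0.00
```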
Adopt approaches that provide fast feedback on quality and enable rapid recovery from changes that do not have desired outcomes Using these practices mitigates the impact of is sues introduced through the deployment of changes Best Practices: •Plan for unsuccessful changes : Plan to revert to a known good state or remediate in the production environment if a change does not have the desired outcome This preparation reduces recovery time through faster responses •Test and validate changes: Test changes and validate the results at all lifecycle stages to confirm new features and minimize the risk and impact of failed deployments •Use deployment management systems: Use deployment management systems to track and implement change This reduces errors cause by manual processes and reduces the ef fort to deploy changes •Test using limited deployments : Test with limited deployments alongside existing sys tems to confirm desired outcomes prior to full scale deployment For example use deploy ment canary testing or onebox deployments •Deploy using parallel environments: Implement changes onto parallel environments and then transition over to the new environment Maintain the prior environment until there is confirmation of successful deployment Doing so minimizes recovery time by enabling roll back to the previous environment •Deploy frequent small reversible changes: Use frequent small and reversible changes to reduce the scope of a change This results in easier troubleshooting and faster remedia tion with the option to roll back a change •Fully automate integration and deployment : Automate build deployment and testing of the workload This reduces errors cause by manual processes and reduces the effort to de ploy changes •Automate testing and rollback : Automate testing of deployed environments to confirm desired outcomes Automate rollback to previous known good state when outcomes are not achieved to minimize recovery time and reduce errors caused by manual processes 54ArchivedAWS WellArchitected Framework OPS 7 How do you know that you are ready to support a workload? 
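As a minimal sketch, assuming boto3 and hypothetical function, alias, and version identifiers, the following example shifts a small fraction of traffic to a new AWS Lambda version through a weighted alias, one way to test using limited deployments as described above:

```python
# Minimal sketch: canary a new Lambda version by routing 10% of an alias's
# traffic to it while 90% stays on the known-good version. Names are hypothetical.
import boto3

lam = boto3.client("lambda")

lam.update_alias(
    FunctionName="example-orders-handler",   # hypothetical function
    Name="live",                             # alias receiving production traffic
    FunctionVersion="7",                     # known-good version receives 90%
    RoutingConfig={
        "AdditionalVersionWeights": {"8": 0.10}  # new version receives 10%
    },
)

# If monitoring stays healthy, promote fully:
# lam.update_alias(FunctionName="example-orders-handler", Name="live",
#                  FunctionVersion="8",
#                  RoutingConfig={"AdditionalVersionWeights": {}})
```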
Evaluate the operational readiness of your workload processes and procedures and person nel to understand the operational risks related to your workload Best Practices: •Ensure personnel capability: Have a mechanism to validate that you have the appropri ate number of trained personnel to provide support for operational needs Train personnel and adjust personnel capacity as necessary to maintain effective support •Ensure consistent review of operational readiness : Ensure you have a consistent review of your readiness to operate a workload Reviews must include at a minimum the oper ational readiness of the teams and the workload and security requirements Implement review activities in code and trigger automated review in response to events where ap propriate to ensure consistency speed of execution and reduce errors caused by manual processes •Use runbooks to perform procedures : Runbooks are documented procedures to achieve specific outcomes Enable consistent and prompt responses to wellunderstood events by documenting procedures in runbooks Implement runbooks as code and trigger the execu tion of runbooks in response to events where appropriate to ensure consistency speed re sponses and reduce errors caused by manual processes •Use playbooks to investigate issues : Enable consistent and prompt responses to issues that are not well understood by documenting the investigation process in playbooks Playbooks are the predefined steps performed to identify the factors contributing to a fail ure scenario The results from any process step are used to determine the next steps to take until the issue is identified or escalated •Make informed decisions to deploy systems and changes: Evaluate the capabilities of the team to support the workload and the workload's compliance with governance Evaluate these against the benefits of deployment when determining whether to transition a sys tem or change into production Understand the benefits and risks to make informed deci sions 55ArchivedAWS WellArchitected Framework Operate OPS 8 How do you understand the health of your workload? 
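As a minimal sketch, assuming boto3 and a hypothetical instance ID, the following example implements a runbook as code for a well-understood event (an unresponsive instance), with escalation left to a playbook if the procedure fails:

```python
# Minimal sketch: a runbook implemented as code. The instance ID is hypothetical;
# a production runbook would add logging, approvals, and escalation paths.
import time
import boto3

ec2 = boto3.client("ec2")

def reboot_and_verify(instance_id, timeout_s=300):
    """Step 1: reboot the instance. Step 2: wait until status checks pass."""
    ec2.reboot_instances(InstanceIds=[instance_id])
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = ec2.describe_instance_status(InstanceIds=[instance_id])
        checks = status.get("InstanceStatuses", [])
        if checks and checks[0]["InstanceStatus"]["Status"] == "ok":
            return True   # healthy again; record the event and close it out
        time.sleep(15)
    return False          # escalate per the playbook if the runbook fails

# reboot_and_verify("i-0123456789abcdef0")
```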
Define capture and analyze workload metrics to gain visibility to workload events so that you can take appropriate action Best Practices: •Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business outcomes (for example order rate customer retention rate and profit versus operating expense) and customer outcomes (for example customer satisfaction) Evaluate KPIs to determine workload success •Define workload metrics : Define workload metrics to measure the achievement of KPIs (for example abandoned shopping carts orders placed cost price and allocated workload expense) Define workload metrics to measure the health of the workload (for example interface response time error rate requests made requests completed and utilization) Evaluate metrics to determine if the workload is achieving desired outcomes and to un derstand the health of the workload •Collect and analyze workload metrics: Perform regular proactive reviews of metrics to identify trends and determine where appropriate responses are needed •Establish workload metrics baselines: Establish baselines for metrics to provide expected values as the basis for comparison and identification of under and over performing com ponents Identify thresholds for improvement investigation and intervention •Learn expected patterns of activity for workload: Establish patterns of workload activity to identify anomalous behavior so that you can respond appropriately if required •Alert when workload outcomes are at risk: Raise an alert when workload outcomes are at risk so that you can respond appropriately if necessary •Alert when workload anomalies are detected: Raise an alert when workload anomalies are detected so that you can respond appropriately if necessary •Validate the achievement of outcomes and the effectiveness of KPIs and metrics : Cre ate a businesslevel view of your workload operations to help you determine if you are sat isfying needs and to identify areas that need improvement to reach business goals Vali date the effectiveness of KPIs and metrics and revise them if necessary 56ArchivedAWS WellArchitected Framework OPS 9 How do you understand the health of your operations? 
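As a minimal sketch, assuming boto3, a hypothetical metric and namespace, and an intentionally simplistic statistic, the following example derives a baseline from two weeks of hourly latency averages for use as a basis of comparison, as described above:

```python
# Minimal sketch: derive a naive baseline (mean of hourly averages) for a
# workload metric over the past 14 days. Namespace and metric are hypothetical.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def latency_baseline_ms():
    end = datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="ExampleApp",
        MetricName="RequestLatency",
        StartTime=end - timedelta(days=14),
        EndTime=end,
        Period=3600,                 # hourly datapoints
        Statistics=["Average"],
    )
    points = [p["Average"] for p in stats["Datapoints"]]
    return sum(points) / len(points) if points else 0.0

# Compare current values against the baseline to spot under- or over-performance.
```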
Define capture and analyze operations metrics to gain visibility to operations events so that you can take appropriate action Best Practices: •Identify key performance indicators: Identify key performance indicators (KPIs) based on desired business (for example new features delivered) and customer outcomes (for exam ple customer support cases) Evaluate KPIs to determine operations success •Define operations metrics: Define operations metrics to measure the achievement of KPIs (for example successful deployments and failed deployments) Define operations met rics to measure the health of operations activities (for example mean time to detect an in cident (MTTD) and mean time to recovery (MTTR) from an incident) Evaluate metrics to determine if operations are achieving desired outcomes and to understand the health of your operations activities •Collect and analyze operations metrics : Perform regular proactive reviews of metrics to identify trends and determine where appropriate responses are needed •Establish operations metrics baselines : Establish baselines for metrics to provide expect ed values as the basis for comparison and identification of under and over performing op erations activities •Learn the expected patterns of activity for operations: Establish patterns of operations activities to identify anomalous activity so that you can respond appropriately if necessary •Alert when operations outcomes are at risk : Raise an alert when operations outcomes are at risk so that you can respond appropriately if necessary •Alert when operations anomalies are detected : Raise an alert when operations anomalies are detected so that you can respond appropriately if necessary •Validate the achievement of outcomes and the effectiveness of KPIs and metrics : Cre ate a businesslevel view of your operations activities to help you determine if you are sat isfying needs and to identify areas that need improvement to reach business goals Vali date the effectiveness of KPIs and metrics and revise them if necessary 57ArchivedAWS WellArchitected Framework OPS 10 How do you manage workload and operations events? 
Prepare and validate procedures for responding to events to minimize their disruption to your workload Best Practices: •Use processes for event incident and problem management : Have processes to address observed events events that require intervention (incidents) and events that require in tervention and either recur or cannot currently be resolved (problems) Use these process es to mitigate the impact of these events on the business and your customers by ensuring timely and appropriate responses •Have a process per alert : Have a welldefined response (runbook or playbook) with a specifically identified owner for any event for which you raise an alert This ensures effec tive and prompt responses to operations events and prevents actionable events from be ing obscured by less valuable notifications •Prioritize operational events based on business impact: Ensure that when multiple events require intervention those that are most significant to the business are addressed first For example impacts can include loss of life or injury financial loss or damage to reputation or trust •Define escalation paths : Define escalation paths in your runbooks and playbooks includ ing what triggers escalation and procedures for escalation Specifically identify owners for each action to ensure effective and prompt responses to operations events •Enable push notifications : Communicate directly with your users (for example with email or SMS) when the services they use are impacted and again when the services return to normal operating conditions to enable users to take appropriate action •Communicate status through dashboards: Provide dashboards tailored to their target au diences (for example internal technical teams leadership and customers) to communicate the current operating status of the business and provide metrics of interest •Automate responses to events : Automate responses to events to reduce errors caused by manual processes and to ensure prompt and consistent responses 58ArchivedAWS WellArchitected Framework Evolve OPS 11 How do you evolve operations? 
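As a minimal sketch, assuming boto3 and a placeholder SNS topic ARN, the following example sends the kind of push notifications described above when a service is impacted and again when it returns to normal:

```python
# Minimal sketch: notify subscribed users about an operational event and its
# resolution. The topic ARN is a hypothetical placeholder.
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:example-service-status"

def notify(subject, message):
    sns.publish(TopicArn=TOPIC_ARN, Subject=subject, Message=message)

# notify("Degraded performance", "Order processing is delayed; we are investigating.")
# notify("Resolved", "Order processing has returned to normal operating conditions.")
```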
Dedicate time and resources for continuous incremental improvement to evolve the effectiveness and efficiency of your operations.

Best Practices:
• Have a process for continuous improvement: Regularly evaluate and prioritize opportunities for improvement to focus efforts where they can provide the greatest benefits.
• Perform post-incident analysis: Review customer-impacting events and identify the contributing factors and preventative actions. Use this information to develop mitigations to limit or prevent recurrence. Develop procedures for prompt and effective responses. Communicate contributing factors and corrective actions as appropriate, tailored to target audiences.
• Implement feedback loops: Include feedback loops in your procedures and workloads to help you identify issues and areas that need improvement.
• Perform knowledge management: Mechanisms exist for your team members to discover the information that they are looking for in a timely manner, access it, and identify that it's current and complete. Mechanisms are present to identify needed content, content in need of refresh, and content that should be archived so that it's no longer referenced.
• Define drivers for improvement: Identify drivers for improvement to help you evaluate and prioritize opportunities.
• Validate insights: Review your analysis results and responses with cross-functional teams and business owners. Use these reviews to establish common understanding, identify additional impacts, and determine courses of action. Adjust responses as appropriate.
• Perform operations metrics reviews: Regularly perform retrospective analysis of operations metrics with cross-team participants from different areas of the business. Use these reviews to identify opportunities for improvement, potential courses of action, and to share lessons learned.
• Document and share lessons learned: Document and share lessons learned from the execution of operations activities so that you can use them internally and across teams.
• Allocate time to make improvements: Dedicate time and resources within your processes to make continuous incremental improvements possible.

Security

SEC 1 How do you securely operate your workload?
To operate your workload securely, you must apply overarching best practices to every area of security. Take requirements and processes that you have defined in operational excellence at an organizational and workload level, and apply them to all areas. Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives. Automating security processes, testing, and validation allows you to scale your security operations.

Best Practices:
• Separate workloads using accounts: Organize workloads in separate accounts and group accounts based on function or a common set of controls, rather than mirroring your company's reporting structure. Start with security and infrastructure in mind to enable your organization to set common guardrails as your workloads grow.
• Secure AWS account: Secure access to your accounts, for example by enabling MFA and restricting use of the root user, and configure account contacts (see the account check sketch after this list).
• Identify and validate control objectives: Based on your compliance requirements and risks identified from your threat model, derive and validate the control objectives and controls that you need to apply to your workload. Ongoing validation of control objectives and controls helps you measure the effectiveness of risk mitigation.
• Keep up to date with security threats: Recognize attack vectors by staying up to date with the latest security threats to help you define and implement appropriate controls.
• Keep up to date with security recommendations: Stay up to date with both AWS and industry security recommendations to evolve the security posture of your workload.
• Automate testing and validation of security controls in pipelines: Establish secure baselines and templates for security mechanisms that are tested and validated as part of your build pipelines and processes. Use tools and automation to test and validate all security controls continuously. For example, scan items such as machine images and infrastructure-as-code templates for security vulnerabilities, irregularities, and drift from an established baseline at each stage.
• Identify and prioritize risks using a threat model: Use a threat model to identify and maintain an up-to-date register of potential threats. Prioritize your threats and adapt your security controls to prevent, detect, and respond. Revisit and maintain this in the context of the evolving security landscape.
• Evaluate and implement new security services and features regularly: AWS and APN Partners constantly release new features and services that allow you to evolve the security posture of your workload.
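A minimal sketch of the account-level checks mentioned above, using the IAM account summary to verify that the root user has MFA enabled and no long-term access keys. The interpretation of "baseline" here is an assumption; real account hardening covers far more than these two checks.

```python
import boto3

iam = boto3.client("iam")

def check_account_baseline() -> list[str]:
    """Return findings about basic account hygiene for the calling account."""
    findings = []
    summary = iam.get_account_summary()["SummaryMap"]

    # The root user should have MFA enabled and no long-term access keys.
    if summary.get("AccountMFAEnabled", 0) != 1:
        findings.append("Root user does not have MFA enabled.")
    if summary.get("AccountAccessKeysPresent", 0) != 0:
        findings.append("Root user has active access keys; remove them.")
    return findings

if __name__ == "__main__":
    for finding in check_account_baseline() or ["No baseline findings."]:
        print(finding)
```

Identity and Access Management

SEC 2 How do you manage identities for people and machines?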
There are two types of identities you need to manage when operating secure AWS workloads. Understanding the type of identity you need to manage and grant access helps you ensure the right identities have access to the right resources under the right conditions.

Human Identities: Your administrators, developers, operators, and end users require an identity to access your AWS environments and applications. These are members of your organization, or external users with whom you collaborate, who interact with your AWS resources via a web browser, client application, or interactive command-line tools.

Machine Identities: Your service applications, operational tools, and workloads require an identity to make requests to AWS services, for example, to read data. These identities include machines running in your AWS environment, such as Amazon EC2 instances or AWS Lambda functions. You may also manage machine identities for external parties who need access. Additionally, you may have machines outside of AWS that need access to your AWS environment.

Best Practices:
• Use strong sign-in mechanisms: Enforce minimum password length, and educate users to avoid common or reused passwords. Enforce multi-factor authentication (MFA) with software or hardware mechanisms to provide an additional layer.
• Use temporary credentials: Require identities to dynamically acquire temporary credentials. For workforce identities, use AWS Single Sign-On or federation with IAM roles to access AWS accounts. For machine identities, require the use of IAM roles instead of long-term access keys (see the role-assumption sketch after this list).
• Store and use secrets securely: For workforce and machine identities that require secrets, such as passwords to third-party applications, store them with automatic rotation using the latest industry standards in a specialized service.
• Rely on a centralized identity provider: For workforce identities, rely on an identity provider that enables you to manage identities in a centralized place. This enables you to create, manage, and revoke access from a single location, making it easier to manage access. It also reduces the requirement for multiple credentials and provides an opportunity to integrate with HR processes.
• Audit and rotate credentials periodically: When you cannot rely on temporary credentials and require long-term credentials, audit credentials to ensure that the defined controls (for example, MFA) are enforced, that credentials are rotated regularly, and that they have the appropriate access level.
• Leverage user groups and attributes: Place users with common security requirements in groups defined by your identity provider, and put mechanisms in place to ensure that user attributes that may be used for access control (for example, department or location) are correct and updated. Use these groups and attributes, rather than individual users, to control access. This allows you to manage access centrally by changing a user's group membership or attributes once, rather than updating many individual policies when a user's access needs change.
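To illustrate the temporary-credentials practice, the sketch below exchanges the caller's identity for short-lived credentials by assuming an IAM role with AWS STS. The role ARN and session name are assumptions for illustration; in practice the role would be provisioned with least-privilege permissions for the workload.

```python
import boto3

# Assumed role ARN; in practice this is a role provisioned for the workload.
ROLE_ARN = "arn:aws:iam::111122223333:role/workload-read-only"

def session_from_temporary_credentials(role_arn: str) -> boto3.Session:
    """Obtain short-lived credentials instead of using long-term access keys."""
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="example-temporary-session",
        DurationSeconds=900,  # keep the credential lifetime short
    )
    creds = response["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

# Example use: list buckets with the temporary credentials.
s3 = session_from_temporary_credentials(ROLE_ARN).client("s3")
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])
```

SEC 3 How do you manage permissions for people and machines?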
Manage permissions to control access for people and machine identities that require access to AWS and your workload. Permissions control who can access what, and under what conditions.

Best Practices:
• Define access requirements: Each component or resource of your workload needs to be accessed by administrators, end users, or other components. Have a clear definition of who or what should have access to each component, and choose the appropriate identity type and method of authentication and authorization.
• Grant least privilege access: Grant only the access that identities require by allowing access to specific actions on specific AWS resources under specific conditions. Rely on groups and identity attributes to dynamically set permissions at scale, rather than defining permissions for individual users. For example, you can allow a group of developers access to manage only resources for their project. This way, when a developer is removed from the group, access for the developer is revoked everywhere that group was used for access control, without requiring any changes to the access policies.
• Establish an emergency access process: Have a process that allows emergency access to your workload in the unlikely event of an automated process or pipeline issue. This will help you rely on least privilege access while ensuring users can obtain the right level of access when they require it. For example, establish a process for administrators to verify and approve their request.
• Reduce permissions continuously: As teams and workloads determine what access they need, remove permissions they no longer use, and establish review processes to achieve least privilege permissions. Continuously monitor and reduce unused identities and permissions.
• Define permission guardrails for your organization: Establish common controls that restrict access for all identities in your organization. For example, you can restrict access to specific AWS Regions, or prevent your operators from deleting common resources, such as an IAM role used by your central security team (see the guardrail sketch after this list).
• Manage access based on life cycle: Integrate access controls with operator and application life cycle and your centralized federation provider. For example, remove a user's access when they leave the organization or change roles.
• Analyze public and cross-account access: Continuously monitor findings that highlight public and cross-account access. Reduce public access and cross-account access to only those resources that require this type of access.
• Share resources securely: Govern the consumption of shared resources across accounts or within your AWS Organization. Monitor shared resources and review shared resource access.
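A possible shape for a Region guardrail is a service control policy (SCP) that denies requests outside an approved Region list. The Region list and the exempted global services below are assumptions to adapt, and creating the policy requires running from the AWS Organizations management account.

```python
import json
import boto3

# Guardrail sketch: deny requests outside approved Regions for all principals
# in the organization. Region list and exempted services are assumptions.
REGION_GUARDRAIL = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
        },
    }],
}

# Must be called from the Organizations management account.
organizations = boto3.client("organizations")
policy = organizations.create_policy(
    Name="approved-regions-guardrail",
    Description="Deny use of Regions outside the approved list",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(REGION_GUARDRAIL),
)
print(policy["Policy"]["PolicySummary"]["Id"])
```

The policy still needs to be attached to the appropriate organizational units before it takes effect.

Detection

SEC 4 How do you detect and investigate security events?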
Capture and analyze events from logs and metrics to gain visibility. Take action on security events and potential threats to help secure your workload.

Best Practices:
• Configure service and application logging: Configure logging throughout the workload, including application logs, resource logs, and AWS service logs. For example, ensure that AWS CloudTrail, Amazon CloudWatch Logs, Amazon GuardDuty, and AWS Security Hub are enabled for all accounts within your organization.
• Analyze logs, findings, and metrics centrally: All logs, metrics, and telemetry should be collected centrally and automatically analyzed to detect anomalies and indicators of unauthorized activity. A dashboard can provide easy-to-access insight into real-time health. For example, ensure that Amazon GuardDuty and Security Hub logs are sent to a central location for alerting and analysis.
• Automate response to events: Using automation to investigate and remediate events reduces human effort and error, and enables you to scale investigation capabilities. Regular reviews will help you tune automation tools and continuously iterate. For example, automate responses to Amazon GuardDuty events by automating the first investigation step, then iterate to gradually remove human effort.
• Implement actionable security events: Create alerts that are sent to, and can be actioned by, your team. Ensure that alerts include relevant information for the team to take action. For example, ensure that Amazon GuardDuty and AWS Security Hub alerts are sent to the team to action, or sent to response automation tooling with the team remaining informed by messaging from the automation framework.

Infrastructure Protection

SEC 5 How do you protect your network resources?

Any workload that has some form of network connectivity, whether it's the internet or a private network, requires multiple layers of defense to help protect from external and internal network-based threats.

Best Practices:
• Create network layers: Group components that share reachability requirements into layers. For example, a database cluster in a VPC with no need for internet access should be placed in subnets with no route to or from the internet. In a serverless workload operating without a VPC, similar layering and segmentation with microservices can achieve the same goal.
• Control traffic at all layers: Apply controls with a defense-in-depth approach for both inbound and outbound traffic. For example, for Amazon Virtual Private Cloud (VPC) this includes security groups, network ACLs, and subnets. For AWS Lambda, consider running in your private VPC with VPC-based controls (see the security group sketch after this list).
• Automate network protection: Automate protection mechanisms to provide a self-defending network based on threat intelligence and anomaly detection. For example, use intrusion detection and prevention tools that can proactively adapt to current threats and reduce their impact.
• Implement inspection and protection: Inspect and filter your traffic at each layer. For example, use a web application firewall to help protect against inadvertent access at the application network layer. For Lambda functions, third-party tools can add application-layer firewalling to your runtime environment.
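One way to express layered traffic control is a security group for the application tier that only accepts HTTPS from the load balancer tier's security group. The VPC ID and load balancer security group ID below are placeholders for your own environment.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumed identifiers; substitute your own VPC and load balancer security group.
VPC_ID = "vpc-0123456789abcdef0"
ALB_SECURITY_GROUP_ID = "sg-0123456789abcdef0"

# Application-tier security group that only accepts HTTPS from the load
# balancer tier, expressing the network layers above as explicit rules.
app_sg = ec2.create_security_group(
    GroupName="app-tier",
    Description="Application tier: HTTPS from the load balancer only",
    VpcId=VPC_ID,
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": ALB_SECURITY_GROUP_ID}],
    }],
)
```

Referencing the upstream security group, rather than an IP range, keeps the rule valid as load balancer nodes change.

SEC 6 How do you protect your compute resources?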
Compute resources in your workload require multiple layers of defense to help protect from external and internal threats. Compute resources include EC2 instances, containers, AWS Lambda functions, database services, IoT devices, and more.

Best Practices:
• Perform vulnerability management: Frequently scan and patch for vulnerabilities in your code, dependencies, and infrastructure to help protect against new threats.
• Reduce attack surface: Reduce your attack surface by hardening operating systems and minimizing the components, libraries, and externally consumable services in use.
• Implement managed services: Implement services that manage resources, such as Amazon RDS, AWS Lambda, and Amazon ECS, to reduce your security maintenance tasks as part of the shared responsibility model.
• Automate compute protection: Automate your protective compute mechanisms, including vulnerability management, reduction in attack surface, and management of resources.
• Enable people to perform actions at a distance: Removing the ability for interactive access reduces the risk of human error and the potential for manual configuration or management. For example, use a change management workflow to deploy EC2 instances using infrastructure as code, then manage EC2 instances using tools instead of allowing direct access or a bastion host.
• Validate software integrity: Implement mechanisms (for example, code signing) to validate that the software, code, and libraries used in the workload are from trusted sources and have not been tampered with.

Data Protection

SEC 7 How do you classify your data?

Classification provides a way to categorize data based on criticality and sensitivity in order to help you determine appropriate protection and retention controls.

Best Practices:
• Identify the data within your workload: This includes the type and classification of data, the associated business processes, the data owner, applicable legal and compliance requirements, where the data is stored, and the resulting controls that need to be enforced. This may include classifications to indicate whether the data is intended to be publicly available, whether it is for internal use only, such as customer personally identifiable information (PII), or whether it is for more restricted access, such as intellectual property, legally privileged, or marked sensitive, and more.
• Define data protection controls: Protect data according to its classification level. For example, secure data classified as public by using relevant recommendations, while protecting sensitive data with additional controls.
• Automate identification and classification: Automate identification and classification of data to reduce the risk of human error from manual interactions.
• Define data lifecycle management: Your defined lifecycle strategy should be based on sensitivity level as well as legal and organizational requirements. Aspects including the duration for which you retain data, data destruction, data access management, data transformation, and data sharing should be considered.

SEC 8 How do you protect your data at rest?
Protect your data at rest by implementing multiple controls to reduce the risk of unauthorized access or mishandling.

Best Practices:
• Implement secure key management: Encryption keys must be stored securely, with strict access control, for example by using a key management service such as AWS KMS. Consider using different keys, and access control to the keys, combined with AWS IAM and resource policies, to align with data classification levels and segregation requirements.
• Enforce encryption at rest: Enforce your encryption requirements based on the latest standards and recommendations to help protect your data at rest (see the bucket encryption sketch after this list).
• Automate data-at-rest protection: Use automated tools to validate and enforce data-at-rest protection continuously, for example, verify that there are only encrypted storage resources.
• Enforce access control: Enforce access control with least privileges and mechanisms including backups, isolation, and versioning to help protect your data at rest. Prevent operators from granting public access to your data.
• Use mechanisms to keep people away from data: Keep all users away from directly accessing sensitive data and systems under normal operational circumstances. For example, provide a dashboard instead of direct access to a data store to run queries. Where CI/CD pipelines are not used, determine which controls and processes are required to adequately provide a normally disabled break-glass access mechanism.

SEC 9 How do you protect your data in transit?

Protect your data in transit by implementing multiple controls to reduce the risk of unauthorized access or loss.

Best Practices:
• Implement secure key and certificate management: Store encryption keys and certificates securely and rotate them at appropriate time intervals while applying strict access control, for example by using a certificate management service such as AWS Certificate Manager (ACM).
• Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements (the bucket policy in the sketch after this list denies unencrypted transport).
• Automate detection of unintended data access: Use tools such as GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol.
• Authenticate network communications: Verify the identity of communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec.
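One concrete pairing of the at-rest and in-transit practices for Amazon S3 is sketched below: default SSE-KMS encryption for new objects, plus a bucket policy that denies any request not made over TLS. The bucket name and KMS key ARN are illustrative assumptions.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-workload-data"                                # assumed bucket name
KMS_KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"   # assumed key ARN

# Encryption at rest: default SSE-KMS for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KMS_KEY_ID,
            }
        }]
    },
)

# Encryption in transit: deny any request that does not arrive over TLS.
tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(tls_only_policy))
```

Incident Response

SEC 10 How do you anticipate, respond to, and recover from incidents?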
Preparation is critical to timely and effective investigation, response to, and recovery from security incidents, to help minimize disruption to your organization.

Best Practices:
• Identify key personnel and external resources: Identify internal and external personnel, resources, and legal obligations that would help your organization respond to an incident.
• Develop incident management plans: Create plans to help you respond to, communicate during, and recover from an incident. For example, you can start an incident response plan with the most likely scenarios for your workload and organization. Include how you would communicate and escalate both internally and externally.
• Prepare forensic capabilities: Identify and prepare forensic investigation capabilities that are suitable, including external specialists, tools, and automation.
• Automate containment capability: Automate containment and recovery of an incident to reduce response times and organizational impact.
• Pre-provision access: Ensure that incident responders have the correct access pre-provisioned in AWS to reduce the time from investigation through to recovery.
• Pre-deploy tools: Ensure that security personnel have the right tools pre-deployed in AWS to reduce the time from investigation through to recovery.
• Run game days: Practice incident response game days (simulations) regularly, incorporate lessons learned into your incident management plans, and continuously improve.

Reliability

Foundations

REL 1 How do you manage service quotas and constraints?

For cloud-based workload architectures, there are service quotas (also referred to as service limits). These quotas exist to prevent accidentally provisioning more resources than you need and to limit request rates on API operations so as to protect services from abuse. There are also resource constraints, for example, the rate at which you can push bits down a fiber-optic cable, or the amount of storage on a physical disk.

Best Practices:
• Be aware of service quotas and constraints: You are aware of your default quotas and quota increase requests for your workload architecture. You additionally know which resource constraints, such as disk or network, are potentially impactful.
• Manage service quotas across accounts and Regions: If you are using multiple AWS accounts or AWS Regions, ensure that you request the appropriate quotas in all environments in which your production workloads run.
• Accommodate fixed service quotas and constraints through architecture: Be aware of unchangeable service quotas and physical resources, and architect to prevent these from impacting reliability.
• Monitor and manage quotas: Evaluate your potential usage and increase your quotas appropriately, allowing for planned growth in usage.
• Automate quota management: Implement tools to alert you when thresholds are being approached. By using the AWS Service Quotas APIs, you can automate quota increase requests (see the sketch after this list).
• Ensure that a sufficient gap exists between the current quotas and the maximum usage to accommodate failover: When a resource fails, it may still be counted against quotas until it's successfully terminated. Ensure that your quotas cover the overlap of all failed resources with replacements before the failed resources are terminated. You should consider an Availability Zone failure when calculating this gap.
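A rough sketch of automated quota management using the Service Quotas APIs follows. The quota code shown is assumed to be the EC2 On-Demand Standard instances quota, and the headroom factor is an illustrative choice; list quotas for a service with list_service_quotas() to confirm codes for your own workload.

```python
import boto3

quotas = boto3.client("service-quotas")

# Assumed example quota: EC2 running On-Demand Standard instances.
SERVICE_CODE = "ec2"
QUOTA_CODE = "L-1216C47A"
REQUIRED_HEADROOM = 1.25  # quota should be at least 125% of planned peak usage

def ensure_quota_headroom(planned_peak: float) -> None:
    """Request a quota increase if the current quota lacks failover headroom."""
    current = quotas.get_service_quota(
        ServiceCode=SERVICE_CODE, QuotaCode=QUOTA_CODE
    )["Quota"]["Value"]

    desired = planned_peak * REQUIRED_HEADROOM
    if current < desired:
        request = quotas.request_service_quota_increase(
            ServiceCode=SERVICE_CODE,
            QuotaCode=QUOTA_CODE,
            DesiredValue=desired,
        )
        print("Increase requested:", request["RequestedQuota"]["Status"])
    else:
        print(f"Quota of {current} covers planned peak of {planned_peak}.")

ensure_quota_headroom(planned_peak=400)
```

REL 2 How do you plan your network topology?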
Workloads often exist in multiple environments. These include multiple cloud environments (both publicly accessible and private) and possibly your existing data center infrastructure. Plans must include network considerations such as intra- and inter-system connectivity, public IP address management, private IP address management, and domain name resolution.

Best Practices:
• Use highly available network connectivity for your workload public endpoints: These endpoints, and the routing to them, must be highly available. To achieve this, use highly available DNS, content delivery networks (CDNs), API Gateway, load balancing, or reverse proxies.
• Provision redundant connectivity between private networks in the cloud and on-premises environments: Use multiple AWS Direct Connect (DX) connections or VPN tunnels between separately deployed private networks. Use multiple DX locations for high availability. If using multiple AWS Regions, ensure redundancy in at least two of them. You might want to evaluate AWS Marketplace appliances that terminate VPNs. If you use AWS Marketplace appliances, deploy redundant instances for high availability in different Availability Zones.
• Ensure IP subnet allocation accounts for expansion and availability: Amazon VPC IP address ranges must be large enough to accommodate workload requirements, including factoring in future expansion and allocation of IP addresses to subnets across Availability Zones. This includes load balancers, EC2 instances, and container-based applications.
• Prefer hub-and-spoke topologies over many-to-many mesh: If more than two network address spaces (for example, VPCs and on-premises networks) are connected via VPC peering, AWS Direct Connect, or VPN, then use a hub-and-spoke model like that provided by AWS Transit Gateway.
• Enforce non-overlapping private IP address ranges in all private address spaces where they are connected: The IP address ranges of each of your VPCs must not overlap when peered or connected via VPN. You must similarly avoid IP address conflicts between a VPC and on-premises environments, or with other cloud providers that you use. You must also have a way to allocate private IP address ranges when needed.

Workload Architecture

REL 3 How do you design your workload service architecture?
Build highly scalable and reliable workloads using a serviceoriented architecture (SOA) or a microservices architecture Serviceoriented architecture (SOA) is the practice of making soft ware components reusable via service interfaces Microservices architecture goes further to make components smaller and simpler Best Practices: •Choose how to segment your workload : Monolithic architecture should be avoided In stead you should choose between SOA and microservices When making each choice bal ance the benefits against the complexities—what is right for a new product racing to first launch is different than what a workload built to scale from the start needs The benefits of using smaller segments include greater agility organizational flexibility and scalability Complexities include possible increased latency more complex debugging and increased operational burden •Build services focused on specific business domains and functionality: SOA builds ser vices with welldelineated functions defined by business needs Microservices use domain models and bounded context to limit this further so that each service does just one thing Focusing on specific functionality enables you to differentiate the reliability requirements of different services and target investments more specifically A concise business problem and having a small team associated with each service also enables easier organizational scaling •Provide service contracts per API : Service contracts are documented agreements between teams on service integration and include a machinereadable API definition rate limits and performance expectations A versioning strategy allows clients to continue using the existing API and migrate their applications to the newer API when they are ready Deploy ment can happen anytime as long as the contract is not violated The service provider team can use the technology stack of their choice to satisfy the API contract Similarly the service consumer can use their own technology 71ArchivedAWS WellArchitected Framework REL 4 How do you design interactions in a distributed system to prevent failures? 
Distributed systems rely on communications networks to interconnect components, such as servers or services. Your workload must operate reliably despite data loss or latency in these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices prevent failures and improve mean time between failures (MTBF).

Best Practices:
• Identify which kind of distributed system is required: Hard real-time distributed systems require responses to be given synchronously and rapidly, while soft real-time systems have a more generous time window of minutes or more for response. Offline systems handle responses through batch or asynchronous processing. Hard real-time distributed systems have the most stringent reliability requirements.
• Implement loosely coupled dependencies: Dependencies such as queuing systems, streaming systems, workflows, and load balancers are loosely coupled. Loose coupling helps isolate the behavior of a component from other components that depend on it, increasing resiliency and agility.
• Make all responses idempotent: An idempotent service promises that each request is completed exactly once, such that making multiple identical requests has the same effect as making a single request. An idempotent service makes it easier for a client to implement retries without fear that a request will be erroneously processed multiple times. To do this, clients can issue API requests with an idempotency token; the same token is used whenever the request is repeated. An idempotent service API uses the token to return a response identical to the response that was returned the first time that the request was completed (see the sketch after this list).
• Do constant work: Systems can fail when there are large, rapid changes in load. For example, a health check system that monitors the health of thousands of servers should send the same size payload (a full snapshot of the current state) each time. Whether no servers are failing, or all of them, the health check system is doing constant work with no large, rapid changes.
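A minimal sketch of the idempotency-token pattern, assuming an in-memory token store and a hypothetical payment operation; a real service would persist tokens durably, for example with a conditional write to a database.

```python
import uuid

class PaymentService:
    """Illustrative service that deduplicates requests by idempotency token."""

    def __init__(self):
        self._responses_by_token = {}  # stand-in for a durable token store

    def charge(self, token: str, amount: int) -> dict:
        # If this token was seen before, return the original response instead
        # of processing the charge a second time.
        if token in self._responses_by_token:
            return self._responses_by_token[token]

        response = {"charge_id": str(uuid.uuid4()), "amount": amount}
        self._responses_by_token[token] = response
        return response

# Client side: reuse the same token for every retry of one logical request.
service = PaymentService()
token = str(uuid.uuid4())
first = service.charge(token, amount=100)
retry = service.charge(token, amount=100)   # network retry of the same request
assert first == retry                       # the charge happened exactly once
```

REL 5 How do you design interactions in a distributed system to mitigate or withstand failures?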
Distributed systems rely on communications networks to interconnect components (such as servers or services). Your workload must operate reliably despite data loss or latency over these networks. Components of the distributed system must operate in a way that does not negatively impact other components or the workload. These best practices enable workloads to withstand stresses or failures, recover from them more quickly, and mitigate the impact of such impairments. The result is improved mean time to recovery (MTTR).

Best Practices:
• Implement graceful degradation to transform applicable hard dependencies into soft dependencies: When a component's dependencies are unhealthy, the component itself can still function, although in a degraded manner. For example, when a dependency call fails, fail over to a predetermined static response.
• Throttle requests: This is a mitigation pattern to respond to an unexpected increase in demand. Some requests are honored, but those over a defined limit are rejected and return a message indicating that they have been throttled. The expectation on clients is that they will back off and abandon the request, or try again at a slower rate.
• Control and limit retry calls: Use exponential backoff to retry after progressively longer intervals. Introduce jitter to randomize those retry intervals, and limit the maximum number of retries (see the sketch after this list).
• Fail fast and limit queues: If the workload is unable to respond successfully to a request, then fail fast. This allows the release of resources associated with the request, and permits the service to recover if it's running out of resources. If the workload is able to respond successfully but the rate of requests is too high, then use a queue to buffer requests instead. However, do not allow long queues that can result in serving stale requests that the client has already given up on.
• Set client timeouts: Set timeouts appropriately, verify them systematically, and do not rely on default values, as they are generally set too high.
• Make services stateless where possible: Services should either not require state, or should offload state such that between different client requests there is no dependence on locally stored data on disk or in memory. This enables servers to be replaced at will without causing an availability impact. Amazon ElastiCache or Amazon DynamoDB are good destinations for offloaded state.
• Implement emergency levers: These are rapid processes that may mitigate availability impact on your workload. They can be operated in the absence of a root cause. An ideal emergency lever reduces the cognitive burden on the resolvers to zero by providing fully deterministic activation and deactivation criteria. Example levers include blocking all robot traffic or serving a static response. Levers are often manual, but they can also be automated.
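The retry guidance above can be summarized in a small helper with capped exponential backoff and full jitter. The delays and attempt limit are illustrative defaults, not prescribed values.

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.2, max_delay=5.0):
    """Retry a flaky zero-argument callable with capped backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt; let the caller fail fast
            # Exponential backoff capped at max_delay, with full jitter so that
            # retrying clients do not synchronize into waves of requests.
            ceiling = min(max_delay, base_delay * (2 ** (attempt - 1)))
            time.sleep(random.uniform(0, ceiling))

# Example: wrap a network call that sometimes times out.
# result = call_with_retries(lambda: client.get_item(...))
```

In practice, only retry errors that are known to be transient, and combine retries with the client timeouts described above.

Change Management

REL 6 How do you monitor workload resources?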
Logs and metrics are powerful tools for gaining insight into the health of your workload. You can configure your workload to monitor logs and metrics and send notifications when thresholds are crossed or significant events occur. Monitoring enables your workload to recognize when low-performance thresholds are crossed or failures occur, so it can recover automatically in response.

Best Practices:
• Monitor all components for the workload (Generation): Monitor the components of the workload with Amazon CloudWatch or third-party tools. Monitor AWS services with the Personal Health Dashboard.
• Define and calculate metrics (Aggregation): Store log data and apply filters where necessary to calculate metrics, such as counts of a specific log event, or latency calculated from log event timestamps.
• Send notifications (Real-time processing and alarming): Organizations that need to know receive notifications when significant events occur.
• Automate responses (Real-time processing and alarming): Use automation to take action when an event is detected, for example, to replace failed components.
• Storage and analytics: Collect log files and metrics histories and analyze these for broader trends and workload insights.
• Conduct reviews regularly: Frequently review how workload monitoring is implemented and update it based on significant events and changes.
• Monitor end-to-end tracing of requests through your system: Use AWS X-Ray or third-party tools so that developers can more easily analyze and debug distributed systems to understand how their applications and underlying services are performing.

REL 7 How do you design your workload to adapt to changes in demand?

A scalable workload provides elasticity to add or remove resources automatically, so that they closely match the current demand at any given point in time.

Best Practices:
• Use automation when obtaining or scaling resources: When replacing impaired resources or scaling your workload, automate the process by using managed AWS services, such as Amazon S3 and AWS Auto Scaling. You can also use third-party tools and AWS SDKs to automate scaling (see the sketch after this list).
• Obtain resources upon detection of impairment to a workload: Scale resources reactively when necessary if availability is impacted, to restore workload availability.
• Obtain resources upon detection that more resources are needed for a workload: Scale resources proactively to meet demand and avoid availability impact.
• Load test your workload: Adopt a load testing methodology to measure whether scaling activity meets workload requirements.
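A common form of scaling automation is a target tracking policy on an Auto Scaling group, which adds or removes instances to hold a metric near a target. The group name and target value below are assumptions for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Assumed Auto Scaling group name and target value; the policy adds or removes
# instances to keep average CPU utilization near the target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="keep-average-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```

REL 8 How do you implement change?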
Controlled changes are necessary to deploy new functionality and to ensure that the workloads and the operating environment are running known software and can be patched or replaced in a predictable manner. If changes are uncontrolled, it becomes difficult to predict their effect or to address issues that arise because of them.

Best Practices:
• Use runbooks for standard activities such as deployment: Runbooks are the predefined steps used to achieve specific outcomes. Use runbooks to perform standard activities, whether done manually or automatically. Examples include deploying a workload, patching it, or making DNS modifications.
• Integrate functional testing as part of your deployment: Functional tests are run as part of automated deployment. If success criteria are not met, the pipeline is halted or rolled back.
• Integrate resiliency testing as part of your deployment: Resiliency tests (as part of chaos engineering) are run as part of the automated deployment pipeline in a pre-production environment.
• Deploy using immutable infrastructure: This is a model that mandates that no updates, security patches, or configuration changes happen in place on production workloads. When a change is needed, the architecture is built onto new infrastructure and deployed into production.
• Deploy changes with automation: Deployments and patching are automated to eliminate negative impact.

Failure Management

REL 9 How do you back up data?

Back up data, applications, and configuration to meet your requirements for recovery time objectives (RTO) and recovery point objectives (RPO).

Best Practices:
• Identify and back up all data that needs to be backed up, or reproduce the data from sources: Amazon S3 can be used as a backup destination for multiple data sources. AWS services such as Amazon EBS, Amazon RDS, and Amazon DynamoDB have built-in capabilities to create backups. Third-party backup software can also be used. Alternatively, if the data can be reproduced from other sources to meet RPO, you might not require a backup.
• Secure and encrypt backups: Detect access using authentication and authorization, such as AWS IAM, and detect data integrity compromise by using encryption.
• Perform data backup automatically: Configure backups to be taken automatically based on a periodic schedule, or by changes in the dataset. RDS instances, EBS volumes, DynamoDB tables, and S3 objects can all be configured for automatic backup. AWS Marketplace solutions or third-party solutions can also be used.
• Perform periodic recovery of the data to verify backup integrity and processes: Validate that your backup process implementation meets your recovery time objectives (RTO) and recovery point objectives (RPO) by performing a recovery test.

REL 10 How do you use fault isolation to protect your workload?
Fault isolated boundaries limit the effect of a failure within a workload to a limited number of components Components outside of the boundary are unaffected by the failure Using multiple fault isolated boundaries you can limit the impact on your workload Best Practices: •Deploy the workload to multiple locations : Distribute workload data and resources across multiple Availability Zones or where necessary across AWS Regions These loca tions can be as diverse as required •Automate recovery for components constrained to a single location: If components of the workload can only run in a single Availability Zone or onpremises data center you must implement the capability to do a complete rebuild of the workload within your de fined recovery objectives •Use bulkhead architectures: Like the bulkheads on a ship this pattern ensures that a fail ure is contained to a small subset of requests/users so the number of impaired requests is limited and most can continue without error Bulkheads for data are usually called parti tions or shards while bulkheads for services are known as cells 76ArchivedAWS WellArchitected Framework REL 11 How do you design your workload to withstand component failures? Workloads with a requirement for high availability and low mean time to recovery (MTTR) must be architected for resiliency Best Practices: •Monitor all components of the workload to detect failures: Continuously monitor the health of your workload so that you and your automated systems are aware of degrada tion or complete failure as soon as they occur Monitor for key performance indicators (KPIs) based on business value •Fail over to healthy resources : Ensure that if a resource failure occurs that healthy re sources can continue to serve requests For location failures (such as Availability Zone or AWS Region) ensure you have systems in place to failover to healthy resources in unim paired locations •Automate healing on all layers : Upon detection of a failure use automated capabilities to perform actions to remediate •Use static stability to prevent bimodal behavior: Bimodal behavior is when your work load exhibits different behavior under normal and failure modes for example relying on launching new instances if an Availability Zone fails You should instead build workloads that are statically stable and operate in only one mode In this case provision enough in stances in each Availability Zone to handle the workload load if one AZ were removed and then use Elastic Load Balancing or Amazon Route 53 health checks to shift load away from the impaired instances •Send notifications when events impact availability: Notifications are sent upon the de tection of significant events even if the issue caused by the event was automatically re solved 77ArchivedAWS WellArchitected Framework REL 12 How do you test reliability? 
After you have designed your workload to be resilient to the stresses of production testing is the only way to ensure that it will operate as designed and deliver the resiliency you expect Best Practices: •Use playbooks to investigate failures: Enable consistent and prompt responses to fail ure scenarios that are not well understood by documenting the investigation process in playbooks Playbooks are the predefined steps performed to identify the factors contribut ing to a failure scenario The results from any process step are used to determine the next steps to take until the issue is identified or escalated •Perform postincident analysis : Review customerimpacting events and identify the con tributing factors and preventative action items Use this information to develop mitiga tions to limit or prevent recurrence Develop procedures for prompt and effective respons es Communicate contributing factors and corrective actions as appropriate tailored to target audiences Have a method to communicate these causes to others as needed •Test functional requirements : These include unit tests and integration tests that validate required functionality •Test scaling and performance requirements: This includes load testing to validate that the workload meets scaling and performance requirements •Test resiliency using chaos engineering: Run tests that inject failures regularly into pre production and production environments Hypothesize how your workload will react to the failure then compare your hypothesis to the testing results and iterate if they do not match Ensure that production testing does not impact users •Conduct game days regularly : Use game days to regularly exercise your failure procedures as close to production as possible (including in production environments) with the peo ple who will be involved in actual failure scenarios Game days enforce measures to ensure that production testing does not impact users 78ArchivedAWS WellArchitected Framework REL 13 How do you plan for disaster recovery (DR)? Having backups and redundant workload components in place is the start of your DR strate gy RTO and RPO are your objectives for restoration of availability Set these based on busi ness needs Implement a strategy to meet these objectives considering locations and func tion of workload resources and data Best Practices: •Define recovery objectives for downtime and data loss : The workload has a recovery time objective (RTO) and recovery point objective (RPO) •Use defined recovery strategies to meet the recovery objectives: A disaster recovery (DR) strategy has been defined to meet objectives •Test disaster recovery implementation to validate the implementation: Regularly test failover to DR to ensure that RTO and RPO are met •Manage configuration drift at the DR site or region: Ensure that the infrastructure data and configuration are as needed at the DR site or region For example check that AMIs and service quotas are up to date •Automate recovery : Use AWS or thirdparty tools to automate system recovery and route traffic to the DR site or region 79ArchivedAWS WellArchitected Framework Performance Efficiency Selection PERF 1 How do you select the best performing architecture? 
Often multiple approaches are required for optimal performance across a workload Wellar chitected systems use multiple solutions and features to improve performance Best Practices: •Understand the available services and resources : Learn about and understand the wide range of services and resources available in the cloud Identify the relevant services and configuration options for your workload and understand how to achieve optimal perfor mance •Define a process for architectural choices : Use internal experience and knowledge of the cloud or external resources such as published use cases relevant documentation or whitepapers to define a process to choose resources and services You should define a process that encourages experimentation and benchmarking with the services that could be used in your workload •Factor cost requirements into decisions : Workloads often have cost requirements for op eration Use internal cost controls to select resource types and sizes based on predicted re source need •Use policies or reference architectures: Maximize performance and efficiency by evaluat ing internal policies and existing reference architectures and using your analysis to select services and configurations for your workload •Use guidance from your cloud provider or an appropriate partner: Use cloud company resources such as solutions architects professional services or an appropriate partner to guide your decisions These resources can help review and improve your architecture for optimal performance •Benchmark existing workloads: Benchmark the performance of an existing workload to understand how it performs on the cloud Use the data collected from benchmarks to dri ve architectural decisions •Load test your workload: Deploy your latest workload architecture on the cloud using dif ferent resource types and sizes Monitor the deployment to capture performance metrics that identify bottlenecks or excess capacity Use this performance information to design or improve your architecture and resource selection 80ArchivedAWS WellArchitected Framework PERF 2 How do you select your compute solution? 
The optimal compute solution for a workload varies based on application design usage pat terns and configuration settings Architectures can use different compute solutions for vari ous components and enable different features to improve performance Selecting the wrong compute solution for an architecture can lead to lower performance efficiency Best Practices: •Evaluate the available compute options : Understand the performance characteristics of the computerelated options available to you Know how instances containers and func tions work and what advantages or disadvantages they bring to your workload •Understand the available compute configuration options: Understand how various op tions complement your workload and which configuration options are best for your sys tem Examples of these options include instance family sizes features (GPU I/O) function sizes container instances and single versus multitenancy •Collect computerelated metrics : One of the best ways to understand how your compute systems are performing is to record and track the true utilization of various resources This data can be used to make more accurate determinations about resource requirements •Determine the required configuration by rightsizing: Analyze the various performance characteristics of your workload and how these characteristics relate to memory network and CPU usage Use this data to choose resources that best match your workload's profile For example a memoryintensive workload such as a database could be served best by the rfamily of instances However a bursting workload can benefit more from an elastic container system •Use the available elasticity of resources : The cloud provides the flexibility to expand or reduce your resources dynamically through a variety of mechanisms to meet changes in demand Combined with computerelated metrics a workload can automatically respond to changes and utilize the optimal set of resources to achieve its goal •Reevaluate compute needs based on metrics: Use systemlevel metrics to identify the behavior and requirements of your workload over time Evaluate your workload's needs by comparing the available resources with these requirements and make changes to your compute environment to best match your workload's profile For example over time a sys tem might be observed to be more memoryintensive than initially thought so moving to a different instance family or size could improve both performance and efficiency 81ArchivedAWS WellArchitected Framework PERF 3 How do you select your storage solution? 
The optimal storage solution for a system varies based on the kind of access method (block file or object) patterns of access (random or sequential) required throughput frequency of access (online offline archival) frequency of update (WORM dynamic) and availability and durability constraints Wellarchitected systems use multiple storage solutions and enable different features to improve performance and use resources efficiently Best Practices: •Understand storage characteristics and requirements: Understand the different charac teristics (for example shareable file size cache size access patterns latency throughput and persistence of data) that are required to select the services that best fit your workload such as object storage block storage file storage or instance storage •Evaluate available configuration options: Evaluate the various characteristics and config uration options and how they relate to storage Understand where and how to use provi sioned IOPS SSDs magnetic storage object storage archival storage or ephemeral stor age to optimize storage space and performance for your workload •Make decisions based on access patterns and metrics : Choose storage systems based on your workload's access patterns and configure them by determining how the workload accesses data Increase storage efficiency by choosing object storage over block storage Configure the storage options you choose to match your data access patterns 82ArchivedAWS WellArchitected Framework PERF 4 How do you select your database solution? The optimal database solution for a system varies based on requirements for availability consistency partition tolerance latency durability scalability and query capability Many systems use different database solutions for various subsystems and enable different fea tures to improve performance Selecting the wrong database solution and features for a sys tem can lead to lower performance efficiency Best Practices: •Understand data characteristics : Understand the different characteristics of data in your workload Determine if the workload requires transactions how it interacts with data and what its performance demands are Use this data to select the best performing database approach for your workload (for example relational databases NoSQL Keyvalue docu ment wide column graph time series or inmemory storage) •Evaluate the available options: Evaluate the services and storage options that are avail able as part of the selection process for your workload's storage mechanisms Understand how and when to use a given service or system for data storage Learn about available configuration options that can optimize database performance or efficiency such as provi sioned IOPs memory and compute resources and caching •Collect and record database performance metrics : Use tools libraries and systems that record performance measurements related to database performance For example mea sure transactions per second slow queries or system latency introduced when accessing the database Use this data to understand the performance of your database systems •Choose data storage based on access patterns: Use the access patterns of the workload to decide which services and technologies to use For example utilize a relational database for workloads that require transactions or a keyvalue store that provides higher through put but is eventually consistent where applicable •Optimize data storage based on access patterns and metrics : Use performance charac teristics and access patterns that optimize how data is 
stored or queried to achieve the best possible performance Measure how optimizations such as indexing key distribution data warehouse design or caching strategies impact system performance or overall effi ciency 83ArchivedAWS WellArchitected Framework PERF 5 How do you configure your networking solution? The optimal network solution for a workload varies based on latency throughput require ments jitter and bandwidth Physical constraints such as user or onpremises resources de termine location options These constraints can be offset with edge locations or resource placement Best Practices: •Understand how networking impacts performance : Analyze and understand how net workrelated decisions impact workload performance For example network latency often impacts the user experience and using the wrong protocols can starve network capacity through excessive overhead •Evaluate available networking features: Evaluate networking features in the cloud that may increase performance Measure the impact of these features through testing metrics and analysis For example take advantage of networklevel features that are available to reduce latency network distance or jitter •Choose appropriately sized dedicated connectivity or VPN for hybrid workloads: When there is a requirement for onpremise communication ensure that you have adequate bandwidth for workload performance Based on bandwidth requirements a single dedicat ed connection or a single VPN might not be enough and you must enable traffic load bal ancing across multiple connections •Leverage loadbalancing and encryption offloading: Distribute traffic across multiple resources or services to allow your workload to take advantage of the elasticity that the cloud provides You can also use load balancing for offloading encryption termination to improve performance and to manage and route traffic effectively •Choose network protocols to improve performance: Make decisions about protocols for communication between systems and networks based on the impact to the workload’s performance •Choose your workload’s location based on network requirements: Use the cloud loca tion options available to reduce network latency or improve throughput Utilize AWS Re gions Availability Zones placement groups and edge locations such as Outposts Local Regions and Wavelength to reduce network latency or improve throughput •Optimize network configuration based on metrics: Use collected and analyzed data to make informed decisions about optimizing your network configuration Measure the im pact of those changes and use the impact measurements to make future decisions 84ArchivedAWS WellArchitected Framework Review PERF 6 How do you evolve your workload to take advantage of new releases? 
When architecting workloads there are finite options that you can choose from However over time new technologies and approaches become available that could improve the per formance of your workload Best Practices: •Stay uptodate on new resources and services : Evaluate ways to improve performance as new services design patterns and product offerings become available Determine which of these could improve performance or increase the efficiency of the workload through ad hoc evaluation internal discussion or external analysis •Define a process to improve workload performance : Define a process to evaluate new services design patterns resource types and configurations as they become available For example run existing performance tests on new instance offerings to determine their po tential to improve your workload •Evolve workload performance over time : As an organization use the information gath ered through the evaluation process to actively drive adoption of new services or resources when they become available 85ArchivedAWS WellArchitected Framework Monitoring PERF 7 How do you monitor your resources to ensure they are performing? System performance can degrade over time Monitor system performance to identify degra dation and remediate internal or external factors such as the operating system or applica tion load Best Practices: •Record performancerelated metrics : Use a monitoring and observability service to record performancerelated metrics For example record database transactions slow queries I/O latency HTTP request throughput service latency or other key data •Analyze metrics when events or incidents occur : In response to (or during) an event or incident use monitoring dashboards or reports to understand and diagnose the impact These views provide insight into which portions of the workload are not performing as ex pected •Establish Key Performance Indicators (KPIs) to measure workload performance : Identi fy the KPIs that indicate whether the workload is performing as intended For example an APIbased workload might use overall response latency as an indication of overall perfor mance and an ecommerce site might choose to use the number of purchases as its KPI •Use monitoring to generate alarmbased notifications: Using the performancerelated key performance indicators (KPIs) that you defined use a monitoring system that gener ates alarms automatically when these measurements are outside expected boundaries •Review metrics at regular intervals : As routine maintenance or in response to events or incidents review which metrics are collected Use these reviews to identify which metrics were key in addressing issues and which additional metrics if they were being tracked would help to identify address or prevent issues •Monitor and alarm proactively : Use key performance indicators (KPIs) combined with monitoring and alerting systems to proactively address performancerelated issues Use alarms to trigger automated actions to remediate issues where possible Escalate the alarm to those able to respond if automated response is not possible For example you may have a system that can predict expected key performance indicators (KPI) values and alarm when they breach certain thresholds or a tool that can automatically halt or roll back deployments if KPIs are outside of expected values 86ArchivedAWS WellArchitected Framework Tradeoffs PERF 8 How do you use tradeoffs to improve performance? 
When architecting solutions determining tradeoffs enables you to select an optimal ap proach Often you can improve performance by trading consistency durability and space for time and latency Best Practices: •Understand the areas where performance is most critical : Understand and identify ar eas where increasing the performance of your workload will have a positive impact on ef ficiency or customer experience For example a website that has a large amount of cus tomer interaction can benefit from using edge services to move content delivery closer to customers •Learn about design patterns and services: Research and understand the various design patterns and services that help improve workload performance As part of the analysis identify what you could trade to achieve higher performance For example using a cache service can help to reduce the load placed on database systems; however it requires some engineering to implement safe caching or possible introduction of eventual consistency in some areas •Identify how tradeoffs impact customers and efficiency: When evaluating perfor mancerelated improvements determine which choices will impact your customers and workload efficiency For example if using a keyvalue data store increases system perfor mance it is important to evaluate how the eventually consistent nature of it will impact customers •Measure the impact of performance improvements : As changes are made to improve performance evaluate the collected metrics and data Use this information to determine impact that the performance improvement had on the workload the workload’s compo nents and your customers This measurement helps you understand the improvements that result from the tradeoff and helps you determine if any negative sideeffects were in troduced •Use various performancerelated strategies : Where applicable utilize multiple strategies to improve performance For example using strategies like caching data to prevent exces sive network or database calls using readreplicas for database engines to improve read rates sharding or compressing data where possible to reduce data volumes and buffering and streaming of results as they are available to avoid blocking 87ArchivedAWS WellArchitected Framework Cost Optimization Practice Cloud Financial Management COST 1 How do you implement cloud financial management? 
Implementing Cloud Financial Management enables organizations to realize business value and financial success as they optimize their cost and usage and scale on AWS Best Practices: •Establish a cost optimization function : Create a team that is responsible for establishing and maintaining cost awareness across your organization The team requires people from finance technology and business roles across the organization •Establish a partnership between finance and technology: Involve finance and technolo gy teams in cost and usage discussions at all stages of your cloud journey Teams regularly meet and discuss topics such as organizational goals and targets current state of cost and usage and financial and accounting practices •Establish cloud budgets and forecasts: Adjust existing organizational budgeting and fore casting processes to be compatible with the highly variable nature of cloud costs and us age Processes must be dynamic using trend based or business driverbased algorithms or a combination •Implement cost awareness in your organizational processes : Implement cost awareness into new or existing processes that impact usage and leverage existing processes for cost awareness Implement cost awareness into employee training •Report and notify on cost optimization: Configure AWS Budgets to provide notifications on cost and usage against targets Have regular meetings to analyze this workload's cost efficiency and to promote cost aware culture •Monitor cost proactively : Implement tooling and dashboards to monitor cost proactively for the workload Do not just look at costs and categories when you receive notifications This helps to identify positive trends and promote them throughout your organization •Keep up to date with new service releases : Consult regularly with experts or APN Partners to consider which services and features provide lower cost Review AWS blogs and other information sources 88ArchivedAWS WellArchitected Framework Expenditure and usage awareness COST 2 How do you govern usage? 
Establish policies and mechanisms to ensure that appropriate costs are incurred while objec tives are achieved By employing a checksandbalances approach you can innovate without overspending Best Practices: •Develop policies based on your organization requirements: Develop policies that define how resources are managed by your organization Policies should cover cost aspects of re sources and workloads including creation modification and decommission over the re source lifetime •Implement goals and targets: Implement both cost and usage goals for your workload Goals provide direction to your organization on cost and usage and targets provide mea surable outcomes for your workloads •Implement an account structure: Implement a structure of accounts that maps to your or ganization This assists in allocating and managing costs throughout your organization •Implement groups and roles : Implement groups and roles that align to your policies and control who can create modify or decommission instances and resources in each group For example implement development test and production groups This applies to AWS services and thirdparty solutions •Implement cost controls : Implement controls based on organization policies and defined groups and roles These ensure that costs are only incurred as defined by organization re quirements: for example control access to regions or resource types with IAM policies •Track project lifecycle : Track measure and audit the lifecycle of projects teams and en vironments to avoid using and paying for unnecessary resources 89ArchivedAWS WellArchitected Framework COST 3 How do you monitor usage and cost? Establish policies and procedures to monitor and appropriately allocate your costs This al lows you to measure and improve the cost efficiency of this workload Best Practices: •Configure detailed information sources: Configure the AWS Cost and Usage Report and Cost Explorer hourly granularity to provide detailed cost and usage information Configure your workload to have log entries for every delivered business outcome •Identify cost attribution categories : Identify organization categories that could be used to allocate cost within your organization •Establish organization metrics: Establish the organization metrics that are required for this workload Example metrics of a workload are customer reports produced or web pages served to customers •Configure billing and cost management tools: Configure AWS Cost Explorer and AWS Budgets inline with your organization policies •Add organization information to cost and usage : Define a tagging schema based on or ganization and workload attributes and cost allocation categories Implement tagging across all resources Use Cost Categories to group costs and usage according to organiza tion attributes •Allocate costs based on workload metrics : Allocate the workload's costs by metrics or business outcomes to measure workload cost efficiency Implement a process to ana lyze the AWS Cost and Usage Report with Amazon Athena which can provide insight and charge back capability COST 4 How do you decommission resources? 
Implement change control and resource management from project inception to endoflife This ensures you shut down or terminate unused resources to reduce waste Best Practices: •Track resources over their life time : Define and implement a method to track resources and their associations with systems over their life time You can use tagging to identify the workload or function of the resource •Implement a decommissioning process : Implement a process to identify and decommis sion orphaned resources •Decommission resources : Decommission resources triggered by events such as periodic audits or changes in usage Decommissioning is typically performed periodically and is manual or automated •Decommission resources automatically : Design your workload to gracefully handle re source termination as you identify and decommission noncritical resources resources that are not required or resources with low utilization 90ArchivedAWS WellArchitected Framework Costeffective resources COST 5 How do you evaluate cost when you select services? Amazon EC2 Amazon EBS and Amazon S3 are buildingblock AWS services Managed ser vices such as Amazon RDS and Amazon DynamoDB are higher level or application level AWS services By selecting the appropriate building blocks and managed services you can optimize this workload for cost For example using managed services you can reduce or re move much of your administrative and operational overhead freeing you to work on appli cations and businessrelated activities Best Practices: •Identify organization requirements for cost: Work with team members to define the bal ance between cost optimization and other pillars such as performance and reliability for this workload •Analyze all components of this workload : Ensure every workload component is analyzed regardless of current size or current costs Review effort should reflect potential benefit such as current and projected costs •Perform a thorough analysis of each component : Look at overall cost to the organization of each component Look at total cost of ownership by factoring in cost of operations and management especially when using managed services Review effort should reflect poten tial benefit: for example time spent analyzing is proportional to component cost •Select software with cost effective licensing: Open source software will eliminate soft ware licensing costs which can contribute significant costs to workloads Where licensed software is required avoid licenses bound to arbitrary attributes such as CPUs look for li censes that are bound to output or outcomes The cost of these licenses scales more close ly to the benefit they provide •Select components of this workload to optimize cost in line with organization prior ities : Factor in cost when selecting all components This includes using application level and managed services such as Amazon RDS Amazon DynamoDB Amazon SNS and Ama zon SES to reduce overall organization cost Use serverless and containers for compute such as AWS Lambda Amazon S3 for static websites and Amazon ECS Minimize license costs by using open source software or software that does not have license fees: for exam ple Amazon Linux for compute workloads or migrate databases to Amazon Aurora •Perform cost analysis for different usage over time: Workloads can change over time Some services or features are more cost effective at different usage levels By performing the analysis on each component over time and at projected usage you ensure the work load remains cost effective over its lifetime 
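As a practical illustration of the component-level cost analysis described above, the following AWS CLI sketch breaks a month of spend down by service using Cost Explorer; the time period and the tag key used for grouping are examples only, and you would substitute your own values:
# Monthly unblended cost, grouped by AWS service
aws ce get-cost-and-usage --time-period Start=2021-01-01,End=2021-02-01 --granularity MONTHLY --metrics UnblendedCost --group-by Type=DIMENSION,Key=SERVICE
# The same query grouped by a cost allocation tag (for example, a "workload" tag)
aws ce get-cost-and-usage --time-period Start=2021-01-01,End=2021-02-01 --granularity MONTHLY --metrics UnblendedCost --group-by Type=TAG,Key=workload
The per-service or per-tag totals returned by these calls can then feed the review-effort-versus-benefit decisions recommended in this question.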
91ArchivedAWS WellArchitected Framework COST 6 How do you meet cost targets when you select resource type size and number? Ensure that you choose the appropriate resource size and number of resources for the task at hand You minimize waste by selecting the most cost effective type size and number Best Practices: •Perform cost modeling : Identify organization requirements and perform cost modeling of the workload and each of its components Perform benchmark activities for the workload under different predicted loads and compare the costs The modeling effort should reflect potential benefit: for example time spent is proportional to component cost •Select resource type and size based on data: Select resource size or type based on data about the workload and resource characteristics: for example compute memory through put or write intensive This selection is typically made using a previous version of the workload (such as an onpremises version) using documentation or using other sources of information about the workload •Select resource type and size automatically based on metrics : Use metrics from the cur rently running workload to select the right size and type to optimize for cost Appropriate ly provision throughput sizing and storage for services such as Amazon EC2 Amazon Dy namoDB Amazon EBS (PIOPS) Amazon RDS Amazon EMR and networking This can be done with a feedback loop such as automatic scaling or by custom code in the workload COST 7 How do you use pricing models to reduce cost? Use the pricing model that is most appropriate for your resources to minimize expense Best Practices: •Perform pricing model analysis : Analyze each component of the workload Determine if the component and resources will be running for extended periods (for commitment dis counts) or dynamic and short running (for spot or ondemand) Perform an analysis on the workload using the Recommendations feature in AWS Cost Explorer •Implement regions based on cost : Resource pricing can be different in each region Fac toring in region cost ensures you pay the lowest overall price for this workload •Select third party agreements with cost efficient terms: Cost efficient agreements and terms ensure the cost of these services scales with the benefits they provide Select agree ments and pricing that scale when they provide additional benefits to your organization •Implement pricing models for all components of this workload: Permanently running resources should utilize reserved capacity such as Savings Plans or reserved Instances Short term capacity is configured to use Spot Instances or Spot Fleet On demand is only used for shortterm workloads that cannot be interrupted and do not run long enough for reserved capacity between 25% to 75% of the period depending on the resource type •Perform pricing model analysis at the master account level: Use Cost Explorer Savings Plans and Reserved Instance recommendations to perform regular analysis at the master account level for commitment discounts 92ArchivedAWS WellArchitected Framework COST 8 How do you plan for data transfer charges? 
Ensure that you plan and monitor data transfer charges so that you can make architectural decisions to minimize costs A small yet effective architectural change can drastically reduce your operational costs over time Best Practices: •Perform data transfer modeling : Gather organization requirements and perform data transfer modeling of the workload and each of its components This identifies the lowest cost point for its current data transfer requirements •Select components to optimize data transfer cost : All components are selected and ar chitecture is designed to reduce data transfer costs This includes using components such as WAN optimization and MultiAZ configurations •Implement services to reduce data transfer costs: Implement services to reduce data transfer: for example using a CDN such as Amazon CloudFront to deliver content to end users caching layers using Amazon ElastiCache or using AWS Direct Connect instead of VPN for connectivity to AWS Manage demand and supply resources COST 9 How do you manage demand and supply resources? For a workload that has balanced spend and performance ensure that everything you pay for is used and avoid significantly underutilizing instances A skewed utilization metric in ei ther direction has an adverse impact on your organization in either operational costs (de graded performance due to overutilization) or wasted AWS expenditures (due to overpro visioning) Best Practices: •Perform an analysis on the workload demand : Analyze the demand of the workload over time Ensure the analysis covers seasonal trends and accurately represents operating con ditions over the full workload lifetime Analysis effort should reflect potential benefit: for example time spent is proportional to the workload cost •Implement a buffer or throttle to manage demand: Buffering and throttling modify the demand on your workload smoothing out any peaks Implement throttling when your clients perform retries Implement buffering to store the request and defer processing un til a later time Ensure your throttles and buffers are designed so clients receive a response in the required time •Supply resources dynamically : Resources are provisioned in a planned manner This can be demand based such as through automatic scaling or timebased where demand is predictable and resources are provided based on time These methods result in the least amount of over or under provisioning 93ArchivedAWS WellArchitected Framework Optimize over time COST 10 How do you evaluate new services? As AWS releases new services and features it's a best practice to review your existing archi tectural decisions to ensure they continue to be the most cost effective Best Practices: •Develop a workload review process : Develop a process that defines the criteria and process for workload review The review effort should reflect potential benefit: for exam ple core workloads or workloads with a value of over 10% of the bill are reviewed quarter ly while workloads below 10% are reviewed annually •Review and analyze this workload regularly : Existing workloads are regularly reviewed as per defined processes 94
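To make the "supply resources dynamically" practice above concrete, the following AWS CLI sketch attaches a target tracking scaling policy to an existing Auto Scaling group so that capacity follows demand; the group name and the 50% CPU target are illustrative assumptions, not recommendations from this framework:
# Keep average CPU utilization of the group near 50% by scaling out and in automatically
aws autoscaling put-scaling-policy --auto-scaling-group-name example-workload-asg --policy-name cpu-target-tracking --policy-type TargetTrackingScaling --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'
A time-based alternative (scheduled actions) is usually a better fit when demand is predictable, as noted in the best practice.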
|
General
|
consultant
|
Best Practices
|
Backup_and_Recovery_Approaches_Using_AWS
|
ArchivedBackup and Recovery Approaches Using AWS June 2016 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 2 of 26 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 3 of 26 Contents Abstract 4 Introduction 4 Why Use AWS as a DataProtection Platform? 4 AWS Storage Services for Data Protection 5 Amazon S3 6 Amazon Glacier 6 AWS Storage Gateway 7 AWS Transfer Services 7 Designing a Backup and Recovery Solution 7 CloudNative Infrastructure 8 EBS SnapshotBased Protection 9 Database Backup Approaches 14 OnPremises to AWS Infrastructure 17 Hybrid Environments 20 Backing Up AWSBased Applications to Your Data Center 21 Migrating Backup Management to the Cloud for Availability 22 Example Hybrid Scenario 23 Archiving Data with AWS 24 Securing Backup Data in AWS 24 Conclusion 25 Contributors 25 Document Revisions 26 ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 4 of 26 Abstract This paper is intended for enterprise solution architects backup architects and IT administrators who are responsible for protecting data in their corporate IT environments It discusses production workloads and architectures that can be implemented using AWS to augment or replace a backup and recovery solution These approaches offer lower costs higher scalability and more durability to meet Recovery Time Objective (RTO) Recovery Point Objective (RPO) and compliance requirements Introduction As the growth of enterprise data accelerates the task of protecting it becomes more challenging Questions about the durability and scalability of backup methods are commonplace including this one: How does the cloud help meet my backup and archival needs? This paper covers a number of backup architectures (cloudnative applications hybrid and onpremises environments) and associated AWS services that can be used to build scalable and reliable dataprotection solutions Why Use AWS as a DataProtection Platform? 
Amazon Web Services (AWS) is a secure, high-performance, flexible, cost-effective, and easy-to-use cloud computing platform. AWS takes care of the undifferentiated heavy lifting and provides tools and resources you can use to build scalable backup and recovery solutions. There are many advantages to using AWS as part of your data protection strategy: Durability: Amazon Simple Storage Service (Amazon S3) and Amazon Glacier are designed for 99.999999999% (11 nines) of durability for the objects stored in them. Both platforms offer reliable locations for backup data. Security: AWS provides a number of options for access control and encrypting data in transit and at rest. Global infrastructure: AWS services are available around the globe, so you can back up and store data in the region that meets your compliance requirements. Compliance: AWS infrastructure is certified for compliance with standards such as Service Organization Controls (SOC), Statement on Standards for Attestation Engagements (SSAE) 16, International Organization for Standardization (ISO) 27001, Payment Card Industry Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), SEC1, and Federal Risk and Authorization Management Program (FedRAMP), so you can easily fit the backup solution into your existing compliance regimen. Scalability: With AWS, you don't have to worry about capacity. You can scale your consumption up or down as your needs change, without administrative overhead. Lower TCO: The scale of AWS operations drives down service costs and helps lower the total cost of ownership (TCO) of the storage. AWS passes these cost savings on to customers in the form of price drops. Pay-as-you-go pricing: Purchase AWS services as you need them and only for the period you plan to use them. AWS pricing has no upfront fees, termination penalties, or long-term contracts. AWS Storage Services for Data Protection Amazon S3 and Amazon Glacier are ideal services for backup and archival. Both are durable, low-cost storage platforms. Both offer unlimited capacity and require no volume or media management as backup data sets grow. The pay-for-what-you-use model and low cost per GB/month make these services a good fit for data protection use cases. 1 https://awsamazoncom/aboutaws/whatsnew/2015/09/amazonglacierreceives third partycomplianceassessmentforsecrule17a 4ffromcohassetassociates inc/ Amazon S3 Amazon S3 provides highly secure, scalable object storage. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. Amazon S3 stores data as objects within resources called buckets. AWS Storage Gateway and many third-party backup solutions can manage Amazon S3 objects on your behalf. You can store as many objects as you want in a bucket, and you can write, read, and delete objects in your bucket. Single objects can be up to 5 TB in size. Amazon S3 offers a range of storage classes designed for different use cases. These include: Amazon S3 Standard for general-purpose storage of frequently accessed data; Amazon S3 Standard - Infrequent Access for long-lived but less frequently accessed data; and Amazon Glacier for long-term archive. Amazon S3 also offers lifecycle policies you can configure to manage your data throughout its lifecycle. After a policy is set, your data will be migrated to the appropriate storage class without any changes to
your application. For more information, see S3 Storage Classes. Amazon Glacier Amazon Glacier is an extremely low-cost cloud archive storage service that provides secure and durable storage for data archiving and online backup. To keep costs low, Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are acceptable. With Amazon Glacier, you can reliably store large or small amounts of data for as little as $0.007 per gigabyte per month, a significant savings compared to on-premises solutions. Amazon Glacier is well suited for storage of backup data with long or indefinite retention requirements and for long-term data archiving. For more information, see Amazon Glacier. AWS Storage Gateway AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless and highly secure integration between your on-premises IT environment and the AWS storage infrastructure. For more information, see AWS Storage Gateway. AWS Transfer Services In addition to third-party gateways and connectors, you can use AWS options like AWS Direct Connect, AWS Snowball, AWS Storage Gateway, and Amazon S3 Transfer Acceleration to quickly transfer your data. For more information, see Cloud Data Migration. Designing a Backup and Recovery Solution When you develop a comprehensive strategy for backing up and restoring data, you must first identify the failure or disaster situations that can occur and their potential business impact. In some industries, you must consider regulatory requirements for data security, privacy, and records retention. You should implement backup processes that offer the appropriate level of granularity to meet the RTO and RPO of the business, including: file-level recovery, volume-level recovery, application-level recovery (for example, databases), and image-level recovery. The following sections describe backup, recovery, and archive approaches based on the organization of your infrastructure. IT infrastructure can broadly be categorized as cloud-native, on-premises, and hybrid. Cloud-Native Infrastructure This scenario describes a workload environment that exists entirely on AWS. As the following figure shows, it includes web servers, application servers, monitoring servers, databases, and Active Directory. If you are running all of your services from AWS, you can leverage many built-in features to meet your data protection and recovery needs. Figure 1: AWS Native Scenario EBS Snapshot-Based Protection When services are running in Amazon Elastic Compute Cloud2 (Amazon EC2), compute instances can use Amazon Elastic Block Store (Amazon EBS) volumes to store and access primary data. You can use this block storage for structured data, such as databases, or unstructured data, such as files in a file system on the volume. Amazon EBS provides the ability to create snapshots (backups) of any Amazon EBS volume. It takes a copy of the volume and places it in Amazon S3, where it is stored redundantly in multiple Availability Zones. The first snapshot is a full copy of the volume; ongoing snapshots store incremental block-level changes only. This is a fast and reliable way to restore full volume data. If you only need a partial restore, you can attach the volume to the running instance under a different
device name, mount it, and then use operating system copy commands to copy the data from the backup volume to the production volume. Amazon EBS snapshots can also be copied between AWS regions using the Amazon EBS snapshot copy capability, available in the console or from the command line, as described in the Amazon Elastic Compute Cloud User Guide.3 You can use this feature to store your backup in another region without having to manage the underlying replication technology. 2 http://awsamazoncom/ec2/ 3 http://docsawsamazoncom/AWSEC2/latest/UserGuide/ebscopysnapshothtml Creating EBS Snapshots When you create a snapshot, you protect your data directly to durable disk-based storage. You can use the AWS Management Console, the command line interface (CLI), or the APIs to create the Amazon EBS snapshot. In the Amazon EC2 console, on the Elastic Block Store Volumes page, choose Create Snapshot from the Actions menu. On the Create Snapshot dialog box, choose Create to create a snapshot that will be stored in Amazon S3. Figure 2: Using the EC2 Console to Create a Snapshot To use the CLI to create the snapshot, run the following command: aws ec2 create-snapshot --volume-id myvolumeid You can schedule and run aws ec2 create-snapshot commands on a regular basis to back up the EBS data. The economical pricing of Amazon S3 makes it possible for you to retain many generations of data. And because snapshots are block-based, you consume space only for data that's changed after the initial snapshot was created. Restoring from an EBS Snapshot To restore data from a snapshot, you can use the AWS Management Console, the CLI, or the APIs to create a volume from an existing snapshot. For example, follow these steps to restore a volume to an earlier point-in-time backup: 1. Use the following command to create a volume from the backup snapshot: aws ec2 create-volume --availability-zone us-west-1b --snapshot-id mysnapshotid 2. On the Amazon EC2 instance, unmount the existing volume. In Linux, use umount. In Windows, use the Logical Volume Manager (LVM). 3. Use the following command to detach the existing volume from the instance: aws ec2 detach-volume --volume-id oldvolumeid --instance-id myec2instanceid 4. Use the following command to attach the volume that was created from the snapshot: aws ec2 attach-volume --volume-id newvolumeid --instance-id myec2instanceid --device /dev/sdf 5. Remount the volume on the running instance. Creating Consistent or Hot Backups When you perform a backup, it's best to have the system in a state where it is not performing any I/O. In the ideal case, the machine isn't accepting traffic, but this is increasingly rare as 24/7 IT operations become the norm. For this reason, you must quiesce the file system or database in order to make a clean backup. The way in which you do this depends on your database or file system. The process for a database is as follows: If possible, put the database into hot backup mode. Run the Amazon EBS snapshot commands. Take the database out of hot backup mode or, if using a read replica, terminate the read replica instance. The process for a file system is similar, but depends on the capabilities of the operating system or file system. For example, XFS is a file system that can flush its data for a consistent backup. For more information, see xfs_freeze.4 If your
file system does not support the ability to freeze you should unmount it issue the snapshot command and then remount the file system Alternatively you can facilitate this process by using a logical volume manager that supports the freezing of I/O Because the snapshot process continues in the background and the creation of the snapshot is fast to execute and captures a point in time the volumes you ’re backing up only need to be unmounted for a matter of seconds Because the backup window is as small as possible the outage time is predictable and can be scheduled 4 https://accessredhatcom/documentation/en US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsfreeze html ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 13 of 26 Performing Multivolume Backups In some cases you can stripe data across multiple Amazon EBS volumes by using a logical volume manager to increase potential throughput When you use a logical volume manager (for example mdadm or LVM) it is important to perform the backup from the volume manager layer rather than the underlying EBS volumes This ensures all metadata is consistent and the subcomponent volumes are coherent There are a number of ways to accomplish this For example you can use the script created by alesticcom5 The memory buffers should be flushed to disk; the file system I/O to disk should be stopped; and a snapshot should be initiated simultaneously for all the volumes making up the RAID set After the snapshot for the volumes is initiated (usually a second or two) the file system can continue its operations The snapshots should be tagged so that you can manage them collectively during a restore You can also perform these backups from the logical volume manager or file system level In these cases using a traditional backup agent enables the data to be backed up over the network A number of agentbased backup solutions are available on the internet and in the AWS Marketplace 6 Remember that agent based backup software expects a consistent server name and IP address As a result using these tools with instances deployed in an Amazon virtual private cloud (VPC)7 is the best way to ensure reliability An alternative approach is to create a replica of the primary system volumes that exist on a single large volume This simplifies the backup process because only one large volume must be backed up and the backup does not take place on the primary system However you should first determine whether the single volume can perform sufficiently during the backup and whether the maximum volume size is appropriate for the application 5 https://githubcom/alestic/ec2consistentsnapshot 6 https://awsamazoncom/marketplace/ 7 http://awsamazoncom/vpc/ ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 14 of 26 Database Backup Approaches AWS has many options for databases You can run your own database on an EC2 instance or use one of the managed service database options provided by the Amazon Relational Database Service 8(Amazon RDS) If you are running your own database on an EC2 instance you can back up data to files using native tools (for example MySQL9 Oracle10 MSSQL11 PostgreSQL12) or create a snapshot of the volumes containing the data using one of the methods described in “EBS SnapshotBased Protection ” Using Database Replica Backups For databases that are built on RAID sets of Amazon EBS volumes you can remove the burden of backups on the primary database by creating a read replica of the 
database This is an up todate copy of the database that runs on a separate Amazon EC2 instance The replica database instance can be created using multiple disks similar to the source or the data can be consolidated to a single EBS volume You can then use one of the procedures described in “ EBS SnapshotBased Protection ” to snapshot the EBS volumes This approach is often used for large databases that are required to run 24/7 When that is the case the backup window required is too long and the production database cannot be taken down for such long periods Using Amazon RDS for Backups Amazon RDS includes features for automating database backups Amazon RDS creates a storage volume snapshot of your database instance backing up the entire DB instance not just individual databases 8 https://awsamazoncom/rds/ 9 http://devmysqlcom/doc/refman/57/en/backupandrecoveryhtml 10 http://docsoraclecom/cd/E11882_01/backup112/e10642/rcmbckbahtm#BRADV 8003 11 http://msdnmicrosoftcom/enus/library/ms187510aspx 12 http://wwwpostgresqlorg/docs/93/static/backuphtml ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 15 of 26 Amazon RDS provides two different methods for backing up and restoring your DB instances : Automated backups enable po intintime recovery of your DB instance Automated backups are turned on by default when you create a new DB instance Amazon RDS performs a full daily backup of your data during a window that you define when you create the DB instance You can configure a retention period of up to 35 days for the automated backup Amazon RDS uses these periodic data backups in conjunction with your transaction logs to enable you to restore your DB instance to any second during your retention period up to the LatestRestorableTime (typically the last five minutes) To find the latest restorable time for your DB instances you can use the DescribeDBInstances API call or look on the Description tab for the database in the Amazon RDS console When you initiate a point intime recovery transaction logs are applied to the most appropriate daily backup in order to restore your DB instance to the time you requested DB snapshots are userinitiated backups that enable you to back up your DB instan ce to a known state as frequently as you like and then restore to that state at any time You can use the Amazon RDS console or the CreateDBSnapshot API call to create DB snapshots The se snapshots have unlimited retention They are kept until you use the console or the DeleteDBSnapshot API call to explicitly delete them When you restore a database to a point in time or from a DB snapshot a new database instance with a new endpoint will be created In this way you can create multiple database instances from a specific DB snapshot or point in time You can use the AWS Management Console or a DeleteDBInstance call to delete the old database instance ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 16 of 26 Using AMI to Back Up EC2 Instances AWS stores system images in what are called Amazon Machine Images (AMI s) These images consist of the template for the root volume required to launch an instance You can use the AWS Management Console or the aws ec2 create image CLI command to back up the root volume as an AMI Figure 3: Using an AMI to Back Up and Launch an Instance When you register an AMI it is stored in your account using Amazon EBS snapshots These snapshots reside in Amazon S3 and are highly durable Figure 4: Using the EC2 Console to Create a 
Machine Image After you have created an AMI of your Amazon EC2 instance you can use the AMI to recreate the instance or launch more copies of the instance You can also copy AMIs from one region to another for application migration or disaster recovery ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 17 of 26 OnPremises to AWS Infrastructure This scenario describes a workload environment with no components in the cloud All resources including web servers application servers monitoring servers databases Active Directory and more are hosted either in the customer data center or through colocation Routers SwitchesWorkstations Application ServersFile ServersWeb ServersManagement Server Database ServersSAN Storage SAN StorageRouters Application ServersSwitchesWorkstations Workstations Workstations Database ServersFile ServersInternet Customer Interconnect NetworkSAN Storage RoutersSwitchesWorkstations Database ServersFile Servers Application ServersApplication ServersApplication ServersColocation Hosting Branch OfficeCorporate Data Center Figure 5: OnPremises Environment ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 18 of 26 By using AWS storage services in this scenario you can focus on backup and archiving tasks You don ’t have to worry about storage scaling or infrastructure capacity to accomplish the backup task Amazon S3 and Amazon Glacier are natively APIbased and available through the Internet This allows backup software vendors to directly integrate their applications with AWS storage solutions as shown in the following figure Figure 6 : Backup Connector to Amazon S3 or Amazon Glacier In this scenario backup and archive software directly interfaces with AWS through the APIs Because the backup software is AWSaware it will back up the data from the onpremises servers directly to Amazon S3 or Amazon Glacier If your existing backup software does not natively support the AWS cloud you can use AWS storage gateway products AWS Storage Gateway13 is a virtual appliance that provides seamless and secure integration between your data center and the AWS storage infrastructure The service allows you to securely store data 13 http://awsamazoncom/storagegateway/ ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 19 of 26 in the AWS cloud for scalable and costeffective storage Storage Gateway supports industrystandard storage protocols that work with your existing applications while securely storing all of your data encrypted in Amazon S3 or Amazon Glacier Figure 7: Connecting OnPremises to AW S Storage AWS Storage Gateway supports the following configurations: Volume gateways: Volume gateways provide cloudbacked storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your onpremises application servers The gateway supports the following volume configurations: Gatewaycached volumes: You can store your primary data in Amazon S3 and retain your frequently accessed data locally Gatewaycached volumes provide substantial cost savings on primary storage minimize the need to scale your storage on premises and retain lowlatency access to your frequently accessed data Gatewaystored volumes: In the event you need lowlatency access to your entire data set you can configure your onpremises data gateway to store your primary data locally and asynchronously back up point intime snapshots of this data to Amazon S3 Gatewaystored volumes provide durable and 
inexpensive offsite backups that you can recover locally or from Amazon EC2 Gatewayvirtual tape library (gatewayVTL): With gatewayVTL you can have a limitless collection of virtual tapes Each virtual tape can be stored ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 20 of 26 in a virtual tape library backed by Amazon S3 or a virtual tape shelf backed by Amazon Glacier The virtual tape library exposes an industrystandard iSCSI interface which provides your backup application with online access to the virtual tapes When you no longer require immediate or frequent access to data contained on a virtual tape you can use your backup application to move it from its virtual tape library to your virtual tape shelf to further reduce your storage costs These gateways act as plugandplay devices providing standard iSCSI devices which can be integrated into your backup or archive framework You can use the iSCSI disk devices as storage pools for your backup software or the gatewayVTL to offload tapebased backup or archive directly to Amazon S3 or Amazon Glacier Using this method your backup and archives are automatically offsite (for compliance purposes) and stored on durable media eliminating the complexity and security risks of off site tape management Hybrid Environments The two infrastructure deployments discussed to this point cloudnative and on premises can be combined into a hybrid scenario wh ere the workload environment has on premises and AWS infrastructure components Resources including web servers application servers monitoring servers databases Active Directory and more are hosted either in the customer data center or AWS Applications running in the AWS cloud are connected to applications running on premises This is becoming a common scenario for enterprise workloads Many enterprises have data centers of their own and use AWS to augment capacity These customer data centers are often connected to the AWS network by highcapacity network links For example with AWS Direct Connect14 you can establish private dedicated connectivity from your premises to AWS This provides the bandwidth 14 http://awsamazoncom/directconnect/ ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 21 of 26 and consistent latency to upload data to the cloud for the purposes of data protection and consistent performance and latency for hybrid workloads Figure 8: A Hybrid Infrastructure Scenario Welldesigned data protection solutions typically use a combination of the methods described in the cloudnative and onpremises solutions Back ing Up AWSBased Applications to Your Data Center If you already have a framework that backs up data for your onpremises servers then it is easy to extend it to your AWS resources over a VPN connection or through AWS Direct Connect You can install the backup agent on the Amazon EC2 instances and back them up per your dataprotection policies ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 22 of 26 Migrating Backup Management to the Cloud for Availability Dependin g on your backup architecture you may have a master backup server and one or more media or storage servers located onpremises with the services it’s protecting In this case you might want to move the master backup server to an Amazon EC2 instance to protect it from onpremises disasters and have a highly available backup infrastructure To manage the backup data flows you might also want to create one or more media servers on Amazon 
EC2 instances Media servers near the Amazon EC2 instances will save you money on internet transfer and when backing up to S3 or Amazon Glacier increase overall backup and recovery performance Figure 9 : Using Gateways in the Hybrid Scenario ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 23 of 26 Example Hybrid Scenario Assume that you are managing an environment where you are backing up Amazon EC2 instances standalone servers virtual machines and databases This environment has 1000 servers and you back up the operating system file data virtual machine images and databases There are 20 databases (a mixture of MySQL Microsoft SQL Server and Oracle) to back up Your backup software has agents that back up operating systems virtual machine images data volumes SQL Server databases and Oracle databases (using RMAN) For applications like MySQL that your backup software does not have an agent for you might use the mysqldump client utility to create a database dump file to disk where standard backup agents can then protect the data To protect th is environment your thirdparty backup software most likely has a global catalog server or master server that controls the backup archive and restore activities as well as multiple media server s that are connected to disk based storage Linear TapeOpen (LTO) tape driv es and AWS storage services The simpliest way to augment your backup solution with AWS storage services is to take advantage of your backup vendor ’s support for Amazon S3 or Amazon Glacier We suggest you work with your vendor to understand their integration and connector options For a list of backup software vendors who work with AWS see our partner directory15 If your exisin g backup software does not natively support cloud storage for backup or archive you can use a storage gateway device such as a bridge between the backup software and Amazon S3 or Amazon Glacier There are many thirdparty gateway solutions You can also use AWS Storage Gateway virtual appliances to bridge this gap because it uses generic techniques such as iSCSIbased volumes and virtual tape libraries (VTL s) This configuration requires a supported hypervisor (VMware or Microsoft Hyper V) and local storage to host the appliance 15 http://wwwawspartnerdirectorycom/PartnerDirectory/PartnerSearch?type=ISV ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 24 of 26 Archiving Data with AWS When you need to preserve data for compliance or corporate reasons you archive it Unlike backup s which are usually performed to keep a copy of the production data for a short duration to recover from data corruption or data loss archiving maintains all copies of data until the retention policy expires A good archive meets the following criteria: Data durability for longterm integrity Data security Ease of recoverability Low cost Immutable data stores can be another regulatory or compliance requirement Amazon Glacier provid es archives at low cost native encryption of data at rest 11 nines of durability and unlimited capacity Amazon S3 Standard Infrequent Access is a good choice for use cases that require the quick retrieval of data Amazon Glacier is a good choice for use cases where data is infrequently accessed and retrieval times of several hours are acceptable Objects can be tiered into Amazon Glacier either through lifecycle rules in S3 or the Amazon Glacier API The Amazon Glacier Vault Lock feature allows you to easily deploy and enforce compliance controls for 
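As a minimal sketch of the mysqldump approach mentioned in this scenario, the commands below dump a database to disk and copy the file to Amazon S3, where it can later be lifecycled to Amazon Glacier; the database name, credentials, and bucket are placeholders, and your backup agent or schedule would normally drive these steps:
# Create a consistent logical dump of the database (InnoDB tables)
mysqldump --single-transaction -u backupuser -p exampledb > /backup/exampledb-dump.sql
# Copy the dump file to an S3 bucket used for backups
aws s3 cp /backup/exampledb-dump.sql s3://example-backup-bucket/mysql/exampledb-dump.sql
A lifecycle policy on the bucket can then transition older dump files to Amazon Glacier automatically.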
individual Amazon Glacier vaults with a vault lock policy You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits For more information see Amazon Glacier Securing Backup Data in AWS Data security is a common concern AWS takes security very seriously It’s the foundation of every service we launch Storage services like Amazon S3 provide strong capabilities for access control and encryption both at rest and in transit All Amazon S3 and Amazon Glacier API endpoints support SSL encryption for ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 25 of 26 data in transit Amazon Glacier encrypts all data at rest by default With Amazon S3 customers can choose serverside encryption for objects at rest by letting AWS manage the encryption keys providing their own keys when they upload an object or using AWS Key Management Service (AWS KMS )16 integration for the encryption keys Alternatively customers can always encrypt their data before uploading it to AWS For more information s ee Amazon Web Services: Overview of Security Processes Conclusion Gartner has recognized AWS as a leader in public cloud storage services17 AWS is well positioned to help organizations move their workloads to cloudbased platforms the next generation of backup AWS provides costeffective and scalable solutions to help organizations balance their requirements for backup and archiving These services integrate well with technologies you are using today Contributors The following individuals contributed to this paper : Pawan Agnihotri Solutions Architect Amazon Web Services Lee Kear Solutions Architect Amazon Web Services Peter Levett Solutions Architect Amazon Web Services 16 http://docsawsamazoncom/AmazonS3/latest/dev/UsingKMSEncryptionhtml 17 http://wwwgartnercom/technology/reprintsdo?id=1 1WWKTQ3&ct=140709&st=sb ArchivedAmazon Web Services – Backup and Recovery Approaches Using AWS June 2016 Page 26 of 26 Document Revisions Updated May 2016
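To illustrate the server-side encryption options described in the Securing Backup Data in AWS section, the following AWS CLI sketch uploads a backup object to Amazon S3 with SSE-KMS; the bucket name and the KMS key alias are assumptions for the example, not values from this paper:
# Upload a backup archive and have S3 encrypt it at rest with a customer-managed KMS key
aws s3 cp /backup/archive.tar.gz s3://example-backup-bucket/archives/archive.tar.gz --sse aws:kms --sse-kms-key-id alias/example-backup-key
Omitting --sse-kms-key-id falls back to the AWS-managed S3 key, and omitting both flags leaves the choice to the bucket's default encryption settings.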
|
General
|
consultant
|
Best Practices
|
Best_Practices_for_Deploying_Alteryx_Server_on_AWS
|
ArchivedBest Practices for Deploying Alteryx Server on AWS August 2019 This paper has been archived For the latest technical guidance on the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved Archived Contents Introduction 1 Alteryx Server 1 Designer 1 Scheduler 1 Controller 2 Worker 3 Database 3 Gallery 3 Options for Deploying Alte ryx Server on AWS 4 Enterprise Deployment 5 Deploy Alteryx Server with Chef 8 Deploy a Windows Server EC2 instance and install Alteryx Server 8 Deploy an Amazon EC2 Instance from the Alteryx Server AMI 8 Sizing and Scaling Alteryx Server on AWS 10 Performance Consider ations 10 Availability Considerations 14 Management Considerations 15 Sizing and Scaling Summary 15 Operations 17 Backup and Restore 17 Monitoring 17 Network and Security 18 Connecting On Premises Resources to Amazon VPC 18 Security Groups 20 Network Access Con trol Lists (NACLs) 20 Bastion Host (Jump Box) 20 Archived Secure Sockets Layer (SSL) 21 Best Practices 21 Deployment 21 Scaling and Availability 22 Network and Security 22 Performance 23 Conclusion 23 Contributors 23 Further Reading 24 Document Revisions 25 Archived Abstract Alteryx Server is a scalable server based analytics solution that helps you create publish and share analytic applications schedule and automate workflow jobs create manage and share data connec tions and control data access This whitepaper discusse s how to run Alteryx Server on AWS and provides an overview of the AWS services that relate to Alteryx Server It also includes i nformation on common architecture patterns and deployment of Alteryx Server on AWS The paper is intended for information techn ology professionals who are new to Alteryx products and are considering deploying Alteryx Server on AWSArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 1 Introduction Alteryx Server provides a scalable platform that helps create analytical insights and empowers analysts and business users across your org anization to make better data driven decisions Alteryx Server provides: • Data blending • Predictive analytics • Interactive visualizations • An easy touse drag anddrop interface • Support for a wide variety of data sources • Data governance and security • Sharing an d collaboration Alteryx Server is an end toend analytics platform for the enterprise used by thousands of customers around the world For details on how customers have successfully used Alteryx on AWS see the Alteryx + AWS Customer Success Stories Alteryx Server Alteryx Server consists of six main components : Designer Scheduler Controller Worker Database and Gallery Each component is discussed in the following sections Designer The Designer is a Windows software application that 
lets you create repeatable workflow processe s Designer is installed by de fault on the same instance as the Controller You can use o ther installations of the Designer (for example on your workstation) and connect it to the C ontroller using the controller tok en Scheduler The Scheduler lets you schedule the execution of workflows or analytic applications developed within the Designer ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 2 Controller The Controller orchestrates workflow execution s manages the service settings and delegates work to the Workers The Controller also supports the Gallery and handles APIs for remote integration T he Controller has t hree key parts : authentication controller token and database drivers which are described as follows Authentication Alteryx Server supports local authentication Microsoft Active Directory (Microsoft AD) authentication and SAML 20 authentication For short term trial or proof ofconcept deployments local authentication is a reasonable option However in most deployments we recommend that you use Microsoft AD or SAML 20 to connect your user directory Note: Changing authentication methods requires that you reinstall the Control ler For deployments of Alteryx Server on AWS where you have chosen Microsoft AD consider using AWS Directory Services AWS Directory Services enables Alteryx Server to use a fully managed instance of Microsoft AD in the AWS Cloud AWS Microsoft AD is bui lt on Microsoft AD and does not require you to synchronize or replicate data from your existing Active Directory to the cloud (although this remains an option for later integration as your deployment evolves over time ) For more information on this option see AWS Directory Service Controller Token The controller token connects the Controller to Workers and D esigner clients to schedule and run workflows from other Designer components The token is automatically generated when you install Alteryx Server The controller token is unique to your server instance and administrators must safeguard it You only need to regenerate the token if it is compromised If you regenerate the token all the W orker s and Gallery components must be updated with the new token Drivers Alteryx Server communicates with numerous supported data sources including databases such as Amazon Aurora and Amazon Redshift and object stores such as ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 3 Amazon S imple Storage Service (Ama zon S 3) For a complete list of supported sources see Data Sources on the Alteryx Technical Specifications page Successfully connecting to most data sources is a simple process because the Controller has a network path to the database and proper credentials to access the database with the appropriate permissions For help with troubleshooting database connections see the Alteryx Community and Alteryx Support pages Each database requires you to install the appropriate driver When using Alteryx Server be sure to configure each required database driver on the server machine with the same version that is used for Designer clients If a Designer client and the Alteryx Server do not have the same driver the scheduled workflow may not complete properly Worker The Worker executes workflows or analytic applications sent to the Controller The same instance that runs the Controller can run the Worker This setup is common in smaller scale deployments You can configure s eparate instances to run as Workers for scaling and 
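Tying back to the authentication discussion above, the following AWS CLI sketch provisions an AWS Managed Microsoft AD directory that Alteryx Server could authenticate against; the domain name, password, VPC, and subnet IDs are placeholders, and the directory edition and sizing depend on your environment:
# Create a managed Microsoft AD directory across two subnets in an existing VPC
aws ds create-microsoft-ad --name corp.example.com --short-name CORP --password 'Replace-With-A-Strong-Password1!' --description "Directory for Alteryx Server authentication" --vpc-settings VpcId=vpc-0abc1234def567890,SubnetIds=subnet-0abc1234def567890,subnet-0fed9876cba543210
After the directory is available, you would typically join the Windows instance that hosts the Controller and Gallery to the domain before configuring Active Directory authentication in the Alteryx System Settings.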
performance purposes You must configure a t least one instan ce as a Worker —the total number of Workers you need is dependent on performance considerations Database The persistence tier store s information that is critical to the functioning of the Controller such as A lteryx application files the job queue gallery information and result data Alteryx Server supports two different databases for persistence: MongoDB and SQLite Most deploymen ts use MongoDB which can be deployed as an embedded database or as a user managed database Consider using MongoDB if you need a scalable or highly available architecture Note that m ost scalable deployments use a user managed MongoDB database Consider u sing SQLite if you do not need to use Gallery and your deployment is limited to scheduling workloads Gallery The Gallery is a web based application for sharing workflows and outputs The Gallery can be run on the Alteryx Server machine Alternatively multiple Gallery machines can be configured behind an Elastic Load Balanc ing (ELB) load balanc er to handle the Gallery services at scale ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 4 Options for Deploying Alteryx Server on AWS Alteryx Server is contained as a Microsoft Windows Service It can run easily on most Microsoft Windows Server operating systems Note: In order to install Alteryx Server on AWS you will need an AWS account and an Alteryx Server license key If you do not have a license key trial options for Alteryx Server on AW S are available through AWS Marketplace You can install the Alteryx Server components into a multi node cluster to create a scalable enterprise deployment of Alteryx Server: Figure 1: Scalable enterprise deployment of Alteryx Server Alternatively you can install Alteryx Server in one self contained EC2 instance: ArchivedAmazon Web Services Best Practices for Deploying Alte ryx Server on AWS Page 5 Figure 2: Deployment of Alteryx Server on a single EC2 instance The following sections discuss how to deploy Alteryx Server on AWS from the most complex deployment to the simplest deployment Enterprise Deployment The following architecture diagram shows a solution for a scalable enterprise deployment of Alteryx Server on AWS ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 6 Figure 3: Alteryx Server architecture on AWS The following high level steps explain how to create a scal able enterprise deployment of Alteryx Server on AWS: Note: To deploy Alteryx Server on AWS you will need the controller token to connect the Controller to Workers and Designer clients the IP or DNS information of the Controller for connection and failover if needed and the usermanaged MongoDB connection information 1 Create an Amazon Virtual Private Cloud ( VPC) or use an existing VPC with a minimum of two Availability Zones (called A vailability Zone A and Availability Zone B) ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 7 2 Deploy a Controller instance in Availability Zone A Document the controller key and connection information for later steps Note: It’s possible to use an Elastic IP address to connect remote clients and users to the Controller but we recommend that you use AWS Direct Connect or A WS Managed VPN for more complex long running deployments VPC p eering connection options and Direct Connect can enable private connectivity to the Controller instance as well as a predictable cost effective network path back to on premises data sources 
that you may wish to expose to the Controller 3 Create a MongoDB replica set with at least three instances Place each instance in a different Availability Zone Document the connection information for the next step 4 Connect the MongoDB cluster to the Controller instance by providing the MongoDB connection information in the Alteryx System Settings on the Controller 5 Deploy and connect a Worker instance in Availability Zone A to the Controller instance in the Availability Zone A subnet 6 Deploy and connect a Worker instance in Availability Zone B to the Controller instance in the Availability Zone A subnet 7 Deploy and connect more Workers as needed to support your desired level o f workflow concurrency You can have more than one Worker in each A vailability Zone but be aware that each A vailability Zone represents a fault domain You should also consider the performance implications of losing access to Workers deployed in a particu lar Availability Zone 8 Create an ELB l oad balancer to handle requests to the Gallery instances 9 Deploy Gallery instances and register with the ELB l oad balancer Be sure to deploy your Gallery instances in multiple Availability Zones 10 Connect the Gallery i nstances to the Controller instance ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 8 11 Connect the client Designer installations to the Controller instance using either the E lastic IP address or the optional private IP (chose n in Step 2 ) then test workf lows and publishing to Gallery 12 (Optional) Deploy a Cold/Warm Standby Controller instance in another Availability Zone or AWS R egion Failover is controlled by changing the Elastic IP address (if deployed in the same VPC) or DNS name to this Controller instance Deploy Alteryx Server with Chef You can use AWS OpsWorks with Chef cookbooks and recipes to deploy Alteryx Server For Alteryx Chef resources see cookbook alteryx server on GitHub Deploy a Windows Server EC2 instance and install Alteryx Server You can deploy an Amazon Elastic Compute Cloud (Amazon EC2) instance running Windows Server and then install Alteryx Server You can download the install package here Make sure that you deploy an instance with the recommended compute size (at least 8 vCPUs) Windows operating system (Microsoft Windows Server 2008R2 or later) and available Amazon Elastic Block Store (Amazon EBS) storage (1TB) Deploy an Amazon EC2 Instance from the Alteryx Server AMI You can purchase an Amazon Machine Image (AMI) from Alteryx through AWS Marketplace and use it to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance running Alteryx Server You can find the Alteryx Server offering on AWS Marketplace Note: You can try one instance of the product for 14 days Please remember to turn your instance off once your trial is complete to avoid incurring charges You have two options for launching your Amazon EC2 instance You can launch an instance using the Amazon EC2 launch wizard in the Amazon EC2 console or by ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 9 selecting the Alteryx Server AMI in the launch wizard Note that the fastest way to deploy Alteryx Server on AWS is to launch an Amazon EC2 instance using the Marketplace website To launch Alteryx Server using the Marketplace website: 1 Navigate to AWS Marketplace 2 Select Alteryx Server then select Continue to Subscribe 3 Once subscribed select Continue to Configuration 4 Review the configura tion settings choose a nearby Region then select Continue to 
Launch 5 Once you have configured the options on the page as desired select Launch 6 Go to the Amazon EC2 console to view the startup of the instance 7 It can be helpful to note the Instance ID for later reference You can give the instance a friendly name to find it more easily and to allow others to know what the instance is for Click inside the Name field and enter the desired name 8 Navigate to the instance Public IP address or Pu blic DNS name in your browser Enter in your email address and take note of the token at the bottom: 9 Your token will be specific to your instance If you selected the Bring Your Own License image a similar registration will appear and prompt you for lic ense information 10 After selecting your server instance and clicking Connect you will be guided through using Remote Desktop Protocol (RDP) to connect to the Controller instance of Alteryx ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server o n AWS Page 10 11 Once connected you can use your AWS instance running Alteryx Ser ver The desktop contains links to the Designer and Server System Settings 12 Start using Alteryx Server See Alteryx Community for more information on how to use Alteryx Server and Designer Sizing and Scalin g Alteryx Server on AWS When sizing and scaling your Alteryx Server deployment consider the performance availability and management Performance Considerations This section covers options and best practices for improving the performance of your Alteryx Server workflows Scaling Up vs Scaling Out You can usually increase performance by scaling your Workers up or out To scale up you need to relaunch Workers using a larger instance type with more vCPUs or memory or by configuring faster storage When sca ling up you should increase the size of all Workers as the Controller does not schedule on specific worker instances by priority and will not assign work to the machine with the most resources To scale out you need to configure additional instances Both options typically take only a few minutes Below are two scenarios that discuss scaling up and scaling out: Long job queues – If you expect that a high number of jobs will be scheduled or if you observe that the job queue length exceeds defined limits then scale out to make sure you have enough instances to meet demand Scale up if you already have a very large number of small nodes Long running jobs or large workflows – Larger instances specifically instance types with more RAM are best suit ed for long running workloads If you find that you have longrunning jobs first examine the query logic load on the data source and network path and adjust if necessary If the jobs are otherwise well tuned consider scaling up This table presents heu ristics that can help you determine the number of Workers you need to execute workloads with different run times ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 11 Number of Users 5Second Workload 30Second Workload 1Minute Workload 2+Minute Workload Number of Worker Instances 120 1 1 2 3 2040 1 2 3 4 40100 2 3 4 5 100 3 4 5 6 Table 1: Number of Worker instances needed to execute workloads with different run times Consider having your users run some of their frequently requested workflows on a test instance of Alteryx Server of your planned instance size You can quickly deploy a test instance using the Alteryx Server AMI These tests will help you understand the number of jobs and workflow sizes that your instance size can handle To predict workflow sizes rev 
iew your current and planned Designer workflows. In Alteryx benchmark testing, the engine running in Alteryx Designer performed nearly the same as in Alteryx Server when running on similar instance types (see Alteryx Analytics Benchmarking Results). Keep this in mind when determining how long workloads will take to run. You can test workload times without installing Alteryx Server by using the Designer on hardware that is similar to what you would use to deploy Alteryx Server.

Scaling Based on Demand
Many customers find they need to add more Workers at predictable times. For peak usage times, you can launch new Worker instances from the Alteryx Server AMI and pay for them using the pay-as-you-go option. With this model, you pay only for the instances you need, for as long as you use them. This is common for seasonal or end-of-month, end-of-quarter, or end-of-year workloads. You can use an Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling group, with a script to insert the controller token into the new instances, to scale additional Worker instances on demand with minimal or no post-launch configuration. Additionally, you can integrate Amazon EC2 Auto Scaling with Amazon CloudWatch to scale automatically based on custom metrics, such as the number of jobs queued. Scaling Alteryx Server to more instances will have licensing implications because it is licensed by cores.

Figure 4: Use Amazon EC2 Auto Scaling and Amazon CloudWatch to scale Worker instances on demand

You can perform additional scheduled scaling actions with Amazon EC2 Auto Scaling. For example, you can configure an Amazon EC2 Auto Scaling group to spin up instances at the start of business hours and turn them off automatically at the end of the day. This allows Alteryx Server to reduce compute costs while meeting business analytic requirements.

Worker Performance
Workers have several configuration settings. The two settings that are the most important for optimizing workflow performance are simultaneous workflows and max sort/join memory.

Simultaneous workflows – You have the best starting point for simultaneous workflows when 4 vCPUs are available for each workflow. For example, if an instance has 8 vCPUs, then we recommend that you enable 2 workflows to run simultaneously. This setting is labeled Workflows allowed to run simultaneously in the Worker configuration interface. You can adjust this setting as a way to tune performance.

Note: 4 vCPUs = 1 workflow running simultaneously.

Max sort/join memory usage – This configuration manages the memory available to workflows that are more RAM intensive. The best practice is to take the total memory available to the machine and subtract a suggested 4 GB of memory for OS processes. Then take that number and divide it by the number of simultaneous workflows assigned:

Max Sort/Join Memory Usage = (Total Memory − Suggested 4 GB Operating System Memory) / (# of simultaneous workflows)

For example, for a Worker configured with 32 GB of memory and 8 vCPUs, the recommended number of simultaneous workflows is 4 because there are 8 vCPUs (1 workflow for every 2 vCPUs). In this example, the 4 GB of memory set aside for the OS is subtracted from the 32 GB of total memory. The remaining 28 GB is divided by the number of simultaneous workflows (4), leaving 7 GB. Therefore, the recommended max sort/join memory is 7 GB.

Max Sort/Join Memory Usage for a 32 GB instance with 8 vCPUs = (32 GB – 4 GB) / 4 simultaneous workflows = 7 GB
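This heuristic is easy to script when planning Worker sizes. The following minimal Python sketch is illustrative only; the function and variable names are ours, not part of Alteryx Server. It encodes the 4 GB operating-system reserve suggested in this paper and reproduces the worked example above and the rows of Table 2 that follows.

```python
# A minimal sketch of the max sort/join memory heuristic described above.
# The 4 GB operating-system reserve and the example values come from this
# paper; the function and constant names are illustrative only.

OS_RESERVE_GB = 4  # memory suggested to set aside for Windows OS processes

def max_sort_join_memory_gb(total_memory_gb: float, simultaneous_workflows: int) -> float:
    """Return the suggested max sort/join memory per workflow, in GB."""
    if simultaneous_workflows < 1:
        raise ValueError("At least one simultaneous workflow is required.")
    return (total_memory_gb - OS_RESERVE_GB) / simultaneous_workflows

if __name__ == "__main__":
    # Reproduces the worked example: a 32 GB Worker running 4 simultaneous
    # workflows is configured with 7 GB of max sort/join memory.
    print(max_sort_join_memory_gb(32, 4))    # 7.0
    # Other rows from Table 2, for comparison:
    print(max_sort_join_memory_gb(16, 2))    # 6.0
    print(max_sort_join_memory_gb(64, 8))    # 7.5
    print(max_sort_join_memory_gb(128, 16))  # 7.75
```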
The following table shows a list of precomputed values for suggested max sort/join memory.

Instance vCPUs | Suggested Simultaneous Workflows | Total Memory (GB) | OS Memory (constant, GB) | Suggested Max Sort/Join Memory (GB/thread)
4 | 2 | 16 | 4 | 6
8 | 4 | 32 | 4 | 7
16 | 8 | 32 | 4 | 3.5
16 | 8 | 64 | 4 | 7.5
32 | 16 | 128 | 4 | 7.75

Table 2: Examples of suggested max sort/join memory

Database Performance
Using a user-managed MongoDB cluster allows you to control and tune the performance of the Alteryx Server persistence tier.

Availability Considerations
Except for the Controller, you can scale out the other major Alteryx Server components to multiple instances. Scaling the Worker, Gallery, and Database instances increases their availability, performance, or both. You can create a standby Controller to ensure availability in the event of a Controller issue, instance failure, or Availability Zone issue.

For high availability, you should deploy Worker, Gallery, and Database instances in two or three Availability Zones. Consider deploying instances in more than one AWS Region for faster disaster recovery, to improve interactive access to data for your regional customers, and to reduce latency for users in different geographies.

Figure 5: High availability deployment of Alteryx Server on AWS

AWS recommends that you have approximately 3-5 Worker instances, 2-4 Gallery instances behind an ELB application load balancer, and 3-5 MongoDB instances configured in a MongoDB replica set for high availability deployments. The Worker instances depicted above were created with Amazon EC2 Auto Scaling. The exact numbers and instance sizes are dependent on costs and the performance sizing specific to your organization.

For multi-Region deployments, ensure that each AWS Region has a Controller instance that can be used with a DNS name (Elastic IP addresses are local to a single AWS Region). We recommend using Amazon Route 53 in an active-passive configuration to ensure there is only one active Controller. The passive Controllers can be fully configured, but Amazon Route 53 will only route traffic to a passive Controller if the active Controller becomes unavailable.

Management Considerations
Many of the configurations we discussed allow for more flexible management of Alteryx Server. Control of the persistence tier gives you more options when replicating and backing up the database. Placing the Gallery behind a load balancer allows for easier maintenance when upgrading or deploying Gallery instances. From an operational standpoint, a scaled install gives you more options and less downtime for backups, monitoring, database permissions, and third-party tools. Remember, scaling Alteryx Server will have licensing implications based on the number of vCPUs in the deployment. You need to license all deployed nodes, regardless of function.

Sizing and Scaling Summary
A high-level overview of reasons and decisions for sizing and scaling Alteryx is given in the table below.

Action | Performance Impact | Availability Impact | Management Impact
Controller scaled up (larger instance size) | Can help increase Gallery performance | No major impact | No major impact
Controller scaled out (more Controller instances) | No major impact | Having multiple Controllers requires that one Controller is on cold or warm standby | Requires customized scripts or triggers to automatically fail over; you can create these with AWS services such as CloudWatch and SNS
Worker scaled up (larger instance size) | Decreased workflow completion times; for best results, use instance types with more memory or optimized memory | No major impact | No major impact
Worker scaled out (more Worker instances) | More concurrent workflows can be run | More resiliency to Worker instance failures | Reduced downtime during maintenance
Gallery scaled out (more Gallery instances) | Better performance for more Gallery users | More resiliency to Gallery instance failures | Reduced downtime during maintenance
User-managed MongoDB database | More control for tuning and performance | Clustering and replication in MongoDB allow for higher availability | Gives you more control over the database, but requires some knowledge about NoSQL databases

Table 3: Scaling actions and impact on performance, availability, and management

When considering Alteryx Server deployment options and which components to scale, it's best to consider your organization's performance, availability, and management needs. For example, your organization may have a few users creating analytic workflows but hundreds of users consuming those workflows via the Gallery. In that case, you might need minimal infrastructure to handle analytic workflows and the database, while the Controller, which aids the Gallery instances, would need to be a larger instance, and the Gallery instances would be best served using several instances behind a load balancer. If you are concerned with data loss, you should create a user-managed MongoDB cluster and make sure that it is backed up regularly to multiple locations.

Operations
This section discusses backup, restore, and monitoring operations.

Backup and Restore
You can use the Amazon Elastic Block Store (Amazon EBS) snapshot feature to back up the Controller, Worker, and Database instances. You can use these snapshots to restore data in the event of a failure. It is best to stop the Controller and Database tier before taking a snapshot. The Gallery is stateless and does not need to be backed up. For details on how to perform backup and recovery operations if you are using a user-managed MongoDB database, see the MongoDB documentation for Amazon EC2 Backup and Restore.
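If you automate these backups, the outline below shows one way to script them. It is a minimal boto3 sketch under stated assumptions, not an official Alteryx or AWS procedure: the instance ID, description, and Region are placeholders, and it presumes you have already stopped the Controller and Database tier (or the Alteryx service and MongoDB) so that the snapshots are consistent.

```python
# A minimal boto3 sketch: snapshot every EBS volume attached to a Controller
# or Database instance. The instance ID and Region below are placeholders;
# stop the instance (or the Alteryx service and MongoDB) first, as
# recommended above, so the snapshots capture a consistent state.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_instance_volumes(instance_id: str, description: str) -> list:
    """Create a snapshot of each EBS volume attached to the instance."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    snapshot_ids = []
    for volume in volumes:
        snap = ec2.create_snapshot(VolumeId=volume["VolumeId"], Description=description)
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

# Example call with a hypothetical instance ID:
# snapshot_instance_volumes("i-0123456789abcdef0", "Alteryx Controller nightly backup")
```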
Monitoring
AWS provides robust monitoring of Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EBS volumes, and other services via Amazon CloudWatch. Amazon CloudWatch can be triggered to send a notification via Amazon Simple Notification Service (Amazon SNS) or email upon meeting user-defined thresholds on individual AWS services. Amazon CloudWatch can also be configured to trigger an auto recovery action on instance failure.

You can also write a custom metric to Amazon CloudWatch, for example to monitor current queue sizes of workflows in your Controller, and to alarm or trigger automatic responses from those measures. By default, these metrics are not available from Alteryx Server, but they can be parsed from Alteryx logs and custom workflows and exposed to CloudWatch using Amazon CloudWatch Logs.
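As a concrete illustration, the hedged boto3 sketch below publishes a Controller queue length as a custom CloudWatch metric. Alteryx Server does not expose this number directly, so get_queue_length stands in for whatever log parsing or MongoDB query you use to obtain it, and the AlteryxServer namespace and Controller dimension are names chosen for this example only.

```python
# A hedged sketch of publishing a custom CloudWatch metric for the
# Controller's workflow queue. "get_queue_length" is a placeholder for your
# own log-parsing or MongoDB query; the namespace and dimension names are
# illustrative, not defined by Alteryx Server.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_queue_length(queue_length: int, controller_id: str) -> None:
    """Push the current Controller queue length as a custom metric."""
    cloudwatch.put_metric_data(
        Namespace="AlteryxServer",
        MetricData=[
            {
                "MetricName": "QueuedWorkflows",
                "Dimensions": [{"Name": "Controller", "Value": controller_id}],
                "Value": queue_length,
                "Unit": "Count",
            }
        ],
    )

# publish_queue_length(get_queue_length(), "controller-1")
# A CloudWatch alarm on AlteryxServer/QueuedWorkflows can then notify via
# Amazon SNS or drive an EC2 Auto Scaling policy for Worker instances.
```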
You can also use third-party monitoring tools to monitor status and performance for Alteryx Server. A free analytics workflow and application for reviewing Alteryx Server performance and logs is available from the Alteryx support community.

Network and Security
This section covers network and security considerations for Alteryx Server deployment.

Connecting On-Premises Resources to Amazon VPC
In order for Alteryx Server to access your on-premises data sources, connect an Amazon Virtual Private Cloud (Amazon VPC) to your on-premises resources. In the following figure, the private subnet contains Alteryx Server. You can place all the Gallery services in a public subnet (not shown) for simple access to the internet and users, or you can configure AWS Direct Connect or use VPN to enable a private peering connection with no public IP addressing required. You can also place Gallery instances or Alteryx Server in the private subnets with a NAT gateway configured. Scaling, hybrid, or disaster recovery options are also available in this model, with elements of Alteryx Server deployed as needed, either on premises or on AWS.

Figure 6: Options for connecting on-premises services to Alteryx Server on AWS

Alteryx Server often uses information stored on private corporate resources. Be aware of the performance and traffic implications of accessing large amounts of data that are outside of AWS. AWS offers several solutions to handle this kind of expected traffic. You can provision a VPN connection to your VPC by provisioning an AWS Managed VPN connection, AWS VPN CloudHub, or a third-party software VPN appliance running on an Amazon EC2 instance deployed in your VPC.

We recommend using AWS Direct Connect to connect to private data sources outside of AWS, as it provides a predictable, low-cost, and high-performance dedicated peering connection. You can also use VPN with Direct Connect to fully encrypt all traffic. This approach fits well into the risk and security compliance standards of many corporations. You may already be using Direct Connect to connect with an existing AWS deployment. It is possible to share Direct Connect and create connections to multiple VPCs, even across AWS accounts, or to provision access to remote Regions. While possible, it is not recommended to connect to data sources directly over the internet from a public subnet, due to security concerns. For more details on a variety of connectivity scenarios, see the AWS Direct Connect documentation.

Security Groups
When running Alteryx Server on AWS, be sure to check your security group settings when attempting to add a connection to a data source. You will need to customize your security groups based on your needs, as some data sources may require specific ports. Refer to the data source documentation for the specific source you are connecting to and the ports and protocols used for traffic.

Port | Permitted Traffic
3389 | RDP access
80 | HTTP web traffic
443 | HTTPS web traffic
81 | Used only with the AWS Marketplace offering for client connections
5985 | Used only with the AWS Marketplace offering for Windows management

Table 4: Security Groups for Alteryx Server

Network Access Control Lists (NACLs)
Amazon VPC and Alteryx Server support NACLs as an optional additional network security component. NACLs are not stateful and tend to be more restrictive, and so they are not recommended for general deployments. They may be useful for organizations with specific compliance concerns or other internal security requirements. NACLs are
supported for controlling network traffic that relates to Alteryx Server Bastion Host (Jump Box) In the case that Alteryx Server components are placed in a private subnet we recommend that a bastion host or jump box is placed in the public subnet with security group rules to allow traffic between the public jump box and the private server This adds another level of control and help s limit the types of conne ctions that can reach the Alteryx Server For details on bastion host deployment on AWS see the Linux Bastion Hosts on the AWS Cloud Quick Start ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 21 Secure Sockets Layer (SSL) The Gallery component of Alteryx Server is available over HTTP or HTTPS If you deploy gallery instances in a public subnet we recommend HTTPS For information on how to properly configure TLS see the Alteryx Server documentation Best Practices The following sections summarize best practices and tips for deploying Alteryx Server on AWS Deployment • Deploy Alteryx Server on an in stance that meets the minimum requirements: Microsoft Windows Server 2008R2 (or later) at least 8 vCPUs and at least 1TB of Amazon Elastic Block Store (Amazon EBS) storage • Do not change the Alteryx Server Authentication Mode once it has been set Changi ng the Authentication Mode requires that you reinstall Microsoft Windows Active Directory (Microsoft AD) or SAML 20 are the recommended authentication methods • The controller token is unique to each Alteryx Server installation and administrators must sa feguard it • Be sure to configure each required database driver on the server machine with the same version that is used for designer clients • Alteryx Server supports two different mechanisms for persistence: MongoDB and SQLite Choose MongoDB if you need a scalable or highly available architecture Choose SQLite if you do not need to use Gallery and your deployment is limited to scheduling workloads • Worker instances Gallery instances and user managed MongoDB instances can be scaled for deployments suppor ting user groups of 20 or more • If you use the pay asyougo AWS Marketplace image for test purposes be sure to note the 14 day trial period and remember to turn your instance off once your trial is complete ArchivedAmazon Web Services Best Practices for Deploying Alteryx Server on AWS Page 22 Scaling and Availability • For a more resilient architecture be sure to scale out worker Gallery and persistence instances across multiple Availability Zones Consider deploying instances across AWS Regions to reduce latency for users in different geographies or to improve access to data • Multiple Ga llery instances can be configured behind a load balancer to handle the Gallery services at scale • When scaling Worker instances you should increase the size of all Worker instances as the Controller does not schedule on specific worker instances by priori ty • A standby Controller can be deployed for failover AWS tools such as AWS CLI Amazon Route 53 and Amazon CloudWatch can help automate failover • Scaling Alteryx Server to more instances will likely have licensing implications because it is licensed by c ores Network and Security • Alteryx Server on AWS commonly process information stored on premises Be aware of the potential performance and cost implications of using large amounts of data outside of AWS • When using Alteryx Server on AWS ensure that you c heck your security group settings when attempting to add a connection to a data source You will need to customize 
security groups based on your needs, as some data sources may require specific ports. Refer to documentation on the specific database you are connecting to and the ports and protocols used for traffic. • Amazon VPC and Alteryx Server support NACLs as an optional additional network security component. NACLs may be useful for organizations with specific compliance concerns or other internal security requirements. • Be sure your Alteryx Designer clients have connectivity to any Controllers you plan to schedule workflows on. This is an easily missed requirement when Alteryx Server is deployed in the cloud.

Performance
• Instance types with a larger ratio of memory to vCPUs will often run Alteryx workflows faster. Consider EC2 memory-optimized instance types, such as the R4, when working to improve performance.
• We recommend two vCPUs per simultaneous workflow.
• The user-defined Controller setting max sort/join memory manages the memory available to workflows that are RAM intensive. The best practice is to take the total memory available to the machine and subtract a suggested 4 GB of memory for OS processes. Then take that number and divide it by the number of simultaneous workflows assigned. For example: 32 GB – 4 GB = 28 GB; 28 GB / 4 simultaneous workflows = 7 GB max sort/join memory.
• For workflows using geospatial tools, use EBS Provisioned IOPS SSD (io1) or EBS General Purpose SSD (gp2) volumes that have been optimized for I/O-intensive tasks to increase performance.

Conclusion
AWS lets you deploy scalable analytic tools such as Alteryx Server. Using Alteryx Server on AWS is a cost-effective and flexible way to manage and deploy various configurations of Alteryx Server. In this whitepaper, we have discussed several considerations and best practices for deploying Alteryx Server on AWS. Please send comments or feedback on this paper to the paper's authors or helpfeedback@alteryx.com.

Contributors
The following individuals and organizations contributed to this document:
• Mike Ruiz, Solutions Architect, AWS
• Claudine Morales, Solutions Architect, AWS
• Matt Braun, Product Manager, Alteryx
• Mark Hayford, Amazon Web Services Architect, Alteryx

Further Reading
For additional information, see the following:
• Alteryx Community
• Alteryx Knowledge Base
• Alteryx Server Install Guide
• Alteryx SSL Information
• Alteryx Documentation

Document Revisions
Date | Description
August 2019 | Edits to clarify information about simultaneous workflows
August 2018 | First publication
|
General
|
consultant
|
Best Practices
|
Best_Practices_for_Deploying_Amazon_WorkSpaces
|
ArchivedBest Practices for Deploying Amazon WorkSpaces Network Access Directory Services Cost Optimization and Security December 2020 This version has been archived For the latest technical information refer to https://docsawsamazoncom/whitepapers/ latest/bestpracticesdeployingamazon workspaces/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 WorkSpaces Requirements 1 Network Considerations 2 VPC Design 3 Network Interfaces 4 Traffic Flow 4 Client Device to WorkSpace 4 Amazon WorkSpaces Service to VPC 6 Example of a Typical Configuration 7 AWS Directory Service 11 AD DS Deployment Scenarios 11 Scenario 1: Using AD Connector to Proxy Authentication to On Premises Ac tive Directory Service 12 Scenario 2: Extending On Premises AD DS into AWS (Replica) 15 Scenario 3: Standalone Isolated Deployment Using AWS Directory Service in the AWS Cloud 17 Scenario 4: AWS Microsoft AD and a Two Way Transitive Trust to On Premises 19 Scenario 5: AWS Microsoft AD using a Shared Services Virtual Private Cloud (VPC) 21 Scenario 6: AWS Mi crosoft AD Shared Services VPC and a One Way Trust to On Premises 23 Design Considerations 25 VPC Design 25 Active Directory: Sites and Serv ices 29 Multi Factor Authentication (MFA) 30 Disaster Recovery / Business Continuity 31 WorkSpaces Interface VPC Endpoint (AWS PrivateLink) – API Calls 32 Amazon WorkSpaces Tags 33 Automating Amazon WorkSpaces Deployment 34 Common WorkSpaces Automation Methods 34 WorkSpaces Deployment Automation Best Practices 36 ArchivedAmazon W orkSpaces Language Packs 37 Amazon WorkSpaces Profile Management 37 Folder Redirection 37 Best Practices 37 Thing to Avoid 38 Other Considerations 39 Profile Settings 39 Amazon WorkSpaces Volumes 39 Amazon W orkSpaces Logging 40 Amazon WorkSpaces Migrate 42 WellArchitected Framework 44 Security 45 Encryption in Transit 45 Network Inte rfaces 47 WorkSpaces Security Group 48 Encrypted WorkSpaces 49 Access Control Options and Trusted Devices 51 IP Access Control Groups 51 Monitoring or Logging Using Amazon CloudWatch 52 Cost Optimization 54 SelfService WorkSpace Management Capabilities 54 Amazon WorkSpaces Cost Optimizer 55 Troubleshooting 56 AD Connector Cannot Connect to Active Directory 56 Troubleshooting a WorkSpace Custom Image Creation Error 57 Troubleshooting a Windows WorkSpace Marked as Unhealthy 57 Collecting a WorkSpaces Support Log Bundle for Debugging 59 How to Check Latency to the Closest AWS Region 62 Conclusion 62 Contributors 62 Further Reading 62 ArchivedDocument Revisions 63 ArchivedAbstract This whitepaper outlines a set of best practices for the deployment of Amazon WorkSpaces The paper covers network considerations directory services and user authentication security and monitoring and logging This whitepaper was written to enable quick 
access to relevant information It is intended for network engineer s directory engineer s or security engineer s ArchivedAmazon Web Services Best Practices for De ploying Amazon WorkSpaces 1 Introduction Amazon WorkSpaces is a managed desktop computing service in the cloud Amazon WorkSpaces removes the burden of procuring or deploying hardware or installing complex software and delivers a desktop experience with either a few clicks on the AWS Management Console using the Amazon Web Services ( AWS ) command line interface (CLI) or by using the application programming interface (API) With Amazon WorkSpaces you can launch a Microsoft Windows or A mazon Linux desktop within minutes which enables you to connect to and access your desktop software securely reliably and quickly from on premises or from an external network You can: • Leverage your existing onpremises Microsoft Active Directory (AD) by using AWS Directory Service : Active Directory Connector (AD Connector) • Extend your directory to the AWS Cloud • Build a managed directory with AWS Directory Service Microsoft AD or Simple AD to manage your users and WorkSpaces • Leverag e your on premises or cloud hosted RADIUS server with AD Connector to provide multi factor authentication (MFA) to your WorkSpaces You can automate the provisioning of Amazon WorkSpaces by using the CLI or API which enables you to integrate Amazon WorkSp aces into your existing provisioning workflows For security in addition to the integrated network encryption that the Amazon WorkSpaces service provides you can also enable encryption at rest for your WorkSpaces See the Encrypted WorkSpaces section of this document You can deploy applications to your WorkSpaces by using your existing on premises tools such as Microsoft System Center Configuration Manager (SCCM) Puppet Enterprise or Ansible The following sections provi de details about Amazon WorkSpaces explain how the service works describe what you need to launch the service and tells you what options and features are available for you to use WorkSpaces Requirements The Amazon WorkSpaces service requires three components to deploy successfully: • WorkSpaces client application — An Amazon WorkSpaces supported client device See Getting Started with Your WorkSpace ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 2 You can also use Personal Computer over Internet Protocol (PCoIP) Zero Clients to connect to WorkSpaces For a list of available devices see PCoIP Zero Clients for Amazon WorkSpaces • A directory service to authenticate users and provide access to their WorkSpace — Amazon WorkSpaces currently works with AWS Directory Service and Microsoft AD You can use your on premises AD server with AWS Directory Service to support your existing enterprise user credentials with Amazon WorkSpaces • Amazon Virtual Private Cloud (Amazon VPC) in which to run your Amazon WorkSpaces — You’ll need a minimum of two subnets for a n Amazon WorkSpaces deployment because each AWS Directory Service construct requires two subnets in a Multi AZ deployment Network Considerations Each WorkSpace is associated with the specific Amazon VPC and AWS D irectory Service construct that you used to create it All AWS Directory Service constructs (Simple AD AD Connector and Microsoft AD) require two subnets to operate each in different Availability Zones (AZs) Subnets are permanently affiliated with a Di rectory Service construct and can’t be modified after it is created Because of this it’s imperative that you determine the 
right subnet sizes before you create the Directory Services construct Carefully consider the following before you create the subnets: • How many WorkSpaces will you need over time? • What is the expected growth? • What types of users will you need to accommodate? • How many AD domains will you connect? • Where do your enterprise user accounts reside? Amazon recommends defining user group s or personas based on the type of access and the user authentication you require as part of your planning process Answers to these questions are helpful when you need to limit access to certain applications or resources Defined user personas can help you segment and restrict access using AWS Directory Service network access control lists routing tables and VPC security groups Each AWS Directory Service construct uses two subnets and applies the same settings to all WorkSpaces that launch from that construct For example you can use a security group that applies to all WorkSpaces attached to an AD Connector to specify whether MFA is required or whether an enduser can have local administrator access on their WorkSpace ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 3 Note: Each AD Connector conne cts to your existing Enterprise Microsoft AD To take advantage of this capability and specify an Organizational Unit (OU) you must construct your Directory Service to take your user personas into consideration VPC Design This section describes best prac tices for sizing your VPC and subnets traffic flow and implications for directory services design Here are a few things to consider when designing the VPC subnets security groups routing policies and network access control lists ( ACLs ) for your Amaz on WorkSpaces so that you can build your WorkSpaces environment for scale security and ease of management: • VPC — We recommend using a separate VPC specifically for your WorkSpaces deployment With a separate VPC you can specify the necessary governance and security guardrails for your WorkSpaces by creating traffic separation • Directory Services — Each AWS Director y Service construct requires a pair of subnets that provide s a highly available directory service split between Amazon AZs • Subnet size — WorkSpaces deployments are tied to a directory construct and reside in the same VPC subnets as your chosen AWS Directo ry Service A few considerations: o Subnet sizes are permanent and cannot change You should leave ample room for future growth o You can specify a default security group for your chosen AWS Directory Service The security group applies to all WorkSpaces that are associated with the specific AWS Directory Service construct o You can have multiple AWS Directory Services use the same subnet Consider future plans when you design your VPC For example you might want to add management components such as an antivi rus server a patch management server or an AD or RADIUS MFA server It’s worth planning for additional available IP addresses in your VPC design to accommodate such requirements For in depth guidance and considerations for VPC design and subnet sizing see the re:Invent presentation How Amazoncom is Moving to Amazon WorkSpaces ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 4 Network Interfaces Each WorkSpace has two elastic network interfaces (ENIs) a management network interface (eth0 ) and a primary network interface ( eth1 ) AWS uses the management network interface to manage the WorkSpace — it’s the interface on which your client connection 
terminates AWS uses a private IP address range for this interface For network routing to work properly you can’t use this private address space on any network that can communicate with your WorkSpaces VPC For a list of the private IP ranges that we use on a per region basis see Amazon WorkSpaces Details Note: Amazon WorkSpac es and their associated management network interfaces do not reside in your VPC and you cannot view the management network interface or the Amazon Elastic Compute Cloud (Amazon EC2) instance ID in your AWS Management Console (see Figure s 4 5 and 6) However you can view and modify the security group settings of your primary network interface ( eth1 ) in the console The primary network interface of each WorkSpace does count toward you r ENI Amazon EC2 resource quotas For large deployments of Amazon WorkSpaces you need to open a support ticket via the AWS Management Console to increase your ENI quotas Traffic Flow You can break down Amazon WorkSpaces traffic into two main components: • The traffic between the client device and the Amazon WorkSpace s service • The traffic between the Amazon WorkSpaces service and customer network traffic In the next section we discuss both of these components Client Device to WorkSpace Regardless of its location ( onpremises or remote) the device running the Amazon WorkSpaces client uses the same two ports for connectivity to the Amazon WorkSpaces service The client uses port 443 (HTTPS port) for all authentication and session related information a nd port 4172 (PCoIP port) with both Transmission Control Protocol ( TCP) and User Datagram Protocol ( UDP ) for pixel streaming to a given WorkSpace and network health checks Traffic on both ports is encrypted Port 443 traffic is used for authentication a nd session information and uses TLS for encrypting the traffic Pixel streaming traffic uses AES256bit encryption for communication ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 5 between the client and eth0 of the WorkSpace via the streaming gateway More information can be found in the Security section of this document We publish per region IP ranges of our PCoIP streaming gateways and network health check endpoints You can limit outbound traffic on port 4172 from your corporate network to the AWS streaming gateway and network health check endpoints by allowing only outbound traffic on port 4172 to the specific AWS Regions in which you’re using Amazon WorkSpaces For the IP ranges and network health check endpoints see Amazon WorkSpaces PCoIP Gateway IP Ranges The Amazon WorkSpaces client has a built in network status check This utility shows users whether their network can support a connection by way of a status indicator on the bottom right of the application A more detailed view of the network status can be accessed by choosing Network on the topright side of the client See Figure 1 Figure 1 — WorkSpaces Client : network check A user initiates a connection from their client to the Amazon WorkSpaces service by supplying their login information for the directory used by the Directory Service construct typically their corporate directory The login information is sent via HTTPS to the authentication gateways of the Amazon WorkSpaces service in the Region where the WorkSpace is located The authentication gateway of the Amazon WorkSpaces service then forwards the traffic to the specific AWS Directory Service construct associated with your WorkSpace For example when using the AD Connector the AD Connector forwards the 
authentication request directly to your AD service, which could be on premises or in an AWS VPC. See the AD DS Deployment Scenarios section of this document. The AD Connector does not store any authentication information, and it acts as a stateless proxy. As a result, it's imperative that the AD Connector has connectivity to an AD server. The AD Connector determines which AD server to connect to by using the DNS servers that you define when you create the AD Connector.

If you're using an AD Connector and you have MFA enabled on the directory, the MFA token is checked before the directory service authentication. Should the MFA validation fail, the user's login information is not forwarded to your AWS Directory Service.

Once a user is authenticated, the streaming traffic starts by using port 4172 (PCoIP port) through the AWS streaming gateway to the WorkSpace. Session-related information is still exchanged via HTTPS throughout the session. The streaming traffic uses the first ENI on the WorkSpace (eth0 on the WorkSpace), which is not connected to your VPC. The network connection from the streaming gateway to the ENI is managed by AWS. In the event of a connection failure from the streaming gateways to the WorkSpaces streaming ENI, a CloudWatch event is generated. See the Monitoring or Logging Using Amazon CloudWatch section of this document.

The amount of data sent between the Amazon WorkSpaces service and the client depends on the level of pixel activity. To ensure an optimal experience for users, we recommend that the round-trip time (RTT) between the WorkSpaces client and the AWS Region where your WorkSpaces are located is less than 100 milliseconds (ms). Typically, this means your WorkSpaces client is located less than two thousand miles from the Region in which the WorkSpace is being hosted. We provide a Connection Health Check webpage to help you determine the most optimal AWS Region to connect to the Amazon WorkSpaces service.

Amazon WorkSpaces Service to VPC
After a connection is authenticated from a client to a WorkSpace and streaming traffic is initiated, your WorkSpaces client will display either a Windows or Linux desktop (your Amazon WorkSpace) that is connected to your virtual private cloud (VPC), and your network should show that you have established that connection. The WorkSpace's primary ENI, identified as eth1, will have an IP address assigned to it from the Dynamic Host Configuration Protocol (DHCP) service that is provided by your VPC, typically from the same subnets as your AWS Directory Service. The IP address stays with the WorkSpace for the duration of the life of the WorkSpace. The ENI in your VPC has access to any resource in the VPC and to any network you have connected to your VPC (via a VPC peering connection, an AWS Direct Connect connection, or a VPN connection). ENI access to your network resources is determined by the route table of the subnet and the default security group that your AWS Directory Service configures for each WorkSpace, as well as any additional security groups that you assign to the ENI. You can add security groups to the ENI facing your VPC at any time by using the AWS Management Console or AWS CLI (a short example sketch follows below). (For more information on security groups, see Security Groups for Your WorkSpaces.) In addition to security groups, you can use your preferred host-based firewall on a given WorkSpace to limit network access to resources within the VPC. Figure 4 in the AD DS Deployment Scenarios section of this whitepaper shows the traffic flow described.
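As a concrete sketch of that console/CLI step, the following boto3 snippet appends an additional security group to a WorkSpace's VPC-facing ENI (eth1). The WorkSpace ID, security group ID, and Region are placeholders, and the lookup assumes the WorkSpace's private IP address maps to exactly one network interface in your account and Region.

```python
# A hedged boto3 sketch of attaching an additional security group to a
# WorkSpace's primary network interface (eth1). IDs and Region are
# placeholders; the ENI lookup assumes the WorkSpace's private IP uniquely
# identifies one network interface in this account and Region.
import boto3

REGION = "us-east-1"
workspaces = boto3.client("workspaces", region_name=REGION)
ec2 = boto3.client("ec2", region_name=REGION)

def add_security_group_to_workspace(workspace_id: str, extra_sg_id: str) -> None:
    """Append a security group to the WorkSpace's VPC-facing ENI."""
    # Look up the WorkSpace's private IP address (assigned to eth1 in your VPC).
    ws = workspaces.describe_workspaces(WorkspaceIds=[workspace_id])["Workspaces"][0]
    eni = ec2.describe_network_interfaces(
        Filters=[{"Name": "addresses.private-ip-address", "Values": [ws["IpAddress"]]}]
    )["NetworkInterfaces"][0]
    # Keep the existing groups (including the directory's default group).
    group_ids = [g["GroupId"] for g in eni["Groups"]]
    if extra_sg_id not in group_ids:
        group_ids.append(extra_sg_id)
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni["NetworkInterfaceId"], Groups=group_ids
    )

# Example call with hypothetical IDs:
# add_security_group_to_workspace("ws-0123456789", "sg-0123456789abcdef0")
```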
Example of a Typical Configuration
Let's consider a scenario where you have two types of users and your AWS Directory Service uses a centralized AD for user authentication:
• Workers who need full access from anywhere (for example, full-time employees) — These users will have full access to the internet and the internal network, and they will pass through a firewall from the VPC to the on-premises network.
• Workers who should have only restricted access from inside the corporate network (for example, contractors and consultants) — These users have restricted internet access through a proxy server to specific websites in the VPC, and will have limited network access in the VPC and to the on-premises network.
You'd like to give full-time employees the ability to have local administrator access on their WorkSpace to install software, and you would like to enforce two-factor authentication with MFA. You also want to allow full-time employees to access the internet without restrictions from their WorkSpace. For contractors, you want to block local administrator access so that they can only use specific pre-installed applications. You want to apply restrictive network access controls using security groups for these WorkSpaces. You need to open ports 80 and 443 to specific internal websites only, and you want to entirely block their access to the internet.
In this scenario, there are two completely different types of user personas with different requirements for network and desktop access. It's a best practice to manage and configure their WorkSpaces differently. You will need to create two AD Connectors, one for each user persona. Each AD Connector requires two subnets that have enough IP addresses available to meet your WorkSpaces usage growth estimates.
Note: Each AWS VPC subnet consumes five IP addresses (the first four and the last IP address) for management purposes, and each AD Connector consumes one IP address in each subnet in which it persists.
Further considerations for this scenario are as follows:
have more than two zones You can route all traffic from each WorkSpaces AZ to the corresponding public subnet to limit cross AZ traffic charges and provide easier management Figure 2 shows the VPC configuration ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 9 Figure 2 — Highlevel VPC design The following information describes how to configure the two different WorkSpaces types : To configure WorkSpace s for fulltime employees : 1 In the Amazon WorkSpaces Management Console choose the Directories option on the menu bar 2 Choose the directory that hosts your full time employees 3 Choose Local Administrator Setting By enabling this option any newly created WorkSpace will have local administrator privileges To grant internet access configure NAT for outbound internet access from your VPC To enable MFA you need to specify a RADIUS server se rver IPs ports and a preshared key For full time employees’ WorkSpaces inbound traffic to the WorkSpace can be limited to Remote Desktop Protocol (RDP) from the Helpdesk subnet by ap plying a default security group via the AD Connector settings To configure WorkSpaces for c ontractors and consultants : ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 10 1 In the Amazon WorkSpaces Management Console disable Internet Access and the Local Administrator setting 2 Add a security group under the Security Group settings section to enforce a security group for all new WorkSpaces created under that directory For consultants’ WorkSpaces limit outbound and inbound traffic to the WorkSpaces by applying a default Security group via the AD Connector settings to all WorkSpaces associated with the AD Connector The security group prevent s outbound access from the WorkSpaces to anything other than HTTP and HTTPS traffic and inbound traffic to RDP from the Helpdesk subnet in th e onpremises network Note: The security group applies only to the ENI that is in the VPC ( eth1 on the WorkSpace) and access to the WorkSpace from the WorkSpaces client is not restricted as a result of a security group Figure 3 shows the final WorkSpaces VPC design Figure 3 — WorkSpaces design with user personas ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 11 AWS Directory Service As mentioned in the introduction AWS Directory Service is a core component of Amazon WorkSpaces With AWS Directory Serv ice you can create three types of directories with Amazon WorkSpaces: • AWS Managed Microsoft AD which is a managed Microsoft AD powered by Windows Server 2012 R2 AWS Managed Microsoft AD is available in Standard or Enterprise Edition • Simple A D is standalone Microsoft ADcompatible managed directory service powered by Samba 4 • AD Connector is a directory proxy for redirecting authentica tion requests and user or group lookups to your existing on premises Microsoft AD The following section describes communication flows for authentication between the Amazon WorkSpaces brokerage service and AWS Directory Service best practices for implemen ting WorkSpaces with AWS Directory Service and advanced concepts such as MFA It also discus ses infrastructure architecture concepts for Amazon WorkSpaces at scale requirements on Amazon VPC and AWS Directory Service including integration with on prem ises Microsoft AD Domain Services (AD DS) AD DS Deployment Scenarios Backing Amazon WorkSpaces is the AWS Directory Service and the proper design and deployment of the directory service is critical The following three scenarios build on the 
Active Direc tory Domain Services on AWS Quick Start guide and describe the best practice deployment options for AD DS when used with Amazon WorkSpaces The Design Considerations section of this document details the specific requirements and best practices of using AD Connector for WorkSpaces which is an integral part of the overall WorkSpaces design concept • Scenario 1: Using AD Connector to proxy authentication to on premises AD DS — In this scenario network connectivity (VPN/Direct Connect) is in place to the customer with all authentication proxied via AWS Directory Service (AD Connector) to the customer on premises AD DS • Scenario 2: Extending on premises AD DS into AWS (Replica) — This scenario is similar to scenario 1 but here a replica of the customer AD DS is deployed on AWS in combination with AD Con nector reducing latency of authentication/query requests to AD DS and the AD DS global catalog ArchivedAmazon Web Services Best Practices for Deploying Amaz on WorkSpaces 12 • Scenario 3: Standalone isolated deployment using AWS Dire ctory Service in the AWS Cloud — This is an isolated scenario and doesn’t include connectivity back to the customer for authentication This approach uses AWS Directory Service (Microsoft AD) and AD Connector Although this scenario doesn’t rely on connect ivity to the customer for authentication it does make provision for application traffic where required over VPN or Direct Connect • Scenario 4: AWS Microsoft AD and a Two Way Transitive Trust to On Premises — This scenario includes the AWS Managed Microsof t AD Service (MAD) with a twoway transitive trust to the on premises Microsoft AD forest • Scenario 5: AWS Microsoft AD using a Shared Services VPC — This scenario uses AWS Managed Microsoft AD in a Shared Services VPC to be used as an Identity Domain for multiple AWS Services ( Amazon EC2 Amazon WorkSpaces and so on ) while u sing the AD Connector to proxy Lightweight Directory Access Protocol (LDAP ) user authentication requests to the AD domain controllers • Scenario 6: AWS Microsoft AD Shared Services VPC and a One Way Trust to On Premises AD — This scenario is similar to Scenario 5 but it includes disparate identity and resource domains using a oneway trust to onpremises Scenario 1: Using AD Connector to Proxy Authentication to On Premi ses Active Directory Service This scenario is for customers who don’t want to extend their on premises AD service into AWS or where a new deployment of AD DS is not an option Figure 4 depicts at a high level each of the components and shows the user authentication flow ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 13 Figure 4 — AD Connector to on premises Active Directory In this scenario AWS Directory Service (AD Connector) is used for all user or MFA authentication that is pr oxied through the AD Connector to the customer on premises AD DS (see Figure 5 ) For details on the protocols or encryption used for the authentication process see the Security section of this docume nt Figure 5 — User authentication via the Authentication Gateway Scenario 1 shows a hybrid architecture where the customer m ight already have resources in AWS as well as resources in an on premises data center that could be accessed via Amazon WorkSpaces The customer can leverage their existing on premises AD DS and RADIUS servers for user and MFA authentication This architecture uses the following components or construct s: AWS • Amazon VPC — Creation of an Amazon VPC with at least two private subnets across two A 
Zs ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 14 • DHCP Options Set — Creation of an Amazon VPC DHCP Options Set This allows customer specified domain name and domain name servers (DNS) (on premises services) to be defined For m ore information see DHCP options sets • Amazon virtual private gat eway — Enable communication with your own network over an IPsec VPN tunnel or an AWS Direct Connect connection • AWS Directory Service — AD Connector is deployed into a pair of Amazon VPC private subnets • Amazon WorkSpaces — WorkSpaces are deployed in the s ame private subnets as the AD Connector For more information see the Active Directory: Sites and Services section of this document Customer • Network connectivity — Corporate VPN or Direct Connect endpoints • AD DS — Corporate AD DS • MFA (optional) — Corporate RADIUS server • End user devices — Corporate or bring your own license ( BYOL ) end user devices (such as Windows Mac s iPad s Android tablets zero clients and Chromebook s) used to access the A mazon WorkSpaces service See this list of client applications for supported devices and web browsers Although this solution is great for customers who don’t want to deploy AD DS into the cloud it does come with some caveats : • Reliance on connectivity — If connectivity to the data center is lost users cannot log in to their respective WorkSpaces and existing connections will remain active for the Kerberos/ Ticket Granting Ticket ( TGT) lifetime • Latency — If latency exists via the connection (this is more the case with VPN than D irect Connect ) then WorkSpaces authentication and any AD DS related activity such as Group Policy (GPO) enforcement will take more time • Traffic costs — All authentication must traverse the VPN or D irect Connect link and so it depends on the connection type This is either Data Transfer O ut from Amazon EC2 to internet or Data Transfer Out (Direct Connect ) ArchivedAmazon Web Services Best Practice s for Deploying Amazon WorkSpaces 15 Note: AD Connector is a proxy service It doesn’t store or cache user credentials Instead all authentication lookup and management requests are handled by your AD An account with delegation privileges is required in your directory service with rights to re ad all user information and join a computer to the domain In general the WorkSpaces experience is highly dependent on item 5 shown in Figure 4 For this scenario the WorkSpaces authentication experience is highly dependent on the network link between the customer AD and the WorkSpaces VPC The customer should ensure the link is highly available Scenario 2: Extending On Premises AD DS into AWS (Replica) This scenario is similar to scenario 1 However in this scenario a replica of the customer AD DS is deployed on AWS in combination with AD Connector This reduces latency of authentication or query requests to AD DS Figure 6 shows a high level view of each of the components and the user authentica tion flow Figure 6 — Extend customer Active Directory Domain to the cloud As in scenario 1 AD Connector is used for all user or MFA authentication which in turn is proxied to the customer AD DS ( see Figure 5 ) In this scenario the customer AD DS is deployed across AZs on Amazon EC2 instances that are promoted to be domain ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 16 controllers in the customer’s on premises AD forest running in the AWS Cloud Each domain controller is deployed into VPC private subnets to make AD DS highly available in 
the AWS Cloud For best practices for depl oying AD DS on AWS see the Design Considerations section of this document After WorkSpaces instances are deployed they have access to the cloud based domain controllers for secure low latency directory services and DNS All network traffic including AD DS communication authent ication requests and AD replication is secured either within the private subnets or across the customer VPN tunnel or D irect Connect This architecture uses the following components or construct s: AWS • Amazon VPC — Creation of an Amazon VPC with at least four private subnets across two AZs — two for the customer AD DS two for AD Connector or Amazon WorkSpaces • DHCP Options Set — Creation of an Amazon VPC DHCP options set This allows the customer to define a specified domain name and DNSs (AD DS local) For more information see DHCP Options Sets • Amazon virtual private gateway — Enable communication with a customer owned network over an IPsec VPN tunnel or AWS Direct Connect connection • Amazon EC2 — o Customer corporate AD DS domain controllers deployed on Amazon EC2 instances in dedicated private VPC subnets o Customer “optional” RADIUS servers for MFA on Amazon EC2 in stances in dedicated private VPC subnets • AWS Directory Services — AD Connector is deployed into a pair of Amazon VPC private subnets • Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector For more information see the Active Directory: Sites and Services section of this document Customer • Network connectivity — Corporate VPN or AWS Direct Connect endpoints • AD DS — Corporate AD DS (required for replicati on) • MFA “optional” — Corporate RADIUS server ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 17 • End user devices — Corporate or BYOL end user devices (such as Windows Macs iPad s Android tablets zero clients and Chromebook s) used to access the Amazon WorkSpaces service See this list of client applications for supported devices and web browsers This solution doesn’t have the same caveats as scenario 1 Amazon WorkSpaces and AWS Directory Service have no reliance on the connectivity in place • Reliance on connectivity — If connectivity to the c ustomer data center is lost end users can continue to work because authentication and “optional” MFA are processed locally • Latency — With the exception of replication traffic all authentication is local and low latency See the Active Directory: Sites and Services section of this document • Traffic costs — In this scenario authentication is local with only AD DS replication having to traverse the VPN or D irect Connect link reducing da ta transfer In general the WorkSpaces experience is enhanced and isn’t highly dependent on item 5 as shown in Figure 6 This is also the case when a customer want s to scale WorkSpaces to thousands of desktops especially in relat ion to AD DS global catalog queries as this traffic remains local to the WorkSpaces environment Scenario 3: Standalone Isolated Deployment Using AWS Directory Service in the AWS Cloud This scenario shown in Figure 7 has AD DS deployed in the AWS Cloud in a standalone isolated environment AWS Directory Service is used exclusively in this scenario Instead of fully managing AD DS customers can rely on AWS Directory Service for tasks such as building a highly available directory topology monitoring domain controllers and configuring backups and snapshots ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 18 Figure 7 — 
Cloud only : AWS Directory Services (Microsoft AD) As in scenario 2 the AD DS (Microsoft AD) is deployed into dedicated subnets that span two AZs making AD DS highly available in the AWS Cloud In addition to Microsoft AD AD Connector (in all three scenarios) is deployed for WorkSpaces authentication or MFA This ensures separation of roles or functions within the Amazon VPC which is a standard best practice For more information see the Design Considerations section of this document Scenario 3 is a standard allin configuration that works well for customers who want to have AWS manage the deployment patching high availability and monitoring of the AWS Directory Service The scenario also works well for proof of concepts lab and production environments because of its isolation mode In addition to the placement of AWS Directory Service Figure 7 shows the flow of traffic from a user to a workspace and how the workspace interacts with the AD server and MFA server This architecture uses the following components or construct s AWS • Amazon VPC — Creation of an Amazon VPC with at least four private subnets across two AZs — two for AD DS Microsoft AD two for AD Connector or WorkSpaces ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 19 • DHCP options set — Creation of an Amazon VPC DHCP options set This allows a customer to define a specified domain name and DNS (Microsoft AD) For more information see DHCP options sets • Optional: Amazon virtual private gateway — Enable communication with a customer owned network over an IPsec VPN tunnel (VPN) or AWS Direct Connect connection Use for accessing on premises back end systems • AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS Managed Service) • Amazon EC2 — Customer “Optional” RADIUS Servers for MFA • AWS Directory Services — AD Connector is deployed into a pair of Amazon VPC private subnets • Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector For more informati on see the Active Directory: Sites and Services section of this document Customer • Optional: Network Connectivity — Corporate VPN or AWS Direct Connect endpoints • End user devices — Corporate or BYOL enduser devices (such as Windows Macs iPad s Android tablets zero clients and Chromebook s) used to access the Amazon WorkSpaces service See this list of client applications for supported devices and web browsers Like scenario 2 this scenario doesn’t have issues w ith reliance on connectivity to the customer on premises data center latency or data out transfer costs (except where internet access is enabled for WorkSpaces within the VPC) because by design this is an isolated or cloud only scenario Scenario 4: AW S Microsoft AD and a Two Way Transitive Trust to On Premises This scenario shown in Figure 8 has AWS Managed AD deployed in the AWS Cloud which has a two way transitive trust to the customer on premises AD User accounts and WorkSpaces are created in the Managed AD with the AD trust enabling resources to be accessed in the on premises environment ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 20 Figure 8 — AWS Microsoft AD and a two way transitive trust to onpremises As in scenario 3 the AD DS (Microsoft AD) is deployed into dedicated subnets that span two AZs making AD DS highly available in the AWS Cloud This s cenario works well for customers who want to have a fully managed AWS Directory Service incl uding deployment patching high availability and 
monitoring of their AWS Cloud This scenario also allows WorkSpaces users to access AD joined resources on their existing networks This scenario requires a domain trust to be in place Security groups and firewall rules need to allow communication between the two active directories In addition to the placement of AWS Directory Service Figure 8 shows the flow of traffic from a user to a workspace and how the workspace interacts w ith the AD server and MFA server This architecture uses the following components or construct AWS • Amazon VPC — Creation of an Amazon VPC with at least four private subnets across two AZs — two for AD DS Microsoft AD two for AD Connector or WorkSpaces • DHCP options set — Creation of an Amazon VPC DHCP options set This allows a customer to define a specified domain name and DNS (Microsoft AD) For more information see DHCP options sets ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 21 • Optional: Amazon virtual private gateway — Enable communication with a customer owned network over an IPsec VPN tunnel (VPN) or AWS Direct Connect connection Use for accessing on premises back end systems • AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS Managed Service) • Amazon EC 2 — Customer “Optional” RADIUS Servers for MFA • Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector For more information see the Active Directory: Sites and Services section of this document Customer • Network Connectivity — Corporate VPN or AWS Direct Connect endpoints • End user devices — Corporate or BYOL end user devices (such as Windows Macs iPad s Android tablets zero clients and Chromebook s) used to access the Amazon WorkSpaces service See this list of client applications for supported devices and web browsers This solution requires connectivity to the customer on premises data center to allow the trust process to operate If WorkSpaces users are using resources on the on premises network then latency and outbound data transfer costs need to be considered Scenario 5: AWS Microsoft AD using a Shared Services Virtual Private Cloud (VPC) This scenario shown in Figure 9 has an AWS Managed AD deployed in the A WS Cloud providing authentication services for workloads that are either already hosted in AWS or are planned to be as part of a broader migration The best practice recommendation is to have Amazon WorkSpaces in a dedicated VPC Customers should also create a specific AD OU to organize the WorkSpaces computer objects To deploy WorkSpaces with a shared services VPC hosting Managed AD deploy an AD Connector (ADC) with an ADC service account created in the Managed AD The service account req uires permissions to create computer objects in the WorkSpaces designated OU in the shared services Managed AD ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 22 Figure 9 — AWS Microsoft AD using a shared services VPC This architecture uses the following components or construct s AWS • Amazon VPC — Creation of an Amazon VPC with at least two private subnets across two AZs (two for AD Connector and WorkSpaces) • DHCP options set — Creation of an Amazon VPC DHCP options set This allows a customer to define a specified domain name and DNS (Microsoft AD) For more information see DHCP options sets • Optional: Amazon virtual private gateway — Enable communication with a customer owned network over an IPsec VPN tunnel (VPN) or AWS Direct Connect connection Use for accessing on premises 
back end systems • AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS Managed Service) AD Connecto r • AWS Transit Gateway/VPC Peering — Enable connectivity between Workspaces VPC and the Shared Services VPC • Amazon EC2 — Customer “Optional” RADIUS Servers for MFA • Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector For more information see the Active Directory: Sites and Services section of this document ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpace s 23 Customer • Network Connectivity — Corporate VPN or AWS Direct Connect endpoints • End user devices — Corporat e or BYOL end user devices (such as Windows Macs iPad s Android tablets zero clients and Chromebook s) used to access the Amazon WorkSpaces service See this list of client applications for supported devices and web browsers Scenario 6: AWS Microsoft AD Shared Services VPC and a One Way Trust to OnPremises This scenario as shown in Figure 10 uses an existing on premises AD for user accounts a nd introduces a separate Managed AD in the AWS Cloud to host the computer objects associated with the WorkSpaces This scenario allows the computer objects and AD group policies to be managed independently from the corporate AD This scenario is useful whe n a third party wants to manage WorkSpaces on a customer’s behalf as it allows the third party to define and control the WorkSpaces and policies associated with them without a need to grant the third party access to the customer AD In this scenario a specific AD OU is created to organize the WorkSpaces computer objects in the Shared Services AD To deploy WorkSpaces with the computer objects created in the Shared Services VPC hosting Managed AD using use r accounts from the customer domain deploy an AD Connector referencing the corporate AD Use an ADC Service Account created in the corporate AD that has permissions to create computer objects in the OU that was configured for WorkSpaces in the Shared Serv ices Managed AD ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 24 Figure 10 — AWS Microsoft shared services VPC and a one way trust to AD onpremises This architecture uses the following components or construct s: AWS • Amazon VPC — Creation of an Amazon VPC with at least two private subnets across two AZs — two for AD Connector and WorkSpaces • DHCP options set — Creation of an Amazon VPC DHCP options set This allows a customer to define a specified domain name and DNS (Microsoft AD) For more information see DHCP options sets • Optional: Amazon virtual private gateway — Enable communication with a customer owned network over a n IPsec VPN tunnel (VPN) or AWS Direct Connect connection Use for accessing on premises back end systems • AWS Directory Service — Microsoft AD deployed into a dedicated pair of VPC subnets (AD DS Managed Service) AD Connector • Transit Gateway/VPC Peerin g — Enable connectivity between Workspaces VPC and the Shared Services VPC • Amazon EC2 — Customer “Optional” RADIUS Servers for MFA • Amazon WorkSpaces — WorkSpaces are deployed into the same private subnets as the AD Connector For more information see the Active Directory: Sites and Services section of this document ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 25 Customer • Network Connectivity — Corporate VPN or AWS Direct Connect endpoints • End user devices — Corporate or BYOL e nduser devices (such as Windows Macs iPad s Android tablets zero clients 
and Chromebooks) used to access the Amazon WorkSpaces service. See this list of client applications for supported devices and web browsers.

Design Considerations

A functional AD DS deployment in the AWS Cloud requires a good understanding of both Active Directory concepts and specific AWS services. In this section we discuss key design considerations when deploying AD DS for Amazon WorkSpaces: VPC best practices for AWS Directory Service, DHCP and DNS requirements, AD Connector specifics, and AD sites and services.

VPC Design

As discussed in the Network Considerations section of this document, and documented earlier for scenarios 2 and 3, customers should deploy AD DS in the AWS Cloud into a dedicated pair of private subnets across two AZs, separated from the AD Connector or WorkSpaces subnets. This construct provides highly available, low-latency access to AD DS services for WorkSpaces while maintaining the standard best practice of separating roles or functions within the Amazon VPC.

Figure 11 shows the separation of AD DS and AD Connector into dedicated private subnets (scenario 3). In this example, all services reside in the same Amazon VPC.

Figure 11 — AD DS network segregation

Figure 12 shows a design similar to scenario 1; however, in this scenario the on-premises portion resides in a dedicated Amazon VPC.

Figure 12 — Dedicated WorkSpaces VPC

Note: For customers who have an existing AWS deployment where AD DS is being used, we recommend that they locate their WorkSpaces in a dedicated VPC and use VPC peering for AD DS communications.

In addition to the creation of dedicated private subnets for AD DS, domain controllers and member servers require several security group rules to allow traffic for services such as AD DS replication, user authentication, Windows Time services, and Distributed File System (DFS).

Note: Best practice is to restrict the required security group rules to the WorkSpaces private subnets and, in the case of scenario 2, allow bidirectional AD DS communications between on-premises and the AWS Cloud, as shown in the following table.

Table 1 — Bidirectional AD DS communications to and from the AWS Cloud

Protocol | Port | Use | Destination
TCP | 53, 88, 135, 139, 389, 445, 464, 636 | Auth (primary) | Active Directory (private data center or Amazon EC2)*
TCP | 49152-65535 | RPC high ports | Active Directory (private data center or Amazon EC2)**
TCP | 3268, 3269 | Trusts | Active Directory (private data center or Amazon EC2)*
TCP | 9389 | Remote Microsoft Windows PowerShell (optional) | Active Directory (private data center or Amazon EC2)*
UDP | 53, 88, 123, 137, 138, 389, 445, 464 | Auth (primary) | Active Directory (private data center or Amazon EC2)*
UDP | 1812 | Auth (MFA) (optional) | RADIUS (private data center or Amazon EC2)*

*See Active Directory and Active Directory Domain Services Port Requirements.
**See Service overview and network port requirements for Windows.

For step-by-step guidance for implementing these rules, see Adding Rules to a Security Group in the Amazon Elastic Compute Cloud User Guide.
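As a minimal illustration of the rules in Table 1, the following AWS CLI sketch opens a subset of the listed ports from the WorkSpaces and AD Connector subnets' security group to the domain controllers' security group. The security group IDs are placeholders, and the remaining ports in Table 1 (and the equivalent on-premises firewall rules for scenario 2) still need to be added in the same way.

    # Allow SMB (TCP 445) from the WorkSpaces security group to the domain controllers
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0aaaabbbbccccdddd \
      --protocol tcp --port 445 \
      --source-group sg-0eeeeffff00001111

    # Allow Kerberos (UDP 88) from the WorkSpaces security group to the domain controllers
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0aaaabbbbccccdddd \
      --protocol udp --port 88 \
      --source-group sg-0eeeeffff00001111

    # Allow the RPC high-port range used by AD DS
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0aaaabbbbccccdddd \
      --protocol tcp --port 49152-65535 \
      --source-group sg-0eeeeffff00001111

In this sketch, sg-0aaaabbbbccccdddd stands in for the domain controller security group and sg-0eeeeffff00001111 for the security group attached to the WorkSpaces network interfaces.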
VPC Design: DHCP and DNS

Within an Amazon VPC, DHCP services are provided by default for your instances. By default, every VPC provides an internal DNS server that is accessible via the Classless Inter-Domain Routing (CIDR) +2 address and is assigned to all instances through a default DHCP options set.

DHCP options sets are used within an Amazon VPC to define scope options, such as the domain name or the name servers that should be handed to customer instances via DHCP. Correct functionality of Windows services within a customer VPC depends on this DHCP scope option. In each of the scenarios defined earlier, customers create and assign their own scope that defines the domain name and name servers. This ensures that domain-joined Windows instances or WorkSpaces are configured to use the AD DNS.

The following table is an example of a custom set of DHCP scope options that must be created for Amazon WorkSpaces and AWS Directory Service to function correctly.

Table 2 — Custom set of DHCP scope options

Parameter | Value
Name tag | Creates a tag with key = Name and a value set to a specific string. Example: example.com
Domain name | example.com
Domain name servers | DNS server addresses, separated by commas. Example: 192.0.2.10, 192.0.2.21
NTP servers | Leave this field blank
NetBIOS name servers | Enter the same comma-separated IPs as for the domain name servers. Example: 192.0.2.10, 192.0.2.21
NetBIOS node type | 2

For details on creating a custom DHCP options set and associating it with an Amazon VPC, see Working with DHCP options sets in the Amazon Virtual Private Cloud User Guide.

In scenario 1, the DHCP scope would be the on-premises DNS or AD DS. However, in scenarios 2 or 3 it would be the locally deployed directory service (AD DS on Amazon EC2 or AWS Directory Service: Microsoft AD). We recommend that each domain controller that resides in the AWS Cloud be a global catalog and a directory-integrated DNS server.
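A hedged AWS CLI sketch of the scope options in Table 2 follows. The domain name and server addresses mirror the table's example values, and the VPC and DHCP options set IDs are placeholders.

    # Create a DHCP options set that points instances at the AD DNS servers
    aws ec2 create-dhcp-options \
      --dhcp-configurations \
        "Key=domain-name,Values=example.com" \
        "Key=domain-name-servers,Values=192.0.2.10,192.0.2.21" \
        "Key=netbios-name-servers,Values=192.0.2.10,192.0.2.21" \
        "Key=netbios-node-type,Values=2" \
      --tag-specifications "ResourceType=dhcp-options,Tags=[{Key=Name,Value=example.com}]"

    # Associate the new options set with the WorkSpaces VPC
    aws ec2 associate-dhcp-options \
      --dhcp-options-id dopt-0123456789abcdef0 \
      --vpc-id vpc-0123456789abcdef0

Instances pick up the new scope options the next time they renew their DHCP lease, so existing WorkSpaces may take some time (or a restart) to reflect the change.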
Active Directory: Sites and Services

For scenario 2, sites and services are critical components for the correct function of AD DS. Site topology controls AD replication between domain controllers within the same site and across site boundaries. In scenario 2, at least two sites are present: on-premises and the Amazon WorkSpaces environment in the cloud. Defining the correct site topology ensures client affinity, meaning that clients (in this case, WorkSpaces) use their preferred local domain controller.

Figure 13 — Active Directory sites and services: client affinity

Best practice: Define a high cost for the site links between on-premises AD DS and the AWS Cloud. Figure 13 is an example of the cost to assign to those site links (cost 100) to ensure site-independent client affinity. These associations help ensure that traffic — such as AD DS replication and client authentication — uses the most efficient path to a domain controller. In the case of scenarios 2 and 3, this helps ensure lower latency and reduced cross-link traffic.

Multi-Factor Authentication (MFA)

Implementing MFA requires the Amazon WorkSpaces infrastructure to use AD Connector as its AWS Directory Service and to have a RADIUS server. Although this document doesn't discuss the deployment of a RADIUS server, the previous section, AD DS Deployment Scenarios, describes the placement of RADIUS within each scenario.

MFA – Two-Factor Authentication

Amazon WorkSpaces supports MFA through AWS Directory Service: AD Connector and a customer-owned RADIUS server. After MFA is enabled, users are required to provide their username, password, and MFA code to the WorkSpaces client for authentication to their respective WorkSpaces desktops.

Figure 14 — WorkSpaces client with MFA enabled

Hard rule: Implementing MFA authentication requires customers to use AD Connector. AD Connector doesn't support selective "per user" MFA because this is a global, per-AD Connector setting. If selective "per user" MFA is required, users must be separated across different AD Connectors.

WorkSpaces MFA requires one or more RADIUS servers. Typically these are existing solutions (for example, RSA), or the servers can be deployed within a VPC (see the AD DS Deployment Scenarios section of this document). If deploying a new RADIUS solution, several implementations exist, such as FreeRADIUS, as well as cloud services such as Duo Security.

For a list of prerequisites to implement MFA with Amazon WorkSpaces, see Preparing Your Network for an AD Connector Directory in the Amazon WorkSpaces Administration Guide. The process for configuring AD Connector for MFA is described in Managing an AD Connector Directory: Multi-factor Authentication in the Amazon WorkSpaces Administration Guide.

Disaster Recovery / Business Continuity

WorkSpaces Cross-Region Redirection

Amazon WorkSpaces is a regional service that provides remote desktop access to customers. Depending on business continuity and disaster recovery (BC/DR) requirements, some customers require seamless failover to another Region where the Amazon WorkSpaces service is available. This BC/DR requirement can be accomplished using the Amazon WorkSpaces cross-Region redirection option. It allows customers to use a fully qualified domain name (FQDN) as their Amazon WorkSpaces registration code. When your end users log in to WorkSpaces, you can redirect them across Amazon WorkSpaces Regions based on your DNS policies for the FQDN. This option can be used with public or private DNS zones.

Cross-Region failover can be manual or automated. Automated failover can be done by using DNS health checks to determine whether the primary site is still available before failing over to the second Region. If you don't have DNS health checks, you can create a TXT record within your managed DNS service.

An important consideration is to determine at what point failover to a secondary Region should occur. The criteria for this decision should be based on your company policy, but should include the Recovery Time Objective (RTO) and the Recovery Point Objective (RPO). A Well-Architected Amazon WorkSpaces design should include the potential for service failure, and the time tolerance for recovering normal business operations will also factor into the decision.

Additionally, with cross-Region redirection, user data replication to the new Region should be considered. There are many options available for user data replication, such as Amazon WorkDocs, Amazon FSx for Windows File Server (DFS share), or third-party utilities that synchronize data volumes between Regions. For more information, see Cross-Region Redirection for Amazon WorkSpaces.
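Cross-Region redirection is configured with connection aliases. The following AWS CLI sketch is a minimal illustration only: it assumes desktop.example.com is the FQDN used as the registration code, d-1234567890 is a registered directory ID, and us-east-1 is the primary Region. A corresponding alias must also be created (or shared) and associated with a directory in the failover Region, following the cross-Region redirection documentation.

    # In the primary Region: create a connection alias for the FQDN
    aws workspaces create-connection-alias \
      --connection-string "desktop.example.com" \
      --region us-east-1

    # Associate the returned alias ID with the WorkSpaces directory in that Region
    aws workspaces associate-connection-alias \
      --alias-id wsca-0123456789abcdef0 \
      --resource-id d-1234567890 \
      --region us-east-1

Once aliases exist in both Regions, the DNS policies for the FQDN (failover routing, health checks, or a manually updated record) control which Region end users are redirected to.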
WorkSpaces Interface VPC Endpoint (AWS PrivateLink) – API Calls

Amazon WorkSpaces public APIs are supported on AWS PrivateLink. AWS PrivateLink increases the security of data shared with cloud-based applications by reducing the exposure of data to the public internet. WorkSpaces API traffic can be secured inside a VPC by using an interface endpoint, which is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. This enables you to privately access the WorkSpaces API by using private IP addresses. Using PrivateLink with the WorkSpaces public APIs also enables you to securely expose REST APIs to resources only within your VPC, or to those connected to your data centers via AWS Direct Connect. You can restrict access to selected Amazon VPCs and VPC endpoints, and enable cross-account access, using resource-specific policies.

Ensure that the security group associated with the endpoint network interface allows communication between the endpoint network interface and the resources in your VPC that communicate with the service. If the security group restricts inbound HTTPS traffic (port 443) from resources in the VPC, you might not be able to send traffic through the endpoint network interface. Keep the following in mind:

• An interface endpoint supports TCP traffic only.
• Endpoints support IPv4 traffic only.
• When you create an endpoint, you can attach an endpoint policy to it that controls access to the service to which you are connecting.
• You have a quota on the number of endpoints you can create per VPC.
• Endpoints are supported within the same Region only. You cannot create an endpoint between a VPC and a service in a different Region.

Create Notification to receive alerts on interface endpoint events — You can create a notification to receive alerts for specific events that occur on your interface endpoint. To create a notification, you must associate an Amazon SNS topic with the notification. You can subscribe to the SNS topic to receive an email notification when an endpoint event occurs.

Create a VPC Endpoint Policy for Amazon WorkSpaces — You can create a policy for Amazon VPC endpoints for Amazon WorkSpaces to specify the following (a sketch follows at the end of this section):

• The principal that can perform actions
• The actions that can be performed
• The resources on which actions can be performed

Connect Your Private Network to Your VPC — To call the Amazon WorkSpaces API through your VPC, you have to connect from an instance that is inside the VPC, or connect your private network to your VPC by using a virtual private network (VPN) connection or AWS Direct Connect. For information about VPN, see VPN connections in the Amazon Virtual Private Cloud User Guide. For information about AWS Direct Connect, see Creating a connection in the AWS Direct Connect User Guide.

For more information about using the Amazon WorkSpaces API through a VPC interface endpoint, see Infrastructure Security in Amazon WorkSpaces.
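A minimal sketch of creating the interface endpoint and attaching a restrictive endpoint policy follows. The VPC, subnet, and security group IDs are placeholders, us-west-2 is an assumed Region, and the policy shown is only one example (read-only WorkSpaces actions); adjust the principals, actions, and resources to your own requirements.

    # Example endpoint policy (workspaces-endpoint-policy.json): allow read-only actions
    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Principal": "*", "Action": "workspaces:Describe*", "Resource": "*" }
      ]
    }

    # Create the interface endpoint for the WorkSpaces API in the WorkSpaces VPC
    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0123456789abcdef0 \
      --vpc-endpoint-type Interface \
      --service-name com.amazonaws.us-west-2.workspaces \
      --subnet-ids subnet-0aaa1111 subnet-0bbb2222 \
      --security-group-ids sg-0ccc3333 \
      --private-dns-enabled \
      --policy-document file://workspaces-endpoint-policy.json

With private DNS enabled, CLI and SDK calls to the regional WorkSpaces API endpoint from inside the VPC resolve to the endpoint's private IP addresses instead of traversing the public internet.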
Amazon WorkSpaces Tags

Tags enable you to associate metadata with AWS resources. With Amazon WorkSpaces, tags can be applied to WorkSpaces, registered directories, bundles, IP access control groups, and images. Tags assist with cost allocation to internal cost centers. Before using tags with Amazon WorkSpaces, review the Tagging Best Practices whitepaper.

Tag Restrictions

• Maximum number of tags per resource — 50
• Maximum key length — 127 Unicode characters
• Maximum value length — 255 Unicode characters
• Tag keys and values are case sensitive. Allowed characters are letters, spaces, and numbers representable in UTF-8, plus the following special characters: + - = . _ : / @. Do not use leading or trailing spaces.
• Do not use the "aws:" or "aws:workspaces:" prefixes in your tag names or values, because they are reserved for AWS use. You can't edit or delete tag names or values with these prefixes.

Resources That You Can Tag

• You can add tags to the following resources when you create them: WorkSpaces, imported images, and IP access control groups.
• You can add tags to existing resources of the following types: WorkSpaces, registered directories, custom bundles, images, and IP access control groups.

Using the Cost Allocation Tag

To view your WorkSpaces resource tags in Cost Explorer, activate the tags that you have applied to your WorkSpaces resources by following the instructions in Activating User-Defined Cost Allocation Tags in the AWS Billing and Cost Management User Guide. Although tags appear 24 hours after activation, it can take four to five days for values associated with those tags to appear in Cost Explorer. To appear and provide cost data in Cost Explorer, WorkSpaces resources that have been tagged must incur charges during that time. Cost Explorer shows only cost data from the time when the tags were activated forward; no historical data is available at this time.

Managing Tags

To update the tags for an existing resource using the AWS CLI, use the create-tags and delete-tags commands, as shown in the sketch that follows. For bulk updates, and to automate the task across a large number of WorkSpaces resources, Amazon WorkSpaces supports AWS Resource Groups Tag Editor. Tag Editor enables you to add, edit, or delete AWS tags from your WorkSpaces along with your other AWS resources.
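A short AWS CLI sketch of managing tags on an existing WorkSpace follows; the WorkSpace ID and tag keys and values are placeholders chosen for illustration.

    # Add (or overwrite) tags on a WorkSpace for cost allocation
    aws workspaces create-tags \
      --resource-id ws-abc1234de \
      --tags Key=CostCenter,Value=1234 Key=Department,Value=Finance

    # List the tags currently applied to the WorkSpace
    aws workspaces describe-tags --resource-id ws-abc1234de

    # Remove a tag that is no longer needed
    aws workspaces delete-tags --resource-id ws-abc1234de --tag-keys CostCenter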
Automating Amazon WorkSpaces Deployment

With Amazon WorkSpaces, you can launch a Microsoft Windows or Amazon Linux desktop within minutes and connect to and access your desktop software from on-premises or an external network securely, reliably, and quickly. You can automate the provisioning of Amazon WorkSpaces so that you can integrate WorkSpaces into your existing provisioning workflows.

Common WorkSpaces Automation Methods

Customers can use a number of tools for rapid Amazon WorkSpaces deployment. These tools simplify management of WorkSpaces, reduce costs, and enable an agile environment that can scale and move fast.

AWS CLI and API

There are Amazon WorkSpaces API operations that you can use to interact with the service securely and at scale. All public APIs are available with the AWS CLI, SDKs, and Tools for PowerShell, while private APIs, such as image creation, are available only through the AWS Management Console. When considering operational management and business self-service for Amazon WorkSpaces, consider that the WorkSpaces APIs do require technical expertise and security permissions to use.

API calls can be made using the AWS SDKs. AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are PowerShell modules built on the functionality exposed by the AWS SDK for .NET. These modules enable you to script operations on AWS resources from the PowerShell command line and integrate with existing tools and services. For example, API calls can enable you to automatically manage the WorkSpaces lifecycle by integrating with AD to provision and decommission WorkSpaces based on a user's AD group membership.

AWS CloudFormation

AWS CloudFormation enables you to model your entire infrastructure in a text file. This template becomes the single source of truth for your infrastructure, which helps you standardize the infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting. AWS CloudFormation provisions your resources in a safe, repeatable manner, enabling you to build and rebuild your infrastructure and applications. You can use CloudFormation to commission and decommission environments, which is useful when you have a number of accounts that you want to build and decommission in a repeatable fashion. When considering operational management and business self-service for Amazon WorkSpaces, consider that AWS CloudFormation does require technical expertise and security permissions to use.

Self-Service WorkSpaces Portal

Customers can build on WorkSpaces API commands and other AWS services to create a WorkSpaces self-service portal. This helps customers streamline the process of deploying and reclaiming WorkSpaces at scale. Using a WorkSpaces portal, you can enable your workforce to provision their own WorkSpaces with an integrated approval workflow that does not require IT intervention for each request. This reduces IT operational costs while helping end users get started with WorkSpaces faster. The additional built-in approval workflow simplifies the desktop approval process for businesses. A dedicated portal can offer an automated tool for provisioning Windows or Linux cloud desktops and enable users to rebuild, restart, or migrate their WorkSpace, as well as provide a facility for password resets.

There are guided examples of creating self-service WorkSpaces portals referenced in the Further Reading section of this document. AWS Partners also provide preconfigured WorkSpaces management portals via the AWS Marketplace.

Integration with Enterprise IT Service Management

As enterprises adopt Amazon WorkSpaces as their virtual desktop solution at scale, there is a need to implement or integrate with IT Service Management (ITSM) systems. ITSM integration allows for self-service offerings for provisioning and operations. AWS Service Catalog enables you to manage commonly deployed AWS services and provisioned software products centrally. This service helps your organization achieve consistent governance and compliance requirements, while enabling users to deploy only the approved AWS services they need. AWS Service Catalog can be used to enable a self-service lifecycle management offering for Amazon WorkSpaces from within ITSM tools such as ServiceNow.

WorkSpaces Deployment Automation Best Practices

You should consider Well-Architected principles when selecting and designing WorkSpaces deployment automation:

• Design for Automation — Design to deliver the least possible manual intervention in the process to enable repeatability and scale.
• Design for Cost Optimization — By automatically creating and reclaiming WorkSpaces, you can reduce the administration effort needed to provide resources and remove idle or unused resources from generating unnecessary cost.
• Design for Efficiency — Minimize the resources needed to create and terminate WorkSpaces. Where possible, provide Tier 0 self-service capabilities for the business to improve efficiency.
• Design for Flexibility — Create a consistent deployment mechanism that can handle multiple scenarios and can scale with the same mechanism (customized using tagged use case and profile identifiers).
• Design for Productivity — Design your WorkSpaces operations to allow for the correct authorization and validation to add or remove resources.
• Design for Scalability — The pay-as-you-go model that Amazon WorkSpaces uses can drive cost savings by creating resources as needed and removing them when they are no longer necessary.
• Design for Security — Design your WorkSpaces operations to allow for the correct authorization and validation to add or remove resources.
• Design for 
Supportability — Design your WorkSpaces operations to allow for noninvasive support and recovery mechanisms and processes Amazon WorkSpaces Language Packs Amazon WorkSpaces bundles that provide the Windows 10 desktop experience supports English (US) French (Canadian) Korean and Japanese However you can include additional language packs for Spanish Italian Portuguese and many more language options For more information see How do I create a new Windows WorkSpace image with a client language other than English? Amazon WorkSpaces Profile Management Amazon WorkSpaces separates the user profile from the base Operating System (OS) by redirecting all profile writes to a separate Amazon Elastic Block Store (Amazon EBS) volume In Microsoft Windows the user profile is stored in D:\Users\username In Amazon Linux the user profile is stored in /home The EBS volume is snapshotted automatically every 12 hours The snapshot is automatically stored in an AWS Managed S3 bucket to be used in the event that a n Amazon WorkSpace is rebuilt or restored For most organizations having automatic snapshots every 12 hours is superior to the existing desktop deployment of no backups for user profiles However customers can require more granular control over user profiles ; for example migration from desktop to WorkSpaces to a new OS/AWS Region support for DR and so on There are alternative methods for profile management available for Amazon WorkSpaces Folder Redirection While folder redirect ion is a common design consideration in Virtual Desktop Infrastructure (VDI) architectures it is not a best practice or even a common requirement in Amazon WorkSpaces designs The reason for this is Amazon WorkSpaces is a persistent Desktop as a Service (DaaS) solution with application and user data persisting out of the box There are specific scenarios where Folder Redirection for User Shell Folders ( for example D:\Users\username\Desktop redirected to \\Server\RedirectionShare$ \username\Desktop ) are required such as immediate recovery point objective (RPO) for user profile data in disaster recovery (DR) environments Best Practices The following best practices are listed for a robust folder redirection: ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 38 • Host the Windows File Server s in the same AWS Region and AZ that the Amazon WorkSpaces are launched in • Ensure AD Security Group Inbound Rules include the Windows File Server Security Group or private IP address es; otherwise ensure that the onpremises firewall allows those same TCP and UDP portbased traffic • Ensure Windows File Server Security Group Inbound Rules include TCP 445 (SMB) for all Amazon WorkSpaces Security Groups • Create an AD Security Group for Amazon WorkSpaces users to authorize users access to the Windows File Share • Use DFS Namespac e (DFS N) and DFS Replication (DFS R) to ensure your Windows File Share is agile not tied to anyone one specific Windows File Server and all user data is automatically replicated between Windows File Servers • Append ‘$’ to the end of the share name to hi de the share hosting user data from view when browsing the network shares in Windows Explorer • Create the file share following Microsoft’s guidance for redirected folders : Deploy Folder Redirection with Offline Files Follow the guidance for Security Permissions and GPO configuration closely • If your Amazon Work Spaces deployment is Bring Your Own License (BYOL) you must also specify disabling Offline Files following Microsoft’s guidance : Disable 
Offline Files on Individual Redirected Folders • Install and run Data Deduplication (commonly referred to as ‘dedupe’) if your Windows File Server is Windows Server 2016 or newer to reduce storage consumption and optimize cost See Install and enable Data Deduplication and Running Data Deduplicati on • Back up your Windows File Server file shares using existing organizational backup solutions Thing to Avoid • Do not use Windows File Server s that are accessible only across a wide area network ( WAN ) connection as the SMB protocol is not designed for th at use • Do not use the same Windows File Share that is used for Home Directories to mitigate the chances of users accidentally deleting their User Shell folders • While enabling Volume Shadow Copy Service (VSS) is recommended for ease of file restores this alone does not remove the requirement to back up the Windows File Server file shares ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 39 Other Considerations • Amazon FSx for Windows File Server of fers a managed service for Windows file shares and simplify the operational overhead of folder redirection including automatic backups • Utilize AWS Storage Gateway for SMB File Share to back up your file shares if there is no existing organizational backup solution Profile Settings Group Policies A common best practice in enterprise Microsoft Windows deployments is to define user environment settings through Group Policy Object (GPO) and Group Policy Preferences (GPP) settings Settings such as shortcuts drive mappings registry keys and printer s are defined through the Group Policy Management Console The benefits to defining the user environment through GPOs include but are not limited to: • Centralized configuration management • User profile defined by AD Security Group Membership or OU placement • Protection against deletion of settings • Automate profile creation and personalization at first logon • Ease of future updating Note: Follow Microsoft’s Best Practices for optimizing Group Policy performance Interactive Logon Banners Group Policies must not be used as they are not supported on Amazon WorkSpaces Banners are presented on the Amazon WorkSpaces Client through AWS support requests Additionally removable devices must not be blocked through group policy as they are required for Amazon WorkSpaces GPOs can be used to manage Windows WorkSpaces For more information see Manage Your Windows WorkSpaces Amazon WorkSpaces Volumes Each Amazon WorkSpaces instance contains two volumes : an operating system volume and a user volume ArchivedAmazon Web Services Best Practices fo r Deploying Amazon WorkSpaces 40 • Amazon Windows WorkSpaces — The C:\ drive is used for the Operating System (OS) and the D:\ drive is user volume The user profile is located on the user volume (AppData Documents Pictures Downloads and so on ) • Amazon Linux WorkSpace s — With an Amazon Linux WorkSpace the system volume (/dev/xvda1) mounts as the root folder The user volume is for user data and applications; /dev/xvdf1 mounts as /home For operating system volumes you can select a starting size for this drive of 80 GB or 175 GB For user volumes you can select a starting size of 10 GB 50 GB or 100 GB Both volumes can be increased up to 2TB in size as needed ; however to increase the user volume beyond 100 GB the OS volume must be 175 GB Volume changes can be performed only once every six hours p er volume For additional information on modifying the WorkSpaces volume size see the Modify a WorkSpace 
section of the Administration Guide.

WorkSpaces Volumes Best Practices

When planning an Amazon WorkSpaces deployment, we recommend factoring in the minimum requirements for OS installation, in-place upgrades, and the additional core applications that will be added to the image on the OS volume. For the user volume, we recommend starting with a smaller disk allocation and incrementally increasing the user volume size as needed. Minimizing the size of the disk volumes reduces the cost of running the WorkSpace.

Note: While a volume size can be increased, it cannot be decreased.

Amazon WorkSpaces Logging

In an Amazon WorkSpaces environment, there are many log sources that can be captured to troubleshoot issues and monitor overall WorkSpaces performance.

Amazon WorkSpaces Client 3.x

On each Amazon WorkSpaces client, the client logs are located in the following directories:

• Windows — %LOCALAPPDATA%\Amazon Web Services\Amazon WorkSpaces\logs
• macOS — ~/Library/"Application Support"/"Amazon Web Services"/"Amazon WorkSpaces"/logs
• Linux (Ubuntu 18.04 or later) — /opt/workspacesclient/workspacesclient

There are many instances where diagnostic or debugging details may be needed for a WorkSpaces session from the client side. Advanced client logging can be enabled by adding the -l3 option when launching the WorkSpaces client executable. For example:

"C:\Program Files (x86)\Amazon Web Services, Inc\Amazon WorkSpaces\workspaces.exe" -l3

Amazon WorkSpaces Service

The Amazon WorkSpaces service is integrated with Amazon CloudWatch metrics, CloudWatch Events, and AWS CloudTrail. This integration allows performance data and API calls to be logged to a central AWS service. When managing an Amazon WorkSpaces environment, it is important to continually monitor certain CloudWatch metrics to determine the overall health of the environment.

Metrics

While there are other CloudWatch metrics available for Amazon WorkSpaces (see Monitor Your WorkSpaces Using CloudWatch Metrics), the following three metrics will assist in maintaining WorkSpace instance availability (a monitoring sketch follows at the end of this section):

• Unhealthy — The number of WorkSpaces that returned an unhealthy status.
• SessionLaunchTime — The amount of time it takes to initiate a WorkSpaces session.
• InSessionLatency — The round-trip time between the WorkSpaces client and the WorkSpace.

For more information on WorkSpaces logging options, see Logging Amazon WorkSpaces API Calls by Using CloudTrail. The additional CloudWatch Events will assist with capturing the client-side IP of the user session, when the user connected to the WorkSpaces session, and which endpoint was used during the connection. All of these details assist with isolating or pinpointing user-reported issues during troubleshooting sessions.

Note: Some CloudWatch metrics are available only with AWS Managed AD.
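As a minimal monitoring sketch, the following AWS CLI command creates a CloudWatch alarm on the Unhealthy metric for a single registered directory. The directory ID and SNS topic ARN are placeholders, and the threshold, period, and evaluation settings should be tuned to your environment.

    # Alert when any WorkSpace registered to the directory reports an unhealthy status
    aws cloudwatch put-metric-alarm \
      --alarm-name "workspaces-unhealthy-d-1234567890" \
      --namespace AWS/WorkSpaces \
      --metric-name Unhealthy \
      --dimensions Name=DirectoryId,Value=d-1234567890 \
      --statistic Maximum \
      --period 300 \
      --evaluation-periods 2 \
      --threshold 0 \
      --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-west-2:111122223333:workspaces-alerts

Similar alarms on SessionLaunchTime and InSessionLatency can be used to catch degraded user experience before users report it.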
Amazon WorkSpaces Migrate

The Amazon WorkSpaces migrate feature enables you to bring your user volume data to a new bundle. You can use this feature to:

• Migrate your WorkSpaces from the Windows 7 desktop experience to the Windows 10 desktop experience.
• Migrate from a PCoIP WorkSpace to a WorkSpaces Streaming Protocol (WSP) WorkSpace.
• Migrate WorkSpaces from one public or custom bundle to another. For example, you can migrate from GPU-enabled (Graphics and GraphicsPro) bundles to non-GPU-enabled bundles, and vice versa.

Migration Process

With WorkSpaces migrate, you specify the target WorkSpaces bundle. The migration process recreates the WorkSpace using a new root volume from the target bundle image and the user volume from the latest original user volume snapshot. A new user profile is generated during migration for better compatibility. The data in your old user profile that cannot be moved to the new profile is stored in a notMigrated folder.

During migration, the data on the user volume (drive D) is preserved, but all data on the root volume (drive C) is lost. This means that none of the installed applications, settings, and changes to the registry are preserved. The old user profile folder is renamed with the NotMigrated suffix, and a new user profile is created.

The migration process takes up to one hour per WorkSpace. If the migrate workflow fails to complete, the service automatically rolls the WorkSpace back to its original state before migration, minimizing any risk of data loss. Any tags assigned to the original WorkSpace are carried over during migration, and the running mode of the WorkSpace is preserved. The migrated WorkSpace has a new WorkSpace ID, computer name, and IP address.

Migration Procedure

You can migrate WorkSpaces through the Amazon WorkSpaces console, the AWS CLI (using the migrate-workspace command), or the Amazon WorkSpaces API. All migration requests are queued, and the service automatically throttles the total number of migration requests if there are too many.
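A minimal AWS CLI sketch of a single migration request follows; the WorkSpace and bundle IDs are placeholders, and the target bundle would normally be confirmed first with describe-workspace-bundles.

    # Find the target bundle ID (for example, a custom Windows 10 bundle you own)
    aws workspaces describe-workspace-bundles --owner SELF

    # Queue the migration of one WorkSpace to the target bundle
    aws workspaces migrate-workspace \
      --source-workspace-id ws-abc1234de \
      --bundle-id wsb-1a2b3c4d5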
the WorkSpaces ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 44 • If you are using scripts to migrate WorkSpaces migrate them in batches of no more than 25 WorkSpaces at a time WellArchitected Framework AWS Well Architected helps cloud architects build secure high performing resilient and efficient infrastructure for their applications and workloads It describes the key concepts design principles and architectural best practices for designing and running workloads in th e cloud It is based on five key pillars: • Operational Excellence (OE) • Security • Reliability • Performance Efficiency • Cost Optimization When architecting an Amazon WorkSpaces environment it is important to evaluate these key pillars to determine the maturity deployment level and discover additional features that can be used with the Amazon WorkSpaces While there is overall guidance for the AWS Well Architect Framework we are provi ing some key questions that can be included in the planning phase of your WorkSpaces deployment to ensure each of the five pillars are considered General • What is the business driver for this project? Operational Excellence • How do you segregate access control between users and different admin groups? Security 1 What are the security and compliance requirements to be considered for the WorkSpaces to operate in? 2 Are there any restrictions on routing to external IP addresses? 3 Are the required WorkSpaces ports allowed through the corporate firewall? 4 Is or will multi factor authentication be used with this deployment? 5 How do you many user identities and authorization requests today? ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 45 Reliability 1 What is the data retention policy for desk tops? 2 What is the Recovery Point Objective (RPO) for end user data? 3 What is the Recovery Time Objective (RTO) for end user data? Cost Optimization 1 Have the WorkSpaces bundles been right sized for the user case and applications? 2 Will the users consum e WorkSpaces more than 82 hours per month? 
While the questions above do not constitute an exhaustive list of items that should be considered they provide some overarch ing guidance to assist you with a Well Architected Amazon WorkSpaces deployment Security This section explains how to secure data by using encryption when using Amazon WorkSpaces services We describe encryption in transit and at rest and the use of secu rity groups to protect network access to the WorkSpaces This section also provides information on how to control end device access to WorkSpaces by using Trusted Devices and IP Access Control Groups Additional information on authentication (including M FA support) in the AWS Directory Service can be found in this section Encryption in Transit Amazon WorkSpaces uses cryptography to protect confidentiality at different stages of communication (in transit) and also to protect data at rest (encrypted WorkSp aces) The processes in each stage of the encryption used by Amazon WorkSpaces in transit is described in the following sections For information about the encryption at rest see the Encrypted WorkSpaces section of this document Registration and Updates The desktop client application communicates with Amazon for updates and registration using HTTPS ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 46 Authentication Stage The desktop client initiates authentication by sending credentials to the authentication gateway The communication between the desktop client and authentication gateway uses HTTPS At the end of this stage if the authentication succeeds the a uthenticati on gateway returns an OAuth 20 token to the desktop client through the same HTTPS connection Note: The desktop client application supports the use of a proxy server for port 443 (HTTPS) traffic for updates registration and authentication After recei ving credentials from the client the authentication gateway sends an authentication request to AWS Directory Service The communication from the authentication gateway to AWS Directory Service takes place over HTTPS so no user credentials are transmitted in plain text Authentication — Active Directory Connector (ADC) AD Connector uses Kerberos to establish authenticated communication with on premises AD so it can bind to LDAP and execute subsequent LDAP queries Client side LDAPS support in ADC is also available to encrypt queries between Microsoft AD and AWS Applications Before implementing client side LDAPS functionality review the prerequisites for client side LDAPS The AWS Directory Service also supports LDAP with TLS No user credentials are transmitted in plaintext at any time For increased security it is possible to connect a WorkSpaces VPC with the on premises network (where AD resides) using a VPN connection When using an AWS hardware VPN connection customers can s et up encryption in transit by using standard IPSEC ( Internet Key Exchange ( IKE) and IPSEC SAs) with AES 128 or AES 256 symmetric encryption keys SHA 1 or SHA 256 for integrity hash and DH groups (214 18 22 23 and 24 for phase 1; 125 14 18 22 23 and 24 for phase 2) using perfect forward secrecy (PFS) Broker Stage After receiving the OAuth 20 token (from the a uthentication gateway if the authentication succeeded) the desktop client quer ies Amazon WorkSpaces services (Broker Connection Manager) using HTTPS The desktop client authenticates itself by sending the OAuth 20 token and as a result the client receive s the endpoint information of the WorkSpaces streaming gateway ArchivedAmazon Web Services Best Practices for 
Deploying Amazon WorkSpaces 47 Streaming Stage The desktop client requests to open a PCoIP session with the streaming gateway (using the OAuth 20 token) This session is AES256 encrypted and uses the PCoIP port for communication control ( 4172/ TCP) Using the OAuth20 token the streaming gateway requests the user specific WorkSpaces information from the Amazon WorkSpaces service over HTTPS The streaming gateway also receives the TGT from the client (which is encrypted using the client user’s password) and by using Kerberos TGT pass through the gateway initiates a Windows login on the WorkSpace using t he user’s retrieved Kerberos TGT The WorkSpace then initiates an authentication request to the configured AWS Directory Service using standard Kerberos authentication After the WorkSpace is successfully logged in the PCoIP streaming starts The connect ion is initiated by the client on port TCP 4172 with the return traffic on port UDP 4172 Additionally the initial connection between the streaming gateway and a WorkSpaces desktop over the management interface is via UDP 55002 (See documentation for IP Address and Port Requirements for Amazon WorkSpaces The initial outbound UDP port is 55002 ) The streaming connection using po rts 4172 ( TCP and UDP ) is encrypted by using AES 128 and 256 bit ciphers but default to 128 bit Customers can actively change this to 256 bit either using PCoIP specific AD Group Policy settings for Windows WorkSpaces or with the pcoip agentconf file for Amazon Linux WorkSpaces To learn more about Group Policy administration for Amazon WorkSpaces review the documentation Network Interfaces Each Amazon WorkSpace has two network interfaces called the primary network interface and management network interface The primary network interface provides connectivity to resources inside the customer VPC such as access to AWS Directory Service the internet and the custome r corporate network It is possible to attach security groups to this primary network interface Conceptually we differentiate the security groups attached to this ENI based on the scope of the deployment: WorkSpaces security group and ENI security groups Management Network Interface The management network interface cannot be controlled via security groups; however customers can use a host based firewall on WorkSpace s to block ports or control access We don’t recommend applying restrictions on the management network interface If a customer decide s to add host based firewall rules to manage this interface a few ports should be open so the Amazon WorkSpaces service ca n manage ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 48 the health and accessibility to the WorkSpace See Network Interfaces in the Amazon Workspaces Administration Gui de WorkSpaces Security Group A default security group is created per AWS Directory Service and is automatically attached to all WorkSpaces that belong to that specific directory As with any other security group it is possible to modify the rules of a Wo rkSpaces security group The results take effect immediately after the changes are applied However do not delete this security group If you delete this security group your WorkSpaces won't function correctly and you won't be able to recreate this grou p and add it back It is also possible to change the default WorkSpaces security group attached to an AWS Directory Service by changing the WorkSpaces security group association Note: A newly associated security group will be attached only to WorkSpaces created or 
rebuilt after the modification ENI Security Groups Because the primary network interface is a regular ENI it can be managed by usin g the different AWS management tools See Elastic Network Interfaces Look for the WorkSpace IP address (in the WorkSpaces page in the Amazon WorkSpaces console) and then use that IP address as a filter to find the corresponding ENI (in the Network Interfaces section of the Amazon EC2 console) Once the ENI is located it can be directly manage d by security groups When manually assigning security groups to the primary network interface consider the port requirements of Amazon WorkSpaces See Network Interfaces in the Amazon Workspaces Administration Guide ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 49 Figure 15 — WorkSpaces client with MFA enabled Encrypted WorkSpaces Each Amazon WorkSpace is provisioned with a root volume ( C: drive fo r Windows WorkSpaces root for Amazon Linux WorkSpaces) and a user volume ( D: drive for Windows WorkSpaces /home for Amazon Linux WorkSpaces) The encrypted WorkSpaces feature enables one or both volumes to be encrypted What is Encrypted? The data stored at rest disk input/output ( I/O) to the volume and snapshots created from encrypted volumes are all encrypted When Does Encryption Occur? Encryption for a WorkSpace should be specified when launching (creating) the WorkSpace WorkSpaces v olumes can be encrypted only at launch time: after launch the volume encryption status cannot be changed Figure 16 shows the Amazon WorkSpaces console page for choosing encryption during the launch of a new WorkSpace ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 50 Figure 16 — Encrypting WorkSpace root volumes How is a New WorkSpace Encrypted? A customer can choose the Encrypted WorkSpaces option from either the Amazon WorkSpaces console or AWS CLI or by using the Amazon WorkSpaces API when a custome r launch es a new WorkSpace To encrypt the volumes Amazon WorkSpaces uses a CMK from AWS Key Management Service ( AWS KMS) A default AWS KMS CMK is created the first time a WorkSpace is launched in a Region (CMKs have a Region scope ) A customer can also create a customer managed CMK to use with encrypted WorkSpaces The CMK is used to encrypt the data keys that are used by Amazon WorkSpaces service to encrypt each of the WorkSpace volumes (In a strict sense it is Amazon EBS that will encrypt the volumes) For current CMK limits see AWS KMS Resource quotas Note: Creating custom images from an encrypted WorkSpac e is not supported Also WorkSpaces launched with root volume encryption enabled can take up to an hour to be provisioned For a detailed description of the WorkSpaces encryption process see How Amazon WorkSpaces uses AWS KMS Consider how the use of CMK will be monitored to ensure that a request for an encrypted WorkSpace is serviced correctly For additional ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 51 information about AWS KMS customer master keys and d ata keys see the AWS KMS page Access Control Options and Trusted Devices Amazon WorkSpaces provides customers with options to manage which client devices can access WorkSpaces Customers can limit WorkSpaces access to trusted devices only Access to WorkSpaces can be allowed from macOS and Microsoft Windows PCs using digital certificates Amazon Workspaces can allow or block access for iOS Android Chrome OS Linux and zero clients as well as the WorkSpaces Web Access client With these capabilities it can further 
improve the security posture Access control options are enabled for new deployments for users to access their WorkSpaces from clients on Windows MacOS iOS Android ChromeOS and Zero Clients Access using Web Access or a Linux WorkSpaces client is not enabled by default for a new Workspaces deployment and will need to be enabled If there are limits on corporate data access from trusted devices (also known as managed devices) WorkSpaces access can be restricted to trusted devices with valid certificates When this feature is enabled Amazon WorkSpaces uses certificate based authentication to determine whether a device is trusted If the WorkSpaces client application can't ver ify that a device is trusted it blocks attempts to log in or reconnect from the device For more information about controlling which devices can access WorkSpaces see Restrict WorkSpaces Access to Trusted Devices Note : Certificates for trusted devices appl y only to Amazon WorkSpaces Windows and macOS clients This feature does not apply to the Amazon WorkSpaces Web Access client or any third party clients including but not limited to Teradici PCoIP software and mobile clients Teradici PCoIP zero clients RDP clients and remote desktop applications IP Acc ess Control Groups Using IP address based control groups customers can define and manage groups of trusted IP addresses and allow users to access their WorkSpaces only when they're connected to a trusted network This feature helps customers gain greater control over their security posture IP access control groups can b e added at the WorkSpaces directory level There are two ways to get started using IP access control groups ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 52 • IP Access Controls page — From the WorkSpaces management console IP access control groups can be created on the IP Access Controls page Rules can be added to these groups by entering the IP addresses or IP ranges from which WorkSpaces can be accessed These groups can then be added to directories on the Update Details page • Workspace APIs — WorkSpaces APIs can be used to create delete and view groups; create or delete access rules; or to add and remove groups from directories For a detailed description of the using IP access control groups with the A mazon WorkSpaces encryption process see IP Access Control Groups for Your WorkSpaces Monitoring or Logging Using Amazon CloudWatch Monitoring network servers and logs is an integral part of any infrastructure Customers wh o deploy Amazon WorkSpaces need to monitor their deployments specifically the overall health and connection status of individual WorkSpaces Amazon CloudWatch Metrics for WorkSpaces CloudWatch metrics for WorkSpaces is designed to provide administrators w ith additional insight into the overall health and connection status of individual WorkSpaces Metrics are available per WorkSpace or aggregated for all WorkSpaces in an organization within a given directory These metrics like all CloudWatch metrics ca n be viewed in the AWS Management Console (see Figure 17 ) accessed via the CloudWatch APIs and monitored by CloudWatch alarms and third party tools By default the following metrics are enabled and are available at no extra cost: • Available — WorkSpaces that respond to a status check are counted in this metric Figure 17 — CloudWatch metrics : ConnectionAttempt / ConnectionFailure ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 53 • Unhealthy — WorkSpaces that don’t respond to the same 
status check are counted in this metric • ConnectionAttempt — The number of connection attempts made to a WorkSpace • ConnectionSuccess — The number of successful connection attempts • ConnectionFailure — The number of unsuccessful connection attempts • Sessi onLaunchTime — The amount of time taken to initiate a session as measured by the WorkSpaces client • InSessionLatency — The round trip time between the WorkSpaces client and WorkSpaces as measured and reported by the client • SessionDisconnect — The number of user initiated and automatically closed sessions Additionally alarms can be created as shown in Figure 18 Figure 18 — Create CloudWatch alarm for WorkSpaces connection errors Amazon CloudWatch Events for WorkSpaces Events from Amazon CloudWatch Events can be used to view search download archive analyze and respond to successful logins to WorkSpaces The service can monitor client WAN IP addresses Operating System WorkSpaces ID and Directory ID ArchivedAmazon Web Services Best Prac tices for Deploying Amazon WorkSpaces 54 information for users’ logins to WorkSpaces For example it can use events for the following purposes: • Store or archive WorkSpaces login events as logs for future reference analyze the logs to look for patterns and take action based on those patterns • Use the WAN IP address to determine where users are logged in from and then use policies to allow users access only to files or data from WorkSpaces that meet the access criteria found in the Clo udWatch Event type of WorkSpaces Access • Use policy controls to block access to files and applications from unauthorized IP addresses For more information on how to use CloudWatch Events see the Amazon CloudWatch Events User Guide To learn more about CloudWatch Events for WorkSpaces see Monitor your WorkSpac es using Cloudwatch Events Cost Optimization SelfService WorkSpace Management Capabilities In Amazon WorkSpaces self service WorkSpace management capabilities can be enabled for users to provide them with more control over their experience Allowing users self service capability can reduce your IT support staff workload for Amazon WorkSpaces When s elfservice capabilities are enabled they allow users to perform one or more of the following tasks directly from their Windows macOS or Linux client for A mazon WorkSpaces: • Cache their credentials on their client This lets users reconnect to their WorkSpace without re entering their credentials • Restart their WorkSpace • Increase the size of the root and user volumes on their WorkSpace • Change the compute ty pe (bundle) for their WorkSpace • Switch the running mode of their WorkSpace • Rebuild their WorkSpace There are no ongoing cost implications for allowing users the Restart and Rebuild options for their WorkSpaces Users should be aware that a Rebuild of their WorkSpace will cause their WorkSpace to be unavailable for up to an hour as the rebuild process takes place ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 55 Options to increase the size of the volumes change the compute type and switch the running mode can incur additional costs for WorkSpaces A best practice is to enable selfservice to reduce the workload for the support team Self service for additional cost items should be allowed within a workflow process that ensures that authorization for additional charges has been obtained This c an be through a dedicated selfservice portal for WorkSpaces or by integration with existing Information Technology Service Manage (ITSM) services 
such as ServiceNow For more detailed information see Enabling Self Service WorkSpace Management Capabilities for Your Users For an example describing enabling a structured portal for user self service see Automate Amazon WorkSpaces with a Self Service Portal Amazon WorkSpaces Cost Optimizer The running mode of a WorkSpace determines its immediate availability and how it will be billed Here are the current running WorkSpaces running mode : • AlwaysOn — Use when paying a fixed monthly fee for unlimited usage of WorkSpaces This mode is best for users who use their WorkSpace full time as their primary desktop • AutoStop — Use when paying for WorkSpaces by the hour With this mode WorkSpaces stop after a specified period of inactivity and the state of apps and data is saved To set the automatic stop time use AutoStop Time (hours) A best practice is to monitor usage and set the WorkSpaces’ running mode to be the most cost effective This can be done with the Amazon WorkSpaces Cost Optimizer This solution deploys an Amazon Cloudwatch event that invokes an AWS Lambda function every 24 hours This solution can convert individual WorkSpaces from an hourly billing model to a monthly billing model on a ny day after the threshold is met If the method converts a WorkSpace from hourly billing to monthly billing it does not convert the WorkSpace back to hourly billing until the beginning of the next month and only if usage was below the threshold However the billing model can be manually change d at any time using the AWS Management Console The method ’s AWS CloudFormation template includes parameters that will execute these conversions Opting Out with Tags To prevent the method from converting a WorkSpa ce between billing models apply a resource tag to the WorkSpace using the tag key Skip_Convert and any tag value This method will log tagged WorkSpaces but it will not convert the tagged WorkSpaces Remove the tag at any time to resume automatic conversion for that WorkSpace For detail s see Amazon WorkSpaces Cost Optimizer ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 56 Troubleshooting Common administration and client issues such as error message s like "Your device is not able to connect to the WorkSpaces Registration service" or “Can't connect to a WorkSpace with an interactive logon banne r” can be found on the Client and Admi n Troubleshooting pages in the Amazon WorkSpaces Administration Guide AD Connector Cannot Connect to Activ e Directory For AD Connector to connect to the onpremises directory the firewall for the on premises network must have certain ports open to the CIDRs for both subnets in the VPC See Scenario 1: Using AD Connector to Proxy Authentication to On Premises Active Directory Service To test if these conditions are met perform the following steps To test the connection : 3 Launch a Windows instance in the VPC and connect to it over RDP The remaining steps are performed on the VPC instance 4 Download and unzip the DirectoryServicePortTest test application The source code and Microsoft Visual Studio project file s are included to modify the test application if desired 5 From a Windows command prompt run the DirectoryServicePortTest test application with the following options: DirectoryServicePortTestexe d <domain_name> ip <server_IP_address> tcp "5388135 13938944546463649152" udp "5388123137138389445464" <domain_name> <domain_name > — The fully qualified domain name used to test the forest and domain functional levels If the domain name is excluded the 
functional levels won't be tested <server_IP_address > — The IP address of a domain controller in the onpremises domain The ports are tested against this IP address If the IP address is excluded the ports won't be tested This test determine s if the necessary ports are open from the VPC to the domain The test app also verifies the minimum forest and domain functional levels ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 57 Troubleshooting A WorkSpace Custom Image Creation Error If a Windows or Amazon Linux WorkSpace has been launched and customized a custom image can be creat ed from that WorkSpace A custom image contains the operating system application software and settings for the WorkSpace Review the requirements to create a Windows custom image or the requirements to create an Amazon Linux custom image Image creation requires that all prerequisites are met before image c reation can start To confirm that the Windows WorkSpace meets the requirements for image creation we recommend running the Image Checker The Image Checker performs a series of tests on the WorkSpace when an image is created and provides guidance on ho w to resolve any issues it finds For detailed information see installing and configuring the image checker After the WorkSpace pass es all tests a “Validation Successful ” message appears You can now create a custom bundle Otherwise resolve any issues that cause test failures and warnings and repeat the process of running the Image Checker until the WorkSpace passes all tests All failures and warnings must be resolved before an image can be created For more information f ollow the tips for resolving issues det ected by the Image Checker Troubleshooting a Windows WorkSpace Marked as Unhealthy The Amazon WorkSpaces service periodically checks the health of a WorkSpace by sending it a status request The WorkSpace is marked as Unhealthy if a response isn’t received from the WorkSpace in a timely manner Common causes for this problem are: • An application on the WorkSpace is blocking network connection between the Amazon WorkSpaces service and the WorkSpace • High CPU utilization on the WorkSpace • The computer name of the Work Space is changed • The agent or service that responds to the Amazon WorkSpaces service isn't in running state The following troubleshooting steps can return the WorkSpace to a healthy state: ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 58 • First reboot the WorkSpace from the Amazon WorkSpaces console If rebooting the WorkSpace doesn't resolve the issue either use RDP or connect to an Amazon Linux WorkSpace using SSH • If the WorkSpace is unreachable by a different pr otocol rebuild the WorkSpace from the Amazon WorkSpaces console • If a WorkSpaces connection cannot be established verify the following: Verify CPU Utilization Use Open Task Manager to determine if the WorkSpace is experiencing high CPU utilization If it is try any of the following troubleshooting steps to resolve the issue: 1 Stop any service that is consuming a high amount of CPU 2 Resize the WorkSpace to a compute type greater than what is currently used 3 Reboot the WorkSpace Note : To diagnose high CPU utilization and for guidance if the above steps don't resolve the high CPU utilization issue see How do I diagnose high CPU utilization on my EC2 Windows instance when my CPU is not throttled? 
Verify the Computer Name of the WorkSpace If the computer name of the Workspace was changed chan ge it back to the original name : 1 Open the Amazon WorkSpaces console and then expand the Unhealthy WorkSpace to show details 2 Copy the Computer Name 3 Connect to the WorkSpace using RDP 4 Open a command prompt and then enter hostname to view the current computer name • If the name matches the Computer Name from step 2 skip to the next troubleshooting section • If the names don’t match enter sysdmcpl to open system properties and then follow the remaining steps in this section 5 Choose Change and then paste the Computer Name from step 2 6 Enter the domain user credentials if prompted Confirm that SkyLightWorkspaceConfigService is in Running State ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 59 • From Services verify that the WorkSpace service SkyLightWorkspaceConfigService is in runni ng state If it’s not start the service Verify Firewall Rules Confirm that the Windows Firewall and any third party firewall that is running have rules to allow the following ports: • Inbound TCP on port 4172: Establish the streaming connection • Inbound UD P on port 4172: Stream user input • Inbound TCP on port 8200: Manage and configure the WorkSpace • Outbound UDP on port 55002: PCoIP streaming If the firewall uses stateless filtering then open ephemeral ports 49152 65535 to allow return communication If the firewall uses stateful filtering then ephemeral port 55002 is already open Collecting a WorkSpaces Support Log Bundle for Debugging When troubleshooting WorkSpaces issues it is necessary to gather the log bundle from the affected WorkSpace and the h ost where the WorkSpaces client is installed There are two fundamental categories of logs: • Server side logs : The WorkSpace is the server in this scenario so these are logs that live on the WorkSpace itself • Client side logs : Logs on the device that the end user is using to connect to the WorkSpace o Note that only Windows and macOS clients write logs locally o Zero clients and iOS clients do not log o Android logs are encrypted on the local storage and uploaded automatically to the WorkSpaces client engineering team Only that team can review the logs for Android devices PCoIP Se rverSide Logs All of the PCoIP components write their log files into one of two folders: • Primary location : C:\ProgramData \Teradici\PCoIPAgent \logs ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 60 • Archive location : C:\ProgramData \Teradici\logs Sometimes when working with AWS Support on a complex issue it is necessary to put the PCoIP Server agent into verbose logging mode To enable this : 1 Open the following registry key: HKEY_LOCAL_MACHINE \SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin _defaults 2 In the pcoip_admin_defaults key create the following 32bit DWORD: pcoipevent_filter_mode 3 Set the value for pcoipevent_filter_mode to “3” (Dec or Hex) For reference these are the log thresholds which can be defined in this DWORD • 0 — (CRITICAL) • 1 — (ERROR) • 2 — (INFO) • 3 — (Debug) If the pcoip_admin_default DWORD doesn’t exist the log level is 2 by default It is recommended to restore a value of 2 to the DWORD after it no longer need verbose logs as they are much larger and will consume disk space unnecessarily WebAccess Server Side Logs The WorkSpaces web access client uses the STXHD service The logs for WebAccess are stored at C:\ProgramData \Amazon\Stxhd\Logs Client Side Logs These logs come from the WorkSpaces 
client that the user connects with so the logs are on the end user’ s computer The log file locations for Windows and Mac are : • Windows : "%LOCALAPPDATA% \Amazon Web Services \Amazon WorkSpaces \Logs" • macOS : ~/Library/Logs/Amazon Web Services/ • Linux : ~/local/share/Amazon Web Services/Amazon WorkSpaces/logs To help troubleshoot issues that users might experience enable advanced logging that can be used on any Amazon WorkSpaces client Advanced logging is enabled for every subsequent client session until it is disabled ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 61 1 Before connecting to the WorkSpace the end user should enable advanced logging for their WorkSpaces client 2 The end user should then connect as usual use their WorkSpace and attempt to reproduce the issue 3 Advanced logging generates log files that contain diagnostic information and debugging level details including verbose performance data This setting persists until explicitly turned off After the user has successfully reproduce d the issue with verbose logging on this setting should be disabled as it generates large log files Automated Server Side Log Bundle Collection for Windows The GetWorkSpaceLogsps1 script is helpful for quickly gathering a server side log bundle for AWS Premium Support The script can be requested from AWS Premium Support by requesting it in a support case : 1 Connect to the WorkSpace using the client or using Remote Desktop Protocol (RDP) 2 Start an administrative Command Prompt ( run as administrator) 3 Launch the script from the Command Prompt with the following command: powershellexe NoLogo ExecutionPolicy RemoteSigned NoProfile File "C: \Program Files \Amazon\WorkSpacesConfig \Scripts\Get WorkSpaceLogsps1" 4 The script create s a log bundle on the user's desktop The script creates a zip file with the following folders: • C — Contains the files from Program Files Program Files (x86) ProgramData and Windows related to Skylight EC2Config Teradici Event viewer and Windows logs (Panther and others) • CliXML — Contains XML files that can be imported in Powershell by using ImportCliXML for interactive filtering See Import Clixml • Config — Detailed logs for each chec k that is performed • ScriptLogs — Logs about the script execution (not relevant to the investigation but useful to debug what the script does) • tmp —Temporary folder (it should be empty) ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkS paces 62 • Traces — Packet capture done during the log collection How to Check Latency to the Closest AWS Region The Connection Health Check website quickly checks whether all the required services that use Amazon WorkSpaces can be reached It also does a performance check to each AWS Region where Amazon WorkSpaces is available and lets users know which one will be the fastest Conclusion There is a strategic shift in end user computing as organizations strive to be more agile better protect their data and help their workers be more productive Many of the benefits already realized with cloud computing also apply to end user computing By moving their Windows or Linux desktops to the A WS Cloud with Amazon WorkSpaces organizations can quickly scale as they add workers improve their security posture by keeping data off devices and offer their workers a portable desktop with access from anywhere using the device of their choice Amazo n WorkSpaces is designed to be integrated into existing IT systems and processes and this whitepaper describe d the best 
practices for doing this The result of following the guidelines in this whitepaper is a cost effective cloud desktop deployment that c an securely scale with your business on the AWS global infrastructure Contributors Contributors to this document include: • Naviero Magee Sr EUC Solutions Architect Amazon Web Services • Andrew Wood Sr EUC Solutions Architect Amazon Web Services • Dzung N guyen Sr Consultant Amazon Web Services • Stephen Stetler Sr EUC Solutions Architect Amazon Web Services Further Reading For additional information see: • Amazon WorkSpaces Administration Guide • Amazon WorkSpaces Developer Guide • Amazon WorkSpaces Clients ArchivedAmazon Web Services Best Practices for Deploying Amazon WorkSpaces 63 • Managing Amazon Linux 2 Amazon WorkSpaces with AWS OpsWorks for Puppet Enterprise • Customizing the Amazon Linux WorkSpace • How to improve LDAP security in AWS Directory Service with client side LDAPS • Use Amazon CloudWatch Events with Amazon WorkSpaces and AWS Lambda for greater fleet visibility • How Amazon WorkSpaces Use AWS KMS • AWS CLI Command Reference – WorkSpaces • Monitoring Amazon WorkSpaces Metrics • MATE Desktop Environment • Troubleshooting AWS Directory Service Administration Issues • Troublesh ooting Amazon WorkSpaces Administration Issues • Troubleshooting Amazon WorkSpaces Client Issues • Automate Amazon WorkSpaces with a Self Service Portal Document Revisions Date Description December 2020 Updated content May 2020 Updated content and added new diagrams July 2016 First publication
|
General
|
consultant
|
Best Practices
|
Best_Practices_for_Deploying_Microsoft_SQL_Server_on_AWS
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlBest Practices for Deploying Microsoft SQL Server on Amazon EC2 First Published September 2018 Updated July 28 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlContents Introduction 1 High availability and disaster recovery 2 Availability Zones and multi AZ deployment 3 Using AWS Launch Wizard to deploy Microsoft SQL Server on Amazon EC2 instances 5 Multi Region deployments 6 Disaster recovery 8 Performance opti mization 10 Using Amazon Elastic Block Store (Amazon EBS) 10 Instance storage 11 Amazon FSx for Windows File Server 13 Bandwidth and latency 13 Read replicas 14 Security optimization 15 Amazon VPC 15 Encryption at rest 15 Encryption in transit 16 Encryption in use 16 AWS Key Management Service (AWS KMS) 16 Security patches 16 Cost optimization 17 Using SQL Server Developer Edition for non production 17 Amazon EC2 CPU optimization 18 Switch to SQL Server Standard Edition 18 Z1d and R5b EC2 instance types 19 Eliminating active replica licenses 20 SQL Server on Linux 22 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlOperational excellence 23 Observability and root cause analysis 23 Reducing mean time to resolution (MTTR) 24 Patch management 24 Contributors 25 Document revisions 25 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAbstract This whitepaper focuses on best practices to attain the most value for the least cost when running Microsoft SQL Server on AWS Although for many general purpose use cases Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server (MS SQL) provides an easy and quick solution this paper focus es on scenarios where you need to push the limits to satisfy your special requirements In particular this paper ex plains how you can minimize your costs maximize availability of your SQL Server databases optimize your infrastructure for maximum performance and tighten it 
for security compliance while enabling operational excellence for ongoing maintenance The fle xibility of AWS services combined with the power of Microsoft SQL Server can provide expanded capabilities for those who seek innovative approaches to optimize their applications and transform their businesses The main focus of this paper is on the capa bilities available in Microsoft SQL Server 2019 which is the most current version at the time of this publication Existing databases that run on previous versions (2008 2012 2014 2016 and 2017) can be migrated to SQL Server 2019 and run in compatibil ity mode Mainstream and extended support for SQL Server 2000 2005 and 2008 has been discontinued by Microsoft Any database running on those versions of SQL Server must be upgraded to a supported version first Although it is possible to run those versions of SQL Server on AWS that discussion is outside the scope of this whitepaper This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 1 Introduction AWS offers the best cloud for SQL Server and it is the proven rel iable and secure cloud platform for running Windows based applications today and in the future SQL Server on Windows or Linux on Amazon EC2 enables you to increase or decrease capacity within minutes not hours or days You can commission one hundreds or even thousands of server instances simultaneously Deploying self managed fully functioning and production ready Microsoft SQL Server instances on Amazon EC2 is now possible within a few minutes for anyone even those without deep skills on SQL Server and cloud features or configuration nuances thanks to AWS Launch Wizard for SQL Server Using AWS Launch Wizard you can quickly deploy SQL Server on EC2 Windows or Linux instances with all the best practices already implemented and included in your deployment Independent benchmarks have proven that SQL Server runs 2X faster with 64% lower costs on AWS when c ompared with the next biggest cloud provider AWS continues to be the most preferred option for deploying and running Microsoft SQL Server This is due to the unique combination of breadth and depth of services and capabilities offered by AWS providing th e optimum platform for MS SQL Server workloads Requirements for running SQL Server often fall under following categories: • High availability and disaster recovery • Performance • Security • Cost • Monitoring and maintenance These requirements map directly to the f ive pillars of the AWS Well Architected Framework namely: • Reliability • Performance efficiency • Security • Cost optimization This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 2 • Operational excellence This paper discuss es each of these requirements in further detail along with best practices using AWS services to address them High availability and disaster recovery Every business seeks data solutions that can address their operational requirements These require ments often translate to specific values of the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) The RTO indicates how 
long the business can endure database and application outages and the RPO determines how much data loss is tolerable For example an RTO of one hour tells you that in the event of an application outage the recovery plans should aim to bring the application back online within one hour An RPO of zero indicates that should there be any minor or major issues impacting the application there should be no data loss after the a pplication is brought back online The combination of RTO and RPO requirements dictate s what solution should be adopted Typically applications with RPO and RTO values close to zero should use a high availability (HA) solution whereas disaster recovery ( DR) solutions can be used for those with higher values In many cases HA and DR solutions can be mixed to address more complex requirements Microsoft SQL Server offers several HA/DR solutions each suitable for specific requirements The f ollowing table compar es these solutions: Table 1: HA/DR options in M icrosoft SQL Server Solution HA DR Enterprise edition Standard edition Always On availability groups Yes Yes Yes Yes (2 replicas )* Always On failover cluster instances Yes Yes** Yes Yes (2 nodes) Distributed availability groups Yes Yes Yes No Log shipping No Yes Yes Yes Mirroring (deprecated) Yes Yes Yes Yes (Full safety only) This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 3 *Always On basic availability groups in SQL Server 2019 Standard edition support a s ingle passive replicas (in addition to the primary replica) for a single database per availability group If you need multiple databases in HA mode a separate availability group needs to be defined for each database **MSSQL Failover Cluster Instance is o ften used as a pure HA solution However as discussed later in this document in AWS the FCI can also serve as a complete HA/DR solution These solutions rely on one or more secondary servers with SQL Server running as active or passive standby Based on the specific HA/DR requirements these servers can be located in close proximity to each other or far apart In AWS you can choose between low latency or an extremely low probability of failure You can also combine these options to create the solution that is most suitable to your use case This paper look s at these options and how they can be used with SQL Server workloads Availability Zones and multiAZ deployment AWS Availability Zones are designed to provide separate failure domains while keeping workloads in relatively close proximity for low latency communica tions Availability Zones are a good solution for synchronous replication of your databases using Mirroring Always On Availability Groups Basic Availability Groups or Failover Cluster Instances SQL Server provides zero data loss and when combined with the low latency infrastructure of Availability Zones provides high performance This is one of the main differences between most on premises deployments and AWS For example Always On Failover Cluster Instance (FCI) is often used inside a single data center because all nodes in an FCI cluster must have access to the same shared storage Locating these nodes in different data centers could degrade performance However with AWS FCI nodes can be located in separate Avail ability Zone s and still provide high performance because 
of the low latency network link between all Availability Zone s within a Region This feature enables a higher level of availability and could eliminate the need for a third node which is often coupl ed with an FCI cluster for disaster recovery purposes SQL Server FCI relies on shared storage being accessible from all nodes participating in FCI Amazon FSx for Windows File Server is a fully managed service providing shared storage that automatically replicates the data synchronously across two This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 4 Availability Zones provides high availability with automatic failure detection failover and failback and fully supports the Server Message Block (SMB ) Continuous Availability (CA) feature This enables you to simplify your SQL Server Always On deployments and use Amazon FSx as storage tier for MS SQL FCI Scenarios where Amazon FSx is ap plicable for performance tuning and cost optimization are discussed in subsequent sections of this document Figure 1: Using Amazon FSx as file share for Failover Cluster Instance or as file share witness in Windows Server Failover Cluster This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 5 Using AWS La unch Wizard to deploy Microsoft SQL Server on Amazon EC2 instances AWS Launch Wizard is a service that offers a guided way of sizing configuring and deploying AWS resources for third party applications such as Microsoft SQL Server without the need to manually identi fy and provision individual AWS resources Today you can use AWS Launch Wizard to deploy Microsoft SQL Server with following configurations: • SQL Server single instance on Windo ws • SQL Server single instance on Linux • SQL Server HA using Always On Availability Groups on Windows • SQL Server HA using Always On Availability Groups on Linux • SQL Server HA using Always On Failover Cluster Instance on Windows To start you input your MS SQL workload requirements including performance number of nodes licensing model MS SQL edition and connectivity on the service console Launch Wizard then identifies the correct AWS resources such as EC2 instances and EBS volumes to deploy and run you r MS SQL instance Launch Wizard provides an estimated cost of deployment and enables you to modify your resources to instantly view an updated cost assessment After you approve the AWS resources Launch Wizard automatically provisions and configures the selected resources to create a fully functioning production ready application AWS Launch wizard handles all the heavy lifting including installation and configurat ion of Always On Availability Groups or Failover Cluster Instance This is especially useful with the Linux support as most MS SQL administrators find Linux configuration non trivial when done manually AWS Launch Wizard also creates CloudFormation templa tes that can serve as a baseline to accelerate subsequent deployments For post deployment management AWS Systems Manager (SSM) Application Manager autom atically imports application resources created by AWS Launch Wizard From the 
Application Manager console you can view operations details and perform operations tasks As discussed later in this document y ou can also use SSM Automation documents to mana ge or remediate issues with application components or resources This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 6 Figure 2: AWS Launch Wizard deploys MS SQL FCI using Amazon FSx for Windows File Server Multi Region deployments For those workloads that require even more resilience against unplanned even ts you can leverage the global scale of AWS to ensure availability under almost any circumstances By default Amazon Virtual Private Cloud (Amazon VPC) is confined within a single Region Therefore for a multi region deployment you need to establish connectivity between your SQL Server instances that are deployed in different Regions In AWS there are a number of ways to do this each suitable for a range of requirements: • VPC peering — Provides an encrypted network connectivity between two VPCs The traffic flows through the AWS networking backbone eliminating latency and other hazards of the internet • AWS Transit Gateway — If you need to connect two or more VPCs or on premises sites you can use AWS Transit Gateway to s implify management and configuration overhead of establishing network connection between them • VPN connections — AWS VPN solutions are especially useful when you need to operate in a hybrid environment and connect your AWS VPCs to your on premises sites and clients This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 7 • VPC sharing — If your applications or other clients are spread across multip le AWS accounts an easy way to make your SQL Server instance available to all of them is using virtual private cloud ( VPC) Sharing A shared VPC can also be connected to other VPCs using AWS Transit Gateway AWS VPN CloudHub VPN connections or VPC peering These connections are useful when workloads are spread across multiple accounts an d Regions If you have applications or users that are deployed in remote Regions which need to connect to your SQL Server instances you can use the AWS Direct Connect feature that provides connectivity from any Direct Connect connection to all AWS Regions Although it is possible to have synchronous replication in a multi region SQL Server deployment the farther apart your selected Regions are the more severe the performance penalty is for a synchronous replication Often the best practice for multi region deployments is to establish an asynchronous replication especially for Regions that are geographically distant For those workloads that come with aggressive RPO requirements asynchronous multi Region deployment can be combined with a Multi AZ or Single AZ synchronous replication You can also combine all three methods into a single solution However these combinations would impose a significant increase in your SQL Server license costs which must be considered as part of your planning In cases involving several replicas across two or more Regions distributed availability groups might be the 
most suitable option This feature enables you to combine availability groups deployed in each Region into a larger distributed availability group Distributed availability g roups can also be used to increase the number of read replicas A traditional availability group allows up to eight read replicas This means you can have a total of nine replicas including the primary Using a distributed availability group a second ava ilability group can be added to the first increasing the total number of replicas to 18 This process can be repeated with a third availability group and a second distributed availability group The second distributed availability group can be configured to include either the first or second availability groups as its primary Distributed availability group is the means through which SQL Server Always On can achieve virtually unlimited scale Another use case of a distributed availability group is for zero downtime database migrations when during migration a read only replica is available at target destination The independence of SQL Server Distributed Availability Group from Active Directory and Windows Server Failover Cluster (WSFC) is the main benefactor for these cases It This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 8 enables you to keep both sides of the migration synchronized without having to wor ry about the complexities of Active Directory or WSFC See How to architect a hybrid Microsoft SQL Serve r solution using distributed availability groups for more details Figure 2: SQL Server distributed availability group in AWS Disaster recovery Similar to HA solutions DR solutions require a replica of SQL Server databases in another server However for DR the other server is often in a remote site far away from the primary site This means higher latency and therefore low er performance if you rely on HA solutions that use synchronous replication DR solutions often rely on asynchronous replication of data Similar to HA DR solutions are based on either block level or database level replication For example SQL Server This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 9 Log Shipping replicates data at the database level while Windows Storage Replica can be used to implement block level replication DR solutions are selected based on their requirements such as cost RPO RTO complexity and the effort to implement each solution In addition t o common SQL Server DR solutions such as Log Shipping and Windows Storage Replica AWS also provides CloudEndure Disaster Recovery You can use CloudEndure Disaster Recovery to reduce d owntime to a few minutes protect against data loss for sub second RPO simplify implementation increase reliability and decrease the total cost of ownership CloudEndure is an agent based solution that replicates entire virtual machines including the operating system all installed applications and all databases into a staging area The staging area contains low cost resources automatically provisioned and managed by CloudEndure Disaster Recovery This greatly reduces the cost 
of provisioning duplica te resources Because the staging area does not run a live version of your workloads you don’t need to pay for duplicate software licenses or highperformance compute Rather you pay for lowcost compute and storage The fully provisioned recovery envir onment with the rightsized compute and higher performance storage required for recovered workloads is launched only during a disaster or drill AWS also makes CloudEndure available at no additional cost for migration projects Figure 3: CloudEndure disaster recovery This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 10 Performance optimization In some cases maximizing performance m ight be your utmost priority Both SQL Server and AWS have several options to substantially i ncrease performance of your workloads Using Amazon Elastic Block Store (Amazon EBS) Amazon EBS is a Single AZ block storage service with a number of flexible options to cater to diverse requirements When it comes to maximizing performance with consistent and predictable results on a single volume using a Provisioned IOPS Solid State Drive (SSD) volume type ( io2 and io2 Block Express ) is the easiest choice You can provision up to 64000 input/output operations per second ( IOPS ) per io2 EBS volume (based on 16 KiB I/O size) along with 1000 MiB/s throughput For more demanding workloads the io2 Block Express EBS volumes guarantee 256000 IOPS and 4000 MiB/s through put per volume If you need more IOPS and throughput than provided by a single EBS volume you can create multiple volumes and stripe them in your Windows or Linux instance (Microsoft SQL Server 2017 and later can be installed on both Windows and Linux systems ) Striping enables you to further increase the available IOPS per instance up to 260000 and throughput per instance up to 7500 MB/s Remember to use EBSoptim ized EC2 instance types This means a dedicated network connection is allocated to serve requests between your EC2 instance and the EBS volumes attached to it While you can use a single Provisioned IOPS (io1 io2 or io2 Block Express ) volume to meet your IOPS and throughput requirements General Purpose SSD (gp2 and gp3 ) volumes offer a better balance of price and performance for SQL Server workloads when configured appropriately General Purpose SSD (gp2) volumes deliver single digit millisecond latencies and the ability to burst to 16000 IOPS for extended periods This ability is well suited to SQL Server The IOPS load generated by a relational database like SQL Server tends to spike frequently For exam ple table scan operations require a burst of throughput while other transactional operations require consistent low latency One of the major benefits of using EBS volumes is the ability to create point intime and instantaneous EBS snapshots This feat ure copies the EBS snapshot to Amazon Simple Storage Service (Amazon S3) infrastructure which provides 99999999999% durability Despite EBS volumes being confined to a single AZ EBS snapshots can be This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 11 
restored to any AZ within the same Region Note that block level snapshots are not the same as database backups and not all features of database backups are attainable this way Therefore this method is often combined and complemented with a regular database backup plan Although each EBS volume can be as large as 64 TB and therefore could take a long time to transfer all its d ata to Amazon S3 EBS snapshots are always point intime This means SQL Server and other applications can continue reading and writing to and from the EBS volume while d ata is being transferred in the background When you restore a volume from a snapshot the volume is immediately available to applications for read and write operations However it takes some time until it gets to its full performance capacity Using Amazon EBS fast snapshot restore you can eliminate the latency of input/output ( I/O) operations on a block when it is accessed for the first time Volumes created using fast snapshot restore instantly deliver all of their provisioned performance You can use AWS Systems Manager Run Command to take application consistent EBS snapshots of your online SQL Server files at any time with no need to bring your database offline or in read only mode The snapshot process uses Windows Volume Shadow Copy Service (VSS) to take image level backups of VSS aware applications Microsoft SQL Server is VSS aware and is perfectly compatible with t his technique It is also possible to take VSS snapshots of Linux instances however that process requires some manual steps because Linux does not natively support VSS You can also take crash consistent EBS snapshots across multiple EBS volumes attached to a Windows or Linux EC2 instance without using orchestrator applications Using this method you only lose uncommitted transactions and writes that are not flushed to the disk SQL Server is capable of restoring databases to a consistent point before the crash time This feature is also supported through AWS Backup EBS volumes are simple and convenient to use and in most cases effective too However there m ight be circumstances where you need even higher IOPS and throughput than what is achievable using Amazon EBS Instance storage Storage optimized EC2 instance types use fixed size local disks and a variety of different storage technologies are available Among these Non Volatile Memory express (NVMe) is the fastest technology with the highest IOPS and throughput The i3 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ bestpracticesfordeployingmicrosoftsqlserver/ bestpracticesfordeployingmicrosoftsql serverhtmlAmazon Web Services Best Practices for Deploying Microsoft SQL Server on Amazon EC2 12 class of instance types provides NVMe SSD drives For example i316xlarg e comes with eight disks each with 19 TB of SSD storage When selecting storage optimized EC2 instance types for maximum performance it is essential to understand that some of the smaller instance types provide instance storage that is shared with oth er instances These are virtual disks that reside on a physical disk attached to the physical host By selecting a bigger instance type such as i32xlarge you ensure that there is a 1:1 correspondence between your instance store disk and the underlying p hysical disk This ensures consistent disk performance and eliminates the noisy neighbor problem Instance disks are ephemeral and live only as long as their associated EC2 instance If the EC2 instance fails or is stopped 
Instance storage

Storage optimized EC2 instance types use fixed-size local disks, and a variety of different storage technologies are available. Among these, Non-Volatile Memory Express (NVMe) is the fastest technology, with the highest IOPS and throughput. The i3 class of instance types provides NVMe SSD drives; for example, i3.16xlarge comes with eight disks, each with 1.9 TB of SSD storage.

When selecting storage optimized EC2 instance types for maximum performance, it is essential to understand that some of the smaller instance types provide instance storage that is shared with other instances. These are virtual disks that reside on a physical disk attached to the physical host. By selecting a bigger instance type, such as i3.2xlarge, you ensure that there is a 1:1 correspondence between your instance store disk and the underlying physical disk. This ensures consistent disk performance and eliminates the noisy neighbor problem.

Instance disks are ephemeral and live only as long as their associated EC2 instance. If the EC2 instance fails, or is stopped or terminated, all of its instance storage disks are wiped and the data stored on them is irrecoverable. Unlike EBS volumes, instance storage disks cannot be backed up using a snapshot. Therefore, if you choose to use EC2 instance storage for your permanent data, you need to provide a way to increase its durability.

One suitable use for instance storage is the tempdb system database files, because those files are re-created each time the SQL Server service is restarted; SQL Server drops all tempdb temporary tables and stored procedures during shutdown. As a best practice, the tempdb files should be stored on a fast volume, separate from user databases. For the best performance, ensure that the tempdb data files within the same filegroup are the same size and stored on striped volumes.

Another use for EC2 instance storage is the buffer pool extension, which is available on both the Enterprise and Standard editions of Microsoft SQL Server. This feature uses fast random-access disks (SSD) as a secondary cache between RAM and persistent disk storage, striking a balance between cost and performance when running workloads on SQL Server.

Although instance storage disks are the fastest available to EC2 instances, their performance is capped at the speed of the physical disk. You can go beyond the single-disk maximum by striping across several disks. You can also use instance storage disks as the cache layer in Storage Spaces (for single Windows instances) and Storage Spaces Direct (for Windows Server failover clusters) storage pools.

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server is another storage option for SQL Server on Amazon EC2. This option is suitable for three major use cases:

• As shared storage used by SQL Server nodes participating in a Failover Cluster Instance
• As a file share witness to be used with any SQL Server cluster on top of Windows Server Failover Clustering
• As an option to attain higher throughput levels than are available with dedicated EBS optimization

The first two cases were discussed in an earlier section of this document. To better understand the third case, note that EBS throughput depends on EC2 instance size: smaller EC2 instance sizes provide lower EBS throughput, so attaining higher EBS throughput requires bigger instance sizes, which cost more. If a workload leaves a large portion of its network bandwidth unused but requires higher throughput to access its underlying storage, using a shared file system over SMB may unlock the required performance while reducing cost through smaller EC2 instance sizes.

Amazon FSx provides fast performance, with baseline throughput up to 2 GB/second per file system, hundreds of thousands of IOPS, and consistent sub-millisecond latencies. To provide the right performance for your SQL instances, you can choose a throughput level that is independent of your file system size. Higher levels of throughput capacity also come with higher levels of IOPS that the file server can serve to the SQL Server instances accessing it. The storage capacity determines not only how much data you can store, but also how many I/O operations per second (IOPS) you can perform on the storage: each GB of storage provides three IOPS. You can provision each file system to be up to 64 TB in size.
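If you adopt the shared-SMB approach described above, the file system itself can be provisioned through the AWS SDK. The following sketch creates a Multi-AZ FSx for Windows File Server file system with a throughput level chosen independently of its storage size; the subnet, security group, and directory IDs are placeholders.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Provision a Multi-AZ file system; throughput capacity (MB/s) is selected
# independently of the 1,024 GiB storage capacity requested here.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageType="SSD",
    StorageCapacity=1024,
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],      # placeholder subnets
    SecurityGroupIds=["sg-0123456789abcdef0"],             # placeholder security group
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 512,
        "ActiveDirectoryId": "d-1234567890",               # placeholder AWS Managed Microsoft AD
    },
)

print(response["FileSystem"]["FileSystemId"])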
Bandwidth and latency

When tuning for performance, it is important to remember the difference between latency and bandwidth, and to find a balance between network latency and availability. To gain the highest bandwidth on AWS, you can leverage enhanced networking with the Elastic Network Adapter (ENA) or the newer Elastic Fabric Adapter (EFA), which, when combined with newer generations of EC2 instances such as C6gn, C5n, R5n, I3en, or G4dn, can provide up to 100 Gbps of bandwidth. This high bandwidth, however, has no effect on latency. Network latency changes in direct correlation with the distance between interconnecting nodes.

Clustering nodes is a way to increase availability, but placing cluster nodes too close to each other increases the probability of simultaneous failure, reducing availability. Placing them too far apart yields the highest availability, but at the expense of higher latency. AWS Availability Zones within each AWS Region are engineered to provide a balance that fits most practical cases: each Availability Zone is physically separated from the others while remaining in close geographic proximity to provide low network latency. Therefore, in the overwhelming majority of cases, the best practice is to spread cluster nodes across multiple Availability Zones.

Read replicas

You might determine that many of your database transactions are read-only queries, and that the sheer number of incoming connections is flooding your database. Read replicas are a well-known solution for this situation: you can offload your read-only transactions from your primary SQL Server instance to one or more read replica instances. Read replicas can also be used to perform backup operations, relieving the primary instance from performance hits during backup windows. When using availability group listeners, if you mark your connection strings as read-only, SQL Server routes incoming connections to any available read replica and only sends read/write transactions to the primary instance.

Always On Availability Groups, introduced in SQL Server 2012, support up to four secondary replicas. In more recent versions of SQL Server (2014, 2016, 2017, and 2019), Always On Availability Groups support one set of primary databases and one to eight sets of corresponding secondary databases.

There might be cases where users or applications connect to your databases from geographically dispersed locations. If latency is a concern, you can locate read replicas close to your users and applications. When you use a secondary database for read-only transactions, you must ensure that the server software is properly licensed.
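To take advantage of the listener-based read-only routing described above, client connections simply declare their intent. The minimal sketch below uses Python with pyodbc; the listener DNS name, database name, ODBC driver version, and query are assumptions for your own environment, and read-only routing must already be configured on the availability group.

import pyodbc  # assumes a Microsoft ODBC driver for SQL Server is installed

# Connections that declare ApplicationIntent=ReadOnly are routed by the
# availability group listener to a readable secondary replica; read/write
# connections omit the flag and land on the primary.
read_only_connection = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-ag-listener.example.internal,1433;"   # placeholder listener name
    "DATABASE=SalesDB;"                               # placeholder database
    "ApplicationIntent=ReadOnly;"
    "MultiSubnetFailover=Yes;"
    "Trusted_Connection=Yes;"
)

cursor = read_only_connection.cursor()
cursor.execute("SELECT COUNT(*) FROM dbo.Orders")     # hypothetical reporting query
print(cursor.fetchone()[0])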
Security optimization

Cloud security at AWS is the highest priority, and there are many AWS security features available to you. These features can be combined with the built-in security features of Microsoft SQL Server to satisfy even the most stringent requirements and expectations.

Amazon VPC

There are many features in Amazon VPC that can help you secure your data in transit. You can use security groups to restrict access to your EC2 instances and allow only certain endpoints and protocols. You can also use network access control lists to deny known sources of threats. A best practice is to deploy your SQL Server instances in private subnets inside a VPC, and to allow access to the internet only through a VPC network address translation (NAT) gateway or a custom NAT instance.

Encryption at rest

If you are using EBS volumes to store your SQL Server database files, you have the option to enable block-level encryption. Amazon EBS transparently handles encryption and decryption for you; it is available through a simple check box, with no further action necessary. Amazon FSx for Windows File Server also includes built-in encryption at rest. Both EBS and Amazon FSx are integrated with AWS Key Management Service (AWS KMS) for managing encryption keys, which means that through AWS KMS you can either use keys provided by AWS or bring your own keys. For more information, see the AWS KMS documentation.
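The block-level encryption just described can be turned on account-wide and applied to individual volumes from the AWS SDK. In the sketch below, the Availability Zone, volume size, performance settings, and KMS key alias are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt all newly created EBS volumes in this Region into encryption by default.
ec2.enable_ebs_encryption_by_default()

# Create an encrypted gp3 volume for SQL Server data files using a
# customer managed KMS key (the alias is hypothetical).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,              # GiB
    VolumeType="gp3",
    Iops=6000,
    Throughput=500,        # MB/s
    Encrypted=True,
    KmsKeyId="alias/sqlserver-data",
)

print(volume["VolumeId"], volume["Encrypted"])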
At the database level, you can use SQL Server Transparent Data Encryption (TDE), a feature available in Microsoft SQL Server that provides transparent encryption of your data at rest. TDE is available on Amazon RDS for SQL Server, and you can also enable it for your SQL Server workloads on EC2 instances. Previously, TDE was only available in SQL Server Enterprise Edition; SQL Server 2019 also makes it available in Standard Edition. If you want encryption at rest for your database files on Standard Edition of an earlier version of SQL Server, you can use EBS encryption instead.

It is important to understand the trade-offs and differences between EBS encryption and TDE. EBS encryption is done at the block level: data is encrypted when it is stored and decrypted when it is retrieved. With TDE, encryption is done at the file level: database files are encrypted and can only be decrypted using the corresponding certificate. For example, this means that if you use EBS encryption without TDE and copy your database data or log files from your EC2 instance to an S3 bucket that does not have encryption enabled, the files will not be encrypted. Furthermore, if someone gains access to your EC2 instance, the database files are exposed immediately. On the other hand, there is no performance penalty when using EBS encryption, whereas enabling TDE adds additional drag on your server resources.

Encryption in transit

As a best practice, you can enable encryption in transit for your SQL Server workloads using the SSL/TLS protocol. Microsoft SQL Server supports encrypted connections, and SQL Server workloads in AWS are no exception. When using the SMB protocol for the SQL Server storage layer, Amazon FSx automatically encrypts all data in transit using SMB encryption as you access your file system, without any need to modify the configuration of SQL Server or other applications.

Encryption in use

Microsoft SQL Server offers Always Encrypted to protect sensitive data using client certificates. This provides a separation between those who own the data and can view it, and those who manage the data but should have no access. This feature is available both on Amazon RDS for SQL Server and for SQL Server workloads on Amazon EC2.

AWS Key Management Service (AWS KMS)

AWS KMS is a fully managed service used to create and store encryption keys. You can use KMS-generated keys or bring your own keys. In either case, keys never leave AWS KMS and are protected from unauthorized access. You can use KMS keys to encrypt your SQL Server backup files when you store them on Amazon S3, Amazon S3 Glacier, or any other storage service. Amazon EBS encryption also integrates with AWS KMS.
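As a sketch of the backup-encryption pattern just described, the following uploads a native SQL Server backup file to Amazon S3 with server-side encryption under a customer managed KMS key. The bucket name, key alias, and file path are hypothetical.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Upload a SQL Server backup file; S3 encrypts the object at rest with the
# specified KMS key (SSE-KMS) before writing it to storage.
with open(r"D:\backups\SalesDB_full.bak", "rb") as backup_file:
    s3.put_object(
        Bucket="example-sqlserver-backups",           # placeholder bucket
        Key="SalesDB/SalesDB_full.bak",
        Body=backup_file,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/sqlserver-backups",        # placeholder key alias
        StorageClass="STANDARD_IA",
    )

For large backup files, the SDK's managed transfer method (upload_file) is usually preferable, because it performs multipart uploads automatically and accepts the same encryption settings through ExtraArgs.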
Security patches

One of the most common security requirements is the regular deployment of security patches and updates. In AWS, you can use AWS Systems Manager Patch Manager to automate this process. Note that the use cases for Patch Manager are not restricted to security patches; for more details, refer to the Patch management section of this whitepaper.

Cost optimization

SQL Server can be hosted on AWS through the License Included (LI) and Bring Your Own License (BYOL) licensing models. With LI, you run SQL Server on AWS and pay for the licenses as a component of your hourly AWS usage bill. The advantage of this model is that you do not need any long-term commitments; you can stop using the product at any time and stop paying for it. However, many businesses already have considerable investments in SQL Server licenses and might want to reuse their existing licenses on AWS. This is possible using BYOL:

• If you have Software Assurance (SA), one of its benefits is the Microsoft License Mobility through Software Assurance program. This program enables you to use your licenses on server instances running anywhere, including on Amazon EC2 instances.
• If you don't have SA, you may still be able to use your own licenses on AWS using Amazon EC2 Dedicated Hosts. For more details, consult the licensing section of the FAQ for Microsoft workloads on AWS to ensure license compliance.

The BYOL option on EC2 Dedicated Hosts can significantly reduce costs, because the number of physical cores on an EC2 host is about half the total number of vCPUs available on that host. However, one common challenge with this option is the difficulty of tracking license usage and compliance. AWS License Manager helps you solve this problem by tracking license usage and, optionally, enforcing license compliance based on your defined license terms and conditions. AWS License Manager is available to AWS customers at no additional cost.

Using SQL Server Developer Edition for non-production

One of the easiest ways to save licensing costs is to use SQL Server Developer Edition for environments that will not be used by application end users. These are typically development, staging, test, and user acceptance testing (UAT) environments. For this, you can download the SQL Server Developer Edition installation media from the Microsoft website and install it on your EC2 instances. SQL Server Developer Edition is equivalent to SQL Server Enterprise Edition, with full features and functionality.

Amazon EC2 CPU optimization

The z1d instance types provide maximum CPU power, enabling you to reduce the number of CPU cores for compute-intensive SQL Server deployments. However, your SQL Server deployment might not be compute-intensive and might instead require an EC2 instance type that emphasizes other resources, such as memory or storage. Because EC2 instance types that provide these resources also come with a fixed number of cores, which might be more than you require, the end result can be a higher licensing cost for cores that are never used. AWS offers a solution for these situations: you can use EC2 CPU optimization to reduce the number of cores available to an EC2 instance and avoid unnecessary licensing costs.
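A minimal sketch of the CPU optimization just described follows: it launches a memory-optimized instance with fewer active cores than the instance type's default, reducing the number of cores that must be licensed. The AMI ID, instance type, and core count are placeholders; check the valid CPU options for your chosen instance type before applying this.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an r5.4xlarge (8 physical cores by default) with only 4 cores active.
# The memory and network capacity of the instance type are unchanged.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder Windows/SQL Server AMI
    InstanceType="r5.4xlarge",
    MinCount=1,
    MaxCount=1,
    CpuOptions={
        "CoreCount": 4,
        "ThreadsPerCore": 2,                # keep hyper-threading enabled
    },
)

print(response["Instances"][0]["InstanceId"])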
Switch to SQL Server Standard Edition

Enterprise-grade features of SQL Server are exclusively available in the Enterprise edition. However, many of these features have also been made available in the Standard edition over time, enabling you to switch to Standard edition if you have been using Enterprise edition only for those features. One example is encryption at rest using Transparent Data Encryption (TDE), which is available in the Standard edition as of SQL Server 2019.

One of the most common reasons for using Enterprise edition has always been its mission-critical high availability capabilities. However, there are alternative options that enable switching to Standard edition without degrading availability, and you can use these options to cost optimize your SQL Server deployments.

One option is Always On Basic Availability Groups. This option is similar to Always On Availability Groups, but comes with a number of limitations, the most important of which is that you can have only one database in a basic availability group. This rules out this option for applications that rely on multiple databases.

The other option is Always On Failover Cluster Instances (FCI). Because FCI provides high availability at the instance level, it does not matter how many databases are hosted on your SQL Server instance. Traditionally, this option was restricted to HA within a single data center; however, as discussed earlier, you can use Amazon FSx for Windows File Server to overcome that limitation. See the High availability and disaster recovery section of this document.

You can simplify the complexity and cost of running Microsoft SQL Server FCI deployments using Amazon FSx in the following scenarios:

• Due to the complexity and cost of implementing a shared storage solution for FCI, you might have opted to use availability groups and SQL Server Enterprise Edition. You can now switch to Standard edition, significantly reduce your licensing costs, and simplify the overall complexity of your SQL deployment and its ongoing management.
• You might already use SQL Server FCI with shared storage provided by a third-party storage replication software solution. That implies that you purchased a license for the storage replication solution and then deployed, administered, and maintained the shared storage solution yourself. You can now switch to a fully managed shared storage solution with Amazon FSx, simplifying and reducing the cost of your SQL Server FCI deployment.
• You ran your SQL Server Always On deployment on premises using a combination of FCI and AG: FCI to provide high availability within your primary data center, and AG to provide a DR solution across sites. The combination of Availability Zones and the support in Amazon FSx for highly available shared storage deployed across multiple Availability Zones now makes it possible to eliminate the need for separate HA and DR solutions, reducing costs as well as simplifying deployment.

For a more detailed discussion of Microsoft SQL Server FCI deployments using Amazon FSx, see the blog post Simplify your Microsoft SQL Server high availability deployments using Amazon FSx for Windows File Server.

z1d and R5b EC2 instance types

The high-performance z1d and R5b instance types are optimized for workloads that carry high licensing costs, such as Microsoft SQL Server and Oracle databases. The z1d instance type is built with a custom Intel Xeon Scalable processor that delivers a sustained all-core Turbo frequency of up to 4.0 GHz, which is significantly faster than other instances. The R5b uses similar technology, with all-core Turbo frequencies in the 3.1 to 3.5 GHz range. For workloads that need faster sequential processing, you can run fewer cores on a z1d instance and get the same or better performance as other instances with more cores.

For example, moving a SQL Server Enterprise workload from an r4.4xlarge to a z1d.3xlarge can deliver up to 24% in savings, because fewer cores are licensed. The same savings are realized when moving a workload from an r4.16xlarge to a z1d.12xlarge, as it is the same 4:3 ratio.

Figure 4: TCO comparison between SQL Server on r4.4xlarge and z1d.3xlarge

Eliminating active replica licenses

Another opportunity for cost optimization in the cloud is to apply a combination of the BYOL and LI models. A common use case is SQL Server Always On Availability Groups with active replicas. Active replicas are used primarily for:

• Reporting
• Backup
• OLAP batch jobs
• HA

Of these four operations, the first three are often performed intermittently, which means you do not need an instance continuously up and dedicated to running them. In a traditional on-premises environment, you would have to create an active replica that is continuously synchronized with the primary instance, which means obtaining an additional license for the active replica.

Figure 5: SQL Server active replication on premises

In AWS, there is an opportunity to optimize this architecture by replacing the active replica with a passive replica, relegating its role solely to HA. Other operations can be performed on a separate instance using License Included, which can run for a few hours and then be shut down or terminated. The data can be restored from an EBS snapshot of the primary instance. This snapshot can be taken using VSS-enabled EBS snapshots, ensuring no performance impact or downtime on the primary.

Figure 6: Eliminating active replica licenses in AWS
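A sketch of the restore step in this pattern is shown below: it locates the most recent snapshot of the primary's data volume, creates a new volume from it, and attaches that volume to the short-lived, License Included reporting instance. The snapshot tag, Availability Zone, instance ID, and device name are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find the most recent completed snapshot of the primary's data volume.
snapshots = ec2.describe_snapshots(
    Filters=[
        {"Name": "tag:Application", "Values": ["sqlserver"]},   # placeholder tag
        {"Name": "status", "Values": ["completed"]},
    ],
    OwnerIds=["self"],
)["Snapshots"]
latest = max(snapshots, key=lambda s: s["StartTime"])

# Create a volume from the snapshot in the reporting instance's Availability Zone,
# wait until it is ready, and attach it so the LI instance can mount the databases.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId=latest["SnapshotId"],
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0fedcba9876543210",   # placeholder reporting instance
    Device="xvdf",
)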
This solution is applicable when jobs on the active replica run at a low frequency. If you need a replica for jobs that run continuously or at a high frequency, consider using AWS Database Migration Service (AWS DMS) to continuously replicate data from your primary instance into a secondary. The primary benefit of this method is that, because you can do it using SQL Server Standard edition, it avoids the high cost of SQL Server Enterprise edition licensing. Refer to the AWS Microsoft licensing page for more details on ways to optimize licensing costs on AWS.

SQL Server on Linux

Deploying SQL Server on Linux is a way to eliminate Windows license costs. Installation and configuration of SQL Server on Linux can be non-trivial; however, as discussed earlier in this document, AWS Launch Wizard helps simplify this by taking care of most of the heavy lifting for you.

Operational excellence

Most of the discussions in this whitepaper pertain to best practices for deploying Microsoft SQL Server on AWS. Another crucial dimension is operating and maintaining these workloads after deployment. As a general principle, the best practice is to assume that failures and incidents happen all the time, and to be prepared and equipped to respond to them. This objective is composed of three sub-objectives:

• Observe and detect anomalies
• Detect the root cause
• Act to resolve the problem

AWS provides tools and services for each of these purposes.

Observability and root cause analysis

Amazon CloudWatch is a service that enables real-time monitoring of AWS resources and other applications. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications.

Amazon CloudWatch Application Insights for .NET and SQL Server is a feature of Amazon CloudWatch designed to enable operational excellence for Microsoft SQL Server and .NET applications. Once enabled, it identifies and sets up key metrics and logs across your application resources and technology stack. It continuously monitors those metrics and logs to detect anomalies and errors, while using artificial intelligence and machine learning (AI/ML) to correlate the detected errors and anomalies. When errors and anomalies are detected, Application Insights generates CloudWatch Events. To aid troubleshooting, it creates automated dashboards for the detected problems, which include correlated metric anomalies and log errors, along with additional insights that point you to the potential root cause.

Using AWS Launch Wizard, you can choose to enable Amazon CloudWatch Application Insights with a single click. AWS Launch Wizard handles all the configuration necessary to make your SQL Server instance observable through Amazon CloudWatch Application Insights.
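If you are not using AWS Launch Wizard, Application Insights can also be enabled programmatically against a resource group that contains your SQL Server instance. The sketch below assumes such a resource group already exists; the group name and SNS topic ARN are placeholders, and parameter availability may vary by SDK version.

import boto3

appinsights = boto3.client("application-insights", region_name="us-east-1")

# Onboard an existing resource group (containing the EC2 instance running
# SQL Server) so Application Insights sets up metrics, logs, and dashboards,
# and opens OpsCenter OpsItems when problems are detected.
appinsights.create_application(
    ResourceGroupName="sqlserver-prod-rg",                               # placeholder resource group
    OpsCenterEnabled=True,
    OpsItemSNSTopicArn="arn:aws:sns:us-east-1:111122223333:sql-alerts",  # placeholder topic
    CWEMonitorEnabled=True,
)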
Reducing mean time to resolution (MTTR)

The automated dashboards generated by Amazon CloudWatch Application Insights help you take swift remedial action to keep your applications healthy and prevent impact to your application's end users. Application Insights also creates OpsItems, so you can resolve problems using AWS Systems Manager OpsCenter. AWS Systems Manager is a service that enables you to view and control your infrastructure on AWS, on premises, and in other clouds. OpsCenter is a capability of AWS Systems Manager designed to reduce mean time to resolution, and it provides Systems Manager Automation documents (runbooks) that you can use to fully or partially automate the resolution of issues.

Patch management

AWS Systems Manager Patch Manager is a comprehensive patch management solution, fully integrated with native Windows APIs and supporting Windows Server and Linux operating systems as well as Microsoft applications, including Microsoft SQL Server. Patch Manager integrates with AWS Systems Manager Maintenance Windows, allowing you to define a predictable schedule and prevent potential disruption of business operations. You can also use AWS Systems Manager Configuration Compliance dashboards to quickly see patch compliance state and other configuration inconsistencies across your fleet.

Conclusion

This whitepaper described a number of best practices for deploying Microsoft SQL Server workloads on AWS, and discussed how AWS services can be used to complement Microsoft SQL Server features to address different requirements. AWS offers the greatest breadth and depth of services in the cloud, and Amazon EC2 is the most flexible option for deploying Microsoft SQL Server workloads. Each solution and its associated trade-offs may be embraced according to particular business requirements. The five pillars of the AWS Well-Architected Framework (reliability, security, performance, cost optimization, and operational excellence) were explored as they apply to SQL Server workloads, and the AWS services supporting each requirement were introduced.

Contributors

The following individuals and organizations contributed to this document:

• Sepehr Samiei, Solutions Architect, Amazon Web Services

Document revisions

July 28, 2021 - Updated for new AWS services and capabilities supporting Microsoft SQL Server workloads
May 2020 - Updated for new AWS services and capabilities supporting Microsoft SQL Server workloads
March 2019 - Updated for Total Cost of Ownership (TCO) using z1d instance types and EC2 CPU optimization
September 2018 - First publication
|
General
|
consultant
|
Best Practices
|
Best_Practices_for_Migrating_from_RDBMS_to_Amazon_DynamoDB
|
Best Practices for Migrating from RDBMS to Amazon DynamoDB

Leverage the Power of NoSQL for Suitable Workloads

Nathaniel Slater, March 2015

Contents

Abstract
Introduction
Overview of Amazon DynamoDB
Suitable Workloads
Unsuitable Workloads
Key Concepts
Migrating to DynamoDB from RDBMS
Planning Phase
Data Analysis Phase
Data Modeling Phase
Testing Phase
Data Migration Phase
Conclusion
Cheat Sheet
Further Reading

Abstract

Today, software architects and developers have an array of choices for data storage and persistence. These include not only traditional relational database management systems (RDBMS), but also NoSQL databases such as Amazon DynamoDB. Certain workloads will scale better and be more cost effective to run using a NoSQL solution. This paper highlights the best practices for migrating these workloads from an RDBMS to DynamoDB. We discuss how NoSQL databases like DynamoDB differ from a traditional RDBMS, and propose a framework for analysis, data modeling, and migration of data from an RDBMS into DynamoDB.

Introduction

For decades, the RDBMS was the de facto choice for data storage and persistence. Any data-driven application, be it an e-commerce website or an expense reporting system, was almost certain to use a relational database to retrieve and store the data it required. The reasons for this are numerous and include the following:

• RDBMS is a mature and stable technology.
• The query language, SQL, is feature-rich and versatile.
• The servers that run an RDBMS engine are typically some of the most stable and powerful in the IT infrastructure.
• All major programming languages support the drivers used to communicate with an RDBMS, and offer a rich set of tools for simplifying the development of database-driven applications.

These factors, and many others, have supported this incumbency of the RDBMS. For architects and software developers, there simply wasn't a reasonable alternative for data storage and persistence, until now.

The growth of "internet scale" web applications such as e-commerce and social media, the explosion of connected devices like smartphones and tablets, and the rise of big data have resulted in new workloads that traditional relational databases are not well suited to handle. As a system designed for transaction processing, the fundamental properties that an RDBMS must support are defined by the acronym ACID: Atomicity, Consistency, Isolation, and Durability. Atomicity means "all or nothing": a transaction executes completely or not at all. Consistency means that the execution of a transaction causes a valid state transition; once the transaction has been committed, the state of the resulting data must conform to the constraints imposed by the database schema. Isolation requires that concurrent transactions execute separately from one another.
The isolation property guarantees that if concurrent transactions were executed serially, the end state of the data would be the same. Durability requires that the state of the data be preserved once a transaction executes; in the event of power or system failure, the database should be able to recover to the last known state.

These ACID properties are all desirable, but support for all four requires an architecture that poses challenges for today's data-intensive workloads. For example, consistency requires a well-defined schema and requires that all data stored in the database conform to that schema. This is great for ad hoc queries and read-heavy workloads. For a workload consisting almost entirely of writes, such as saving a player's state in a gaming application, this enforcement of schema is expensive from a storage and compute standpoint. The game developer benefits little from forcing this data into rows and tables that relate to one another through a well-defined set of keys.

Consistency also requires locking some portion of the data until the transaction modifying it completes, and then making the change immediately visible. For a bank transaction, which debits one account and credits another, this is required; this type of transaction is called "strongly consistent." For a social media application, on the other hand, there is no real requirement that all users see an update to a data feed at precisely the same time. In this latter case, the transaction is "eventually consistent." It is far more important that the social media application scale to handle potentially millions of simultaneous users, even if those users see changes to the data at different times.

Scaling an RDBMS to handle this level of concurrency while maintaining strong consistency requires upgrading to more powerful (and often proprietary) hardware. This is called "scaling up" or "vertical scaling," and it usually carries an extremely high cost. The more cost-effective way to scale a database to support this level of concurrency is to add server instances running on commodity hardware. This is called "scaling out" or "horizontal scaling," and it is typically far more cost effective than vertical scaling.

NoSQL databases like Amazon DynamoDB address the scaling and performance challenges found with an RDBMS. The term "NoSQL" simply means that the database does not follow the relational model espoused by E.F. Codd in his 1970 paper, A Relational Model of Data for Large Shared Data Banks (http://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf), which would become the basis for all modern RDBMS. As a result, NoSQL databases vary much more widely in features and functionality than a traditional RDBMS. There is no common query language analogous to SQL, and query flexibility is generally replaced by high I/O performance and horizontal scalability. NoSQL databases do not enforce the notion of schema in the same way as an RDBMS: some may store semi-structured data like JSON, others may store related values as column sets, and still others may simply store key/value pairs. The net result is that NoSQL databases trade some of the query capabilities and ACID properties of an RDBMS for a much more flexible data model that scales horizontally. These characteristics make NoSQL databases an excellent choice in situations where the use of an RDBMS for non-relational workloads
(like the aforementioned game state example) is resulting in some combination of performance bottlenecks, operational complexity, and rising costs. DynamoDB offers solutions to all of these problems, and is an excellent platform for migrating such workloads off of an RDBMS.

Overview of Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service running in the AWS cloud. The complexity of running a massively scalable, distributed NoSQL database is managed by the service itself, allowing software developers to focus on building applications rather than managing infrastructure. NoSQL databases are designed for scale, but their architectures are sophisticated, and there can be significant operational overhead in running a large NoSQL cluster. Instead of having to become experts in advanced distributed computing concepts, developers need only learn DynamoDB's straightforward API using the SDK for their programming language of choice.

In addition to being easy to use, DynamoDB is also cost effective. With DynamoDB, you pay for the storage you consume and the I/O throughput you have provisioned. It is designed to scale elastically: when the storage and throughput requirements of an application are low, only a small amount of capacity needs to be provisioned in the DynamoDB service. As the number of users of an application grows and the required I/O throughput increases, additional capacity can be provisioned on the fly. This enables an application to seamlessly grow to support millions of users making thousands of concurrent requests to the database every second.

Tables are the fundamental construct for organizing and storing data in DynamoDB. A table consists of items, and an item is composed of a primary key that uniquely identifies it and key/value pairs called attributes. While an item is similar to a row in an RDBMS table, all the items in the same DynamoDB table need not share the same set of attributes, in the way that all rows in a relational table share the same columns. Figure 1 shows the structure of a DynamoDB table and the items it contains. There is no concept of a column in a DynamoDB table; each item in the table can be expressed as a tuple containing an arbitrary number of elements, up to a maximum size of 400 KB. This data model is well suited for storing data in the formats commonly used for object serialization and messaging in distributed systems. As we will see in the next section, workloads that involve this type of data are good candidates to migrate to DynamoDB.

Figure 1: DynamoDB Table Structure

Tables and items are created, updated, and deleted through the DynamoDB API. There is no concept of a standard DML language like there is in the relational database world; manipulation of data in DynamoDB is done programmatically through object-oriented code. It is possible to query data in a DynamoDB table, but this too is done programmatically through the API. Because there is no generic query language like SQL, it is important to understand your application's data access patterns well in order to make the most effective use of DynamoDB.
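To make the API-driven model concrete, the sketch below uses the AWS SDK for Python (Boto3) to write and read items in an existing table. The two items deliberately carry different attribute sets, which is valid in DynamoDB; the table and attribute names are hypothetical.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("UserSessions")          # placeholder table name

# Two items in the same table with different attribute sets.
table.put_item(Item={"user_id": "alice@example.com",
                     "last_login": "2015-03-01T10:15:00Z"})
table.put_item(Item={"user_id": "bob@example.com",
                     "last_login": "2015-03-01T11:02:00Z",
                     "preferences": {"theme": "dark", "locale": "en-US"}})

# Fetch a single item by its primary key.
response = table.get_item(Key={"user_id": "alice@example.com"})
print(response.get("Item"))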
Suitable Workloads

DynamoDB is a NoSQL database, which means it will perform best for workloads involving non-relational data. Some of the more common use cases for non-relational workloads are:

• AdTech: capturing browser cookie state
• Mobile applications: storing application data and session state
• Gaming applications: storing user preferences and application state, and storing players' game state
• Consumer "voting" applications: reality TV contests, Super Bowl commercials
• Large-scale websites: session state, user data used for personalization, access control
• Application monitoring: storing application log and event data, JSON data
• Internet of Things: sensor data and log ingestion

All of these use cases benefit from some combination of the features that make NoSQL databases so powerful. AdTech applications typically require extremely low latency, which is well suited to DynamoDB's low single-digit-millisecond read and write performance. Mobile applications and consumer voting applications often have millions of users and need to handle thousands of requests per second; DynamoDB can scale horizontally to meet this load. Finally, application monitoring solutions typically ingest hundreds of thousands of data points per minute, and DynamoDB's schemaless data model, high I/O performance, and support for a native JSON data type are a great fit for these types of applications.

Another important characteristic to consider when determining whether a workload is suitable for a NoSQL database like DynamoDB is whether it requires horizontal scaling. A mobile application may have millions of users, but each installation of the application will only read and write session data for a single user. This means the user session data in the DynamoDB table can be distributed across multiple storage partitions; a read or write of data for a given user will be confined to a single partition. This allows the DynamoDB table to scale horizontally: as more users are added, more partitions are created. As long as requests to read and write this data are uniformly distributed across partitions, DynamoDB can handle a very large amount of concurrent data access.

This type of horizontal scaling is difficult to achieve with an RDBMS without the use of "sharding," which can add significant complexity to an application's data access layer. When data in an RDBMS is sharded, it is split across different database instances. This requires maintaining an index of the instances and the ranges of data they contain, and in order to read and write data, a client application needs to know which shard holds the range of data to be read or written. Sharding also adds administrative overhead and cost: instead of a single database instance, you are now responsible for keeping several up and running.

It is also important to evaluate the data consistency requirements of an application when determining whether a workload would be suitable for DynamoDB. DynamoDB supports two consistency models, strong and eventual consistency, with the former requiring more provisioned I/O than the latter. This flexibility allows the developer to get the best possible performance from the database while still supporting the consistency requirements of the application. If an application does not require strongly consistent reads, meaning that updates made by one client do not need to be immediately visible to others, then use of an RDBMS that forces strong consistency can result in a tax on performance with no net benefit to the application. The reason is that strong consistency usually involves locking some portion of the data, which can cause performance bottlenecks.
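The consistency choice just described is made per read request. A minimal sketch follows; the table and key names are hypothetical.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("UserSessions")   # placeholder table name

# Default read: eventually consistent, consuming half as much read capacity.
eventual = table.get_item(Key={"user_id": "alice@example.com"})

# Strongly consistent read: reflects all writes acknowledged before the request,
# at the cost of additional read capacity.
strong = table.get_item(Key={"user_id": "alice@example.com"}, ConsistentRead=True)

print(eventual.get("Item"), strong.get("Item"))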
Unsuitable Workloads

Not all workloads are suitable for a NoSQL database like DynamoDB. While in theory one could implement a classic entity-relationship model using DynamoDB tables and items, in practice this would be extremely cumbersome to work with. Transactional systems that require well-defined relationships between entities are still best implemented using a traditional RDBMS. Some other unsuitable workloads include:

• Ad hoc queries
• OLAP
• BLOB storage

Because DynamoDB does not support a standard query language like SQL, and because there is no concept of a table join, constructing ad hoc queries is not as efficient as it is with an RDBMS. Running such queries with DynamoDB is possible, but requires the use of Amazon EMR and Hive. Likewise, OLAP applications are difficult to deliver, because the dimensional data model used for analytical processing requires joining fact tables to dimension tables. Finally, due to the size limitation of a DynamoDB item, storing BLOBs is often not practical. DynamoDB does support a binary data type, but this is not suited to storing large binary objects like images or documents. However, storing a pointer in the DynamoDB table to a large BLOB stored in Amazon S3 easily supports this last use case.

Key Concepts

As described in the previous section, DynamoDB organizes data into tables consisting of items. Each item in a DynamoDB table can define an arbitrary set of attributes, but all items in the table must define a primary key that uniquely identifies the item. This key must contain an attribute known as the "hash key" and, optionally, an attribute called the "range key," on which DynamoDB maintains a sorted index. Figure 2 shows the structure of a DynamoDB table defining both a hash and a range key.
Figure 2: DynamoDB Table with Hash and Range Keys

If an item can be uniquely identified by a single attribute value, then this attribute can function as the hash key. In other cases, an item may be uniquely identified by two values, in which case the primary key is defined as a composite of the hash key and the range key. Figure 3 demonstrates this concept: an RDBMS table relating media files to the codec used to transcode them can be modeled as a single table in DynamoDB using a primary key consisting of a hash and a range key. Note how the data is de-normalized in the DynamoDB table. This is a common practice when migrating data from an RDBMS to a NoSQL database, and will be discussed in more detail later in this paper.
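A table with this kind of composite primary key can be declared through the API as follows. The sketch assumes hypothetical attribute names for the media file example above, and the provisioned throughput values are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Composite primary key: media_id is the hash (partition) key and
# codec is the range (sort) key.
dynamodb.create_table(
    TableName="MediaFileCodecs",            # placeholder table name
    AttributeDefinitions=[
        {"AttributeName": "media_id", "AttributeType": "S"},
        {"AttributeName": "codec", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "media_id", "KeyType": "HASH"},
        {"AttributeName": "codec", "KeyType": "RANGE"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Wait until the table is ready before writing to it.
dynamodb.get_waiter("table_exists").wait(TableName="MediaFileCodecs")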
local and global secondary indexes use provisioned IO (discussed in detail below) for reads and writes to the index This means that each time an item is inserted or updated in the main table any secondary indexes will consume IO to update the index Figure 6: Create a global secondary index on a table Whenever an item is read from or written to a DynamoDB table or index the amount of data required to perform the read or write operation is expressed as a “read unit” or “write unit” A read unit consists of 4K of data and a write unit is 1K This means that fetching an item of 8K in size will consume 2 read units of data Inserting the item would consume 8 write units of data The number of read and write units allowed per second is known as the “provisioned IO” of the table If your application requires that 1000 4K items be written per second then the provisioned write capacity of the table would need to be a minimum of 4000 write units per second When an insufficient amount of read or write capacity is provisi oned on a table the DynamoDB service will “throttle” the read and write operations This can result in poor performance and in some cases throttling exceptions in the client application For this reason it is important to understand an application ’s IO requirements when designing the tables However both read and write capacity can be altered on an existing table and if an application suddenly experiences a spike in usage that results in throttling the provisioned IO can be increased to handle the n ew workload Similarly if load decreases for some reason the provisioned IO can be reduced This ability to dynamically alter the IO characteristics of a table is a key differentiator between DynamoDB and a traditional RDBMS in which IO throughput is going to be fixed based on the underlying hardware the database engine is running on Choose which attributes to promote (if any) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 13 of 24 Migrating to DynamoDB from RDBMS In the previous section we discussed some of the key features of DynamoDB as well as some of the key differences between DynamoDB and a traditional RDBMS In this section we will propose a strategy for migrating from an RDBMS to DynamoDB that takes into account these key features and differences Because database migrations tend to be complex and risky we advocate taking a phased ite rative approach As is the case with the adoption of any new technology it’s also good to focus on the easiest use cases first It’s also important to remember as we propose in this section that migration to DynamoDB doesn’t need to be an “all or not hing” process For certain migrations it may be feasible to run the workload on both DynamoDB and the RDBMS in parallel and switch over to DynamoDB only when it’s clear that the migration has succeeded and the application is working properly The follow ing state diagram expresses our proposed migration strategy: Figure 7: Migration Phases It is important to note that this process is iterative The outcome of certain states can result in a return to a previous state Oversights in the data analysis an d data modeling phase may not become apparent until testing In most cases it will be necessary to iterate over these phases multiple times before reaching the final data migration state Each phase will be discussed in detail in the 
sections that follo w Planning Phase The first part of the planning phase is to identify the goals of the data migration These often include (but are not limited to): • Increasing application performance • Lowering costs • Reducing the load on an RDBMS In many cases the goals of a migration may be a combination of all of the above Once these goals have been defined they can be used to inform the identification of the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 14 of 24 RDMBS tables to migrate to DynamoDB As we mentioned previously tables being used by workloads involving non relational data make excellent choices for migration to DynamoDB Migration of such tables to DynamoDB can result in significantly improved application performance as well as lower costs and lower loads on the RDBMS Some good candidates for migration are: • Entity Attribute Value tables • Application session state tables • User preference tables • Logging tables Once the tables have been identified any characteristics of the source tables that may make migration challenging should b e documented This information will be essential for choosing a sound migration strategy Let’s take a look at some of the more common challenges that tend to impact the migration strategy : Challenge Impact on Migration Strategy Writes to the RDBMS sour ce table cannot be acquiesced before or during the migration Synchronization of the data in the target DynamoDB table with the source will be difficult Consider a migration strategy that involves writing data to both the source and target tables in parallel The amount of data in the source table is in excess of what can reasonably be transferred with the existing network bandwidth Consider exporting the data from the source table to removable disks and using the AWS Import/Export service to import the data to a bucket in S3 Import this data into DynamoDB directly from S3 Alternatively reduce the amount of data that needs to be migrated by exporting only those records that were created after a recent point in time All data older than that point will remain in the legacy table in the RDBMS The data in the source table needs to be transformed before it can be imported into Export the data from the source table and transfer it to S3 Consider using EMR to perform the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 15 of 24 Challenge Impact on Migration Strategy DynamoDB data transforma tion and import the transformed data into DynamoDB The primary key structure of the source table is not portable to DynamoDB Identify column(s) that will make suitable hash and range keys for the imported items Alternatively consider adding a surrog ate key (such as a UUID) to the source table that will act as a suitable hash key The data in the source table is encrypted If the encryption is being managed by the RDBMS then the data will need to be decrypted when exported and re encrypted upon import using an encryption scheme enforced by the application not the underlying database engine The cryptographic keys will need to be managed outside of DynamoDB Table 1: Challenges that Impact Migration Strategy Finally and perhaps most importantly the 
backup and recovery process should be defined and documented in the planning phase If the migration strategy requires a full cutover from the RDBMS to DynamoDB defining a process for restoring functionality using the RDBMS in the event the migration fails is essential To mitigate risk consider running the workload on DynamoDB and the RDBMS in parallel for some length of time In this scenario the legacy RDBMS based application can be disabled only once the workload has been sufficiently tested in production using DynamoDB Data Analysis Phase The purpose of the data analysis phase is to understand the composition of the source data and to identify the data access patterns used by the application This information is required input into the data modeling phase It is also essential for understanding the cost and performance of running a workload on DynamoDB The analysis of the source data should include: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 16 of 24 • An estimate of the number of items to be imported into DynamoDB • A distribution of the item sizes • The multiplicity of values to be used as hash or range keys DynamoDB pricing contains two main components – storage and provisioned IO By estimating the number of items that will be imported into a DynamoDB table and the approximate size of each item the storage and the provisioned IO requirements for the table can be calculated Common SQL data types will map to one of 3 scalar types in DynamoDB: string number and binary The length of the number data type is variable and strings are encoded using binary UTF 8 Focus should be placed on the attributes with the largest values when estimating item size as provisioned I OPS are given in integral 1K increments —there is no concept of a fractional IO in DynamoDB If an item is estimated to be 33K in size it will require 4 1K write IO units and 1 4K read IO unit to write and read a single item respectively Since the siz e will be rounded to the nearest kilobyte the exact size of the numeric types is unimportant In most cases even for large numbers with high precision the data will be stored using a small number of bytes Because each item in a table may contain a var iable number of attributes it is useful to compute a distribution of item sizes and use a percentile value to estimate item size For example one may choose an item size representing the 95th percentile and use this for estimating the storage and provisioned IO costs In the event that there are too many rows in the source table to inspect individually take samples of the source data and use these for computing the item size distribution At a minimum a table should have enough provisioned read and write units to read and write a single item per second For example if 4 write units are required to write an item with a size equal to or less than the 95 th percentile than the table should have a minimum provisioned IO of 4 write units per second Anything less than this and the write of a single item will cause throttling and degrade performance In practice the number of provisioned read and write units will be much larger than the required minimum An application using DynamoDB for data storage will typically need to issue read and writes concurrently Correctly estimating the provisioned IO is key to both guaranteeing the required application performance as 
well as understanding cost Understanding the distribution frequency of RDBMS colu mn values that could be hash or range keys is essential for obtaining maximum performance as well Columns containing values that are not uniformly distributed (ie some values occur in much larger numbers than others) are not good hash or range keys because accessing items with keys occurring in high frequency will hit the same DynamoDB partitions and this has negative performance implications The second purpose of the data analysis phase is to categorize the data access patterns of the application Because DynamoDB does not support a generic query language like SQL it is essential to document that ways in which data will be written to and read from This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 17 of 24 the tables This information is critical for the data modeling phase in which the tables the key structure and the indexes will be defined Some com mon patterns for data access are: • Write Only – Items are written to a table and never read by the application • Fetches by distinct value – Items are fetched in dividually by a value that uniquely identifies the item in the table • Queries across a range of values – This is seen frequently with temporal data As we will see in the next section documentation of an application’s data access patterns using categories such as those described above will drive much of the data modeling decisions Data Modeling Phase In this phase the tables hash and range keys and secondary indexes w ill be defined The data model produced in this phase must support the data access patterns described in the data analysis phase The first step in data modeling is to determine the hash and range keys for a table The primary key whether consisting only of the hash key or a composite of the hash and range key must be unique for all items in the table When migrating data from an RDBMS it is tempting to use the primary key of the source table as the hash key But in reality this key is often semantically meaningless to the application For example a User table in an RDBMS may define a numeric primary key but an application responsible for logging in a user will ask for an email address not the numeric user ID In this case the email address is the “natural key” and would be better suited as the hash key in the DynamoDB table as items can easily be fetched by their hash key values Modeling the hash key in this way is appropriate for data access patterns that fetch items by distinct value For other data access patterns like “write only” using a randomly generated numeric ID will work well for the hash key In this case the items will never be fetched from the table by the application and as such the key will only be used to uniquely identify the items not a means of fetching data RDBMS tables that contain a unique index on two key values are good candidates for defining a primary key using both a hash key and a range key Intersection tables used to define many tomany relationships in an RDBMS are typically modeled using a unique index on the key values of both sides of the relationship Because fetching data i n a many tomany relationship requires a series of table joins migrating such a table to DynamoDB would also involve de normalizing the data (discussed in more detail below) Date values are also often used as range 
keys A table counting the number of t imes a URL was visited on any given day could define the URL as the hash key and the date as the range key As with primary keys consisting solely of a hash key fetching items with a composite primary key requires the application to specify both the hash and range key This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 18 of 24 values This needs to be considered when evaluating whether a surrogate key or a natural key would make the better choice for the hash and or range key Because non key attributes can be added to an item arbitrarily the only attributes th at must be specified in a DynamoDB table definition are the hash key and (optionally) the range key However if secondary indexes are going to be defined on any non key attributes then these must be included in the table definition Inclusion of non key attributes in the table definition does not impose any sort of schema on all the items in the table Aside from the primary key each item in the table can have an arbitrary list of attributes The support for SQL in an RDBMS means that records can be f etched using any of the column values in the table These queries may not always be efficient – if no index exists on the column used to fetch the data a full table scan may be required to locate the matching rows The query interface exposed by the Dyn amoDB API does not support fetching items from a table in this way It is possible to do a full table scan but this is inefficient and will consume substantial read units if the table is large Instead items can be fetched from a DynamoDB table by the primary key of the table or the key of a local or global secondary index defined on the table Because an index on a non key column of an RDBMS table suggests that the application commonly queries for data on this value these attributes make good candidates for local or global secondary indexes in a DynamoDB table There are limits to the number of secondary indexes allowed on a DynamoDB table 2 so it is important to choose define keys for these indexes using attribute values that the application will use most frequently for fetching data DynamoDB does not support the concept of a table join so migrating data from an RDBMS table will often re quire denormalizing the data To those used to working with an RDBMS this will be a foreign and perhaps uncomfortable concept Since the workloads most suitable for migrating to DynamoDB from an RDMBS tend to involve nonrelational data denormalizatio n rarely poses the same issues as it would in a relational data model For example if a relational database contains a User and a UserAddress table related through the UserID one would combine the User attributes with the Address attributes into a sing le DynamoDB table In the relational database normalizing the User Address information allows for multiple addresses to be specified for a given user This is a requirement for a contact management or CRM system But in DynamoDB a User table would likely serve a different purpose —perhaps keeping track of a mobile application’s registered users In this use case the multiplicity of Users to Addresses is less important than scalability and fast retrieval of user records 2 http://docsawsamazoncom/amazondynamodb/latest/developerguide/Limitshtml This paper has been archived For the latest technical content refer t 
o the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

Data Modeling Example
Let's walk through an example that combines the concepts described in this section and the previous one. This example demonstrates how to use secondary indexes for efficient data access, and how to estimate both item size and the required amount of provisioned IO for a DynamoDB table. Figure 8 contains an ER diagram for a schema used to track events when processing orders placed online through an e-commerce portal. Both the RDBMS and DynamoDB table structures are shown.

Figure 8: RDBMS and DynamoDB schema for tracking events

The number of rows that will be migrated is large enough that computing the 95th percentile of item size across every row is not practical. Instead, we perform simple random sampling with replacement, which gives adequate precision for the purposes of estimating item size. To do this, we construct a SQL view that contains the fields that will be inserted into the DynamoDB table. Our sampling routine then randomly selects rows from this view and computes the size representing the 95th percentile. This statistical sampling yields a 95th percentile size of 6.6 KB, most of which is consumed by the "Data" attribute (which can be as large as 6 KB in size).

The minimum number of write units required to write a single item is:

ceiling(6.6 KB per item / 1 KB per write unit) = 7 write units per item

The minimum number of read units required to read a single item is computed similarly:

ceiling(6.6 KB per item / 4 KB per read unit) = 2 read units per item

This particular workload is write-heavy, and we need enough IO to write 1,000 events for each of 500 orders per day. This is computed as follows:

500 orders per day × 1,000 events per order = 500,000 events per day
500,000 events per day / 86,400 seconds per day ≈ 5.78 events per second
ceiling(5.78 events per second × 7 write units per item) = 41 write units per second

Reads on the table only happen once per hour, when the previous hour's data is imported into an Amazon Elastic MapReduce cluster for ETL. This operation uses a query that selects items from a given date range (which is why the EventDate attribute is both a range key and a global secondary index). The number of read units (which will be provisioned on the global secondary index) required to retrieve the results of a query is based on the size of the results returned by the query:

5.78 events per second × 3,600 seconds per hour ≈ 20,808 events per hour
20,808 events per hour × 6.6 KB per item / 1,024 KB per MB ≈ 134.11 MB per hour

The maximum amount of data returned in a single query operation is 1 MB, so pagination will be required. Each hourly read query will require reading 135 pages of data. For strongly consistent reads, 256 read units are required to read a full 1 MB page at a time (the number is half as much for eventually consistent reads). So, to support this particular workload, 256 read units and 41 write units will be required. From a practical standpoint, the write units would likely be expressed as an even number like 48. We now have all the data we need to estimate the DynamoDB cost for this workload:
1 Number of items
2 Item size (7KB) 3 Write units (48) 4 Read units (256) These can be run through the Amazon Simple Monthly Calculator3 to derive a cost estimate 3 http://calculators3amazonawscom/indexhtml This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 21 of 24 Testing Phase The testing phase is the most important part of the migration strategy It is during this phase that the entire migration process will be tested end toend A comprehensive test plan should minimally contain the following: Test Category Purpose Basic Acceptance Tests These tests should be automatically executed upon completion of the data migration routines Their primary purpose is to verify whether the data migration was successful Some common outputs from these tests will include: • Total # items processed • Total # items imported • Total # items skipped • Total # warnings • Total # errors If any of these totals reported by the tests deviate from the expected values then it means the migration was not successful and the issues need to be resolved before moving to the next step in the process or the next round of testing Functional Tests These tests exercise the functionality of the application(s) using DynamoDB for data storage They will include a combination of automated and manual tests The primary purpose of the functional tests is to identify problems in the application caused by the migration of the RDBMS data to DynamoDB It is during this round of testing that gaps in the data model are often revealed NonFunctional Tests These tests will assess the non functional characteristics of the application such as performance under varying levels of load and resiliency to failure of any portion of This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 22 of 24 Test Category Purpose the application stack These tests can also include boundary or edge cases that are low probability but could negatively impact the application (for example if a large number of clients try to update the same record at the exact same time) The backup and recovery process defined in the planning phase should also be included in nonfunctional testing User Acceptance Tests These tests should be executed by the end users of the application(s) once the final data migration has completed The purpose of these tests is for the end users to decide if the application is sufficiently usable to meet it’s primary function in t he organization Table 2: Data Migration Test Plan Because the migration strategy is iterative these tests will be executed numerous times For maximum efficiency consider testing the data migration routines using a sampling from the production data if the total amount of data to migrate is large The outcome of the testing phase will often require revisiting a previous phase in the process The overall migration strategy will become more refined through each iteration through the process and once al l the tests have executed successfully it will be a good indication that it is time for the next and final phase: data migration Data Migration Phase In the data migration phase the full set of production data from the source RDBMS tables will be migr ated into DynamoDB By the time this phase is 
reached the end to end data migration process will have been tested and vetted thoroughly All the steps of the process will have been carefully documented so running it on the production data set should be as simple as following a procedure that has been executed numerous times before In preparation for this final phase a notification should be sent to the application users alerting them that the application will be undergoing maintenance and (if required) downtime Once the data migration has completed the user acceptance tests defined in the previous phase should be executed one final time to ensure that the application is in a usable state In the event that the migration fails for any reason the b ackup and recovery procedure —which will also have been thoroughly tested and vetted at this point —can be executed When the system is back to a stable state a root cause analysis of the failure should be conducted and the data migration rescheduled once the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 23 of 24 root cause has been resolved If all goes well the application should be closely monitored over the next several days until there is sufficient data indicating that the application is functioning normally Conclusion Leveraging DynamoDB for suitable workloads can result in lower costs a reduction in operational overhead and an increase in performance availability and reliability when compared to a traditional RDBMS In this paper we proposed a strategy for identifying and migrating suitable workloads from an RDBMS to DynamoDB While implementing such a strategy will require careful planning and engineering effort we are confident that the ROI of migrating to a fully managed NoSQL solution like DynamoDB will greatly exceed the upfront cost associated with the effort Cheat Sheet The following is a “cheat sheet” summarizing some of the key concepts discussed in this paper and the sections where those concepts are detailed: Concept Section Determining suitable wor kloads Suitable Workloads Choosing the right key structure Key Concepts Table indexing Data Modeling Phase Provisioning read and write throughput Data Modeling Example Choosing a migration strategy Planning Phase Further Reading For additional help please consult the following sources: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Best Practices for Migrating from RDBMS to DynamoDB August 2014 Page 24 of 24 • DynamoDB Developer Guide4 • DynamoDB Website5 4 http://docsawsamazoncom/amazondynamodb/latest/developerguide/GettingStartedDynamoDBhtml 5 http://awsamazoncom/dynamodb
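As a companion to the sizing walkthrough in the Data Analysis and Data Modeling Example sections, the following minimal Python sketch captures the same arithmetic for the provisioned-throughput model described in this paper. It is a back-of-the-envelope helper, not an official AWS calculator; the function names are my own, and the figures in the usage lines are the worked example's assumptions (6.6 KB items, 500 orders per day with 1,000 events each).

import math

def write_units_per_item(item_size_kb):
    # Write capacity is consumed in 1 KB increments, rounded up per item.
    return math.ceil(item_size_kb / 1.0)

def read_units_per_item(item_size_kb, strongly_consistent=True):
    # Read capacity is consumed in 4 KB increments; eventually consistent
    # reads cost roughly half as much.
    units = math.ceil(item_size_kb / 4.0)
    return units if strongly_consistent else max(1, math.ceil(units / 2.0))

def provisioned_write_units(events_per_day, item_size_kb):
    events_per_second = events_per_day / 86400.0
    return math.ceil(events_per_second * write_units_per_item(item_size_kb))

# Figures from the Data Modeling Example:
print(write_units_per_item(6.6))                  # 7
print(read_units_per_item(6.6))                   # 2
print(provisioned_write_units(500 * 1000, 6.6))   # 41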
|
General
|
consultant
|
Best Practices
|
Best_Practices_for_Migrating_MySQL_Databases_to_Amazon_Aurora
|
ArchivedBest Practices for Migrating MySQL Databases to Amazon Aurora October 2016 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Basic Performance Considerations 1 Client Location 1 Client Capacity 3 Client Configuration 4 Server Capacity 4 Tools and Procedures 5 Advanced Performance Concepts 6 Client Topics 6 Server Topics 7 Tools 8 Procedure Optimizations 12 Conclusion 18 Contributors 18 Archived Abstract This whitepaper discusses some of the important factors affecting the performance of selfmanaged export/import operations in Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora Although many of the topics are discussed in the context of Amazon RDS performance principles presented here also apply to the MySQL Community Edition found in selfmanaged MySQL installations Target Audience The target audience of this paper includes: Database and system administrators performing migrations from MySQL compatible databases into Aurora where AWSmanaged migration tools cannot be used Software developers working on bulk data import tools for MySQL compatible databases ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 1 Introduction Migrations are among the most timeconsuming tasks handled by database administrators (DBAs) Although the task becomes easier with the advent of managed migration services such as the AWS Database Migration Service (AWS DMS) many largescale database migrations still require a custom approach due to performance manageability and compatibility requirements The total time required to export data from the source repository and import it into the target database is one of the most important factors determining the success of all migration projects This paper discuss es the following major contributors to migration performance: Client and server performance characteristics The choice of migration tools; without the right tools even the most powerful client and server machines cannot reach their full potential Optimized migration procedures to fully utilize the available client/server resources and leverage performanceoptimized tooling Basic Performance Considerations The following are basic considerations for client and server performance Tooling and procedure optimizations are described in more detail in “Tools and Procedures " later in this document Client Location Perform export/import operations from a client machine that is launched in the same location as the database server: For onpremises database servers the client machine 
should be in the same on-premises network.
• For Amazon RDS or Amazon Elastic Compute Cloud (Amazon EC2) database instances, the client instance should exist in the same Amazon Virtual Private Cloud (Amazon VPC) and Availability Zone as the server.
• For EC2-Classic (non-VPC) servers, the client should be located in the same AWS Region and Availability Zone.

Figure 1: Logical migration between AWS Cloud databases

To follow the preceding recommendations during migrations between distant databases, you might need to use two client machines:
• One in the source network, so that it's close to the server you're migrating from
• Another in the target network, so that it's close to the server you're migrating to
In this case, you can move dump files between client machines using file transfer protocols (such as FTP or SFTP) or upload them to Amazon Simple Storage Service (Amazon S3). To further reduce the total migration time, you can compress files prior to transferring them.

Figure 2: Data flow in a self-managed migration from on-premises to an AWS Cloud database

Client Capacity
Regardless of its location, the client machine should have adequate CPU, I/O, and network capacity to perform the requested operations. Although the definition of adequate varies depending on use case, the general recommendations are as follows:
• If the export or import involves real-time processing of data, for example compression or decompression, choose an instance class with at least one CPU core per export/import thread.
• Ensure that there is enough network bandwidth available to the client instance. We recommend using instance types that support enhanced networking. For more information, see the Enhanced Networking section in the Amazon EC2 User Guide.1
• Ensure that the client's storage layer provides the expected read/write capacity. For example, if you expect to dump data at 100 megabytes per second, the instance and its underlying Amazon Elastic Block Store (Amazon EBS) volume must be capable of sustaining at least 100 MB/s (800 Mbps) of I/O throughput.

Client Configuration
For best performance on Linux client instances, we recommend that you enable the receive packet steering (RPS) and receive flow steering (RFS) features. To enable RPS, use the following code:

sudo sh -c 'for x in /sys/class/net/eth0/queues/rx-*; do echo ffffffff > $x/rps_cpus; done'
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt"
sudo sh -c "echo 4096 > /sys/class/net/eth0/queues/rx-1/rps_flow_cnt"

To enable RFS, use the following code:

sudo sh -c "echo 32768 > /proc/sys/net/core/rps_sock_flow_entries"

Server Capacity
To dump or ingest data at optimal speed, the database server should have enough I/O and CPU capacity. In traditional databases, I/O performance often becomes the ultimate bottleneck during migrations. Aurora addresses this challenge by using a custom distributed storage layer designed to provide low latency and high throughput under multithreaded workloads. In Aurora, you don't have to choose between storage types or provision storage specifically for export/import purposes. We recommend using Aurora instances with one CPU core per thread for exports and two CPU cores per thread for imports. If you've chosen an instance class with enough CPU cores to handle your
export/import the instance should already offer adequate network bandwidth ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 5 For more information see “Server Topics ” later in this document Tools and Procedures Whenever possible perform export and import operations in multithreaded fashion On modern systems equipped with multicore CPUs and distributed storage this approach ensures that all available client/server resources are consumed efficiently Engineer export/import procedures to avoid unnecessary overhead The following table lists common export/import performance challenges and provides sample solutions You can use it do drive your tooling and procedure choices Import Technique Challenge Potential Solution Examples Single row INSERT statements Storage and SQL processing overhead Use multi row SQL statements Use non SQL format (eg CSV flat files) Import 1 MB of data per statement Use a set of flat files (chunks) 1 GB each Single row or multi row statements with small transaction size Transactional overhead each statement is committed separately Increase transaction size Commit once per 1000 statements Flat file imports with very large transaction size Undo management overhead Reduce transaction size Commit once per 1 GB of data imported Single threaded export/import Under utilization of server resources only one table is accessed at a time Export/import multiple tables in parallel Export from or load into 8 tables in parallel If you are exporting data from an active production database you have to find a balance between the performance of production queries and that of the export itself Execute export operations carefully so that you don ’t compromise the performance of the production workload This information is discussed i n more detail in the following section ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 6 Advanced Performance Concepts Client Topics Contrary to the popular opinion that total migration time depends exclusively on server performance data migrations can often be constrained by clientside factors It is important that you identify understand and finally address client side bottlenecks; otherwise you may not achieve the goal of reaching optimal import/export performance Client Location The location of the client machine is an important factor affecting data migrations performance benchmarks and day today database operations alike Remote clients can experience network latency ranging from dozens to hundreds of milliseconds Communication latency introduces unnecessary overhead to every database operation and can result in substantial performance degradation The performance impact of network latency is particularly visible during single threaded operations involving large amounts of short database statements With all statements executed on a single thread the statement throughput becomes the inverse of network latency yielding very low overall performance We strongly recommend that you perform all types of database activities from an Amazon EC2 instance located in the same VPC and Availability Zone as the database server For EC2Classic (non VPC) servers the client should be located in the same AWS Region and Availability Zone The reason we recommend that you launch client instances not only in the same AWS Region but also in the same VPC is that crossVPC traffic is treated as public and thus uses publicly routable IP addresses Because the traffic must travel through a public 
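The multi-row statement and transaction-size guidance in the table above can be sketched in a few lines. The example below assumes the PyMySQL client library and placeholder table, column, and host names; the batch sizes mirror the example values from the table (roughly 1 MB per statement, commit once per 1,000 statements) and should be tuned for your own row sizes.

import pymysql  # assumed client library; any MySQL-compatible driver works similarly

ROWS_PER_STATEMENT = 1000        # size this so each statement carries roughly 1 MB
STATEMENTS_PER_COMMIT = 1000     # mirrors the "commit once per 1000 statements" example

def bulk_insert(conn, rows):
    # rows is a list of (id, customer_id, payload) tuples; names are placeholders.
    sql = "INSERT INTO orders (id, customer_id, payload) VALUES (%s, %s, %s)"
    statements_since_commit = 0
    with conn.cursor() as cursor:
        for start in range(0, len(rows), ROWS_PER_STATEMENT):
            # With PyMySQL, executemany() batches a simple INSERT ... VALUES
            # statement into a single multi-row INSERT.
            cursor.executemany(sql, rows[start:start + ROWS_PER_STATEMENT])
            statements_since_commit += 1
            if statements_since_commit >= STATEMENTS_PER_COMMIT:
                conn.commit()
                statements_since_commit = 0
    conn.commit()

conn = pymysql.connect(host="aurora-cluster-endpoint", user="loader",
                       password="...", database="appdb", autocommit=False)
# bulk_insert(conn, rows_from_dump)

Disabling autocommit and committing per chunk avoids both the per-row COMMIT overhead and the very large single transactions discussed above.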
network segment the network path becomes longer resulting in higher communication latency ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 7 Client Capacity It is a common misconception that the specifications of client machines have little or no impact on export/import operations Although it is often true that resource utilization is higher on the server side it is still important to remember the following: On small client instances multithreaded exports and imports can become CPUbound especially if data is compressed or decompressed on the fly eg when the data stream is piped through a compression tool like gzip Multithreaded data migrations can consume substantial network and I/O bandwidth Choose the instance class and size and type of the underlying Amazon EBS storage volume carefully For more information see the Amazon EBS Volume Performance section in the Amazon EC2 User Guide 2 All operating systems provide diagnostic tools that can help you detect CPU network and I/O bottlenecks When investigating export/import performance issues we recommend that you use these tools and rule out clientside problems before digging deeper into server configuration Server Topics Serverside storage performance CPU power and network throughput are among the most important server characteristics affecting batch export/import operations Aurora supports pointandclick instance scaling that enables you to modify the compute and network capacity of your database cluster for the duration of the batch operations Storage Performance Aurora leverages a purposebuilt distributed storage layer designed to provide low latency and high throughput under multithreaded workloads You don't need to choose between storage volume types or provision storage specifically for export/import purposes ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 8 CPU Power Multithreaded exports/imports can become CPU bound when executed against smaller instance types We recommend using a server instance class with one CPU core per thread for exports and two CPU cores per thread for imports CPU capacity can be consumed efficiently only if the export/import is realized in multithreaded fashion Using an instance type with more CPU cores is unlikely to improve performance dump or import that is executed in a single thread Network Throughput Aurora does not use Amazon EBS volumes for storage As a result it is not constrained by the bandwidth of EBS network links or throughput limits of the EBS volumes However the theoretical peak I/O throughput of Aurora instances still depends on the instance class As a rule of thumb if you choose an instance class with enough CPU cores to handle the export/import (as discussed earlier) the instance should already offer adequate network performance Temporary Scaling In many cases export/import tasks can require significantly more compute capacity than day today database operations Thanks to the pointandclick compute scaling feature of Amazon RDS for MySQL and Aurora you can temporarily overprovision your instance and then scale it back down when you no longer need the additional capacity Note : Due to the benefits of the Aurora custom storage layer storage scaling is not needed before during or after exporting/imp orting data Tools With client and server machines located close to each other and sized adequately let ’s look at the different methods and tools you can use to actually move the data ArchivedAmazon Web 
Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 9 Percona XtraBackup Aurora supports migration from Percona XtraBackup files stored in Amazon S3 Migrating from backup files can be significantly faster than migrating from logical schema and data dumps using tools such as mysqldump Logical imports work by executing SQL commands to recreate the schema and data from your source database which carries considerable processing overhead However Percona XtraBackup files can be ingested directly into an Aurora storage volume which removes the additional SQL execution cost A migration from Percona XtraBackup files involves three main steps: 1 Using the innobackupex tool to create a backup of the source database 2 Copying the backup to Amazon S3 3 Restoring the backup through the AWS RDS console You can use this migration method for source servers using MySQL versions 55 and 56 For more information and stepbystep instructions for migrating from Percona XtraBackup files see the Amazon Relational Database Service User Guide 3 mysqldump The mysqldump tool is perhaps the most popular export/import tool for MySQLcompatible database engines The tool produces dumps in the form of SQL files that contain data definition language (DDL) data control language (DCL) and data manipulation language (DML) statements The statements carry information about data structures data access rules and the actual data respectively In the context of this whitepaper two types of statements are of interest: CREATE TABLE statements to create relevant table structures before data can be inserted ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 10 INSERT statements to populate tables with data Each INSERT typically contains data from multiple rows but the dataset for each table is essentially represented as a series of INSERT statements The mysqldump based approach introduces certain issues related to performance: When used against managed database servers such as Amazon RDS instances the tool’s functionality is limited Due to privilege restrictions it cannot dump data in multiple threads or produce flatfile dumps suitable for parallel loading The SQL files do not include any transaction control statements by default Consequently you have very little control over the size of database transactions used to import data This lack of control can lead to poor performance for example: o With autocommit mode enabled (default) each individual INSERT statement runs inside its own transaction The database must COMMIT frequently which increases the overall execution overhead o With autocommit mode disabled each table is populated using one massive transaction The approach removes COMMIT overhead but leads to side effects such as tablespace bloat and long rollback times if the import operation is interrupted Note: Work is in progress to provide a modern replacement for the legacy mysqldump tool The new tool called mysqlpump is expected to check most of the boxes on MySQL DBA’s performance checklist For more information see the MySQL Reference Manual 4 Flat Files As opposed to SQLformat dumps that contain data encapsulated in SQL statements flatfile dumps come with very little overhead The only control ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 11 characters are the delimiters used to separate individual rows and columns Files in commaseparated value (CSV) or tabseparated value (TSV) format are both examples of the 
flatfile approach Flat files are most commonly produced using: The SELECT … INTO OUTFILE statement which dumps table contents (but not table structure) into a file located in the server’s local file system mysqldump command with the tab parameter which also dumps table contents to a file and creates the relevant metadata files with CREATE TABLE statements The command uses SELECT … INTO OUTFILE internally so it also creates dump files on the server’s local file system Note : Due to privilege restrictions you cannot use the methods mentioned previously with managed database servers such as Amazon RDS However you can import flat files dumped from self managed servers into managed instances with no issues Flat files have two major benefits: The lack of SQL encapsulation results in much smaller dump files and removes SQL processing overhead during import Flat files are always created in fileper table fashion which makes it easy to import them in parallel Flat files also have their disadvantages For example the server would use a single transaction to import data from each dump file To have more control over the size of import transactions you need to manually split very large dump files into chunks and then import one chunk at a time ThirdParty Tools and Alternative Solutions The mydumper and myloader tools are two popular opensource MySQL export/import tools designed to address performance issues that are associated ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 12 with the legacy mysqldump program They operate on SQLformat dumps and offer advanced features such as: Dumping and loading data in multiple threads Creating dump files in fileper table fashion Creating chunked dumps that is multiple files per table Dumping data and metadata into separate files Ability to configure transaction size during import Ability to schedule dumps in regular intervals For more information about mydumper and myloader see the project home page5 Efficient exports and imports are possible even without the help of thirdparty tools With enough effort you can solve issues associated with SQLformat or flat file dumps manually as follows: Solve singlethreaded mode of operations in legacy tools by running multiple instances of the tool in parallel However this does not allow you to create consistent databasewide dumps without temporarily suspending database writes Control transaction size by manually splitting large dump files into smaller chunks Procedure Optimizations This section describes ways that you can handle some of the common export/import challenges ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 13 Choosing the Right Number of Threads for Multithreaded Operations As mentioned earlier a rule of thumb is to use one thread per server CPU core for exports and one thread per two CPU cores for imports For example you should use 16 concurrent threads to dump data from a 16core dbr34xlarge instance and 8 concurrent threads to import data into the same instance type Exporting and Importing Multiple Large Tables If the dataset is spread fairly evenly across multiple tables export/import operations are relatively easy to parallelize To achieve optimal performance follow these guidelines: Perform export and import operations using multiple parallel threads To achieve this use a modern export tool such as mydumper described in “ThirdParty Tools and Alternative Solutions ” Never use singlerow INSERT statements for batch 
imports Instead use multi row INSERT statements or import data from flat files Avoid using small transactions but also don’t let each transaction become too heavy As a rule of thumb split large dumps into 500MB chunks and import one chunk per transaction Exporting and Importing Individual Large Tables In many databases data is not distributed equally across tables It is not uncommon for the majority of the data set to be stored in just a few tables or even a single table In this case the common approach of one export/import thread per table can result in suboptimal performance This is because the total export/import time depends on the slowest thread which is the thread that is processing the largest table To mitigate this you must leverage multithreading at the table level The following ideas can help you achieve better performance in similar situations ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 14 Large Table Approach for Exports On the source server you can perform a multithreaded dump of table data using a custom export script or a modern thirdparty export tool such as mydumper described in “ThirdParty Tools and Alternative Solutions ” When using custom scripts you can leverage multithreading by exporting multiple ranges (segments) of rows in parallel For best results you can produce segments by dumping ranges of values in an indexed table column preferably the primary key For performance reasons you should not produce segments using pagination ( LIMIT … OFFSET clause) When using mydumper know that the tool uses multiple threads across multiple tables but it does not parallelize operations against individual tables To use multiple threads per table you must explicitly provide the rows parameter when invoking the mydumper tool as follows rows : Split table into chunks of this many rows default unlimited You can choose the parameter value so that the total size of each chunk doesn’t exceed 100 MB For example if the average row length in the table is 1 KB you can choose a chunk size of 100000 rows for the total chunk size of ~100 MB Large Table Approach for Imports Once the dump is completed you can import it into the target server using custom scripts or the myloader tool Note : Both mydumper and myloader default to using four parallel threads which may not be enough to achieve optimal performance on Aurora dbr32xlarge instances or larger You can change the default level of parallelism using the threads parameter ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 15 Splitting Dump Files into Chunks You can import data from flat files using a single data chunk (for small tables) or a contiguous sequence of data chunks (for larger tables) Use the following guidelines to decide how to split table dumps into multiple chunks: Avoid generating very small chunks (<1 MB) so that you can avoid protocol and transactional overhead Alternatively very large chunks can put unnecessary pressure on server resources without bringing performance benefits As a rule of thumb you might use a 500MB chunk size for large batch imports For partitioned InnoDB tables use one chunk per partition and don’t mix data from different partitions in one chunk If individual partitions are very large split partition data further using one of the following solutions For tables or table partitions with an autoincremented PRIMARY key: o If PRIMARY key values are provided in the dump it is good practice not to split data in a 
random fashion Instead use rangebased splitting so that each chunk contains monotonically increasing primary key values For example if a table has a PRIMARY key column called id data can be sorted by id in ascending order and then sliced into chunks This approach reduces page fragmentation and lock contention during import o If PRIMARY key values are not provided in the dump the engine generates them automatically for each inserted row In such cases you don't need to chunk the data in any particular way and you can choose the method that’s easiest for you to implement If the table or table partition has a PRIMARY or NOT NULL UNIQUE key that is not autoincremented split the data so that each chunk contains monotonically increasing key values for that PRIMARY or NOT NULL UNIQUE KEY as described previously ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 16 If the table does not have a PRIMARY or NOT NULL UNIQUE key the engine creates an implicit internal clustered index and fills it with monotonically increasing values regardless of how the input data is split For more information about InnoDB index types see the MySQL Reference Manual 6 Avoiding Secondary Index Maintenance Overhead CREATE TABLE statements found in a typical SQLformat dump include the definition of the table primary key and all secondary keys Consequently the cost of secondary index management has to be paid for every row inserted during the import You can observe the index management cost as a gradual decrease in import performance as the table grows The negative effects of index management overhead are particularly visible if the table is large or if there are multiple secondary indexes defined on it In extreme cases importing data into a table with secondary indexes can be several times slower than importing the same data into a table with no secondary indexes Unfortunately none of the tools mentioned in this document support builtin secondary index optimization You can however implement the optimization using this simple technique: Modify the dump files so that CREATE TABLE statements do not include secondary key or foreign key definitions Import data Recreate secondary and foreign keys using ALTER TABLE statements or third party online schema manipulation tools such as “pt onlineschema change” from Percona Toolkit When using ALTER TABLE: o Avoid using separate ALTER TABLE statements for each index Instead use one ALTER TABLE statement per table to recreate all indexes for that table in a single operation ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 17 o You may run multiple ALTER TABLE statements in parallel (one per table) to reduce the total time required to process all tables ALTER TABLE operations can consume a significant amount of temporary storage space depending on the table size and the number and type of indexes defined on the table Aurora instances use local (perinstance) temporary storage volumes If you observe that ALTER TABLE operations on large tables are failing to complete it can be due to lack of free space on the instan ce’s temporary volume If this occurs you can apply one of the following solutions: Scale the Aurora instance to a larger type If altering multiple tables in parallel reduce the number of ALTER statements running concurrently or try running only one ALTER at a time Consider using a thirdparty online schema manipulation tool such as ptonlineschemachange from Percona Toolkit To learn more 
about monitoring the local temporary storage on Aurora instances see the Amazon Relational Database Service User Guide 7 Reducing the Impact of LongRunning Data Dumps Data dumps are often performed from active database servers that are part of a missioncritical production environment If severe performance impact of a massive dump is not acceptable in your environment consider one of the following ideas: If the source server has replicas you can dump data from one of the replicas If the source server is covered by regular backup procedures: o Use backup data as input for the import process if backup format allows for that ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 18 o If backup format is not suitable for direct importing into the target database use the backup to provision a temporary database and dump data from it If neither replicas nor backups are available: o Perform dumps during offpeak hours when production traffic is at its lowest o Reduce the concurrency of dump operations so that the server has enough spare capacity to handle production traffic Conclusion This paper discussed important factors affecting the performance of self managed export/import operations in Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora: The location and sizing of client and server machines The ability to consume client and server resources efficiently which is mostly achieved through multithreading The ability to identify and avoid unnecessary overhead at all stages of the migration process We hope that the ideas and observations we provide will contribute to creating a better overall experience for data migrations in your MySQLcompatible database environments Contributors The following individuals and organizations contributed to this document: Szymon Komendera Database Engineer Amazon Web Services ArchivedAmazon Web Services – Best Practices for Migrating MySQL Databases to Amazon Aurora Page 19 1 http://docsawsamazoncom/AWSEC2/latest/UserGuide/enhanced networkinghtml 2 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSPerformanceh tml 3 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraMigrate MySQLhtml#AuroraMigrateMySQLS3 4 https://devmysqlcom/doc/refman/57/en/mysqlpumphtml 5 https://launchpadnet/mydumper/ 6 https://devmysqlcom/doc/refman/56/en/innodbindextypeshtml 7 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AuroraMonitor inghtml Notes
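As a companion to the "Splitting Dump Files into Chunks" section above, the following sketch splits a large flat-file dump into roughly 500 MB chunks on row boundaries so that each chunk can be imported in its own transaction. The file names are placeholder assumptions; if the dump is sorted by its primary key, sequential splitting like this also preserves the monotonically increasing key ranges recommended earlier.

CHUNK_BYTES = 500 * 1024 * 1024  # ~500 MB per chunk, per the rule of thumb above

def split_dump(path, prefix):
    part, written, out = 0, 0, None
    with open(path, "rb") as src:
        for line in src:
            # Start a new chunk file when the current one reaches the size limit.
            if out is None or written >= CHUNK_BYTES:
                if out:
                    out.close()
                part += 1
                out = open("{0}.{1:04d}.csv".format(prefix, part), "wb")
                written = 0
            out.write(line)
            written += len(line)
    if out:
        out.close()
    return part

split_dump("orders.csv", "orders.chunk")

Each resulting chunk can then be loaded in its own transaction (for example with LOAD DATA LOCAL INFILE), and chunks for different tables can be loaded in parallel.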
|
General
|
consultant
|
Best Practices
|
Best_Practices_for_Running_Oracle_Siebel_CRM_on_AWS
|
ArchivedBest Practices for Running Oracle Siebel CRM on AWS March 2018 This paper has been archived For the latest technical content about the AWS Clou d see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments cond itions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Benefits of Running Siebel Applications on AWS 1 Key Benefits of AWS versus On Premises 1 Key Benefits over SaaS 4 AWS Concepts and Services 5 Regions and Availability Zones 5 Amazon EC2 7 Amazon RDS 7 AWS DMS 7 Elastic Load Balancing 7 Amazon EBS 8 Amazon Machine Images 8 Amazon S3 8 Amazon Route 53 8 Amazon VPC 9 AWS Direct Connect 9 Siebel CRM Architecture on AWS and Deployment Best Practices 9 Traffic Distribution and Load Balancing 10 Scalability 11 Architecting for High Availability and Disaster Recovery 12 VPC and Connectivity Options 16 Securing Your Siebel Application on AWS 17 Siebel and Oracle Licensing on AWS 19 Siebel and Oracle Database License Portability 19 Amazon RDS for Oracle Licensing Models 20 Siebel on AWS Use Cases 20 Archived Monitoring Your Infrastructure 21 AWS and Oracle Support 22 AWS Support 22 Oracle Support 22 Conclusion 23 Contributors 23 Further Reading 23 Archived Abstract Oracle's Siebel Customer Relationship Management ( CRM ) is a widely used and popular CRM application This whitepaper is intended for AWS customers and partners who want to learn about the benefits and options for running Oracle Siebel CRM on AWS This whitepaper provides architectural guidance and outlines best practices for high availability security scalability performance and disaster recovery for running Oracle Siebe l CRM systems on AWS ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 1 Introduction Companies are increasingly adopting a “ cloud first mobile first ” strategy Migrating Oracle’s Siebel Customer Relationship Management (CRM) applications to a cloud platform is becoming a necessity This paper is intended to help you understand Amazon Web Services (AWS) and how to leverage AWS to run Oracle Siebel CRM applications The paper also discusses key benefits and best practices for running Oracle Siebel CRM workloads on AWS Benefits of R unning Siebel Applications on AWS Migrating your Siebel applications to AWS is relatively simple and straightforward However it’s important that you don’t view this as merely a physical to virtual conversion or as just a “ lift and shift ” migration Understanding and using the AWS services and capabilities will help you make the most of running your Siebel systems on AWS Key Benefits of AWS versus OnPremise s Migrating your on premises Siebel environment to AWS offers you the following benefits: • 
Eliminate long procurement cycles – Traditional deployment as shown in the following figure involves a long procurement process Each stage is time intensive and requir es large capital outlay and multiple approvals ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 2 Figure 1 : A typical IT procurement c ycle This process has to be repeated for the various environments for example development testing training break fix and production which compounds the costs and causes significant delays With AWS you can p rovision new infrastructure and Siebel environments in minutes compared to waiting weeks or months to procure and deploy traditional infrastructure • Have Moore’s law work for you instead of against you – In an onpremises environment you end up owning ha rdware that depreciates in value every year You ’re locked in to the price and capacity of the hardware once you acquire it and you have ongoing hardware support costs With AWS you can switch your underlying Siebel instances to newer AWS instance types as they become available • Right size anytime – Often customer s oversize environments for initial phases and then can’t cope with growth in later phases With AWS you can scale the usage up or down at any time You pay only for the computing capacity you use for the duration you use it You can change instance sizes in minutes through the AWS Management Console the AWS API or the AWS Command Line Interface (CLI) IT Procurement Cycle01Capacility Planning 02Capital Allocation 03 Provisioning04 Maintenance05Hardware RefreshArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 3 • Resilience and ability to keep recovering from multiple failures – Onpremise s failures have to be handled on a case bycase basis Failed parts have to be procured and replaced Key components such as the Siebel gateway name server have to be clustered using expensive clustering software D eployment is still limited by the ability to handle only one failure in the primary gateway With AWS clustering of the Siebel gateway is n’t required The gateway can recover from multiple failures using the instance recovery feature of Amazon EC2 • Disa ster recovery – You can build extremely low cost standby disaster recovery ( DR) environments for existing deployments and incur costs only for the duration of the outage • Lower total cost of ownership ( TCO ) – Siebel c ustomers with on premise s data centers typically pay hardware support costs virtualization licensing and support costs data center costs etc You can eliminate or reduce a ll of these by moving to AWS Y ou benefit from the economies of scale and efficiencies that AWS provide s and pay for only the compute storage and other resources you use • Ability to test application performance – Performance testing is recommended before any major change to a Siebel environment However m ost customers performance test their Siebel CRM application s only during the initial launch on the yet tobedeployed production hardware Later releases are usually never performance tested due to the expense and lack of the environment required for performance testing With AWS you can minimize the risk of discovering performance issues later in production An AWS Cloud environment can be created easily and quickly just for the duration of the performance test and used only when needed Again you ’re charged only for the hours the environment is used • No endoflife (EOL) for hardware /platform – All hardware platforms have EOL dates 
when the existing hardware is no longer supported and you are forced to buy new hardware In the AWS C loud you can simply upgrade your Amazon EC2 instances to new AWS instance types in a single click at no cost for the upgrade • No need for clustering – The Siebel gateway name server is a single point of failure On premise s implementations require you to cluster the gateway Clustering is complicated and expensive to implement With AWS no clustering is needed In addition you can automatically recover ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 4 a failed gateway name server using the instance recovery feature of Amazon EC2 • Unlimited environments – Customers with o npremise s data centers face the issue of limited environments For example a test environment will have a newer release compared to a production environment This means that if a performance issue is found in production you have no way to suddenly provision a performance debugging environment On AWS you can do this easily Key Benefits over S aaS The following are some of the benefits of deploying Siebel CRM on AWS compared to moving to a CRM offering based on a Software asaService ( SaaS ) model : • Lower total cost of ownership (TCO) – Existing Siebel customers don’t have to purchase new CRM licenses or risk a reimplementation of their CRM —they can just move their existing Siebel CRM implementation to AWS For new customers the TCO is still low because they don’t have to pay monthly SaaS license fees Siebel is a proven CRM with rich verticals • Unlimited usage – SaaS applications have governor/platform limits to accommodate underlying multi tenant architecture Governor limits restrict usage including the number of API calls transactio n times datasets and file sizes With AWS you can self provision and use as much or as little capacity as you need You pay only for what you use • Multi tenant v ersus elastic architecture – SaaS products typically use a multi tenant architecture which ties you to a specific instance and the limits of that instance With AWS you have complete control over the computing capacity you provision —you can provision as much or as little as you need • Single application management – With Siebel CRM you can manage everything —including marketing sales service CPQ and order s—in one application On SaaS this requires multiple applications that you have to buy and integrate The cost of integration with SaaS applications is easy to overlook in the buy deci sion but these costs can add up significantly later ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 5 AWS Concept s and Services In this section we introduce you to some AWS concepts and s ervices that help you understand how Siebel CRM is deployed on AWS Regions and Availability Zones The AWS Cloud infrastructure is built around AWS Regions and Availability Zones AWS Regions provide multiple physically separated and isolated Availability Zones that are connected with low latency high throughput and highly redundant networking Avail ability Zones consist of one or more discrete data centers each with redundant power networking and connectivity and housed in separate facilities These Availability Zones enable you to operate production applications and databases that are more highl y available fault tolerant and scalable than is possible from a single data center Each r egion is a separate geographic area isolated from the other regions Regions enable you to place resources such 
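As a small illustration of placing the Siebel web tier behind a load balancer, the sketch below uses the current boto3 SDK to create a target group and register web server instances with it. The use of an Application Load Balancer target group, the VPC and instance IDs, and the health check path are illustrative assumptions, not values prescribed by this paper.

import boto3

elbv2 = boto3.client("elbv2")

# Target group for the Siebel web server tier; IDs and paths are placeholders.
response = elbv2.create_target_group(
    Name="siebel-web",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/siebel/app/callcenter/enu",
)
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]

# Register the EC2 instances running the Siebel web tier.
elbv2.register_targets(
    TargetGroupArn=target_group_arn,
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaaa"}, {"Id": "i-0bbbbbbbbbbbbbbbb"}],
)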
as Amazon EC2 instances and data in multiple locat ions Resources aren’t replicated across regions unless you do so specifically An AWS account provides multiple regions so that you can launch your application in locations that meet your requirements For example you might want to launch in Europe to be closer to your European customers or to meet legal requirements The following diagram illustrates the relationship between r egions and Availability Zones Figure 2: Relationship between AWS Regions and Availability Zones ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 6 The following figure shows the regions and the number of Availability Zones in each region provided by an AWS account at the time of this publication For the most current list of regions and Availability Zones see https://awsamazoncom/about aws/global infrastructure/ Figure 3: Map of AWS Regions and Availability Zones ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 7 Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud billed by the hour You can run virtual machines with various compute and memory capacities You have a choice of operating systems including different versions of Windows Server and Linux Amazon RDS Amazon Relation al Database Service (Amazon RDS) makes it easy to se t up operate and scale a relational database in the cloud It provides cost efficient and resizable capacity while managing time consuming database administration tasks This allows you to focus on your applications and business For Siebel both Micros oft SQL Server and Oracle databases are available AWS DMS AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely AWS DMS can also be used for continuous data rep lication with high availability and supports mos t widely used commercial and open source databases like Oracle SQL Server PostgreSQL and SAP ASE Elastic Load Balancing Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud It enables you to achieve greater levels of fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to distribute application traffic You can use ELB to load balance web server traffic ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 8 Amazon EBS Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with EC2 instances in the AWS Cloud Each EBS volume is automatically replicated within its Availability Zone to protect you from component failure offeri ng high availability and durability EBS volumes offer the consistent and low latency performance needed to run your workloads Amazon Machine Images An Amazon Machine Image (AMI) is simply a packaged up environment that includes all the necessary bits to set up and boot your EC2 instance AMIs are your unit of deployment Amazon EC2 uses Amazon EBS and Amazon S3 to provide reliable scalable storage of your AMIs so that AWS can boot them when you ask AWS to do so Amazon S3 Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure durable and highly scalable object storage Amazon S3 is easy to use with a simple web service interface to store and retrieve any amount of data from anywhere on the web With Amazon S3 you pay only for the storage you actually use 
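As a small, hedged illustration of how Siebel operational files can be kept in Amazon S3 (for example, the siebns.dat backup recommended later in this paper), the following sketch uses the AWS SDK for Python (Boto3) to upload and retrieve an object. The bucket name and file paths are placeholders, not part of any Siebel or AWS default.

```python
import boto3

s3 = boto3.client("s3")

# Back up the gateway configuration file to S3 (bucket and paths are examples only).
s3.upload_file("/siebel/gtwysrvr/ADMIN/siebns.dat",
               "example-siebel-backups",
               "gateway/siebns.dat")

# Restore the latest copy, for example after a gateway instance is recovered.
s3.download_file("example-siebel-backups",
                 "gateway/siebns.dat",
                 "/siebel/gtwysrvr/ADMIN/siebns.dat")
```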
There is no minimum fee and no setup cost Amazon Route 53 Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service It ’s designed to give developers and businesses an extremely reliable and cost effective way to route end users to internet applications ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 9 Amazon VPC Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define You have complete control over your virtual networking environment including selection of your own private IP address range creation of subnets and configuration of route tables and network gateways You can levera ge multiple layers of security including security groups and network access control lists to help control access to EC2 instances in each subnet Additionally you can create a hardware virtual private network (VPN) connection between your corporate data center and your VPC and use the AWS Cloud as an extension of your corporate data center AWS Direct Connect AWS Direct Connect is a network service that provides an alternative to using the internet to utilize AWS Cloud services Using Direct Connect y ou can establish private dedicated network connectivity between AWS and your data center office or colocation environment In many cases this can reduce your network costs increase bandwidth throughput and provide a more consistent network experience t han internet based connections Siebel CRM Architecture on AWS and Deployment Best Practices The following architecture diagram illustrates how you can deploy Oracle Siebel CRM on AWS Three required components of your Siebel CRM application (the Siebel gateway name server Siebel a pplication server and Siebel web server) can be deployed to multiple EC2 instances behind an Elastic Load Balancing load balancer The fourth required Siebel component (the Siebel d atabase) can be set up on Amazon RDS for Oracle You can deploy your Siebel web application and gateway name servers and the Siebel database across multiple Availability Zones for high availability of your application ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 10 Figure 4: Architecture for deploying Siebel CRM on AW S The following sections describe the elements of this architecture in detail Traffic Distribution and Load Balancing Amazon Route 53 DNS is used to direct users to Siebel CRM hosted on AWS Elastic Load Balancing (ELB) is used to distribute incoming application traffic across the Siebel web servers deployed in multiple Availability Zones The load balancer serves as a single point of contact for client request s which enables you to increase the availability of your application You can add and remov e Siebel web server instances from your load balancer as your needs change without disrupting the overall flow of information ELB ensures that only healthy Siebel web server instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances If a Siebel web server instance fails ELB automatically reroutes the traffic to the ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 11 remaining running Siebel web server instances If a failed Siebel web server instance is restored ELB restores the traffic to that instance Scalability When using AWS you can scale your application easily because of the 
elastic nature of the cloud. You can scale up the Siebel web and application servers simply by changing the instance type to a larger instance type. For example, you can start with an r4.large instance with 2 vCPUs and 15 GiB of RAM and scale up all the way to an x1e.32xlarge instance with 128 vCPUs and 3,904 GiB of RAM. After selecting a new instance type, you only need a restart for the change to take effect. Typically, the resizing operation is completed in a few minutes; the Amazon EBS volumes remain attached to the instances, and no data migration is required.

For your Siebel database deployed on Amazon RDS, you can scale the compute and storage resources independently. You can scale up the compute resources simply by changing the DB instance class to a larger one. This modification typically takes only a few minutes, and the database will be temporarily unavailable during this period. You can increase the storage capacity and IOPS provisioned for your database without any impact on database availability.

You can scale out the web and application tier by adding and configuring more instances when you need them. The Siebel gateway name server keeps track of available application and web servers; these are registered with the Siebel gateway name server when the Siebel application server or Siebel web server is installed. To meet extra capacity requirements, additional instances of Siebel web servers and application servers should be preinstalled and configured on EC2 instances. These "standby" instances can be shut down until extra capacity is required. You don't incur instance charges when instances are shut down; you incur only Amazon Elastic Block Store (Amazon EBS) storage charges. At the time of this publication, EBS General Purpose volumes are priced at $0.10 per GB per month in the US East (Ohio) Region. Therefore, for an instance with 120 GB of disk space, the storage charge is only $12 per month. These preinstalled standby instances give you the flexibility to meet additional capacity needs as and when you need them.
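To illustrate the scale-up path described above, here is a minimal sketch using the AWS SDK for Python (Boto3). The instance ID and the target instance type are placeholders; the same change can be made through the AWS Management Console or the AWS CLI, and for EBS-backed instances it involves the stop/start cycle shown here.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder ID of a Siebel application server

# Stop the instance, change its type in place, and start it again.
# The attached EBS volumes are preserved, so no data migration is needed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "r4.2xlarge"})

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```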
Architecting for High Availability and Disaster Recovery

In this section, we discuss best practices and options for deploying Siebel CRM on AWS for high availability of your Siebel application and for disaster recovery.

Multi-Availability Zone Deployment for High Availability of a Siebel Database on Amazon RDS

As described earlier, each Availability Zone is isolated from other zones and runs on its own physically distinct, independent infrastructure. The likelihood of two Availability Zones experiencing a failure at the same time is very small. Like the Siebel web and application servers, you can deploy the Siebel database on Amazon RDS in a Multi-AZ configuration. Multi-AZ deployments provide enhanced availability and durability for Amazon RDS DB instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby instance. Because the endpoint for your DB instance remains the same after a failover, your application can resume database operations as soon as the failover is complete, without manual administrative intervention. To learn how to set up Amazon RDS for Oracle as the database backend of your Siebel CRM application, see this documentation.1

Configuring the Siebel Gateway Name Server for High Availability

With bare metal implementations, you can deploy Siebel gateway name servers in an active/passive cluster to ensure availability in case of underlying host failure. When deploying on AWS, you have several options for configuring Siebel gateway name servers to ensure high availability.

You can use the EC2 automatic instance recovery feature to recover the Siebel gateway if the underlying host fails. Instance recovery performs several system status checks of the Siebel gateway name server instance and the other components that need to be running for the instance to function as expected. Among other things, instance recovery checks for loss of network connectivity, loss of system power, and software and hardware issues on the physical host. If a system status check of the underlying hardware fails, the instance will be rebooted (on new hardware if necessary) but will retain its instance ID, IP address, Elastic IP addresses, EBS volume attachments, and other configuration details.

Another option is to put the Siebel gateway name servers in an Auto Scaling group that spans multiple Availability Zones and set the minimum and maximum size of the group to one. Auto Scaling ensures that an instance of the Siebel gateway name server is running in the selected Availability Zones. This solution ensures high availability of the Siebel gateway name server in the unlikely event of an Availability Zone failure.

Note: You should back up the siebns.dat configuration file to Amazon S3 before and after making any configuration changes, especially when creating new component definitions and adding or deleting Siebel servers. When the Siebel gateway name server is restored after a failure, it should update itself with the latest copy of siebns.dat from Amazon S3.

You don't have to buy additional software or run additional passive instances when you use instance recovery or a fixed-size Auto Scaling group for high availability.

Finally, you can configure high availability clusters of the Siebel gateway name servers. There are several third-party products, such as SIOS2 and SoftNAS3, that offer a shared storage solution on AWS for clustering the Siebel gateway name servers.
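For the fixed-size Auto Scaling group option described above, the following is a hedged Boto3 sketch, not a prescribed configuration. The group name, launch template name, and subnet IDs are placeholders; the launch template is assumed to point to an AMI that already has the Siebel gateway name server installed and a boot script that restores the latest siebns.dat from Amazon S3.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# A fixed-size (1/1/1) group keeps exactly one gateway name server running and
# relaunches it, in another Availability Zone if necessary, when it fails.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="siebel-gateway-asg",                # placeholder name
    LaunchTemplate={
        "LaunchTemplateName": "siebel-gateway-lt",            # assumed to exist
        "Version": "$Latest",
    },
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",      # subnets in two AZs
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
)
```

Because the minimum, maximum, and desired capacity are all set to one, Auto Scaling replaces a failed gateway instance rather than scaling out.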
Multi-Region Deployment for Disaster Recovery

Although a single AWS Region architecture with Multi-AZ deployment might suffice for most use cases, you might want to consider a multi-region deployment for disaster recovery (DR), depending on business requirements. For example, you might have regulatory requirements or a business policy that mandates that the DR site be located a certain distance away from the primary site. Cross-region deployments for DR should be designed and validated for specific use cases based on your uptime needs and budget. The following diagram shows a typical Siebel deployment across regions that addresses both high availability and DR requirements.

Figure 5: Multi-region deployment of Siebel on Amazon RDS for Oracle

In this scenario, users are directed to the Siebel application server in the primary region using Amazon Route 53. If the primary region is unavailable due to a disaster, failover is initiated and users are redirected to the Siebel application server deployed in the DR region. The primary database is deployed on Amazon RDS for Oracle in a Multi-AZ configuration. AWS DMS is used to replicate the data from the RDS DB instance in the primary region to another RDS DB instance in the DR region.

Note: AWS DMS replicates only the data, not database schema changes. Schema changes made to the RDS DB instance in the primary region should be applied separately to the RDS DB instance in the DR region. You can do this while updating the applications in the DR region.
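To make the cross-region replication step concrete, here is a minimal Boto3 sketch of an AWS DMS task that performs a full load followed by ongoing change data capture. The endpoint and replication instance ARNs, the Region names, and the SIEBEL schema name are placeholders for resources that would be created beforehand and adjusted for a real deployment.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")   # primary Region (example)

# Replicate the Siebel schema continuously (full load plus change data capture)
# from the primary RDS instance to the instance in the DR Region.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-siebel-schema",
        "object-locator": {"schema-name": "SIEBEL", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="siebel-dr-replication",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",      # placeholder
    TargetEndpointArn="arn:aws:dms:us-west-2:111122223333:endpoint:TGT",      # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE", # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```

As noted above, schema changes still have to be applied to the DR database separately; AWS DMS moves only the data.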
Multi-Region Deployment of Siebel on Oracle Running on Amazon EC2 Instances

Although Amazon RDS for Oracle is the recommended option for deploying the Siebel database, there could be scenarios where Amazon RDS might not be suitable, for example, in the unlikely case that the database size is close to or greater than the Amazon RDS for Oracle storage limit. In such scenarios, you can install the Siebel database on Oracle on EC2 instances and configure Oracle Data Guard replication for high availability and DR, as shown in the following figure.

Figure 6: Multi-region deployment of Siebel on Oracle on Amazon EC2

In this DR scenario, the database is deployed on Oracle running on EC2 instances. Oracle Data Guard replication is configured between the primary database and two standby databases. One of the two standby databases is "local" (for synchronous replication), in another Availability Zone in the primary region. The other is a "remote" standby database (for asynchronous replication), in the DR region. If the primary database fails, the local standby database is promoted as the primary database and the Siebel application server connects to it. In the extremely unlikely event of a region failure or unavailability, the remote standby database is promoted as the primary database and users are redirected to the Siebel application server in the DR region using Route 53. For more details on deploying Oracle Database with Data Guard replication on AWS, see the Oracle Database on the AWS Cloud Quick Start.4 Refer to this AWS whitepaper to learn more about using AWS for disaster recovery.5

Using AWS as a DR Site for an On-Premises Production Environment

You can also deploy a DR environment on AWS for your Siebel applications running in an on-premises production environment. If the production environment fails, a failover is initiated and users are redirected to the Siebel application server deployed on AWS. The process is fairly simple and involves the following major steps:

• Setting up connectivity between your on-premises data center and AWS using a VPN connection or AWS Direct Connect
• Installing Siebel web, application, and gateway name servers on AWS
• Backing up siebns.dat to an Amazon S3 bucket
• Installing the standby database on AWS and configuring Oracle Data Guard replication, or replication using AWS DMS, between the on-premises production database and the standby database on AWS

In this scenario, if the on-premises production environment fails, you can initiate a failover and redirect users to the Siebel application server on AWS.

VPC and Connectivity Options

Amazon VPC lets you provision a secure, private, isolated section of the AWS Cloud where you can launch AWS resources in a virtual network using IP address ranges that you define. Amazon VPC provides you with several options for securely connecting your AWS virtual networks with other remote networks. (Network security is discussed in greater detail in the section Amazon VPC and Network Security.)

If users access the Siebel application primarily from an office or on premises (for example, a call center), you can use a hardware IPsec VPN connection or AWS Direct Connect to connect your on-premises network and Amazon VPC. If users access the Siebel application from outside the office (for example, a sales rep or customer accessing Siebel from the field or from home), you can use a software appliance-based VPN connection over the internet. For detailed information about the various connectivity options, see the whitepaper Amazon Virtual Private Cloud Connectivity Options.6

Securing Your Siebel Application on AWS

The AWS infrastructure is architected to provide an extremely scalable, highly reliable platform that enables you to deploy applications and data quickly and securely. Security in the cloud is slightly different from security in your on-premises data centers. When you move computer systems and data to the cloud, security responsibilities become shared between you and your cloud service provider. In this case, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for securing the workloads you deploy in AWS. This shared security responsibility model can reduce your operational burden in many ways. It also gives you the flexibility to implement the most applicable security controls for your business functions in the AWS environment.

Figure 7: AWS shared responsibility model

We recommend that you take advantage of the various security features that AWS offers when deploying Siebel CRM on AWS. You can use the following AWS security features to control and monitor access to the infrastructure components of your Siebel deployment (for example, OS-level access to your Siebel application servers, network-level security, and limiting access to AWS services such as Amazon EC2, Amazon RDS, and Amazon S3). The Siebel application security architecture (user authentication, authorization, field-level encryption, and so on) does not change when you deploy your Siebel application on AWS; you configure and manage it the same way as you would on premises.7

IAM

When you deploy your Siebel application on AWS, you can use AWS Identity and Access Management (IAM) to control access to the AWS environment in which your Siebel servers are deployed. With IAM, you can centrally manage users and security credentials (such as passwords, access keys, and permissions policies) that control which AWS services and resources users can access. IAM supports multi-factor authentication for privileged accounts, including options for hardware-based authenticators, and supports integration and federation with corporate directories to reduce administrative overhead and improve the end user experience.
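As a hedged illustration of how such access control might be expressed, the following Boto3 sketch creates a read-only policy for Siebel operations staff covering the EC2, RDS, and CloudWatch resources of the deployment. The policy name and the exact action list are assumptions for this example rather than a prescribed baseline; most deployments would scope the Resource element more tightly and attach the policy to an IAM group or role.

```python
import json
import boto3

iam = boto3.client("iam")

# Read-only visibility into the compute, database, and monitoring resources of
# the Siebel environment; no permission to modify anything.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:Describe*",
            "rds:Describe*",
            "cloudwatch:ListMetrics",
            "cloudwatch:GetMetricData",
        ],
        "Resource": "*",
    }],
}

iam.create_policy(
    PolicyName="SiebelOperatorReadOnly",          # illustrative name
    PolicyDocument=json.dumps(policy_document),
)
```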
Monitoring and Logging

You can use AWS CloudTrail for resource change tracking and compliance auditing of the AWS infrastructure components of your Siebel environment (such as Amazon EC2, Amazon RDS, and Amazon S3). For Siebel application-level auditing, you can continue to use the Siebel Audit Trail feature.8

AWS CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. This provides deep visibility into API calls, including who, what, when, and from where calls were made. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

Amazon VPC and Network Security

Amazon VPC enables you to provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. It offers you an IPsec VPN device to provide an encrypted tunnel between the Amazon VPC and your data center. You create one or more subnets within each Amazon VPC, and each instance launched in the Amazon VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and ARP spoofing, are blocked.

You can configure network access control lists (network ACLs), which are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within Amazon VPC. These network ACLs can contain ordered rules to allow or deny traffic based on IP protocol, by service port, as well as by source/destination IP address. Security groups are a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. Traffic can be restricted by any IP protocol, by service port, and by source/destination IP address (individual IP or CIDR block).

Data Encryption

AWS offers you the ability to add a layer of security to your data at rest in the cloud by providing scalable and efficient encryption features. Data encryption capabilities are available in AWS storage services such as Amazon EBS, Amazon S3, and Amazon Glacier, and in database services such as Amazon RDS for Oracle and Amazon RDS for SQL Server for use with the Siebel database. You can choose whether to have AWS manage your encryption keys using AWS Key Management Service (AWS KMS), or you can maintain complete control over your keys. Dedicated hardware-based cryptographic key storage options (AWS CloudHSM) are available to help you satisfy compliance requirements. For more information on AWS security, see the Introduction to AWS Security9 and AWS Security Best Practices10 whitepapers.

Siebel and Oracle Licensing on AWS

In this section, we briefly discuss Siebel CRM and Oracle Database license portability and the Amazon RDS for Oracle licensing models.

Siebel and Oracle Database License Portability

Most Oracle software licenses are fully portable to AWS, including the Enterprise License Agreement (ELA), Unlimited License Agreement (ULA), Business Process Outsourcing (BPO), and Oracle Partner Network (OPN). You can use your existing Siebel license and Oracle Database licenses on AWS. However, you should consult your own Oracle license agreement for specific information.

Amazon RDS for Oracle Licensing Models

You can deploy your Siebel CRM applications on Amazon RDS for Oracle under two different licensing models: "License Included" and "Bring Your Own License" (BYOL). In the License Included service model (available only for Oracle Standard Edition One and Oracle Standard Edition Two), you don't need to separately purchase Oracle licenses; the Oracle Database software has been licensed by AWS. If you already own Oracle Database licenses, you can use the BYOL
model to run Oracle databases on Amazon RDS The BYOL model is designed for customers who prefer to use existing Oracle database licenses or purchase new licenses directly from Oracle Siebel on AWS Use Cases The following are some of the common use cases for Siebel on AWS: • Migrate existing Siebel environments to AWS – This is most suitable if you are on a recent release of Siebel You should design your AWS deployments based on the best practices in this whitepaper For migrating large databases to Amazon RDS within a small downtime window we recommend that you take a point intime export of your database transfer it to AWS import it into Amazon RDS and then apply the delta changes from on premises You can use AWS Direct Connect or AWS Snowball to transfer the export dump to AWS You can use AWS DMS to apply the delta changes and sync the on premises database with the Amazon RDS instance • Siebel upgrade – You can leverage AWS as the upgrade environment to keep the costs of upgrade to a minimum You can either use this new environment only for test and development or you can migrate your entire Siebel environment to AWS Either way you can reduce your overall TCO • Performance testing – Most customers only do performance testing for Siebel changes either on initial implementation or when they have Siebel upgrades to put in place Performance testing for customer ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 21 enhancements is almost never continually done AWS enables you to do performance testing at minimal cost because you are only charged for the resources you use when you us e them This minimal cost enable s more realistic testing both for Siebel upgrades and for your own enhancements You can budget for this on an annual basis depending on your needs for example when Siebel repository file ( SRF ) changes are put in place With additional real world testing of your own planned changes or enhancements you can reduce performance issues and avoid business critical downtimes • Siebel test and development environments on AWS – You might want to set up test and d evelopment environment s on AWS just to try AWS or if the move of the production environment is n’t urgent • Disaster recovery on AWS – You might want to set up a DR environment for your existing Siebel CRM on AWS This can be done at a much lower cost than setting up traditi onal DR Monitoring Your Infrastructure You can continue to use the existing tools that you are familiar with for monitoring your Siebel application such as the Siebel Web Server Extension (SWSE) statistics page the Server Manager GUI or the Server Manager (srvrmgr) command line interface Optionally you can use Oracle Enterprise Manager to monitor your Siebel environment by installing the Oracle Enterprise Manage r Plug in for Oracle Siebel You can also use Amazon CloudWatch to monit or AWS C loud resources and the applications you’re running on AWS Amazon CloudWatch enables you to monitor your AWS resources in near real time including EC2 instances EBS volumes load balancers and RDS DB instances Metrics such as CPU utilization l atency and request counts are provided automatically for these AWS resources You can also supply your own logs or custom application and system metrics such as memory usage transaction volumes or error rates Amazon CloudWatch will monitor these also You can use the Enhanced Monitoring feature of Amazon RDS to monitor your Siebel database Enhanced Monitoring gives you access to over 50 metrics including 
CPU memory file system and disk I/O You can also view the processes running on the DB instance and their related metrics including percentage of CPU usage and memory usage ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 22 AWS and Oracle Support In this section we discuss the support model when you deploy your Siebel CRM applications on AWS AWS Support AWS Support is a one onone fast response support channel that is staffed around the clock with technical support engineers and experienced customer service professionals who can help you get the most from the products and features provided by AWS 11 All AWS Support tiers offer an unlimited number of support cases with pay by themonth pricing and no long term contracts The four tiers provide developers and businesses the flexibility to choose the supp ort tiers that meet their specific needs AWS Support Business and Enterprise levels include support for common operating systems and common application stack components AWS Support engineers can assist with the setup configuration and troubleshooting of certain third party platforms and applications including Red Hat Enterprise Linux SUSE Linux Windows S erver 2008 Windows Server 2012 Windows Server 2016 Open VPN RRAS etc Oracle Support Siebel CRM versions 150 and 160 are certified to run on A WS Oracle’s certification details for Siebel on AWS are available in the certification section of the Oracle Support site 12 You can use the existing licenses for Siebel a pplications that you had with your onpremises implementations You will have the same level of Oracle Support that you had with your onpremise s implementation Oracle’s only requirement for Infrastructure as a Service (IaaS ) clouds is that you use platforms and databases that are certifie d with Siebel Certified versions of both Siebel and platform s and database s are documented on the Oracle support site You can submit issues in the same manner and provide information about your environments as before When you contact Oracle Support t he fact that you are ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 23 running your Siebel CRM application in the cloud might not even enter the discussion because there is nothing unique about using IaaS that would require any change to the application This is the same approach for virtualization technology that Oracle S upport has followed with Siebel for many years Escalations would continue to go through the customer support site Conclusion By deploying Siebel i n the AWS C loud you can simultaneously reduce cost and enable capabilities that might not be possible or cost effective if deployed in an onpremises data center Some benefits of deploying Siebel on AWS include: • Low cost —resources are billed by the hour and only for the duration they are used • Changing from CapEx to OpEx eliminates the need for a large capital layout • Higher availability of 99 99% by deploying Siebel in a Multi AZ configuration • Flexibility to add capacity elastically to cope with demand This enables you to perform application upgrades faster • Flexibility to add envir onments and use them for short durations such as for performance testing and training Contributors The following individuals and organizations contributed to this document: • Ashok Sundaram Solutions Architect Amazon Web Services • Yoav Eilat Sr Product Marketing Manager Amazon Web Services • Mark Farrier Director Product Management – Siebel CRM Oracle • Milind Waikul CEO 
Enterprise Beacon Inc Further Reading For additional information see the following sources: ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 24 • Test drive Siebel running on Amazon EC 2 and Amazon RDS http://wwwenterprisebeaconcom/testdriveshtml • Amazon EC2 https://awsamazoncom/ec2/ • Amazon RDS https://awsamazoncom/rds/ • Amazon CloudWatch https://awsamazoncom/cloudwatch/ • AWS DMS https://awsamazoncom/dms/ • Elastic Load balancing https://awsamazoncom/elasticloadbalancing/ • Amazon EBS https://awsamazoncom/ebs/ • Amazon S3 https://awsam azoncom/s3/ • Amazon Route 53 https://awsamazoncom/route53/ • Amazon VPC https://awsamazoncom/vpc/ • AWS Direct Connect https://awsamazoncom/directconnect/ • AWS CloudTrail https://awsamazoncom/cloudtrail/ • AWS CloudHSM https://awsamazoncom/cloudhsm/ • Amazon Glacier https://awsamazoncom/glacier/ • AWS KMS https://awsamazoncom/kms/ • AWS Cost Estimator http ://calculators3amazonawscom/indexhtml ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 25 • AWS Trusted Advisor https://awsamazoncom/premiumsupport/trustedadvisor/ • Oracle cloud licensing http://wwworaclecom/us/corporate/pricing/cloud licensing 070579pdf • Oracle Processor Core Factor Table http://wwwo raclecom/us/corporate/contracts/processorcore factor table 070634pdf • Amazon EC2 virtual cores by instance type https://awsamazoncom/ec2/virtualcores/ • Oracle Database on the AWS Cloud Quick Start (with Data Guard replication) https://s3amazonawscom/quickstart reference/oracle/database/latest/doc/oracle database ontheaws cloudpdf ArchivedAmazon Web Services – Best Practices for Running Oracle Siebel CRM on AWS Page 26 1 http://docsawsamazoncom/AmazonRDS/latest/UserGuide/OracleResourc esSiebelhtml 2 http://ussioscom/clustersyourway/products/windows/datakeeper cluster 3 https://awsamazoncom/whitepapers/softnas architecture onaws/ 4 https://s3amazonawscom/quickstart reference/oracle/database/latest/doc/oracle database ontheawscloudpdf 5 https://d0awsstaticcom/whitepapers/aws disaster recoverypdf 6 https://d0awsstaticcom/whitepapers/aws amazon vpcconnectivity optionspdf 7 https://docsoraclecom/cd/E74890_01/books/Secur/secur_aboutsec005ht m 8 https://docsoraclecom/cd/E74890_01/books/AppsAdmin/AppsAdminAudi tTrail2html 9 http://d0awsstaticcom/whitepapers/Security/Intro_to_AWS_Securitypdf 10 https://d0awsstaticcom/whitepapers/Security/AWS_Security_Best_Practic espdf 11 https://awsamazoncom/premiumsupport/ 12 http://supportoraclecom/ Notes
|
General
|
consultant
|
Best Practices
|
Big_Data_Analytics_Options_on_AWS
|
ArchivedBig Data Analytics Options on AWS December 2018 This paper has been archived For the latest technical information see https://docsawsamazoncom/whitepapers/latest/bigdata analyticsoptions/welcomehtmlArchived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or l icensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 5 The AWS Advantage in Big Data Analytics 5 Amazon Kinesis 7 AWS Lambda 11 Amazon EMR 14 AWS Glue 20 Amazon Machine Learning 22 Amazon DynamoDB 25 Amazon Redshift 29 Amazon Elasticsearch Service 33 Amazon QuickSight 37 Amazon EC2 40 Amazon Athena 42 Solving Big Data Problems on AWS 45 Example 1: Queries against an Amazon S3 Data Lake 47 Example 2: Capturing and Analyzing Sensor Data 49 Example 3: Sentiment Analysis of Social Media 52 Conclusion 54 Contributors 55 Further Reading 55 Document Rev isions 56 Archived Abstract This whitepaper helps architects data scientists and developers understand the big data analytics options available in the AWS cloud by providing an overview of services with the following information: • Ideal usage patterns • Cost model • Performance • Durability and availability • Scalability and elasticity • Interfaces • Anti patterns This paper concludes with scenarios that showcase the an alytics options in use as well as additional resources for getting started with big data analytics on AWS ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 5 of 56 Introduction As we become a more digital society the amount of data being created and collected is growing and accelerating significantly Analysis of this ever growing data becomes a challenge with traditional analytical tools We require innovation to bridge the gap between data being generated and data that can be analyzed effectively Big data tools and technologies offer opportunities and challenges in being able to analyze data efficiently to better understand customer preferences gain a competitive advantage in the marketplace and grow your business Data management architectures have evolved from the traditional data warehousing model to more complex architectures that address more requirements such as realtime and batch processing; structured and unstructured data; high velocity transactions; and so on Amazon Web Services (AWS) provides a broad platform of managed services to help you build secure and seamlessly scale endtoend big data applications quickly and with ease Whether your applications require realtime streaming or batch data processing AWS provides the infrastructure and tools to tackle your next big data project No hardware to procure no infrastructure to maintain and scale —only what you need to collect store process and analyze big data AWS has an ecosystem of analytical solutions specifically 
designed to handle this growing amount of data and provide insight into your business The AWS Advantage in Big Data Analytics Analyzing large data sets requires significant compute capacity that can vary in size based on the amount of input data and the type of analysis This characteristic of big data workloads is ideally suited to the payasyougo cloud computing model where applications can easily scale up and down based on demand As requirements change you can easily resize your environment (horizontally or vertically) on AWS to meet your needs without having to wait for additional hardware or being required to over invest to provision enough capacity For mission critical applications on a more traditional infrastructure system designers have no choice but to over provision because a surge in additional data due to an increase in business need must be something the system can ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 6 of 56 handle By contrast on AWS you can provision more capacity and compute in a matter of minutes meaning that your big data applications grow and shrink as demand dictates and your system runs as close to optimal efficiency as possible In addition you get flexible computing on a global infrastructure with access to the many different geographic regions that AWS offers along with the ability to use other scalable services that augment to build sophisticated big data applications These other services include Amazon Simple Storage Service (Amazon S3) to store data and AWS Glue to orchestrate jobs to move and transform that data easily AWS IoT which lets connected devices interact with cloud applications and other connected devices As the amount of data being generated continues to grow AWS has many options to get that data to the cloud including secure devices like AWS Snowball to accelerate petabyte scale data transfers delivery s treams with Amazon Kinesis Data Firehose to load streaming data continuously migrating databases using AWS D atabase Migration Service and scalable p rivate connections through AWS Direct Connect AWS recently added AWS Snowball Edge which is a 100 TB data transfer device with on board storage and compute capabilities You can use Snowball Edge to move large amounts of data into and out of AWS as a temporary storage tier for large local datasets or to support local workloads in remote or offline locations Additionally you can deploy AWS Lambda code on Snowball Edge to perform tasks such as analyzing data streams or processing data locally As mobile continues to rapidly grow in usage you can use the suite of services within the AWS Mobil e Hub to collect and measure app usage and data or export that data to another service for further custom analysis These capabilities of the AWS platform make it an ideal fit for solving big data problems and many customers have implemented successful big data analytics workloads on AWS For more information about case studies see Big Data Customer Success Stories The following services for collecting processing stori ng and analyzing big data are described in order : • Amazon Kinesis ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 7 of 56 • AWS Lambda • Amazon Elastic MapReduce • Amazon Glue • Amazon Machine Learning • Amazon DynamoDB • Amazon Redshift • Amazon Athena • Amazon Elasticsearch Service • Amazon QuickSight In addition to these services Amazon EC2 instances are available for self managed big data applications Amazon Kinesis Amazon Kinesis is a platform for 
streaming data on AWS making it easy to load and analyze streaming data and also providing the ability for you to build custom streaming data applications for specialized needs With Kinesis you can ingest real time data such as application logs website clickstreams IoT telemetry data and more into your databases data lakes and data warehouses or build y our own real time applications using this data Amazon Kinesis enables you to process and analyze data as it arrives and respond in real time instead of having to wait until all your data is collected before the processing can begin Currently there are 4 pieces of the Kinesis platform that can be utilized based on your use case : • Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data • Amazon Kinesis Video Streams enables you to build custom applications that process or analyze streaming video • Amazon Kinesis Data Firehose enables you to deliver real time streaming data to AWS destinations such as Amazon S3 Amazon Redshift Amazon Kinesis Analytics and Amazon Elasticsearch Service • Amazon Kinesis Data Analytics enables you to process and analyze streaming data with standard SQL ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 8 of 56 Kinesis Data Streams and Kinesi s Video Streams enable you to build custom applications that process or analyze streaming data in real time Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams financial transactions social media feeds IT logs and location tracking events Kinesis Video Streams can continuously capture video data from smartphones security cameras drones satellites dashcams and other edge devices With the Amazon Kinesis Client Library (KCL) you can build Amazon Kinesis applications and use streaming data to power real time dashboards generate alerts and implement dynamic pricing and advertising You can also emit data from Kinesis Data Streams and Kinesis Video Streams to other AWS services such as Amazon Simple Storage Service (Amazon S3) Amazon Redshift Amazon Elastic MapReduce (Amazon EMR) and AWS Lambda Provision the level of input and output required for your data stream in blocks of 1 megabyte per second (MB/sec) using the AWS Management Console API or SDK s The size of your stream can be adjusted up or down at any time without restarting the stream and without any impact on the data sources pushing data to the stream Within seconds data put into a stream is available for analysis With Kinesis Data Firehose you do not need to write applications or manage resources You configure your data producers to send data to Kinesis Firehose and it automatically delivers the data to the AWS destination that you specified You can also configure Kinesis Data Firehose t o transform your data before data delivery It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration It can also batch compress and encrypt the data before loading it minimizing the amount of storage used at the destination and increasing security Amazon Kinesis Data Analytics is the easiest way to process and analyze real time streaming data With Kinesis Data Anal ytics you just use standard SQL to process your data streams so you don’t have to learn any new programming languages Simply point Kinesis Data Analytics at an incoming data stream write your SQL queries and specify where you want to load 
the results Kinesis Data Analytics takes care of running your SQL queries continuously on data while it’s in transit and sending the results to the destinations ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 9 of 56 In the subsequent sections we will focus primarily on Amazon Kinesis Data Streams Ideal Usage Patterns Amazon Kinesis Data Steams is useful wherever there is a need to move data rapidly off producers (data sources) and continuously process it That processing can be to transform the data before emitting into another data store drive realtime metrics and analytics or derive and aggregate multiple streams into more complex streams or downstream processing The following are typical scenarios for using Kinesis Data Streams for analytics • Real time data analytics –Kinesis Data Streams enables realtime data analytics on streaming data such as analyzing website clickstream data and customer engagement analytics • Log and data feed intake and processing – With Kinesis Data Streams you can have producers push data directly into an Amazon Kinesis stream For example you can submit system and application logs to Kinesis Data Streams and access the stream for processing within seconds This prevents the log data from being lost if the front end or application server fails and reduces local log storage on the source Kinesis Data Streams provides accelerated data intake because you are not batching up the data on the servers before you submit it for intake • Real time metrics and reporting – You can use data ingested into Kinesis Data Streams for extracting metrics and generating KPIs to power reports and dashboards at realtime speeds This enables data processing application logic to work on data as it is streaming in continuously rather than wait for data batches to arrive Cost Model Amazon Kinesis Data Streams has simple payasyougo pricing with no up front costs or minimum fees and you only pay for the resources you consume An Amazon Kinesis stream is made up of one or more shards each shard gives you a capacity 5 read transactions per second up to a maximum total of 2 MB of data read per second Each shard can support up to 1000 write transactions per second and up to a maximum total of 1 MB data written per second The data capacity of your stream is a function of the number of shards that you specify for the stream The total capacity of the stream is the sum of the capacity ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 10 of 56 of each shard There are just two pricing components an hourly charge per shard and a charge for each 1 million PUT transactions For more information see Amazon Kinesis Data Streams Pricing Applications that run on Amazon EC2 and process Amazon Kinesis streams also incur standard Amaz on EC2 costs Performance Amazon Kinesis Data Streams allows you to choose throughput capacity you require in terms of shards With each shard in an Amazon Kinesis stream you can capture up to 1 megabyte per second of data at 1000 write transactions per second Your Amazon Kinesis applications can read data from each shard at up to 2 megabytes per second You can provision as many shards as you need to get the throughput capacity you want; for instance a 1 gigabyte per second data stream would require 1024 shards Durability and Availability Amazon Kinesis Data Streams synchronously replicates data across three Availability Zones in an AWS Region providing high availability and data durability Additionally you can store a cursor in DynamoDB to 
durably track what has been read from an Amazon Kinesis stream In the event that your application fails in the middle of reading data from the stream you can restart your application and use the cursor to pick up from the exact spot where the failed application left off Scalability and Elasticity You can increase or decrease the capacity of the stream at any time according to your business or operational needs without any interruption to ongoing stream processing By using API calls or development tools you can automate scaling of your Amazon Kinesis Data Streams environment to meet demand and ensure you only pay for what you need Interfaces There are two interfaces to Kinesis Data Streams: input which is used by data producers to put data into Kinesis Data Streams; and output to process and analyze data that comes in Producers can write data using the Amazon Kinesis PUT API an AWS Software Development Kit (SDK) or toolkit abstraction the Amazon Kinesis Producer Library (KPL) or the Amazon Kinesis Agent ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 11 of 56 For processing data that has already been put into an Amazon Kinesis stream there are client libraries provided to build and operate realtime streaming data processing applications The KCL17 acts as an intermediary between Amazon Kinesis Data Streams and your business applications which contain the specific processing logic There is also integration to read from an Amazon Kinesis stream into Apache Storm via the Amazon Kinesis Storm Spout AntiPatterns Amazon Kinesis Data Streams has the following antipatterns: • Small scale consistent throughput – Even though Kinesis Data Streams works for streaming data at 200 KB/sec or less it is designed and optimized for larger data throughputs • Long term data storage and analytics –Kinesis Data Streams is not suited for long term data storage By default data is retained for 24 hours and you can extend the retention period by up to 7 days You can move any data that needs to be stored for longer than 7 days into another durable storage service such as Amazon S3 Amazon Glacier Amazon Redshift or DynamoDB AWS Lambda AWS Lambda lets you run code without provisioning or managing servers You pay only for the compute time you consume – there is no charge when your code is not running With Lambda you can run code for virtually any type of application or backend service – all with zero administration Just upload your code and Lambda takes care of everything required to run and scale your code with high availability You can set up your code to automati cally trigger from other AWS services or call it directly from any web or mobile app Ideal Usage Pattern AWS Lambda enables you to execute code in response to triggers such as changes in data shifts in system state or actions by users Lambda can be directly triggered by AWS services such as Amazon S3 DynamoDB Amazon Kinesis Data Streams Amazon Simp le Notification Service (Amazon SNS ) and ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 12 of 56 CloudWatch allowing you to build a variety of real time data processing systems • Real time File Processing – You can trigger Lambda to invoke a process where a file has been uploaded to Amazon S3 or modified For example to change an image from color to gray scale after it has been uploaded to Amazon S3 • Real time Stream Processing – You can use Kinesis Data Streams and Lambda to process streaming data for click stream analysis log filtering and social media analysis • 
Extract Transform Load – You can use Lambda to run code that transforms data and loads that data into one data rep ository to another • Replace Cron – Use schedule expressions to run a Lambda function at regular intervals as a cheaper and more available solution than running cron on an EC2 instance • Process AWS Events – Many other services such as AWS CloudTrail can act as event sources simply by logging to Amazon S3 and using S3 bucket notifications to trigger Lambda functions Cost Model With AWS Lambda you only pay for what you use You are charged based on the number of requests for your functions and the time you r code executes The Lambda free tier includes 1M free requests per month and 400000 GB seconds of compute time per month You are charged $020 per 1 million requests thereafter ($00000002 per request) Additionally the duration of your code executing is priced in relation to memory allocated You are charged $000001667 for every GB second used See Lambda pricing for more details Performance After deploying your code into Lambda for the first time your functions are typically ready to call within seconds of upload Lambda is designed to process events within milliseconds Latency will be higher immediately after a Lambda function is created updated or if it has not been used recently To improve performance Lambda may choose to retain an instance of your function and reuse it to serve a subsequent request rather than creating a new copy To learn more about how Lambda reuses function insta nces see our documentation Your code should not assume that this reuse will always happen ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 13 of 56 Durability and Availability AWS Lambda is designed to use replica tion and redundancy to provide high availability for both the service itself and for the Lambda functions it operates There are no maintenance windows or scheduled downtimes for either On failure Lambda functions being invoked synchronously respond with an exception Lambda functions being invoked asynchronously are retried at least 3 times after which the event may be rejected Scalability and Elasticity There is no limit on the number of Lambda functions that you can run However Lambda has a default safety throttle of 1000 concurrent executions per account per region A member of the AWS support team can increase this limit Lambda is designed to scale automatically on your behalf T here are no fundamental limits to scaling a function Lambda dynamically allocate s capacity to match the rate of incoming events Interfaces Lambda functions can be managed in a variety of ways You can easily list delete update and monitor your Lambda functions using the dashboard in the Lambda console You also can use the AWS CLI and AWS SDK to manage your Lambda functions You can trigger a Lambda function from an AWS event such as Amazon S3 bucket notifications Amazon DynamoDB Streams Amazon Clo udWatch logs Amazon Simple Email Service (Amazon SES) Amazon Kinesis Data Streams Amazon SNS Amazon Cognito and more Any API call in any service that supports AWS CloudTrail can be processed as an event in Lambda by responding to CloudTrail audit logs For more information about event sources see Core Components: AWS Lambda Function and Event Sources AWS Lambda supports code written in Nodejs (JavaScript) Python Java (Java 8 compatible) C# (NET Core) Go PowerShell and Ruby Your code can include existing libraries even native ones Please read our documentation on using Nodejs Python Java C# Go 
PowerShell and Ruby ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 14 of 56 AntiPatterns • Long Running Applications – Each Lambda function must complete within 900 seconds For long running applications that may require jobs to run longer than fi fteen minutes Amazon EC2 is recommended Alternately create a chain of Lambda functions where function 1 call s function 2 which calls function 3 and so on until the process is completed See Creating a Lambda State Machine for more information • Dynamic Websites – While it is possible to run a static website with AWS Lambda running a highly dynamic and large volume website can be performance proh ibitive Utilizing Amazon EC2 and Amazon CloudFront would be a recommended use case • Stateful Applications –Lambda code must be written in a “stateless” style ie it should assume there is no affinity to the underlying compute infrastructure Local file system access child processes and similar artifacts may not extend beyond the lifetime of the request and any persistent state should be stored in Amazon S3 DynamoDB or another Internet available storage service Amazon EMR Amazon EMR is a highly distributed computing framework to easily process and store data quickly in a costeffective manner Amazon EMR uses Apache Hadoop an open source framework to distribute your data and processing across a resizable cluster of Amazon EC2 instances and allows you to use the most common Hadoop tools such as Hive Pig Spark and so on Hadoop provides a framework to run big data processing and analytics Amazon EMR does all the work involved with provisioning managing and maintaining the infrastructure and software of a Hadoop cluster Ideal Usage Patterns Amazon EMR’s flexible framework reduces large processing problems and data sets into smaller jobs and distributes them across many compute nodes in a Hadoop cluster This capability lends itself to many usage patterns with big data analytics Here are a few examples: ArchivedAmazon Web Services – Big Data Analytics Options on AWS Page 15 of 56 • Log processing and analytics • Large extract transform and load (ETL) data movement • Risk modeling and threat analytics • Ad targeting and click stream analytics • Genomics • Predictive analytics • Ad hoc data mining and analytics For more information see the documentation for Amazon EMR Cost Model With Amazon EMR you can launch a persistent cluster that stays up indefinitely or a temporary cluster that terminates after the analysis is complete In either scenario you only pay for the hours the cluster is up Amazon EMR supports a variety of Amazon EC2 instanc e types (standard high CPU high memory high I/O and so on) and all Amazon EC2 pricing options (OnDemand Reserved and Spot) When you launch an Amazon EMR cluster (also called a "job flow") you choose how many and what type of Amazon EC2 instances to provision The Amazon EMR price is in addition to the Amazon EC2 price For more information see Amazon EMR Pricing Performance Amazon EMR performance is driven by the type of EC2 instances you choose to run your cluster on and how many you chose to run your analytics You should choose an instance type suitable for your processing requirements with sufficient memory storage and processing power For more information about EC2 instance specifications see Amazon EC2 Instance Types Durability and Availability By default Amazon EMR is fault tolerant for core node failures and continues job execution if a slave node goes down Amazon EMR will also provision a new node when a 
core node fails. However, Amazon EMR will not replace nodes if all nodes in the cluster are lost. Customers can monitor the health of nodes and replace failed nodes with CloudWatch.

Scalability and Elasticity
With Amazon EMR, it is easy to resize a running cluster. You can add core nodes, which hold the Hadoop Distributed File System (HDFS), at any time to increase your processing power and increase the HDFS storage capacity (and throughput). Additionally, you can use Amazon S3 natively or using EMRFS, along with or instead of local HDFS, which allows you to decouple your memory and compute from your storage, providing greater flexibility and cost efficiency. You can also add and remove task nodes at any time, which can process Hadoop jobs but do not maintain HDFS. Some customers add hundreds of instances to their clusters when their batch processing occurs, and remove the extra instances when processing completes. For example, you may not know how much data your clusters will be handling in 6 months, or you may have spiky processing needs. With Amazon EMR you don't need to guess your future requirements or provision for peak demand, because you can easily add or remove capacity at any time. Additionally, you can add all new clusters of various sizes and remove them at any time with a few clicks in the console or by a programmatic API call.

Interfaces
Amazon EMR supports many tools on top of Hadoop that can be used for big data analytics, and each has its own interfaces. Here is a brief summary of the most popular options:

Hive
Hive is an open source data warehouse and analytics package that runs on top of Hadoop. Hive is operated by Hive QL, a SQL-based language which allows users to structure, summarize, and query data. Hive QL goes beyond standard SQL, adding first-class support for map/reduce functions and complex, extensible, user-defined data types like JSON and Thrift. This capability allows processing of complex and unstructured data sources such as text documents and log files. Hive allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Hive, including direct integration with DynamoDB and Amazon S3. For example, with Amazon EMR you can load table partitions automatically from Amazon S3, you can write data to tables in Amazon S3 without using temporary files, and you can access resources in Amazon S3, such as scripts for custom map and/or reduce operations and additional libraries. For more information, see Apache Hive in the Amazon EMR Release Guide.

Pig
Pig is an open source analytics package that runs on top of Hadoop. Pig is operated by Pig Latin, a SQL-like language which allows users to structure, summarize, and query data. As well as SQL-like operations, Pig Latin also adds first-class support for map and reduce functions and complex, extensible, user-defined data types. This capability allows processing of complex and unstructured data sources such as text documents and log files. Pig allows user extensions via user-defined functions written in Java. Amazon EMR has made numerous improvements to Pig, including the ability to use multiple file systems (normally Pig can only access
one remote file system), the ability to load customer JARs and scripts from Amazon S3 (such as "REGISTER s3://mybucket/piggybank.jar"), and additional functionality for String and DateTime processing. For more information, see Apache Pig in the Amazon EMR Release Guide.

Spark
Spark is an open source data analytics engine built on Hadoop with the fundamentals for in-memory MapReduce. Spark provides additional speed for certain analytics and is the foundation for other powerful tools such as Shark (SQL-driven data warehousing), Spark Streaming (streaming applications), GraphX (graph systems), and MLlib (machine learning). For more information, see Apache Spark on Amazon EMR.

HBase
HBase is an open source, non-relational, distributed database modeled after Google's BigTable. It was developed as part of the Apache Software Foundation's Hadoop project and runs on top of the Hadoop Distributed File System (HDFS) to provide BigTable-like capabilities for Hadoop. HBase provides you a fault-tolerant, efficient way of storing large quantities of sparse data using column-based compression and storage. In addition, HBase provides fast lookup of data because data is stored in-memory instead of on disk. HBase is optimized for sequential write operations, and it is highly efficient for batch inserts, updates, and deletes. HBase works seamlessly with Hadoop, sharing its file system and serving as a direct input and output to Hadoop jobs. HBase also integrates with Apache Hive, enabling SQL-like queries over HBase tables, joins with Hive-based tables, and support for Java Database Connectivity (JDBC). With Amazon EMR you can back up HBase to Amazon S3 (full or incremental, manual or automated), and you can restore from a previously created backup. For more information, see Apache HBase in the Amazon EMR Release Guide.

Hunk
Hunk was developed by Splunk to make machine data accessible, usable, and valuable to everyone. With Hunk you can interactively explore, analyze, and visualize data stored in Amazon EMR and Amazon S3, harnessing Splunk analytics on Hadoop. For more information, see Amazon EMR with Hunk: Splunk Analytics for Hadoop and NoSQL.

Presto
Presto is an open source distributed SQL query engine optimized for low-latency, ad hoc analysis of data. It supports the ANSI SQL standard, including complex queries, aggregations, joins, and window functions. Presto can process data from multiple data sources, including the Hadoop Distributed File System (HDFS) and Amazon S3.

Kinesis Connector
The Kinesis Connector enables EMR to directly read and query data from Kinesis Data Streams. You can perform batch processing of Kinesis streams using existing Hadoop ecosystem tools such as Hive, Pig, MapReduce, Hadoop Streaming, and Cascading. Some use cases enabled by this integration are:
• Streaming Log Analysis: You can analyze streaming web logs to generate a list of the top 10 error types every few minutes by region, browser, and access domains.
• Complex Data Processing Workflows: You can join Kinesis streams with data stored in Amazon S3, DynamoDB tables, and HDFS. You can write queries that join clickstream data from Kinesis with advertising campaign information stored in a DynamoDB table to identify the most effective categories of ads that are displayed on particular websites.
• Ad hoc Queries: You can periodically load data from Kinesis into HDFS and make it available as a local Impala table for fast, interactive analytic queries.
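As an illustration of driving these Hadoop ecosystem tools programmatically, here is a minimal sketch that submits a Spark step to an already-running EMR cluster with the AWS SDK for Python. The cluster ID and S3 script path are placeholders, and the step pattern (command-runner.jar with spark-submit) is one common approach, not the only one.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Placeholder identifiers; substitute your own cluster ID and S3 paths.
CLUSTER_ID = "j-XXXXXXXXXXXXX"
SCRIPT_S3_PATH = "s3://mybucket/jobs/etl_job.py"

response = emr.add_job_flow_steps(
    JobFlowId=CLUSTER_ID,
    Steps=[
        {
            "Name": "Nightly Spark ETL",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                # command-runner.jar executes commands on the master node.
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    SCRIPT_S3_PATH,
                ],
            },
        }
    ],
)

print("Submitted step IDs:", response["StepIds"])
```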
Other third-party tools
Amazon EMR also supports a variety of other popular applications and tools in the Hadoop ecosystem, such as R (statistics), Mahout (machine learning), Ganglia (monitoring), Accumulo (secure NoSQL database), Hue (user interface to analyze Hadoop data), Sqoop (relational database connector), HCatalog (table and storage management), and more. Additionally, you can install your own software on top of Amazon EMR to help solve your business needs. AWS provides the ability to quickly move large amounts of data from Amazon S3 to HDFS, from HDFS to Amazon S3, and between Amazon S3 buckets using Amazon EMR's S3DistCp, an extension of the open source tool DistCp that uses MapReduce to efficiently move large amounts of data.

You can optionally use the EMR File System (EMRFS), an implementation of HDFS which allows Amazon EMR clusters to store data on Amazon S3. You can enable Amazon S3 server-side and client-side encryption. When you use EMRFS, a metadata store is transparently built in DynamoDB to help manage the interactions with Amazon S3, and allows multiple EMR clusters to easily use the same EMRFS metadata and storage on Amazon S3.

AntiPatterns
Amazon EMR has the following antipatterns:
• Small data sets – Amazon EMR is built for massive parallel processing; if your data set is small enough to run quickly on a single machine, in a single thread, the added overhead of map and reduce jobs may not be worth it for small data sets that can easily be processed in memory on a single system.
• ACID transaction requirements – While there are ways to achieve ACID (atomicity, consistency, isolation, durability) or limited ACID on Hadoop, using another database, such as Amazon Relational Database Service (Amazon RDS) or a relational database running on Amazon EC2, may be a better option for workloads with stringent requirements.

AWS Glue
AWS Glue is a fully managed extract, transform, and load (ETL) service that you can use to catalog your data, clean it, enrich it, and move it reliably between data stores. With AWS Glue you can significantly reduce the cost, complexity, and time spent creating ETL jobs. AWS Glue is serverless, so there is no infrastructure to set up or manage. You pay only for the resources consumed while your jobs are running.

Ideal Usage Patterns
AWS Glue is designed to easily prepare data for extract, transform, and load (ETL) jobs. Using AWS Glue gives you the following benefits:
• AWS Glue can automatically crawl your data and generate code to execute your data transformations and loading processes.
• Integration with services like Amazon Athena, Amazon EMR, and Amazon Redshift.
• Serverless, with no infrastructure to provision or manage.
• AWS Glue generates ETL code that is customizable, reusable, and portable, using familiar technology – Python and Spark.

Cost Model
With AWS Glue, you pay an hourly rate, billed by the minute, for crawler jobs (discovering data) and ETL jobs (processing and loading data). For the AWS Glue Data Catalog, you pay a simple monthly fee for storing and accessing the metadata. The first million objects stored are free, and the first million accesses are free. If you provision a development endpoint to interactively develop your ETL code, you pay an hourly rate, billed per minute. See AWS Glue Pricing for more details.

Performance
AWS Glue uses a scale-out Apache Spark environment to load your data into its destination. You can simply specify the
number of Data Processing Units (DPUs) that you want to allocate to your ETL job. An AWS Glue ETL job requires a minimum of 2 DPUs. By default, AWS Glue allocates 10 DPUs to each ETL job. Additional DPUs can be added to increase the performance of your ETL job. Multiple jobs can be triggered in parallel or sequentially by triggering them on a job completion event. You can also trigger one or more AWS Glue jobs from an external source such as an AWS Lambda function.

Durability and Availability
AWS Glue connects to the data source of your preference, whether it is in an Amazon S3 file, an Amazon RDS table, or another set of data. As a result, all your data is stored and available as it pertains to that data store's durability characteristics. The AWS Glue service provides the status of each job and pushes all notifications to Amazon CloudWatch Events. You can set up SNS notifications using CloudWatch actions to be informed of job failures or completions.

Scalability and Elasticity
AWS Glue provides a managed ETL service that runs on a serverless Apache Spark environment. This allows you to focus on your ETL job and not worry about configuring and managing the underlying compute resources. AWS Glue works on top of the Apache Spark environment to provide a scale-out execution environment for your data transformation jobs.

Interfaces
AWS Glue provides a number of ways to populate metadata into the AWS Glue Data Catalog. AWS Glue crawlers scan various data stores you own to automatically infer schemas and partition structure, and populate the AWS Glue Data Catalog with corresponding table definitions and statistics. You can also schedule crawlers to run periodically so that your metadata is always up-to-date and in sync with the underlying data. Alternately, you can add and update table details manually by using the AWS Glue Console or by calling the API. You can also run Hive DDL statements via the Amazon Athena Console or a Hive client on an Amazon EMR cluster. Finally, if you already have a persistent Apache Hive Metastore, you can perform a bulk import of that metadata into the AWS Glue Data Catalog by using our import script.

AntiPatterns
AWS Glue has the following antipatterns:
• Streaming data – AWS Glue ETL is batch oriented, and you can schedule your ETL jobs at a minimum of 5-minute intervals. While it can process micro-batches, it does not handle streaming data. If your use case requires you to ETL data while you stream it in, you can perform the first leg of your ETL using Amazon Kinesis, Amazon Kinesis Data Firehose, or Amazon Kinesis Analytics. Then store the data in either Amazon S3 or Amazon Redshift and trigger an AWS Glue ETL job to pick up that dataset and continue applying additional transformations to that data.
• Multiple ETL engines – AWS Glue ETL jobs are PySpark based. If your use case requires you to use an engine other than Apache Spark, or if you want to run a heterogeneous set of jobs that run on a variety of engines like Hive, Pig, etc., then AWS Data Pipeline or Amazon EMR would be a better choice.
• NoSQL Databases – Currently, AWS Glue does not support data sources like NoSQL databases or Amazon DynamoDB. Since NoSQL databases do not require a rigid schema like traditional relational databases, most common ETL jobs would not apply.
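To show how the crawler and job interfaces described above can be driven programmatically, here is a minimal sketch using the AWS SDK for Python. The crawler name, job name, bucket path, and job argument are hypothetical; the crawler and job are assumed to have been created beforehand in AWS Glue.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Placeholder names; these resources would be created beforehand in AWS Glue.
CRAWLER_NAME = "sales-data-crawler"
JOB_NAME = "sales-etl-job"

# Refresh the Data Catalog so new partitions and schema changes are picked up.
glue.start_crawler(Name=CRAWLER_NAME)

# Kick off the ETL job; arguments are passed through to the PySpark script.
run = glue.start_job_run(
    JobName=JOB_NAME,
    Arguments={"--target_path": "s3://mybucket/curated/sales/"},
)
print("Started job run:", run["JobRunId"])
```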
Amazon Machine Learning
Amazon Machine Learning (Amazon ML) is a service that makes it easy for anyone to use predictive analytics and machine learning technology. Amazon ML provides visualization tools and wizards to guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. After your models are ready, Amazon ML makes it easy to obtain predictions for your application using API operations, without having to implement custom prediction generation code or manage any infrastructure.

Amazon ML can create ML models based on data stored in Amazon S3, Amazon Redshift, or Amazon RDS. Built-in wizards guide you through the steps of interactively exploring your data, training the ML model, evaluating the model quality, and adjusting outputs to align with business goals. After a model is ready, you can request predictions either in batches or using the low-latency real-time API.

Ideal Usage Patterns
Amazon ML is ideal for discovering patterns in your data and using these patterns to create ML models that can generate predictions on new, unseen data points. For example, you can:
• Enable applications to flag suspicious transactions – Build an ML model that predicts whether a new transaction is legitimate or fraudulent.
• Forecast product demand – Input historical order information to predict future order quantities.
• Personalize application content – Predict which items a user will be most interested in, and retrieve these predictions from your application in real time.
• Predict user activity – Analyze user behavior to customize your website and provide a better user experience.
• Listen to social media – Ingest and analyze social media feeds that potentially impact business decisions.

Cost Model
With Amazon ML, you pay only for what you use. There are no minimum fees and no upfront commitments. Amazon ML charges an hourly rate for the compute time used to build predictive models, and then you pay for the number of predictions generated for your application. For real-time predictions you also pay an hourly reserved capacity charge based on the amount of memory required to run your model. The charge for data analysis, model training, and evaluation is based on the number of compute hours required to perform them, and depends on the size of the input data, the number of attributes within it, and the number and types of transformations applied. Data analysis and model building fees are priced at $0.42 per hour. Prediction fees are categorized as batch and real-time. Batch predictions are $0.10 per 1,000 predictions, rounded up to the next 1,000, while real-time predictions are $0.0001 per prediction, rounded up to the nearest penny. For real-time predictions, there is also a reserved capacity charge of $0.001 per hour for each 10 MB of memory provisioned for your model. During model creation, you specify the maximum memory size of each model to manage cost and control predictive performance. You pay the reserved capacity charge only while your model is enabled for real-time predictions. Charges for data stored in Amazon S3, Amazon RDS, or Amazon Redshift are billed separately. For more information, see Amazon Machine Learning Pricing.

Performance
The time it takes to create models, or to request batch predictions from these models, depends on the number of input data records, the types and distribution of attributes within these records, and the complexity of the data processing "recipe" that you specify. Most real-time prediction requests return a response within 100 ms, making them fast enough for interactive web, mobile, or
desktop applications. The exact time it takes for the real-time API to generate a prediction varies depending on the size of the input data record and the complexity of the data processing "recipe" associated with the ML model that is generating the predictions. Each ML model that is enabled for real-time predictions can be used to request up to 200 transactions per second by default, and this number can be increased by contacting customer support. You can monitor the number of predictions requested by your ML models by using CloudWatch metrics.

Durability and Availability
Amazon ML is designed for high availability. There are no maintenance windows or scheduled downtimes. The service runs in Amazon's proven, high-availability data centers, with service stack replication configured across three facilities in each AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage.

Scalability and Elasticity
By default, you can process data sets that are up to 100 GB in size (this can be increased with a support ticket) to create ML models or to request batch predictions. For large volumes of batch predictions, you can split your input data records into separate chunks to enable the processing of larger prediction data volumes. By default, you can run up to five simultaneous jobs, and by contacting customer service you can have this limit raised. Because Amazon ML is a managed service, there are no servers to provision, and as a result you are able to scale as your application grows without having to over-provision or pay for resources not being used.

Interfaces
Creating a data source is as simple as adding your data to Amazon S3, or you can pull data directly from Amazon Redshift or MySQL databases managed by Amazon RDS. After your data source is defined, you can interact with Amazon ML using the console. Programmatic access to Amazon ML is enabled by the AWS SDKs and the Amazon ML API. You can also create and manage Amazon ML entities using the AWS CLI, available on Windows, Mac, and Linux/UNIX systems.

AntiPatterns
Amazon ML has the following antipatterns:
• Very large data sets – While Amazon ML can support up to a default of 100 GB of data (this can be increased with a support ticket), terabyte-scale ingestion of data is not currently supported. Using Amazon EMR to run Spark's Machine Learning Library (MLlib) is a common tool for such a use case.
• Unsupported learning tasks – Amazon ML can be used to create ML models that perform binary classification (choose one of two choices and provide a measure of confidence), multiclass classification (extend choices to beyond two options), or numeric regression (predict a number directly). Unsupported ML tasks, such as sequence prediction or unsupervised clustering, can be approached by using Amazon EMR to run Spark and MLlib.
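As one illustration of the programmatic access described under Interfaces, the following sketch requests a real-time prediction through the AWS SDK for Python. The model ID and record attributes are hypothetical, and the model is assumed to already exist with a real-time endpoint enabled.

```python
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")

# Placeholder model ID; the model must already exist and have a real-time
# endpoint enabled (for example, via the console or create_realtime_endpoint).
MODEL_ID = "ml-EXAMPLEMODELID"

endpoint = ml.get_ml_model(MLModelId=MODEL_ID)["EndpointInfo"]["EndpointUrl"]

# Record keys are the attribute names used when the model was trained;
# these names and values are illustrative only.
prediction = ml.predict(
    MLModelId=MODEL_ID,
    Record={"order_total": "182.50", "item_count": "4", "country": "US"},
    PredictEndpoint=endpoint,
)
print(prediction["Prediction"])
```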
Amazon DynamoDB
Amazon DynamoDB is a fast, fully managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic. DynamoDB helps offload the administrative burden of operating and scaling a highly available distributed database cluster. This storage alternative meets the latency and throughput requirements of highly demanding applications by providing single-digit millisecond latency and predictable performance with seamless throughput and storage scalability.

DynamoDB stores structured data in tables indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. DynamoDB supports three data types (number, string, and binary) in both scalar and multi-valued sets. It supports document stores such as JSON, XML, or HTML in these data types. Tables do not have a fixed schema, so each data item can have a different number of attributes. The primary key can either be a single-attribute hash key or a composite hash-range key. DynamoDB offers both global and local secondary indexes that provide additional flexibility for querying against attributes other than the primary key. DynamoDB provides both eventually consistent reads (by default) and strongly consistent reads (optional), as well as implicit item-level transactions for item put, update, delete, conditional operations, and increment/decrement.

DynamoDB is integrated with other services, such as Amazon EMR, Amazon Redshift, AWS Data Pipeline, and Amazon S3, for analytics, data warehousing, data import/export, backup, and archive.

Ideal Usage Patterns
DynamoDB is ideal for existing or new applications that need a flexible NoSQL database with low read and write latencies and the ability to scale storage and throughput up or down as needed without code changes or downtime. Common use cases include:
• Mobile apps
• Gaming
• Digital ad serving
• Live voting
• Audience interaction for live events
• Sensor networks
• Log ingestion
• Access control for web-based content
• Metadata storage for Amazon S3 objects
• E-commerce shopping carts
• Web session management
Many of these use cases require a highly available and scalable database because downtime or performance degradation has an immediate negative impact on an organization's business.

Cost Model
With DynamoDB, you pay only for what you use and there is no minimum fee. DynamoDB has three pricing components: provisioned throughput capacity (per hour), indexed data storage (per GB per month), and data transfer in or out (per GB per month). New customers can start using DynamoDB for free as part of the AWS Free Usage Tier. For more information, see Amazon DynamoDB Pricing.

Performance
SSDs and limiting indexing on attributes provide high throughput and low latency, and drastically reduce the cost of read and write operations. As datasets grow, predictable performance is required so that low latency for the workloads can be maintained. This predictable performance can be achieved by defining the provisioned throughput capacity required for a given table. Behind the scenes, the service handles the provisioning of resources to achieve the requested throughput rate; you don't need to think about instances, hardware, memory, and other factors that can affect an application's throughput rate. Provisioned throughput capacity reservations are elastic and can be increased or decreased on demand.

Durability and Availability
DynamoDB has built-in fault tolerance that automatically and synchronously replicates data across three data centers in a region for high availability and to help protect data against individual machine, or even facility, failures. Amazon DynamoDB Streams captures all data activity that happens on your table and allows you to set up regional replication from one geographic region to another to provide even greater availability.

Scalability and Elasticity
DynamoDB is both highly scalable and elastic. There is no limit to the amount of data
that you can store in a DynamoDB table, and the service automatically allocates more storage as you store more data using the DynamoDB write API operations. Data is automatically partitioned and re-partitioned as needed, while the use of SSDs provides predictable low-latency response times at any scale. The service is also elastic, in that you can simply "dial up" or "dial down" the read and write capacity of a table as your needs change.

Interfaces
DynamoDB provides a low-level REST API, as well as higher-level SDKs for Java, .NET, and PHP that wrap the low-level REST API and provide some object-relational mapping (ORM) functions. These APIs provide both a management and data interface for DynamoDB. The API currently offers operations that enable table management (creating, listing, deleting, and obtaining metadata) and working with attributes (getting, writing, and deleting attributes; querying using an index; and full scans). While standard SQL isn't available, you can use the DynamoDB select operation to create SQL-like queries that retrieve a set of attributes based on criteria that you provide. You can also work with DynamoDB using the console.

AntiPatterns
DynamoDB has the following antipatterns:
• Prewritten application tied to a traditional relational database – If you are attempting to port an existing application to the AWS cloud and need to continue using a relational database, you can use either Amazon RDS (Amazon Aurora, MySQL, PostgreSQL, Oracle, or SQL Server) or one of the many preconfigured Amazon EC2 database AMIs. You can also install your choice of database software on an EC2 instance that you manage.
• Joins or complex transactions – While many solutions are able to leverage DynamoDB to support their users, it's possible that your application may require joins, complex transactions, and other relational infrastructure provided by traditional database platforms. If this is the case, you may want to explore Amazon Redshift, Amazon RDS, or Amazon EC2 with a self-managed database.
• Binary large objects (BLOB) data – If you plan on storing large (greater than 400 KB) BLOB data, such as digital video, images, or music, you'll want to consider Amazon S3. However, DynamoDB can be used in this scenario for keeping track of metadata (e.g., item name, size, date created, owner, location, etc.) about your binary objects.
• Large data with low I/O rate – DynamoDB uses SSD drives and is optimized for workloads with a high I/O rate per GB stored. If you plan to store very large amounts of data that are infrequently accessed, other storage options may be a better choice, such as Amazon S3.
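As a concrete illustration of the data interface described under Interfaces above, here is a minimal sketch using the AWS SDK for Python. The table name, key schema, and attribute names are hypothetical; the table is assumed to already exist with device_id as its hash key and event_time as its range key.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Hypothetical table with device_id (hash key) and event_time (range key).
table = dynamodb.Table("SensorReadings")

# Write a single item.
table.put_item(
    Item={
        "device_id": "ac-unit-0042",
        "event_time": "2018-06-01T12:00:00Z",
        "temperature_f": 71,
        "status": "OK",
    }
)

# Query all readings for one device within a time window.
response = table.query(
    KeyConditionExpression=(
        Key("device_id").eq("ac-unit-0042")
        & Key("event_time").between("2018-06-01T00:00:00Z", "2018-06-02T00:00:00Z")
    )
)
for item in response["Items"]:
    print(item["event_time"], item["temperature_f"])
```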
Amazon Redshift
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to analyze all your data efficiently using your existing business intelligence tools. It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more, and is designed to cost less than a tenth of the cost of most traditional data warehousing solutions. Amazon Redshift delivers fast query and I/O performance for virtually any size dataset by using columnar storage technology while parallelizing and distributing queries across multiple nodes. It automates most of the common administrative tasks associated with provisioning, configuring, monitoring, backing up, and securing a data warehouse, making it easy and inexpensive to manage and maintain. This automation allows you to build petabyte-scale data warehouses in minutes, instead of the weeks or months taken by traditional on-premises implementations.

Amazon Redshift Spectrum is a feature that enables you to run queries against exabytes of unstructured data in Amazon S3, with no loading or ETL required. When you issue a query, it goes to the Amazon Redshift SQL endpoint, which generates and optimizes a query plan. Amazon Redshift determines what data is local and what is in Amazon S3, generates a plan to minimize the amount of Amazon S3 data that needs to be read, and then requests Redshift Spectrum workers out of a shared resource pool to read and process the data from Amazon S3.

Ideal Usage Patterns
Amazon Redshift is ideal for online analytical processing (OLAP) using your existing business intelligence tools. Organizations are using Amazon Redshift to:
• Analyze global sales data for multiple products
• Store historical stock trade data
• Analyze ad impressions and clicks
• Aggregate gaming data
• Analyze social trends
• Measure clinical quality, operational efficiency, and financial performance in health care

Cost Model
An Amazon Redshift data warehouse cluster requires no long-term commitments or upfront costs. This frees you from the capital expense and complexity of planning and purchasing data warehouse capacity ahead of your needs. Charges are based on the size and number of nodes in your cluster. There is no additional charge for backup storage up to 100% of your provisioned storage. For example, if you have an active cluster with 2 XL nodes for a total of 4 TB of storage, AWS provides up to 4 TB of backup storage on Amazon S3 at no additional charge. Backup storage beyond the provisioned storage size, and backups stored after your cluster is terminated, are billed at standard Amazon S3 rates. There is no data transfer charge for communication between Amazon S3 and Amazon Redshift. For more information, see Amazon Redshift pricing.

Performance
Amazon Redshift uses a variety of innovations to obtain very high performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. It uses columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. Amazon Redshift has a massively parallel processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources. The underlying hardware is designed for high-performance data processing, using locally attached storage to maximize throughput between the CPUs and drives, and a 10 GigE mesh network to maximize throughput between nodes. Performance can be tuned based on your data warehousing needs: AWS offers Dense Compute (DC) options with SSD drives, as well as Dense Storage (DS) options.

Durability and Availability
Amazon Redshift automatically detects and replaces a failed node in your data warehouse cluster. The data warehouse cluster is read-only until a replacement node is provisioned and added to the DB, which typically takes only a few minutes. Amazon Redshift makes your replacement node available immediately and streams your most frequently accessed data from Amazon S3 first, to allow you to resume querying your data as quickly as possible. Additionally, your data warehouse cluster remains available in the event of a drive failure; because Amazon Redshift mirrors your data across the cluster, it uses the data from another node to rebuild failed drives. Amazon Redshift clusters reside within one Availability
Zone, but if you wish to have a multi-AZ setup for Amazon Redshift, you can set up a mirror and then self-manage replication and failover.

Scalability and Elasticity
With a few clicks in the console or an API call, you can easily change the number or type of nodes in your data warehouse as your performance or capacity needs change. Amazon Redshift enables you to start with a single 160 GB node and scale up to a petabyte or more of compressed user data using many nodes. For more information, see Clusters and Nodes in Amazon Redshift in the Amazon Redshift Management Guide. While resizing, Amazon Redshift places your existing cluster into read-only mode, provisions a new cluster of your chosen size, and then copies data from your old cluster to your new one in parallel. During this process you pay only for the active Amazon Redshift cluster. You can continue running queries against your old cluster while the new one is being provisioned. After your data has been copied to your new cluster, Amazon Redshift automatically redirects queries to your new cluster and removes the old cluster.

Interfaces
Amazon Redshift has custom JDBC and ODBC drivers that you can download from the Connect Client tab of the console, allowing you to use a wide range of familiar SQL clients. You can also use standard PostgreSQL JDBC and ODBC drivers. For more information about Amazon Redshift drivers, see Amazon Redshift and PostgreSQL. There are numerous examples of validated integrations with many popular BI and ETL vendors. Loads and unloads are attempted in parallel into each compute node to maximize the rate at which you can ingest data into your data warehouse cluster, as well as to and from Amazon S3 and DynamoDB. You can easily load streaming data into Amazon Redshift using Amazon Kinesis Data Firehose, enabling near real-time analytics with the existing business intelligence tools and dashboards you're already using today. Metrics for compute utilization, memory utilization, storage utilization, and read/write traffic to your Amazon Redshift data warehouse cluster are available free of charge via the console or CloudWatch API operations.

AntiPatterns
Amazon Redshift has the following antipatterns:
• Small data sets – Amazon Redshift is built for parallel processing across a cluster. If your data set is less than a hundred gigabytes, you are not going to get all the benefits that Amazon Redshift has to offer, and Amazon RDS may be a better solution.
• Online transaction processing (OLTP) – Amazon Redshift is designed for data warehouse workloads, producing extremely fast and inexpensive analytic capabilities. If you require a fast transactional system, you may want to choose a traditional relational database system built on Amazon RDS or a NoSQL database offering, such as DynamoDB.
• Unstructured data – Data in Amazon Redshift must be structured by a defined schema, rather than supporting arbitrary schema structure for each row. If your data is unstructured, you can perform extract, transform, and load (ETL) on Amazon EMR to get the data ready for loading into Amazon Redshift.
• BLOB data – If you plan on storing large binary files (such as digital video, images, or music), you may want to consider storing the data in Amazon S3 and referencing its location in Amazon Redshift. In this scenario, Amazon Redshift keeps track of metadata (such as item name, size, date created, owner, location, and so on) about your binary objects, but the large objects themselves are stored in Amazon S3.
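To illustrate the parallel loading path described under Interfaces, here is a minimal sketch that connects to a cluster with a standard PostgreSQL driver for Python and issues a COPY command to load CSV files from Amazon S3. The cluster endpoint, credentials, IAM role, table, and bucket path are placeholders; any PostgreSQL-compatible client or JDBC/ODBC driver could be used instead.

```python
import psycopg2  # A standard PostgreSQL driver can connect to Amazon Redshift.

# Placeholder connection details and IAM role; substitute your own.
conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="awsuser",
    password="REPLACE_ME",
)
conn.autocommit = True

copy_sql = """
    COPY sales_events
    FROM 's3://mybucket/curated/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    CSV GZIP;
"""

with conn.cursor() as cur:
    # Parallel load from Amazon S3 into the sales_events table.
    cur.execute(copy_sql)
    # A simple aggregate to verify the load.
    cur.execute("SELECT COUNT(*) FROM sales_events;")
    print("Rows loaded:", cur.fetchone()[0])

conn.close()
```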
Amazon Elasticsearch Service
Amazon Elasticsearch Service (Amazon ES) makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full-text search, application monitoring, and more. Amazon ES is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities, along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services, including Amazon Kinesis Data Firehose, AWS Lambda, and Amazon CloudWatch, so that you can go from raw data to actionable insights quickly.

It's easy to get started with Amazon ES. You can set up and configure your Amazon ES domain in minutes from the AWS Management Console. Amazon ES provisions all the resources for your domain and launches it. The service automatically detects and replaces failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructure and Elasticsearch software. Amazon ES allows you to easily scale your cluster via a single API call or a few clicks in the console. With Amazon ES, you get direct access to the Elasticsearch open source API, so that code and applications you're already using with your existing Elasticsearch environments will work seamlessly.

Ideal Usage Pattern
Amazon Elasticsearch Service is ideal for querying and searching large amounts of data. Organizations can use Amazon ES to do the following:
• Analyze activity logs, e.g., logs for customer-facing applications or websites
• Analyze CloudWatch logs with Elasticsearch
• Analyze product usage data coming from various services and systems
• Analyze social media sentiments and CRM data, and find trends for your brand and products
• Analyze data stream updates from other AWS services, e.g., Amazon Kinesis Data Streams and Amazon DynamoDB
• Provide customers a rich search and navigation experience
• Usage monitoring for mobile applications

Cost Model
With Amazon Elasticsearch Service, you pay only for what you use. There are no minimum fees or upfront commitments. You are charged for Amazon ES instance hours, Amazon EBS storage (if you choose this option), and standard data transfer fees. You can get started with our free tier, which provides free usage of up to 750 hours per month of a single-AZ t2.micro.elasticsearch or t2.small.elasticsearch instance and 10 GB per month of optional Amazon EBS storage (Magnetic or General Purpose). Amazon ES allows you to add data durability through automated and manual snapshots of your cluster. Amazon ES provides storage space for automated snapshots free of charge for each Amazon Elasticsearch domain. Automated snapshots are retained for a period of 14 days. Manual snapshots are charged according to Amazon S3 storage rates. Data transfer for using the snapshots is free of charge. For more information, see Amazon Elasticsearch Service Pricing.

Performance
Performance of Amazon ES depends on multiple factors, including instance type, workload, index, number of shards used, read replicas, and storage configuration – instance storage or EBS storage (General Purpose SSD). Indexes are made up of shards of data, which can be distributed on different instances in multiple Availability Zones. Read replicas of the shards are maintained by Amazon ES in a different Availability Zone if zone awareness is checked. Amazon ES
can use either the fast SSD instance storage for storing indexes, or multiple EBS volumes. A search engine makes heavy use of storage devices, and making disks faster results in faster query and search performance.

Durability and Availability
You can configure your Amazon ES domains for high availability by enabling the Zone Awareness option, either at domain creation time or by modifying a live domain. When Zone Awareness is enabled, Amazon ES distributes the instances supporting the domain across two different Availability Zones. Then, if you enable replicas in Elasticsearch, the instances are automatically distributed in such a way as to deliver cross-zone replication. You can build data durability for your Amazon ES domain through automated and manual snapshots. You can use snapshots to recover your domain with preloaded data, or to create a new domain with preloaded data. Snapshots are stored in Amazon S3, which is secure, durable, highly scalable object storage. By default, Amazon ES automatically creates daily snapshots of each domain. In addition, you can use the Amazon ES snapshot APIs to create additional manual snapshots. The manual snapshots are stored in Amazon S3. Manual snapshots can be used for cross-region disaster recovery and to provide additional durability.

Scalability and Elasticity
You can add or remove instances, and easily modify Amazon EBS volumes to accommodate data growth. You can write a few lines of code that will monitor the state of your domain through Amazon CloudWatch metrics and call the Amazon Elasticsearch Service API to scale your domain up or down based on thresholds you set. The service will execute the scaling without any downtime. Amazon Elasticsearch Service supports one EBS volume (max size of 1.5 TB) per instance associated with a domain. With the default maximum of 20 data nodes allowed per Amazon ES domain, you can allocate about 30 TB of EBS storage to a single domain. You can request a service limit increase up to 100 instances per domain by creating a case with the AWS Support Center. With 100 instances, you can allocate about 150 TB of EBS storage to a single domain.

Interfaces
Amazon ES supports many of the commonly used Elasticsearch APIs, so code, applications, and popular tools that you're already using with your current Elasticsearch environments will work seamlessly. For a full list of supported Elasticsearch operations, see our documentation. The AWS CLI, API, or the AWS Management Console can be used for creating and managing your domains as well. Amazon ES supports integration with several AWS services, including streaming data from S3 buckets, Amazon Kinesis Data Streams, and DynamoDB Streams. Both integrations use a Lambda function as an event handler in the cloud that responds to new data in Amazon S3 and Amazon Kinesis Data Streams by processing it and streaming the data to your Amazon ES domain. Amazon ES also integrates with Amazon CloudWatch for monitoring Amazon ES domain metrics, and with CloudTrail for auditing configuration API calls to Amazon ES domains. Amazon ES includes built-in integration with Kibana, an open source analytics and visualization platform, and supports integration with Logstash, an open source data pipeline that helps you process logs and other event data. You can set up your Amazon ES domain as the backend store for all logs coming through your Logstash implementation to easily ingest structured and unstructured data from a variety of sources.
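Because Amazon ES exposes the standard Elasticsearch REST API, a short sketch of indexing and searching a document can be done with plain HTTPS calls. The domain endpoint below is a placeholder, and this example assumes the domain's access policy permits the caller; in many production setups requests would instead be signed with AWS Signature Version 4.

```python
import json

import requests  # Plain HTTPS calls to the Elasticsearch REST API.

# Placeholder domain endpoint; assumes the domain access policy allows this
# caller (otherwise requests must be signed with AWS Signature Version 4).
ENDPOINT = "https://search-mydomain-abc123.us-east-1.es.amazonaws.com"
HEADERS = {"Content-Type": "application/json"}

doc = {
    "timestamp": "2018-06-01T12:00:00Z",
    "level": "ERROR",
    "message": "timeout calling payment API",
}

# Index a log document.
requests.post(f"{ENDPOINT}/app-logs/_doc", data=json.dumps(doc), headers=HEADERS)

# Full-text search for error events mentioning "timeout".
query = {"query": {"match": {"message": "timeout"}}}
resp = requests.get(
    f"{ENDPOINT}/app-logs/_search", data=json.dumps(query), headers=HEADERS
)
print(resp.json()["hits"]["total"])
```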
AntiPatterns
• Online transaction processing (OLTP) – Amazon ES is a real-time, distributed search and analytics engine. There is no support for transactions or processing on data manipulation. If your requirement is for a fast transactional system, then a traditional relational database system built on Amazon RDS, or a NoSQL database offering functionality such as DynamoDB, is a better choice.
• Ad hoc data querying – While Amazon ES takes care of the operational overhead of building a highly scalable Elasticsearch cluster, if running ad hoc or one-off queries against your data set is your use case, Amazon Athena is a better choice. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL, without provisioning servers.

Amazon QuickSight
Amazon QuickSight is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy for all employees within an organization to build visualizations, perform ad hoc analysis, and quickly get business insights from their data, anytime, on any device. It can connect to a wide variety of data sources, including flat files (e.g., CSV and Excel), on-premises databases including SQL Server, MySQL, and PostgreSQL, and AWS resources like Amazon RDS databases, Amazon Redshift, Amazon Athena, and Amazon S3. Amazon QuickSight enables organizations to scale their business analytics capabilities to hundreds of thousands of users, and delivers fast and responsive query performance by using a robust in-memory engine (SPICE).

Amazon QuickSight is built with "SPICE" – a Super-fast, Parallel, In-memory Calculation Engine. Built from the ground up for the cloud, SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, and machine code generation to run interactive queries on large datasets and get rapid responses. SPICE supports rich calculations to help you derive valuable insights from your analysis without worrying about provisioning or managing infrastructure. Data in SPICE is persisted until it is explicitly deleted by the user. SPICE also automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast, interactive analysis across a wide variety of AWS data sources.

Ideal Usage Patterns
Amazon QuickSight is an ideal business intelligence tool, allowing end users to create visualizations that provide insight into their data to help them make better business decisions. Amazon QuickSight can be used to do the following:
• Quick, interactive, ad hoc exploration and optimized visualization of data
• Create and share dashboards and KPIs to provide insight into your data
• Create Stories, which are guided tours through specific views of an analysis that allow you to share insights and collaborate with others; they are used to convey key points, a thought process, or the evolution of an analysis for collaboration
• Analyze and visualize data coming from logs and stored in S3
• Analyze and visualize data from on-premises databases like SQL Server, Oracle, PostgreSQL, and MySQL
• Analyze and visualize data in various AWS resources, e.g., Amazon RDS databases, Amazon Redshift, Amazon Athena, and Amazon S3
• Analyze and visualize data in software as a service (SaaS) applications like Salesforce
• Analyze and visualize data in data sources that can be connected
to using a JDBC/ODBC connection

Cost Model
Amazon QuickSight has two different editions for pricing: Standard edition and Enterprise edition. For an annual subscription, it is $9/user/month for Standard edition and $18/user/month for Enterprise edition, both with 10 GB of SPICE capacity included. You can get additional SPICE capacity for $0.25/GB/month for Standard edition and $0.38/GB/month for Enterprise edition. There is also a month-to-month option for both editions: $12/user/month for Standard edition and $24/user/month for Enterprise edition. Additional information on pricing can be found at Amazon QuickSight Pricing. Both editions offer a full set of features for creating and sharing data visualizations. Enterprise edition also offers encryption at rest and Microsoft Active Directory (AD) integration. In Enterprise edition, you select a Microsoft AD directory in AWS Directory Service, and you use that Active Directory to identify and manage your Amazon QuickSight users and administrators.

Performance
Amazon QuickSight is built with "SPICE," a Super-fast, Parallel, In-memory Calculation Engine. Built from the ground up for the cloud, SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, and machine code generation to run interactive queries on large datasets and get rapid responses.

Durability and Availability
SPICE automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users, who can all simultaneously perform fast, interactive analysis across a wide variety of AWS data sources.

Scalability and Elasticity
Amazon QuickSight is a fully managed service, and it internally takes care of scaling to meet the demands of your end users. With Amazon QuickSight you don't need to worry about scale. You can seamlessly grow your data from a few hundred megabytes to many terabytes of data without managing any infrastructure.

Interfaces
Amazon QuickSight can connect to a wide variety of data sources, including flat files (CSV, TSV, CLF, ELF), on-premises databases like SQL Server, MySQL, and PostgreSQL, AWS data sources including Amazon RDS, Amazon Aurora, Amazon Redshift, Amazon Athena, and Amazon S3, and SaaS applications like Salesforce. You can also export analyses from a visual to a file in CSV format. You can share an analysis, dashboard, or story using the share icon from the Amazon QuickSight service interface. You will be able to select the recipients (email address, username, or group name), permission levels, and other options before sharing the content with others.

AntiPatterns
• Highly formatted canned reports – Amazon QuickSight is much better suited for ad hoc query, analysis, and visualization of data. For highly formatted reports, e.g., formatted financial statements, consider using a different tool.
• ETL – While Amazon QuickSight can perform some transformations, it is not a full-fledged ETL tool. AWS offers AWS Glue, which is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

Amazon EC2
Amazon EC2, with instances acting as AWS virtual machines, provides an ideal platform for operating your own self-managed big data analytics applications on AWS infrastructure. Almost any software you can install on Linux or Windows virtualized environments can be run on Amazon EC2, and
you can use the pay-as-you-go pricing model. What you don't get are the application-level managed services that come with the other services mentioned in this whitepaper. There are many options for self-managed big data analytics; here are some examples:
• A NoSQL offering, such as MongoDB
• A data warehouse or columnar store, like Vertica
• A Hadoop cluster
• An Apache Storm cluster
• An Apache Kafka environment

Ideal Usage Patterns
• Specialized Environment – When running a custom application, a variation of a standard Hadoop set, or an application not covered by one of our other offerings, Amazon EC2 provides the flexibility and scalability to meet your computing needs.
• Compliance Requirements – Certain compliance requirements may require you to run applications yourself on Amazon EC2, instead of using a managed service offering.

Cost Model
Amazon EC2 has a variety of instance types in a number of instance families (standard, high CPU, high memory, high I/O, etc.) and different pricing options (On-Demand, Reserved, and Spot). Depending on your application requirements, you may want to use additional services along with Amazon EC2, such as Amazon Elastic Block Store (Amazon EBS) for directly attached persistent storage, or Amazon S3 as a durable object store; each comes with its own pricing model. If you do run your big data application on Amazon EC2, you are responsible for any license fees, just as you would be in your own data center. The AWS Marketplace offers many different third-party big data software packages preconfigured to launch with a simple click of a button.

Performance
Performance in Amazon EC2 is driven by the instance type that you choose for your big data platform. Each instance type has a different amount of CPU, RAM, storage, IOPS, and networking capability, so that you can pick the right performance level for your application requirements.

Durability and Availability
Critical applications should be run in a cluster across multiple Availability Zones within an AWS Region, so that any instance or data center failure does not affect application users. For non-uptime-critical applications, you can back up your application to Amazon S3 and restore it to any Availability Zone in the region if an instance or zone failure occurs. Other options exist, depending on which application you are running and your requirements, such as mirroring your application.

Scalability and Elasticity
Auto Scaling is a service that allows you to automatically scale your Amazon EC2 capacity up or down according to conditions that you define. With Auto Scaling, you can ensure that the number of EC2 instances you're using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by CloudWatch and available at no additional charge beyond CloudWatch fees.

Interfaces
Amazon EC2 can be managed programmatically via API, SDK, or the console. Metrics for compute utilization, memory utilization, storage utilization, network consumption, and read/write traffic to your instances are available free of charge using the console or CloudWatch API operations. The interfaces for the big data analytics software that you run on top of Amazon EC2 vary based on the characteristics of the software you choose.
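As a small illustration of the programmatic interface just described, here is a sketch that launches a single instance for a self-managed analytics tool with the AWS SDK for Python. The AMI ID, key pair, security group, and bootstrap commands are placeholders chosen for illustration only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder bootstrap script; install your analytics software here.
USER_DATA = """#!/bin/bash
yum update -y
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="r4.2xlarge",            # memory-optimized for analytics workloads
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
    UserData=USER_DATA,
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "self-managed-analytics"}],
        }
    ],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```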
AntiPatterns
Amazon EC2 has the following antipatterns:
• Managed Service – If your requirement is a managed service offering, where you abstract the infrastructure layer and administration away from the big data analytics, then this "do it yourself" model of managing your own analytics software on Amazon EC2 may not be the correct choice.
• Lack of Expertise or Resources – If your organization does not have, or does not want to expend, the resources or expertise to install and manage a high-availability installation for the system in question, you should consider using the AWS equivalent, such as Amazon EMR, DynamoDB, Amazon Kinesis Data Streams, or Amazon Redshift.

Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. You don't need to load your data into Athena, as it works directly with data stored in S3. Just log into the Athena Console, define your table schema, and start querying. Amazon Athena uses Presto with full ANSI SQL support and works with a variety of standard data formats, including CSV, JSON, ORC, Apache Parquet, and Apache Avro.

Ideal Usage Patterns
• Interactive ad hoc querying for web logs – Athena is a good tool for interactive, one-time SQL queries against data on Amazon S3. For example, you could use Athena to run a query on web and application logs to troubleshoot a performance issue. You simply define a table for your data and start querying using standard SQL. Athena integrates with Amazon QuickSight for easy visualization.
• To query staging data before loading into Redshift – You can stage your raw data in S3 before processing and loading it into Redshift, and then use Athena to query that data.
• Send AWS service logs to S3 for analysis with Athena – CloudTrail, CloudFront, ELB/ALB, and VPC flow logs can be analyzed with Athena. AWS CloudTrail logs include details about any API calls made to your AWS services, including from the console. CloudFront logs can be used to explore users' surfing patterns across web properties served by CloudFront. Querying ELB/ALB logs allows you to see the source of traffic, latency, and bytes transferred to and from Elastic Load Balancing instances and backend applications. VPC flow logs capture information about the IP traffic going to and from network interfaces in VPCs in the Amazon VPC service. The logs allow you to investigate network traffic patterns and identify threats and risks across your VPC estate.
• Building interactive analytical solutions with notebook-based tools, e.g., RStudio, Jupyter, or Zeppelin – Data scientists and analysts are often concerned about managing the infrastructure behind big data platforms while running notebook-based solutions such as RStudio, Jupyter, and Zeppelin. Amazon Athena makes it easy to analyze data using standard SQL without the need to manage infrastructure. Integrating these notebook-based solutions with Amazon Athena gives data scientists a powerful platform for building interactive analytical solutions.

Cost Model
Amazon Athena has simple pay-as-you-go pricing, with no upfront costs or minimum fees, and you only pay for the resources you consume. It is priced per query, at $5 per TB of data scanned, and charges are based on the amount of data scanned by the query. You can save from 30% to 90% on your per-query costs and get better performance by compressing, partitioning, and
converting your data into columnar formats. Converting data to a columnar format allows Athena to read only the columns it needs to process the query. You are charged for the number of bytes scanned by Amazon Athena, rounded up to the nearest megabyte, with a 10 MB minimum per query. There are no charges for Data Definition Language (DDL) statements like CREATE/ALTER/DROP TABLE, statements for managing partitions, or failed queries. Cancelled queries are charged based on the amount of data scanned.

Performance
You can improve the performance of your query by compressing, partitioning, and converting your data into columnar formats. Amazon Athena supports open source columnar data formats such as Apache Parquet and Apache ORC. Converting your data into a compressed, columnar format lowers your cost and improves query performance by enabling Athena to scan less data from S3 when executing your query.

Durability and Availability
Amazon Athena is highly available and executes queries using compute resources across multiple facilities, automatically routing queries appropriately if a particular facility is unreachable. Athena uses Amazon S3 as its underlying data store, making your data highly available and durable. Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Your data is redundantly stored across multiple facilities, and multiple devices in each facility.

Scalability and Elasticity
Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. Since it is serverless, it can scale automatically, as needed.

Security, Authorization, and Encryption
Amazon Athena allows you to control access to your data by using AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), and Amazon S3 bucket policies. With IAM policies, you can grant IAM users fine-grained control of your S3 buckets. By controlling access to data in S3, you can restrict users from querying it using Athena. You can query data that's been protected by:
• Server-side encryption with an Amazon S3-managed key
• Server-side encryption with an AWS KMS-managed key
• Client-side encryption with an AWS KMS-managed key
Amazon Athena can also directly integrate with AWS Key Management Service (KMS) to encrypt your result sets, if desired.

Interfaces
Querying can be done by using the Athena Console. Athena also supports the CLI, API via SDK, and JDBC. Athena also integrates with Amazon QuickSight for creating visualizations based on Athena queries.

AntiPatterns
Amazon Athena has the following antipatterns:
• Enterprise Reporting and Business Intelligence Workloads – Amazon Redshift is a better tool for enterprise reporting and business intelligence workloads involving iceberg queries or cached data at the nodes. Data warehouses pull data from many sources, format and organize it, store it, and support complex, high-speed queries that produce business reports. The query engine in Amazon Redshift has been optimized to perform especially well on data warehouse workloads.
• ETL Workloads – You should use Amazon EMR or AWS Glue if you are looking for an ETL tool to process extremely large datasets and analyze them with the latest big data processing frameworks such as Spark, Hadoop, Presto, or HBase.
• RDBMS – Athena is not a relational/transactional database. It is not meant to be a replacement for SQL engines like MySQL.
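To illustrate the SDK interface described above, here is a minimal sketch that runs an ad hoc query with the AWS SDK for Python, polls for completion, and prints the results. The database, table, and result bucket are placeholders that must already exist.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder database, table, and result bucket; these must already exist.
QUERY = "SELECT status, COUNT(*) AS requests FROM weblogs GROUP BY status"

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://mybucket/athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes; Athena also writes results to the S3 location above.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```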
Solving Big Data Problems on AWS
In this whitepaper, we have examined some tools available on AWS for big data analytics. This paper provides a good reference point when starting to design your big data applications. However, there are additional aspects you should consider when selecting the right tools for your specific use case. In general, each analytical workload has certain characteristics and requirements that dictate which tool to use, such as:
• How quickly do you need analytic results: in real time, in seconds, or is an hour a more appropriate time frame?
• How much value will these analytics provide your organization, and what budget constraints exist?
• How large is the data, and what is its growth rate?
• How is the data structured?
• What integration capabilities do the producers and consumers have?
• How much latency is acceptable between the producers and consumers?
• What is the cost of downtime, or how available and durable does the solution need to be?
• Is the analytic workload consistent or elastic?
Each one of these questions helps guide you to the right tool. In some cases, you can simply map your big data analytics workload to one of the services based on a set of requirements. However, in most real-world big data analytic workloads, there are many different, and sometimes conflicting, characteristics and requirements on the same data set. For example, some result sets may have real-time requirements as a user interacts with a system, while other analytics could be batched and run on a daily basis. These different requirements over the same data set should be decoupled and solved by using more than one tool. If you try to solve both of these examples using the same toolset, you end up either overprovisioning, and therefore overpaying, for unnecessary response time, or you have a solution that does not respond fast enough to your users in real time. Matching the best-suited tool to each analytical problem results in the most cost-effective use of your compute and storage resources.
Big data doesn't need to mean "big costs." When designing your applications, it's important to make sure that your design is cost efficient. If it's not, relative to the alternatives, then it's probably not the right design. Another common misconception is that using multiple tool sets to solve a big data problem is more expensive or harder to manage than using one big tool. If you take the same example of two different requirements on the same data set, the real-time request may be low on CPU but high on I/O, while the slower processing request may be very compute intensive. Decoupling can end up being much less expensive and easier to manage because you can build each tool to exact specifications and not overprovision. With the AWS pay-as-you-go model, this equates to a much better value because you could run the batch analytics in just one hour and therefore only pay for the compute resources for that hour. Also, you may find this approach easier to manage than leveraging a single system that tries to meet all of the requirements. Solving for different requirements with one tool results in attempting to fit a square peg (real-time requests) into a round hole (a large data warehouse).
The AWS platform makes it easy to decouple your architecture by having different tools analyze the same data set. AWS services have built-in integration, so moving a subset of data from one tool to another can be done very easily and quickly using parallelization.
Let's put this into practice by exploring a few real-world big data analytics problem scenarios and walking through an AWS architectural solution.
Example 1: Queries against an Amazon S3 Data Lake
Data lakes are an increasingly popular way to store and analyze both structured and unstructured data. If you use an Amazon S3 data lake, AWS Glue can make all your data immediately available for analytics without moving the data. AWS Glue crawlers can scan your data lake and keep the AWS Glue Data Catalog in sync with the underlying data. You can then directly query your data lake with Amazon Athena and Amazon Redshift Spectrum. You can also use the AWS Glue Data Catalog as your external Apache Hive Metastore for big data applications running on Amazon EMR.
1. An AWS Glue crawler connects to a data store, progresses through a prioritized list of classifiers to extract the schema of your data and other statistics, and then populates the AWS Glue Data Catalog with this metadata. Crawlers can run periodically to detect the availability of new data as well as changes to existing data, including table definition changes. Crawlers automatically add new tables, new partitions to existing tables, and new versions of table definitions. You can customize AWS Glue crawlers to classify your own file types. A crawler-creation sketch follows this list.
2. The AWS Glue Data Catalog is a central repository to store structural and operational metadata for all your data assets. For a given data set, you can store its table definition and physical location, add business-relevant attributes, and track how this data has changed over time. The AWS Glue Data Catalog is Apache Hive Metastore compatible and is a drop-in replacement for the Apache Hive Metastore for big data applications running on Amazon EMR. For more information on setting up your EMR cluster to use the AWS Glue Data Catalog as an Apache Hive Metastore, click here.
3. The AWS Glue Data Catalog also provides out-of-box integration with Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. Once you add your table definitions to the AWS Glue Data Catalog, they are available for ETL and also readily available for querying in Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum, so that you can have a common view of your data between these services.
4. Using a BI tool like Amazon QuickSight enables you to easily build visualizations, perform ad hoc analysis, and quickly get business insights from your data. Amazon QuickSight supports data sources such as Amazon Athena, Amazon Redshift Spectrum, Amazon S3, and many others; see Supported Data Sources.
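The crawler in step 1 can be created and run programmatically. The sketch below is one possible way to do that with boto3; the IAM role, database name, bucket path, and schedule are hypothetical placeholders.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Hypothetical names: adjust the IAM role, database, and S3 path for your data lake.
glue.create_crawler(
    Name="datalake-raw-crawler",
    Role="arn:aws:iam::123456789012:role/ExampleGlueCrawlerRole",
    DatabaseName="datalake_raw",
    Targets={"S3Targets": [{"Path": "s3://example-datalake-bucket/raw/"}]},
    # Run nightly so new partitions and schema changes are picked up (step 1).
    Schedule="cron(0 2 * * ? *)",
    TablePrefix="raw_",
)

# Populate the Data Catalog now instead of waiting for the schedule.
glue.start_crawler(Name="datalake-raw-crawler")

# Once the crawler finishes (it can take a few minutes), the tables it registered
# are visible to Athena, EMR, and Redshift Spectrum through the shared catalog
# (steps 2 and 3).
for table in glue.get_tables(DatabaseName="datalake_raw")["TableList"]:
    print(table["Name"], table["StorageDescriptor"]["Location"])
```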
Example 2: Capturing and Analyzing Sensor Data
An international air conditioner manufacturer has many large air conditioners that it sells to various commercial and industrial companies. Not only do they sell the air conditioner units, but to better position themselves against their competitors, they also offer add-on services where you can see real-time dashboards in a mobile app or a web browser. Each unit sends its sensor information for processing and analysis. This data is used by the manufacturer and its customers. With this capability, the manufacturer can visualize the data set and spot trends. Currently they have a few thousand pre-purchased air conditioning (A/C) units with this capability. They expect to deliver these to customers in the next couple of months and are hoping that, in time, thousands of units throughout the world will be using this platform. If successful, they would like to expand this offering to their consumer line as well, with a much larger volume and a greater market share. The solution needs to be able to handle massive amounts of data and scale as they grow their business without interruption.
How should you design such a system? First, break it up into two work streams, both originating from the same data:
• A/C unit's current information, with near-real-time requirements and a large number of customers consuming this information
• All historical information on the A/C units, to run trending and analytics for internal use
The data flow architecture in the following illustration shows how to solve this big data problem.
Capturing and Analyzing Sensor Data
1. The process begins with each A/C unit providing a constant data stream to Amazon Kinesis Data Streams. This provides an elastic and durable interface the units can talk to that can be scaled seamlessly as more and more A/C units are sold and brought online. A minimal producer sketch appears after this walkthrough.
2. Using the Amazon Kinesis Data Streams provided tools, such as the Kinesis Client Library or SDK, a simple application is built on Amazon EC2 to read data as it comes into Amazon Kinesis Data Streams, analyze it, and determine whether the data warrants an update to the real-time dashboard. It looks for changes in system operation, temperature fluctuations, and any errors that the units encounter.
3. This data flow needs to occur in near real time so that customers and maintenance teams can be alerted as quickly as possible if there is an issue with the unit. The data in the dashboard does have some aggregated trend information, but it is mainly the current state as well as any system errors, so the data needed to populate the dashboard is relatively small. Additionally, there will be lots of potential access to this data from the following sources:
  o Customers checking on their system via a mobile device or browser
  o Maintenance teams checking the status of their fleet
  o Data and intelligence algorithms and analytics in the reporting platform that spot trends, which can then be sent out as alerts, such as if the A/C fan has been running unusually long with the building temperature not going down
DynamoDB was chosen to store this near-real-time data set because it is both highly available and scalable; throughput to this data can be easily scaled up or down to meet the needs of its consumers as the platform is adopted and usage grows.
4. The reporting dashboard is a custom web application that is built on top of this data set and run on Amazon EC2. It provides content based on the system status and trends, as well as alerting customers and maintenance crews of any issues that may come up with the unit.
5. The customer accesses the data from a mobile device or a web browser to get the current status of the system and visualize historical trends.
The data flow (steps 2-5) that was just described is built for near-real-time reporting of information to human consumers. It is built and designed for low latency and can scale very quickly to meet demand. The data flow (steps 6-9) depicted in the lower part of the diagram does not have such stringent speed and latency requirements. This allows the architect to design a different solution stack that can hold larger amounts of data at a much smaller cost per byte of information and choose less expensive compute and storage resources.
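As a minimal illustration of step 1, the following sketch shows an A/C unit (or a gateway acting on its behalf) publishing one reading to the stream with boto3. The stream name and payload fields are assumptions; a production producer would batch records with PutRecords and handle retries and back-pressure.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM = "ac-telemetry"  # hypothetical stream name


def publish_reading(unit_id: str, temperature_c: float, fan_rpm: int, error_code: int = 0) -> None:
    """Send one sensor reading to the stream (step 1 of the data flow)."""
    record = {
        "unit_id": unit_id,
        "temperature_c": temperature_c,
        "fan_rpm": fan_rpm,
        "error_code": error_code,
        "ts": int(time.time()),
    }
    kinesis.put_record(
        StreamName=STREAM,
        Data=json.dumps(record).encode("utf-8"),
        # Partitioning by unit ID keeps each unit's readings ordered within a shard.
        PartitionKey=unit_id,
    )


# Example: a single A/C unit reporting a slightly elevated temperature.
publish_reading("unit-0001", temperature_c=27.4, fan_rpm=1800)
```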
6. To read from the Amazon Kinesis stream, there is a separate Kinesis-enabled application that probably runs on a smaller EC2 instance that scales at a slower rate. While this application is going to analyze the same data set as the upper data flow, the ultimate purpose of this data is to store it for long-term record and to host the data set in a data warehouse. This data set ends up being all data sent from the systems and allows a much broader set of analytics to be performed without the near-real-time requirements.
7. The data is transformed by the Kinesis-enabled application into a format that is suitable for long-term storage, for loading into the data warehouse, and for storing on Amazon S3. The data on Amazon S3 not only serves as a parallel ingestion point to Amazon Redshift, but is durable storage that will hold all data that ever runs through this system; it can be the single source of truth. It can be used to load other analytical tools if additional requirements arise. Amazon S3 also comes with native integration with Amazon Glacier if any data needs to be cycled into long-term, low-cost storage.
8. Amazon Redshift is again used as the data warehouse for the larger data set. It can scale easily when the data set grows larger by adding another node to the cluster. A load sketch for this step follows the example.
9. For visualizing the analytics, one of the many partner visualization platforms can be used via the ODBC/JDBC connection to Amazon Redshift. This is where the reports, graphs, and ad hoc analytics can be performed on the data set to find certain variables and trends that can lead to A/C units underperforming or breaking.
This architecture can start off small and grow as needed. Additionally, by decoupling the two different work streams from each other, they can grow at their own rate without upfront commitment, allowing the manufacturer to assess the viability of this new offering without a large initial investment. You could easily imagine further additions, such as adding Amazon ML to predict how long an A/C unit will last and preemptively sending out maintenance teams based on its prediction algorithms, to give their customers the best possible service and experience. This level of service would be a differentiator to the competition and lead to increased future sales.
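Steps 7 and 8 can be illustrated with a single COPY statement that loads the transformed files from Amazon S3 into Amazon Redshift. The sketch below submits it through the Redshift Data API, which is newer than this paper; a JDBC/ODBC connection works just as well. All cluster, table, bucket, and role names are hypothetical.

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Load the transformed telemetry files from S3 into the warehouse (steps 7-8).
copy_sql = """
COPY telemetry_history
FROM 's3://example-analytics-bucket/telemetry/2016/02/'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
FORMAT AS JSON 'auto'
GZIP;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="example-analytics-cluster",
    Database="analytics",
    DbUser="loader",
    Sql=copy_sql,
)

# The statement runs asynchronously; poll describe_statement for completion.
print("Submitted COPY, statement id:", response["Id"])
```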
Example 3: Sentiment Analysis of Social Media
A large toy maker has been growing very quickly and expanding their product line. After each new toy release, the company wants to understand how consumers are enjoying and using their products. Additionally, the company wants to ensure that their consumers are having a good experience with their products. As the toy ecosystem grows, the company wants to ensure that their products are still relevant to their customers and that they can plan future roadmap items based on customer feedback. The company wants to capture the following insights from social media:
• Understand how consumers are using their products
• Ensure customer satisfaction
• Plan future roadmaps
Capturing the data from various social networks is relatively easy, but the challenge is building the intelligence programmatically. After the data is ingested, the company wants to be able to analyze and classify the data in a cost-effective and programmatic way. To do this, you can use the architecture in the following illustration.
Sentiment Analysis of Social Media
The first step is to decide which social media sites to listen to. Then create an application on Amazon EC2 that polls those sites using their corresponding APIs. Next, create an Amazon Kinesis stream, because we might have multiple data sources: Twitter, Tumblr, and so on. This way, a new stream can be created each time a new data source is added, and you can take advantage of the existing application code and architecture. In this example, a new Amazon Kinesis stream is also created to copy the raw data to Amazon S3.
For archival, long-term analysis, and historical reference, raw data is stored in Amazon S3. Additional Amazon ML batch models can be run on the data in Amazon S3 to perform predictive analysis and track consumer buying trends.
As noted in the architecture diagram, Lambda is used for processing and normalizing the data and requesting predictions from Amazon ML. After the Amazon ML prediction is returned, the Lambda function can take actions based on the prediction, for example, to route a social media post to the customer service team for further review. A sketch of such a function appears after this example.
Amazon ML is used to make predictions on the input data. For example, an ML model can be built to analyze a social media comment to determine whether the customer expressed negative sentiment about a product. To get accurate predictions with Amazon ML, start with training data and ensure that your ML models are working properly. If you are creating ML models for the first time, see Tutorial: Using Amazon ML to Predict Responses to a Marketing Offer. As mentioned earlier, if multiple social network data sources are used, then a different ML model for each one is suggested to ensure prediction accuracy.
Finally, actionable data is sent to Amazon SNS using Lambda and delivered to the proper resources by text message or email for further investigation.
As part of the sentiment analysis, creating an Amazon ML model that is updated regularly is imperative for accurate results. Additional metrics about a specific model can be graphically displayed via the console, such as accuracy, false positive rate, precision, and recall. For more information, see Step 4: Review the ML Model Predictive Performance and Set a Cut-Off.
By using a combination of Amazon Kinesis Data Streams, Lambda, Amazon ML, and Amazon SES, we have created a scalable and easily customizable social listening platform. Note that this scenario does not describe creating an Amazon ML model. You would create the model initially and then need to update it periodically, or as workloads change, to keep it accurate.
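The Lambda-plus-Amazon ML step described above might look something like the following sketch. The model ID, real-time endpoint, feature name, label meaning, and SNS topic are all assumptions for illustration; Amazon ML here is the (now legacy) service discussed in this paper.

```python
import boto3

# Hypothetical identifiers: the ML model, its real-time endpoint, and the SNS topic.
ML_MODEL_ID = "ml-ExampleSentimentModel"
PREDICT_ENDPOINT = "https://realtime.machinelearning.us-east-1.amazonaws.com"
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:negative-sentiment-alerts"

ml = boto3.client("machinelearning", region_name="us-east-1")
sns = boto3.client("sns", region_name="us-east-1")


def handler(event, context):
    """Normalize each social media post, request a prediction, and alert on negatives."""
    posts = event.get("posts", [])
    for post in posts:
        text = post.get("text", "").strip()
        if not text:
            continue
        prediction = ml.predict(
            MLModelId=ML_MODEL_ID,
            Record={"comment_text": text},   # feature name assumed by the model
            PredictEndpoint=PREDICT_ENDPOINT,
        )["Prediction"]
        # For a binary model, predictedLabel "1" is assumed here to mean negative sentiment.
        if prediction.get("predictedLabel") == "1":
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject="Negative sentiment detected",
                Message="Post {} flagged for review: {}".format(post.get("id"), text[:200]),
            )
    return {"processed": len(posts)}
```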
Conclusion
As more and more data is generated and collected, data analysis requires scalable, flexible, and high-performing tools to provide insights in a timely fashion. However, organizations are facing a growing big data ecosystem where new tools emerge and "die" very quickly. Therefore, it can be very difficult to keep pace and choose the right tools.
This whitepaper offers a first step to help you solve this challenge. With a broad set of managed services to collect, process, and analyze big data, the AWS platform makes it easier to build, deploy, and scale big data applications. This allows you to focus on business problems instead of updating and managing these tools.
AWS provides many solutions to address your big data analytic requirements. Most big data architecture solutions use multiple AWS tools to build a complete solution. This approach helps meet stringent business requirements in the most cost-optimized, performant, and resilient way possible. The result is a flexible big data architecture that is able to scale along with your business.
Contributors
The following individuals and organizations contributed to this document:
• Erik Swensson, Manager, Solutions Architecture, Amazon Web Services
• Erick Dame, Solutions Architect, Amazon Web Services
• Shree Kenghe, Solutions Architect, Amazon Web Services
Further Reading
The following resources can help you get started in running big data analytics on AWS:
• Big Data on AWS. View the comprehensive portfolio of big data services, as well as links to other resources such as AWS big data partners, tutorials, articles, and AWS Marketplace offerings on big data solutions. Contact us if you need any help.
• Read the AWS Big Data Blog. The blog features real-life examples and ideas, updated regularly, to help you collect, store, clean, process, and visualize big data.
• Try one of the Big Data Test Drives. Explore the rich ecosystem of products designed to address big data challenges using AWS. Test Drives are developed by AWS Partner Network (APN) Consulting and Technology partners and are provided free of charge for education, demonstration, and evaluation purposes.
• Take an AWS training course on big data. The Big Data on AWS course introduces you to cloud-based big data solutions and Amazon EMR. We show you how to use Amazon EMR to process data using the broad ecosystem of Hadoop tools like Pig and Hive. We also teach you how to create big data environments, work with DynamoDB and Amazon Redshift, understand the benefits of Amazon Kinesis Streams, and leverage best practices to design big data environments for security and cost-effectiveness.
• View the Big Data Customer Case Studies. Learn from the experience of other customers who have built powerful and successful big data platforms on the AWS cloud.
Document Revisions
December 2018: Revised to add information on Amazon Athena, Amazon QuickSight, and AWS Glue; general updates throughout
January 2016: Revised to add information on Amazon Machine Learning, AWS Lambda, and Amazon Elasticsearch Service; general update
December 2014: First publication
|
General
|
consultant
|
Best Practices
|
BlueGreen_Deployments_on_AWS
|
Blue/Green Deployments on AWS
First Published August 1, 2016
Updated September 29, 2021
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Contents
Abstract
Introduction
Blue/Green deployment methodology
Benefits of blue/green
Define the environment boundary
Services for blue/green deployments
  Amazon Route 53
  Elastic Load Balancing
  Auto Scaling
  AWS Elastic Beanstalk
  AWS OpsWorks
  AWS CloudFormation
  Amazon CloudWatch
  AWS CodeDeploy
Implementation techniques
  Update DNS Routing with Amazon Route 53
  Swap the Auto Scaling Group behind the Elastic Load Balancer
  Update Auto Scaling Group launch configurations
  Swap the environment of an Elastic Beanstalk application
  Clone a Stack in AWS OpsWorks and Update DNS
Best Practices for Managing Data Synchronization and Schema Changes
  Decoupling Schema Changes from Code Changes
When blue/green deployments are not recommended
Conclusion
Contributors
Document revisions
Appendix: Comparison of Blue/Green Deployment Techniques
Abstract
The blue/green deployment technique enables you to release applications by shifting traffic between two identical environments that are running different versions of the application. Blue/green deployments can mitigate common risks associated with deploying software, such as downtime and rollback capability. This whitepaper provides an overview of the blue/green deployment methodology and describes techniques customers can implement using Amazon Web Services (AWS) services and tools. It also addresses considerations around the data tier, which is an important component of most applications.
Introduction
In a traditional approach to application deployment, you typically fix a failed deployment by redeploying an earlier, stable version of the application. Redeployment in traditional data centers is typically done on the same set of resources due to the cost and effort of provisioning additional resources. Although this approach works, it has many shortcomings. Rollback isn't easy because it's implemented by redeployment of an earlier version from scratch. This process takes time, making the application potentially unavailable for long periods. Even in situations where the application is only impaired, a rollback is required, which overwrites the faulty version. As a result, you have no opportunity to debug the faulty application in place.
Applying the principles of agility, scalability, and utility consumption, as well as the automation capabilities of Amazon Web Services, can shift the paradigm of application deployment. This enables a better deployment technique called blue/green deployment.
Blue/Green deployment methodology
Blue/green deployments provide releases with near-zero downtime and rollback capabilities.
The fundamental idea behind blue/green deployment is to shift traffic between two identical environments that are running different versions of your application. The blue environment represents the current application version serving production traffic. In parallel, the green environment is staged, running a different version of your application. After the green environment is ready and tested, production traffic is redirected from blue to green. If any problems are identified, you can roll back by reverting traffic back to the blue environment.
Blue/green example
Although blue/green deployment isn't a new concept, you don't commonly see it used in traditional on-premises hosted environments due to the cost and effort required to provision additional resources. The advent of cloud computing dramatically changes how easy and cost-effective it is to adopt the blue/green approach for deploying software.
Benefits of blue/green
Traditional deployments with in-place upgrades make it difficult to validate your new application version in a production deployment while also continuing to run the earlier version of the application. Blue/green deployments provide a level of isolation between your blue and green application environments. This helps ensure spinning up a parallel green environment does not affect resources underpinning your blue environment. This isolation reduces your deployment risk.
After you deploy the green environment, you have the opportunity to validate it. You might do that with test traffic before sending production traffic to the green environment, or by using a very small fraction of production traffic to better reflect real user traffic. This is called canary analysis or canary testing. If you discover the green environment is not operating as expected, there is no impact on the blue environment. You can route traffic back to it, minimizing impaired operation or downtime and limiting the blast radius of impact.
This ability to simply roll traffic back to the operational environment is a key benefit of blue/green deployments. You can roll back to the blue environment at any time during the deployment process. Impaired operation or downtime is minimized because impact is limited to the window of time between green environment issue detection and the shift of traffic back to the blue environment. Additionally, impact is limited to the portion of traffic going to the green environment, not all traffic. If the blast radius of deployment errors is reduced, so is the overall deployment risk.
Blue/green deployments also work well with continuous integration and continuous deployment (CI/CD) workflows, in many cases limiting their complexity. Your deployment automation has to consider fewer dependencies on an existing environment state or configuration, as your new green environment gets launched onto an entirely new set of resources.
Blue/green deployments conducted in AWS also provide cost optimization benefits. You're not tied to the same underlying resources. So if the performance envelope of the application changes from one version to another, you simply launch the new environment with optimized resources, whether that means fewer resources or just different compute resources. You also don't have to run an overprovisioned architecture for an extended period of time. During the deployment, you can scale out the green environment as more traffic gets sent to it and scale the blue environment back in as it receives less traffic.
Once the deployment succeeds, you decommission the blue environment and stop paying for the resources it was using.
Define the environment boundary
When planning for blue/green deployments, you have to think about your environment boundary: where have things changed, and what needs to be deployed to make those changes live? The scope of your environment is influenced by a number of factors, as described in the following table.
Table 1 – Factors that affect the environment boundary
• Application architecture: Dependencies, loosely/tightly coupled
• Organization: Speed and number of iterations
• Risk and complexity: Blast radius and impact of failed deployment
• People: Expertise of teams
• Process: Testing/QA, rollback capability
• Cost: Operating budgets, additional resources
For example, organizations operating applications that are based on the microservices architecture pattern could have smaller environment boundaries because of the loose coupling and well-defined interfaces between the individual services. Organizations running legacy monolithic apps can still utilize blue/green deployments, but the environment scope can be wider and the testing more extensive. Regardless of the environment boundary, you should make use of automation wherever you can to streamline the process, reduce human error, and control your costs.
Services for blue/green deployments
AWS provides a number of tools and services to help you automate and streamline your deployments and infrastructure. You can access these tools using the web console, CLI tools, SDKs, and IDEs.
Amazon Route 53
Amazon Route 53 is a highly available and scalable authoritative DNS service that routes user requests for Internet-based resources to the appropriate destination. Route 53 runs on a global network of DNS servers, providing customers with added features such as routing based on health checks, geography, and latency. DNS is a classic approach to blue/green deployments, allowing administrators to direct traffic by simply updating DNS records in the hosted zone. Also, time to live (TTL) can be adjusted for resource records; this is important for an effective DNS pattern because a shorter TTL allows record changes to propagate faster to clients.
Elastic Load Balancing
Another common approach to routing traffic for a blue/green deployment is through the use of load balancing technologies. Amazon Elastic Load Balancing (ELB) distributes incoming application traffic across designated Amazon Elastic Compute Cloud (Amazon EC2) instances. ELB scales in response to incoming requests, performs health checking against Amazon EC2 resources, and naturally integrates with other services such as Auto Scaling. This makes it a great option for customers who want to increase application fault tolerance.
Auto Scaling
AWS Auto Scaling helps maintain application availability and lets you scale EC2 capacity up or down automatically according to defined conditions. The templates used to launch EC2 instances in an Auto Scaling group are called launch configurations. You can attach different versions of launch configurations to an Auto Scaling group to enable blue/green deployment. You can also configure Auto Scaling for use with an ELB. In this configuration, the ELB balances the traffic across the EC2 instances running in an Auto Scaling group. You define termination policies in Auto Scaling groups to determine which EC2 instances to remove during a scaling action.
Auto Scaling also allows instances to be placed in Standby state, instead of termination, which helps with quick rollback when required. Both Auto Scaling's termination policies and Standby state allow for blue/green deployment.
AWS Elastic Beanstalk
AWS Elastic Beanstalk is a fast and simple way to get an application up and running on AWS. It's perfect for developers who want to deploy code without worrying about managing the underlying infrastructure. Elastic Beanstalk supports Auto Scaling and ELB, both of which allow for blue/green deployment. Elastic Beanstalk helps you run multiple versions of your application and provides capabilities to swap the environment URLs, facilitating blue/green deployment.
AWS OpsWorks
AWS OpsWorks is a configuration management service based on Chef that allows customers to deploy and manage application stacks on AWS. Customers can specify resource and application configuration, and deploy and monitor running resources. OpsWorks simplifies cloning entire stacks when you're preparing blue/green environments.
AWS CloudFormation
AWS CloudFormation provides customers with the ability to describe the AWS resources they need through JSON- or YAML-formatted templates. This service provides very powerful automation capabilities for provisioning blue/green environments and facilitating updates to switch traffic, whether through Route 53 DNS, ELB, or similar tools. The service can be used as part of a larger infrastructure-as-code strategy, where the infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration, in a manner similar to how application code is treated.
Amazon CloudWatch
Amazon CloudWatch is a monitoring service for AWS resources and applications. CloudWatch collects and visualizes metrics, ingests and monitors log files, and defines alarms. It provides system-wide visibility into resource utilization, application performance, and operational health, which are key to early detection of application health issues in blue/green deployments (a minimal alarm sketch appears at the end of this section).
AWS CodeDeploy
AWS CodeDeploy is a deployment service that automates deployments to various compute types, such as EC2 instances, on-premises instances, Lambda functions, or Amazon ECS services. Blue/green deployment is a feature of CodeDeploy. CodeDeploy can also roll back a deployment in case of failure. You can also use CloudWatch alarms to monitor the state of a deployment and utilize CloudWatch Events to process the deployment or instance state change events.
Amazon Elastic Container Service
There are three ways traffic can be shifted during a deployment on Amazon Elastic Container Service (Amazon ECS):
• Canary – Traffic is shifted in two increments
• Linear – Traffic is shifted in equal increments
• All-at-once – All traffic is shifted to the updated tasks
AWS Lambda Hooks
With AWS Lambda hooks, CodeDeploy can call a Lambda function during the various lifecycle events, including deployments to ECS, Lambda function deployments, and EC2/on-premises deployments. The hooks are helpful in creating a deployment workflow for your apps.
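Most of these services can be driven from code. As one small example tied to the CloudWatch discussion above, the sketch below creates an alarm on the green environment's 5XX error rate so that a deployment pipeline (or an operator) is notified when traffic should be shifted back to blue. The load balancer dimension value and SNS topic ARN are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Watch the green environment's 5XX responses behind an Application Load Balancer.
cloudwatch.put_metric_alarm(
    AlarmName="green-env-5xx-errors",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/green-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    # Notify the deployment pipeline or on-call team so traffic can be shifted back to blue.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:deployment-alerts"],
)
```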
Implementation techniques
The following techniques are examples of how you can implement blue/green on AWS. While AWS highlights specific services in each technique, you may have other services or tools to implement the same pattern. Choose the appropriate technique based on the existing architecture, the nature of the application, and the goals for software deployment in your organization. Experiment as much as possible to gain experience for your environment and to understand how the different deployment risk factors affect your specific workload.
Update DNS Routing with Amazon Route 53
DNS routing through record updates is a common approach to blue/green deployments. DNS is used as a mechanism for switching traffic from the blue environment to the green, and vice versa when rollback is necessary. This approach works with a wide variety of environment configurations, as long as you can express the endpoint into the environment as a DNS name or IP address. Within AWS, this technique applies to environments that are:
• Single instances with a public or Elastic IP address
• Groups of instances behind an Elastic Load Balancing load balancer or a third-party load balancer
• Instances in an Auto Scaling group with an ELB load balancer as the front end
• Services running on an Amazon Elastic Container Service (Amazon ECS) cluster fronted by an ELB load balancer
• Elastic Beanstalk environment web tiers
• Other configurations that expose an IP or DNS endpoint
The following figure shows how Amazon Route 53 manages the DNS hosted zone. By updating the alias record, you can route traffic from the blue environment to the green environment.
Classic DNS pattern
You can shift traffic all at once, or you can do a weighted distribution. For weighted distribution with Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. This provides the ability to perform canary analysis, where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing (ELB). ELB automatically scales its request-handling capacity to meet the inbound application traffic; the process of scaling isn't instant, so we recommend that you test, observe, and understand your traffic patterns. Load balancers can also be pre-warmed (configured for optimum capacity) through a support request. A weighted-record sketch follows this section.
Classic DNS weighted distribution
If issues arise during the deployment, you can roll back by updating the DNS record to shift traffic back to the blue environment. Although DNS routing is simple to implement for blue/green, you should take into consideration how quickly you can complete a rollback. DNS time to live (TTL) determines how long clients cache query results. However, with earlier clients, and potentially clients that aggressively cache DNS records, certain sessions may still be tied to the previous environment. Although rollback can be challenging, this technique has the benefit of enabling a granular transition at your own pace, allowing for more substantial testing and for scaling activities.
To help manage costs, consider using Auto Scaling to scale out resources based on actual demand. This works well with the gradual shift using Amazon Route 53 weighted distribution. For a full cutover, be sure to tune your Auto Scaling policy to scale as expected, and remember that the new ELB endpoint may need time to scale up as well.
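A weighted shift like the one described above can be scripted. The following sketch (hosted zone ID, record names, and targets are hypothetical) upserts two weighted CNAME records with boto3 and starts a canary at 10 percent; in practice you might use alias records that point directly at your load balancers.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000EXAMPLE"   # hypothetical hosted zone
RECORD_NAME = "www.example.com."


def set_weights(blue_weight: int, green_weight: int) -> None:
    """Adjust the weighted records so traffic gradually shifts from blue to green."""
    changes = []
    for identifier, target, weight in [
        ("blue", "blue-env.example.com", blue_weight),
        ("green", "green-env.example.com", green_weight),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,  # keep the TTL short so weight changes propagate quickly
                "ResourceRecords": [{"Value": target}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Comment": "blue/green weight shift", "Changes": changes},
    )


# Start the canary with 10% of traffic on green, then increase over time.
set_weights(blue_weight=90, green_weight=10)
```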
Swap the Auto Scaling Group behind the Elastic Load Balancer
If DNS complexities are prohibitive, consider using load balancing for traffic management to your blue and green environments. This technique uses Auto Scaling to manage the EC2 resources for your blue and green environments, scaling up or down based on actual demand. You can also control the Auto Scaling group size by updating your maximum and desired instance counts for your particular group.
Auto Scaling also integrates with Elastic Load Balancing (ELB), so any new instances are automatically added to the load balancing pool if they pass the health checks governed by the load balancer. ELB tests the health of your registered EC2 instances with a simple ping or a more sophisticated connection attempt or request. Health checks occur at configurable intervals and have defined thresholds to determine whether an instance is identified as healthy or unhealthy. For example, you could have an ELB health check policy that pings port 80 every 20 seconds; after passing a threshold of 10 successful pings, the health check reports the instance as InService. If enough ping requests time out, the instance is reported as OutOfService. With Auto Scaling, an instance that is OutOfService could be replaced, if the Auto Scaling policy dictates. Conversely, for scale-down activities, the load balancer removes the EC2 instance from the pool and drains current connections before the instance terminates.
The following figure shows the environment boundary reduced to the Auto Scaling group. A blue group carries the production load while a green group is staged and deployed with the new code. When it's time to deploy, you simply attach the green group to the existing load balancer to introduce traffic to the new environment. For HTTP/HTTPS listeners, the load balancer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm. For more information, see How Elastic Load Balancing works. You can also control how much traffic is introduced by adjusting the size of your green group up or down. A scripted version of this swap follows this section.
Swap Auto Scaling group pattern
As you scale up the green Auto Scaling group, you can take the blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state. For more information, see Temporarily removing instances from your Auto Scaling group. Standby is a good option because, if you need to roll back to the blue environment, you only have to put your blue server instances back in service and they're ready to go. As soon as the green group is scaled up without issues, you can decommission the blue group by adjusting the group size to zero. If you need to roll back, detach the load balancer from the green group or reduce the group size of the green group to zero.
Blue Auto Scaling group nodes in standby and decommission
This pattern's traffic management capabilities aren't as granular as the classic DNS pattern, but you can still exercise control through the configuration of the Auto Scaling groups. For example, you could have a larger fleet of smaller instances with finer scaling policies, which would also help control the costs of scaling. Because the complexities of DNS are removed, the traffic shift itself is more expedient. In addition, with an already warm load balancer, you can be confident that you'll have the capacity to support production load.
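The swap itself can be reduced to a couple of API calls, sketched below with boto3. It assumes an Application Load Balancer target group; with a Classic Load Balancer you would call attach_load_balancers instead. The group names and ARN are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

GREEN_ASG = "myapp-green-asg"
BLUE_ASG = "myapp-blue-asg"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/myapp/abc123"

# 1. Attach the green group to the load balancer's target group; healthy green
#    instances start receiving traffic alongside the blue group.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName=GREEN_ASG,
    TargetGroupARNs=[TARGET_GROUP_ARN],
)

# 2. Move the blue instances to Standby so they stop serving traffic but remain
#    ready for a fast rollback (exit_standby brings them back).
blue_instances = [
    i["InstanceId"]
    for g in autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[BLUE_ASG])["AutoScalingGroups"]
    for i in g["Instances"]
]
if blue_instances:
    autoscaling.enter_standby(
        AutoScalingGroupName=BLUE_ASG,
        InstanceIds=blue_instances,
        ShouldDecrementDesiredCapacity=True,
    )
```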
Update Auto Scaling Group launch configurations
A launch configuration contains information like the Amazon Machine Image (AMI) ID, instance type, key pair, one or more security groups, and a block device mapping. Auto Scaling groups have their own launch configurations. You can associate only one launch configuration with an Auto Scaling group at a time, and it can't be modified after you create it. To change the launch configuration associated with an Auto Scaling group, replace the existing launch configuration with a new one. After a new launch configuration is in place, any new instances that are launched use the new launch configuration parameters, but existing instances are not affected.
When Auto Scaling removes instances (referred to as scaling in) from the group, the default termination policy is to remove instances with the earliest launch configuration. However, you should know that if the Availability Zones were unbalanced to begin with, then Auto Scaling could remove an instance with a new launch configuration to balance the zones. In such situations, you should have processes in place to compensate for this effect.
To implement this technique, start with an Auto Scaling group and an ELB load balancer. The current launch configuration has the blue environment, as shown in the following figure.
Launch configuration update pattern
To deploy the new version of the application in the green environment, update the Auto Scaling group with the new launch configuration, and then scale the Auto Scaling group to twice its original size. A sketch of these two calls follows this section.
Scale up green launch configuration
The next step is to shrink the Auto Scaling group back to the original size. By default, instances with the old launch configuration are removed first. You can also utilize a group's Standby state to temporarily remove instances from an Auto Scaling group. Having the instances in Standby state helps with quick rollbacks, if required. As soon as you're confident about the newly deployed version of the application, you can permanently remove the instances in Standby state.
Scale down blue launch configuration
To perform a rollback, update the Auto Scaling group with the old launch configuration, then perform the preceding steps in reverse. Or, if the instances are in Standby state, bring them back online.
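The two actions described above (switch the launch configuration, then double the group) might be scripted as follows; the group and launch configuration names are hypothetical, and the green launch configuration is assumed to exist already.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

ASG_NAME = "myapp-asg"                 # hypothetical group name
GREEN_LAUNCH_CONFIG = "myapp-lc-v2"    # already created with the new AMI

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
original_desired = group["DesiredCapacity"]

# Point the group at the green launch configuration and double its size so the
# new instances come up alongside the old ones.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    LaunchConfigurationName=GREEN_LAUNCH_CONFIG,
    MaxSize=max(group["MaxSize"], original_desired * 2),
    DesiredCapacity=original_desired * 2,
)

# Later, once the green instances pass health checks, shrink back to the original
# size; instances with the older launch configuration are removed first by default.
# autoscaling.update_auto_scaling_group(
#     AutoScalingGroupName=ASG_NAME, DesiredCapacity=original_desired
# )
```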
Swap the environment of an Elastic Beanstalk application
Elastic Beanstalk enables quick and easy deployment and management of applications without having to worry about the infrastructure that runs those applications. To deploy an application using Elastic Beanstalk, upload an application version in the form of an application bundle (for example, a Java WAR file or a zip file), and then provide some information about your application. Based on the application information, Elastic Beanstalk deploys the application in the blue environment and provides a URL to access the environment (typically for web server environments).
Elastic Beanstalk provides several deployment policies that you can configure, ranging from policies that perform an in-place update on existing instances to immutable deployments using a set of new instances. Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. However, you can avoid this downtime by deploying the new version to a separate environment. The existing environment's configuration is copied and used to launch the green environment with the new version of the application. The new green environment will have its own URL. When it's time to promote the green environment to serve production traffic, you can use Elastic Beanstalk's Swap Environment URLs feature.
To implement this technique, use Elastic Beanstalk to spin up the blue environment.
Elastic Beanstalk environment
Elastic Beanstalk provides an environment URL when the application is up and running. The green environment is then spun up with its own environment URL. At this time, two environments are up and running, but only the blue environment is serving production traffic.
Prepare green Elastic Beanstalk environment
Use the following procedure to promote the green environment to serve production traffic (a scripted equivalent follows this section):
1. Navigate to the environment's dashboard in the Elastic Beanstalk console.
2. In the Actions menu, choose Swap Environment URLs. Elastic Beanstalk performs a DNS switch, which typically takes a few minutes. See the Update DNS Routing with Amazon Route 53 section for the factors to consider when performing a DNS switch.
3. Once the DNS changes have propagated, you can terminate the blue environment. To perform a rollback, select Swap Environment URLs again.
Decommission blue Elastic Beanstalk environment
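The console procedure above has a one-call API equivalent, sketched here with boto3; the environment names are hypothetical.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Hypothetical environment names belonging to the same Elastic Beanstalk application.
BLUE_ENV = "myapp-blue"
GREEN_ENV = "myapp-green"

# Swap the CNAMEs of the two environments; this is the API equivalent of the
# console's "Swap environment URLs" action and performs the DNS switch.
eb.swap_environment_cnames(
    SourceEnvironmentName=BLUE_ENV,
    DestinationEnvironmentName=GREEN_ENV,
)

# Verify which CNAME each environment now answers on before terminating blue.
for env in eb.describe_environments(
    EnvironmentNames=[BLUE_ENV, GREEN_ENV]
)["Environments"]:
    print(env["EnvironmentName"], "->", env["CNAME"])
```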
Clone a Stack in AWS OpsWorks and Update DNS
AWS OpsWorks utilizes the concept of stacks, which are logical groupings of AWS resources (EC2 instances, Amazon RDS, ELB, and so on) that have a common purpose and should be logically managed together. Stacks are made of one or more layers. A layer represents a set of EC2 instances that serve a particular purpose, such as serving applications or hosting a database server. When a data store is part of the stack, you should be aware of certain data management challenges, such as those discussed in the next section.
To implement this technique in AWS OpsWorks, bring up the blue environment/stack with the current version of the application.
AWS OpsWorks stack
Next, create the green environment/stack with the newer version of the application. At this point, the green environment is not receiving any traffic. If Elastic Load Balancing needs to be initialized, you can do that at this time.
Clone stack to create green environment
When it's time to promote the green environment/stack into production, update the DNS records to point to the green environment/stack's load balancer. You can also do this DNS flip gradually by using the Amazon Route 53 weighted routing policy. This process involves updating DNS, so be aware of the DNS issues discussed in the Update DNS Routing with Amazon Route 53 section.
Decommission blue stack
Best Practices for Managing Data Synchronization and Schema Changes
The complexity of managing data synchronization across two distinct environments depends on the number of data stores in use, the intricacy of the data model, and the data consistency requirements. Both the blue and green environments need up-to-date data:
• The green environment needs up-to-date data access because it's becoming the new production environment.
• The blue environment needs up-to-date data in the event of a rollback, when production either shifts back to or remains on the blue environment.
Broadly, you accomplish this by having both the green and blue environments share the same data stores. Unstructured data stores, such as Amazon Simple Storage Service (Amazon S3) object storage, NoSQL databases, and shared file systems, are often easier to share between the two environments. Structured data stores, such as relational database management systems (RDBMS), where the data schema can diverge between the environments, typically require additional considerations.
Decoupling Schema Changes from Code Changes
A general recommendation is to decouple schema changes from the code changes. This way, the relational database is outside of the environment boundary defined for the blue/green deployment and is shared between the blue and green environments. The two approaches for performing the schema changes are often used in tandem:
• The schema is changed first, before the blue/green code deployment. Database updates must be backward compatible, so the old version of the application can still interact with the data.
• The schema is changed last, after the blue/green code deployment. Code changes in the new version of the application must be backward compatible with the old schema.
Schema modifications in the first approach are often additive. You can add fields to tables, new entities, and relationships. If needed, you can use triggers or asynchronous processes to populate these new constructs with data based on data changes performed by the old application version.
It's important to follow coding best practices when developing applications to ensure your application can tolerate the presence of additional fields in existing tables, even if they are not used. When table row values are read and mapped into source code structures (for example, objects and array hashes), your code should ignore fields it can't map, to avoid causing application runtime errors; a small sketch of this appears at the end of this section.
Schema modifications in the second approach are often deletive. You can remove unneeded fields, entities, and relationships, or merge and consolidate them. After this removal, the earlier application version is no longer operational.
Decoupled schema and code changes
There's an increased risk involved when managing schema with a deletive approach: failures in the schema modification process can impact your production environment. Your additive changes can bring down the earlier application because of an undocumented issue where best practices weren't followed, or where the new application version still has a dependency on a deleted field somewhere in the code. To mitigate risk appropriately, this pattern places a heavy emphasis on your pre-deployment software lifecycle steps. Be sure to have a strong testing phase and framework, and a strong QA phase. Performing the deployment in a test environment can help identify these sorts of issues early, before the push to production.
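The "ignore fields you can't map" guidance can be made concrete with a few lines of application code. The sketch below is plain Python with hypothetical column names; the same idea applies to any language or ORM.

```python
from dataclasses import dataclass, fields


@dataclass
class Customer:
    """Only the columns this application version actually uses."""
    customer_id: int
    name: str
    email: str


def map_row(row: dict) -> Customer:
    """Map a database row to the structure, ignoring columns added by newer schemas."""
    known = {f.name for f in fields(Customer)}
    return Customer(**{k: v for k, v in row.items() if k in known})


# A row that already contains a column added for the green version ("loyalty_tier")
# still maps cleanly for the blue version of the application.
row = {"customer_id": 42, "name": "Alice", "email": "alice@example.com", "loyalty_tier": "gold"}
print(map_row(row))
```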
When blue/green deployments are not recommended
As blue/green deployments become more popular, developers and companies are constantly applying the methodology to new and innovative use cases. However, in some common use case patterns, applying this methodology, even if possible, isn't recommended. In these cases, implementing blue/green deployment introduces too much risk, whether due to workarounds or additional moving parts in the deployment process. These complexities can introduce additional points of failure, or opportunities for the process to break down, that may negate any risk mitigation benefits blue/green deployments bring in the first place. The following scenarios highlight patterns that may not be well suited for blue/green deployments.
Are your schema changes too complex to decouple from the code changes? Is sharing of data stores not feasible?
In some scenarios, sharing a data store isn't desired or feasible. Schema changes may be too complex to decouple. Data locality may introduce too much performance degradation to the application, as when the blue and green environments are in geographically disparate regions. All of these situations require a solution where the data store is inside of the deployment environment boundary and tightly coupled to the blue and green applications, respectively. This requires data changes to be synchronized: propagated from the blue environment to the green one, and vice versa. The systems and processes to accomplish this are generally complex and limited by the data consistency requirements of your application. This means that during the deployment itself you also have to manage the reliability, scalability, and performance of that synchronization workload, adding risk to the deployment.
Does your application need to be deployment aware?
You could consider using feature flags in your application to make it deployment aware. This would help you control the enabling and disabling of application features in a blue/green deployment. Your application code would run additional or alternate subroutines during the deployment to keep data in sync or perform other deployment-related duties. These routines are enabled or disabled during the deployment by using configuration flags. However, making your applications deployment aware introduces additional risk and complexity, and typically isn't recommended with blue/green deployments. The goal of blue/green deployments is to achieve immutable infrastructure, where you don't make changes to your application after it's deployed, but redeploy altogether. That way, you ensure the same code is operating in a production setting and in the deployment setting, reducing overall risk factors.
Does your commercial off-the-shelf (COTS) application come with a predefined update/upgrade process that isn't blue/green deployment friendly?
Does your commercial off-the-shelf (COTS) application come with a predefined update/upgrade process that isn't blue/green deployment friendly?
Many commercial software vendors provide their own update and upgrade process for applications, which they have tested and validated for distribution. While vendors are increasingly adopting the principles of immutable infrastructure and automated deployment, currently not all software products have those capabilities. Working around the vendor's recommended update and deployment practices to try to implement or simulate a blue/green deployment process may also introduce unnecessary risk that can potentially negate the benefits of this methodology.

Conclusion
Application deployment has associated risks. However, advancements such as the advent of cloud computing, deployment and automation frameworks, and new deployment techniques (blue/green, for example) help mitigate risks such as human error and process downtime, and improve rollback capability. The AWS utility billing model and wide range of automation tools make it much easier for customers to move fast and cost-effectively implement blue/green deployments at scale.

Contributors
The following individuals and organizations contributed to this document:
• George John, Solutions Architect, Amazon Web Services
• Andy Mui, Solutions Architect, Amazon Web Services
• Vlad Vlasceanu, Solutions Architect, Amazon Web Services
• Muhammad Mansoor, Solutions Architect, Amazon Web Services

Document revisions
September 21, 2021: Updated for technical accuracy
June 1, 2015: Initial publication

Appendix: Comparison of Blue/Green Deployment Techniques
The following table offers an overview and comparison of the different blue/green deployment techniques discussed in this paper. The risk potential is evaluated from desirable, lower risk (X) to less desirable, higher risk (X X X).

Update DNS Routing with Amazon Route 53
  Application Issues: X (facilitates canary analysis)
  Application Performance: X (gradual switch; traffic split management)
  People/Process Errors: X X (depends on automation framework; overall simple process)
  Infrastructure Failures: X X (depends on automation framework)
  Rollback: X X X (DNS TTL complexities: reaction time, flip/flop)
  Cost: X (optimized via Auto Scaling)

Swap the Auto Scaling group behind Elastic Load Balancer
  Application Issues: X (facilitates canary analysis)
  Application Performance: X X (less granular traffic split management; already warm load balancer)
  People/Process Errors: X X (depends on automation framework)
  Infrastructure Failures: X (Auto Scaling)
  Rollback: X (no DNS complexities)
  Cost: X (optimized via Auto Scaling)

Update Auto Scaling Group launch configurations
  Application Issues: X X X (detection of errors/issues in a heterogeneous fleet is complex)
  Application Performance: X X X (less granular traffic split; initial traffic load)
  People/Process Errors: X X (depends on automation framework)
  Infrastructure Failures: X (Auto Scaling)
  Rollback: X (no DNS complexities)
  Cost: X X (optimized via Auto Scaling, but initial scale-out overprovisions)

Swap the environment of an Elastic Beanstalk application
  Application Issues: X X (ability to do canary analysis ahead of cutover, but not with production traffic)
  Application Performance: X X X (full cutover)
  People/Process Errors: X (simple, automated process)
  Infrastructure Failures: X (Auto Scaling, CloudWatch monitoring, Elastic Beanstalk health reporting)
  Rollback: X X X (DNS TTL complexities)
  Cost: X X (optimized via Auto Scaling, but initial scale-out may overprovision)

Clone a stack in OpsWorks and update DNS
  Application Issues: X (facilitates canary analysis)
  Application Performance: X (gradual switch; traffic split management)
  People/Process Errors: X (highly automated)
  Infrastructure Failures: X (auto-healing capability)
  Rollback: X X X (DNS TTL complexities)
  Cost: X X X (dual stack of resources)
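To make the first technique in the comparison concrete, the sketch below shows one way a weighted Amazon Route 53 record set could be shifted from the blue to the green endpoint using the AWS SDK for Python (boto3). This is a simplified illustration, not the paper's reference implementation: the hosted zone ID, record name, endpoint DNS names, and the 90/10 canary split are placeholders.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"   # placeholder hosted zone
RECORD_NAME = "app.example.com."    # placeholder record name


def set_weights(blue_weight, green_weight):
    """Adjust the weighted records so traffic gradually shifts from blue to green."""
    changes = []
    for set_id, dns_name, weight in [
        ("blue", "blue-elb.example.com", blue_weight),
        ("green", "green-elb.example.com", green_weight),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,  # a low TTL shortens rollback reaction time
                "ResourceRecords": [{"Value": dns_name}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": changes},
    )


# Example: send 10% of traffic to green as a canary, then cut over fully.
set_weights(blue_weight=90, green_weight=10)
# ...monitor the green fleet, then:
# set_weights(blue_weight=0, green_weight=100)
```

A low TTL reduces, but does not eliminate, the DNS reaction-time issues called out in the Rollback row for this technique.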
|
General
|
consultant
|
Best Practices
|
Building_a_RealTime_Bidding_Platform_on_AWS
|
ArchivedBuilding a RealTime Bidding Platform on AWS February 2016 This paper has been archived For the latest technical guidance about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 2 of 21 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 3 of 21 Contents Abstract 4 Introduction 4 RealTime Bidding Explained 4 Elastic Nature of Advertising and Ad Tech 5 Why Speed Matters 7 Advertising Is Global 8 The Economics of RTB 8 Components of a RTB Platform 8 RTB Platform Diagram 11 Real Time Bidding on AWS 11 Elasticity on AWS 12 Low Latency Networking on AWS 12 AWS Global Footprint 12 The Economics of RTB on AWS 13 Components of an RTB Platform on AWS 13 Reference Architecture Example 19 Citations 19 Conclusion 19 Contributors 20 Further Reading 20 Notes 21 ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 4 of 21 Abstract Amazon Web Services (AWS) is a flexible costeffective easy touse global cloud computing platform The AWS cloud delivers a comprehensive portfolio of secure and scalable cloud computing services in a selfservice pay asyougo model with zero capital expense needed to manage your realtim e bidding platform This whitepaper helps architects engineers advertisers and developers understand realtime bidding (RTB) and the services available in AWS that can be used for RTB This paper will showcase the RTB platform reference architecture used by customers today as well as provide additional resources to get started with building an RTB platform on AWS Introduction Online advertising is a growing industry and its share of total advertising spending is also increasing every year and projected to surpass TV advertising spend in 2016 A significant area of growth is r ealtime b idding (RTB) which is the auctionbased approach for transacting digital display ads in real time at the most granular impression level RTB was the dominant transaction method in 2015 accounting for 740 percent of programmatically purchased advertising or 11 billion dollars in the US1 RTB transactions are projected to grow over 30 percent in 2016 according to industry research2 Realtime b idding is also gaining popularity in mobile display advertising as mobile advertising spend is anticipated to grow in excess of 60 percent in 20163 As the amount of data being created and collected grows organizations need to use it to make better decisions i n determining the value of each ad impression AWS has an ecosystem of solutions specifically designed to handle the 
realtime low latency analytics that allow you to make the best possible and most efficient ad impressions to drive your business RealTime Bidding Explain ed When you go to a website and are served an advertisement the process to serve you that advertisement involves the website or publisher contacting an ad exchange which then accepts realtime bids from many different parties The bidders us e the information about the user that they know (for example the ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 5 of 21 website and the ad location/size plus demographic information such as user location browsing history and time of day ) to determine how much they are willing to pay to deliver an advertisement to the user The data may come directly from publishers (mobile applications or websites) or thirdparty data providers Whichever bidder bids the most within a time period set by the exchange usually under 100 milliseconds gets to serve the ad and pay the b id price This process at a highlevel is depicted in Figure 1 RTB is the process of accepting data from Step 2 and doing the action in Step 3 Elastic N ature of Advertising and Ad Tech Web traffic is the engine that drives the advertising industry Daily web traffic volume can vary by 2 00 percent or more (based on time of day ) In Figure 2 you can see a typical pattern of load on an RTB platform in a single day With elasticity you can achieve greater infrastructure savings by turning off resources as traffic decreases Figure 1: Real Time Bidding Process 1 User goes to a web page 2 Ad impression is sent to an Ad Exchange 3 Ad Exchange invites bidders 4 Highest bidder wins the impression 5 Advertiser delivers the winning ad creativeArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 6 of 21 In addition Figure 3 below illustrates the typical pattern that an RTB platform will see for seasonal events (such as the Christmas holiday in December and the spring tax season in the United States) that create very large consistent spikes that might account for more than half of all traffic for the whole year These peak times are the most important time to serve the right ad to the right potential customers To accomplish this you can either build an RTB platform that always has the capacity to handle peak and spiked loads or you can build your platform to grow and shrink based on the required need Building elasticity into your platform can dramatically reduce your operating cost s You don’t need to maintain peak capacity yearround just to avoid performance issues during important holidays and busy traffic times each day Figure 2: Daily Load Pattern for RTB ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 7 of 21 Figure 3: Yearly Load Pattern for RTB Why Speed Matters An ad exchange expects to hear an answer from all bidders in 100 milliseconds (ms) If your bid is even a millisecond late you will lose your opportunity to win this ad impression and your advertisement will go unseen Lost bids are lost opportunities to g et the right advertisement to your key demographic There are millions of bids per minute and it ’s critical for advertisers to have the ability bid on all of them Therefore you need to make sure that the entire platform including the network connection to the exchange is as quick as possible Additionally any less time needed to transmit data is more time you can use to run analytics and make better bidding decisions 
Therefore you want to have your RTB platforms have the lowest latency connection possible to the exchange you ar e bidding on ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 8 of 21 Advertising Is Global Advertising is becoming a truly global activity This doesn’t necessarily m ean that your advertising strategy isn’t localized When providing campaigns to advertisers you want to offer the freedom of reaching customers with localized messaging across the globe To reach the largest possible audience you need to have an RTB platform physically near all the exchanges throughout the world You cannot respond to exchanges that are physically far apart and still meet the 100 ms requirement Therefore when you plan for building an RTB platform you need to make sure you are able to deploy your platform throughout the world to be as effective as possible The Economics of RTB The digital advertising business is extremely competitive with ever decreasing margins Many technological solutions might be able to deliver the required business functionality however few can deliver it at the very low cost needed to achieve the desired profitability Costs of RTB can be broken down into two broad categories: costs associated with listening to traffic and recording it and additional costs of executing the bidding logic and populating and maintaining the data repositories related to the bidding process When you use AWS these costs can be spread across AWS services with different economics and can be effectively monitored controlled and projected through the AWS budgets and forecasts capability Cost optimization of RTB on AWS is a critical part of a successful solution with numerous strategies available Components of a RTB Platform This section discusses the components that make up a functioning RTB platform Bid Traffic Ingestion and Processing As a user goes to a website that website will contact an ad exchange that will then send out bid traffic to RTB platforms for bids on this impression The bid traffic includes just the website URL that is being browsed ad/size and location on that website and demographic information about the user that the publisher knows This data must be ingest ed in real time and a decision must be made on whether you want to bid on this impression and the amount you’re willing to bid Each ad request comes with some form of user identification (ID) from the ad exchange At this point the bidder needs to be able to leverage this user ID and all available ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 9 of 21 data for that user (if this is an existing user that the system has seen previously) The bidder must map this user ID to another source of information (eg a cookie store) to match the user calculate the value of the bid and probability of winning the auction Then the bidder sends the bid along with the ad link tied to that bid so that the ad creative can be displayed to the end user in the case of an auction win To make this decision the solution must utilize a lowlatency data store along with a campaign management system which will be described in more detail below Analysis Traffic Ingestion and Processing Analysis traffic can come from ad exchanges and directly from content publishers through tracking pixels Analysis traffic is usually not as timesensitive as bidding traffic but it provides valuable information which can be used to make the real time bidding decision on future bid traffic It is 
important to capture all or as much analysis traffic as possible and not just sample it because analysis traffic improves the system’s ability to understand data patterns and learn from them This data is critical to making an intelligent decision on how much any given impression is worth to the advertiser and how likely it is that this impression will stick with the website user or lead to a direct action like a clickthrough Low Latency Data Repository The primary purpose of a low latency data repository is to look up and make decisions very quickly on not only if you wish to bid on an impression but also how much you are willing to pay for that impression This decision is based on three key factors: knowledge about the user (user profile) how well the user match es a set of predetermined advertising campaigns with specific budget objectives and how often the user has a specific ad The key capabilities of this data store are to provide data very fast (preferably in a single millisecond) to scale to peak traffic and to have regional replication abilities Regional replication is critical for targeting users who connect from different geographic locations and who can be targeted through advertising exchanges worldwide The data that is stored in the low latency data repository is an index for fast retrieval set of aggregated data from the durable data repository Durable Data Repository for LongTerm Storage The durable data repository is a storage platform built to hold large amounts of data inexpensively It will hold all historical data for the analytical pipelines for data transformation enrichment and preparation for rich analytics It ’s ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 10 of 21 important to have as much historical data as possible to best be able to predict user behavior and have a good impression bidding strategy For example shopping behavior may be very different in December around the Christmas holiday than in April If you have data from December of last year or Decembers over multiple years you can make better predictions about the behavior patterns and demographics that lead to the most valuable impressions In addition the advertising customers may have their own “first party” data about the customers they want to target with RTB or they might use data from other data providers ’ thirdparty data to enhance the RTB process Analytics Platform An analytics platform is used to run computation models such as machine learning to calculate the likelihood of specific campaigns getting the desired result from specific demographics and users This platform will keep track of users across multiple devices record their activities and update user profiles and audience segments It will run the analytics off the different data feeds and the long term durable data repository It will take the analytical results and store them an indexed manner in the lowlatency data store so that bid processing can quickly find the data it needs to make its bidding decisions Campaign Management Campaign management is typically a multitenant web application that manages the advertising campaigns and controls the budgets for different advertisers This web application provides detailed statistics of the bids that have already taken place in the campaigns and the audiences that have provided the best response In some cases the advertising campaign can be manually or automatically adjusted “on the fly ” and the information can be pumped back into the low latency 
data store so that new bidding traffic can incorp orate new or updated campaigns ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 11 of 21 RTB Platform Diagram The diagram in Figure 4 displays a generic infrastructure provider independent data flow and each component involved in a generic RTB platform This illustrates not only the components of an RTB platform but also the interactions with a website from outside sources such as ad exchanges advertisers user tracking systems publishers and end users Figure 3: RTB Platform Components Real Time Bidding on AWS We will now explore the specific advantages that AWS offers to RTB systems We’ll show how AWS help s RTB providers implement all of the components discussed earlier for their platforms AWS provides many services and features so customers can focus on analytics models and your own customers instead of spending a significant amount of time on infrastructure networking availability and the platform ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 12 of 21 Elasticity on AWS The AWS platform is built with elasticity in mind; at any time you can utilize compute databases and storage You only pay for what you use For example Amazon Elastic Compute Cloud (EC2) reduces the time required to obtain and boot new server instances to minutes This allows you to quickly scale capacity both up and down as your computing requirements change Y ou can build your RTB platform to scale up and down in size as more traffic comes in You also can do computational analytics on your data set in batches and then release the resource back when the analytics are done so you’re not continuing to pay for it This elasticity not only gives you the assurance that you can handle very large unpredictable spikes in traffic that may occur but also that you are not tied to architectural or software choices You can freely change because there is no long term commitment or investment to your existing infrastructure Low Latency Networking on AWS In the simplest case both the RTB solution and the exchange are located in the same AWS Region This is an increasingly popular scenario among the rapidly growing mobile and video exchanges In some cases however the exchange is not located on AWS so the traffic between the RTB solution and the ad exchange goes over the public Internet To reduce the latency and jitter caused by the Internet a private connectivity path via AWS Direct Connect can be established between your Amazon Virtual Private Cloud (VPC) that hosts the RTB solution and the provider that hosts the exchange Some hosting providers may require a public Autonomous System Number (ASN) in order to connect to the exchanges in the most efficient way If a company does not own a public ASN this can be accomplished by leasing an ASN from AWS Direct Connect Partners Additionally when choosing the EC2 instance type you want to make sure to pick instances with enhanced networking with SRIOV to get the best possible network performance In some cases customers may take advantage of Placement Groups that ensure nonblocking low latency connections between instances In addition different networking stacks can be deployed to further reduce latency for connections inside the VPC and outside of the VPC AWS Global Footprint AWS offers many different regions around the world where you can deploy your RTB platform to be as close as possible to the different exchanges around the ArchivedAmazon Web Services – 
Building a RealTime Bidding Platform on AWS February 2016 Page 13 of 21 world To see a full list of current locations click here One of the big advantages of the AWS platform is that you can use deployment services like AWS CloudFormation AWS OpsWorks and AWS Elastic Beanstalk to easily deploy the exact same architecture to any region you want with a simple click in the AWS Management Console or a service API call This allows you to easily meet the demands of new campaig ns If you no longer have a campaign tied to a specific geographic location you can shut down operations at that location until there is demand Due to the AWS pay asyougo model you will pay nothing once operations cease When a new campaign starts that requir es this geographic location again just spin it up in minutes with your deployment tool of choice The Economics of RTB on AWS There are several ways of improving the economics of RTB on AWS Some of the common methods include the following: 1 Elastically scale your compute and memory resources using Auto Scaling to maximize your resources and to ensure that you are paying for peak load when only when you need the resources 2 Use Spot Instances especially with latest EC2 Spot Fleet API and Spot Bid Advisor 3 Use Reserved Instances 4 Reduce the costs of outbound network traffic with Direct Connect to exchanges outside of the AWS network 5 Dynamically scale Amazon DynamoDB These methods will typically lead to significant savings over building it yourself or using other providers without sacrificing performance or availability Components of an RTB Platform on AWS Now that you have a solid understanding of what RTB platforms are and what their generic components are let’s look at how customers have implemented this successfully on the AWS Platform The AWS platform offers a rich ecosystem of selfmanaged servers via Amazon EC2 third party products via the AWS Marketplace and managed services offerings such as Amazon DynamoDB and Amazon ElastiCache so there are multiple ways to architect your platform on ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 14 of 21 AWS We will explore how each component of an RTB p latform could be deployed on AWS Bid Traffic Ingestion and Processing on AWS To build an elastic bid traffic ingestion and processing platform you need to front all traffic into a loadbalancing tier The load balancing can be done in AWS using an Elastic Load Balancing (ELB) load balancer which is a fully managed software load balancer that will scale with traffic at a very attractive price point You can also run your own load balancing software such as HAProxy Netscaler or F5 on Amazon EC2 instances in a selfmanaged implementation However running your own load balancer requires you to ensure scalability and availability across Availability Zones Typically DNS with a health check is used to monitor your load balancers and move new traffic around if any of the instances running your load balancer has an issue or is overloaded You will also want to scale your web and application tier up and down independently as traffic fluctuates to not only ensure that you can handle traffic demand but also reduce your infrastructure cost when you do not need max capacity of servers to handle the current traffic You scale your servers yourself using the AWS API or Command Line Interface (CLI) or you can use Auto Scaling to automatically manage your fleet A best practice is to use the smallest possible instance type that can manage your web and 
application tier without sacrificing network throughput. This will lead to the lowest possible price when running at your minimum capacity. It will also reduce cost by allowing you to scale up and down in small increments that best match your compute and memory resources to your actual needs as the bid traffic varies throughout the day. For more details on best practices for building and managing scalable architectures, see the AWS whitepaper Managing Your Infrastructure at Scale. An example of launching an open source bidder (RTBkit) on AWS can be found in the RTBkit GitHub repository.

Analysis Traffic Ingestion and Processing on AWS
Analysis traffic can flow into Amazon Kinesis directly from users, or it might require some preprocessing. In the second scenario, it will go through a load balancer to a fleet of scalable EC2 instances that preprocess the data. After data arrives at the EC2 instances (Kinesis producers) and is forwarded to Amazon Kinesis (likely with some batching to reduce costs), it can be picked up by a number of applications directly from the Amazon Kinesis stream using the Kinesis Client Library (KCL). The Kinesis Producer Library (KPL) can be used to simplify the process of putting records into an Amazon Kinesis stream. Kinesis is a convenient data store for the multiplexed stream data from several EC2 instances. This data can be used to compute metrics and do time-window calculations to understand the patterns in the web traffic. In order to optimize the costs for this additional processing step, the data can be flushed in small batches by concatenating the logs up to the Amazon Kinesis 1 MB record size, to minimize the costs associated with the put record requests. From Amazon Kinesis, data is typically moved into a durable repository like Amazon S3 and processed with frameworks like Apache Spark (using the Spark Streaming and Kinesis integration). In addition, the Amazon Kinesis Firehose service significantly simplifies the process of large-volume data capture.

Low Latency Data Repository on AWS
To have a low latency data repository on AWS, you can use AWS managed services like Amazon DynamoDB and Amazon ElastiCache, or a multitude of do-it-yourself options that you would run on Amazon EC2, such as Aerospike, Cassandra, and Couchbase. Amazon DynamoDB offers the simplicity of managing very large tables with low administrative overhead and little human intervention, while providing single-digit millisecond latency and utilizing multiple data centers for high durability and availability. Amazon DynamoDB can be combined with DynamoDB Streams, which captures all activity that happens on a table. This simplifies development and administration of cross-region, multi-master replication scenarios. Amazon DynamoDB is a convenient repository for user profile, audience, and cookie data, as well as for keeping track of advertising served (frequency capping) and advertising budgets. Amazon DynamoDB also allows for easily scaling up and down the amount of transaction requests the system can handle on a per-table basis. This allows you to scale your data tier up and down as your transaction load changes throughout the year.
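As a rough illustration of that per-table scaling model, the following boto3 sketch raises and lowers the provisioned throughput of a hypothetical bid-side table. The table name and capacity numbers are placeholders, not recommendations from this paper.

```python
import boto3

dynamodb = boto3.client("dynamodb")


def set_table_capacity(table_name, read_units, write_units):
    """Scale a provisioned-throughput DynamoDB table up or down."""
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )


# Scale up ahead of a daily traffic peak, then back down afterwards.
set_table_capacity("user_profiles", read_units=5000, write_units=1000)
# ... later, off-peak ...
set_table_capacity("user_profiles", read_units=500, write_units=100)
```

In practice this adjustment would typically be driven by a scheduled job or CloudWatch alarms rather than called by hand.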
Each table in Amazon DynamoDB has its own provisioned amount of throughput that can be scaled. This makes administration of your database easy: you don't need to manage a clustered set of servers hosting tables with different performance characteristics, and you avoid a poorly written scan or an unexpected spike in traffic on one table affecting your other tables. This allows you to deploy the concept of hot and cold tables very easily. For example, there is the typical pattern of time-series data, where new data is examined often and older data is rarely needed. In this case, you can create a unique table for each day, week, or month and give the new tables very high throughput. You can also programmatically dial back the throughput on your older tables over time to further save money, since older data is accessed less often. This simple per-table throughput administration reduces the performance variation and uncertainty found in clusters trying to manage many tables with varying, unpredictable loads.

One of the popular use cases for Amazon DynamoDB is a distributed low-latency user profile store. The user store contains the categories (or segments) a specific user belongs to, as well as the times that user was assigned a given segment. This user-segment information can be used as input for bidding decision logic. Amazon DynamoDB can be very flexible in terms of schema design, and there are several best practices for data modeling. One example of a best practice is to use hash and range keys for data retrieval and modification of multiple items (segments) belonging to the same or different hash keys. In this scenario, the hash key is the user ID and the range key is the segment the user belongs to:

User ID (Hash Key) | User Segment (Range Key) | Timestamp (Attribute)
1234 | Segment1 | 1448895406
1234 | Segment2 | 1448895322
1235 | Segment1 | 1448895201

Durable Data Repository for Long-Term Storage on AWS
Amazon Simple Storage Service (S3) provides a scalable, secure, highly available, and durable repository for analytical data. Amazon S3 runs a pay-as-you-go model, so you are only charged for what you use. Amazon S3 also has different storage classes: S3 Standard for general-purpose storage, S3 Infrequent Access (S3 IA) for data that is long-lived but infrequently accessed, and Amazon Glacier for long-term archive. You can also set up Object Lifecycle Management policies, which will move your data between these different storage options based on a schedule at no additional cost. For example, a policy might move data older than a year to S3 IA, then after three years to Amazon Glacier, and then after seven years the data is deleted.

Amazon S3 is a durable, scalable, and inexpensive option for RTB long-term storage that can then be used as a data source for the analytical pipelines for data transformation, enrichment, and preparation for rich analytics. AWS has several technologies you can use for distributed data transformation. Amazon Elastic MapReduce (EMR) is a managed cluster compute framework that can natively read directly from Amazon S3, utilizing open source tools such as Apache Spark. In addition, AWS Data Pipeline is a highly available managed service that allows easy data movement. Processing jobs can be implemented for managing workflows, including those done by Amazon EMR and other processing and database technologies. Finally, you can take advantage of event-driven processing when objects are written to Amazon S3. Event-driven processing can automatically trigger an event handled by an AWS Lambda function, to simplify processing at scale and avoid batch-based architectures.
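The example lifecycle policy described above (S3 IA after one year, Amazon Glacier after three, deletion after seven) can be expressed directly as an S3 lifecycle configuration. The following boto3 sketch uses a placeholder bucket name and prefix and is only an illustration of that schedule.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and prefix holding historical RTB data.
s3.put_bucket_lifecycle_configuration(
    Bucket="rtb-historical-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-bid-logs",
                "Filter": {"Prefix": "bid-logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 365, "StorageClass": "STANDARD_IA"},   # ~1 year
                    {"Days": 1095, "StorageClass": "GLACIER"},      # ~3 years
                ],
                "Expiration": {"Days": 2555},                       # ~7 years
            }
        ]
    },
)
```

Once the rule is attached, S3 applies the transitions and expiration automatically; no batch job is needed to move or delete the objects.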
RTB Analytics Platform on AWS
AWS has a wide variety of analytics platforms that can be utilized by RTB platforms so that bidding decisions can be as effective as possible. In the machine learning space, for very large data sets, a common pattern is to use the machine learning library that comes with Spark (MLlib) on EMR. You can also utilize other tools that run on Amazon EMR, or you can use a managed service such as Amazon Machine Learning (Amazon ML). All of these options have full integration with Amazon S3 storage for your long-term data set. This allows the data to be analyzed with many different tools to achieve your predictive analytics goals. You can also read about the different options and benefits AWS provides for large-scale analytics in the Big Data Analytics Options on AWS whitepaper. Typically, an analytical workload requires a workflow component and can be implemented using Amazon Simple Workflow Service (SWF), AWS Data Pipeline, or AWS Lambda.

Campaign Management on AWS
Campaign management system architectures on AWS look like typical well-architected web applications, similar to those described for the bid-processing system, but this time with a full-scale persistent data tier. Campaign management should exist in Auto Scaling groups, sit behind ELB load balancers and security groups, and deploy in multiple Availability Zones for high availability. You can use Amazon Relational Database Service (RDS) for your campaign management. Amazon RDS is a managed RDBMS service that supports Oracle, SQL Server, Aurora, MySQL, PostgreSQL, and MariaDB engines. Amazon RDS will install, patch, and maintain your database, perform multi-AZ synchronous replication, and back up your database. You could also run your own database technology on Amazon EC2, but you would need to take ownership of managing and maintaining that database yourself. Your application will typically tie into your low-latency data tier to provide real-time information on the success of your campaigns back to your customers. We recommend using a content delivery network such as Amazon CloudFront, a managed content delivery network that helps speed up and securely deliver dynamic and static data (e.g., JavaScript, ad images) as close to your users as possible.

Reference Architecture Example
Figure 5 is an example of a reference architecture that customers have successfully deployed. It has Auto Scaling groups to allow for scalability, and it spans multiple Availability Zones so that any localized failure would not stop its ability to respond to bids.

Figure 5: Example Reference Architecture

Citations
US Programmatic Ad Spend
AdRoll re:Invent 2014
AdRoll Kinesis data processing
Automating Analytic Workflows on AWS

Conclusion
Real-time bidding is a growing trend that has many different components required to effectively deliver intelligent real-time purchasing of media. The AWS platform is a strong fit for each component of the RTB platform due to its global reach and breadth of services. An RTB architecture on AWS allows you to get the real-time performance necessary for RTB, as well as reduce the overall cost and complexity involved in running an RTB platform. The result is a flexible big data architecture that is able to scale along with your business on the AWS global infrastructure. Deploying on AWS offloads a significant
amount of the complexity of operating a scalable realtime infrastructure so that you can focus on what differentiates you from your competitors and focus on making the best possible bidding strategies for your customers Contributors The following individuals and organizations contributed to this document: Steve Boltuch solutions architect Amazon Web Services Chris Marshall solutions architect Amazon Web Services Marco Pedroso software engineer A9 Erik Swensson solutions architect manager Amazon Web Services Dmitri Tchikatilov business development manager Amazon Web Services Vlad Vlasceanu solutions architect Amazon Web Services Further Reading For additional help please consult the following sources: IAB Real Time Bidding Project Beating the Speed of Light with Your Infrastructure on AWS Deploying an RTBkit on AWS with a CloudF ormation Template ArchivedAmazon Web Services – Building a RealTime Bidding Platform on AWS February 2016 Page 21 of 21 Notes 1 US Programmatic ad spend to double by 2016 eMarketer analysis 2 US Programmatic digital display ad spending 20142017 eMarketer analysis20142017 3 US Programmatic ad spend to double by 2016 eMarketer analysis
|
General
|
consultant
|
Best Practices
|
Building_a_Secure_Approved_AMI_Factory_Process_Using_Amazon_EC2_Systems_Manager_SSM_AWS_Marketplace_and_AWS_Service_Catalog
|
Archived Building a Secure Approved AMI Factory Process Us ing A mazon EC 2 Systems Manager (SSM) AWS Marketplace andAWS Service Catalog November 2017 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived © 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitme nts conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS a nd its customers Archived Contents Introduction 1 Building the Approved AMI 3 Considerations for AWS Mark etplace AMIs 5 Distributing the Approved AMI 6 Distributing and Updating AWS Service Catalog 8 Continuously Scanning Published AMIs 10 Conclusion 11 Document Revisions 12 Archived Abstract Customers require that AMIs used in AWS meet general and customer specific security standards Customers may also need to install software agents such as logging or antimalware agents To meet this requirement customers often build approved AMI s that are then shared across the many te ams The responsibility of building and maintaining these can fall to a central cloud or security team or to the individual development teams This paper outlines a process using the best practices for buildi ng and maintaining Approved AMI s through Amazon EC2 Systems Manager and delivering them to your teams using AWS Service Catalog ArchivedAmazon Web Services – Building a Secure Approved AMI Factory Process Page 1 Introduction As your organization move s more and more of your workloads to Amazon Web Services ( AWS) your IT Team need s to ensure that they can meet the security requirements defined by your internal Information Security team The Amazon Machine Images ( AMIs ) used by diffe rent customer business units must be hardened patched and scanned for vulnerabilities regularly Like most companies your organization is probably looking for ways to reduce the time required to provid e approved AMIs Often evidence of compliance and approval is required before you can use AMIs in your production environments It can be difficult for your d evelopment teams to determine which AMIs are approved and how to integrate AMIs into their own applications Organization wide cloud teams need to ensure compliance and enforce that development teams use the hardened AMIs and not just any offtheshelf AMI It isn’t uncommon for organization to build fragile internal tool chains Those are often dependent on one or two skilled people whose departure introduces risk This whitepaper presents the challenges faced by customer cloud teams It describes a method for providing a repeatable scalable and approved application stack factory that increases innovation velocity reduces effort and increases the chief information security officer’s (CISO ) confidence that teams are compliant In a typical enterprise 
scenario a cloud team is responsible for providing the core infrastructure services This team owns providing the appropriate AWS environment for t he many development teams and approved AMIs that include the latest operating system updates hardening requirements and required thirdparty software agents They need to provide these approved images to teams across the organization in a seamless way In a more decentralized model organizations typically use this same method Development teams want to consume the latest approved AMI in the simplest way possible often through automation They want to customize these approved AMIs with the required softw are components but also ensure that the images continue to meet your organization’s InfoSec requirements ArchivedAmazon Web Services – Building a Secure Approved AMI Factory Process Page 2 This solution uses Amazon EC2 Systems Manager Automation to drive the workflow Automation defines a sequence of steps an d is compos able The solution is broken down into a set of logical building blocks where the master workflow invokes the following individual components : 1 Build the AMI 2 Validate the AMI 3 Publish the AMI to AWS Service Catalog The master Automation invoke s all the steps as i llustrated in the following figure Figure 1: Solution overview The development teams can repeat this process Each team can add their own software and produce a new AMI that is scanned distributed and consumed as necessary The extended flow across the teams is as follows : • Central cloud engineering team is responsible for the following : o Setting policy on the specified operating systems the variants and the frequency of change policy o Buildin g the approved AMIs that include the latest operating system updates hardening requirements and approved software agents o Running AWS EC2 Systems Manager Automation to build approved AMI o Making the AMI available to teams for further automation with EC2 Systems Manager and making the product available through Service Catalog ArchivedAmazon Web Services – Building a Secure Approved AMI Factory Process Page 3 o Optional: Setting up AWS EC2 Systems Manager to auto mate scheduled scanning of approved AMIs for vulnerabilities using Amazon Inspector • Development Teams are responsible for the following: o Building the application stacks used in production and meet ing any hardening requirements You can use AWS EC2 Systems Manager or AWS Code Pipeline to build the required AMIs or AWS CloudFormation stacks o Optional: Completing any steps that require authorized approval o Optional: Provide the resulting approved application stack for deployment via automation or AWS Service Catalog The solution uses the following AWS Services: • AWS Service Catalog1 • Amazon EC2 Systems Manager2 • Amazon Inspector3 • AWS Marketplace4 • AWS CodePipeline5 • AWS CodeCommit6 Building the Approved AMI The key to the entire pro cess is generating an AMI that meets all your hardening requirements The following diagram illustrates the high level process ArchivedAmazon Web Services – Building a Secure Approved AMI Factory Process Page 4 Figure 2: AMI hardening process ArchivedAmazon Web Services – Building a Secure Approved AMI Factory Process Page 5 Phase Description Automation Trigger You can configure Amazon EC2 Systems Manager Automation to be triggered by a user or an event 1 You can set up an event using Amazon CloudWatch (for example a monthly timed event ) or some other customer event (for example when code is checked into AWS CodeC ommit Build Phase 
The build phase takes a source AMI as the input and generates a hardened AMI ready for testing. 2 Create instance – An instance is created from the latest available base AMI. This could be an Amazon-provided, AWS Marketplace, or customer-provided AMI. As part of the instance launch, you install the Amazon EC2 Systems Manager (SSM) Agent using userdata. 3 Run command – When the instance is up and running, packages and scripts are securely downloaded from an Amazon S3 bucket and executed. This could include operating system updates, operating system hardening scripts, and the installation of new software and configuration changes. These packages and scripts could be anything from custom bash scripts to Ansible playbooks. 4 Build AMI – After the instance has been updated, a new hardened AMI is created. Validation Phase – Depending on your requirements, you can use custom scripts, third-party security software, or Amazon Inspector to verify that your instances meet your security requirements. Regardless of your choice, the process is the same. If you have implemented a custom scanning solution: 5 Create instance – A new instance is created from the hardened AMI. 6 Run command – When the instance is up and running, validation scripts and tools can be securely downloaded from an S3 bucket and then executed to validate the instance, or you can use Qualys, Nessus, or Amazon Inspector to validate the AMI. Approval Phase – After the scanning is complete, you can inspect the reports before approving the new hardened AMI. 7 You can store the new hardened AMI ID in a data store such as the SSM Parameter Store, which can be used by other automations later in the pipeline. Notifications – After the Automation job is complete, you can notify your teams. 8 You can use CloudWatch Events to generate email alerts to teams and Amazon Simple Notification Service (Amazon SNS) notifications to trigger other automations.

Considerations for AWS Marketplace AMIs
AWS Marketplace AMIs have a Marketplace product code attached to the AMI. When you create your version of the AMI, this product code is copied across to the new AMI. You need to confirm that any changes you make to the AMI don't affect the stability or performance of the product. Some Marketplace offers come with vendor-designed CloudFormation templates to reduce the effort of establishing clusters and HA configurations. If the product can only be launched from AWS Marketplace using an AWS CloudFormation template, you must update the AMI ID in the template to customize and harden the instance to create a new AMI. You can download and change the template from the AWS Marketplace product page. If the template launch requires any scripting, test the template to ensure that these scripts work as expected.

Distributing the Approved AMI
After you have an approved AMI, you can distribute the AMI across AWS Regions and then share it with any other AWS accounts. To do this, you use an Amazon EC2 Systems Manager Automation document that uses an AWS Lambda function to copy the AMIs across a specified list of Regions, and then another Lambda function to share each copied AMI with the other accounts. The resulting AMI IDs can be stored in the SSM Parameter Store or Amazon DynamoDB for later consumption.

Figure 3: Copying and sharing across AWS Regions and accounts
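As a rough sketch of what such a Lambda-backed automation step might do, the following boto3 code copies an approved AMI to a list of Regions and shares each copy with other accounts. The AMI ID, Region list, account IDs, and parameter name are placeholders, and the real workflow in this paper is orchestrated by Systems Manager Automation rather than run as a standalone script.

```python
import boto3

SOURCE_REGION = "us-east-1"
SOURCE_AMI_ID = "ami-0123456789abcdef0"        # placeholder approved AMI
TARGET_REGIONS = ["us-west-2", "eu-west-1"]    # placeholder Region list
SHARE_WITH_ACCOUNTS = ["111122223333"]         # placeholder account IDs

for region in TARGET_REGIONS:
    ec2 = boto3.client("ec2", region_name=region)

    # Copy the approved AMI into the target Region.
    copy = ec2.copy_image(
        Name="approved-base-ami",
        SourceImageId=SOURCE_AMI_ID,
        SourceRegion=SOURCE_REGION,
    )
    ami_id = copy["ImageId"]

    # Wait for the Regional copy to become available before sharing it.
    ec2.get_waiter("image_available").wait(ImageIds=[ami_id])

    # Share the copied AMI with the other accounts via launch permissions.
    ec2.modify_image_attribute(
        ImageId=ami_id,
        LaunchPermission={"Add": [{"UserId": acct} for acct in SHARE_WITH_ACCOUNTS]},
    )

    # Record the Regional AMI ID, for example in the SSM Parameter Store.
    boto3.client("ssm", region_name=region).put_parameter(
        Name="/approved-ami/latest", Value=ami_id, Type="String", Overwrite=True
    )
```

Storing each Regional AMI ID in Parameter Store (or DynamoDB) is what lets downstream automations and Service Catalog product updates pick up the latest approved image without manual handoffs.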
After the AMI is shared with the specified accounts, you can trigger another notification using email or SNS, which could start further automations. If there is a requirement to encrypt the AMIs, the process is similar, except that instead of sharing the AMI with accounts, the AMI must be copied to each account and then encrypted. This increases the number of AMIs to manage, but you can still automate it using the same process.

Note: If you have sourced the AMI from AWS Marketplace, make sure that any account you share this new AMI with subscribes to the product in Marketplace.

Distributing and Updating AWS Service Catalog
AWS Service Catalog has two important components: products and portfolios (a collection of products). Both components use JSON/YAML CloudFormation templates. You can apply constraints, tags, and policies to a product or portfolio. AWS Service Catalog supports up to 50 versions per product. AWS Service Catalog provides a TagOption library that enables you to create and apply repeatable and consistent tags to a product. After you build and distribute the AMIs, you can update AWS Service Catalog portfolios across the AWS Regions and accounts.

When managing multiple AWS Service Catalog product portfolios across AWS Regions and your organization's AWS accounts, it is good practice to use a script to create portfolios and products. You can store portfolio definitions in a JSON or YAML file and then create portfolios using scripts that target specific accounts and Regions, as shown in the following figure.

Figure 4: Distributing AWS Service Catalog portfolios and products

When the AMI is updated, you can create a new version of an AWS Service Catalog product. To do this, you need to generate a new AWS CloudFormation template for the product containing the updated AMIs. You can handle AWS Regions using the standard CloudFormation mappings sections. You can standardize the template and use a parameter for the AMI ID. You can enforce the AMI ID by defining a template constraint. Regardless of how you choose to set it up, the process for deploying portfolios and products remains the same.

Continuously Scanning Published AMIs
You need to regularly scan approved AMIs to ensure that they don't contain any newly discovered Common Vulnerabilities and Exposures (CVEs). You can schedule daily inspections of the AMI, as shown in the following architecture diagram. To kick-start the continuous scanning process, you set up a CloudWatch Event that is triggered based on a schedule. The event starts a new Automation document execution, as illustrated in the following figure.

Figure 5: Continuous scanning architecture overview

1 Read AMI ID – The SSM Automation document reads the AMI IDs from the Parameter Store. 2 Launch AMI – The SSM Automation document launches EC2 instances with a userdata script and installs the Amazon Inspector Agent. 3 Trigger Amazon Inspector assessment – The Automation document starts the Amazon Inspector assessment on the instance. 4 Update assessment execution status – The results of the Amazon Inspector assessment are sent from the agent on the instance back to Amazon Inspector. 5 Update Amazon Inspector assessment result – The Amazon Inspector results are stored in an S3 bucket for later retrieval. 6 Notification of any high/medium/low CVEs – A notification is sent via
SNS if any CVE’s are found 7 Terminate the instance – The SSM Automation document terminates the instance 8 Send notification – After the Amazon Inspector assessment is complete a message containing the CVE details is published to an SNS topic You can also set up CloudWatch Events to identify Automation document execution failures Conclusion Setting up an efficient tool chain for a large enterprise can require substantial effort and often hinge s on a few people in a big company Many companies build internal tools and processes us ing code written by one or two developers This approach creates problems as companies grow because it doesn’t scale and usually doesn’t include automation AWS provides a consistent template model which ensures consistency and reduces the risk of failure You can source many AMIs from the Amazon EC2 Console or AWS Marketplace By building and verifying approved hardened AMIs using the solution described in this whitepaper you can tag catalog apply polic ies and distribute AMIs across your organization ArchivedAmazon Web Services – Building a Secure Approved AMI Factory Process Page 12 Document Revisions Date Description November 2017 First publication 1 https://awsamazoncom/servicecatalog/ 2 https://awsamazoncom/ec2/systems manager/ 3 https://awsamazoncom/inspector/ 4 https://awsamazoncom/marketplace/ 5 https://awsamazoncom/codepipeline/ 6 https://awsamazoncom/codecommit/ Notes
|
General
|
consultant
|
Best Practices
|
Building_Big_Data_Storage_Solutions_Data_Lakes_for_Maximum_Flexibility
|
Building Big Data Storage Solutions (Data Lakes) for Maximum Flexibility July 2017 Archived This document has been archived For the most recent version refer to : https://docsawsamazoncom/whitepapers/latest/ buildingdatalakes/buildingdatalakeawshtml© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Amazon S3 as the Data Lake Storage Platform 2 Data Ingestion Methods 3 Amazon Kinesis Firehose 4 AWS Snowball 5 AWS Storage Gateway 5 Data Cataloging 6 Comprehensive Data Catalog 6 HCatalog with AWS Glue 7 Securing Protecting and Managing Data 8 Access Policy Options and AWS IAM 9 Data Encryption with Amazon S3 and AWS KMS 10 Protecting Data with Amazon S3 11 Managing Data with Object Tagging 12 Monitoring and Optimizing the Data Lake Environment 13 Data Lake Monitoring 13 Data Lak e Optimization 15 Transforming Data Assets 18 InPlace Querying 19 Amazon Athena 20 Amazon Redshift Spectrum 20 The Broader Analytics Portfolio 21 Amazon EMR 21 Amazon Machine Learning 22 Amazon QuickSight 22 Amazon Rek ognition 23 ArchivedFuture Proofing the Data Lake 23 Contributors 24 Document Revisions 24 ArchivedAbstract Organizations are collecting and analyzing increasing amounts of data making it difficult for traditional on premises solutions for data storage data management and analytics to keep pace Amazon S3 and Amazon Glacier provide an ideal storage solution for data lakes They provide options such as a breadth and depth of integration with traditional big data analytics tools as well as innovative query inplace analytics tools that help you eliminate costly and complex extract transform and load processes This guide explains each of these optio ns and provides best practi ces for building your Amazon S3 based data lake ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 1 Introduction As o rganizations are collecting and analyzing increasing amounts of data traditional onpremise s solutions for data storage data management and analytics can no longer keep pace Data siloes that aren’t built to work well together make storage consolidation for more comprehensive and efficient analytics difficult This in turn limit s an organization’s agility ability to derive more insights and value from its data and capability to seamles sly adopt more sophisticated analytics tools and processes as its skills and needs evolve A data lake which is a single platform combining storage data governance and analytics is designed to address these challenges It’s a centralized secure and durable cloud based storage platform that allows you to ingest and store structured and unstructured data and transform these raw data assets as needed You don’t need an innovation 
limiting pre defined schema You can use a complete portfolio of data exploration reporting analytics machine learning and visualization tools on the data A data lake makes data and the optimal analytic s tools available to more users across more lines of business allowing them to get all of the business insights they need whe never they need them Until recently the data lake had been more concept than reality However Amazon Web Services (AWS) has developed a data lake architecture that allows you to build data lake solutions costeffectively using Amazon Simple Sto rage Service (Amazon S3) and other services Using the Amazon S3 based data lake architecture capabilities you can do the following : • Ingest and store data from a wide variety of sources into a centralized platform • Build a comprehensive data catalog to fin d an d use data assets stored in the data lake • Secur e protect and manag e all of the data stored in the data lake • Use t ools and policies to monitor analyze and optimize infrastructure and data • Transform raw data assets in place into optimized usable formats • Query data assets in place ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 2 • Use a b road and deep portfolio of data analytics data science machine learning and visualization tools • Quickly integrat e current and future third party data processing tools • Easily and securely shar e process ed datasets and results The remainder of this paper provide s more information about each of these capabil ities Figure 1 illustrates a sample AWS data lake platform Figure 1: Sample AWS data lake platform Amazon S3 as the Data Lake Storage Platform The Amazon S3 based data lake solution uses Amazon S3 as its primary storage platform Amazon S3 provides an optimal foundation for a data lake because of its virtually unlimited scalability You can seamlessly and nondisruptively increase storage from gigabyt es to petabytes of content paying only for what you use Amazon S3 is designed to provide 99999999999% durability It has scalable performance ease ofuse features and native encryption and access control capabilities Amazon S3 integrates with a broad portfolio of AWS and third party ISV data processing tools Key data lake enabling features of Amazon S3 include the following : ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 3 • Decoupling of storage from compute and data processing In traditional Hadoop and data warehouse solutions storage and compute are tightly coupled making it difficult to optimize costs and data processing workflows With Amazon S3 you can cost effectively store all data types in their native formats You can then launch as many or as few v irtual servers as you need using Amazon Elastic Compute Cloud (EC2) and you can use AWS analytics tools to process your data You can o ptimize your EC2 instances to provide the right ratios of CPU memory and bandwidth for best performance • Centralized data architecture Amazon S3 makes it easy to build a multi tenant environment where many users can bring their own data analytics tools to a common set of data This improv es both cost and data governance over that of traditional solutions which require multiple copies of data to be distributed across multiple processing platforms • Integration with clusterless and serverless AWS services Use Amazon S3 with Amazon Athena Amazon Redshift Spectrum Amazon Rekognition and AWS Glue to query and process data Amazon S3 also integrates with AWS Lambda serverless 
computing to run code without provisioning or managing servers. With all of these capabilities, you only pay for the actual amounts of data you process or for the compute time that you consume.
• Standardized APIs. Amazon S3 RESTful APIs are simple, easy to use, and supported by most major third-party independent software vendors (ISVs), including leading Apache Hadoop and analytics tool vendors. This allows customers to bring the tools they are most comfortable with and knowledgeable about to help them perform analytics on data in Amazon S3.

Data Ingestion Methods
One of the core capabilities of a data lake architecture is the ability to quickly and easily ingest multiple types of data, such as real-time streaming data and bulk data assets from on-premises storage platforms, as well as data generated and processed by legacy on-premises platforms such as mainframes and data warehouses. AWS provides services and capabilities to cover all of these scenarios.

Amazon Kinesis Firehose
Amazon Kinesis Firehose is a fully managed service for delivering real-time streaming data directly to Amazon S3. Kinesis Firehose automatically scales to match the volume and throughput of streaming data, and requires no ongoing administration. Kinesis Firehose can also be configured to transform streaming data before it's stored in Amazon S3. Its transformation capabilities include compression, encryption, data batching, and Lambda functions. Kinesis Firehose can compress data before it's stored in Amazon S3. It currently supports GZIP, ZIP, and SNAPPY compression formats. GZIP is the preferred format because it can be used by Amazon Athena, Amazon EMR, and Amazon Redshift. Kinesis Firehose encryption supports Amazon S3 server-side encryption with AWS Key Management Service (AWS KMS) for encrypting delivered data in Amazon S3. You can choose not to encrypt the data, or to encrypt with a key from the list of AWS KMS keys that you own (see the section Encryption with AWS KMS). Kinesis Firehose can concatenate multiple incoming records and then deliver them to Amazon S3 as a single S3 object. This is an important capability because it reduces Amazon S3 transaction costs and transactions-per-second load. Finally, Kinesis Firehose can invoke Lambda functions to transform incoming source data and deliver it to Amazon S3. Common transformation functions include transforming Apache Log and Syslog formats to standardized JSON and/or CSV formats. The JSON and CSV formats can then be directly queried using Amazon Athena. If using a Lambda data transformation, you can optionally back up raw source data to another S3 bucket, as Figure 2 illustrates.

Figure 2: Delivering real-time streaming data with Amazon Kinesis Firehose to Amazon S3 with optional backup
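The record-transformation hook mentioned above is a Lambda function that receives a batch of base64-encoded records and must return each one with a status. The sketch below converts a hypothetical space-delimited log line into JSON; the field layout is invented for illustration and is not part of the whitepaper.

```python
import base64
import json


def lambda_handler(event, context):
    """Kinesis Firehose data-transformation handler (illustrative)."""
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"]).decode("utf-8")

        # Hypothetical space-delimited log line: "<timestamp> <status> <url>"
        parts = raw.strip().split(" ", 2)
        if len(parts) < 3:
            # Pass malformed records back unchanged, marked as failed.
            output.append({
                "recordId": record["recordId"],
                "result": "ProcessingFailed",
                "data": record["data"],
            })
            continue

        transformed = json.dumps({
            "timestamp": parts[0],
            "status": parts[1],
            "url": parts[2],
        }) + "\n"

        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # other allowed values: "Dropped", "ProcessingFailed"
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```

Because the transformed records land in S3 as newline-delimited JSON, they can be queried directly with Amazon Athena, as the paragraph above notes.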
After the data transfer is complete, the Snowball's E Ink shipping label will automatically update. Ship the device back to AWS. Upon receipt at AWS, your data is then transferred from the Snowball device to your S3 bucket and stored as S3 objects in their original/native format. Snowball also has an HDFS client, so data may be migrated directly from Hadoop clusters into an S3 bucket in its native format.

AWS Storage Gateway

AWS Storage Gateway can be used to integrate legacy on-premises data processing platforms with an Amazon S3-based data lake. The File Gateway configuration of Storage Gateway offers on-premises devices and applications a network file share via an NFS connection. Files written to this mount point are converted to objects stored in Amazon S3 in their original format without any proprietary modification. This means that you can easily integrate applications and platforms that don't have native Amazon S3 capabilities—such as on-premises lab equipment, mainframe computers, databases, and data warehouses—with S3 buckets, and then use tools such as Amazon EMR or Amazon Athena to process this data.

Additionally, Amazon S3 natively supports DistCP, which is a standard Apache Hadoop data transfer mechanism. This allows you to run DistCP jobs to transfer data from an on-premises Hadoop cluster to an S3 bucket. The command to transfer data typically looks like the following:

hadoop distcp hdfs://source-folder s3a://destination-bucket

Data Cataloging

The earliest challenges that inhibited building a data lake were keeping track of all of the raw assets as they were loaded into the data lake, and then tracking all of the new data assets and versions that were created by data transformation, data processing, and analytics. Thus, an essential component of an Amazon S3-based data lake is the data catalog. The data catalog provides a queryable interface of all assets stored in the data lake's S3 buckets. The data catalog is designed to provide a single source of truth about the contents of the data lake.

There are two general forms of a data catalog: a comprehensive data catalog that contains information about all assets that have been ingested into the S3 data lake, and a Hive Metastore Catalog (HCatalog) that contains information about data assets that have been transformed into formats and table definitions that are usable by analytics tools like Amazon Athena, Amazon Redshift, Amazon Redshift Spectrum, and Amazon EMR. The two catalogs are not mutually exclusive and both may exist. The comprehensive data catalog can be used to search for all assets in the data lake, and the HCatalog can be used to discover and query data assets in the data lake.

Comprehensive Data Catalog

The comprehensive data catalog can be created by using standard AWS services like AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch Service (Amazon ES). At a high level, Lambda triggers are used to populate DynamoDB tables with object names and metadata when those objects are put into Amazon S3; then Amazon ES is used to search for specific assets, related metadata, and data classifications. Figure 3 shows a high-level architectural overview of this solution.

Figure 3: Comprehensive data catalog using AWS Lambda, Amazon DynamoDB, and Amazon Elasticsearch Service
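As an illustration of the pattern shown in Figure 3, the following minimal sketch (not part of the original solution) shows a Python Lambda function, using boto3, that could be subscribed to S3 ObjectCreated events and record each new object's name and basic metadata in a DynamoDB table. The table name and attribute layout are assumptions, and the downstream indexing into Amazon ES for search is omitted.

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("data-lake-catalog")  # assumed table name

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated events; writes object metadata to DynamoDB."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch object metadata (size, content type, last-modified timestamp).
        head = s3.head_object(Bucket=bucket, Key=key)

        table.put_item(
            Item={
                "asset_id": f"{bucket}/{key}",          # assumed partition key
                "bucket": bucket,
                "key": key,
                "size_bytes": head["ContentLength"],
                "content_type": head.get("ContentType", "unknown"),
                "last_modified": head["LastModified"].isoformat(),
            }
        )
    return {"indexed": len(event["Records"])}
```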
HCatalog with AWS Glue

AWS Glue can be used to create a Hive-compatible Metastore Catalog of data stored in an Amazon S3-based data lake. To use AWS Glue to build your data catalog, register your data sources with AWS Glue in the AWS Management Console. AWS Glue will then crawl your S3 buckets for data sources and construct a data catalog using pre-built classifiers for many popular source formats and data types, including JSON, CSV, Parquet, and more. You may also add your own classifiers, or choose classifiers from the AWS Glue community, to add to your crawls to recognize and catalog other data formats. The AWS Glue-generated catalog can be used by Amazon Athena, Amazon Redshift, Amazon Redshift Spectrum, and Amazon EMR, as well as third-party analytics tools that use a standard Hive Metastore Catalog. Figure 4 shows a sample screenshot of the AWS Glue data catalog interface.

Figure 4: Sample AWS Glue data catalog interface

Securing, Protecting, and Managing Data

Building a data lake and making it the centralized repository for assets that were previously duplicated and placed across many siloes of smaller platforms and groups of users requires implementing stringent and fine-grained security and access controls, along with methods to protect and manage the data assets. A data lake solution on AWS—with Amazon S3 as its core—provides a robust set of features and services to secure and protect your data against both internal and external threats, even in large multi-tenant environments. Additionally, innovative Amazon S3 data management features enable automation and scaling of data lake storage management, even when it contains billions of objects and petabytes of data assets.

Securing your data lake begins with implementing very fine-grained controls that allow authorized users to see, access, process, and modify particular assets, and ensure that unauthorized users are blocked from taking any actions that would compromise data confidentiality and security. A complicating factor is that access roles may evolve over various stages of a data asset's processing and lifecycle. Fortunately, Amazon has a comprehensive and well-integrated set of security features to secure an Amazon S3-based data lake.

Access Policy Options and AWS IAM

You can manage access to your Amazon S3 resources using access policy options. By default, all Amazon S3 resources—buckets, objects, and related subresources—are private: only the resource owner, an AWS account that created them, can access the resources. The resource owner can then grant access permissions to others by writing an access policy. Amazon S3 access policy options are broadly categorized as resource-based policies and user policies. Access policies that are attached to resources are referred to as resource-based policies. Example resource-based policies include bucket policies and access control lists (ACLs). Access policies that are attached to users in an account are called user policies. Typically, a combination of resource-based and user policies are used to manage permissions to S3 buckets, objects, and other resources.

For most data lake environments, we recommend using user policies, so that permissions to access data assets can also be tied to user roles and permissions for the data processing and analytics services and tools that your data lake users will use. User policies are associated with the AWS Identity and Access Management (IAM) service, which allows you to securely control access to AWS services and resources. With IAM you can
create IAM users groups and roles in account s and then attach access policies to them that grant access to AWS resources including Amazon S3 The model for user policies is show n in Figure 5 For more details and information on securing Amazon S3 with user policies and AWS IAM please reference: Amazon Simple Storage Service Developers Guide and AWS Identity a nd Access Management User Guide Figure 5: Model for user policies ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 10 Data Encryption with Amazon S3 and AWS KMS Although user policies and IAM contr ol who can see and access data in your Amazon S3 based data lake it’s also important to ensure that users who might inadvertently or maliciously manage to gain access to those data assets can ’t see and use them This is accomplished by using encryption keys to encrypt and de encrypt data assets Amazon S3 supports multiple encryption options Additionally AWS KMS helps scale and simplify management of encryption keys AWS KMS gives you centralized control over the encryption keys used to protect your data assets You can create import rotate disable delete define usage policies for and audit the use of encryption keys used to encrypt your data AWS KMS is integrated with several other AWS services making it easy to encrypt the data sto red in these services with encryption keys AWS KMS is integrated with AWS CloudTrail which provides you with the ability to audit who used which keys on which resources and when Data lakes built on AWS primarily use two types of encryption : Server side encryption (SSE) and client side encryption SSE provides data atrest encryption for data written to Amazon S3 With SSE Amazon S3 encrypts user data assets at the object level stores the encrypted objects and then decrypts them as they are accessed and retrieved With client side encryption data objects are encrypted before they written into Amazon S3 For example a data lake user could specify client side encryption before transferring data assets into Amazon S3 from the Internet or could specify that services like Amazon EMR Amazon Athena or Amazon Redshift use client side encryption with Amazon S3 SSE and client side encryption can be combined for the highest levels of protection Given the intricacies of coordinating encryption key management in a complex environment like a data lake we strongly recommend using AWS KMS to coordinate keys across client and server side encryption and across multiple da ta processing and analytics services For even greater levels of data lake data protection other services like Amazon API Gateway Amazon Cognito and IAM can be combined to create a “shopping cart” model for users to check in and check out data lake data assets This architecture has been created for the Amazon S3 based data lake solution reference architecture which can be found downloaded and deployed at https://awsamazonco m/answers/big data/data lake solution/ ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 11 Protecting Data with Amazon S3 A vital function of a centralized data lake is data asset protection —primarily protection against corruption loss and accidental or malicious overwrites modifications or deletions Amazon S3 has several intrinsic features and capabilities to provide the highest levels of data protection when it is used as the core platform for a data lake Data protection rests on the inherent durability of the storage platform used Durability is defined as the ability to protect data 
assets against corruption and loss Amazon S3 provides 99999999999% data durability which is 4 to 6 orders of magnitude greater than that which most on premise s single site storage platforms can provide Put another way the durability of Amazon S3 is designed so that 10000000 data assets can be reliably stored for 10000 years Amazon S3 achieves this durability in all 16 of its global Regions by using multiple Availability Zones Availability Zones consist of one or more discrete data centers each with redundant power networking and connectivity housed in separate facilities Availability Zones offer the ability to operate production applications and analytics services which are more highly ava ilable fault tolerant and scalable than would be possible from a single data center Data written to Amazon S3 is redundantly stored across three Availability Zones and multiple devices within each Availability Zone to achieve 999999999% durability Thi s means that even in the event of an entire data center failure data would not be lost Beyond core data protection another key element is to protect data assets against unintentional and malicious deletion and corruption whether through users accidenta lly deleting data assets applications inadvertently deleting or corrupting data or rogue actors trying to tamper with data This becomes especially important in a large multi tenant data lake which will have a large number of users many applications and constant ad hoc data processing and application development Amazon S3 provides versioning to protect data assets against these scenarios When enabled Amazon S3 versioning will keep multiple copies of a data asset When an asset is updated prior vers ions of the asset will be retained and can be retrieved at any time If an asset is deleted the last version of it can be retrieved Data asset versioning can be managed by policies to automate management at large scale and can be combined with other Am azon S3 capabilities such as lifecycle management for long term ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 12 retention of versions on lower cost storage tiers such as Amazon Glacier and Multi Factor Authentication (MFA) Delete which requires a second layer of authentication —typically via an approve d external authentication device —to delete data asset versions Even though Amazon S3 provides 99999999999% data durability within an AWS Region many enterprise organizations may have compliance and risk models that require them to replicate their data assets to a second geographically distant location and build disaster recovery (DR) architectures in a second location Amazon S3 cross region replication (CRR) is an integral S3 capability that automatically and asynchronously copies data assets from a data lake in one AWS Region to a data lake in a different AWS Region The data assets in the second Region are exact replicas of the source data assets that they were copied from including their names metadata versions and access controls All data assets are encrypted during transit with SSL to ensure the highest levels of data security All of these Amazon S3 features and capabilities —when combined with other AWS services like IAM AWS KMS Amazon Cognito and Amazon API Gateway —ensure that a data lake using Amazon S3 as its core storage platform will be able to meet the most stringent data security compliance privacy and protection requirements Amazon S3 includes a broad range of certifications including PCI DSS HIPAA/HITECH FedRAMP SEC 
Rule 17 a4 FISMA EU Data Protection Directive and many other global agency certifications These levels of compliance and protection allow organizations to build a data lake on AWS that operates more securely and with less risk than one b uilt in their on premise s data centers Managing Data with Object Tagging Because data lake solutions are inherently multi tenant with many organizations lines of businesses users and applications using and processing data assets it becomes very important to associate data assets to all of these entities and set policies to manage these assets coherently Amazon S3 has introduced a new capability —object tagging —to assist with categorizing and managing S 3 data assets An object tag is a mutable key value pair Each S3 object can have up to 10 object tags Each tag key can be up to 128 Unicode characters in length and each tag value can be up to 256 Unicode characters in length For an example of object tagging suppose an object contains protected ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 13 health information (PHI) data —a user administrator or application that uses object tags might tag the object using the key value pair PHI=True or Classification=PHI In addition to being used for data classifi cation object tagging offers other important capabilities Object tags can be used in conjunction with IAM to enable fine grain controls of access permissions For example a particular data lake user can be granted permissions to only read objects with s pecific tags Object tags can also be used to manage Amazon S3 data lifecycle policies which is discussed in the next section of this whitepaper A data lifecycle policy can contain tag based filters Finally object tags can be combined with Amazon Cloud Watch metrics and AWS CloudTrail logs —also discussed in the next section of this paper —to display monitoring and action audit data by specific data asset tag filters Monitoring and Optimizing the Data Lake Environment Beyond the efforts required to architect and build a data lake your organization must also consider the operational aspects of a data lake and how to cost effectively and efficiently operate a production data lake at large scale Key elements you must co nsider are monitoring the operations of the data lake making sure that it meets performance expectations and SLAs analyzing utilization patterns and using this information to optimize the cost and performance of your data lake AWS provides multiple fea tures and services to help optimize a data lake that is built on AWS including Amazon S3 s torage analytics A mazon CloudW atch metrics AWS CloudT rail and Amazon Glacier Data Lake Monitoring A key aspect of operating a data lake environment is understand ing how all of the components that comprise the data lake are operating and performing and generating notifications when issues occur or operational performance falls below predefined thresholds Amazon CloudWatch As a n administrator you need to look at t he complete data lake environment holistically This can be achieved using Amazon CloudWatch CloudWatch is a ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 14 monitoring service for AWS Cloud resources and the applications that run on AWS You can use CloudWatch to collect and track metrics collect and monitor log files set thresholds and trigger alar ms This allows you to automatically react to changes in your AWS resources CloudWatch can monitor AWS resources such as Amazon EC2 
instances Amazon S3 Amazon EMR Amazon Redshift Amazon DynamoDB and Amazon Relational Database Service ( RDS ) database instances as well as custom metrics generated by other data lake applications and service s CloudWatch provides system wide visibility into resource ut ilization application performa nce and operational health You can use these insights to proactively react to issues and keep your data lake application s and workflows running smoothly AWS CloudTrail An operational data lake has many users and multiple a dministrators and may be subject to compliance and audit requirements so it’ s important to have a complete audit trail of actions take n and who has performed these actions AWS CloudTrail is an AWS service that enables governance compliance operational audi ting and risk auditing of AWS account s CloudTrail continuously monitor s and retain s events related to API calls across the AWS services that comprise a data lake CloudTrail provides a h istory of AWS API calls for an account including A PI calls made through the AWS Management Console AWS SD Ks command line tools and most Amazon S3 based data lake services You can identify which users and accounts made requests or took actions against AWS services that support CloudTrail the source IP address the actions were made from and when the actions occurred CloudTrail can be used to simplify data lake compliance audits by automatically recording and storing activity logs for actions made within AWS accounts Integration with Amazon CloudWatch Logs provides a convenient way to search through log data identify out ofcompliance events accelerate incident investigations and expedite responses to auditor requests CloudTrail logs are stored in an S3 bucket for durability and deeper analysis ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 15 Data Lake Optimiz ation Optimizing a data lake environment includes minimizing operational costs By building a data lake on Amazon S3 you only pay for the data storage and data processing services that you actually use as you use them You can reduce cost s by optimizing how you use these services Data asset storage is often a significant portion of the costs associated with a data lake Fortunately AWS has several features that can be used to optimize and reduce costs these include S3 lifecycle management S3 storage class analy sis and Amazon Glacier Amazon S3 Lifecycle Management Amazon S3 lifecycle management allows you to create lifecycle rules which can be used to automatically migrate data assets to a lower cost tier of storage —such as S3 Standard Infrequent Access or Amazon Glacier —or let them expire when they are no longer needed A lifecycle configuration which consists of an XML file comprises a set of rules with predefined actions that you want Amazon S3 to perform on data assets dur ing their lifetime Lifecycle configurations can perform actions based on data asset age and data asset names but can also be combined with S3 object tagging to perform very granular management of data assets Amazon S3 Storage Class Analy sis One of the c hallenges of developing and configuring lifecycle rules for the data lake is gaining an understanding of how data assets are accessed over time It only makes economic sense to transition data assets to a more cost effective storage or archive tier if thos e objects are infrequently accessed Otherwise data access charges associated with these more cost effective storage classes could negate any potential savings Amazon S3 
provides S3 storage class analy sis to help you understand how data lake data assets are used Amazon S3 storage class analy sis uses machine learning algorithms on collected access data to help you develop lifecycle rules that will optimize costs Seamlessly tiering to lower cost storage tiers in an important capability for a data lake particularly as its users plan for and move to more advanced analytics and machine learning capabilities Data lake users will typically ingest raw data assets from many sources and transform those assets into harmonized formats that they can use for ad hoc querying and on going business intelligence ( BI) querying via SQL However they will also want to perform more advanced analytics using streaming analytics machine learning and ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 16 artificial intelligence These more advanced analytics capab ilities consist of building data models validating these data models with data assets and then training and refining these models with historical data Keeping more historical data assets particularly raw data assets allows for better training and refinement of models Additionally as your organization ’s analytics sophistication grows you may want to go back and reprocess historical data to look for new insights and value These historical data assets are infrequently accessed and consume a lot of capacity so they are often well suited to be stored on an archival storage layer Another long term data storage need for the data lake is to keep processed data assets and results for long term retention for compliance and audit purposes to be accessed by auditors when needed Both of these use cases are well served by Amazon Glacier which is an AWS storage service optimized for infrequ ently used cold data and for storing write once read many (WORM) data Amazon Glacier Amazon Glacier is an extremely low cost storage service that provides durable storage with security features for data archiving and backup Amazon Glacier has the same data durability (99999999999%) as Amazon S3 the same integrat ion with AWS security features and can be integrated with S3 by using S3 lifecycle management on data assets stored in S3 so that data assets can be seamlessly migrated from S3 to Glacier Amazon Glacier is a great storage choice when low storage cost is paramount data assets are rarely retrieved and retrieval latency of several minutes to several hours is acceptable Different types of data lake assets may have different retrieval needs For example compliance data may be infrequently accesse d and relatively small in size but need s to be made available in minutes when auditors request data while historical raw data assets may be very large but can be retrieved in bulk over the course of a day when needed Amazon Glacier allows data lake user s to specify retrieval times when the data retrieval request is created with longer retrieval times leading to lower retrieval costs For processed data and records that need to be securely retained Amazon Glacier Vault Lock allows data lake administrato rs to easily deploy and enforce compliance controls on individual Glacier vaults via a lockable policy Administrators can specify controls such as Write Once Read Many (WORM) in ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 17 a Vault Lock policy and lock the policy from future edits Once locked the p olicy becomes immutable and Amazon Glacier will enforce the prescribed controls to help 
achieve your compliance objectives and provide an audit trail for these assets using AWS CloudTrail.

Cost and Performance Optimization

You can optimize your data lake for both cost and performance. Amazon S3 provides a very performant foundation for the data lake because its enormous scale provides virtually limitless throughput and extremely high transaction rates. Using Amazon S3 best practices for data asset naming ensures high levels of performance. These best practices can be found in the Amazon Simple Storage Service Developer Guide.

Another area of optimization is to use optimal data formats when transforming raw data assets into normalized formats in preparation for querying and analytics. These optimal data formats can compress data and reduce the data capacities needed for storage, and also substantially increase query performance by common Amazon S3-based data lake analytic services.

Data lake environments are designed to ingest and process many types of data, and to store raw data assets for future archival and reprocessing purposes, as well as store processed and normalized data assets for active querying, analytics, and reporting. One of the key best practices to reduce storage and analytics processing costs, as well as improve analytics querying performance, is to use an optimized data format, particularly a format like Apache Parquet.

Parquet is a columnar compressed storage file format that is designed for querying large amounts of data, regardless of the data processing framework, data model, or programming language. Compared to common raw data log formats like CSV, JSON, or TXT format, Parquet can reduce the required storage footprint, improve query performance significantly, and greatly reduce querying costs for AWS services that charge by the amount of data scanned. Amazon tests comparing the CSV and Parquet formats, using 1 TB of log data converted from CSV format to Parquet format, showed the following:

• Space savings of 87% with Parquet (1 TB of log data stored in CSV format compressed to 130 GB with Parquet)
• A query time for a representative Athena query that was 34x faster with Parquet (237 seconds for CSV versus 5.13 seconds for Parquet), and the amount of data scanned for that Athena query was 99% less (1.15 TB scanned for CSV versus 2.69 GB for Parquet)
• The cost to run that Athena query was 99.7% less ($5.75 for CSV versus $0.013 for Parquet)

Parquet has the additional benefit of being an open data format that can be used by multiple querying and analytics tools in an Amazon S3-based data lake, particularly Amazon Athena, Amazon EMR, Amazon Redshift, and Amazon Redshift Spectrum.

Transforming Data Assets

One of the core values of a data lake is that it is the collection point and repository for all of an organization's data assets, in whatever their native formats are. This enables quick ingestion, elimination of data duplication and data sprawl, and centralized governance and management. After the data assets are collected, they need to be transformed into normalized formats to be used by a variety of data analytics and processing tools.

The key to 'democratizing' the data and making the data lake available to the widest number of users of varying skill sets and responsibilities is to transform data assets into a format that allows for efficient ad hoc SQL querying. As discussed earlier, when a data lake is built on AWS, we recommend transforming log-based data assets into Parquet format.
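To make that recommendation concrete, the following is a minimal sketch of a CSV-to-Parquet conversion using Apache Spark (PySpark), one of the transformation methods described in the next section. The bucket names, paths, and partitioning columns are assumptions for illustration, not values from this paper.

```python
from pyspark.sql import SparkSession

# Minimal sketch: read raw CSV log data from the data lake's raw zone and
# rewrite it as partitioned, compressed Parquet in the processed zone.
spark = (
    SparkSession.builder
    .appName("csv-to-parquet-conversion")
    .getOrCreate()
)

raw_df = (
    spark.read
    .option("header", "true")        # assume the CSV files carry a header row
    .option("inferSchema", "true")   # or supply an explicit schema in production
    .csv("s3a://example-data-lake-raw/web-logs/")
)

(
    raw_df
    .write
    .mode("overwrite")
    .partitionBy("year", "month")    # assumes year/month columns exist in the logs
    .parquet("s3a://example-data-lake-processed/web-logs-parquet/")
)

spark.stop()
```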
AWS provides multiple services to quickly and efficiently achieve this. There are a multitude of ways to transform data assets, and the "best" way often comes down to individual preference, skill sets, and the tools available. When a data lake is built on AWS services, there is a wide variety of tools and services available for data transformation, so you can pick the methods and tools that you are most comfortable with. Since the data lake is inherently multi-tenant, multiple data transformation jobs using different tools can be run concurrently.

The two most common and straightforward methods to transform data assets into Parquet in an Amazon S3-based data lake use Amazon EMR clusters. The first method involves creating an EMR cluster with Hive installed, using the raw data assets in Amazon S3 as input, transforming those data assets into Hive tables, and then writing those Hive tables back out to Amazon S3 in Parquet format. The second, related method is to use Spark on Amazon EMR. With this method, a typical transformation can be achieved with only 20 lines of PySpark code.

A third, simpler data transformation method on an Amazon S3-based data lake is to use AWS Glue. AWS Glue is an AWS fully managed extract, transform, and load (ETL) service that can be directly used with data stored in Amazon S3. AWS Glue simplifies and automates difficult and time-consuming data discovery, conversion, mapping, and job scheduling tasks. AWS Glue guides you through the process of transforming and moving your data assets with an easy-to-use console that helps you understand your data sources, transform and prepare these data assets for analytics, and load them reliably from S3 data sources back into S3 destinations.

AWS Glue automatically crawls raw data assets in your data lake's S3 buckets, identifies data formats, and then suggests schemas and transformations so that you don't have to spend time hand-coding data flows. You can then edit these transformations, if necessary, using the tools and technologies you already know, such as Python, Spark, Git, and your favorite integrated developer environment (IDE), and then share them with other AWS Glue users of the data lake. AWS Glue's flexible job scheduler can be set up to run data transformation flows on a recurring basis, in response to triggers, or even in response to AWS Lambda events.

AWS Glue automatically and transparently provisions hardware resources and distributes ETL jobs on Apache Spark nodes so that ETL run times remain consistent as data volume grows. AWS Glue coordinates the execution of data lake jobs in the right sequence, and automatically retries failed jobs. With AWS Glue, there are no servers or clusters to manage, and you pay only for the resources consumed by your ETL jobs.
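To make the crawl-and-schedule workflow described above concrete, the following hypothetical sketch uses the boto3 AWS Glue client to register a crawler over a raw-zone S3 path, run it, and start a separately authored ETL job. The crawler name, database, IAM role, schedule, and job name are placeholders, not values from this paper.

```python
import boto3

glue = boto3.client("glue")

# Register a crawler that catalogs everything under the raw-zone prefix.
# The role, database, and names are placeholders for resources you would own.
glue.create_crawler(
    Name="example-raw-zone-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-crawler-role",
    DatabaseName="example_data_lake_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-data-lake-raw/web-logs/"}]},
    Schedule="cron(0 2 * * ? *)",  # crawl nightly at 02:00 UTC
)

# Run the crawler once immediately to populate the catalog.
glue.start_crawler(Name="example-raw-zone-crawler")

# Start an ETL job (authored separately in the Glue console) that converts
# the crawled tables to Parquet in the processed zone.
run = glue.start_job_run(JobName="example-csv-to-parquet-job")
print("Started Glue job run:", run["JobRunId"])
```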
traditional method of performing an ETL process creating a Hadoop cluster or data warehouse loading th e transformed data into these environments and then running query jobs AWS Glue as described in the previous sections provides the data discovery and ETL capabilities and Amazon Athena and Amazon Redshift Spectrum provide the inplace querying capabilities Amazon Athena Amazon Athena is an interactive query service that makes it easy for you to analyze data directly in Amazon S3 using standard SQL With a few actions in the AWS Management Console you can use Athena directly against data assets stored in the data lake and begin using standard SQL to run ad hoc queries and get results in a mat ter of seconds Athena is serverless so there is no infrastructure to set up or manage and you only pay for the volume of data assets scanned during the queries you run Athena scales automatically —executing queries in parallel —so results are fast even with large datasets and complex queries You can use Athena to process unstructured semi structured and structured data sets Supported data asset formats include CSV JSON or columnar data formats such as Apache Parquet and Apache ORC Athena integrate s with Amazon QuickSight for easy visualization It can also be used with third party reporting and business intelligence tools by connecting these tools to Athena with a JDBC driver Amazon Redshift Spectrum A second way to perform in place querying of da ta assets in an Amazon S3 based data lake is to use Amazon Redshift Spectrum Amazon Redshift is a large scale managed data warehouse service that can be used with data assets in Amazon S3 However data assets must be loaded into Amazon Redshift before q ueries can be run By contrast Amazon Redshift Spectrum enables you to run Amazon Redshift SQL queries directly against massive amounts of data — up to exabytes —stored in an Amazon S3 based data lake Amazon Redshift Spectrum applies sophisticated query opt imization scaling processing across thousands of nodes so results are fast —even with large data sets and complex ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 21 queries Redshift Spectrum can directly query a wide variety of data assets stored in the data lake including CSV TSV Parquet Sequence and RCFile Since Redshift Spectrum supports the SQL syntax of Amazon Redshift you can run sophisticated queries us ing the same BI tools that you use today You also have the flexibility to run queries that span both frequently accessed data assets that are s tored loca lly in Amazon Redshift and your full da ta sets stored in Amazon S3 Because Amazon Athena and Amazon R edshift share a common data catalog and common data formats you can use both Athena and Redshift Spectrum against the same data assets You would typically use Athena for ad hoc data discovery and SQL querying and then use Redshift Spectrum for more comp lex queries and scenarios where a large number of data lake users want to run concurrent BI and reporting workloads The Broader Analytics Portfolio The power of a data lake built on AWS is that data assets get ingested and stored in one massively scalable low cost performant platform —and that data discovery transformation and SQL querying can all be done in place using innovative AWS services like AWS Glue Amazon Athena and Amazon Redshift Spectrum In addition there are a wide variety of other AWS services that can be directly integrated with Amazon S3 to create any number of sophisticated analytics machine learning 
and artificial intelligence (AI) data processing pipelines This allows you to quickly solve a wide range of analytics business challenges on a single platform against common data assets without having to worry about provisioning hardware and installing and configuring complex software packages before loading data and performin g analytics Plus you only pay for what you consume Some of the most common AWS services that can be used with data assets in an Amazon S3 based data lake are described next Amazon EMR Amazon EMR is a highly di stributed computing framework used to quick ly and easily process data in a cost effective manner Amazon EMR uses Apache Hadoop an open sour ce framework to distribute data and processing across a n elastically resizable cluster of EC2 instances and allows you to use all the common Hadoop tools suc h as Hive Pig Spark and HBase Amazon EMR does all the heavily lifting involved with provisioning managing and maintaining the infrastructure a nd software of a Hadoop cluster and is integrated directly with Amazon S3 With Amazon EMR you can launch a persistent cluster that stays ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 22 up indefinitely or a temporary cluster that terminates after the analysis is complete In either scenario you only pay for the hours the cluster is up Amazon EMR supports a variety of EC2 instance types encompassing genera l purpose compute memory and storage I/O optimized (eg T2 C4 X1 and I3 ) instances and all Amazon EC2 pricing options (On Demand Reserved and Spot) When you launch an EMR cluster (also called a job flow ) you choose how many and what type of EC2 instances to provision Companies with many different lines of business and a large number of users can build a single data lake solution store their data assets in Amazon S3 and then spin up multiple EMR clusters to share data assets in a multi tenant fashion Amazon Machine Learning Machine learning is another important data lake use case Amazon Machine Learning (ML) is a data lake service that makes it easy for anyone to use predictive analytics and machine learnin g technology Amazon ML provides visualization tools and wizards to guide you through the process of creating ML models without having to learn complex algorithms and technology After the models are ready Amazon ML makes it easy to obtain predictions for your application using API operations You don’ t have to implement custom prediction generation code or manage any infrastructure Amazon ML can create ML models based on data stored in Amazon S3 Amazon Redshift or Amazon RDS Built in wizards guide you through the steps of interactively exploring your data training the ML model evaluating the model quality and adjusting outputs to align with business goals Af ter a model is ready you can request predictions either in batches or by using the low latency real time API As discussed earlier in this paper a data lake built on AWS greatly enhances machine learning capabilities by combining Amazon ML with large historical data sets than can be cost effectively stored on Amazon Glacier but can be easily recalled when needed to train new ML models Amazon QuickSight Amazon QuickSight is a very fast easy touse business analytics service that makes it easy for you to build visualizations perform ad hoc analysis and quickly get business insights from your data assets store d in the data lake anytime on any device You can use Amazon QuickSight to seamlessly discover AWS data sources such as Amazon 
Redshift Amazon RDS Amazon Auror a Amazon Athena and Amazon S3 connect to any or all of these data source s and ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 23 data assets and get insights from this data in minutes Amazon QuickSight enables organizations using the data lake to seamlessly scale their business analytics capabilities to hundreds of thousands of users It delivers fast and responsive query performance by using a robust in memory engine (SPICE) Amazon Rekognition Another innovative data lake service is Amazon Rekognition which is a fully managed image recognition service powered by deep learning run agai nst image data assets stored in Amazon S3 Amazon Rekognition has been built by Amazon’s Computer Vision teams over many years and already analyzes billions of images every day The Amazon Rekognition easy touse API detects thousands of objects and scene s analyzes faces compares two faces to measure similarity and verifies faces in a collection of faces With Amazon Rekognition you can easily build applications that search based on visual content in images analyze face attributes to identify demograp hics implement secure face based verification and more Amazon Rekognition is built to analyze images at scale and integrates seamlessly with data assets stored in Amazon S3 as well as AWS Lambda and other key AWS services These are just a few examples of power ful data processing and analytics tools that can be integrated with a data lake built on AWS See the AWS website for more examples and for the latest list of innovative AWS services available for data lake users Future Proofing the Data Lake A data lake built on AWS can immediately solve a broad r ange of business analytics challenges and quickly provide value to your business H owever business needs are constantly evolving AWS and the analytics partner ecosystem are rapidly evolving and adding new services and capabilities a s businesses and their data lake users achieve more experience and analytics sophistication over time Therefore it’s important that the data lake can seamlessly and non disruptively evolve as needed AWS futureproofs your data lake with a standardized storage solution that grows with your organization by ingesting and storing all of your business’ s data assets on a platform with virtually unlimited scalability and well defined APIs and integrat es with a wide variety of data processing tools This allow s you to ArchivedAmazon Web Services – Building a Data Lake with Amazon Web Services Page 24 add new capabilities to your data lake as you need them without infrastructure limitations or barriers Additionally you can perform agile analytics experiments against data lake assets to quickly explore new processing methods and tools and then scale the promising ones into production without the need to build new infrastructure duplicate and/or migrate data and have users migrate to a new platform In closing a data lake built on AWS allows you to evolve your business around your data assets and to use these data assets to quickly and agilely drive more business value and competitive differentiation without limits Contributors The following individuals and organizations co ntributed to this document: • John Mallory Business Development Manager AWS Storage • Robbie Wright Product Marketing Manager AWS Storage Document Revisions Date Description July 2017 First publication Archived
|
General
|
consultant
|
Best Practices
|
Building_FaultTolerant_Applications_on_AWS
|
Fault Tolerant Components on AWS Novem ber 2019 This paper has been archived For the latest technical information see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Failures Shouldn’t Be THAT Interesting 1 Amazon Elastic Compute Cloud 1 Elastic Block Store 3 Auto Scaling 4 Failures Can Be Useful 5 AWS Global Infrastructure 6 AWS Regions and Availability Zones 6 High Availability Through Multiple Availability Zones 6 Building Architectures to Achieve High Availability 7 Improving Continuity with Replication Between Regions 7 High Availability Building Blocks 7 Elastic IP Addresses 7 Elastic Load Balancing 8 Amazon Simple Queue Service 10 Amazon Simple Storage Service 11 Amazon Elastic File System and Amazon FSx for Windows File Server 12 Amazon Relational Database Service 12 Amazon DynamoDB 13 Using Serverless Architectures for High Availability 14 What is Serverless? 14 Using Continuous Integration and Continuous Deployment/Delivery to Roll out Application Changes 15 What is Continuous Integration? 15 What is Continuous Deployment/Delive ry? 15 How Does This Help? 
16 Utilize Immutable Environment Updates 16 ArchivedLeverage AWS Elastic Beanstalk 16 Amazon CloudWatch 17 Conclusion 17 Contributors 17 Further Reading 18 Document Revisions 18 ArchivedAbstract This whitepaper provides an introduction to building fault tolerant software systems using Amazon Web Services (AWS) You will learn about the diverse array of AWS services at your disposal including compute storage networking and database solutions By leveraging these solutions you can set up an infrastructure that refreshes automatically helping you to avoid degradations and points of failures The AWS platform can be operated with minimal human interaction and up front financial investment In addition you will learn about the AWS Global Infrastructure an architecture that provides high availability using AWS Regions and Availability Zones This paper is intended for IT managers and system architects looking to deploy or migrate their solutions to the cloud using a platform that provides highly available reliable and fault tolerant systems ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 1 Introd uction Fault tolerance is the ability for a system to remain in operation even if some of the components used to build the system fail Even with very conservative assumptions a busy e commerce site may lose thousands of dollars for every minute it is una vailable This is just one reason why businesses and organizations strive to develop software systems that can survive faults Amazon Web Services (AWS) provides a platform that is ideally suited for building fault tolerant software systems The AWS platfo rm enables you to build fault tolerant systems that operate with a minimal amount of human interaction and up front financial investment Failures Shouldn’t Be THAT Interesting The ideal state in a traditional on premises data center environment tends to be one where failure notifications are delivered reliably to a staff of administrators who are ready to take quick and decisive action s in order to solve the problem Many organizations are able to reach this state of IT nirvana however doing so typicall y requires extensive experience up front financial investment and significant human resources Amazon Web Services provides services and infrastructure to build reliable faulttolerant and highly available systems in the cloud As a result potential f ailures can be dealt with automatically by the system itself and as a result are fairly uninteresting events AWS gives you access to a vast amount of IT infrastructure —compute storage networking and databases just to name a few (such as Amazon Elastic Compute Cloud (Amazon EC2) Amazon Elastic Block Store (Amazon EBS) and Auto Scaling )—that you can allocate automatically (or nearly automatically) to account for almost any kind of failure You are charged only for resources that you actually use so there is no up front financial investment Amazon Elastic Compute Cloud Amazon Elastic Compute Cloud (Amazon EC2) provides computing resources literally server instances that you use to build and host your software systems Amazon EC2 is a natural e ntry point to AWS for your application development You can build a highly reliable and fault tolerant system using multiple EC2 instances and ancillary services such as Auto Scaling and Elastic Load Balancing ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 2 On the surface EC2 instances are very simila r to traditional hardware servers EC2 instances use familiar operating systems like Linux or 
Windows As such they can accommodate nearly any kind of software that runs on those operating systems EC2 instances have IP addresses so the usual methods of i nteracting with a remote machine (for example SSH or RDP) can be used The template that you use to define your service instances is called an Amazon Machine Image (AMI) which contains a defined software configuration ( that is operating system applicati on server and applications) From an AMI you launch an instance which is a copy of the AMI running as a virtual server in the cloud You can launch multiple instances of an AMI as shown in the following figure Instance types in Amazon EC2 are essent ially hardware archetypes You choose an instance type that matches the amount of memory (RAM) and computing power (number of CPUs) that you need for your application Your instances keep running until you stop or terminate them or until they fail If an instance fails you can launch a new one from the AMI Amazon publishes many AMIs that contain common software configurations for public use In addition members of the AWS developer community have published their own custom AMIs You can also create your own custom AMI enabl ing you to quickly and easily start new instances that contain the software configuration you need The first step towards building fault tolerant applications on AWS is to decide on how the AMIs will be configured There are two distinct mechanisms to do this dynamic and static A dynamic configuration starts with a base AMI and on launch deploys the software and data required by the application A static configuration deploys the required software and data to the base AMI and then uses this to create an application specific AMI that is used for application deployment Take the following factors into account when deciding to use either a dynamic or static configuration : ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 3 • The frequency of application changes —a dynamic configuratio n offers greater flexibility for frequent application changes • Speed of launch —an application installed on the AMI reduce s the time between launch and when the instance becomes available If this is important then a static configuration minimize s the launc h time • Audit —when an audit trail of the application configuration is required then a static configuration combined with a retention policy for AMIs allow s past configurations to be recreated It is possible to mix dynamic and static configuration s A common pattern is for the application software to be deployed on the AMI while data is deployed once the instance is launched Your application should be comprised of at least one AMI that you have configured To start your application l aunch the required number of instances from your AMI For example if your application is a website or a web service your AMI could include a web server the associated static content and the code for the dynamic pages As a result after you launch an i nstance from this AMI your web server starts and your application is ready to accept requests When the required fleet of instances from the AMI is launched then an instance failure can be addressed by launching a replacement instance that uses the same A MI This can be done through an API invocation scriptable command line tools or the AWS Management Console Additionally an Auto Scaling group can be configured to automatically replace failed or degraded instances The ability to quickly replace a problematic instance is just the first step towards fault 
tolerance With AWS an AMI lets you launch a new instance based on the same template allowing you to quickly recover from failures or problematic behaviors To minimiz e downtime you have the option to keep a spare instance running ready to take over in the event of a failure This can be done efficiently using elastic IP addresses Failover to a replacement instance or (running) spare instance by remapping your elastic IP address to the new instance Elastic Block Store Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances EBS volumes can be attached to a running EC2 instance and can persist independently from the instance EBS v olumes are automatically replicated within an Availability Zone providing high durability and availability along with protection from component failure ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 4 Amazon EBS is especially suited for applications that require a database a file system or access to raw block storage Typical use cases include big data analytics relational or NoSQL databases stream or log processing applications and data warehousing applications Amazon EBS and Amazon EC2 are often used in conjunction with one another when buildin g a fault tolerant application on the AWS platform Any application data that needs to be persisted should be stored on EBS volumes If the EC2 instance fails and needs to be replaced the EBS volume can simply be attached to the new EC2 instance Since th is new instance is essentially a duplicate of the original there should be no loss of data or functionality Amazon EBS volumes are highly reliable but to further mitigate the possibility of a failure backups of these volumes can be created using a fea ture called snapshots A robust backup strategy will include an interval (time between backups generally daily but perhaps more frequently for certain applications) a retention period (dependent on the application and the business requirements for rollba ck) and a recovery plan To ensure high durability for backups of EBS volumes snapshots are stored in Amazon Simple Storage Service (Amazon S3) EBS snapshots are used to create new Amazon EBS volumes which are an exact replica of the original volume at the time the snapshot was taken Because snapshots represent the on disk state of the application care must be taken to flush in memory data to disk before initiating a snapshot EBS snapshots are created and managed using the API AWS Management Console Amazon Data Lifecycle Manager (DLM) or AWS Backup Auto Scaling An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management In the context of a high ly availab le solution using an Auto Scaling group ensure s that an EC2 fleet provides the required capacity The continuous monitoring of the fleet instance health metrics allows for failures to be automatic ally detect ed and for replacement instances to be launched when required Where the required size of the EC2 fleet varies Auto Scaling can adjust the capacity using a number of criteria including scheduled and target ed tracking against the value for a specific metric Multiple scaling criteria can be applie d providing a flexible mechanism to manage EC2 capacity ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 5 The requirements of your application and high availability ( HA) strategy determine s the number of Auto Scaling groups need ed For an 
application that uses EC2 capacity spread across one or more Availability Zones (AZ) then a single Auto Scaling group suffice s Capacity launche s where available and the Auto Scaling group replace s instances as required ; but the placement within selected AZs is arbitrary If the HA strategy requires more precise con trol of the distribution of EC2 capacity deployments then using an Auto Scaling group per AZ is the appropriate solution An example is an application with two instances —production and fail over—that needs to be deployed in separate Availability Zones Us ing two Auto Scaling groups to manage the capacity of each application instance separately ensure s that they do not both have capacity in the same Availability Zone Failures Can Be Useful Software systems degrade over time This is due in part to : • Softwar e leak ing memory and/or resources includ ing software that you wrote and software that you depend on ( such as application frameworks operating systems and device drivers) • File systems fragment ing over time which impact s performance • Hardware ( particular ly storage) devices physically degrad ing over time Disciplined software engineering can mitigate some of these problems but ultimately even the most sophisticated software system depends on a number of components that are out of its control ( such as the operating system firmware and hardware) Eventually some combination of hardware system software and your software will cause a failure and interrupt the availability of your application In a traditional IT environment hardware can be regularly mai ntained and serviced but there are practical and financial limits to how aggressively this can be done However with Amazon EC2 you can terminate and recreate the resources you need at will An application that takes full advantage of the AWS platform c an be refreshed periodically with new server instances This ensures that any potential degradation does not adversely affect your system as a whole Essentially y ou are using what would be considered a failure ( such as a server termination) as a forcing function to refresh this resource ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 6 Using this approach an AWS application is more accurately defined as the service it provides to its clients rather than the server instance(s) it is comprised of With this mindset server instances become immaterial an d even disposable AWS Global Infrastructure To build fault tolerant applications in AWS it is important to understand the architecture of the AWS Global Infrastructure The AWS Global Infrastructure is built around Regions and Availability Zones AWS Regions and Availability Zones An AWS Region is a geographical area of the world Each AWS Region is a collection of data centers that are logically grouped into what we call Availability Zones AWS Regions provide multiple (typically three) physically separated and isolated Availability Zones which are connected with low latency high throughput and highly redundant networking 1 Each AZ consists of one or more physical data centers Availability Zones are designed for physical redundancy and provide resilience enabling uninterrupted performance even in the event of power outages Internet downtime floods and other natural disasters Note: Refer to the Global Infrastructure page for current information about AWS Regions and Avail ability Zones or our interactive map High Availability Through Multiple Availability Zones Availability Zones are connected to each other with 
fast private fiber optic networking enabling you to architect applications that automatically fail over between AZs without interruption These AZs offer AWS customers an easier and more effective way t o design and operate applications and databases making them more highly available fault tolerant and scalable than traditional single data center infrastructures or multi data center infrastructures 1 Asia Pacific (Osaka) is a Local Region with a single AZ that is available to select AWS customers to provide regional redundancy in Japan ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 7 Building Architectures to Achieve High Availability You can achieve high availability by deploying your applications to span across multiple Availability Zones For each application tier ( that is web application and database) placing multiple redundant instances in distinct AZs creates a multi site solu tion Using Elastic Load Balancing (ELB) you get improved fault tolerance as the ELB service automatically balance s traffic across multiple instances in multiple Availability Zones ensuring that only healthy instances receive traffic The desired goal is to have an independent copy of each application stack in two or more AZs with automated traffic routing to healthy resources Improving Continuity with Replication Between Regions In addition to replicating applications and data across multiple data cent ers in the same Region using Availability Zones you can also choose to increase redundancy and fault tolerance further by replicating data between geographic Regions You can do so using both private high speed networking and public internet connections to provide an additional layer of business continuity or to provide low latency access across the globe High Availability Building Blocks Amazon EC2 and its related services provide a powerful yet economic platform upon which to deploy and build your ap plications However they are just one aspect of the Amazon Web Services platform AWS offers a number of other services that can be incorporated into your application development and deployments to increase the availability of your applications Elastic I P Addresses An Elastic IP address is a static public IPv4 address allocated to your AWS account With an Elastic IP address you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account Elastic IPs do not change and remain all ocated to your account until you delete them An Elastic IP address is allocated from the public AWS IPv4 network ranges in a specific region If your instance does not have a public IPv4 address you can associate an Elastic IP address with your instance to enable communication with the internet ; for ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 8 example to connect to your instance from your local computer Elastic IP addresse s are mapped via an Internet Gateway to the private address of the instance Once you associate an Elastic IP address with an i nstance it remains associated until you remove the association or associate the address with another resource Elastic IP addresse s are one method for handling failover especially for legacy type applications that cannot be scaled horizontally In the ev ent of a failure of a single server with an associated Elastic IP address the failover mechanism can re associate the Elastic IP address to a replacement instance ideally in an automated fashion While this scenario may experience downtime for the applic 
ation the time may be limited to the time it takes to detect the failure and quickly re associate the Elastic IP address to the replacement resource Where higher availability levels are required you can use multiple instances and an Elastic Load Balance r Elastic Load Balancing Elastic Load Balancing is an AWS service that automatically distributes incoming application traffic across multiple targets such as Amazon EC2 instances containers IP addresses and Lambda functions and ensures only healthy t argets receive traffic It can handle the varying load of your application traffic in a single Availability Zone or across multiple AZs and supports the ability to load balance across AWS and on premises resources in the same load balancer Elastic Load B alancing offers three types of load balancers that all feature the high availability automatic scaling and robust security necessary to make your applications fault tolerant Application Load Balancer The Application Load Balancer is best suited for load balancing HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures including microservices and containers Operating at the individual request level (Layer 7) the Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 9 Network Load Balancer Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP) User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required Operating at the connection level (Layer 4) Network Load Balancer routes traffic to targets within Amazon VPC and is capable of handling millions of requests per second while maintaining ultra low latencies Network Load Balancer is also optimized to handle sudden and volatile traffic patterns Benefits of Using Elastic Load Balancing • Highly available —Elastic Load Balancing automatically distributes incoming traffic across multiple targets —Amazon EC2 instances containers IP addresses and Lambda functions —in multiple AZs and ensures only healthy targets receive traffic The Amazon Elastic Load Balancing Service Level Agreement commitment is 9999% a vailability for a load balancer • Secure —Elastic Load Balancing works with Amazon VPC to provide robust security features including integrated certificate management user authentication and SSL/TLS decryption Together they give you the flexibility to centrally manage TLS settings and offload CPU intensive workloads from your applications • Elastic —Elastic Load Balancing is capable of handling rapid changes in network traffic patterns Additionally deep integration with Auto Scaling ensures sufficient ap plication capacity to meet varying levels of application load without requiring manual intervention • Flexible —Elastic Load Balancing also allows you to use IP addresses to route requests to application targets This offers you flexibility in how you virtua lize your application targets allowing you to host more applications on the same instance This also enables these applications to have individual security groups and use the same network port to further simplify inter application communication in microse rvice based architecture • Robust monitoring & auditing —Elastic Load Balancing allows you to monitor your applications and their performance in real time with Amazon CloudWatch metrics logging and 
request tracing This improves visibility into the behavio r of your applications uncovering issues and identifying performance bottlenecks in your application stack at the granularity of an individual request ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 10 • Hybrid load balancing —Elastic Load Balancing offers ability to load balance across AWS and on premises resources using the same load balancer This makes it easy for you to migrate burst or failover on premises applications to the cloud Amazon Simple Queue Service Amazon Simple Queue Service ( Amazon SQS) is a fully managed message queuing service that en ables you to decouple and scale microservices distributed systems and serverless applications Amazon SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware and empowers developers to focus on diffe rentiating work Using Amazon SQS you can send store and receive messages between software components at any volume without losing messages or requiring other services to be available Messages are stored in queues that you create Each queue is define d as a URL so it can be accessed by any server that has access to the Internet subject to the Access Control List (ACL) of that queue Use Amazon SQS to ensure that your queue is always available; any messages that you send to a queue are retained for up to 14 days SQS offers two types of message queues Standard queues offer maximum throughput besteffort ordering and at least once delivery with best effort ordering SQS FIFO queues offer high throughput and are designed to guarantee th at messages are processed exactly once in the exact order that they are sent Using Amazon SQS with Other AWS Infrastructure Web Services Amazon SQS message queuing can be used with other AWS Services such as Amazon Redshift Amazon DynamoDB Amazon Relat ional Database Service (Amazon RDS) Amazon EC2 Amazon Elastic Container Service (Amazon ECS) AWS Lambda and Amazon S3 to make distributed applications more scalable and reliable Common design patterns include: • Work Queues —Decouple components of a dis tributed application that may not all process the same amount of work simultaneously • Buffer and Batch Operations —Add scalability and reliability to your architecture and smooth out temporary volume spikes without losing messages or increasing latency • Request Offloading —Move slow operations off of interactive request paths by enqueuing the request ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 11 • Fanout —Combine SQS with Simple Notification Service (SNS) to send identical copies of a message to multiple queues in parallel • Priority —Use separate queues to provide prioritization of work • Scalability —Scale up the send or receive rate of messages by adding another process since message queues decouple your processes • Resiliency —Continue adding messages to the queue even if a process that is reading messages from the queue fails; once the system recovers the queue can be processed since message queues decouple components of your system Amazon Simple Storage Service Amazon S3 is an object storage service that provides highly durable secure fault tolerant data storage AWS is responsible for maintaining availability and fault tolerance; you simply pay for the storage that you use Data is stored as objects within resources called buckets and a single object can be up to 5 terabytes in size Behind the scenes Amazon S3 stores objects redundantly on multiple devices across 
multiple facilities in an AWS Region —so even in the rare case of a failure in an AWS data center you will still have access to your data Amazon S3 is designed for 99999999999% (11 9's) of durability and stores data for millions of applications for companies globally Amazon S3 is ideal for any kind of object data storage requirements that your application might have Amazon S3 can be accessed using the AWS Management Console by a U RL through a Command Line Interface (CLI) or via API using an SDK with your programming language of choice The versioning feature in Amazon S3 allows you to retain prior versions of objects stored in S3 and also protects against accidental deletions initiated by a misbehaving application Versioning can be enabled for any of your S3 buckets You can also use either S3 Cross Region Repli cation (CRR) to replicate objects in another region or Same Region Replication (SRR) to replicate objects in the same AWS Region for reduced latency security disaster recovery and other use cases In addition to providing highly available storage Amazon S3 provides multiple storage classes to help reduce storage costs while still providing high availability and durability Using S3 Lifecycle policies objects can be transferred to lower cost storage If you are unsure of your data access patterns you can select S3 Intelligent Tiering which automatically move s your data based on changing access patterns ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 12 By using Amazon S3 you can delegate the responsibility of one critical aspect of fault tolerance —data storage —to AWS Amazon Elastic File System and Amazon FSx for Windows File Server While Amazon S3 is ideal for applications that can access data as objects many applications store and access data as files Amazon Elastic File System (Amazon EFS) and Amazon FSx for Windows File Server (Amazon FSx) are fully managed AWS services that provide file based storage for applications Amazon EFS provides a simple scalable elastic file sys tem for Linux based workloads File systems grow and shrink on demand and can scale to petabytes of capacity Amazon EFS is a regional service storing data within and across multiple Availability Zones for high availability and durability Applications tha t need access to shared storage from multiple EC2 instances can store data reliably and securely on Amazon EFS Amazon FSx provides a fully managed native Microsoft Windows file system so you can move your Windows based applications that require file stora ge to AWS With Amazon FSx you can launch highly durable and available Windows file systems that can be accessed from up to thousands of application instances Amazon FSx is highly available within a single AZ For applications that require additional lev els of availability Amazon FSx supports the use of Distributed File System (DFS) Replication to enable multi AZ deployments Using either Amazon EFS or Amazon FSx you can provide highly available fault tolerant file storage to your applications running in AWS Amazon Relational Database Service Amazon Relational Database Service guides you in the setup operat ion and scal ing a relational database in the cloud It provides cost efficient and resizable capacity while automating time consuming administrati on tasks such as hardware provisioning database setup patching and backups It frees you to focus on your applications so you can give them the fast performance high availability security and compatibility they need Amazon RDS is available on severa l 
database instance types and is optimized for memory performance or I/O You can choose from six familiar database engines ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 13 including Amazon Aurora PostgreSQL MySQL MariaDB Oracle Database and SQL Server Amazon RDS has many features that enhance reliability for critical production databases including automated backups database snapshots and automatic host replacement • Administration —Go from project conception to deployment using the Amazon RDS Management Console the AWS RDS CLI or API calls to access the capabilities of a production ready relational database in minutes No need for infrastructure provisioning or installing and maintaining database software • Scalability —Scale your database compute and storage resources using the console or an API call often with no downtime Many Amazon RDS engine types allow you to launch one or more Read Replicas to offload read traffic from your primary database instance • Availability —Run on the same highly reliable infrastructure used by other Amazo n Web Services Use Amazon RDS for replication to enhance availability and reliability for production workloads across Availability Zones Us e the Multi AZ deployment option to run mission critical workloads with high availability and builtin automated fa ilover from your primary database to a synchronously replicated secondary database • Security —Control network access to your database by running your database instances in an Amazon VPC which enables you to isolate your database instances and to connect t o your existing IT infrastructure through an industry standard encrypted IPsec VPN Many Amazon RDS engine types offer encryption at rest and encryption in transit • Cost —Pay for only the resources you actually consume with no up front or long term commitments You have the flexibility to use on demand resources or utilize our Reserved Instance pricing to further reduce your costs Amazon DynamoDB Amazon DynamoDB is a key value and document database that delivers single digit millisecond performance at any scale It's a fully managed multi region multi master database with built in security backup and restore and in memory caching for internet scale applications ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 14 Amazon DynamoDB is purpose built for mission critical workloads DynamoDB helps secur e your data with encryption at rest by default and continuously backs up your data for protection with guaranteed reliability through a service level agreement Point in time recovery (PITR) helps protect DynamoDB tables from accidental write or delete operations PITR provides continuous backups of your DynamoDB table data and you can restore that table to any point in time up to the second during the preceding 35 days With Amazon DynamoDB there are no servers to provision patch or manage and no software to install maintain or operate DynamoDB automatically scales tables to adjust for capacity and maintains performance with zero administration Availability and fault tolerance are built in eliminating the need to architect your applications for t hese capabilities Using Serverless Architectures for High Availability What is Serverless? 
Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS increasing your agility and innovatio n Serverless allows you to build and run applications and services without thinking about servers It eliminates infrastructure management tasks such as server or cluster provisioning patching operating system maintenance and capacity provisioning Ser verless provides built in availability and fault tolerance You don't need to architect for these capabilities since the services running the application provide them by default Central to many serverless designs is AWS Lambda AWS Lambda automatically runs your code on highly available fault tolerant infrastructure spread across multiple Availability Zones in a single region without requiring you to provision or manage servers With Lambda you can run code for virtually any type of application or backend service no administration Upload your code and Lambda will run and scale your code with high availability You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app AWS Lambda automatically scales your application by running code in response to each trigger Your code runs in parallel and processes each trigger individually scaling precisely with the size of the workload In addition to AWS Lambda other AWS ser verless technologies include: ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 15 • AWS Fargate —a serverless compute engine for containers • Amazon DynamoDB —a fast and flexible NoSQL database • Amazon Aurora Serverless —a MySQL compatible relational database • Amazon API Gateway —a service to create publish monitor and secure APIs • Amazon S3—a secure durable and highly scalable object storage • Amazon Elastic File System —a simple scalable elastic file storage • Amazon SNS—a fully managed pub/sub messaging service • Amazon SQS —a fully managed message queuing service Note: While a full discussion on Serverless capabilities is outside the scope of this paper you may find additional information about Serverless Computing on our website Using Continuous Integration and Continuous Deployment/Delivery to Rollout Application Changes What is Continuous Integration? Continuous integration (CI) is a software development practice where developers regularly merge their code change s into a central repository after which automated builds and tests are run What is Continuous Deployment/Delivery? Continuous deployment/ delivery (CD) is a software development practice where code changes are automatically built tested and prepared for production release It expands on continuous integration by deploying all code changes to a testing environment a production environment or both after the build stage has been completed Continuous delivery can be fully automated with a workflow process or partially automated with manual steps at critical points ArchivedAmazon Web Services Fault Tolerant Components on AWS Page 16 How Does This Help? 
Continuous integration and continuous deployment/delivery tools remove the human factor from rolling out application changes and instead automate as much as possible. Before CI/CD tools, scripts were used that required manual intervention or a manual kick-off process. Many deployments occurred during weekends to minimize potential disruption to the business and could be quickly rolled back if issues arose. Deployment steps were usually documented in runbooks.

AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy are among the CI/CD services that DevOps teams use to deploy applications or application changes in their environment. For example, a single pipeline can roll out application changes in one region and, if successful, the same pipeline rolls out the changes in other regions. With a streamlined CI/CD pipeline, developers can deploy application changes that are transparent to end users. These pipelines can be leveraged to perform multi-region deployments or to quickly deploy a bug fix. If a fault occurs in one environment, users can be redirected to another environment (or region) and updates can be rolled out to the faulty environment. Once the fault has been addressed, you can redirect users back to the original environment.

Utilize Immutable Environment Updates

An immutable environment is a type of infrastructure in which resources (that is, servers) are never modified once they have been deployed. Typically these servers are built from a common image (such as an Amazon Machine Image). The benefit of this type of environment is increased reliability, consistency, and a more predictable environment. In AWS this can be achieved by creating the infrastructure using AWS CloudFormation or the AWS Cloud Development Kit (CDK).

Leverage AWS Elastic Beanstalk

With AWS Elastic Beanstalk you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. After you upload your application, Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Amazon CloudWatch

Amazon CloudWatch is a fully managed monitoring service for AWS resources and the applications that you run on top of them. You can use Amazon CloudWatch to collect and store metrics on a durable platform that is separate and independent from your own infrastructure. You can use these metrics to measure performance and response times, and you can also capture custom metrics for your applications. These metrics can be used to drive additional actions, such as triggering Auto Scaling, fanning out notifications, or starting automated tasks. To capture custom metrics, you publish your own metrics to CloudWatch through a simple API request.
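As an illustration of that API request, the following sketch publishes a custom application metric with the AWS SDK for Python (boto3). The namespace, metric name, and dimension values are hypothetical; the same call can be made from any AWS SDK or from the AWS CLI.

```python
import boto3

# A minimal sketch of publishing a custom application metric to CloudWatch.
# Namespace, metric name, and dimension values are illustrative placeholders.
cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApplication",
    MetricData=[
        {
            "MetricName": "FailedLoginAttempts",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 3,
            "Unit": "Count",
        }
    ],
)
```

A CloudWatch alarm on such a metric could then trigger an Auto Scaling policy, an SNS notification, or an automated remediation task, as described above.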
Conclusion

Amazon Web Services provides services and infrastructure to build reliable, fault tolerant, and highly available systems in the cloud. Services that provide basic infrastructure, such as Amazon EC2 and Amazon EBS, offer specific features such as Availability Zones, Elastic IP addresses, and snapshots. In particular, Amazon EBS provides durable block storage for applications running on EC2, and an Auto Scaling group ensures your Amazon EC2 fleet operates at the required capacity, automatically detects failures, and replaces instances as needed. Higher-level building blocks such as Amazon S3 provide highly scalable, globally accessible object storage with 11 9s of durability. For durable, fault tolerant file storage for applications running in AWS, you can use Amazon EFS and Amazon FSx for Windows. The wide spectrum of building blocks available gives you the flexibility and capability to set up the reliable and highly available environment you need, and to pay only for the services you consume.

Contributors

Contributors to this document include:
• Jeff Bartley, Solutions Architect, Amazon Web Services
• Lewis Foti, Solutions Architect, Amazon Web Services
• Bert Zahniser, Solutions Architect, Amazon Web Services
• Muhammad Mansoor, Solutions Architect, Amazon Web Services

Further Reading

Amazon API Gateway, Amazon Aurora, Amazon Aurora Serverless, Amazon CloudWatch, Amazon DynamoDB, Amazon Elastic Block Store, Amazon Elastic Compute Cloud, Amazon Elastic Container Service, Amazon Elastic File System, Amazon FSx for Windows File Server, Amazon Machine Image, Amazon Redshift, Amazon Relational Database Service, Amazon Simple Notification Service, Amazon Simple Queue Service, Amazon Simple Storage Service, Amazon Virtual Private Cloud, AWS Auto Scaling, AWS Cloud Development Kit, AWS CloudFormation, AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline, AWS Command Line Interface, AWS Elastic Beanstalk, AWS Fargate, AWS Global Infrastructure Regions and Availability Zones, AWS Lambda, AWS Management Console, Elastic IP Addresses, Elastic Load Balancing, Serverless

Document Revisions

November 2019: Refreshed the paper, removing outdated references and adding newer AWS services not previously available.
October 2011: First publication.
|
General
|
consultant
|
Best Practices
|
Building_Media__Entertainment_Predictive_Analytics_Solutions_on_AWS
|
Building Media & Entertainment Predictive Analytics Solutions on AWS First published December 2016 Updated March 30 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2021 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Overview of AWS Enabled M&E Workloads 1 Overview of the Predictive Analytics Process Flow 3 Common M&E Predictive Analytics Use Cases 6 Predictive Analytics Archi tecture on AWS 8 Data Sources and Data Ingestion 9 Data Store 13 Processing by Data Scientists 14 Prediction Processing and Serving 22 AWS Services and Benefits 23 Amazon S3 23 Amazon Kinesis 24 Amazon EMR 24 Amazon Machine Learning (Amazon ML) 25 AWS Data Pipeline 25 Amazon Elastic Compute Cloud (Amazon EC2) 25 Amazon CloudSearch 26 AWS Lambda 26 Amazon Relational Database Service (Amazon RDS) 26 Amazon DynamoD B 26 Conclusion 27 Contributors 27 Abstract This whitepaper is intended for data scientists data architects and data engineers who want to design and build Media and Entertainment ( M&E ) predictive analytics solutions on AWS Specifically this paper provide s an introduction to common cloud enabled M&E workloads and describes how a predictive analytics workload fits into the overall M&E workflows in the cloud The paper provide s an overview of the main phases for the predictive analytics business process as well as an overview of comm on M& E predictive analytics use case s Then the paper describes the technical reference architecture and tool options for implementing predictive analytics solutions on AWS Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 1 Introduction The world of Media and Entertainment (M&E) has shifted from treating custo mers as mass audiences to forming connection s with individuals This progression was enabled by unlocking insights from data generated through new distribution platforms and web and social networks M&E companies a re aggressivel y moving from a traditional mass broadcasting business model to an Over The Top (OTT) model where relevant data can be gathered In this new model they are embracing the challenge of acquiring enriching and retaining customers through big data and predictive analytic s solutions As cloud technology adoption becomes mainstream M&E companies are moving many analytics workload s to AWS to achieve ag ility scale lower cost rapid innovation and operational efficiency As these companies start their journey to the cloud they have questions about c ommon M&E use case s and how to design build and operate these solutions AWS provides many services i n the data and analytics space that are well suited for all M&E analytics workloads including traditional BI reporting real time analytics and predictive analytics In this paper we discuss the approach to architecture and tools We’ll cover design build and operate aspects of predictive 
analytics in subsequent papers Overview of AWS Enabled M&E Workloads M&E c ontent producers have traditionally relied heavily on systems located on premise s for production and post production workloads Content produ cers are increasingly looking into the AWS Cloud to run workloads This is d ue to the huge increase in the volume of content from new business models such as on demand and other online delivery as well as new content formats such as 4k and high dynamic r ange ( HDR ) M&E customers deliver live linear on demand and OTT content with the AWS Cloud AWS services also enable media partners to build solutions across M&E lines of business Examples include: • Managing digital assets • Publishing digital content • Automating media supply chains • Broadcast master control and play out Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 2 • Streamlining content distribution to licensees • Affiliates (business to business B2B) • Direct to consumer ( business to consumer B2C) channels • Solutions for content and customer analytics using real time data and machine learning Figure 1 is a diagram that shows a typical M&E workflow with a brief description of each area Figure 1 — Cloud enabled M&E workflow Acquisition — Workloads that capture and ingest media contents such as videos audio and images into AWS VFX & NLE — Visual Effects (VFX) and nonlinear editing system (NLE) workloads that allow editing of im ages for visual effects or nondestruc tive editing of video and audio source files DAM & Archive — Digital asset management (DAM) and archive solutions for the management of media assets Media Supply Chain — Workloads that manage the process to deliver digital asset s such as video or music from the point of origin to the destinat ion Publishing — Solutions for media contents publishing OTT — Systems that allow the delivery of aud io content and video content over the Internet Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 3 Playout & Distribution — Systems that support the transmission of media contents and channels into the broadcast network Analytics — Solutions that provide business intelligence and predictive analytics capabilities on M&E data Some typical domain questions to be answered by the analytics solutions are: How do I segment my customers for email campaign? What videos should I be promoting at the top of audiences OTT/VOD watchlists? Who is at risk of cancelling a subscription? What ads can I target mid roll to maximize audience engagement? What is the aggregate trending sentiment regarding titles brands prop erties and talents across social media and where is it headed? 
Overview of the Predictive Analytics Process Flow There are two main categories of analytics : business and predictive Business analytics focus on reporting metrics for historical and real time data Predictive analytics help predict future events and provide estimations by applying predictive modeling that is based on historical and real time data This paper will only cover predictive analytics A predictive analytics initiative involves man y phases and is a highly iterative process Figure 2 shows some of the main phases in a predictive analytics project with a brief description of each phase Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 4 Figure 2 — Cross industry standards process for data mining 1 Business Understanding — The main objective of this phase is to develop an understanding of the business goals and t hen t ranslate the goals into predictive analytics objectives For the M&E industry examples of business goals could include increasing con tent consumption by existing customers or understanding social sentiment toward contents and talents to assist with new content development The associated predictive analytics goals could also include personalized content recommendations and sentiment a nalysis of social data regarding contents and talents 2 Data Understanding — The goal of this phase is to consider the data required for predictive analytics Initial data collection exploration and quality assessment take place during this phase To dev elop high quality models the dataset needs to be relevant complete and large enough to support model training Model training is the process of training a machine learning model by providing a machine learning algorithm with training data to learn from Some relevant datasets for M&E use case s are customer information/profile data content viewing history data content rating data social engagement data customer behavioral data content subscription data and purchase data Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 5 3 Data Preparation — Data preparation is a critical step to ensure that highquality predictive models can be generated In this phase the data required for each modeling process is selected Data acquisition mechanisms need to be created Data is integrated formatted transformed and enriched for the modeling purpose Supervised machine learning algorithms require a labeled training dataset to generate predictive models A labeled training dataset has a target prediction variable and other dependent data attributes or features The quality of the training data is often considered more important than the machine learning algorithms for performance improvement 4 Modeling — In this phase the appropriate modeling techniques are selected for different modeling and business objec tives For example : o A clustering technique could be employed for customer segmentation o A binary classification technique could be used to analyze customer churn o A collaborative filtering technique could be applied to content recommendation The perform ance of the model can be evaluated and tweaked using technical measures such as Area Under Curve (AUC) for binary classification (Logistic Regression) Root Mean Square (RMSE) for collaborative filtering (Alternating Least Squares) and Sum ofSquared Error (SSE) for clustering (K Means) Based on the initial evaluation of the model result the model setting s can be revised and fine tuned by going back to the data preparation stage 5 
Model Evaluation — The generated models are formally evaluated in this phase not only in terms of technical measures but also in the context of the business success criteria set out during the business understanding phase If the model properly addresses the initial business objectives it can be approved and prepared for deployment 6 Deployment — In this phase the model is deployed into an environment to generate predictions for future events Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 6 Common M&E Predictive Analyt ics Use Cases To a certain extent some of the predictive analytics use case s for the M&E industry do not differ much from other industries The following are common use case s that apply to the M&E industry Customer segmentation — As the engagement betw een customers and M&E companies become s more direct across different channels and as more data is collected on those engagements appropriate segmentation of customers becomes increasingly important Customer relationship management (CRM) strategies incl uding customer acquisition customer development and customer retention greatly rely upon such segmentation Although customer segmentation can be achieved using basic business rules it can only efficiently handle a few attributes and dimensions A dat adriven segmentation with a predictive modeling approach is more objective and can handle more complex datasets and volumes Customer segmentation solution s can be implemented by leveraging clustering algorithms such as k means which is a type of unsup ervised learning algorithm A clustering algorithm is used to find natural clusters of customers based on a list of attributes from the raw customer data Content recommendation — One of the most widely adopted predictive analytics by M&E companies this type of analytics is an importan t technique to maintain customer engagement and increase content consumption Due to the huge volume of available content customers need to be guided to the content they might find most interesting Two comm on algorithms leveraged in recommendation solutions are content based filtering and collaborative filtering • Content based filtering is based on how similar a particular item is to other items based on usage and rating The model uses the content attribut es of items (categories tags descriptions and other data) to generate a matrix of each item to other items and calculates similarity based on the ratings provided Then the most similar items are listed together with a similarity score Items with the highest score are most similar Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 7 • Collaborative filtering is based on making predictions to find a specific item or user based on similarity with other items or users The filter applies weights based on peer user preferences The assumption is users who di splay similar profile or behavior have similar preferences for items More advanced recommendation solutions can leverage deep learning techniques for better performance One example of this would be using Recurrent Neural Networks (RNN) with collaborative filtering by predicting the sequence of items in previous streams such as past purchases Sentiment analysis — This is the process of categorizing words phrases and other contextual information into subjective feelings A common outcome fo r sentiment analysis is positive negative or neutral sentiment Impressions publicized by consumers can be a val uable source of insight into 
the opinions of broader audiences These insights when employed in real time can be used to significantly enhan ce audience engagement Insights can also be used with other analytic learnings such as customer segmentation to identify a positive match between an audience segment and associated content There are many tools to analyze and identify sentiment and many of them rely on linguistic analysis that is optimized for a specific context From a machine learning perspective one traditional approach is to consider sentiment analysis as a classification problem The sentiment of a document sentence or word is cl assified with positive negative or neutral labels In general the algorithm consists of tokenization of the text feature extraction and classification using different classifiers such as linear classifiers (eg Support Vector Machine Logistic Regre ssion) or probabilistic classifiers (eg Naïve Bayes Bayesian Network) However this traditional approach lacks recognition for the structur e and subtleties of written language A more advanced approach is to use deep learning algorithm s for sentiment analysis You don’t need to provide these models with predefined features as the model can learn sophisticated features from the dataset The words are represented in highly dimensional vectors and features are extracted by the neural netwo rk Examples of deep learning algorithms that can be used for sentiment analysis are Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) MXNet Tensorflow and Caffe are some deep learning frameworks that are well suited for RNN and CNN model training AWS makes it easy to get started with these frameworks by providing an Amazon Machine Image (AMI) that includes these frameworks preinstalled This AMI can be run on a large number of instance types including the P2 instances that provide general Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 8 purpose GPU processing for deep learning applications The Deep Learning AMI is available in the AWS Marketplace Churn prediction — This is the identification of customers who are at risk of no longer being customers Churn prediction helps to identify where to deploy retention resources most effectively The data used in churn prediction is generally user activity data related to a specif ic service or content offering This type of analysis is generally solved using a logistic regression with a binary classification The binary classification is designated as customer leave predicted or customer retention predicted Weightings and cutoff values can be used with predictive models to tweak the sensitivity of predictions to minimize false positives or false negatives to optimize for business objectives For example Amazon Machine Learning (Amazon ML) has an input for cutoff and sliders for precision recall false positive rate and accuracy Predictive Analytics Architecture on AWS AWS includes the components needed to enable pipelines for predictive analytics workflows There are many viable architectural patterns to effectively compute pr edictive analytics In this section we discuss some of the technology options for building predictive analytics architecture on AWS Figure 3 shows one such conceptual architecture Figure 3 — Conceptual reference architecture Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 9 Data Sources and Data Ingestion Data collection and ingestion is the first step and one of the most important technical architecture 
components to the overall predictive analytics architecture At a high level the main s ource data required for M&E analytics can be classified into the following categories • Dimension data — Provides structured labeling information to numeric measures Dimension data is mainly used for grouping filtering and labeling of information Exampl es of dimension data are customer master data demographics data transaction or subscription data content metadata and other reference data These are mostly structured data stored in relational databases such as CRM Master Data Management (MDM) or Digital Asset Management (DAM) databases • Social media d ata — Can be used for sentiment analysis Some of the main social data sources for M&E are Twitter YouTube and Facebook The data could encompass content ratings reviews social sharing tagging and bookmarking events • Event data — In OTT and online media examples of event data are audience engagement behaviors with st reaming videos such as web browsing patterns searchi ng events for content video play/watch/stop events and device data These are mostly real time click streaming data from website s mobile apps and OTT players • Other relevant data — Includes data from aggregators (Nielson comS core etc) advertising response data customer contacts and service case data There are two main modes of data ingestion into AWS : batch and streaming Batch Ingestion In this mode data is ingested as files (eg database extracts) following a specified schedule Data ingestion approaches include the following Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 10 • Third party applications — These applications have connector integration with Amazon Simple Storage Service (Amazon S3) object storage that can ingest data into Amazon S3 buckets The applications can either take source files or extract data from the source database directly and store them in Amazon S3 There are commercial products (eg Informatica Talend) and open source utilities ( eg Embulk ) that can extract data from databases and export the data into a n Amazon S3 bucket directly • Custom applications using AWS SDK/APIs — Custom applications can use AWS SDKs and the Amazon S3 application programming interface (API) to ingest data into target Amazon S3 buckets The SDKs and API also support multipart upload for faster data transfer to Amazon S3 buckets • AWS Data Pipeline — This service facilitates moving data between different sources including AWS services AWS Data Pipeline launch es a task runner that is a Linux based Amazon Elastic Compute Cloud (Amazon EC2) instance which can run scripts and commands to move data on a n event based or scheduled basis • Command line interface (CLI) — Amazon S3 also provides a CLI for interacting and ingesting data into Amazon S3 buckets • File synchronization utilities — Utilities such as rsynch and s3synch can keep source data directories in sync with Amazon S3 buckets as a way to move files from source locations to Amazon S3 buc kets Streaming Ingestion In this mode data is ingested in streams (eg clickstream data) Architecturally there must be a streaming store that accepts and stores streaming data at scale and in real time Additionally data collectors that collect dat a at the sources are needed to send data to the streaming store • Stream ing stores — There are various options for the streaming stores Amazon Kinesis Stream s and Amazon Kinesis Firehose are fully managed stream ing stores Streams and Firehose also provide SDKs and 
APIs for programmatic integration Alternatively open source platforms such as Kafka can be installed and configured on EC2 clusters to manage streaming data ingestion and storage Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 11 • Data collector s — These can be web mobile or OTT appl ications that send data directly to the streaming store or collector agents running next to the data sources (eg clickstream logs) that send data to the streaming store in real time There are several options for the data collectors Flume and Flentd are two open source data collectors that can collect log data and send data to streaming stores An Amazon Kinesis agent can be used as the data collector for Streams and Firehose One common practice is to ingest all the input data into staging Amazon S3 buckets or folders first perform further data processing and then store the data in target Amazon S3 buckets or folders Any data processing related to data quality (eg data completeness invalid data) should be handled at the sources when possible and is not discussed in this document During this stage the following data processing might be needed • Data transformatio n — This could be transformation of source data to the defined common standards For example breaking up a single name field into first name middle name and last name field s • Metadata extraction and persistence — Any metadata associated with input files s hould be extracted and stored in a persistent store This could include file name file or record size content description data source information and date or time information • Data enrichment — Raw data can be enha nced and refined with additional infor mation For example you can enrich source IP addresse s with geographic data • Table schema creation and maintenance — Once the data is processed into a target structure you can create the schemas for the target systems File Formats The various file formats have tradeoffs regarding compatibility storage efficiency read performance write performance and schema extensibility In the Hadoop ecosystem there are many variations of file based data stores The following are some of the more common ones i n use Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 12 • Comma Separated Values (CSV) — CSV typically the lowest common denominator of file formats excels at providing tremendous compatibility between platforms It’s a common format for going into and out of the Hadoop ecosystem This file type can be easily inspected and edited with a text editor which provides flexibility for ad hoc usage One drawback is poor support for compression so the files tend to take up more storage space than some other available formats You should also note that CSV sometimes has a header row with column names Avoid using this with machine learning tools because it inhibits the ability to arbitrarily split files • JavaScript Object Notation (JSON) — JSON is similar to CSV in that text editors can consume this format easily JSON records can be stored using a delimiter such as a newline character as a demarcation to split large data sets across multiple files However JSON files include some additional metadata whereas CSV files typically do not when used in Hadoop JSON files with one record should be avoided because this would generally result in too many small files • Apache Parquet — A columnar storage format that is integrated into much of the Hadoop ecosystem Parquet allows for compression schemes to 
be specified on a per column level This provides the flexibility to take advantage of compression in the right places without the penalty of wasted CPU cycles compressi ng and de compressing data that doesn’t need compressing Parquet is also flexible for encoding columns Selecting the right encoding mechanism is also important to maximize CPU utiliz ation when reading and writing data Because of the columnar format Parquet can b e very efficient when processing jobs that only require reading a subset of columns However this columnar format also comes with a write penalty if your processing includes writes • Apache Avro — Avro can be used as a file format or as an object format that is used within a file format such as Parquet Avro uses a binary data format requiring less space to represent the same data in a text format This results in lower processing demands in terms of I/O and memory Avro also has the advantage of being compressible further reducing the storage size and increasing disk read performance Avro includes schema data and data that is defined in JSON while still being persisted in a binary format The Avro data format is flexible and expressive allowin g for schema evolution and support for more complex data structures such as nested types Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 13 • Apache ORC — Another column based file format designed for high speed within Hadoop For flat data structures ORC has the advantage of being optimized for reads tha t use predicates in WHERE clauses in Hadoop ecosystem queries It also compresses quite efficiently with compression schemes such as Snappy Zlib or GZip • Sequence files — Hadoop often uses sequence files as temporary files during processing steps of a M apReduce job Sequence files are binary and can be compressed to improve performance and reduce required storage volume Sequence files are stored row based with sync ending markers enabling splitting However any edits will require the entire file to be rewritten Data Store For the data stor e portion of your solution you need storage for the data derived data lake schemas and a metadata data catalogue As part of that a critical decision to make is the type or types of data file formats you will pr ocess Many types of object models and storage formats are used for machine learning Common storage locations include databases and files From a storage perspective Amazon S3 is the preferred storage option for data science proces sing on AWS Amazon S3 provides highly durable storage and seamless integration with various data processing services and machine learning platforms on AWS Data Lake Schemas Data lake schema s are Apache HIVE tables that supp ort SQLlike data querying using Hadoop based query tools such as Apache HIVE Spark SQL and Presto Data lake schemas are based on the schema onread design which means table schemas can be created after the source data is already loaded into the data store A data lake schema uses a HIVE metastor e as the schema repository which can be accessed by different query engines In addition t he tables can be created and managed using the HIVE engine directly Metadata Data Catalogue A metadata data catalogue contain s information about the data in the data store It can be loosely categorized into three areas: technical operational and business Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 14 • Technical metadata refers to the forms and structure of the data In addition 
to data types technical metadata can also contain information about what data is valid and the data’s sensitivity • Operational metadata captures information such as the source of the data time of ingestion and what ingested the data Operat ional metadata can show data lineage movement and transformation • Business metadata provides labels and tags for data with business level attribute s to make it easier for someone to search and brows e data in the data store There are different options to process and store metadata on AWS One way is to trigger AWS Lambda functions by using Amazon S3 events to extract or derive metadata from the input files and store metadata in Amazon DynamoDB Processing by Data Scien tists When all relevant data is available in the data store data scientists can perform offline data exploration and model selection data preparation and model training and generation based on the defined business objectives The following solutions were selected because they are ideal for handling the large amount of data M&E use case s generate Interactive Data Exploration To develop the data understanding needed to support the modeling process data scientists often must explore the available datasets and determine their usefulness This is normally an interactive and iterative process and require s tools that can query data quickly across massive amount s of datasets It is also useful to be able to visualize the data with graphs charts and maps Table 1 provides a list of data exploration tools available on AWS followed by some specific examples that can be used to explore the data interactively Table 1: Data exploration tool options on AWS Query Style Query Engine User Interface Tools AWS Services SQL Presto AirPal JDBC/ODBC Clients Presto CLI EMR Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 15 Query Style Query Engine User Interface Tools AWS Services Spark SQL Zeppelin Spark Interactive Shell EMR Apache HIVE Apache HUE HIVE Interactive Shell EMR Programmatic R/SparkR (R) RStudio R Interactive Shell EMR Spark(PySpark Scala) Zeppelin Spark Interactive Shell EMR Presto on Amazon EMR The M&E datasets can be stored in Amazon S3 and are accessible as external HIVE tables An external Amazon RDS database can be deployed for the HIVE metastore data Presto running in an Amazon EMR cluster can be used to run interactive SQL queries against the data sets Presto supports ANSI SQL so you can run complex quer ies as well as aggregation against any dataset size from gigab ytes to petabytes Java Database Connectivity ( JDBC ) and Open Database Connectivity ( ODBC ) drivers support connections from data vis ualization tools such as Qlikview Tableau and Presto for rich data visualization Web tools such as AirPal provide an easy touse web front end to run Presto queries directly Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 16 Figure 4 — Data exploration with Presto Apache Zeppelin with Spark on EMR Another tool for data exploration is Apache Zeppelin notebook with Spark Spark is a general purpose cluster computing system It provides high level APIs fo r Java Python Scala and R Spark SQL an in memory SQL engine can integrate with HIVE external tables using HiveContext to query the da taset Zeppelin provides a fr iendly user interface to interact with Spark and visualize data using a range of charts and tables Spark SQL can also support JDBC/ODBC connectivity through a server running Thrift EMR Data Storage on S3 HIVE 
Apache Zeppelin with Spark on EMR
Another tool for data exploration is the Apache Zeppelin notebook with Spark. Spark is a general-purpose cluster computing system. It provides high-level APIs for Java, Python, Scala, and R. Spark SQL, an in-memory SQL engine, can integrate with HIVE external tables using HiveContext to query the dataset. Zeppelin provides a friendly user interface to interact with Spark and visualize data using a range of charts and tables. Spark SQL can also support JDBC/ODBC connectivity through a server running Thrift.

Figure 5 — Data exploration with Zeppelin (Amazon EMR, data storage on S3, HIVE metastore DB, and BI tools over JDBC/ODBC)

R/SparkR on EMR
Some data scientists like to use R/RStudio as the tool for data exploration and analysis, but feel constrained by the limitations of R, such as single-threaded execution and small data size support. SparkR provides both the interactive environment, rich statistical libraries, and visualization of R. Additionally, SparkR provides the scalable, fast, distributed storage and processing capability of Spark. SparkR uses DataFrames as the data structure, which is a distributed collection of data organized into named columns. DataFrames can be constructed from a wide array of data sources, including HIVE tables.

Figure 6 — Data exploration with Spark + R (Amazon EMR with the Zeppelin notebook, data storage on S3, and a HIVE metastore DB)

Training Data Preparation
Data scientists will need to prepare training data to support supervised and unsupervised model training. Data is formatted, transformed, and enriched for the modeling purpose. As only the relevant data variables should be included in the model training, feature selection is often performed to remove unneeded and irrelevant attributes that do not contribute to the accuracy of the predictive model. Amazon ML provides feature transformation and feature selection capability that simplifies this process. Labeled training datasets can be stored in Amazon S3 for easy access by machine learning services and frameworks.
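The following PySpark sketch shows one way such a labeled training set might be derived from raw event data and written back to Amazon S3. The feature definitions, the churn rule, the cutoff date, and the S3 paths are assumptions for illustration only, not a prescribed feature set.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("training-data-prep").getOrCreate()

# Read raw viewing events from the data lake; the path is a placeholder.
events = spark.read.parquet("s3://example-media-data-lake/viewing-events/")

# Derive simple per-user features and a churn label (no activity during the
# last 30 days of the observation window). Thresholds and column names are
# illustrative assumptions only.
features = (
    events.groupBy("user_id")
    .agg(
        F.countDistinct("asset_id").alias("distinct_titles"),
        F.count("*").alias("total_events"),
        F.max("event_time").alias("last_seen"),
    )
    .withColumn(
        "churned",
        (F.datediff(F.lit("2016-12-31"), F.col("last_seen")) > 30).cast("int"),
    )
    .drop("last_seen")
)

# Persist the labeled training set to S3, where Amazon ML or Spark ML can read it.
(features.write
    .mode("overwrite")
    .option("header", "true")
    .csv("s3://example-media-data-lake/training/churn/"))
```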
Interactive Model Training
To generate and select the right models for the target business use cases, data scientists must perform interactive model training against the training data. Table 2 provides a list of use cases with potential products that you can use to create your solution, followed by several example architectures for interactive model training.

Table 2 — Machine learning options on AWS
• Segmentation — Clustering (e.g., k-Means): Spark ML, Mahout, or R on Amazon EMR
• Recommendation — Collaborative filtering (e.g., Alternating Least Squares): Spark ML or Apache Mahout on Amazon EMR; neural networks: MXNet on Amazon EC2/GPU
• Customer churn — Classification (e.g., logistic regression): Amazon Machine Learning (managed service), or Spark ML, Apache Mahout, or R on Amazon EMR
• Sentiment analysis — Classification (e.g., logistic regression): Amazon Machine Learning (managed service); classification (e.g., Support Vector Machines, Naïve Bayes): Spark ML, Mahout, or R on Amazon EMR; neural networks: MXNet, Caffe, TensorFlow, Torch, or Theano on Amazon EC2/GPU

Amazon ML Architecture
Amazon ML is a fully managed machine learning service that provides the quickest way to get started with model training. Amazon ML can support long-tail use cases such as churn and sentiment analysis, where logistic regression (for classification) or linear regression (for the prediction of a numeric value) algorithms can be applied. The following are the main steps of model training using Amazon ML:
1. Data source creation — Labeled training data is loaded directly from the Amazon S3 bucket where the data is stored. A target column indicating the prediction field must be selected as part of the data source creation.
2. Feature processing — Certain variables can be transformed to improve the predictive power of the model.
3. ML model generation — After the data source is created, it can be used to train the machine learning model. Amazon ML automatically splits the labeled training set into a training set (70%) and an evaluation set (30%). Depending on the selected target column, Amazon ML automatically picks one of three algorithms (binary logistic regression, multinomial logistic regression, or linear regression) for the training.
4. Performance evaluation — Amazon ML provides model evaluation features for model performance assessment and allows for adjustment of the error tolerance threshold.
All trained models are stored and managed directly within the Amazon ML service and can be used for both batch and real-time prediction.

Spark ML/Spark MLlib on Amazon EMR Architecture
For the use cases that require other machine learning algorithms, such as clustering (for segmentation) and collaborative filtering (for recommendation), Amazon EMR provides cluster management support for running Spark ML. To use Spark ML and Spark MLlib for interactive data modeling, data scientists have two choices: they can use the Spark shell by connecting with SSH to the master node of the EMR cluster, or use the Zeppelin data science notebook running on the EMR cluster master node. Spark ML and Spark MLlib support a range of machine learning algorithms for classification, regression, collaborative filtering, clustering, decomposition, and optimization. Another key benefit of Spark is that the same engine can perform data extraction, model training, and interactive query. A data scientist will need to programmatically train the model using languages such as Java, Python, or Scala.
Spark ML provides a set of APIs for creating and tuning machine learning pipelines. The following are the main concepts to understand for pipelines:
• DataFrame — Spark ML uses a DataFrame from Spark SQL as an ML dataset. For example, a DataFrame can have different columns corresponding to different columns in the training dataset that is stored in Amazon S3.
• Transformer — An algorithm that can transform one DataFrame into another DataFrame. For instance, an ML model is a Transformer that transforms a DataFrame with features into a DataFrame with predictions.
• Estimator — An algorithm that can be fit on a DataFrame to produce a Transformer.
• Parameter — All Transformers and Estimators share a common API for specifying parameters.
• Pipeline — Chains multiple Transformers and Estimators together to specify an ML workflow.
Spark ML provides two approaches for model selection: cross-validation and validation split. With cross-validation, the dataset is split into multiple folds that are used as separate training and test datasets; two-thirds of each fold are used for training and one-third of each fold is used for testing. This approach is a well-established method for choosing parameters and is more statistically sound than heuristic tuning by hand; however, it can be very expensive because it cross-validates over a grid of parameters. With validation split, the dataset is split into a training data asset and a test data asset. This approach is less expensive, but when the training data is not sufficiently large it won't produce results that are as reliable as cross-validation.
Spark ML supports a method to export models in the Predictive Model Markup Language (PMML) format, and the trained model can be exported and persisted into an Amazon S3 bucket using the model save function.
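As a hedged illustration of these concepts, the following PySpark sketch chains a Transformer and an Estimator into a Pipeline, tunes it with cross-validation, and saves the best model to Amazon S3 using the Spark ML pipeline persistence format (rather than PMML). The column names, parameter grid, and S3 paths are assumptions carried over from the earlier preparation sketch.

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("churn-model-training").getOrCreate()

# Load the labeled training set prepared earlier; path and columns are placeholders.
training = (
    spark.read.csv("s3://example-media-data-lake/training/churn/",
                   header=True, inferSchema=True)
    .withColumn("churned", F.col("churned").cast("double"))
)

# Transformer (VectorAssembler) and Estimator (LogisticRegression) chained
# into a single Pipeline.
assembler = VectorAssembler(
    inputCols=["distinct_titles", "total_events"], outputCol="features"
)
lr = LogisticRegression(featuresCol="features", labelCol="churned")
pipeline = Pipeline(stages=[assembler, lr])

# Candidate parameters; cross-validation picks the best-performing combination.
param_grid = (
    ParamGridBuilder()
    .addGrid(lr.regParam, [0.01, 0.1])
    .addGrid(lr.elasticNetParam, [0.0, 0.5])
    .build()
)
cv = CrossValidator(
    estimator=pipeline,
    estimatorParamMaps=param_grid,
    evaluator=BinaryClassificationEvaluator(labelCol="churned"),
    numFolds=3,
)

cv_model = cv.fit(training)

# Persist the best pipeline model to Amazon S3 so it can be reloaded elsewhere.
cv_model.bestModel.save("s3://example-media-data-lake/models/churn-lr/")
```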
The saved models can then be deployed into other environment s and loaded for generating prediction Machine Learning on EC2 /GPU /EMR Architecture s For use case s that require dif ferent ma chine learning frameworks that are not supported by A mazon ML or Amazon EMR these frameworks can be installed and run on EC2 fleet s An AMI is available with preinstalled machine learning Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 22 packages including MXNet CNTK Caffe Tensorflow Theano and Torch Additional machine learning packages can be added easily to EC2 instances Other machine learning frameworks can also be installed on Amazon EMR via bootstrap actions to take advantage of the EMR cluster management Examples include Vowpal Wabbit Skytree and H2O Prediction Processing and Serving One architecture pattern for serving predictions quickly using both historic and new data is the lambda architecture The components for this architecture include a batch layer speed layer and serving layer all working together to enable up todate predictions as new data flows into the system Despite its name this pattern is not related to the AWS Lambda service The following is a brief description for each portion of the pattern shown in Figure 7 • Event data — Eventlevel data is typically log data based on user activity This could be data captured on websites mobile devices or social media activities Amazon Mobile Analytics provides an easy way to capture user activity for mobile devices The Amazon Kinesis Agent makes it easy to ingest log data such as web logs Also the Amazon Kinesis Producer Library (KPL) makes it easy to programmatically ingest data int o a stream • Streaming — The streaming layer ingests data as it flows into the system A popular choice for processing streams is Amazon Kinesis Streams because it is a managed service that minimiz es administration and maintenance Amazon Kinesis Firehose c an be used as a stream that stores all the records to a data lake such as an Amazon S3 bucket Figure 7 — Lambda architecture components Event Data Streaming Speed Layer Serving Layer Data Lake Batch Layer Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 23 • Data lake — The data lake is the storage layer for big data associated with event level data generated by M&E users The popular choice in AWS is Amazon S3 for highly durable and scalable data • Speed layer — The speed layer continually updat es predictive results as new data arrives This layer processes less data than the batch layer so the results may not be as accurate as the batch layer However the results are more readily available This layer can be implemented in Amazon EMR using Spark Streaming • Batch layer — The batch layer processes machine learning models using the full set of event level data available This processing can take longer but ca n produce higher fidelity predictions This layer can be implemented using Spark ML in Amazon EMR • Serving layer — The serving layer respond s to predictions on an ongoing basis This layer arbitrate s between the results generated by the batch and speed la yers One way to accomplish this is by storing predictive results in a NoSQL database such as DynamoDB With this approach predictions are stored on an ongoing basis by both the batch and speed layers as they are processed AWS Services and Benefits Mach ine learning solutions come in many shapes and sizes Some of t he AWS services commonly used to build machine learning solutions 
are described in the following sections During the predictive analytics process work flow different resources are needed throughout different parts of the lifecycle AWS services work well in this scenario because resources can run on demand and y ou pay only for the services you consume Once you stop using them there are no additional costs or terminat ion fees Amazon S3 In the context of machine learning Amazon S3 is an excellent choice for storing training and evaluation data Reasons for this choice include its provision of highly parallelized low latency access that it can store vast amounts of structure d and unstructured data and is low cost Amazon S3 is also integrated into a useful ecosystem of tools and other services extending the functionality of Amazon S3 for ingestion and processing of new data For example Amazon Kinesis Firehose can be used to capture streaming data AWS Lambda event Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 24 based triggers enable serverless compute processing when data arrives in an Amazon S3 bucket Amazon ML uses Amazon S3 as input for training and evaluation dataset s as well as for batch predictions Amazon EMR with its ecosystem of machine learning tools also benefits from using Amazon S3 buckets for storage By using Amazon S3 EMR clusters can decouple storage and compute which has the advantage of scaling eac h independently It also facilitates using transient clusters or multiple clusters for reading the same data at the same time Amazon Kinesis Amazon Kinesis is a platform for streaming data on AWS offering powerful services to make it easy to load and analyze streaming data The Amazon suite of services also provid es the ability for you to build custom streaming data applications for specialized needs One such use case is applying machine learning to stream ing data There are three Amazon Kinesis services that fit different needs : • Amazon Kinesis Firehose accepts streaming data and persists the data to persistent storage including Amazon S3 Amazon Redshift and Amazon Elasticsearch Service • Amazon Kinesis Analytics lets you gain insights from streaming data in real time using standard SQL Analytics also include advanced functions such as the Random Cut Forest which calculates anomalies on streaming datasets • Amazon Kinesis Streams is a streaming service that can be used to create custom streaming applications or integrate into other applications such as Spark Streaming in Amazon EMR for real time Machine Learning Library (MLlib) workloads Amazon EMR Amazon EMR simplifies big data processing providing a managed Hadoop framework This approach makes it easy fast and cost effective for you to distribute and process vast amounts of data across dynamically scalable Amazon EC2 instances You can also run o ther popular distributed frameworks such as Apache Spark and Presto in Amazon EMR and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB The large ecosystem of Hadoop based machine learning tools can be used in Amazon EMR Amazon Web Services – Building Media & Entertainment Predictive Analytics Solutions Page 25 Amazon Machine Learning (Amazon ML) Amazon ML is a service that makes it easy for developers of all skill levels to use machine learning technology Amazon ML provides visualization tools and wizards that guide you through the process of creating machine learning models without having to learn complex machine learning algorithms and technology Once your models are ready 
Amazon ML makes it easy to obtain predictions for your application using simple APIs, without having to implement custom prediction generation code or manage any infrastructure. Amazon ML is based on the same proven, highly scalable machine learning technology used for years by Amazon's internal data scientist community. The service uses powerful algorithms to create machine learning models by finding patterns in your existing data. Then Amazon ML uses these models to process new data and generate predictions for your application. Amazon ML is highly scalable and can generate billions of predictions daily and serve those predictions in real time and at high throughput. With Amazon ML there is no upfront hardware or software investment, and you pay as you go, so you can start small and scale as your application grows.

AWS Data Pipeline
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With Data Pipeline you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don't have to worry about ensuring resource availability, managing intertask dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. Data Pipeline also enables you to move and process data that was previously locked up in on-premises data silos, unlocking new predictive analytics workloads.

Amazon Elastic Compute Cloud (Amazon EC2)
Amazon EC2 is a simple yet powerful compute service that provides complete control of server instances that can be used to run many machine learning packages. The EC2 instance type options include a wide variety of options to meet the various needs of machine learning packages. These include compute-optimized instances with relatively more CPU cores, memory-optimized instances for packages that use lots of RAM, and massively powerful GPU-optimized instances for packages that can take advantage of GPU processing power.

Amazon CloudSearch
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application. In the context of predictive analytics architecture, CloudSearch can be used to serve prediction outputs for the various use cases.

AWS Lambda
AWS Lambda lets you run code without provisioning or managing servers. With Lambda you can run code for virtually any type of application or backend service, all with zero administration. In the predictive analytics architecture, Lambda can be used for tasks such as data processing triggered by events, machine learning batch job scheduling, or as the back end for microservices to serve prediction results.
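As one hedged example of that microservice pattern, the following Lambda handler sketch returns precomputed predictions stored in DynamoDB. The table name, key schema, attribute names, and event shape (an Amazon API Gateway Lambda proxy integration) are assumptions for illustration, not part of the reference architecture.

```python
import json

import boto3

# Table and attribute names are illustrative; the table is assumed to be kept
# up to date by the batch and speed layers described earlier, with the item's
# "Recommendations" attribute holding a list of asset ID strings.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("RecommendationsByUser")


def handler(event, context):
    """Return precomputed recommendations for one user.

    Written for an API Gateway Lambda proxy integration, which supplies the
    user identifier as a path parameter.
    """
    user_id = event["pathParameters"]["userId"]

    response = table.get_item(Key={"UserId": user_id})
    item = response.get("Item")

    if item is None:
        return {"statusCode": 404,
                "body": json.dumps({"message": "no predictions for this user"})}

    return {
        "statusCode": 200,
        "body": json.dumps({
            "userId": user_id,
            "recommendations": item.get("Recommendations", []),
        }),
    }
```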
Amazon Relational Database Service (Amazon RDS)
Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. In the predictive analytics architecture, Amazon RDS can be used as the data store for HIVE metastores and as the database for serving prediction results.

Amazon DynamoDB
Amazon DynamoDB is a fast and flexible NoSQL database service, ideal for any application that needs consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. In the predictive analytics architecture, DynamoDB can be used to store data processing status or metadata, or as a database to serve prediction results.

Conclusion
In this paper, we provided an overview of the common Media and Entertainment (M&E) predictive analytics use cases. We presented an architecture that uses a broad set of services and capabilities of the AWS Cloud to enable both the data scientist workflow and the predictive analytics generation workflow in production.

Contributors
The following individuals and organizations contributed to this document:
• David Ping, solutions architect, Amazon Web Services
• Chris Marshall, solutions architect, Amazon Web Services

Document revisions
• March 30, 2021 — Reviewed for technical accuracy
• February 24, 2017 — Corrected broken links, added links to libraries, and incorporated minor text updates throughout
• December 2016 — First publication
|
General
|
consultant
|
Best Practices
|
Choosing_the_Operating_System_for_Oracle_Workloads_on_Amazon_EC2
|
Choosing the Operating System for Oracle Workloads on Amazon EC2 Published June 2014 Updated July 19 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Oracle AMIs 2 Operating systems and Oracle licensing 3 Oracle certified operating systems 3 Red Hat Enterprise Linux 3 SUSE Lin ux Enterprise Server 4 Oracle Linux 5 Microsoft Windows Serv er 6 Conclusion 6 Contributors 6 Further reading 7 Document history 7 Abstract Amazon Web Services (AWS) provide s a comprehensive set of services and tools for deploying enterprise applications in a highly secure reliable available and cost effective manner The AWS Cloud is an excellent platform to run business critical Oracle workloads in a n efficient way This whitepaper discusses the operating system choices that are best suited for running Oracle workloads on AWS The target audience for this whitepaper includes enterprise architects database administrators IT managers and developers who want to migrate Or acle workloads to AWS Amazon Web Services Choosing the Operating System for Oracle Workloads on Amazon EC2 1 Introduction Oracle software works well on Amazon Web Services (AWS) and many enterprises run their critical Oracle workloads on AWS for both production systems and non production systems These applications can benefit from the many features of the AWS Cloud like scriptable infrastructure instant provisioning and de provisionin g scalability elasticity usage based billing and the ability to support a wide variety of operating systems Whether you are migrating your existing Oracle environments to AWS or implement ing new Oracle applications on AWS choosing the operating system on which these applications will run is a crucial decision We highly recommend that you choose an Oracle certified operating system to run Oracle software on AWS whether you are running Or acle Database Oracle enterprise applications or Oracle middleware You can use the following Oracle certified operating systems on AWS: • Red Hat Enterprise Linux (RHEL) • SUSE Linux Enterprise Server • Oracle Linux • Microsoft Windows Server Note : Only Oracle c an make definitive statements about what products are considered certified For details see Oracle's My Oracle Support website or ask your Oracle sales representative You can use any one of the four oper ating systems for all of your Oracle workloads or you can use a combination of them as needed For example to implement Oracle Siebel you can run Oracle Database on RHEL while running web servers and application servers on Microsoft Windows Server All four of these operating systems are well suited for enterprise workloads but each of them has features and capabilities that the others do not have Knowing the differences will help you make the right decision about what is best for your envi 
ronments If you migrate an existing Oracle environment on Intel platform to AWS and if that environment currently uses one of the four operating systems in the preceding list then it might be best to choose the same operating system on AWS to keep any compatibil ity risks to a minimum However it also might be worthwhile to evaluate other options If you migrate from a non Intel platform or implement a completely new Amazon Web Services Choosing the Operating System for Oracle Workloads o n Amazon EC2 2 environment on AWS then you should carefully evaluate the operating systems before you choose th e one that is best for your environment Oracle AMIs An Amazon Machine Image (AMI) is a special type of pre configured operating system and virtual application software that is used to create a virtual machine on Amazon Elastic Compute Cloud ( Amazon EC2) The AMI serves as the basic unit of deployment for services delivered using Amazon EC2 The AMI provides the information required to launch an instance which is a virtual server in the cloud You specify an AMI when you launch an instance and you can lau nch as many instances from the AMI as you need An AMI includes the following: • A template for the root volume for the instance • Launch permissions that control which AWS accounts can use the AMI to launch instances • A block device mapping that specifies the volumes to attach to the instance when it's launched There are no official AMIs available for most Oracle products In addition the AMIs that are available might not always be the latest version Even when there are latest versions of the AMIs available they will be based on the Oracle Linux operating system so depending on your operating system of choice this might not be the best option You do not need an Oracle provided AMI to install and use Oracle products on AWS You can start an Amazon EC2 insta nce with an operating system AMI and then download and install Oracle software from the Oracle website just as you would do in the case of a physical server You can use any one of the four operating systems discussed in the preceding section for this pu rpose Once you have the first environment set up with all the necessary Oracle software you can create your own custom AMI for subsequent installations You can also directly launch AMIs from AWS Mark etplace You should closely scrutinize any community AMIs provided by third parties for security and reliability before using them AWS is not responsible or liable for their security or reliability AMIs use one of two types of virtualization: • Paravirtua l (PV) Amazon Web Services Choosing the Operating System for Oracle Workloads on Amazon EC2 3 • Hardware Virtual M achine (HVM) The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special ha rdware extensions (CPU network and storage) for better performance Note: For the best performance we recommend that you use current generation instance types and HVM AMIs when you launch new instances For more information on current generation instanc e types see Amazon EC2 Instances Operating systems and Oracle licensing On AWS the operating system you use does not affect Oracle licensing The number of Oracle licenses you need to run your Oracle workloads on AWS will be the same no matter which operating system you choose As currently advised by Oracle the key factor that affect s Oracle licensing on AWS for Amazon EC2 and RDS is that you should count two vCPUs as equivalent to one Oracle Processor 
license if hyper threading is enabled and one vCPU as equivalent to one Oracle Processor license if hyper threading is not enabled When counting Oracle Processor license requirements in AWS Cloud Environments the Oracle Processor Core Factor Table is not applicable You can consult Oracle’s Licensing Oracle Software in the Cloud Computing Environment to understand how Oracle licensing applies to AW S To find out the physical core count of each Amazon EC2 instance type see Physical Cores by Amazon EC2 Instance Type Oracle certified operating systems This section provides information about t he four operating systems that are certified by Oracle and recommended for use with AWS Note: It is possible to run Oracle products on non certified operating systems but for the best performance and supportability we recommend that you use an Oracle certified operating system for use on AWS Red Hat Enterprise Linux A large number of enterprises of all sizes use Red Hat Enterprise Linux (RHEL) to deploy Oracle workloads RHEL is a great choice for any Oracle workloads on AWS Amazon Web Services Choosing the Operating System for Oracle Workloads o n Amazon EC2 4 AWS and Red Hat have teamed to offer RHEL on Amazon EC2 providing a complete enterprise class computing environment with the simplicity and scalability of AWS Red Hat maintains the base RHEL images for Amazon EC2 As an AWS customer you will receive updates at the same time that updates are made available from Red Hat so your computing environment remains reliable and your RHEL certified applications maintain their supportability For additional information about RHEL on AWS see Red Hat on AWS RHEL is available for all Amazon EC2 instance types on AWS including HVM instances On HVM instances RHEL supports HugePages which can especially enhance the performance of Oracle Database HugePages is a Linux feature that makes it possible for the operating system to support very large memory pages On AWS you can use HugePages only on HVM instances For more information about HVM instances on AWS see Linux AMI virtualization types Important: A special feature in RHEL named Transparent HugePages (THP) is not compatible with O racle Database and should be disabl ed for best performance RHEL on AWS Pricing AWS customers can quickly deploy and scale compute resources according to their business needs with flexible purchase options for RHEL and RHEL with High availability: • Payasyougo Provision resources on demand as computing needs grow without long term commitments or upfront costs • Reserved Instances Lower cost further by purchasing compute resources with a one time upfront payment • Bring existing subscription Customers with Re d Hat Enterprise Linux Premium subscriptions can use Red Hat Cloud Access to move subscriptions to Amazon EC2 SUSE Linux Enterprise Server SUSE Linux Enterprise Server (SL ES) is an operating system of choice for Oracle workloads in many large Oracle deployments SLES is a great choice to run Oracle workloads on AWS as well SUSE maintains the base SLES images for Amazon EC2 Amazon Web Services Choosing the Operating System for Oracle Workloads on Amazon EC2 5 and as an AWS customer you will receive updates a t the same time that updates are made available from SUSE SLES also is available for all Amazon EC2 instance types on AWS including HVM instances On HVM instances SLES supports HugePages which can especially enhance the performance of Oracle Database You can launch an SLES based Amazon EC2 instance directly from the AWS console or from 
the AWS Marketplace For additional information about SLES on AWS see SUSE and AWS SUSE on AWS Pricing SUSE on AWS is available with the ondemand and Bring Your Own Subscription (BYOS) subscription model AWS on demand SUSE subscriptions are offered at either a flat hourly rate with no commitment or through a one time upfront payment Both purchase options include Amazon EC2 compute charges and SUSE subscription charges Amazon tracks and bills customers who purchase SUSE Li nux Enterprise Server (SLES) or SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) subscriptions through AWS In BYOS image customers use existing products purchased from SUSE on a BYOS basis with images available as a Community AMI Oracle L inux As the operating system Oracle uses to build and test their products Oracle Linux is an excellent choice for running Oracle workloads on AWS Oracle Linux EC2 instances can be launched using an Amazon Machine Image (AMI) available in the AWS Marketpl ace or as a Community AMI You can also bring your own Oracle Linux AMI or existing Oracle Linux license to AWS Unlike the other three Linux operating systems discussed here Oracle Linux has no cost for licensing making it the lowest cost option You ca n purchase support directly from Oracle but support is not necessary to get updates and patches Oracle provides public yum repositories to download updates and patches even for customers who have not subscribed to support For customers who have subscri bed to support Oracle Linux allows zero downtime updates which can be useful for mission critical applications Oracle Linux has a special feature named Database Smart Flash Cache that is not available in any of the other operating systems discussed here Database Smart Flash Cache allows the database buffer cache to expand beyond the system global area Amazon Web Services Choosing the Operating System for Oracle Workloads o n Amazon EC2 6 (SGA ) in main memory to a second level cache on flash memory Making use of Database Smart Flash Cache for Oracle Database can substantially increase the database performance This is a good feature to use with Amazon EC2 instances that have a large amount of SSD instance storage Microsoft Windows Server Microsoft Windows Server versions 2012 2012 R2 2016 and 2019 are available on Amazon EC2 as Oracle certified operating systems to run Oracle workloads Microsoft Windows Server is an excellent choice for many Oracle workloads especially for running enterprise applications like PeopleSoft Siebel and JD Edwards M icrosoft Windows is available on all types of Amazon EC2 instances including HVM making it a good choice for Amazon EC2 instance types with large memory configurations To access and launch all Microsoft Windows AMIs see Windows AMIs Microsoft Windows on Amazon EC2 is available with the managed service model where AWS takes on all the burdens of acquiring Microsoft Windows licenses to use in the Amazon EC2 service The Microsoft Windows l icense tends to be more expensive than the other three Oracle certified operating systems for the same instance type Conclusion We recommend that you choose one of the four operating systems discussed in this whitepaper for any of your Oracle environments on AWS so that your Oracle workloads run on an Oracle certified operating system You can use any one of the four operating systems for all your Oracle workloads or you can use a combination of them as needed Your choice typically will depend on familia rity type of workload instance choice and cost 
preference.

Contributors
Contributors to this document include:
• Vuyisa Maswana, Solutions Architect, Amazon Web Services
• Abdul Sathar Sait, Amazon Web Services

Further reading
For additional information about running Oracle workloads on AWS, consult the following resources:
Oracle Database on AWS:
• Advanced Architectures for Oracle Database on Amazon EC2
• Strategies for Migrating Oracle Database to AWS
• Determining the IOPS Needs for Oracle Database on AWS
• Best Practices for Running Oracle Database on AWS
Oracle on AWS:
• Oracle and Amazon Web Services
• Amazon RDS for Oracle
AWS service details:
• AWS Cloud Products
• AWS Documentation
• AWS Whitepapers & Guides
AWS pricing information:
• AWS Pricing
• AWS Pricing Calculator

Document history
• July 19, 2021 — Updated for latest service changes and technologies
• December 2014 — First publication
|
General
|
consultant
|
Best Practices
|
Comparing_the_Use_of_Amazon_DynamoDB_and_Apache_HBase_for_NoSQL
|
Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL January 2020 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 Amazon DynamoDB Overview 2 Apache HBase Overview 3 Apache HBase Deployment Options 3 Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode) 4 Managed Apache HBase on Amazon EMR (HDFS Storage Mode) 4 SelfManaged Apache HBase Deployment Model on Amazon EC2 5 Feature Summary 6 Use Cases 8 Data Models 9 Data Types 15 Indexing 17 Data Processing 21 Throughput Model 21 Consistency Model 23 Transaction Model 23 Table Operations 24 Architecture 25 Amazon DynamoDB Architecture Overview 25 Apache HBase Architecture Overview 26 Partitioning 28 Performance Optimizations 29 Amazon DynamoDB Performance Considerations 29 Apache HBase Performance Considerations 33 Conclusion 37 Contributors 38 Further Reading 38 Document Revisions 38 Abstract One challenge that architects and developers face today is how to process large volumes of data in a timely cost effective and reliable manner There are several NoSQL solutions in the market and choosing the most appropriate one for your partic ular use case can be difficult This paper compares two popular NoSQL data stores —Amazon DynamoDB a fully managed NoSQL cloud database service and Apache HBase an open source column oriented distributed big data store Both Amazon DynamoDB and Apache HBase are available in the Amazon Web Services (AWS) Cloud Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 1 Introduction The AWS Cloud accelera tes big data analytics With access to instant scalability and elasticity on AWS you can focus on analytics instead of infrastructure Whether you are indexing large data sets analyzing massive amounts of scientific data or processing clickstream logs AWS provides a range of big data products and services that you can leverage for virtually any data intensive project There is a wide adoption of NoSQL databases in the growing industry of big data and realtime web applications Amazon DynamoDB and Apach e HBase are examples of NoSQL databases which are highly optimized to yield significant performance benefits over a traditional relational database management system (RDBMS) Both Amazon DynamoDB and Apache HBase can process large volumes of data with hig h performance and throughput Amazon DynamoDB provides a fast fully managed NoSQL database service It lets you offload operating and scaling a highly available distributed database cluster Apache HBase is an open source column oriented distributed bi g data store that runs on the Apache Hadoop framework and is typically deployed on top of the Hadoop Distributed File System (HDFS) which provides a scalab le persistent storage layer In the AWS Cloud you can choose to deploy Apache HBase on Amazon 
Elastic Compute Cloud (Amazon EC2) and manage it yourself Alternatively you can leverage Apache HBase as a managed service on Amazon EMR a fully managed hosted Hadoo p framework on top of Amazon EC2 With Apache HBase on Amazon EMR you can use Amazon Simple Storage Service (Amazon S3) as a data store using the EMR File System (EMRFS) an implementation of HDFS that all Amazon EMR clusters use for reading and writing regular files from Amazon EMR directly to Amazon S3 The following figure shows the relationsh ip between Amazon DynamoDB Amazon EC2 Amazon EMR Amazon S3 and Apache HBase in the AWS Cloud Both Amazon DynamoDB and Apache HBase have tight integration with popular open source processing frameworks like Apache Hive and Apache Spark to enhance querying capabilities as illustrated in the diagram Amazon Web Services Comparing the Us e of Amazon DynamoDB and Apache HBase for NoSQL Page 2 Figure 1: Relation between Amazon DynamoDB Amazon EC2 Amazon EMR and Apache HBase in the AWS Cloud Amazon Dynam oDB Overview Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability Amazon DynamoDB offers the following benefits: • Zero administrative overhead —Amazon DynamoDB manages the burdens of hardware provisioning setup and configuration replication cluster scaling hardware and software updates and monitoring and handling of hardware failures • Virtually unlimited throughput and scale —The provisioned throughput model of Amazon DynamoDB allows you to specify throughput capacity to serve nearly any level of request traffic With Amazon DynamoDB there is virtually no limit to the amount of data that can be stored and retrieved • Elasticity and flexibility —Amazon DynamoDB can handle unpredictable workloads with predictable performance and still maintain a stable latency profile that shows no latency increase or throughput decrease as the data volume rises with increased usage Amazon DynamoDB lets you increa se or decrease capacity as needed to handle variable workloads Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 3 • Auto matic scaling— Amazon DynamoDB can scale automatically within user defined lower and upper bounds for read and write capacity in response to changes in application traffic These qualitie s render Amazon DynamoDB a suitable choice for online applications with spiky traffic patterns or the potential to go viral anytime • Integration with other AWS services —Amazon DynamoDB integrates seamlessly with other AWS services for logging and monitorin g security analytics and more For more information see the Amazon DynamoDB Developer Guide Apache HBase Overview Apache HBase a Hadoop NoSQL database offers the following benefits: • Efficient storage of sparse data —Apache HBase provides fault tolerant storage for large quantities of sparse data using column based compression Apache HBase is capable of storing and processing billions of rows and millions of columns per row • Store for high frequency counters —Apache HBase is suitable for tasks such as high speed counter aggregation because of its consistent reads and writes • High write throughput and update rates —Apache HBase supports low latency lookups and range scans efficient updates and deletions of individual records and high write throughput • Support for multiple Hadoop jobs —The Apache HBase data store allows data to be used by one or more Hadoop jobs on a single cluster or across multiple Hadoop clusters Apache HBase 
Deployment Options The following section provides a description of Apache HBase deployment options in the AWS Cloud Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 4 Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode) Amazon EMR enables you to use Amazon S3 as a data store for Apache HBase using the EMR File System and offers the following benefits : • Separation of compute from storage — You can size your Amazon EMR cluster for compute instead of data requirements allowing you to avoid the need for the customary 3x repli cation in HDFS • Transient clusters —You can scale compute nodes without impacting your underlying storage and terminate your cluster to save costs and quickly restore it • Built in availability and durability —You get the availability and durability of Amazon S3 storage by default • Easy to provision read replicas —You can create and configure a read replica cluster in another Amazon EC2 Availability Zone that provides read only access to the same data as the primary cluster ensuring uninterrupted access to you r data even if the primary cluster becomes unavailable Managed Apache HBase on Amazon EMR (HDFS Storage Mode) Apache HBase on Amazon EMR is optimized to run on AWS and offers the following benefits : • Minimal administrative overhead —Amazon EMR handles provi sioning of Amazon EC2 instances security settings Apache HBase configuration log collection health monitoring and replacement of faulty instances You still have the flexibility to access the underlying infrastructure and customize Apache HBase furthe r if desired • Easy and flexible deployment options —You can deploy Apache HBase on Amazon EMR using the AWS Management Console or by using the AWS Command Line Interface (AWS CLI) Once launched resizing an Apache HBase cluster is easily accomplished with a single API call Activities such as modifying the Apache HBase configuration at launch time or i nstalling third party tools such as Ganglia for monitoring performance metrics are feasible with custom or predefined scripts Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 5 • Unlimited scale —With Apache HBase running on Amazon EMR you can gain significant cloud benefits such as easy scaling low cost pay only for what you use and ease of use as opposed to the self managed deployment model on Amazon EC2 • Integration with other AWS services —Amazon EMR is designed to seamlessly integrate with other AWS serv ices such as Amazon S3 Amazon DynamoDB Amazon EC2 and Amazon CloudWatch • Built in backup feature —A key benefit of Apache HBase running on Amazon EMR is the built in mechanism available for backing up Apache HBase data durably in Amazon S3 Using this f eature you can schedule full or incremental backups and roll back or even restore backups to existing or newly launched clusters anytime SelfManaged Apache HBase Deployment Model on Amazon EC2 The Apache HBase self managed model offers the most flexibi lity in terms of cluster management but also presents the following challenges: • Administrative overhead —You must deal with the administrative burden of provisioning and managing your Apache HBase clusters • Capacity planning —As with any traditional infrast ructure capacity planning is difficult and often prone to significant costly error For example you could over invest and end up paying for unused capacity or under invest and risk performance or availability issues • Memory management —Apache HBase is mai nly memory 
driven Memory can become a limiting factor as the cluster grows It is important to determine how much memory is needed to run diverse applications on your Apache HBase cluster to prevent nodes from swapping data too often to the disk The numb er of Apache HBase nodes and memory requirements should be planned well in advance • Compute storage and network planning —Other key considerations for effectively operating an Apache HBase cluster include compute storage and network These infrastructur e components often require dedicated Apache Hadoop/Apache HBase administrators with specialized skills Amazon Web Services Comparing the Use of Ama zon DynamoDB and Apache HBase for NoSQL Page 6 Feature Summary Amazon DynamoDB and Apache HBase both possess characteristics that are critical for successfully processing massive amounts of data The following table provides a summary of key features of Amazon DynamoDB and Apache HBase that can help you understand key similarities and differences between the two databases These features are discussed in later sections Table 1: Amazon DynamoDB and Apache HBase Feature Summary Feature Amazon DynamoDB Apache HBase Description Hosted scalable database service by Amazon Column store based on Apache Hadoop and on concepts of BigTable Implementation Language Java Server Operating Systems Hosted Linux Unix Windows Database Model Keyvalue & Document store Wide column store Data Scheme Schema free Schema free Typing Yes No APIs and Other Access Methods Flexible Flexible Supported Programming Languages Multiple Multiple Server side Scripts No Yes Triggers Yes Yes Partitioning Methods Sharding Sharding Throughput Model User provisions throughput Limited to hardware configuration Auto matic Scaling Yes No Partitioning Automatic partitioning Automatic sharding Replication Yes Yes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 7 Feature Amazon DynamoDB Apache HBase Durability Yes Yes Administration No administration overhead High administration overhead in self managed and minimal on Amazon EMR User Concepts Yes Yes Data Model Row Item – 1 or more attributes Columns/column families Row Size Item size restriction No row size restrictions Primary Key Simple/Composite Row key Foreign Key No No Indexes Optional No built in index model implemented as secondary tables or coprocessors Transactions Row Transactions Itemlevel transactions Single row transactions Multi row Transactions Yes Yes Cross table Transactions Yes Yes Consistency Model Eventually consistent and strongly consistent reads Strongly consistent reads and writes Concurrency Yes Yes Updates Conditional updates Atomic read modify write Integrated Cache Yes Yes Time ToLive (TTL) Yes Yes Encryption at Rest Yes Yes Backup and Restore Yes Yes Point intime Recovery Yes Yes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 8 Feature Amazon DynamoDB Apache HBase Multiregion Multi master Yes No Use Cases Amazon DynamoDB and Apache HBase are optimized to process massive amounts of data Popular use cases for Amazon DynamoDB and Apache HBase include the following: • Serverless applications —Amazon DynamoDB provides a durable backend for storing data at any scale and has become the de facto database for powering Web and mobile backend s for ecomm erce/retail education and m edia verticals • High volume special events —Special events and seasonal events such as national electoral campaigns are of relatively short duration and have variable workloads 
with the potential to consume large amounts of resources Amazon DynamoDB lets you increase capacity when you need it and decrease as needed to handle variable workloads This quality renders Amazon DynamoDB a suitable choice for such high volume special events • Social media applications —Community based applications such as online gaming photo sharing location aware applications and so on have unpredictable usage patterns with the potential to go viral anytime The elasticity and flexibility of Amazon DynamoDB make it suitable for such high volume variable workloads • Regulatory and complianc e requirements —Both Amazon DynamoDB and Amazon EMR are in scope of the AWS compliance efforts and therefore suitable for healthcare and financial services workloads as described in AWS Se rvices in Scope by Compliance Program • Batch oriented processing —For large datasets such as log data weather data product catalogs and so on you m ay already have large amounts of historical data that you want to maintain for historical trend analysis but need to ingest and batch process current data for predictive purposes For these types of workloads Apache HBase is a good choice because of its high read and write throughput and efficient storage of sparse data Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 9 • Reporting —To process and report on hi gh volume transactional data such as daily stock market trades Apache HBase is a good choice because it supports high throughput writes and update rates which make it suitable for storage of high frequency counters and complex aggregations • Real time analytics —The payload or message size in event data such as tweets E commerce and so on is relatively small when compared with application logs If you want to ingest streaming event data in real time for sentiment analysis ad serving trend ing analysis and so on Amazon DynamoDB lets you increase throughout capacity when you need it and decrease it when you are done with no downtime Apache HBase can handle realtime ingestion of data such as application logs with ease due to its high write throughput and efficient storage of sparse data Combining this capability with Hadoop's ability to handle sequential reads and scans in a highly optimized way renders Apache HBase a powerful tool for real time data analytics Data Models Amazon Dynam oDB is a key/value as well as a document store and Apac he HBase is a key/value store For a meaningful comparison of Amazon DynamoDB with Apache HBase as a NoSQL data store this document focus es on the key/value data model for Amazon DynamoDB Amazon DynamoDB and Apache HBase are designed with the goal to deliver significant performance benefits with low latency and high throughput To achieve this goal key/value stores and document stores have simpler and less constrained data models than trad itional relational databases Although the fundamental data model building blocks are similar in both Amazon DynamoDB and Apache HBase each database uses a distinct terminology to describe its specific data model At a high level a database is a collecti on of tables and each table is a collection of rows A row can contain one or more columns In most cases NoSQL database tables typically do not require a formal schema except for a mandatory primary key that uniquely identifies each row The following t able illustrates the high level concept of a NoSQL database Table 2: High Level NoSQL Database Table Representation Amazon Web Services Comparing the Use of Amazon 
Dyna moDB and Apache HBase for NoSQL Page 10 Table Row Primary Key Column 1 Columnar databases are devised to store each column separately so that aggregate operations for one column of the entire table are significantly quicker than the traditional row storage model From a comparative standpoint a row in Amazon DynamoDB is referred to as an item and each item can have any number of attributes An attribute comprises a key and a value and commonly referred to as a name value pair An Amazon DynamoDB table can have unlimited items indexed by primary key as shown in the following example Table 3: High Level Representation of Amazon DynamoDB Table Table Item 1 Primary Key Attribute 1 Attribute 2 Attribute 3 Attribute …n Item 2 Primary Key Attribute 1 Attribute 3 Item n Primary Key Attribute 2 Attribute 3 Amazon DynamoDB defines two types of primary keys: a simple primary key with one attribute called a partition key (Table 4) and a composite primary key with two attributes (Table 5) Table 4: Amazon DynamoDB Simple Primary Key (Partition Key) Table Item Partition Key Attribute 1 Attribute 2 Attribute 3 Attribute …n Table 5: Amazon DynamoDB Composite Primary Key (Partition & Sort Key) Table Item Partition Key Sort Key Attribute 1 Attribute 2 Attribute 3 attribute …n Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 11 A JSON representation of the item in the Table 5 with additional nested attributes is given below: { "Partition Key": "Value" "Sort Key": "Value" "Attribute 1": "Value" "Attribute 2": "Value" "Attribute 3": [ { "Attribute 4": "Value" "Attribute 5": "Value" } { "Attribute 4": "Value" "Attribute 5": "Value" } ] } In Amazon DynamoDB a single attribute primary key or partition key is useful for quick reads and writes of data For example PersonID serves as the partition key in the following Person table Table 6: Example Person Amazon DynamoDB Table Person Table Item PersonId (Partition Key) FirstName LastName Zipcode Gender Item 1 1001 Fname 1 Lname 1 00000 Item 2 1002 Fname 2 Lname 2 M Item 3 2002 Fname 3 Lname 3 10000 F A composite key in Amazon DynamoDB is indexed as a partition key and a sort key This multi part key maintains a hierarchy between the first and second element values Holding the partition key element constant facilitates searches across the sort key elem ent to retrieve items quickly for a given partition key In the following GameScores table the composite partition sort key is a combination of PersonId (partition key) and GameId (sort key ) Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 12 Table 7: Example GameScores Amazon Dyna moDB Table GameScores Table PersonId (Partition Key) GameId (Sort Key) TopScore TopScoreDate Wins Losses item1 1001 Game01 67453 201312 09:17:24:31 73 21 item2 1001 Game02 98567 2013 12 11:14:14:37 98 27 Item3 1002 Game01 43876 2013 12 15:19:24:39 12 23 Item4 2002 Game02 65689 2013 10 01:17:14:41 23 54 The partition key of an item is also known as its hash attribute and sort key as its range attribute The term hash attribute arises from the use of an internal hash function that takes the value of the partition key as input and the output of that hash funct ion determines the partition or physical storage node where the item will be stored The term range attribute derives from the way DynamoDB stores items with the same partition key together in sorted order by the sort key value Although there is no expli cit limit on the number of attributes associated with an 
individual item in an Amazon DynamoDB table there are restrictions on the aggregate size of an item or payload including all attribute names and values A small payload can potentially improve perf ormance and reduce costs because it requires fewer resources to process For information on how to handle items that exceed the maximum item size see Best Practices for Storing Large Items and Attributes In Apache HBase the most basic unit is a column One or more columns form a row Each row is addressed uniquely by a primary key referred to as a row key A row in Apache HBase can have millions of columns Each column can have multiple versions with each distinct value contained in a separate cell One fundamental modeling concept in Apache HBase is that of a column family A column family is a container for grouping sets of related data together within on e table as shown in the following example Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 13 Table 8: Apache HBase Row Representation Table Column Family 1 Column Family 2 Column Family 3 row row key Column 1 Column 2 Column 3 Column 4 Column 5 Column 6 Apache HBase groups columns with the same general access patterns and size characteristics into column families to form a basic unit of separation For example in the following Person table you can group personal data into one column family called personal_info and the statistical data into a demographic column family Any other columns in the table would be grouped accordingly as well as shown in the following example Table 9: Example Person Table in Apache HBase Person Table personal_info demographic row key firstname lastname zipcode gender row 1 1001 Fname 1 Lname 1 00000 row 2 1002 Fname 2 Lname 2 M row 3 2002 Fname 3 Lname 3 10000 F Columns are addressed as a combination of the column family name and the column qualifier expressed as family:qualifier All members of a column family have the same prefix In the preceding example the firstname and lastname column qualifiers can be refe renced as personal_info:firstname and personal_info:lastname respectively Column families allow you to fetch only those columns that are required by a query All members of a column family are physically stored together on a disk This means that optimiz ation features such as performance tunings compression encodings and so on can be scoped at the column family level Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 14 The row key is a combination of user and game identifiers in the following Apache HBase GameScores table A row key can consist of mult iple parts concatenated to provide an immutable way of referring to entities From an Apache HBase modeling perspective the resulting table is tallnarrow This is because the table has few columns relative to the number of rows as shown in the following example Table 10: TallNarrow GameScores Apache HBase Table GameScores Table top_scores metrics row key score date wins loses row 1 1001 game01 67453 2013 12 09:17:24:31 73 21 row 2 1001 game02 98567 2013 12 11:14:14:37 98 27 row 3 1002 game01 43876 2013 12 15:19:24:39 12 23 row 4 2002 game02 65689 2013 10 01:17:14:41 23 54 Alternatively you can model the game identifier as a column qualifier in Apache HBase This approach facilitates precise column lookups and supports usage of filters to read data The result is a flatwide table with few rows relative to the number of col umns This concept of a flat wide Apache HBase table is shown in the following 
table Table 11: Flat Wide GameScores Apache HBase Table GameScores Table top_scores metrics row key gameId score top_score_date gameId wins loses row 1 1001 game01 98567 2013 12 11:14:14:37 game01 98 27 game02 43876 2013 12 15:19:24:39 game02 12 23 Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 15 GameScores Table row 2 1002 game01 67453 2013 12 09:17:24:31 game01 73 21 row 3 2002 game02 65689 2013 10 01:17:14:41 game02 23 54 For performance reasons it is important to keep the number of column families in your Apache HBase schema low Anything above three column families can potentially degrade performance The recommended best practice is to maintain a one column family in your s chemas and introduce a two column family and three column family only if data access is limited to a one column family at a time Note that Apache HBase does not impose any restrictions on row size Data Types Both Amazon DynamoDB and Apache HBase support unstructured datasets with a wide range of data types Amazon DynamoDB supports the data types shown in the following table: Table 12: Amazon DynamoDB Data Types Type Description Example (JSON Format) Scalar String Unicode with UTF8 binary encoding {"S": "Game01"} Number Positive or negative exact value decimals and integers {"N": "67453"} Binary Encoded sequence of bytes {"B": "dGhpcyB0ZXh0IGlzIGJhc2U2NC1l"} Boolean True or false {"BOOL": true} Null Unknown or undefined state {"NULL": true} Document List Ordered collection of values {"L": ["Game01" 67453]} Amazon Web Services Comparing the Use of Amazon DynamoDB and Apa che HBase for NoSQL Page 16 Type Description Example (JSON Format) Map Unordered collection of name value pairs {"M": {"GameId": {"S": "Game01"} "TopScore": {"N": "67453"}}} Multi valued String Set Unique set of strings {"SS": ["Black""Green] } Number Set Unique set of numbers {"NS": ["422"" 1987"] } Binary Set Unique set of binary values {"BS": ["U3Vubnk=""UmFpbnk=] } Each Amazon DynamoDB attribute can be a name value pair with exactly one value (scalar type) a complex data structure with nested attributes (document type) or a unique set of values (multi valued set type) Individual items in an Amazon DynamoDB table c an have any number of attributes Primary key attributes can only be scalar types with a single value and the only data types allowed are string number or binary Binary type attributes can store any binary data for example compressed data encrypted data or even images Map is ideal for storing JSON documents in Amazon DynamoDB For example in Table 6 Person could be represented as a map of person id that maps to detailed information about the person: name gender and a list of their previous a ddresses also represented as a map This is illustrated in the following script : { "PersonId": 1001 "FirstName": "Fname 1" "LastName": "Lname 1" "Gender": "M" "Addresses": [ { "Street": "Main S t" "City": "Seattle" "Zipcode": 98005 "Type": "current" } { "Street": "9th S t" "City": Seattle "Zipcode": 98005 "Type": "past" Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 17 } ] } In summary Apache HBase defines the following concepts: • Row —An atomic byte array or key/value container • Column—A key with in the key/value container inside a row • Column Family —Divides columns into related subsets of data that are stored together on disk • Timestamp —Apache HBase adds the concept of a fourth dimension column that is expressed as an explicit or implicit timestam p A timestamp 
is usually represented as a long integer in milliseconds • Value—A time versioned value in the key/value container This means that a cell can contain multiple versions of a value that can change over time Versions are stored in decreasing t imestamp with the most recent first Apache HBase supports a bytes in/bytes out interface This means that anything that can be converted into an array of bytes can be stored as a value Input could be strings numbers complex objects or even images as long as they can be rendered as bytes Consequently key/value pairs in Apache HBase are arbitrary arrays of bytes Because row keys and column qualifiers are also arbitrary arrays of bytes almost anything can serve as a row key or column qualifier from strings to binary representations of longs or even serialized data structures Column family names must comprise printable characters in human readable format This is because column family names are used as part of the directory name in the file system Furthermore column families must be declared up front at the time of schema definition Column qualifiers are not subjected to this restriction and can comprise any arbitrary binary characters and be created at runtime Indexing In general data i s indexed using a primary key for fast retrieval in both Amazon DynamoDB and Apache HBase Secondary indexes extend the basic indexing functionality and provide an alternate query path in addition to queries against the primary key Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 18 Amazon DynamoDB support s two kinds of secondary indexes on a table that already implements a partition and sort key : • Global secondary index —An index with a partition and optional sort key that can be different from those on the table • Local secondary index —An index that has the same partition key as the table but a different sort key You can define one or more global secondary indexes and one or more local secondary indexes per table For documents you can create a local secondary index or global secondary index on any top level JSON element In the example GameScores table introduced in the preceding section you can define LeaderBoardIndex as a global secondary index as follows: Table 13: Example Global Secondary Index in Amazon DynamoDB LeaderBoardIndex Index Key Attribute 1 GameId (Partition Key) TopScore (Sort Key) PersonId Game01 98567 1001 Game02 43876 1001 Game01 65689 1002 Game02 67453 2002 The LeaderBoardIndex shown in Table 13 defines GameId as its primary key and TopScore as its sort key It is not necessary for the index key to contain any of the key attributes from the source table However the table’s primary key attributes are always present in the global secondary index In this example PersonId is automatically projected or copied into the index With LeaderBoardIndex defined you can easily obtain a list of top scores for a specific game by simply querying it The output is ordered by TopScore the sort key You can choose to project additional attributes from the source table into the index A local secondary index on the other hand organizes data by the index sort key It provides an alternate query pat h for efficiently accessing data using a different sort key Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 19 You can define PersonTopScoresIndex as a local secondary index for the example GameScores table introduced in the preceding section The index contains the same partition key PersonId as the 
source table and defines TopScoreDate as its new sort key The old sort key value from the source table (in this example GameId ) is automatically projected or copied into the index but it is not a part of the index key as shown in the following table Table 14: Local Secondary Index in Amazon Dynamo DB PersonTopScoresIndex Index Key Attribute1 Attribute2 PersonId (Partition Key) TopScoreDate (New Sort Key) GameId (Old Sort Key as attribute) TopScore (Optional projected attribute) 1001 2013 12 09:17:24:31 Game01 67453 1001 2013 12 11:14:14:37 Game02 98567 1002 2013 12 15:19:24:39 Game01 43876 2002 2013 10 01:17:14:41 Game02 65689 A local secondary index is a sparse index An index will only have an item if the index sort key attribute has a value With local secondary indexes any group of items that have the same partition key value in a table and all their associated local secondary indexes form an item collection There is a size restriction on item collections in a DynamoDB table For more infor mation see Item Collection Size Limit The main difference between a global secondary index and a local secondary index is that a global secondary index def ines a completely new partition key and optional sort index on a table You can define any attribute as the partition key for the global secondary index as long as its data type is scalar rather than a multi value set Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBas e for NoSQL Page 20 Additional highlights between global and local secondary indexes are captured in the following table Table 15: Global and secondary indexes Global Secondary Indexes Local Secondary Indexes Creation Can be created for existing tables (Online indexing supported) Only at table creation time (Online indexing not supported) Primary Key Values Need not be unique Must be unique Partition Key Different from primary table Same as primary table Sort Key Optional Required (different from Primary table) Provisioned Throughput Independent from primary table Dependent on primary table Writes Asynchronous Synchronous For more information on global and local secondary indexes in Amazon DynamoDB see Improving Data Access with Secondary Indexes In Apache HBase all row s are always sorted lexicographically by row key The sort is byteordered This means that each row key is compared on a binary level byte by byte from left to right Row keys are always unique and act as the primary index in Apache HBase Although Apac he HBase does not have native support for built in indexing models such as Amazon DynamoDB you can implement custom secondary indexes to serve as alternate query paths by using these techniques: • Create an index in another table —You can maintain a secondary table that is periodically updated However depending on the load strategy the risk with this method is that the secondary index can potentially become out of sync with the main table You can mitigate this risk if you build the secondary index while publishing data to the cluster and perform concurrent writes into the index table • Use the coprocessor framework —You can leverage the coprocessor framework to implement custom secondary indexes Coprocessors act like triggers that are similar to sto red procedures in RDBMS Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 21 • Use Apache Phoenix —Acts as a front end to Apache HBase to convert standard SQL into native HBase scans and queries and for secondary indexing In summary both Amazon DynamoDB and Apache HBase 
define data models that allow efficient storage of data to optimize query performance Amazon DynamoDB imposes a restriction on its item size to allow efficient processing and reduce costs Apache HBase uses the concept of column families to provide data locality for more efficient read operations Amazon DynamoDB supports both scalar and multi valued sets to accommodate a wide range of unstructured datasets Similarly Apache HBase stores its key/value pairs as arbitrary arrays of bytes giving it the flexibility to store any data type Amazon DynamoDB supports built in secondary indexes and automatically updates and synchronizes all indexes with their parent tables With Apache HBase you can implement and ma nage custom secondary indexes yourself From a data model perspective you can choose Amazon DynamoDB if your item size is relatively small Although Amazon DynamoDB provides a number of options to overcome row size restrictions Apache HBase is better equ ipped to handle large complex payloads with minimal restrictions Data Processing This section highlights foundational elements for processing and querying data within Amazon DynamoDB and Apache HBase Throughput Model Amazon DynamoDB uses a provisioned th roughput model to process data With this model you can specify your read and write capacity needs in terms of number of input operations per second that a table is expected to achieve During table creation time Amazon DynamoDB automatically partitions and reserves the appropriate amount of resources to meet your specified throughput requirements Automatic scaling for Amazon DynamoD B automate s capacity management and eliminates the guesswork involved in provisioning adequate capacity when creating new tables and global secondary indexes With automatic scaling enabled you can specify percent target utilization and DynamoDB will scale the provisioned capacity for reads and writes within the bounds to meet the target utilization percent For more information see Managing Throughput Capacity Automatically with DynamoDB Auto Scaling Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase fo r NoSQL Page 22 To decide on the required read and write throughput values for a table without auto scaling feature enabled consider the following factors: • Item size —The read and write capacity units that you specify are based on a predefined data item size per read or per write operation For more information about provisioned throughput data item size restrictions see Provisioned Throughput in Am azon DynamoDB • Expected read and write request rates —You must also determine the expected number of read and write operations your application will perform against the table per second • Consistency —Whether your application requires strongly consistent or eventually consistent reads is a factor in determining how many read capacity units you need to provision for your table For more information about consistency and Amazon DynamoDB see the Consistency Model section in this document • Global secondary indexes —The provisioned throughput settings of a global secondary index are separate from those of its parent table Therefore you must also consider the expe cted workload on the global secondary index when specifying the read and write capacity at index creation time • Local secondary indexes —Queries against indexes consume provisioned read throughput For more information see Provisioned Throughput Considerations for Local Secondary Indexes Although read and write requirements are specified at 
table creation time Amazon DynamoDB lets yo u increase or decrease the provisioned throughput to accommodate load with no downtime With Apache HBase the number of nodes in a cluster can be driven by the required throughput for reads and/or writes The available throughput on a given node can vary depending on the data specifically: • Key/value sizes • Data access patterns • Cache hit rates • Node and system configuration Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 23 You should plan for peak load if load will likely be the primary factor that increases node count within an Apache HBase cluster Consistency Model A database consistency model determines the manner and timing in which a successful write or update is reflected in a subsequent read operation of that same value Amazon DynamoDB lets you specify the desired consistency characteristics f or each read request within an application You can specify whether a read is eventually consistent or strongly consistent The eventual consistency option is the default in Amazon DynamoDB and maximizes the read throughput However an eventually consiste nt read might not always reflect the results of a recently completed write Consistency across all copies of data is usually reached within a second A strongly consistent read in Amazon DynamoDB returns a result that reflects all writes that received a su ccessful response prior to the read To get a strongly consistent read result you can specify optional parameters in a request It takes more resources to process a strongly consistent read than an eventually consistent read For more information about re ad consistency see Data Read and Consistency Considerations Apache HBase reads and writes are strongly consistent This means that all reads and writes to a single row in Apache HBase are atomic Each concurrent reader and writer can make safe assumptions about the state of a row Multi versioning and time stamping in Apache HBase contribute to its strongly consistent model Transaction Model Unlike RDBMS NoSQL databases typically have no domain specific language such as SQL to query data Amazon DynamoDB and Apache HBase provide simple application programming interfaces (APIs) to perform the standard create read update and delete (CRUD) o perations Amazon DynamoDB Transactions support coordinated all ornothing changes to multiple items both within and across tables Transactions provide atomicity consistency isolation and durability (ACID) in DynamoDB helping you to maintain data correctness in your applications Apache HBase integrates with Apache Phoenix to add cross row and cross table transaction support with full ACID semantics Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for No SQL Page 24 Amazon DynamoDB provides atomic item and attribute operations for adding updating or deleting data Further item level transactions can specify a condition that must be satisfied before that transaction is fulfilled For example you can choose to update an item only if it already has a ce rtain value Conditional operations allow you to implement optimistic concurrency control systems on Amazon DynamoDB For conditional updates Amazon DynamoDB allows atomic increment and decrement operations on existing scalar values without interfering wi th other write requests For more information about conditional operations see Conditional Writes Apache HBase also supports atomic high update rates (the classic read modify write) within a single row key enabling 
storage for high frequency counters Unlike Amazon DynamoDB Apache HBase uses multi version concurrency control to implement updates This means that an existing piece of data is not overwritten with a new one; instead it becomes obsolete when a newer version is added Row data access in Apache HBase is atomic and includes any number of columns but there are no further guarantees or transactional feat ures spanning multiple rows Similar to Amazon DynamoDB Apache HBase supports only single row transactions Amazon DynamoDB has an optional feature DynamoDB Streams to capture table activity The data modification events such as add update or delete c an be captured in near real time in a time ordered sequence If stream is enabled on a DynamoDB table each event gets recorded as a stream record along with name of the table event timestamp and other metadata For more information see the section on Capturing Table Activity with DynamoDB Streams Amazon DynamoDB Streams can be u sed with AWS Lambda to create trigger code that executes automatically whenever an event of interest (add update delete) appears in a stream This pattern enables powerful solutions such as data replication within and across AWS Regions materialized views of data in DynamoDB tables data analysis using Amazon Kinesis notifications via Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Email Service (Amazon SES) and much more For more information see DynamoDB Streams and AWS Lambda Triggers Table Operations Amazon D ynamoDB and Apache HBase provide scan operations to support large scale analytical processing A scan operation is similar to cursors in RDBMS By taking advantage of the underlying sequential sorted storage layout a scan operation can Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 25 easily iterate ove r wide ranges of records or entire tables Applying filters to a scan operation can effectively narrow the result set and optimize performance Amazon DynamoDB uses parallel scanning to improve performance of a scan operation A parallel scan logically sub divides an Amazon DynamoDB table into multiple segments and then processes each segment in parallel Rather than using the default scan operation in Apache HBase you can implement a custom parallel scan by means of the API to read rows in parallel Both Amazon DynamoDB and Apache HBase provide a Query API for complex query processing in addition to the scan operation The Query API in Amazon DynamoDB is accessible only in tables that define a composite primary key In Apache HBase bloom filters improve Get operations and the potential performance gain increases with the number of parallel reads In summary Amazon DynamoDB and Apache HBase have similar data processin g models in that they both support only atomic single row transactions Both databases also provide batch operations for bulk data processing across multiple rows and tables One key difference between the two databases is the flexible provisioned throughp ut model of Amazon DynamoDB The ability to increase capacity when you need it and decrease it when you are done is useful for processing variable workloads with unpredictable peaks For workloads that need high update rates to perform data aggregations or maintain counters Apache HBase is a good choice This is because Apache HBase supports a multi version concurrency control mechanism which contributes to its strongly consistent reads and writes Amazon DynamoDB gives you the flexibility to specify whet her you 
want your read request to be eventually consistent or strongly consistent depending on your specific workload Architecture This section summarizes key architectural components of Amazon DynamoDB and Apache HBase Amazon DynamoDB Architecture Overv iew At a high level Amazon DynamoDB is designed for high availability durability and consistently low latency (typically in the single digit milliseconds) performance Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 26 Amazon DynamoDB runs on a fleet of AWS managed servers that leverage solid state drives (SSDs) to create an optimized high density storage platform This platform decouples performance from table size and eliminates the need for the working set of data to fit in memory while still returning consistent low latency responses to queries As a managed service Amazon DynamoDB abstracts its underlying architectural details from the user Apache HBase Architecture Overview Apache HBase is typically deployed on top of HDFS Apache ZooKeeper is a critical component for maintai ning configuration information and managing the entire Apache HBase cluster The three major Apache HBa se components are the following: • Client API — Provides programmatic access to D ata Manipulation Language (DML) for performing CRUD operations on HBase tables • Region servers — HBase tables are split into regions and are served by region servers • Master server — Responsible for monitoring all region server instan ces in the cluster and is the interface for all metadata changes Apache HBase stores data in indexed store files called HFiles on HDFS The store files are sequences of blocks with a block index stored at the end for fast lookups The store files provide an API to access specific values as well as to scan ranges of values given a start and end key During a write operation data is first written to a commit log called a write ahead log (WAL) and then moved into memory in a structure called Memstore When the size of the Memstore exceeds a given maximum value it is flushed as a HFile to disk Each time data is flushed from Memstores to disk new HFiles must be created As the number of HFiles builds up a compaction process merges the files into fewer lar ger files A read operation essentially is a merge of data stored in the Memstores and in the HFiles The WAL is never used in the read operation It is meant only for recovery purposes if a server crashes before writing the in memory data to disk A regio n in Apache HBase acts as a store per column family Each region contains contiguous ranges of rows stored together Regions can be merged to reduce the Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 27 number of store files A large store file that exceeds the configured maximum store file size can trigg er a region split A region server can serve multiple regions Each region is mapped to exactly one region server Region servers handle reads and writes as well as keeping data in memory until enough is collected to warrant a flush Clients communicate d irectly with region servers to handle all data related operations The master server is responsible for monitoring and assigning regions to region servers and uses Apache ZooKeeper to facilitate this task Apache ZooKeeper also serves as a registry for reg ion servers and a bootstrap location for region discovery The master server is also responsible for handling critical functions such as load balancing of regions across region servers region 
server failover and completing region splits but it is not pa rt of the actual data storage or retrieval path You can run Apache HBase in a multi master environment All masters compete to run the cluster in a multi master mode However if the active master shuts down then the remaining masters contend to take ove r the master role Apache HBase on Amazon EMR Architecture Overview Amazon EMR defines the concept of instance groups which are collections of Amazon EC2 instances The Amazon EC2 virtual servers perform roles analogous to the master and slave nodes of Hadoop For best performance Apache HBase clusters should run on at least two Amazon EC2 instances There are three types of instance groups in an Amaz on EMR cluster • Master —Contains one master node that manages the cluster You can use the Secure Shell (SSH) protocol to access the master node if you want to view logs or administer the cluster yourself The master node runs the Apache HBase master server and Apache ZooKeeper • Core —Contains one or more core nodes that run HDFS and store data The core nodes run the Apache HBase region servers • Task —(Optional) Contains any number of task nodes Managed Apache HBase on Amazon EMR (Amazon S3 Storage Mode) When you run Apache HBase on Amazon EMR with Amazon S3 storage mode enabled the HBase root directory is stored in Amazon S3 including HBase store files Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 28 and table metadata For more information see HBase on Amazon S3 (Amazon S3 Storage Mode) For production workloads EMRFS consistent view is recommended when you enable HBase on Amazon S3 Not usin g consistent view may result in performance impacts for specific operations Partitioning Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability within a region Data is auto partitioned primarily using the partition key As throughput and data size increase Amazon DynamoDB will automatically repartition and reallocate data across more nodes Partitions in Amazon DynamoDB are fully independent resulting in a shared nothing cluster However provisioned throughput is divided evenly across the partiti ons A region is the basic unit of scalability and load balancing in Apache HBase Region splitting and subsequent load balancing follow this sequence of events: 1 Initially there is only one region for a table and as more data is added to it the system monitors the load to ensure that the configured maximum size is not exceeded 2 If the region size exceeds the configured limit the system dynamically splits the region into two at the row key in the middle of the region creating two roughly equal halves 3 The master then schedules the new regions to be moved off to other servers for load balancing if required Behind the scenes Apache ZooKeeper tracks all activities that take place during a region split and maintains the state of the region in case of server failure Apache HBase regions are equivalent to range partitions that are used in RDBMS sharding Regions can be spread across many physical servers that consequently distribute the load resulting in scalability In summary as a managed service the architectural details of Amazon DynamoDB are abstracted from you to let you focus on your application details Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 29 With the self managed Apache HBase deployment model it is crucial to u nderstand the underlying 
architectural details to maximize scalability and performance AWS gives you the option to offload Apache HBase administrative overhead if you opt to launch your cluster on Amazon EMR Performance Optimizations Amazon DynamoDB and Apache HBase are inherently optimized to process large volumes of data with high performance NoSQL databases typically use an on disk column oriented storage format for fast data access and reduced I/O when fulfilling queries This performance characteri stic is evident in both Amazon DynamoDB and Apache HBase Amazon DynamoDB stores items with the same partition key contiguously on disk to optimize fast data retrieval Similarly Apache HBase regions contain contiguous ranges of rows stored together to im prove read operations You can enhance performance even further if you apply techniques that maximize throughput at reduced costs both at the infrastructure and application tiers Tip: A recommended best practice is to monitor Amazon DynamoDB and Apache H Base performance metrics to proactively detect and diagnose performance bottlenecks The following section focuses on several common performance optimizations that are specific to ea ch database or deployment model Amazon DynamoDB Performance Consideration s Performance considerations for Amazon DynamoDB focus on how to define an appropriate read and write throughput and how to design a suitable schema for an application These performance considerations span both infrastruct ure level and application tiers Ondemand Mode – No Capacity Planning Amazon DynamoDB on demand is a flexible billing option capable of serving thousands of requests per second without capacity planning For on demand mode tables you don't need to specify how much read and write through put you expect your application to perform DynamoDB tables using on demand capacity mode automatically adapt to Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 30 your application’s traffic volume On demand capacity mode instantly accommodates up to double the previous peak traffic on a table For more i nformation see On Demand Mode Tip: DynamoDB recommends spacing your traffic growth over at least 30 minutes be fore driving more than 100000 reads per second Provisioned Throughput Considerations Factors that must be taken into consideration when determining the appropriate throughput requirements for an application are item size expected read and write rates consistency and secondary indexes as discussed in the Throughput Model section of this whitepaper If an application performs more reads per second or writes per second than a table’s provisioned throughput capacity a llows requests above the provisioned capacity will be throttled For instance if a table’s write capacity is 1000 units and an application can perform 1500 writes per second for the maximum data item size Amazon DynamoDB will allow only 1000 writes p er second to go through and the extra requests will be throttled Tip: For applications where capacity requirement increases or decreases gradually and the traffic stays at the elevated or depressed level for at least several minutes manage read and write throughput capacity automatically using auto scaling feature With any changes in traffic pattern DynamoDB will scale the provisioned capacity up or down within a specified range to match the desired capacity utilization you enter for a table or a g lobal secondary index Read Performance Considerations With the launch of Amazon DynamoDB Accelerator (DAX) you can now 
get microsecond access to data that live s in Amazon DynamoDB DAX is an in memory cache in front of DynamoDB and has the identical API as DynamoDB Because reads can be served from the DAX layer for queries with a cache hit and the table will only serve the reads when there is a cache miss th e provisioned read capacity units can be lowered for cost savings Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 31 Tip: Based on the size of your tables and data access pattern consider provisioning a single DAX cluster for multiple smaller tables or multiple DAX clusters for a single bigger table or a hybrid caching strategy that will work best for your application Primary Key Design Considerations Primary key design is critical to the performance of Amazon DynamoDB When storing data Amazon DynamoDB divides a table's items into multiple partitions and distributes the data primarily based on the partition key element The provisioned throughput associated with a table is also divided evenly among the partitions with no sharing of provisioned throughput across partitions Tip: To efficiently use the overall provisioned throughput spread the workload across partition key values For example if a table has a very small number of heavily accessed partition key elements possibly even a single very heavily used partition key element traffic can become concentrated on a single partition and create "hot spots" of read and write activity within a single item collection In extreme cases throttling can occur if a single partition exceeds its maximum capacity To better accommodate uneven access patterns Amazon DynamoDB adaptive capacity enables your application to continue reading and writing to hot partitions without being throttled provided that traffic does not exc eed your table’s total provisioned capacity or the partition maximum capacity Adaptive capacity works by automatically and instantly increasing throughput capacity for partitions that receive more traffic To get the most out of Amazon DynamoDB throughpu t you can build tables where the partition key element has a large number of distinct values Ensure that values are requested fairly uniformly and as randomly as possible The same guidance applies to global secondary indexes Choose partitions and sort keys that provide uniform workloads to achieve the overall provisioned throughput Local Secondary Index Considerations When querying a local secondary index the number of read capacity units consumed depends on how the data is accessed For example whe n you create a local secondary Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 32 index and project non key attributes into the index from the parent table Amazon DynamoDB can retrieve these projected attributes efficiently In addition when you query a local secondary index the query can also retrieve attributes that are not projected into the index Avoid these types of index queries that read attributes that are not projected into the local secondary index Fetching attributes from the parent table that are not specified in the local secondary index c auses additional latency in query responses and incurs a higher provisioned throughput cost Tip: Project frequently accessed non key attributes into a local secondary index to avoid fetches and improve query performance Maintain multiple local secondary indexes in tables that are updated infrequently but are queried using many different criteria to improve query performance This 
guidance does not apply to tables that experience heavy write activity If very high write activity to the table is e xpected one option to consider is to minimize interference from reads by not reading from the table at all Instead create a global secondary index with a structure that is identical to that of the table and then direct all queries to the index rather t han to the table Global Secondary Index Considerations If a query exceeds the provisioned read capacity of a global secondary index that request will be throttled Similarly if a request performs heavy write activity on the table but a global secondar y index on that table has insufficient write capacity then the write activity on the table will be throttled Tip: For a table write to succeed the provisioned throughput settings for the table and global secondary indexes must have enough write capacity to accommodate the write; otherwise the write will be throttled Global secondary indexes support eventually consistent reads each of which consume one half of a read capacity unit The number of read capacity units is the sum of all projected attribut e sizes across all of the items returned in the index query results With write activities the total provisioned throughput cost for a write consists of the sum of write capacity units consumed by writing to the table and those consumed by updating the g lobal secondary indexes Amazon Web Services Comparing the Use of Amazon DynamoDB and Apache HBase for NoSQL Page 33 Apache HBase Performance Considerations Apache HBase performance tuning spans hardware network Apache HBase configurations Hadoop configurations and the Java Virtual Machine Garbage Collection settings It also includes applyin g best practices when using the client API To optimize performance it is worthwhile to monitor Apache HBase workloads with tools such as Ganglia to identify performance problems early and apply recommended best practices based on observed performance met rics Memory Considerations Memory is the most restrictive element in Apache HBase Performance tuning techniques are focused on optimizing memory consumption From a schema design perspective it is important to bear in mind that every cell stores its val ue as fully qualified with its full row key column family column name and timestamp on disk If row and column names are long the cell value coordinates might become very large and take up more of the Apache HBase allotted memory This can cause severe performance implications especially if the dataset is large Tip: Keep the number of column families small to improve performance and reduce the costs associated with maintaining HFiles on disk Apache HBase Configurations Apache HBase supports built in mechanisms to handle region splits and compactions Split/compaction storms can occur when multiple regions grow at roughly the same rate and eventually split at about the same time This can cause a large spike in disk I/O because of the compactions nee ded to rewrite the split regions Tip: Rather than relying on Apache HBase to automatically split and compact the growing regions you can perform these tasks manually If you handle the splits and compactions manually you can perform them in a time controlled manner and stagger them across all regions to spread the I/O load as much as possible to avoid potential split/compaction storms With the manual option you can further alleviate any problematic split/compaction storms and gain additional performance Amazon Web Services Comparing the Use of Amazon DynamoDB 
and Apache HBase for NoSQL Page 34

Schema Design

A region can run hot when dealing with a write pattern that does not distribute the load across all servers evenly. This is a common scenario when processing streams of events with time series data. The gradually increasing nature of time series data can cause all incoming data to be written to the same region. This concentrated write activity on a single server can slow down the overall performance of the cluster because inserting data is now bound to the performance of a single machine. This problem is easily overcome by employing key design strategies such as the following:
• Applying salting prefixes to keys; in other words, prepending a random number to a row key
• Randomizing the key with a hash function
• Promoting another field to prefix the row key
These techniques can achieve a more evenly distributed load across all servers.

Client API Considerations

There are a number of optimizations to take into consideration when reading or writing data from a client using the Apache HBase API. For example, when performing a large number of PUT operations, you can disable the auto-flush feature; otherwise, the PUT operations are sent one at a time to the region server. Whenever you use a scan operation to process large numbers of rows, use filters to limit the scan scope. Using filters can potentially improve performance because column over-selection can incur a nontrivial performance penalty, especially over large datasets.

Tip: As a recommended best practice, set the scanner caching to a value greater than the default of 1, especially if Apache HBase serves as an input source for a MapReduce job. Setting the scanner caching value to 500, for example, transfers 500 rows at a time to the client for processing, but this might cost more in memory consumption.

Compression Techniques

Data compression is an important consideration in Apache HBase production workloads. Apache HBase natively supports a number of compression algorithms that you can enable at the column family level.

Tip: Enabling compression yields better performance. In general, the compute resources for performing compression and decompression tasks are typically less than the overhead of reading more data from disk.

Apache HBase on Amazon EMR (HDFS Mode)

Apache HBase on Amazon EMR is optimized to run on AWS with minimal administration overhead. You can still access the underlying infrastructure and manually configure Apache HBase settings if desired.

Cluster Considerations

You can resize an Amazon EMR cluster using core and task nodes. You can add more core nodes if desired. Task nodes are useful for managing the Amazon EC2 instance capacity of a cluster. You can increase capacity to handle peak loads and decrease it later during demand lulls.

Tip: As a recommended best practice for production workloads, launch Apache HBase on one cluster and any analysis tools, such as Apache Hive, on a separate cluster to improve performance. Managing two separate clusters ensures that Apache HBase has ready access to the infrastructure resources it requires.

Amazon EMR provides a feature to back up Apache HBase data to Amazon S3. You can perform either manual or automated backups, with options to perform full or incremental backups as needed.

Tip: As a best practice, every production cluster should always take advantage of the backup feature available on Amazon EMR.
Hadoop and Apache HBase Configurations

You can use a bootstrap action to install additional software or change Apache HBase or Apache Hadoop configuration settings on Amazon EMR. Bootstrap actions are scripts that run on the cluster nodes when Amazon EMR launches the cluster. The scripts run before Hadoop starts and before the node begins processing data. You can write custom bootstrap actions or use predefined bootstrap actions provided by Amazon EMR. For example, you can install Ganglia to monitor Apache HBase performance metrics using a predefined bootstrap action on Amazon EMR.

Apache HBase on Amazon EMR (Amazon S3 Storage Mode)

When you run Apache HBase on Amazon EMR with Amazon S3 storage mode enabled, keep in mind the recommended best practices discussed in this section.

Read Performance Considerations

With Amazon S3 storage mode enabled, Apache HBase region servers use MemStore to store data writes in memory and use write-ahead logs to store data writes in HDFS before the data is written to HBase StoreFiles in Amazon S3. Reading records directly from the StoreFile in Amazon S3 results in significantly higher latency and higher standard deviation than reading from HDFS.

Amazon S3 scales to support very high request rates. If your request rate grows steadily, Amazon S3 automatically partitions your buckets as needed to support higher request rates. However, the maximum request rates for Amazon S3 are lower than what can be achieved from the local cache. For more information about Amazon S3 performance, see Performance Optimization.

For read-heavy workloads, caching data in memory or in on-disk caches on Amazon EC2 instance storage is recommended. Because Apache HBase region servers use BlockCache to store data reads in memory and BucketCache to store data reads on EC2 instance storage, you can choose an EC2 instance type with sufficient instance store. In addition, you can add Amazon Elastic Block Store (Amazon EBS) storage to accommodate your required cache size. You can increase the BucketCache size on attached instance stores and EBS volumes using the hbase.bucketcache.size property.

Write Performance Considerations

As discussed in the preceding section, the frequency of MemStore flushes and the number of StoreFiles present during minor and major compactions can contribute significantly to an increase in region server response times and consequently impact write performance. For optimal write performance, consider increasing the MemStore flush size and the HRegion block multiplier, which increases the elapsed time between major compactions. Apache HBase compactions and region servers perform optimally when fewer StoreFiles need to be compacted. You may get better performance using larger file block sizes (but less than 5 GB) to trigger the Amazon S3 multipart upload functionality in EMRFS.

In summary, whether you are running a managed NoSQL database such as Amazon DynamoDB or Apache HBase on Amazon EMR, or managing your Apache HBase cluster yourself on Amazon EC2 or on premises, you should take performance optimizations into consideration if you want to maximize performance at reduced costs. The key difference between a hosted NoSQL solution and managing it yourself is that a managed solution, such as Amazon DynamoDB or Apache HBase on Amazon EMR, lets you offload the bulk of the administration overhead so that you can focus on optimizing your application.
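The following AWS CLI sketch shows how the Amazon S3 storage mode and cache settings discussed above might be combined when launching an HBase cluster on Amazon EMR. It is a minimal, hypothetical example rather than a prescribed configuration: the bucket name, key pair, release label, instance type and count, and the BucketCache size are placeholder assumptions, and you should verify the classifications and property names against the current Amazon EMR documentation before use.

# hbase-s3-config.json (placeholder values): enable Amazon S3 storage mode,
# point the HBase root directory at an S3 bucket, and raise the BucketCache
# size (in MB) to favor read-heavy workloads.
[
  {
    "Classification": "hbase",
    "Properties": { "hbase.emr.storageMode": "s3" }
  },
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.rootdir": "s3://example-bucket/hbase-root/",
      "hbase.bucketcache.size": "8192"
    }
  }
]

# Launch a small demo cluster that applies the configuration above.
aws emr create-cluster \
  --name "hbase-on-s3-demo" \
  --release-label emr-5.36.0 \
  --applications Name=HBase \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --ec2-attributes KeyName=example-keypair \
  --configurations file://hbase-s3-config.json

With a configuration along these lines, the HBase root directory and StoreFiles live in the S3 bucket, while BlockCache and BucketCache continue to serve hot reads from memory and instance storage, as described in the read performance discussion above.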
If you are a developer who is getting started with NoSQL, Amazon DynamoDB or the hosted Apache HBase on Amazon EMR solution are suitable options, depending on your use case. For developers with in-depth Apache Hadoop/Apache HBase knowledge who need full control of their Apache HBase clusters, the self-managed Apache HBase deployment model offers the most flexibility from a cluster management standpoint.

Conclusion

Amazon DynamoDB lets you offload operating and scaling a highly available, distributed database cluster, making it a suitable choice for today's real-time, web-based applications. As a managed service, Apache HBase on Amazon EMR is optimized to run on AWS with minimal administration overhead. For advanced users who want to retain full control of their Apache HBase clusters, the self-managed Apache HBase deployment model is a good fit.

Amazon DynamoDB and Apache HBase exhibit inherent characteristics that are critical for successfully processing massive amounts of data. With use cases ranging from batch-oriented processing to real-time data serving, Amazon DynamoDB and Apache HBase are both optimized to handle large datasets. However, knowing your dataset and access patterns is key to choosing the right NoSQL database for your workload.

Contributors

Contributors to this document include:
• Wangechi Doble, Principal Solutions Architect, Amazon Web Services
• Ruchika Abbi, Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:
• Amazon DynamoDB Developer Guide
• Amazon EC2 User Guide
• Amazon EMR Management Guide
• Amazon EMR Migration Guide
• Amazon S3 Developer Guide
• HBase: The Definitive Guide by Lars George
• The Apache HBase™ Reference Guide
• Dynamo: Amazon's Highly Available Key-value Store

Document Revisions

• January 2020: Amazon DynamoDB foundational features and transaction model updates
• November 2018: Amazon DynamoDB, Apache HBase on EMR, and template updates
• September 2014: First Publication
| General | consultant | Best Practices |
Configuring_Amazon_RDS_as_an_Oracle_PeopleSoft_Database
|
Configuring Amazon RDS as an Oracle PeopleSoft Database July 2019 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor d oes it modify any agreement between AWS and its customers © 2019 Amazon Web Services Inc and DLZP Group All rights reserved Contents About this Guide 4 Introduction 1 Prerequisites 2 Decide Which AWS Region to Use 2 Identify VPC and Subnets 2 Validate IAM Permissions 2 Determine the Size of the Database 3 Set Up the AWS Command Line Interface (Optional) 3 Certification Licensing and Availability 4 PeopleSoft Certification 4 Oracle Licensing 4 Amazon RDS for Oracle Availability 4 Configuring the Database Instance 6 Create Security Groups 6 Create a DB Subnet Group 9 Create an Option Group 11 Create a Parameter Group 13 Modifying Parameters 15 Create the Database Instance 17 Create a DNS Alias for the Database Instance 27 Running the PeopleSoft DB Creation Scripts 30 Editing the Database Scripts 30 Conclusion 35 References 35 Contributors 35 Document Revisions 35 About this Guide Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying enterprise grade solutions in a rapid reliable and cost effective manner Oracle Database is a widely used rela tional database management system that is dep loyed and used with many Oracle applications of all sizes to manage various forms of data in many phases of business transactions In this guide we describe the preferred method for configuring an Amazon R elational Database Service (Amazon RDS) for Oracle Database as a back end database for Oracle PeopleSoft Enterprise a widely used enterprise resource planning ( ERP) application Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 1 Introduction An Amazon Relational Database Service (Amazon RDS ) for Oracle Database provides scalability performance monitoring and bac kup and restore support Deploying an Amazon RDS for Oracle D atabase in multiple Availability Zones (AZs) simplifies creating a highly available architecture because a multiAZ deployment contains built in support for automated failover from your primary database to a synchronously replicated secondary database in an alternative AZ Amazon RDS for Oracle always provides the latest version of Oracle Database with the latest patch set updates (PSU s) and manages the database upgrade process on your schedule eliminating manual database upgrade and patching tasks You can use Oracle PeopleSoft Enterprise with Amazon RDS and the preferred Oracle Database edition ( using your own license or a license managed by AWS ) to create a production Amazo n RDS for Oracle Database instance or the Standard Edition /Standard Edition One/Standard Edition Two to create Amazon RDS for Oracle preproduction environments Before you can use the PeopleSoft components you must create and populate schemas for them in your Amazon RDS for Oracle Database To do so use the Amazon RDS console or AWS C ommand Line Interface 
(AWS CLI) to launch your database (DB) instance After the instance is created you need to modify the delivered PeopleSoft Database Creation Scripts and run them against the Amazon RDS for Oracle Database instance After completing the procedure s described in this guide you can leverage the manageability feature s of Amazon RDS for Oracle —such as multiple Availability Zones for high availability hourly pricing of an Oracle D atabase and a virtual private cloud ( VPC) for network security —while operating the PeopleSoft Enterprise application on AWS Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 2 Prerequisites Before you create an Amazon RDS for Oracle Database instance you need to make some decisions about your configuration and complete some basic tasks You will use the decisions you make in this section later to configure your Amazon RDS for Oracle Database instance Decide Which AWS Region to Use Decide which of the available AWS Regions you want to use for your workload When choosing a Region consid er the following factors: • Latency between the end users and the AWS Region 1 • Latency between your data center and the AWS Region This is one of the most critical factors when you have PeopleSoft running in the cloud and backends running on premises • AWS cost: The AWS service cost varies depending on the Region • Legislation and compliance : There might be restrictions on which count ry your customers ' data can be stored in Identify VPC and Subnets Determine which VPC and subnets you will be using to deploy your resources If you don’t have a VPC you can create an Amazon Virtual Private Cloud (Amazon VPC) by referring to the Amazon Virtual Private Cloud User Guide 2 NOTE: If creating an Amazon VPC follow Step 1: Create the VPC from the Amazon Virtual Private Cloud User Guide You will be creating a security group using this guide Validate IAM Permissions You must have AWS Identity and Access Management (IAM) permissions to perform the actions described in this guide You will need permissions to configure the following AWS services : • Amazon Virtual Private Cloud3 • Amazon Elastic Compute Cloud (Amazon EC2)4 Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 3 • Amazon Relational Database Service5 • Amazon Route 536 Determine the Size of the D atabase Determine the database (DB) size you will require for the installation Table 1 lists various DB instance classes by size of the PeopleSoft Environment Note that this table is provided only as a guideline You should validate your individual class size requirements against your actual usage For a current listing of available instance classes refer to Amazon RDS for Oracle Pricing Table 1: DB instan ce classes by size of the PeopleSoft environment DB Instance Class Notes medium Ideal for a small PeopleSoft demo/dev environment large Ideal for a medium PeopleSoft environment: <100 users xlarge Ideal for a medium PeopleSoft environment: <1000 users 2xlarge Ideal for a medium PeopleSoft environment: <10000 users 4xlarge Ideal for a large PeopleSoft environment: <50000 users 8xlarge Ideal for a very large PeopleSoft environment: <250000 users Set Up the AWS C ommand Line Interface (Optional) You can use either the AWS Management Console or the AWS CLI to perform the tasks described in this guide To use the AWS CLI ensure that you have installed AWS CLI and that you have either an Amazon EC2 instance that has an AWS IAM role associated with it (recommended) or an access key ID and 
secret key Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 4 Certification Licensing and Availability Before getting started with installing a PeopleSoft application on Amazon RDS for Oracle check for certification make licensing considerati ons and verify general availability PeopleSoft Certification Oracle certification for PeopleSoft software is controlled by the PeopleTools version that is being used Use your My Oracle Support account to check that your PeopleTools version is currently certified to run on the Oracle Database Release you plan to use with Amazon RDS as well as review any PeopleSoft application certification notes that may apply NOTE : Oracle has numerous documents on My Oracle Support regarding support for Oracle Applications in the Cloud Documents regarding issues with deploying PeopleSoft on Amazon RDS for Oracle are resolved by the steps in this guide In addition there are features that are specific to a database release that may or may not be available based on the database edition that you own Oracle Licensing When creating an Amazon RDS for Oracle database you can select either Bring Your Own License (BYOL) or License Included (LI) Not all editions are available for License Included Before creating the database instance verify which license (s) your organization holds if any For more details refer to the Amazon RDS for Oracle FAQs Amazon RDS for Oracle Availability After reviewing certification and licensing refer to Table 2 to identify the Oracle Database Release and corresponding PeopleTools Release along with other details that are available on Amazon RDS for Oracle Refer to the Amazon RDS User Guide section on Oracle on Amazon RDS for an up to date list of available RDS Oracle Releases Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 5 Table 2: Certified Oracle releases for PeopleTools available on Amazon RDS PeopleTools Release Amazon RDS Oracle DB Release Amazon RDS Oracle DB Edition Amazon RDS Oracle DB License Model 857 856 12201 Enterprise Edition LI Standard Edition Two LI BYOL 12102 Enterprise Edition LI Standard Edition Two LI BYOL 855 12201 Enterprise Edition LI Standard Edition Two LI BYOL 12102 Enterprise Edition LI Standard Edition Two LI BYOL 11204 Enterprise Edition LI Standard Edition LI BYOL Standard Edition One LI BYOL 854 853 12102 Enterprise Edition LI Standard Edition Two LI BYOL 11204 Enterprise Edition LI Standard Edition LI BYOL Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 6 PeopleTools Release Amazon RDS Oracle DB Release Amazon RDS Oracle DB Edition Amazon RDS Oracle DB License Model Standard Edition One LI BYOL Configuring the Database Instance You can configure AWS resources by using either the AWS Management Console or AWS CLI Steps for both options are provided in this guide If you plan to use AWS CLI follow the console procedure first because it provides context for the step The AWS CLI commands provided in this guide map directly to the tasks executed using the console NOTE : This guide provides the steps for creating a Demo PeopleSoft environment As such the settings and configurations provided apply towards this smaller environment where performance is not a requirement Create Security Groups A security group acts as a virtual firewall for your instance to control inbound and outbound traffic For more information on Security Groups reference Securit y Groups for Your VPC You will create two security group s to define network 
level traffic to the Amazon RDS database ; one for the Amazon RDS database and one for the Amazon EC2 application servers that will access the database Using security groups to define database network access allows you to be more restrictive and intentional about security F or example by having separate security groups for Production and Development environments you can prevent Development servers from comm unicating with the Prod uction database Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 7 Table 3: Security groups inbound rules Security Group Name Protocol Port Range Source Notes peoplesoft demo app None None None Attach this security group to your PeopleSoft Application server EC2 instance s peoplesoft demo db TCP 1521 peoplesoft demo app Allow 1521 traffic from peoplesoft demo app Using the AWS Management Console Create peoplesoft demo app 1 In the console choose Services VPC Security Groups Create Security Group 2 Enter peoplesoft demo app for the Security group name and PeopleSoft Demo Application Server for the Description Select the appro priate VPC for your account 3 Choose Create Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 8 4 Note the Security Group ID for later use Create peoplesoft demo db 1 In the console choose Services VPC Security Groups Create Security Group 2 For the Security group name enter peoplesoft demo db and for the Description enter PeopleSoft Demo RDS Database 3 Select the appropriate VPC for your account and c hoose Create 4 To update the Inbound Rules : • Select peoplesoft demo db from the security group list • Choose Actions Edit inbound rules • For Type select Oracle RDS • For Source select Custom • Enter the Security Group ID from the previous step • For Description enter PeopleSoft Demo Application Server 5 Choose Save rules Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 9 Using the AWS CLI Update the VPC_ID variable below with your VPC and execute in your CLI environment VPC_ID=< Replace with your VPC ID> PS_APP_SG=$(aws ec2 create security group groupname peoplesoft demoapp description "PeopleSoft Demo Application Server" vpcid $VPC_ID output text) aws ec2 create tags resources $PS_APP_SG tags "Key=NameValue= PeopleSoft Demo Application Se rver" PS_RDS_SG=$(aws ec2 create security group groupname peoplesoft demodb description " PeopleSoft Demo RDS Database " vpcid $VPC_ID output text) aws ec2 create tags resources $ PS_RDS_SG tags "Key=NameValue= PeopleSoft Demo RDS Database" aws ec2 authorize security groupingress groupid $PS_RDS_SG protocol tcp port 1521 sourcegroup $PS_APP_SG Create a DB Subnet Group Before you can create an Amazon RDS for Oracle Database instance you must define a subnet group A subnet group is a collection of subnets (typically private) that you create in a VPC and designate for your DB instances For an Amazon RDS for Oracle Database you must select two subnets each in a different Availability Zone Using the AWS Management Console 1 In the Console choose Services RDS Subnet Groups Create DB Subnet Group 2 For the Subnet Group details section s pecify the following: • For Name enter peoplesoft demo subnetgroup • For Description enter PeopleSoft Demo Subnet Group Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 10 • For VPC choose your VPC 3 For the Add Subnets section use the Availability zone drop down to choose an AZ select a Subnet designated for databases and Choose Add subnet Choose a 
minimum of 2 Subnet’s each from a different Availability Zone 4 Choose Create Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 11 Using the AWS CLI Update the PS_RDS_SN_1 and PS_RDS_SN_2 variables below with the subnets you created in the previous step and execute in your CLI environment PS_RDS_SN_1=<Replace with Subnet ID 1> PS_RDS_SN_2=<Replace with Subnet ID 2> aws rds create dbsubnetgroup dbsubnetgroupname peoplesoft demosubnetgroup dbsubnetgroupdescription "PeopleSoft Demo Subnet Group " subnetids $PS_RDS_SN_1 $PS_RDS_SN_2 Create an Option Group An Option Group provides additional feature options that you might want to add to your Amazon RDS for Oracle DB instance Amazon RDS provides default Option Groups but they cannot be modified For this reason create a new option group so feature options can be added or modified later You can assign an option group to multiple Amazon RDS for Oracle DB instances For a production database always review your current Oracle licensing agreement For more information reference Options for Oracle DB Instances Licensing may be requ ired IMPORTANT : The only option required for PeopleSoft to run correctly is Timezone This must be set to have the desired timestamps inside PeopleSoft Using the AWS Management Console 1 In the AWS Management Console choose Services RDS Option Groups Create Group 2 Specify values for the following fields : • For Name enter peoplesoft demo oracle ee122 • For Description enter PeopleSoft Demo Option Group • For Engine choose the Engine that correlates with the Oracle Database Edition chosen in the Certification Licensing and Availability section of this paper Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 12 • For Major Engine version choose the Major engine version that correlates with the Oracle Database Release chosen in the Certification Licensing and Availability section of this paper 3 Choose Create 4 To update the Option Group select peoplesoft demo og from the list of option groups and choose Add option 5 Select Timezone from the Option list and choose your local Time Zone you want to be reflected in PeopleSoft Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 13 6 Select whether or not you want the DB instance option to be applied immediately and choose Add Option Using the AWS CLI Update PS_TZ with a valid Time Zone such as US/Pacific and execute in your CLI environment PS_TZ=<Replace with Time Zone> aws rds create optiongroup optiongroupname peoplesoft demooracleee122 enginename oracle ee majorengine version 122 optiongroupdescription "PeopleSoft Demo Option Group" aws rds add optiontooptiongroup optiongroupname peoplesoft demooracleee122 applyimmediately options "OptionName=TimezoneOptionSettings=[{Name=TIME_ZONEValue=$ PS_TZ}]" Create a Parameter Group A DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances If you create a DB instance without specifying a DB parameter group the DB instance uses a default DB parameter group It is not possible to update the default parameter group therefore it is recommend ed that you create a new parameter group even if you don't need to customize any parameters at this point It is also recommend ed to consider how you will reuse the parameter groups among multiple Amazon RDS for Oracle DB instances For a PeopleSoft deployment it is recommended that you use a unique parameter group for each environm ent (DEV T EST PROD) since 
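Before adding the Timezone option, it can be useful to confirm the option name and the time zone values that Amazon RDS will accept for your engine and major version. A sketch, assuming Oracle Enterprise Edition 12.2 as used in the examples in this guide:

aws rds describe-option-group-options --engine-name oracle-ee --major-engine-version 12.2 --query "OptionGroupOptions[?Name=='Timezone'].OptionGroupOptionSettings[].AllowedValues" --output text

The command prints the allowed TIME_ZONE values; pick the one that matches the timestamps you want to see inside PeopleSoft.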
parameters may need to be modified to suit a particular use case and/or provide you the ability to test a change in the configuration prior to app lying it to a new environment For more information refer to the Amazon RDS User Guide Working with DB Parameter Groups Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 14 Using the AWS Management Console 1 In the AWS Management Console choose Services RDS Parameter Groups Create Parameter Group 2 Specify values for the following fields: • For Parameter group family choose the database edition you want to use in your RDS for Oracle DB instances In this example oracle ee122 is used • For Group name enter s peoplesoft demo oracle ee122 • For Description enter PeopleSoft Demo Parameter Group 3 Choose Create Using the AWS CLI Execute the following command in your CLI environment aws rds create dbparameter group dbparameter groupname peoplesoft demooracleee122 dbparameter groupfamily oracleee122 description "PeopleSoft Demo Parameter Group" Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 15 Modifying Parameters For PeopleSoft there are recommended parameter s for Oracle databases as shown in Table 4 Table 4: List of parameters to customize Parameter Value Notes open_cursors 1000 db_block_size This parameter is automatically set by Amazon RDS (although the creation of nonstandard block size tablespaces and setting DB_nK_CACHE_SIZE parameters is supported) db_files 1021 Optionally you can leave the default setting provided by Amazon RDS nls_length_semantic CHAR for Unicode BYTE for non Unicode memory_target {DBInstanceCla ssMemory*3/4}; The default may be used ; change if you have a specific requirement _gby_hash_aggregation_enabl ed false This hash scheme enables group by and aggregation _unnest_subquery false Enable un nesting of complex sub queries optimizer_adaptive_ features false This parameter is for version 121x You can either enable or disable the adaptive optimize features optimizer_adaptive_ plans true (default) This parameter is for version 122x You can either enable or disable the adaptive optimize features optimizer_adaptive_ statistics false (default) This parameter is for version 122x You can either enable or disable the adaptive optimize features Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 16 Parameter Value Notes _fix_control 14033181:0 This parameter is for Oracle 121x Databases ONLY and is an interim patch resolution that you can enable This patch is not required for Version 122x release NOTE: This is an example of know n settings as of the publication date For more details abou t specific PeopleSoft –Oracle parameter settings refer to the following Oracle Support Document : EORA Advice for the PeopleSoft Oracle DB A (Doc ID 14459651) For instance classes with at least 100 GiB of memory use sga_target and enable Huge Pages Using the AWS Management Console 1 Select the parameter group you created (example shown below ) 2 Choose Parameter group actions Edit 3 Type the name of the Parameter you want to edit (as listed in Table 4 ) 4 Optional: f or example enter open_cursors into the filter change the Values field to 1000 and then choose Save C hanges 5 Repeat step 3 and 4 for each parameter you want to edit Using the AWS CLI Update the commands if you need to customize other parameters E xecute the following command in your CLI environment Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 17 aws rds modify 
dbparameter group dbparameter groupname peoplesoft demooracleee122 –parameters "ParameterName=open_cursorsParameterValue=1000ApplyMethod= pendingreboot" Create the Database Instance Next you are ready to create a highly available Oracle Database across two Availability Zones Keep in mind that running a database in multiple Availability Zones increases cost Depending on your SLA requirements you can consider running the database in a single Availability Zone instead Using the AWS Management Console 1 In the Amazon RDS console select Create database choose Oracle and your Edition and choose Next Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 18 2 For our example c hoose the Dev/Test template then choose Next 3 Specify the DB details: • License model : Choose the License model which depend s on you r Oracle Edition Reference the Certification Licensing and Availability section • DB engine version : Choose the most recent engine version The most recent version will have all Oracle patches available to Amazon RDS Reference the Certification Licensing and Availability section • DB instance class: Because this is fo r Demo Purposes choose a relatively small DB instance c lass such as dbt3medium You can change the DB instance class at any point which require s restarting the DB instance • Multi AZ deployment : Choose Yes so that you can have a second standby instance running in a second Availability Zone • Storage type: For a Dev/Test environment choose General Purpose (SSD) For a high performance environment in production Provisioned IOPS (SSD) should be used • Allocated storage : Allocate 200 GB Note that b aseline I/O performance for General Purpose SSD storage is 3 IOPS for each GiB This will give you a 600 IOPS baseline and can burst to 3000 IOPS using credits For more information on credits refer to the Amazon RDS User Guide I/O Credits and Burst Performa nce Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 19 4 Review the monthly costs Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 20 5 Specify the Settings : • DB instance identifier : Create a unique name for the DB instance identifier Amazon RDS uses this identifier to define the database hostname Our example use s psfdmo • Master username : This is similar to the SYS user but with fewer privileges because Amazon RDS does not allow you to use either a SYS user or the SYSDBA role Our example uses psftadmin • Master password : Create a password for the master user The m aster password must be at least eight characters long and c an include any printable ASCII character except for the following: / " or @ 6 Choose Next 7 Specify the DB instance advanced settings: • VPC: Choose t he VPC where the database will be deployed • Subnet group : Choose t he subnet group you created in Create a DB Subnet Group The DB instance is deployed against the subnets associated with the subnet group • Public A ccessibility : Allows external access to the database provided it’s deployed in a public subnet In most cases you would choose No to restrict access to (1) within our VPC and (2) use security groups Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 21 • Availability zone: Use the default No preference • VPC security group s: Choose t he security group that will be associated with your DB instance This security group provide s access to the database listener You created this security group peoplesoft demo db in Create Security 
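After modifying parameters, it is worth confirming that each value was recorded as expected and noting its apply method. A hedged example, assuming the parameter group name used earlier in this guide (adjust the name to match your own):

aws rds describe-db-parameters --db-parameter-group-name peoplesoft-demo-oracle-ee122 --query "Parameters[?ParameterName=='open_cursors'].[ParameterName,ParameterValue,ApplyMethod]" --output table

Repeat the query (or drop the filter) for the other parameters listed in Table 4; static parameters take effect only after the DB instance is rebooted.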
Groups Remove the default security group Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 22 8 Specify your database options : • Database name: Choose t he database service name which your database clients will use to connect Our example use s PSFDMO This follows Oracle Database DB_NAME naming conventions reference Oracle’s documentation Selecting a Database Name for more info • Port: Choose t he TCP port that the database listener listen s on In our example we chose 1521 which is a default port for Oracle • DB parameter group : The database engine parameters Choose the peoplesoft demo oracle ee122 DB parameter group that you created in Create a Parameter Group • Option group : The features that will be enabled in the database Choose the peoplesoft demo oracle ee122 option group which you created in Create an Option Group • Character set name: Choose t he character set for your database In our example we use WE8ISO8859P15 a non Unicode database Use the character set that is required for you r PeopleSoft installation Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 23 9 Choose whether to enable or disable the e ncryption If you require your database data to be encrypted choose Enable encryption For more information see the Amazon RDS User Guide Encrypting Amazon RDS Resources For our example we chose No 10 Specify the b ackup options : • Backup retention period : Set the backup retention period i n days (maximum of 35) for your database Set to 0 to disable automatic backups Our example uses the default of 7 days • Backup window : Set the backup window for your daily backup Choose the default No Preference • Copy tags to snapshots : When this option is enabled Amazon RDS copies any tag ass ociated with your DB instance to the database snapshots ; useful for tracking usage and cost Select the checkbox to enable this option Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 24 11 Choose whether to enable or disable the monitoring option Enhanced Monitoring provides Amazon RDS metrics in real time for the operating system (OS) that your DB instance runs on For more information see the Amazon RDS User Guide Enhanced Monitoring Our example disables t his option 12 Choose whether to enable or disable the performance insight s option Amazon RDS Performance Insights monitors your Amazon RDS DB instance load so that you can analyze and troubleshoot your database performance For more information see the A mazon RDS User Guide Using Amazon RDS Performance Insights In our example we chose Disable Performance Insights 13 Specify the log(s) to export if any Database logs can be exported to Amazon CloudWatch this can be useful in a Production environment In our example we chose not to export any logs Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 25 14 Specify the m aintenance settings • Auto Minor Ve rsion Upgrade : If you want to manage when database maint enance runs then select to Disable auto minor version upgrade • Maintenance Window: Set the timing fo r the minor maintenance window For our example w e chose No Preference 15 Determine whether to enable the d elete protection option (for most use cases th is should be checked to prevent accidental deletion of the instance ) For o ur example the option is unchecked Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 26 16 Choose Create database to launch the DB instance Using the AWS CLI Update the 
RDS_MASTER_PWD and PS_RDS_SG variables and modify as necessary E xecute the in your CLI environment RDS_MASTER_PWD= <Replace with password> PS_RDS_SG= <Replace with DB Security Group ID> aws rds create dbinstance \ dbname PSFDMO \ dbinstance identifier psfdmo \ allocatedstorage 200 \ dbinstance class dbt3medium \ engine oracle ee \ masterusername psftadmin \ masteruserpassword $RDS_MASTER_PWD \ vpcsecurity groupids $PS_RDS_SG \ dbsubnetgroupname peoplesoft demosubnetgroup \ dbparameter groupname peoplesoft demooracleee122 \ port 1521 \ multiaz \ engineversion 12201ru 201904rur201904r1 \ noautominorversionupgrade \ licensemodel bring yourownlicense \ optiongroupname peoplesoft demooracleee122 \ character setname WE8ISO8859P15 \ nostorageencrypted \ nopublicly accessible \ noenableperformance insights \ nodeletion protection \ storagetype gp2 \ copytagstosnapshot Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 27 Create a DNS Alias for the Database Instance When you create an RDS for Oracle DB instance Amazon RDS creates a unique DNS hostname for your instance ( for example psdmoc6jc3rya3ntdus east1rdsamazonawscom:1521 ) You can use that hostname to connect to the database However you have no control over the hostname so you end up with a database URL that is not easy to remember In addition you might at some point need to restore the database from a snapshot For example you would need to do so if an operator makes a mistake in manipulating the data or if a bug in your application corrupts the data But you can’t restore a snapshot to an existing RDS DB instance When you restore a database from a snapshot Amazon RDS creates a new DB instance and generates a new hostname To avoid affecting existing applications and having to update their database endpoints create a DNS alias for your RDS DB instance Depending on your architecture you may register the DNS alias either in your corporate DNS server running on premises or in a DNS server running in AWS We will show you how to register a DNS alias in AWS as a private hosted zone using Amazon Route 53 When you use a private hosted zone only hosts in your VPC can resolve the DNS names for your database (There is a way to extend the name resolution outside of the VPC but that's beyond the scope of this guide ) Create a n Amazon Route 53 Private Hosted Zone Create a private hosted name using either the AWS Management Console or using the AWS CLI Using the AWS Management Console 1 In the AWS Management Console choose Services Route 53 Hosted zones and then choose Create Hosted Zone 2 Provide the details of your hosted zone: • Domain Name : Type t he private domain name Our example uses peoplesoftlocal • Type : Choose Private Hosted Zone for Amazon VPC Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 28 • VPC ID : Choose t he ID of the VPC used by your PeopleSoft infrastructure in AWS Using the AWS CLI Execute the following command: aws route53 create hostedzone name peoplesoftlocal vpc '{"VPCRegion":"us east1" "VPCId": "vpc ******21"}' hostedzoneconfig '{"PrivateZone": true}' caller reference 112017 Note : Retain the HostedZone I D because you will need it to create the record sets in the next section For the command provided above the HostedZone I D is: Z234334ABCDEF Create a DNS Alias By creating a DNS alias you can manage your database’s endpoint and avoid the need to change the code in your existing application Using the AWS Management Console 1 Choose Create Record Set and then 
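Instance creation takes some time, and you will need the generated endpoint before you can point a DNS record at the database. A hedged sketch using the psfdmo identifier from the example above:

aws rds wait db-instance-available --db-instance-identifier psfdmo
aws rds describe-db-instances --db-instance-identifier psfdmo --query "DBInstances[0].Endpoint.Address" --output text

The wait command blocks until the instance reports an available status; the second command prints the hostname to use as the CNAME target for the DNS alias.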
provide the following details: • Name : The fully qualified domain name which in our example is psfdmo The value you type will be prepended to the domain name In our example it is psfdmopeoplesoftlocal Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 29 • Type : Choose CNAME • Alias : Choose No • TTL (Seconds) : Enter 300 • Value : The RDS instance hostname (do not add the port information 1521) Using the AWS CLI Execute the following command: aws route53 change resource recordsets hostedzoneid Z234334ABCDEF changebatch '{"Changes": [{"Action": "CREATE" "ResourceRecordSet": {"Name": "psdmopeoplesoftlocal ""Type": "CNAME" "TTL": 300 "ResourceRecords": [{"Value": "psdmoak34e3krdsamazonawscom "}]}}]}' After creating a DNS alias connect to our demo database using the following URL: psfdmopeoplesoftlocal:1521/ps fdmo Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 30 Running the PeopleSoft DB Creation Scripts With the database created you are ready to create the PeopleSoft DB The following steps illustrate the procedure detailed in the PeopleTools 85 x Installation for Oracle guide Ensure that you use the appropriate PeopleTools insta ll guide for your installation and note the following changes in the manual installation steps Editing the Database Scripts To start modify the delivered database creation scripts There are 2 types of changes that w e need to make to these scripts : (1) Tablespace creation and (2) SYSDBA SQL commands Tablespace Creation For creating tablespace Amazon RDS supports only the Oracle Managed Files (OMF) for data files log files and control files When you create data f iles and log files you can not specify the physical file names By default Oracle delivers these scripts using physical file paths so they must be updated to the OMF format Reference Amazon RDS User Guide Creating and Sizing Tablespaces for more information SYSDBA SQL Commands When you create a DB instance in Amazon RDS the master account use d to create the instance gets DBA user privileges (with some limitations) Use this account for any administrative tasks such as creating additional user accounts in the database The SYS user SYSTEM user and other administrative accounts can not be used These commands have been identified below along with the RDS procedure to properly run Reference Amazon RDS User Guide Granting SELECT or EXECUTE Privileges to SYS Objects for more information Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 31 Create DB SQL [ createdbsql ] Skip this script It does not need to be modified nor run because you already created the database UTL Space Script [ utlspacesql ] Modify the create tablespace commands for creating the PS DEFAULT tablespace to OMF format Delivered : CREATE TEMPORARY TABLESPACE PSTEMP TEMPFILE '/u03/oradata/<SID>/pstemp01dbf' SIZE 300M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K ; CREATE TABLESPACE PSDEFAULT DATAFILE '/u03/oradata /<SID>/psdefaultdbf' SIZE 100M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO ; Modi fied: CREATE TEMPORARY TABLESPACE PSTEMP TEMPFILE EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K ; CREATE TABLESPACE PSDEFAULT EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO ; Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 32 Application Specific Table space Creation [ xxddlsql ] Modify the application specific tablespace creation script for the PeopleSoft application you are creating For example 
epddlsql for FSCM or hcddlsql for HCM Please refer to the PeopleTools Installation for Oracle for details on the DDL scripts that ar e appropriate f or your application Modify all CREATE TABLESPACE commands as below Delivered : CREATE TABLESPACE AMAPP DATAFILE '/u04/oradata /<SID>/amappdbf' SIZE 2M EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO / Modified : CREATE TABLESPACE AMAPP EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO / DB Owner Script [ dbownersql ] Replace system/manager with the RDS Master username and Master password Delivered : CONNECT system/manager; Modified (Replace with your credentials): CONNECT <RDS Master username> /<RDS Master password> ; Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 33 PS Roles Script [ psrolessql ] Modify all grants below if present Delivered: GRANT SELECT ON V_$MYSTAT to PSADMIN; GRANT SELECT ON USER_AUDIT_POLICIES to PSADMIN; GRANT SELECT ON FGACOL$ to PSADMIN; grant execute on dbms_refresh to PSADMIN; GRANT SELECT ON ALL_DEPENDENCIES to PSADMIN; Modified : exec rdsadminrdsadmin_util grant_sys_object('V_$MYSTAT'' PSADMIN''SELECT'); exec rdsadminrdsadmin_utilgrant_sys_object('USER_AUDIT_P OLICIES''PSADMIN''SELECT'); exec rdsadminrdsadmin_utilgrant_sys_object('FGACOL$''PS ADMIN'' SELECT'); exec rdsadminrdsadmin_utilgrant_sys_obj ect('DBMS_REFRESH ''PSADMIN''EXECUTE'); exec rdsadminrdsadmin_utilgrant_sys_object('ALL_DEPENDEN CIES''PSADMIN''SELECT'); Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 34 PS Admin Script [ psadminsql ] Replace system/manager with the Amazon RDS master username and master password Delivered : connect system/manager Modified (Replace with your credentials): connect <RDS Master username> /<RDS Master password> Connect Script [ connectsql ] No changes necessary Execute Database creation scripts After all the scripts have been updated execute on the database as usual as per the PeopleSoft installation guide Create and Run Data Mover Import Scripts Follow the PeopleSoft installation guide for creating and running the Data Mover Import scripts for your PeopleSoft Application Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 35 Conclusion We have n ow complete d the setup of a PeopleSoft demo database that is fully managed on Amazon RDS and ready to perform This guide describe d how to configure Amazon RDS for Oracle as a backend database for an Oracle PeopleSoft Enterprise demo application By using these procedures you can use Amazon RDS for Oracle to set up and operate many different PeopleSoft application databases As a result you have the steps to run your PeopleSoft applic ations on an Amazon RDS for Oracle D atabase References • PeopleT ools 8 57 Installation for Oracle • Amazon Web Services API Reference Contributors The following individuals and organizations contributed to this document: • David Brunet VP Research and Development DLZP Group • Nick Sefiddashti AWS Solutions Architect DLZP Group • Muhammed Sajeed PeopleSoft Architect DLZP Group • Yoav Eilat Senior Product Marketing Manager AWS • Tsachi Cohen Software Development Manager AWS • Michael Barras Senior Database Engineer AWS Document Revisions Date Description March 2017 First publication July 2019 Updated with the latest features of Amazon RDS PeopleSoft release versions AWS Management Console updates Amazon Web Services Configuring Amazon RDS as an Oracle PeopleSoft Database Page 36 1 
https://docs.aws.amazon.com/general/latest/gr/rande.html
2 https://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/GetStarted.html
3 https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_IAM.html
4 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html
5 https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAM.AccessControl.IdentityBased.html
6 https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/auth-and-access-control.html
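As a final check after working through this guide, you can verify the whole path from one of the application servers: that the private DNS alias resolves inside the VPC and that the database listener accepts connections. A sketch, assuming the names used in the examples (domain peoplesoft.local, service PSFDMO, master user psftadmin), an Oracle client installed on the EC2 instance, and that the instance belongs to the peoplesoft-demo-app security group:

nslookup psfdmo.peoplesoft.local
sqlplus psftadmin@//psfdmo.peoplesoft.local:1521/PSFDMO

If name resolution fails, confirm that the private hosted zone is associated with your VPC; if the connection times out, revisit the inbound rule on the database security group.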
|
General
|
consultant
|
Best Practices
|
Considerations_for_Using_AWS_Products_in_GxP_Systems
|
GxP Systems on AWS Published March 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Introduction 1 About AWS 1 AWS Healthcare and Life Sciences 2 AWS Services 2 AWS Cloud Security 4 Shared Security Responsibility Model 6 AWS Certifications and Attestations 8 Infrastructure Description and Controls 13 AWS Quality Management System 17 Quality Infrastructure and Support Processes 18 Software Development 25 AWS Products in GxP Systems 30 Qualif ication Strategy for Life Science Organizations 32 Supplier Assessment and Cloud Management 38 Cloud Platform/Landing Zone Qualification 42 Qualifying Building Blocks 48 Computer Systems Validation (CSV) 54 Conclusion 55 Contributors 55 Further Reading 55 Document Revisions 56 Appendix: 21 CFR 11 Controls – Shared Responsibility for use with AWS services 57 Abstract This whitepaper provides information on how AWS approaches GxP related compliance and security and provides customers guidance on using AWS Produ cts in the context of GxP The content has been developed based on experience with and feedback from AWS pharmaceutical and medical device customers as well as software partners who are currently using AWS Products in their validated GxP systems Amazon Web Services GxP Systems on AWS 1 Introduction According to a recent publication by Deloitte on the outlook of Global Life Sciences in 2020 prioritization of cloud technologies in the life sciences sector has steadily increased as customers seek out highly reliable scalable and secure solutions to operate their regulated IT systems Amazon Web Services (AWS ) provides cloud services designed to help customers run their most sensitive workloads in the cloud including the computerized systems that support Good Manufacturing Practice Good Laboratory Practice and Good Clinical Practice (GxP) GxP guidelines are established by the US Fo od and Drug Administration (FDA) and exist to ensure safe development and manufacturing of medical devices pharmaceuticals biologics and other food and medical product industries The first section of this whitepaper outlines the AWS services and organi zational approach to security along with compliance that support GxP requirements as part of the Shared Responsibility Model and as it relates to the AWS Quality System for Information Security Management After establishing this information the whitepap er provides information to assist you in using AWS services to implement GxP compliant environments Many customers already leverage industry guidance to influence their regulatory interpretation of GxP requirements Therefore the primary industry guidan ce used to form the basis of this whitepaper is the GAMP (Good Automated Manufacturing Practice) guidance from ISPE (International Society for Pharmaceutical Engineering) in effect as a type of Good Cloud 
Computing Practice While the following content provides information on use of AWS services in GxP environments you should ultimately consult with your own counsel to ensure that your GxP policies and procedures satisfy regulatory compliance requirements Whitepapers containing more specific information about AWS products privacy and data protection considerations are available at https://awsamazoncom/compliance/ About AWS In 2006 Amazon Web Services (AWS) began offering on demand IT infrastructure services to businesses in the form of web services with pay asyougo pricing Today AWS provides a highly reliable scalable low cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in countries around the world Using AWS businesses no longer need to plan for and procure servers and other IT Amazon Web Services GxP Systems on AWS 2 infrastructure weeks or months in advance Instead they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster Offering ov er 175 fully featured services from data centers globally AWS gives you the ability to take advantage of a broad set of global cloud based products including compute storage databases networking security analytics mobile developer tools management tools IoT and enterprise applications AWS's rapid pace of innovation allows you to focus in on what's most important to you and your end users without the undifferentiated heavy lifting AWS Healthcare and Life Sciences AWS started its dedicated Genomi cs and Life Sciences Practice in 201 4 in response to the growing demand for an experienced and reliable life sciences cloud industry leader Today the AWS Life Sciences Practice team consists of members that have been in the industry on average for over 1 7 years and had previous titles such as Chief Medical Officer Chief Digital Officer Physician Radiologist and Researcher among many others The AWS Genomics and Life Sciences practice serves a large ecosystem of life sciences customers including pharm aceutical biotechnology medical device genomics start ups university and government institutions as well as healthcare payers and providers A full list of customer case studies can be found at https://awsamazoncom/health/customer stories In addition to the resources available within the Genomics and Life Science practice at AWS you can also work with AWS Life Sciences Competency Partners to drive innovation and improve efficiency acr oss the life sciences value chain including cost effective storage and compute capabilities advanced analytics and patient personalization mechanisms AWS Life Sciences Competency Partners have demonstrated technical expertise and customer success in building Life Science solutions on AWS A full list of AWS Life Sciences Competency Partners can be found at https://awsamazoncom/health/lifesciences partner solutions AWS Serv ices Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability providing the tools that enable you to run a wide range of applications Helping to protect the confidentiality integrity and availabili ty of our customers’ systems and data is of the utmost importance to AWS as is maintaining customer trust and confidence Amazon Web Services GxP Systems on AWS 3 Similar to other general purpose IT products such as operating systems and database engines AWS offers commercial off theshelf (CO TS) IT services according to IT quality and security standards such as ISO NIST SOC and many others For 
purposes of this paper w e will use the definition of COTS in accordance with the definition established by FedRAMP a United States government wide program for procurement and security assessment FedRAMP references the US Federal Acquisition Regulation (FAR) for its definition of COTS which outlines COTS items as: • Products or services that are offered and sold competitively in substantial quantities in the commercial marketplace based on an established catalog • Offered without modification or customization • Offered under standard commercial terms and conditions Under GAMP guidelines (such as GAMP 5: A Risk Based Approach to Compliant GxP Computerized Systems) organizations implementing GxP compliant environments will need to categorize AWS services using respective GAMP software and hardware categories (eg Software Category 1 for Infrastructure Software including operating systems dat abase managers and security software or Category 5 for custom or bespoke software) Most often organizations utilizing AWS services for validated applications will categorize them under Software Category 1 AWS offers products falling into several categor ies Below is a subset of those AWS offerings spanning Compute Storage Database Networking & Content Delivery and Security and Compliance A later section of this whitepaper AWS Products in GxP Systems will provide information to assist you in using AWS services to implement your GxPcompliant environments Table 1: Subset of AWS offerings by group Group AWS Products Compute Amazon EC2 Amazon EC2 Auto Scaling Amazon Elastic Container Registry Amazon Elastic Container Service Amazon Elastic Kubernetes Service Amazon Lightsail AWS Batch AWS Elastic Beanstalk AWS Fargate AWS Lambda AWS Outposts AWS Serverless Applicati on Repository AWS Wavelength VMware Cloud on AWS Amazon Web Services GxP Systems on AWS 4 Group AWS Products Storage Amazon Simple Storage Service ( Amazon S3) Amazon Elastic Block Store ( Amazon EBS) Amazon Elastic File System ( Amazon EFS) Amazon FSx for Lustre Amazon FSx for Windows File Server Amazon S3 Gl acier AWS Backup AWS Snow Family AWS Storage Gateway CloudEndure Disaster Recovery Database Amazon Aurora Amazon DynamoDB Amazon DocumentDB Amazon ElastiCache Amazon Keyspaces Amazon Neptune Amazon Quantum Ledger Database ( Amazon QLDB) Amazon R DS Amazon RDS on VMware Amazon Redshift Amazon Timestream AWS Database Migration Service Networking & Content Delivery Amazon VPC Amazon API Gateway Amazon CloudFront Amazon Route 53 AWS PrivateLink AWS App Mesh AWS Cloud Map AWS Direct Connect AWS Global Accelerator AWS Transit Gateway Elastic Load Balancing Security Identity and Compliance AWS Identity & Access Management (IAM) Amazon Cognito Amazon Detective Amazon GuardDuty Amazon Inspector Amazon Macie AWS Artifact AWS C ertificate Manager AWS CloudHSM AWS Directory Service AWS Firewall Manager AWS Key Management Service AWS Resource Access Manager AWS Secrets Manager AWS Security Hub AWS Shield AWS Single Sign On AWS WAF Details and specifications for the full portfolio of AWS products are available online at https://awsamazoncom/ AWS Cloud Security AWS infrastructure has been architected to be one of the most flexible and secure cloud computing environments available today It is designed to provide an extremely scalable highly reliable platform that enables customers to deploy applications and data quickly and securely This infrastructure is built and managed not only accordin g to security best practices and standards but also with the unique 
needs of the cloud in mind AWS uses redundant and layered controls continuous validation and testing and a substantial amount of automation to ensure that the underlying infrastructure is monitored and protected 24x7 Amazon Web Services GxP Systems on AWS 5 We have many customer testimonials that highlight the security benefits of using the AWS cloud in that the security capabilities provided by AWS far exceed the customer’s own on premises capabilities “We had heard urban legends about ‘security issues in the cloud’ but the more we looked into AWS the more it was obvious to us that AWS is a secure environment and we would be able to use it with peace of mind” Yoshihiro Moriya Certified Information System Auditor at Ho ya “There was no way we could achieve the security certification levels that AWS has We have great confidence in the logical separation of customers in the AWS Cloud particularly through Amazon VPC which allows us to customize our virtual networking environment to meet our specific requirements” Michael Lockhart IT Infrastructure Manager at GPT “When you’re in telehealth and you touch protected health information security is paramount AWS is absolutely critical to do what we do today Security and compliance are table stakes If you don’t have those the rest doesn’t matter" Cory Costley Chief Product Officer Avizia Many more customer testimonials including those from health and life science companies can be found here: https://awsamazoncom/compliance/testimonials/ IT Security is often not the core business of our customers IT departments operate on limited budgets and do a good job of securing their data cente rs and software given limited resources In the case of AWS security is foundational to our core business and so significant resources are applied to ensuring the security of the cloud and helping our customers ensure security in the cloud as described f urther below Amazon Web Services GxP Systems on AWS 6 Shared Security Responsibility Model Security and Compliance is a shared responsibility between AWS and the customer This shared model can help relieve your operational burden as AWS operates manages and controls the components from the hos t operating system and virtualization layer down to the physical security of the facilities in which the service operates Customers assume responsibility and management of the guest operating system (including updates and security patches) other associat ed application software as well as the configuration of the AWS provided security group firewall You should carefully consider the services you choose as your responsibilities vary depending on the services used the integration of those services into your IT environment and applicable laws and regulations The following figure provides an overview of the shared responsibility model This differentiation of responsibility is c ommonly referred to as Security “of” the Cloud versus Security “in” the Cloud which will be explained in more detail below Figure 1: AWS Shared Responsibility Model AWS is responsible for the security and compliance of the Cloud the infrastructure that runs all of the services offered in the AWS Cloud Cloud security at AWS is the highest priority AWS customers benefit from a data center and network architecture tha t are built to meet the requirements of the most security sensitive organizations This Amazon Web Services GxP Systems on AWS 7 infrastructure consists of the hardware software networking and facilities that run AWS Cloud services Customers are 
responsible for the security and compliance in the Cloud, which consists of customer-configured systems and services provisioned on AWS. Responsibility within the AWS Cloud is determined by the AWS Cloud services that you select and, ultimately, the amount of configuration work you must perform as part of your security responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and as such requires you to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by you on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. For abstracted services such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. You are responsible for managing your data and component configuration (including encryption options), classifying your assets, and using IAM tools to apply the appropriate permissions. The AWS Shared Security Responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between you and AWS, so is the management, operation, and verification of IT controls shared. AWS can help relieve your burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by you. As every customer is deployed differently in AWS, you can take advantage of shifting management of certain IT controls to AWS, which results in a (new) distributed control environment. You can then use the AWS control and compliance documentation available to you, as well as techniques discussed later in this whitepaper, to perform your control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, by customers, or by both. Inherited Controls – Controls which you fully inherit from AWS:
• Physical and Environmental controls
Shared Controls – Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and you must provide your own control implementation within your use of AWS services. Examples include:
in frastructure under the management of AWS see: AWS Cloud Security site For customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS) see the Best Practices for Security Identity & Compliance AWS Certifications and Attestations The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards With AWS you can be assured that you are building web architectures on top of some of the most secure computing infrastructure in the world The IT infrastructure that AWS provides to you is designed and managed in alignment with security best practices and a variety of IT security standards including the following that life science customers may find most relevant : • SOC 1 2 3 • ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018 • HITRUST • FedRAMP Amazon Web Services GxP Systems on AWS 9 • CSA Security Trust & Assurance Registry (STAR) There are no specific certifications for GxP comp liance for cloud services to date however the controls and guidance described by this whitepaper in conjunction with additional resources supplied by AWS provide information on AWS service GxP compatibility which will assist you in designing and buildin g your own GxP compliant solutions AWS provides on demand access to security and compliance reports and select online agreements through AWS Artifact with reports accessible via AWS customer accounts unde r NDA AWS Artifact is a go to central resource for compliance related information and is a place that you can go to find additional information on the AWS compliance programs described further below SOC 1 2 3 AWS System and Organization Controls (SOC) Reports are independent third party examination reports that demonstrate how AWS achieves key compliance controls and objectives The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations a nd compliance The SOC 1 reports are designed to focus on controls at a service organization that are likely to be relevant to an audit of a user entity’s financial statements The AWS SOC 1 report is designed to cover specific key controls likely to be required during a financial audit as well as covering a broad range of IT general controls to accommodate a wide range of usage and audit scenarios The AWS SOC1 control objectives include security organization employee user access logical security sec ure data handling physical security and environmental protection change management data integrity availability and redundancy and incident handling The SOC 2 report is an attestation report that expands the evaluation of controls to the criteria set f orth by the American Institute of Certified Public Accountants (AICPA) Trust Services Principles These principles define leading practice controls relevant to security availability processing integrity confidentiality and privacy applicable to service organizations such as AWS The AWS SOC 2 is an evaluation of the design and operating effectiveness of controls that meet the criteria for the security and availability principles set forth in the AICPA’s Trust Services Principles criteria This report pr ovides additional transparency into AWS security and availability based on a pre defined industry standard of leading practices and further demonstrates the commitment of AWS to protecting customer data The SOC2 report information includes outlining AWS controls a description of AWS Services 
relevant to security availability and Amazon Web Services GxP Systems on AWS 10 confidentiality as well as test results against controls You will likely find the SOC 2 report to be the most detailed and r elevant SOC report as it relates to GxP compliance AWS publishes a Service Organization Controls 3 (SOC 3) report The SOC 3 report is a publicly available summary of the AWS SOC 2 report The report includes the external auditor’s assessment of the opera tion of controls (based on the AICPA’s Security Trust Principles included in the SOC 2 report) the assertion from AWS management regarding the effectiveness of controls and an overview of AWS Infrastructure and Services FedRAMP The Federal Risk and Aut horization Management Program (FedRAMP) is a US government wide program that delivers a standard approach to the security assessment authorization and continuous monitoring for cloud products and services FedRAMP uses the NIST Special Publication 800 se ries and requires cloud service providers to receive an independent security assessment conducted by a third party assessment organization (3PAO) to ensure that authorizations are compliant with the Federal Information Security Management Act (FISMA) For AWS Services in Scope for FedRAMP assessment and authorization see https://awsamazoncom/compliance/services inscope/ ISO 9001 ISO 9001:2015 outlines a process oriented approach to d ocumenting and reviewing the structure responsibilities and procedures required to achieve effective quality management within an organization Specific sections of the standard contain information on topics such as: • Requirements for a quality management system (QMS) including documentation of a quality manual document control and determining process interactions • Responsibilities of management • Management of resources including human resources and an organization’s work environment • Service development including the steps from design to delivery • Customer satisfaction • Measurement analysis and improvement of the QMS through activities like internal audits and corrective and preventive actions Amazon Web Services GxP Systems on AWS 11 The AWS ISO 9001:2015 certification directly supports custome rs who develop migrate and operate their quality controlled IT systems in the AWS cloud You can leverage AWS compliance reports as evidence for your own ISO 9001:2015 programs and industry specific quality programs such as GxP in life sciences and ISO 1 31485 in medical devices ISO/IEC 27001 ISO/IEC 27001:2013 is a widely adopted global security standard that sets out requirements and best practices for a systematic approach to managing company and customer information that’s based on periodic risk asses sments appropriate to ever changing threat scenarios In order to achieve the certification a company must show it has a systematic and ongoing approach to managing information security risks that affect the confidentiality integrity and availability of company and customer information This widely recognized international security standard specifies that AWS do the following: • We s ystematically evaluate AWS information security risks taking into account the impact of threats and vulnerabilities • We d esign and implement a comprehensive suite of information security controls and other forms of risk management to address customer and architecture security risks • We have an overarching management process to ensure that the information security controls meet our needs on an ongoing basis AWS has achieved ISO 27001 
certification of the Information Security Management System (ISMS) covering AWS infrastructure data centers and services ISO/IEC 27017 ISO/IEC 27017:2015 provides guidance on the information security aspects of cloud computing recommending the implementation of cloud specific information security controls that supplement the guidance of the ISO/IEC 27002 and ISO/IEC 27001 standards This code of practice provides additional inform ation security controls implementation guidance specific to cloud service providers The AWS attestation to the ISO/IEC 27017:2015 standard not only demonstrates an ongoing commitment to align with globally recognized best practices but also verifies that AWS has a system of highly precise controls in place that are specific to cloud services Amazon Web Services GxP Systems on AWS 12 ISO/IEC 27018 ISO 27018 is the first International code of practice that focuses on protection of personal data in the cloud It is based on ISO information security standard 27002 and provides implementation guidance on ISO 27002 controls applicable to public cloud Personally Identifiable Information (PII) It also provides a set of additional controls and associated guidance intended to address public cloud PII prot ection requirements not addressed by the existing ISO 27002 control set AWS has achieved ISO 27018 certification an internationally recognized code of practice which demonstrates the commitment of AWS to the privacy and protection of your content HITRU ST The Health Information Trust Alliance Common Security Framework (HITRUST CSF) leverages nationally and internationally accepted standards and regulations such as GDPR ISO NIST PCI and HIPAA to create a comprehensive set of baseline security and priv acy controls HITRUST has developed the HITRUST CSF Assurance Program which incorporates the common requirements methodology and tools that enable an organization and its business partners to take a consistent and incremental approach to managing compli ance Further it allows business partners and vendors to assess and report against multiple sets of requirements Certain AWS services have been assessed under the HITRUST CSF Assurance Program by an approved HITRUST CSF Assessor as meeting the HITRUST CS F Certification Criteria The certification is valid for two years describes the AWS services that have been validated and can be accessed at https://awsamazoncom/compliance/hitrust/ You may l ook to leverage the AWS HITRUST CSF certification of AWS services to support your own HITRUST CSF certification in complement to your GxP compliance programs CSA Security Trust & Assurance Registry (STAR) In 2011 the Cloud Security Alliance (CSA) launched STAR an initiative to encourage transparency of security practices within cloud providers The CSA Security Trust & Assur ance Registry (STAR) is a free publicly accessible registry that documents the security controls provided by various cloud computing offerings thereby helping users assess the security of cloud providers they currently use or are considering Amazon Web Services GxP Systems on AWS 13 AWS partic ipates in the voluntary CSA Security Trust & Assurance Registry (STAR) SelfAssessment to document AWS compliance with CSA published best practices AWS publish es the completed CSA Consensus Assessments Initiative Questionnaire (CAIQ) on the AWS website Infrastructure Description and Controls Cloud Models (Nature of the Cloud) Cloud computing is the on demand delivery of compute power da tabase storage applications and other IT 
Infrastructure Description and Controls

Cloud Models (Nature of the Cloud)

Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet, with pay-as-you-go pricing. As cloud computing has grown in popularity, several different models and deployment strategies have emerged to meet the specific needs of different users. Each type of cloud service and deployment method provides you with different levels of control, flexibility, and management.

Cloud Computing Models

Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS provides you with the highest level of flexibility and management control over your IT resources and is most similar to the existing IT resources that many IT departments and developers are familiar with today (e.g., Amazon Elastic Compute Cloud (Amazon EC2)).

Platform as a Service (PaaS)
Platform as a Service (PaaS) removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications (e.g., AWS Elastic Beanstalk). This helps you be more efficient, because you don't need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.

Software as a Service (SaaS)
Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service mean end-user applications (e.g., Amazon Connect). With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email, which you can use to send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems on which the email program is running.

Cloud Computing Deployment Models

Cloud
A cloud-based application is fully deployed in the cloud, and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing (https://aws.amazon.com/what-is-cloud-computing/). Cloud-based applications can be built on low-level infrastructure pieces or can use higher-level services that provide abstraction from the management, architecting, and scaling requirements of core infrastructure.

Hybrid
A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure, extending an organization's infrastructure into the cloud while connecting cloud resources to internal systems. For more information on how AWS can help you with hybrid deployment, visit the AWS hybrid page (https://aws.amazon.com/hybrid/).

On-premises
The deployment of resources on-premises, using virtualization and resource management tools, is sometimes sought for its ability to provide dedicated resources (https://aws.amazon.com/hybrid/). In most cases this deployment model is the same as legacy IT infrastructure, while using application management and virtualization technologies to try to increase resource utilization.
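To make the IaaS model described above concrete, the following minimal sketch provisions a single virtual server with Amazon EC2 using the AWS SDK for Python (boto3). It is illustrative only; the Region, AMI ID, subnet ID, and tag convention are hypothetical placeholders rather than values from this whitepaper.

```python
import boto3

# Illustrative only: Region, AMI, subnet, and tags are hypothetical placeholders.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # hypothetical approved/hardened machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet inside your qualified VPC
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "GxPImpact", "Value": "Yes"}],  # example tagging convention
    }],
)

# With IaaS, you manage everything from the operating system upward on this instance.
print("Launched instance:", response["Instances"][0]["InstanceId"])
```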
Security

Physical Security

Amazon has many years of experience in designing, constructing, and operating large-scale data centers. This experience has been applied to the AWS platform and infrastructure. AWS data centers are housed in facilities that are not branded as AWS facilities. Physical access is strictly controlled, both at the perimeter and at building ingress points, by professional security staff using video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. All visitors are required to present identification, are signed in, and are continually escorted by authorized staff.

AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his or her access is immediately revoked, even if they continue to be an employee of Amazon or Amazon Web Services. All physical access to data centers by AWS employees is logged and audited routinely. Additional information on infrastructure security may be found on the AWS Data Center controls webpage.

Single or Multi-Tenant Environments

As cloud technology has rapidly evolved over the past decade, one fundamental technique used to maximize physical resources and lower customer costs has been to offer multi-tenant services to cloud customers. To facilitate this architecture, AWS has developed and implemented powerful and flexible logical security controls to create strong isolation boundaries between customers. Security is job zero at AWS, and you will find a rich history of AWS steadily enhancing its features and controls to help customers achieve their security posture requirements, such as GxP. Coming from an on-premises environment, you will often find that CSPs like AWS enable you to optimize your security configurations in the cloud compared to your on-premises solutions. The AWS logical security capabilities, together with the security controls in place, address the concerns that drive physical separation to protect your data. The provided isolation, combined with the added automation and flexibility, offers a security posture that matches or exceeds the security controls seen in traditional, physically separated environments. Additional detailed information on logical separation on AWS may be found in the Logical Separation on AWS whitepaper.

Cloud Infrastructure Qualification Activities

Geography

AWS serves over a million active customers in more than 200 countries. As customers grow their businesses, AWS will continue to provide infrastructure that meets their global requirements. The AWS Cloud infrastructure is built around AWS Regions and Availability Zones. An AWS Region is a physical location in the world that has multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. These Availability Zones offer you the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center. The AWS Cloud operates in over 70 Availability Zones within over 20 geographic Regions around the world, with announced plans for more Availability Zones and Regions. For more information on AWS Availability Zones and AWS Regions, see AWS Global Infrastructure.
Each Amazon Region is designed to be completely isolated from the other Amazon Regions. This achieves the greatest possible fault tolerance and stability. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. AWS provides customers with the flexibility to place instances and store data within multiple geographic Regions, as well as across multiple Availability Zones within each AWS Region. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by AWS Region). In addition to discrete uninterruptible power supply (UPS) and onsite backup generation facilities, they are each fed via different grids from independent utilities to further reduce single points of failure. Availability Zones are all redundantly connected to multiple tier-1 transit providers.

Data Locations

Where geographic limitations apply, the multiple Availability Zone (AZ) design of every AWS Region offers you advantages, unlike other cloud providers who often define a region as a single data center. If you are focused on high availability, you can design your applications to run in multiple AZs to achieve even greater fault tolerance. AWS infrastructure Regions meet the highest levels of security, compliance, and data protection. If you have data residency requirements, you can choose the AWS Region that is in close proximity to your desired location. You retain complete control and ownership over the Region in which your data is physically located, making it easy to meet regional compliance and data residency requirements. In addition, for moving on-premises data to AWS for migrations or ongoing workflows, the AWS Cloud Data Migration website describes the tools and services that you may use to ensure data onshoring compliance, including:
• Hybrid cloud storage (AWS Storage Gateway, AWS Direct Connect)
• Online data transfer (AWS DataSync, AWS Transfer Family, Amazon S3 Transfer Acceleration, AWS Snowcone, Amazon Kinesis Data Firehose, APN Partner products)
• Offline data transfer (AWS Snowcone, AWS Snowball, AWS Snowmobile)

Capacity

When it comes to capacity planning, AWS examines capacity at both a service and a rack usage level. The AWS capacity planning process automatically triggers the procurement process for approval, so that AWS does not have additional lag time to account for, and AWS relies on capacity planning models, informed in part by customer demand, to trigger new data center builds. AWS enables you to reserve instances so that capacity is guaranteed in the Region(s) of your choice. AWS uses the number of reserved instances to inform planning for FOOB (future out of bound) capacity.

Uptime

AWS maintains SLAs (Service Level Agreements) for various services across the platform, which at the time of this writing include a guaranteed monthly uptime percentage of at least 99.99% for Amazon EC2 and Amazon EBS within a Region. A full list of AWS SLAs can be found at https://aws.amazon.com/legal/service-level-agreements/. In addition, Amazon Web Services publishes the most up-to-the-minute information on service availability in the AWS Service Health Dashboard (https://status.aws.amazon.com/). It is important to note that, as part of the shared security responsibility model, it is your responsibility to architect your application for resilience based on your organization's requirements.
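As an illustration of the Region and Availability Zone concepts above, the following sketch (using the AWS SDK for Python, boto3) selects a Region and enumerates its Availability Zones so that redundant capacity can be spread across at least two of them. The Region name is an assumed example; your own data-residency and resilience requirements drive the actual choice.

```python
import boto3

REGION = "eu-central-1"  # hypothetical Region chosen for data-residency reasons
ec2 = boto3.client("ec2", region_name=REGION)

# Enumerate the Availability Zones currently available in the chosen Region
zones = [
    az["ZoneName"]
    for az in ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )["AvailabilityZones"]
]

print(f"{REGION} currently offers {len(zones)} Availability Zones: {', '.join(zones)}")

# A resilient design places redundant capacity in at least two of these zones,
# for example one subnet per zone behind a load balancer (not shown here).
```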
AWS Quality Management System

Life science customers with obligations under GxP requirements need to ensure that quality is built into manufacturing and controls during the design, development, and deployment of their GxP-regulated products. This quality assurance includes an appropriate assessment of cloud service suppliers like AWS to meet the obligations of your quality system. For a deeper description of the AWS Quality Management System, you may use AWS Artifact to access additional documents under NDA. Below, AWS provides information on some of the concepts and components of the AWS quality system of most interest to GxP customers like you.

Quality Infrastructure and Support Processes

Quality Management System Certification

AWS has undergone a systematic, independent examination of its quality system to determine whether the activities and activity outputs comply with ISO 9001:2015 requirements. A certifying agent found the AWS quality management system (QMS) to comply with the requirements of ISO 9001:2015 for the activities described in the scope of registration. The AWS quality management system has been certified to ISO 9001 since 2014. The reports cover six-month periods each year (April–September and October–March); new reports are released in mid-May and mid-November. To see the AWS ISO 9001 registration certification, certification body information, and date of issuance and renewal, see the ISO 9001 AWS compliance program website: https://aws.amazon.com/compliance/iso-9001-faqs/. The certification covers the QMS over a specified scope of AWS services and Regions of operation. If you are pursuing ISO 9001:2015 certification while operating all or part of your IT systems in the AWS Cloud, you are not automatically certified by association; however, using an ISO 9001:2015 certified provider like AWS can make your certification process easier. AWS provides additional detailed information on the quality management system within AWS Artifact, accessible via customer accounts in the AWS console (https://aws.amazon.com/artifact/).

Software Development Approach

AWS's strategy for the design and development of AWS services is to clearly define services in terms of customer use cases, service performance, marketing and distribution requirements, production and testing, and legal and regulatory requirements. The design of all new services, or any significant changes to current services, is controlled through a project management system with multi-disciplinary participation. Requirements and service specifications are established during service development, taking into account legal and regulatory requirements, customer contractual commitments, and requirements to meet the confidentiality, integrity, and availability of the service, in alignment with the quality objectives established within the quality management system. Service reviews are completed as part of the development process, and these reviews include evaluation of security, legal, and regulatory impacts and customer contractual commitments. Prior to launch, each of the following requirements must be complete:
• Security risk assessment
• Threat modeling
• Security design reviews
• Secure code reviews
• Security testing
• Vulnerability/penetration testing
AWS implements open source software or custom code within its services. All open source software, including binary or machine-executable code from third parties, is reviewed and approved by the Open Source Group prior to implementation and has source code that is publicly accessible. AWS service teams are prohibited from implementing code from third parties unless it has been approved through the open source review. All code developed by AWS is available for review by the applicable service team as well as AWS Security. By its nature, open source code is available for review by the Open Source Group prior to granting authorization for use within Amazon.

Quality Procedures

In addition to the software, hardware, human resource, and real estate assets encompassed in the scope of the AWS quality management system supporting the development and operation of AWS services, the QMS also includes documented information such as source code, system documentation, and operational policies and procedures. AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.

Project Management Processes

The design of new services, or any significant changes to current services, follows secure software development practices and is controlled through a project management system with multi-disciplinary participation.

Quality Organization Roles

AWS Security Assurance is responsible for familiarizing employees with the AWS security policies. AWS has established information security functions that are aligned with defined structure, reporting lines, and responsibilities. Leadership involvement provides clear direction and visible support for security initiatives. AWS has established a formal audit program that includes continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. AWS maintains a documented audit schedule of internal and external assessments. The needs and expectations of internal and external parties are considered throughout the development, implementation, and auditing of the AWS control environment. Parties include, but are not limited to:
• AWS customers, including current and potential customers
• External parties to AWS, including regulatory bodies, external auditors, and certifying agents
• Internal parties, such as AWS service and infrastructure teams, security, and overarching administrative and corporate teams

Quality Project Planning and Reporting

The AWS planning process defines service requirements and requirements for projects and contracts, and ensures customer needs and expectations are met or exceeded. Planning is achieved through a combination of business and service planning, project teams, quality improvement plans, review of service-related metrics and documentation, self-assessments and supplier audits, and employee training. The AWS quality system is documented to ensure that planning is consistent with all other requirements. AWS continuously monitors service usage to project the infrastructure needed to support availability commitments and requirements. AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently. In addition, the AWS capacity planning model supports the planning of future demand so that additional resources can be acquired and implemented based on current resources and forecasted requirements.
Electronic Records and Electronic Signatures

In the United States (US), GxP regulations are enforced by the US Food and Drug Administration (FDA) and are contained in Title 21 of the Code of Federal Regulations (21 CFR). Within 21 CFR, Part 11 contains the requirements for computer systems that create, modify, maintain, archive, retrieve, or distribute electronic records and electronic signatures in support of GxP-regulated activities (in the EU, the corresponding requirements are in EudraLex Volume 4, Good Manufacturing Practice (GMP) guidelines, Annex 11: Computerised Systems). Part 11 was created to permit the adoption of new information technologies by FDA-regulated life sciences organizations, while simultaneously providing a framework to ensure that electronic GxP data is trustworthy and reliable.

There is no GxP certification for a commercial cloud provider such as AWS. AWS offers commercial off-the-shelf (COTS) IT services according to IT quality and security standards such as ISO 27001, ISO 27017, ISO 27018, ISO 9001, NIST 800-53, and many others. GxP-regulated life sciences customers like you are responsible for purchasing and using AWS services to develop and operate your GxP systems, and for verifying your own GxP compliance and compliance with 21 CFR Part 11. This document, used in conjunction with the other AWS resources noted throughout, may be used to support your electronic records and electronic signatures requirements. A further description of the shared responsibility model as it relates to your use of AWS services in alignment with 21 CFR Part 11 can be found in the Appendix.
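The controls that make electronic records trustworthy in your environment remain your responsibility under the shared responsibility model. As one hedged illustration (not a statement of 21 CFR Part 11 compliance), the sketch below uses boto3 to enable S3 versioning and an AWS CloudTrail trail with log file integrity validation, which many customers use as part of their audit-trail strategy. The bucket and trail names are hypothetical, and the snippet assumes the bucket already exists with a bucket policy that permits CloudTrail log delivery.

```python
import boto3

BUCKET = "example-gxp-audit-logs"   # hypothetical; S3 bucket names must be globally unique
TRAIL = "example-gxp-trail"         # hypothetical trail name

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

# Keep every version of delivered objects so later changes remain visible
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Record API activity in all Regions and turn on log file integrity validation,
# which produces digest files you can use to show logs were not altered after delivery.
# Assumes the bucket already has a bucket policy allowing CloudTrail delivery.
cloudtrail.create_trail(
    Name=TRAIL,
    S3BucketName=BUCKET,
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name=TRAIL)
```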
Company Self-Assessments

AWS Security Assurance monitors the implementation and maintenance of the quality management system by performing verification activities through the AWS audit program to ensure the compliance, suitability, and effectiveness of the quality management system. The AWS audit program includes self-assessments, third-party accreditation audits, and supplier audits. The objective of these audits is to evaluate the operating effectiveness of the AWS quality management system. Self-assessments are performed periodically. Audits by third parties for accreditation are conducted to review the continued performance of AWS against standards-based criteria and to identify general improvement opportunities. Supplier audits are performed to assess a supplier's ability to provide services or material that conform to AWS supply requirements. AWS maintains a documented schedule of all assessments to ensure the implementation and operating effectiveness of the AWS control environment against its objectives.

Contract Reviews

AWS offers services for sale under a standardized customer agreement that has been reviewed to ensure the services are accurately represented, properly promoted, and fairly priced. Please contact your account team if you have questions about AWS service terms.

Corrective and Preventative Actions

AWS takes action to eliminate the cause of nonconformities within the scope of the quality management system in order to prevent recurrence. The following procedure is followed when taking corrective and preventive actions:
1. Identify the specific nonconformities.
2. Determine the causes of the nonconformities.
3. Evaluate the need for actions to ensure that nonconformities do not recur.
4. Determine and implement the corrective action(s) needed.
5. Record the results of the action(s) taken.
6. Review the corrective action(s) taken.
7. Determine and implement the preventive action(s) needed.
8. Record the results of the action(s) taken.
9. Review the preventive action(s) taken.
The records of corrective actions may be reviewed during regularly scheduled AWS management meetings.

Customer Complaints

AWS relies on procedures and specific metrics to support you. Customer reports and complaints are investigated and, where required, actions are taken to resolve them. You can contact AWS at https://aws.amazon.com/contact-us/ or speak directly with your account team for support.

Third Party Management

AWS maintains a supplier management team to foster third-party relationships and monitor third-party performance. SLAs and SLOs are implemented to monitor performance. AWS creates and maintains written agreements with third parties (for example, contractors or vendors) in accordance with the work or service to be provided (for example, network services, service delivery, or information exchange), and implements appropriate relationship management mechanisms in line with their relationship to the business. AWS monitors the performance of third parties through periodic reviews, using a risk-based approach, which evaluate performance against contractual obligations.

Training Records

Personnel at all levels of AWS are experienced and receive training in the skill areas of their jobs as well as other assigned training. Training needs are identified to ensure that training is continuously provided and is appropriate for each operation (process) affecting quality. Personnel required to work under special conditions, or requiring specialized skills, are trained to ensure their competency. Records of training and certification are maintained to verify that individuals have appropriate training. AWS has developed, documented, and disseminated role-based security awareness training for employees responsible for designing, developing, implementing, operating, maintaining, and monitoring systems affecting security and availability, and provides the resources necessary for employees to fulfill their responsibilities. Training includes, but is not limited to, the following information (when relevant to the employee's role):
• Workforce conduct standards
• Candidate background screening procedures
• Clear desk policy and procedures
• Social engineering, phishing, and malware
• Data handling and protection
• Compliance commitments
• Use of AWS security tools
• Security precautions while traveling
• How to report security and availability failures, incidents, concerns, and other complaints to appropriate personnel
• How to recognize suspicious communications and anomalous behavior in organizational information systems
• Practical exercises that reinforce training objectives
• HIPAA responsibilities

Personnel Records

AWS performs periodic formal evaluations of resourcing and staffing, including an assessment of employee qualification alignment with entity objectives. Personnel records are managed through an internal Amazon system.
Infrastructure Management

The Infrastructure team maintains and operates a configuration management framework that addresses hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, Amazon is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software. Internally developed configuration management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured, and that software is installed, in compliance with standards determined by the role assigned to the host. This configuration management software also helps to regularly update packages that are already installed on the host. Only approved personnel, enabled through the permissions service, may log in to the central configuration management servers.

AWS notifies you of certain changes to the AWS service offerings where appropriate. AWS continuously evolves and improves its existing services, frequently adding new services or features to existing services. Further, because AWS services are controlled using APIs, if AWS changes or discontinues any API used to make calls to the services, AWS continues to offer the existing API for 12 months (as of this publication) to give you time to adjust accordingly. Additionally, AWS provides you with a Personal Health Dashboard, with service health and status information specific to your account, as well as a public Service Health Dashboard that provides all customers with the real-time operational status of AWS services at the Regional level, at http://status.aws.amazon.com.
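If you want to fold these notifications into your own monitoring or change-evaluation SOPs, the AWS Health API (the programmatic interface behind the Personal Health Dashboard) can be polled as sketched below. Two assumptions: the AWS Health API requires a Business or Enterprise Support plan, and the example simply prints open or upcoming events rather than integrating with a ticketing system.

```python
import boto3

# The AWS Health API is served from the us-east-1 (global) endpoint and
# requires a Business or Enterprise Support plan.
health = boto3.client("health", region_name="us-east-1")

events = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]}
)["events"]

for event in events:
    # Feed these into your own change-evaluation or incident SOPs as appropriate
    print(
        event["service"],
        event["eventTypeCode"],
        event["statusCode"],
        event.get("region", "global"),
    )
```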
Software Development

Software Development Processes

The Project and Operation stages of the lifecycle approach in GAMP, for instance, are reflected in the AWS information and activities surrounding the organizational mechanisms that guide the development and configuration of the information system, including software development lifecycles and software change management. Elements of these organizational mechanisms include policies and standards, the code pathway, deployment, a change management tool, ongoing monitoring, security reviews, emergency changes, management of outsourced and unauthorized development, and communication of changes to customers.

The software development lifecycle activities at AWS include the code development and change management processes, which are centralized across AWS teams developing externally and internally facing code, with processes applying to both internal and external service teams. Code deployed at AWS is developed and managed in a consistent process regardless of its ultimate destination. Several systems are utilized in this process, including:
• A code management system used to assemble a code package as part of development
• An internal source code repository
• The hosting system in which AWS code pipelines are staged
• The tool used for automating the testing, approval, deployment, and ongoing monitoring of code
• A change management tool, which breaks change workflows down into discrete, easy-to-manage steps and tracks change details
• A monitoring service to detect unapproved changes to code or configurations in production systems; any variances are escalated to the service owner/team

Code Pathway

The AWS code pathway steps from development to deployment are outlined below. This process is executed regardless of whether the code is net new or represents a change to an existing codebase.
1. The developer writes the code in an approved integrated development environment running on an AWS-managed developer desktop environment. The developer typically does an initial build and integration test prior to the next step.
2. The developer checks in the code for review to an internal source code repository.
3. The code goes through a code review verification, in which at least one additional person reviews the code and approves it. The list of approvals is stored in an immutable log that is retained within the code review tool.
4. The code is then built from source code into the appropriate type of deployable code package (which varies from language to language) in an internal build system.
5. After a successful build, including successful passing of all integration tests, the code is pushed to a test environment.
6. The code goes through automated integration and verification tests in the pre-production environments and, upon successful testing, is pushed to production.

AWS may implement open source code within its services, but any such use of open source code is still subject to the approval, packaging, review, deployment, and monitoring processes described above. Open source software, including binary or machine-executable code and open source licenses, is additionally reviewed and approved prior to implementation. AWS maintains a list of approved open source software, as well as open source software that is prohibited.
and your use of the services Software AWS applies a systematic approach to managing change so that changes to customer impacting services are thoroughly reviewed tested approved and well communicated The AWS change management process is designed to avoid unintended service disruptions and to maintain th e integrity of service to you Changes deployed into production environments are: • Prepared: this includes scheduling determining resources creating notification lists scoping dependencies minimizing concurrent changes as well as a special process for e mergent or long running changes • Submitted: this includes utilizing a Change Management Tool to document and request the change determine potential impact conduct a code review create a detailed timeline and activity plan and develop a detailed rollback procedure Amazon Web Services GxP Systems on AWS 28 • Reviewed and Approved: Peer reviews of the technical aspects of a change are required Changes must be authorized in order to provide appropriate oversight and understanding of business and security impact The configuration management process includes key organizational personnel that are responsible for reviewing and approving proposed changes to the information system • Tested : Changes being applied are tested to help ensure they will behave as expected and not adversely impact performance • Performed: This includes pre and post change notification managing timeline monitoring service health and metrics and closing out the change AWS service teams maintain a current authoritative baseline configuration for systems and devices Change Manage ment tickets are submitted before changes are deployed (unless it is an emergency change) and include impact analysis security considerations description timeframe and approvals Changes are pushed into production in a phased deployment starting with lo west impact areas Deployments are tested on a single system and closely monitored so impacts can be evaluated Service owners have a number of configurable metrics that measure the health of the service’s upstream dependencies These metrics are closely m onitored with thresholds and alarming in place Rollback procedures are documented in the Change Management (CM) ticket AWS service teams retain older versions of AWS baseline packages and configurations necessary to support rollback and p revious versions are s tored in the repository systems Integration testing and the validation process is performed before rollbacks are implemented When possible changes are scheduled during regular change windows In addition to the preventative controls that are part of the pipeline (eg code review verifications test environments) AWS also uses detective controls configured to alert and notify personnel when a change is detected that may have been made without standard procedure AWS checks deployments to ensure that they have the appropriate reviews and approvals to be applied before the code is committed to production Exceptions for reviews and approvals for production lead to automatic ticketing and notification of the service team After code is depl oyed to the Production environment AWS performs ongoing monitoring of performance through a variety of monitoring processes AWS host configuration settings are also monitored as part of vulnerability monitoring to validate compliance with AWS security st andards Audit trails of the changes are maintained Emergency changes to production systems that require deviations from standard change management procedures are 
Emergency changes to production systems that require deviations from standard change management procedures are associated with an incident and are logged and approved as appropriate. Periodically, AWS performs self-audits of changes to key services to monitor quality, maintain high standards, and facilitate continuous improvement of the change management process. Any exceptions are analyzed to determine the root cause, and appropriate actions are taken to bring the change into compliance or to roll it back if necessary. Actions are then taken to address and remediate the process or people issue.

Reviews

AWS performs internal security reviews, against Amazon security standards, of externally launched products, services, and significant feature additions prior to launch, to ensure security risks are identified and mitigated before deployment to a customer environment. AWS security reviews include evaluating the service's design, threat model, and impact on AWS's risk profile. A typical security review starts with a service team initiating a review request to the dedicated team and submitting detailed information about the artifacts being reviewed. Based on this information, AWS reviews the design and identifies security considerations, including but not limited to: appropriate use of encryption, analysis of data handling, regulatory considerations, and adherence to secure coding practices. Hardware, firmware, and virtualization software also undergo security reviews, including a security review of the hardware design, the actual implementation, and final hardware samples. Code package changes are subject to the following security activities:
• Full security assessment
• Threat modeling
• Security design reviews
• Secure code reviews (manual and automated methods)
• Security testing
• Vulnerability/penetration testing
Successful completion of these activities is a prerequisite for service launch. Development teams are responsible for the security of the features they develop, which must meet the security engineering principles. Infrastructure teams incorporate security principles into the configuration of servers and network devices, with least privilege enforced throughout. Findings identified by AWS are categorized in terms of risk and tracked in an automated workflow tool.

Product Release

For all AWS services, information can be found on the associated service website, which describes the key attributes of the service as well as product details, pricing information, developer resources (including release notes and developer tools), FAQs, blogs, presentations, and additional documentation such as developer guides, API references, and use cases where relevant (https://aws.amazon.com/products/).

Customer Training

AWS has implemented various methods of external communication to support its customer base and the community. Mechanisms are in place to allow the customer support team to be notified of operational issues that impact your experience. A Service Health Dashboard is available and maintained by the customer support team to alert you to any issues that may be of broad impact. The AWS Cloud Security Center (https://aws.amazon.com/security/) and the Healthcare and Life Sciences Center (https://aws.amazon.com/health/) are available to provide you with security and compliance details and life sciences-related enablement information about AWS. You can also subscribe to AWS Support offerings that include direct communication with the customer support team and proactive alerts to any customer-impacting issues. AWS also offers a series of training and certification programs (https://www.aws.training/) on a number of cloud-related topics, in addition to a series of service and support offerings available through your AWS account team.
AWS Products in GxP Systems

With limited technical guidance available from regulatory and industry bodies, this section describes some of the best practices we have seen customers adopt when using cloud services to meet their regulatory compliance needs. The final FDA guidance document "Data Integrity and Compliance With Drug CGMP" explicitly brings cloud infrastructure into scope through the revised definition of "computer or related systems":

"The American National Standards Institute (ANSI) defines systems as people, machines, and methods organized to accomplish a set of specific functions. Computer or related systems can refer to computer hardware, software, peripheral devices, networks, cloud infrastructure, personnel, and associated documents (e.g., user manuals and standard operating procedures)."

Further, industry organizations like ISPE are increasingly dedicating publications to cloud usage in the life sciences (Getting Ready For Pharma 4.0: Data Integrity in Cloud and Big Data Applications). As described throughout this whitepaper, there is no unique certification for GxP regulations, so each customer defines their own risk profile. It is therefore important to note that, although this whitepaper is based on AWS experience with life science customers, you retain final accountability and must determine your own regulatory obligations.

To begin with, even when deployed in the cloud, GxP applications still need to be validated and their underlying infrastructure still needs to be qualified. The basic principles governing on-premises infrastructure qualification still apply to virtualized cloud infrastructure; therefore, current industry guidance should still be leveraged. Traditionally, a regulated company was accountable and responsible for all aspects of its infrastructure qualification and application validation. With the introduction of public cloud providers, part of that responsibility has shifted to the cloud supplier. The regulated company is still accountable, but the cloud supplier is now responsible for the qualification of the physical infrastructure, virtualization, and service layers, and for completely managing the services it provides. The big difference is that there is now a shared compliance responsibility model, similar to the shared security responsibility model described earlier in this whitepaper. Previous sections of this whitepaper described how AWS takes care of its part of the shared responsibility model; this section provides recommended strategies for covering your part in GxP environments.

Involving AWS

Achieving GxP compliance when adopting cloud technology is a journey. AWS has helped many customers along this journey, and there is no compression algorithm for experience. For example, Core Informatics states:

"Using AWS, we can help organizations accelerate discovery while maintaining GxP compliance. It's transforming our business and, more importantly, helping our customers transform their businesses." — Richard Duffy, Vice President of Engineering, Core Informatics

For the complete case study, see the Core Informatics Case Study. For a selection of other customer case studies, see AWS Customer Success. Industry guidance recommends that companies maximize supplier involvement and leverage our knowledge, experience, and even our documentation as much as possible, as we provide in the following sections and throughout this whitepaper. Please contact us to discuss starting your journey to the cloud.
Qualification Strategy for Life Science Organizations

One of the concerns for regulated enterprise customers is how to qualify and demonstrate control over a system when so much of the responsibility is now shared with a supplier. The purpose of a Qualification Strategy is to answer this question. Some customers view a Qualification Strategy as an overarching validation plan. The strategy employs various tactics to address the regulatory needs of the customer. To better scope the Qualification Strategy, the architecture should be viewed in its entirety. Enterprise-scale customers typically define the architecture similar to the following:

Figure 2: Layered architecture (layers: applications, building blocks, regulated landing zone, AWS services; responsibilities: customer accountability, customer responsibility, AWS responsibility)

The diagram illustrates a layered architecture in which a large part is delegated to AWS. From this approach, a Qualification Strategy can be defined to address four main areas:
1. How to work with AWS as a supplier of services
2. The qualification of the regulated landing zone
3. The qualification of building blocks
4. Supporting the development of GxP applications

The situation changes slightly if the customer leverages a service provider like AWS Managed Services, where the build, operation, and maintenance of the landing zone is done by the service provider. Conversely, for workloads that must remain on-premises, AWS Outposts extends AWS services, including compute, storage, and networking, to customer sites. Data can be configured to be stored locally, and customers are responsible for controlling access around Outposts equipment. Data that is processed and stored on-premises is accessible over the customer's local network. In this case, customer responsibility extends into the AWS services layer (Figure 3).

Figure 3: Layered architecture with a service provider (layers as in Figure 2, with an additional service provider responsibility layer)

In this situation, even more responsibility is delegated by the customer, so the controls that the customer would typically put in place to govern its own operations now need adaptations to check that similar controls are implemented by the service provider. The controls that are inherited from AWS, shared, or that remain with the customer were covered previously in the Shared Security Responsibility Model section of this whitepaper. This section describes these layers at a high level; they are expanded upon in later sections.

Industry Guidance

The following guidance is, at a minimum, a best practice for your environment. You should still work with your professionals to ensure you comply with applicable regulatory requirements. The same basic principles that govern on-premises infrastructure qualification also apply to cloud-based systems. Therefore, this strategy leverages and builds upon that same industry guidance from a cloud perspective, based on the following ISPE GAMP Good Practice Guides (Figure 4):
• GAMP Good Practice Guide: IT Infrastructure Control and Compliance, 2nd Edition
• GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems
Figure 4: Mapping industry guidance to architecture layers (layers: applications, building blocks, regulated landing zone, AWS services; guides: GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems; GAMP Good Practice Guide: IT Infrastructure Control and Compliance, 2nd Edition)

Supplier Assessment and Management

Industry guidance suggests you leverage a supplier's experience, knowledge, and documentation as much as possible. However, with so much responsibility now delegated to a supplier, the supplier assessment becomes even more important. A regulated company is still ultimately accountable for demonstrating that a GxP system is compliant, even if a supplier is responsible for parts of that system, so the regulated customer needs to establish sufficient trust in its supplier. The cloud service provider must be assessed, first to determine whether it can deliver the services offered, but also to determine the suitability of its quality system and whether that system is systematically followed. The supplier needs to show that it has a QMS and follows a documented set of procedures and standards governing activities such as:
• Infrastructure qualification and operation
• Software development
• Change management
• Release management
• Configuration management
• Supplier management
• Training
• System security

Details of the AWS QMS are covered in the software section of this whitepaper. The capabilities of AWS to satisfy these areas may be reassessed on a periodic basis, typically by reviewing the latest materials available through AWS Artifact (i.e., AWS certifications and audit reports). It is also important to consider and plan how operational processes that span the shared responsibility model will operate: for example, how to manage changes made by AWS to services used as part of your landing zone or applications, incident response management in cases of outages, or portability requirements should there be a need to change cloud service provider.

Regulated Landing Zone

One of the main functions of the landing zone is to provide a solid foundation for development teams to build on and to address as many regulatory requirements as possible, thus removing that responsibility from the development teams. The GAMP IT Infrastructure Control and Compliance guidance document follows a platform-based approach to the qualification of IT infrastructure, which aligns well with a customer's need to qualify their landing zone. AWS Control Tower provides the easiest way to set up and govern a new, secure, multi-account AWS environment based on best practices established through AWS's experience working with thousands of enterprises as they move to the cloud. See AWS Control Tower features for details of what is included in a typical landing zone.

GAMP also describes two scenarios for approaching platform qualification:
1. The first scenario is independent of any specific application and instead considers generic requirements for the platform or landing zone.
2. The second scenario derives the requirements of the platform directly from the applications that will run on it.

For many customers, when first building their landing zone, the exact nature of the applications that will run on it is unclear. Therefore, this paper follows scenario 1 and approaches the qualification independent of any specific application. In short, the landing zone gives application teams a solid foundation to build on while addressing as many regulatory requirements as possible, reducing the regulatory burden on those teams.
Tooling and Automation

Many customers include common tooling and automation as part of the landing zone so that it can be qualified and validated once and used by all development teams. This common tooling often resides in the shared services account of the landing zone. For example, standard tooling around requirements management, test management, CI/CD, and so on needs to be qualified and validated. Similarly, any automation of IT processes also needs to be validated; for example, it is possible to automate the Installation Qualification (IQ) step of your Computer System Validation process, as illustrated in the sketch below.
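A minimal sketch of what such an automated IQ check might look like, assuming the infrastructure was deployed as an AWS CloudFormation stack: the script compares a few deployed attributes against an approved specification and emits pass/fail results that can be captured as IQ evidence. The stack name and expected values are hypothetical and would come from your own design and configuration specification.

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical expected values taken from your approved design/configuration specification
EXPECTED = {
    "StackName": "example-qualified-building-block",
    "Description": "Approved three-tier web stack v1.2",
}

stack = cfn.describe_stacks(StackName=EXPECTED["StackName"])["Stacks"][0]

checks = {
    "Stack exists and is healthy": stack["StackStatus"] in ("CREATE_COMPLETE", "UPDATE_COMPLETE"),
    "Deployed from approved template": stack.get("Description") == EXPECTED["Description"],
    "Termination protection enabled": stack.get("EnableTerminationProtection", False) is True,
}

# Each result becomes an IQ record; in practice you would write these to your
# validated evidence repository rather than printing them.
for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```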
Services GxP Systems on AWS 38 Document Management System Other customers have fully adopted automation and achieved ‘ near zero documentation’ by validating their tool chain and relying on the data stored in those tools as evide nce Validation During Cloud Migration One important point that may be covered in a Qualification Strategy is the overarching approach to Computer System Validation (CSV) during migration If you are embarking on a migration effort part of the analysis of the application portfolio will be to identify archetypes or groups of applications with similar architectures A single runbook can be developed and then repeated for each of the applications in the group speeding up migration At this point if the app lications are GxP relevant the CSV/migration strategy can also be defined for the archetype and repeated for each application Supplier Assessment and Cloud Management As mentioned earlier gaining trust in a Cloud Service Provider is critical as you will be inheriting certain cloud infrastructure and security controls from the Cloud Service Provider The approach described by industry guidance involves several steps whi ch we will cover here Basic Supplier Assessment The first (optional) step is to perform a basic supplier assessment to check the supplier’s market reputation knowledge and experience working in regulated industries prior experience working with other re gulated companies and what certifications they hold You can leverage industry assessments such as Gartner’s assessment on the AWS News Blog post AWS Named as a Cloud Leader for the 10th Consecutive Year in Gartner’s Infrastructure & Platform Services Magic Quadrant and customer testimonials Documentation Review A supplier assessment often include s a deep dive into the assets available from the supplier describing their QM S and operations This includes reviewing certifications audit reports and whitepapers For more information see the AWS Risk and Compliance whitepape r Amazon Web Services GxP Systems on AWS 39 AWS and its customers share control over the IT environment and both parties have responsibility for managing the IT environment The AWS part in this shared responsibility inc ludes providing services on a highly secure and cont rolled platform and providing a wide array of security features customers can use The customer’s responsibility includes configuring their IT environments in a secure and controlled manner for their purposes While customers don’t communicate their use and configurations to AWS AWS does communicate its security and control environment relevant to customers AWS does this by doing the following: • Obtaining industry certifications and independent third party attestations • Publishing information about the AWS security and control pra ctices in whitepapers and web site content • Providing certificates reports and other documentation directly to AWS customers under NDA (as required) For a more detailed description of AWS Security see AWS Cloud Security AWS Artifact provides on demand access to AWS security and compliance reports and select online agreements Reports available in AWS Artifact include our Service Organization Control (SOC) reports Payment Card Industry (PCI) reports and certifications from accredita tion bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreemen t (NDA) For a 
more detailed description of AWS Compliance see AWS Compliance If you have additional questions about the AWS certifications or the compliance documentation AWS makes available please bring those questions to your account team Review Service Level Agreements (SLA) AWS offers service level agreements for certain AWS services Further information can be found under Service Level Agreements (SLAs) Audit Mail Audit – To supplement the AWS documentation you have gathered a mail audit questionnaire (sometimes referred to as a supplier questionnaire) may be submitted to AWS to gather additional information or to ask cla rifying questions You should work with your account team to request a mail audit Amazon Web Services GxP Systems on AWS 40 Onsite Audit – AWS regularly undergoes independent third party attestation audits to provide assurance that control activities are operating as intended Currently AWS participates in over 50 different audit programs The results of these audits are documented by the assessing body and made available for all AWS customers through AWS Artifact These thirdparty attestations and certifications of AWS provide you with visibility and independent validation of the control en vironment eliminating the need for customers to perform individual onsite audits Such attestations and certifications may also help relieve you of the requirement to perform certain validation work yourself for your IT environment in the AWS Cloud For d etails see the AWS Quality Management System section of this whitepaper Contractual Agreement Once you have completed a supplier assessment of AWS the next step is to set up a contractual agreement for using AWS services The AWS Customer Agreement is available at: https://awsamazoncom/agreement/ ) You are responsible for interpreting regulations and determining whether the appropriate requirements are included in a contract with standard terms If you have any questions about entering into a service agreement with AWS please contact your account team Cloud Management Processes There are certain processes that span the shared responsibility model and typically must be captured in your QMS in the form of SOPs and work instructions Change Management Change Management is a bidirectional process when dealing with a cloud service provider On the one hand AWS is co ntinually making changes to improve its services as mentioned earlier in this paper On the other hand you can make feature requests which is highly encouraged as 90% of the AWS service features are as a result of direct customer feedback Customers typically use a risk based approach appropriate fo r the type of change to determine the subsequent actions Changes to AWS services which add functionality are not usually a concern because no application will be using that new functionality yet However new functionality may trigger an internal assessme nt to determine if it affects the risk profile of the service and Amazon Web Services GxP Systems on AWS 41 should be allowed for use If mandated by your QMS this may trigger a re qualification of building blocks prior to allowing the new functionality Deprecations are considered more critical because they could break an application A deprecation may include a thirdparty library utility or version of languages such as Python The deprecation of a service or feature is rare Once you receive the notification of a deprecation you should trigger an impact assessment If an impact is found the application teams should plan changes to remediate the impac t The 
Cloud Platform/Landing Zone Qualification

A landing zone, such as the one created by AWS Control Tower, is a well-architected, multi-account AWS environment based on security and compliance best practices. The landing zone includes capabilities for centralized logging, security, account vending, and core network connectivity. We recommend that you then build features into the landing zone to satisfy as many regulatory requirements as possible and to effectively remove that burden from the development teams that build on it. The objective of the landing zone, and of the team owning it, should be to provide the guardrails and features that free developers to use the right tools for the job and to focus on delivering differentiated business value rather than on compliance.

For example, account vending could be extended to include account bootstrapping to automatically direct logs to the central logging account, drop default VPCs and instantiate an approved VPC (if needed at all), deploy baseline stack sets, and establish standard roles to support things like automated installation qualification (IQ). The Shared Services account would house centralized capabilities and automations, such as the automation of IQ. The centralized logging account could satisfy regulatory requirements around audit trails, including, for example, record retention through the use of lifecycle policies. The addition of a backup and archive account could provide standard backup and restore, along with archiving services, for application teams to use.
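As one illustration of the record-retention point above, a lifecycle configuration on a central logging bucket can transition audit logs to archival storage and retain them for a defined period. The following boto3 sketch is a minimal example, not a prescribed configuration; the bucket name, prefix, storage class, and retention periods are assumptions you would replace with values from your own retention SOP.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical central logging bucket; substitute your own.
LOG_BUCKET = "example-central-logging-bucket"

s3.put_bucket_lifecycle_configuration(
    Bucket=LOG_BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-audit-trails",
                "Filter": {"Prefix": "cloudtrail/"},  # apply to audit-trail objects only
                "Status": "Enabled",
                # Move logs to lower-cost archival storage after 90 days (assumed).
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # Expire only after the assumed 10-year retention period has elapsed.
                "Expiration": {"Days": 3650},
            }
        ]
    },
)
```

Where stricter integrity requirements apply, S3 Object Lock can additionally prevent deletion of log objects during the retention period.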
Similarly, a standardized approach to disaster recovery (DR) can be provided by the landing zone using tools like CloudEndure Disaster Recovery. If you follow AWS guidance, implement a Cloud Center of Excellence (CCoE), and consider the landing zone as a product, the CCoE team takes on the responsibility of building these capabilities into the landing zone to satisfy regulatory requirements.

The number of capabilities built into the landing zone is often influenced by the organizational structure around it. If you have a traditional structure with a divide between development teams and infrastructure, tasks like server and network management are centralized, and these capabilities are built into the platform. If you adopt a product-centric operating model, the development teams become more autonomous and responsible for more of the stack, perhaps even the entire stack, from the VPC and everything built on it. Also consider that with serverless architectures you may not need a VPC at all, because there are no servers to manage.

This underlying cloud platform, when supporting GxP applications, should be qualified to demonstrate proper configuration and to ensure that a state of control and compliance is maintained. The qualification of the cloud can follow a traditional infrastructure qualification project, which includes planning, specification and design, risk assessment, qualification test planning, installation qualification (IQ), operational qualification (OQ), and handover (as described in Section 5 of GAMP IT, Qualification of Platforms). The components (configuration items) that make up the landing zone should all be deployed through automated means, that is, an automated pipeline. This approach supports better change management going forward. After the completion of the infrastructure project and the creation of the operations and maintenance SOPs, you have a qualified cloud platform upon which GxP workloads can run. The SOPs cover topics such as account provisioning, access management, change management, and so on.

Maintaining the Landing Zone's Qualified State

Once the landing zone is live, it must be maintained in a qualified state. Unless operations are delegated to a partner, you typically create a Cloud Platform Operations and Maintenance SOP based on Section 6 of GAMP IT, Infrastructure Control and Compliance. According to GAMP, there are several areas where control must be shown, such as change management, configuration management, and security management. GAMP guidance also suggests that "automatic tools" should be used whenever possible. The following sections cover these control areas and how AWS services can help with automation.

Change Management

Change management processes control how changes to configuration items are made. These processes should include an assessment of the potential impact on the GxP applications supported by the landing zone. As mentioned earlier, all of the landing zone components are deployed using an automated pipeline. Therefore, once a change has been approved and committed in a source code repository tool like AWS CodeCommit, the pipeline is triggered and the change deployed. There will likely be multiple pipelines for the various parts that make up the landing zone. The landing zone is made up of infrastructure and automation components; through the use of infrastructure as code, there is no real difference in how these different components are deployed.

We recommend a continuous deployment methodology, because it ensures changes are automatically built, tested, and deployed, with the goal of eliminating as many manual steps as possible. This allows development teams to standardize the process and increase the efficiency with which they deploy code. In continuous deployment, an entire release process is a pipeline containing stages. AWS CodePipeline can be used along with AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. For customers needing additional approval steps, AWS CodePipeline also supports the inclusion of manual approval actions.

All changes to AWS resources, whether manual or automated, are logged by AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting, and you can use CloudTrail to detect unusual activity in your AWS accounts. Of course, customers also want to be alerted about any unauthorized and unintended changes. You can use a combination of AWS CloudTrail and Amazon CloudWatch to detect unauthorized changes made to the production environment and even automate immediate remediation. Amazon CloudWatch is a monitoring service for AWS Cloud resources and can be used to trigger responses to AWS CloudTrail events (see https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html).
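The following boto3 sketch shows one way such alerting could be wired up, assuming your CloudTrail trail already delivers events to a CloudWatch Logs log group. The log group name, the pipeline deployment role used to recognize authorized changes, the SNS topic, and the filter pattern itself are illustrative placeholders, not a prescribed configuration.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/LandingZoneTrail"        # assumed CloudTrail log group
PIPELINE_ROLE = "landing-zone-pipeline-role"     # assumed deployment role name
ALARM_TOPIC = "arn:aws:sns:us-east-1:111122223333:quality-alerts"  # placeholder

# Count CloudFormation write calls that did not come from the pipeline role.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="ManualCloudFormationChanges",
    filterPattern=(
        '{ ($.eventSource = "cloudformation.amazonaws.com") && '
        '(($.eventName = CreateStack) || ($.eventName = UpdateStack) || ($.eventName = DeleteStack)) && '
        '($.userIdentity.sessionContext.sessionIssuer.userName != "' + PIPELINE_ROLE + '") }'
    ),
    metricTransformations=[{
        "metricName": "ManualStackChanges",
        "metricNamespace": "LandingZone/ChangeControl",
        "metricValue": "1",
    }],
)

# Alarm as soon as any such event is observed.
cloudwatch.put_metric_alarm(
    AlarmName="manual-cloudformation-change-detected",
    Namespace="LandingZone/ChangeControl",
    MetricName="ManualStackChanges",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[ALARM_TOPIC],
)
```

The alarm action could notify the quality team or trigger an automated remediation workflow, depending on how your change-control SOP classifies the event.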
Configuration Management

Going hand in hand with change management is configuration management. Configuration items (CIs) are the components that make up a system, and CIs should only be modified through the change management process. Infrastructure as Code brings automation to the provisioning process through tools like AWS CloudFormation. Rather than relying on manually performed steps, both administrators and developers can instantiate infrastructure using configuration files. Infrastructure as Code treats these configuration files as software code. These files can be used to produce a set of artifacts, namely the compute, storage, network, and application services that comprise an operating environment. Infrastructure as Code eliminates configuration drift through automation, thereby increasing the speed and agility of infrastructure deployments.

AWS Tagging and Resource Groups let you organize your AWS landscape by applying tags at different levels of granularity. Tags allow you to label, collect, and organize resources and components within services. The Tag Editor lets you manage tags across services and AWS Regions. Using this approach, you can globally manage all the application, business, data, and technology components of your target landscape. A Resource Group is a collection of resources that share one or more tags. It can be used to create an enterprise architecture view of your IT landscape, consolidating AWS resources into a per-project (the ongoing programs that realize your target landscape), per-entity (capabilities, roles, processes), and per-domain (Business, Application, Data, Technology) view.

AWS Config is a service that lets you assess, audit, and evaluate the configurations of AWS resources. AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations. With AWS Config, you can review changes in configurations and determine their overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting. In addition, AWS provides conformance packs for AWS Config: a general-purpose compliance framework designed to enable you to create security, operational, or cost-optimization governance checks using managed or custom AWS Config rules and AWS Config remediation actions, including a conformance pack for 21 CFR Part 11.

You can use AWS CloudFormation, AWS Config, and Tagging and Resource Groups to see exactly what cloud assets your company is using at any moment. These services also make it easier to detect when a rogue server or shadow application appears in your target production landscape.
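To illustrate how such checks might be consumed, the sketch below queries AWS Config for resources that violate a rule and could feed a periodic compliance report or an alert. The rule name is an example managed rule assumed to be already deployed in the account (for example, through a conformance pack); treat it as a placeholder for whichever rules your internal guidelines require.

```python
import boto3

config = boto3.client("config")

# Example managed rule; assumed to be deployed already (e.g., via a conformance pack).
RULE_NAME = "s3-bucket-public-read-prohibited"

paginator = config.get_paginator("get_compliance_details_by_config_rule")
noncompliant = []

for page in paginator.paginate(
    ConfigRuleName=RULE_NAME,
    ComplianceTypes=["NON_COMPLIANT"],
):
    for result in page["EvaluationResults"]:
        qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
        noncompliant.append(
            (qualifier["ResourceType"], qualifier["ResourceId"], result["ResultRecordedTime"])
        )

# In a real deployment this list would feed an alert or a periodic compliance report.
for resource_type, resource_id, recorded in noncompliant:
    print(f"NON_COMPLIANT: {resource_type} {resource_id} (recorded {recorded})")
```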
Security Management

AWS has defined a set of best practices for customers who are designing the security infrastructure and configuration for applications running on AWS. These AWS resources provide security best practices that help you define your Information Security Management System (ISMS) and build a set of security policies and processes for your organization, so you can protect your data and assets in the AWS Cloud. They also provide an overview of different security topics, such as identifying, categorizing, and protecting your assets on AWS; managing access to AWS resources using accounts, users, and groups; and ways you can secure your data, operating systems, applications, and overall infrastructure in the cloud.

AWS provides you with an extensive set of tools to secure workloads in the cloud. If you implement full automation, it could negate the need for anyone to have direct access to any environment beyond development. However, if a situation occurs that requires someone to access a production environment, they must explicitly request access, have the request reviewed and approved by the appropriate owner and, upon approval, obtain temporary access with the least privilege needed and only for the duration required. You should then track their activities through logging while they have access. You can refer to this AWS resource for further information.
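One common way to implement this kind of time-boxed, least-privilege access is to have the approved operator assume a dedicated break-glass role rather than hold standing credentials. This is only a sketch of that pattern: the role ARN, session duration, and ticket reference below are assumptions, and the session name is chosen so that CloudTrail entries can be traced back to the approved request.

```python
import boto3

sts = boto3.client("sts")

# Hypothetical break-glass role and change/incident ticket reference.
ROLE_ARN = "arn:aws:iam::111122223333:role/prod-break-glass-readonly"
TICKET_ID = "CHG-12345"

response = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName=f"break-glass-{TICKET_ID}",  # appears in CloudTrail for traceability
    DurationSeconds=3600,                        # access expires automatically after one hour
)

credentials = response["Credentials"]

# The temporary credentials are used for the approved task and expire on their own.
session = boto3.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
print("Temporary access expires at:", credentials["Expiration"])
```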
Problem and Incident Management

With AWS, you get access to many tools and features to help you meet your problem and incident management objectives. These capabilities help you establish a configuration and security baseline that meets your objectives for your applications running in the cloud. When a deviation from your baseline does occur (for example, because of a misconfiguration), you may need to respond and investigate. To do so successfully, you must understand the basic concepts of security incident response within your AWS environment, as well as the issues you need to consider to prepare, educate, and train your cloud teams before security issues occur. It is important to know which controls and capabilities you can use, to review topical examples for resolving potential concerns, and to identify remediation methods that leverage automation and improve response speed.

Because security incident response can be a complex topic, we encourage you to start small, develop runbooks, leverage basic capabilities, and create an initial library of incident response mechanisms to iterate from and improve upon. This initial work should include teams that are not involved with security, including your legal department, so that they are better able to understand the impact that incident response (IR), and the choices they have made, have on your corporate goals. For a comprehensive guide, see the AWS Security Incident Response Guide.

Backup, Restore, Archiving

The ability to back up and restore is required for all validated applications. It is therefore a common capability that can be centralized as part of the regulated landing zone. Backup and restore should not be confused with archiving and retrieval, but the two areas can be combined into a centralized capability. For a cloud-based backup and restore capability, consider AWS Backup. AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services. Using AWS Backup, you can centrally configure backup policies and monitor backup activity for AWS resources such as Amazon EBS volumes, Amazon EC2 instances, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, Amazon FSx file systems, and AWS Storage Gateway volumes. AWS Backup automates and consolidates backup tasks previously performed service by service, removing the need to create custom scripts and manual processes. With just a few clicks in the AWS Backup console, you can create backup policies that automate backup schedules and retention management. AWS Backup provides a fully managed, policy-based backup solution, simplifying your backup management and enabling you to meet your business and regulatory backup compliance requirements.
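The same policies can also be created programmatically, which fits the landing zone approach of deploying everything through code. The sketch below creates a simple daily backup plan and assigns resources to it by tag; the vault name, IAM role, schedule, tag key, and retention values are assumptions to adapt to your own backup SOP.

```python
import boto3

backup = boto3.client("backup")

# Assumed values: an existing backup vault and an IAM role AWS Backup can assume.
VAULT_NAME = "regulated-landing-zone-vault"
BACKUP_ROLE_ARN = "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole"

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "gxp-daily-backups",
        "Rules": [{
            "RuleName": "daily-35-day-retention",
            "TargetBackupVaultName": VAULT_NAME,
            "ScheduleExpression": "cron(0 3 * * ? *)",  # daily at 03:00 UTC (assumed)
            "Lifecycle": {"DeleteAfterDays": 35},        # retention period (assumed)
        }],
    }
)

# Back up every resource carrying the assumed GxP tag.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "gxp-tagged-resources",
        "IamRoleArn": BACKUP_ROLE_ARN,
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "gxp",
            "ConditionValue": "true",
        }],
    },
)
```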
Disaster Recovery

In traditional on-premises situations, disaster recovery (DR) involves a separate data center located a certain distance from the primary data center. This separate data center exists only in case of a complete disaster impacting the primary data center. Often the infrastructure at the DR site sits idle or, at best, hosts preproduction instances of applications, running the risk of being out of sync with production. With the advent of cloud, DR is now much easier and cheaper. The AWS Global Infrastructure is built around AWS Regions and Availability Zones (AZs). AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures, and they make it straightforward to create a multi-AZ architecture capable of withstanding the complete failure of one or more zones. For even more resilience, multiple AWS Regions can be used.

With the use of Infrastructure as Code, the infrastructure and applications in a DR Region do not need to run all of the time. In case of a disaster, the entire application stack can be deployed into another Region. The only components that must run all the time are those keeping the data repositories in sync. With tooling like CloudEndure Disaster Recovery, you can also automate disaster recovery.
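Because the stack is defined as code, standing it up in a recovery Region is the same operation as the original deployment, just pointed at a different Region. The following sketch is a minimal illustration; the Region, template location, stack name, and parameters are placeholders, and in practice the same pipeline that deploys production would typically be reused.

```python
import boto3

DR_REGION = "eu-west-1"  # assumed recovery Region
TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/app-stack.yaml"  # placeholder

# The same approved template used for production is deployed into the DR Region.
cfn = boto3.client("cloudformation", region_name=DR_REGION)

cfn.create_stack(
    StackName="gxp-app-dr",
    TemplateURL=TEMPLATE_URL,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dr"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Tags=[{"Key": "gxp", "Value": "true"}],
)

# Wait until the recovery stack is fully created before failing traffic over.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="gxp-app-dr")
print("DR stack deployed in", DR_REGION)
```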
proving that any equipment works correctly and actually leads to the expected results” The equipment also needs to continue to lead to the expected results over it s lifetime In other words your process should show that the building block works as intended and is kept under control throughout its operational life There will be written procedures in place and when executed records will show that the activities ac tually occurred Also the staff operating the services need to be appropriately trained This process is often described in an SOP describing the overall qualification and commissioning strategy the scope roles and responsibilities a deliverables list and any good engineering practices that will be followed to satisfy qualification and commissioning requirements Amazon Web Services GxP Systems on AWS 50 With the number of AWS services it can be difficult for you to qualify all AWS services at once An iterative and risk based approach is recommended where services are qualified in priority order Initial prioritization will take into account the needs of the first applications moving to cloud and then the prioritization can be reass essed as demand for cloud services increases Design Stage Requirements The first activity is to consider the requirements for the building block One approach is to look at the service API definition Each AWS service has a clearly documented API describi ng the entire functionality of that service Many service APIs are extensive and support some advanced functionality However not all of this advanced functionality may be required initially so any existing business use cases can be considered to help refine the scope For example when noting Amazon S3 requirements you include the core functionality of creating/deleting buckets and the ability to put/get/delete objects However you may not include the lifecycle policy functionality because this function ality is not yet needed These requirements are captured in the building block requirements specification / requirements repository It’s also important to consider non functional requirements To ensure suitability of a service you can look at the service s SLA and limits Gap Analysis Where application requirements already exist in the same way you can restrict the scope you can also identify any gaps Either the gap can be addressed by including more functionality for the building block like bringing t he Amazon S3 Bucket Lifecycle functionality into scope or the service is not suitable for satisfying the requirements and an alternate building block should be used If no other service seems to meet the requirements you can custom develop a service or make a feature request to AWS for service enhancement Risk Assessment Infrastructure is qualified to ensure reliability security and business continuity for the validated applications running on it These three dimensions are usually included as part of any risk assessment The published AWS SLA provides confidence in AWS services reliability Data regarding the current status of the service plus historical Amazon Web Services GxP Systems on AWS 51 adherence to SLAs is available from https://statusa wsamazoncom For confidence in security the AWS certifications can be checked for the relevant service For business continuity AWS builds to guard against outages and incidents and accounts for them in the design of AWS services so when disruptions do occur their impact on customers and the continuity of services is as minimal as possible This step is also not only for GxP 
Building Block Qualification

The qualification of AWS service building blocks follows a process based on the GAMP IT Infrastructure Control and Compliance guidance document's "Infrastructure Building Block Concept" (Section 9 / Appendix 2 of GAMP IT). According to EU GMP, the definition of qualification is: "Action of proving that any equipment works correctly and actually leads to the expected results." The equipment also needs to continue to lead to the expected results over its lifetime. In other words, your process should show that the building block works as intended and is kept under control throughout its operational life. There will be written procedures in place and, when they are executed, records will show that the activities actually occurred. The staff operating the services also need to be appropriately trained. This process is often described in an SOP covering the overall qualification and commissioning strategy, the scope, roles and responsibilities, a deliverables list, and any good engineering practices that will be followed to satisfy qualification and commissioning requirements.

Given the number of AWS services, it can be difficult to qualify all of them at once. An iterative and risk-based approach is recommended, where services are qualified in priority order. Initial prioritization takes into account the needs of the first applications moving to the cloud, and the prioritization can then be reassessed as demand for cloud services increases.

Design Stage

Requirements

The first activity is to consider the requirements for the building block. One approach is to look at the service API definition: each AWS service has a clearly documented API describing the entire functionality of that service. Many service APIs are extensive and support advanced functionality. However, not all of this advanced functionality may be required initially, so any existing business use cases can be considered to help refine the scope. For example, when noting Amazon S3 requirements, you might include the core functionality of creating and deleting buckets and the ability to put, get, and delete objects, but exclude the lifecycle policy functionality because it is not yet needed. These requirements are captured in the building block requirements specification or requirements repository. It's also important to consider non-functional requirements; to ensure suitability of a service, you can look at the service's SLA and service limits.

Gap Analysis

Where application requirements already exist, in the same way you can restrict the scope, you can also identify any gaps. Either the gap can be addressed by including more functionality in the building block, such as bringing the Amazon S3 bucket lifecycle functionality into scope, or the service is not suitable for satisfying the requirements and an alternate building block should be used. If no other service meets the requirements, you can custom develop a service or make a feature request to AWS for a service enhancement.

Risk Assessment

Infrastructure is qualified to ensure reliability, security, and business continuity for the validated applications running on it. These three dimensions are usually included as part of any risk assessment. The published AWS SLAs provide confidence in the reliability of AWS services; data regarding the current status of each service, plus historical adherence to SLAs, is available from https://status.aws.amazon.com. For confidence in security, the AWS certifications can be checked for the relevant service. For business continuity, AWS builds to guard against outages and incidents and accounts for them in the design of AWS services, so when disruptions do occur, their impact on customers and the continuity of services is as minimal as possible. This step is also not only for GxP qualification purposes: the risk assessment should include any additional checks for other regulations, such as HIPAA.

When assessing the risks for a cloud service, it's important to consider the relationship to other building blocks. For example, an Amazon RDS database may have a relationship to the Amazon VPC building block because you decided a database is only allowed to exist within the private subnet of a VPC. The VPC is therefore taking care of many of the risks around access control. These dependencies are captured in the risk assessment, which can then focus on additional risks specific to the service, or residual risks that cannot be catered for by the surrounding production environment.

Each cloud service building block goes through a risk assessment that identifies a list of risks. For each identified risk, a mitigation plan is created. The mitigation plan can influence one or more of the following components:
• Service Control Policy
• Technical Design/Infrastructure as Code Template
• Monitoring & Alerting of Automated Compliance Controls

A risk can be mitigated through the use of Service Control Policies (SCPs), where a service or specific operation is deemed too risky and its use is explicitly denied through such a policy. For example, you can use an SCP to restrict the deletion of an Amazon S3 object through the AWS Management Console. Another option is to control service usage through the technical design of an approved Infrastructure as Code (IaC) template, where certain configuration parameters are restricted or parameterized. For example, you may use an AWS CloudFormation template to always configure an Amazon S3 bucket as private. Finally, you can define rules that feed into monitoring and alerting. For example, if the policy states that Amazon S3 buckets cannot be public, but this configuration is not enforced in the infrastructure template, then the infrastructure can be monitored for any public Amazon S3 buckets. When an S3 bucket is configured as public, an alert triggers remediation, such as immediately changing the bucket back to private.

Technical Design

In response to the specified requirements and risks, an architecture design specification is created by a Cloud Infrastructure Architect, describing the logical service building block design and traceability from risk or requirement to the design. This design specification will, among other things, describe the capabilities of the building block to the end users and application development teams.

Design Review

To verify that the proposed design is suitable for the intended purpose within the surrounding IT infrastructure design, a design review can be performed by a suitably trained person as a final check.

Construction Stage

The logical design may be captured in a document, but the physical design is captured in an Infrastructure as Code (IaC) template, such as an AWS CloudFormation template. This IaC template is always used to deploy an instance of the building block, ensuring consistency. For one approach, see the Automating GxP compliance in the cloud: Best practices and architecture guidelines blog post. The IaC template uses parameters to deal with workload variances. As part of the design effort, it is determined, often by IT Quality and Security, which parameters affect the risk profile of the service and so should be controlled, and which parameters can be set by the user. For example, the name of a database can be set by the template user and generally does not affect the risk profile of a database service. However, any parameter controlling encryption does affect the risk profile and is therefore fixed in the template and not changeable by the template user.

The template is a text file that can be edited. However, the rules expressed in the template are also automated within the surrounding monitoring and alerting. For example, the rule stating that the encryption setting on a database must be enabled can be checked by automated rules. A developer may therefore override the encryption setting in the development environment, but that change isn't allowed to progress to a validated environment or beyond.

At this point, automated test scripts can be prepared for execution during the qualification step to generate test evidence. The author of the automated tests must be suitably trained, and a separate, suitably trained person performs a code review and/or random testing of the automated tests to ensure the quality level. The automated tests ensure the building block initially functions as expected, and they can be run again to ensure it continues to function as expected, especially after any change. However, to ensure nothing has changed once in production, you should identify and create automated controls. Using the Amazon S3 example again, all buckets should be private. If a public bucket is detected, it can be switched back to private, an alert raised, and a notification sent. You can also determine the individual that created the S3 bucket and revoke their permissions.

The final part of construction is the authoring and approval of any needed additional guidance and operations manuals. For example, how to recover a database would be included in the operations manual of an Amazon RDS building block.
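A minimal sketch of such an automated control is shown below: a function that, given a bucket flagged as public (for example, by an AWS Config rule or an EventBridge event), re-applies the public access block and emits a notification. The bucket name argument and SNS topic are placeholders, and the wiring to the detection mechanism is assumed rather than shown.

```python
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:quality-alerts"  # placeholder


def remediate_public_bucket(bucket_name: str) -> None:
    """Force a bucket back to private and notify the quality/security team."""
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    sns.publish(
        TopicArn=ALERT_TOPIC_ARN,
        Subject="S3 bucket remediated to private",
        Message=f"Bucket {bucket_name} was detected as public and has been blocked.",
    )


# Example: invoked by a detection mechanism (Config rule, EventBridge rule, and so on).
if __name__ == "__main__":
    remediate_public_bucket("example-noncompliant-bucket")
```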
Qualification and Commissioning Stage

It's important to note that infrastructure is deployed in the same way for every building block, that is, through AWS CloudFormation using an Infrastructure as Code template. Therefore, there is usually no need for building-block-specific installation instructions, and you can be confident that every deployment is done according to specification and has the correct configuration.

Automated Testing

If you want to generate test evidence, you can demonstrate that the functional requirements are fulfilled and that all identified risks have been mitigated, indicating the building block is fit for its intended use, through the execution of the automated tests created during construction. The output of these automated tests is captured in a secure repository and can be used as test evidence. This automation deploys the building block template into a test environment, executes the automated tests, captures the evidence, and then destroys the stack again, avoiding any ongoing costs. Testing may only make sense in combination with other building blocks; for example, the testing of a NAT gateway can only be done within an existing VPC. One alternative is to test within the context of standard archetypes, that is, a complete stack for a typical application architecture.
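A compact illustration of that flow using pytest fixtures is sketched below: the fixture deploys the building block template, the test asserts a controlled property (default encryption, in this example), and the stack is destroyed afterwards, with the test report retained as evidence. The template location, the stack output name, and the asserted property are assumptions about a hypothetical S3 building block, not a prescribed test suite.

```python
import boto3
import pytest

REGION = "us-east-1"
TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/s3-building-block.yaml"  # placeholder

cfn = boto3.client("cloudformation", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)


@pytest.fixture(scope="module")
def building_block_stack():
    """Deploy the building block into a test environment, then tear it down."""
    stack_name = "s3-building-block-qualification"
    cfn.create_stack(StackName=stack_name, TemplateURL=TEMPLATE_URL)
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

    outputs = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["Outputs"]
    yield {o["OutputKey"]: o["OutputValue"] for o in outputs}

    cfn.delete_stack(StackName=stack_name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)


def test_bucket_encryption_is_enforced(building_block_stack):
    """Check that the deployed bucket has default encryption enabled."""
    bucket = building_block_stack["BucketName"]  # assumed stack output name
    encryption = s3.get_bucket_encryption(Bucket=bucket)
    rules = encryption["ServerSideEncryptionConfiguration"]["Rules"]
    assert rules, "No default encryption configured on the building block bucket"
```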
Handover to Operations Stage

The handover stage ensures that the cloud operations team is familiar with the new building block and is trained in any service-specific operations. Once the operations team approves the new building block, the service can be approved by changing a Service Control Policy (SCP). The Infrastructure as Code template can be made available for use by adding it to the AWS Service Catalog or another secure template repository. If the response to a risk was an SCP or monitoring rule change, the process to deploy those changes is triggered at this stage.

Computer Systems Validation (CSV)

You must still perform computer systems validation activities even if an application is running in the cloud. In fact, the overarching qualification strategy we have laid out in this paper ensures that the CSV process can fundamentally remain the same as before and has not become more difficult for the application development teams through the introduction of cloud technologies. However, with the solid foundation provided by AWS and the regulated landing zone, we can shift the focus to improving a traditional CSV process. You typically have a Standard Operating Procedure (SOP) describing your Software Development Lifecycle (SDLC), which is often based on GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems. Many SOPs we have seen involve a lot of manual work and approvals, which slow down the process. The more automation that can be introduced, the quicker the process and the lower the chance of human error. The automation of IT processes is nothing new, and customers have been implementing automated toolchains for years for on-premises development. The move to cloud provides all those same capabilities but also introduces additional opportunities, especially in the virtualized infrastructure areas. In this section, we focus primarily on those additional capabilities now available through the cloud.

Automating Installation Qualification (IQ)

It's important to note that even though we are qualifying the underlying building blocks, the application teams still need to validate their application, including performing the installation qualification (IQ) as part of their normal CSV activities, in order to demonstrate that their application-specific combination of infrastructure building blocks was deployed and is functioning as expected. However, they can focus on testing the interaction between building blocks rather than the functionality of each building block itself. As mentioned, the automation of the development toolchain is nothing new to any high-performing engineering team; the use of CI/CD and automated testing tools has been around for a long time. What hasn't been possible before is the fully automated deployment of infrastructure and execution of the installation qualification (IQ) step. The use of Infrastructure as Code opens up the possibility to automate the IQ step, as described in this blog post. The controlled infrastructure template acts as the pre-approved specification, which can be compared against the stacks deployed by AWS CloudFormation. Summary reports and test evidence can be created or, if a deviation is found, the stack can be rolled back to the last known good state. Assuming the IQ step completes successfully, the automation can continue to the automation of Operational Qualification (OQ) and Performance Qualification (PQ).
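One way to implement the comparison between the approved template and what is actually deployed is CloudFormation drift detection, sketched below. The stack name is a placeholder; the returned drift results can be stored as IQ evidence or used to trigger a rollback in the pipeline.

```python
import time
import boto3

cfn = boto3.client("cloudformation")
STACK_NAME = "gxp-app-prod"  # placeholder stack name


def run_iq_drift_check(stack_name: str) -> list:
    """Compare the deployed stack against its approved template and return drifted resources."""
    detection_id = cfn.detect_stack_drift(StackName=stack_name)["StackDriftDetectionId"]

    # Poll until the drift detection job finishes.
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(5)

    drifts = cfn.describe_stack_resource_drifts(
        StackName=stack_name,
        StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
    )["StackResourceDrifts"]
    return drifts


if __name__ == "__main__":
    drifted = run_iq_drift_check(STACK_NAME)
    if drifted:
        # In a real pipeline this would raise a deviation and stop the release.
        for d in drifted:
            print("Drift:", d["LogicalResourceId"], d["StackResourceDriftStatus"])
    else:
        print("IQ check passed: deployed stack matches the approved template.")
```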
Maintaining an Application's Qualified State

Once an application has been deployed, it needs to be maintained under a state of control. However, much of the heavy lifting for things like change management, configuration management, security management, and backup and restore has been built into the regulated landing zone for the benefit of all application teams.

Conclusion

If you are a Life Sciences customer with GxP obligations, you retain accountability and responsibility for your use of AWS products, including the applications and virtualized infrastructure you develop, validate, and operate using AWS products. Using the recommendations in this whitepaper, you can evaluate your use of AWS products within the context of your quality system and consider strategies for implementing the controls required for GxP compliance as a component of your regulated products and systems.

Contributors

Contributors to this document include:
• Sylva Krizan, PhD, Security Assurance, AWS Global Healthcare and Life Sciences
• Rye Robinson, Solutions Architect, AWS Global Healthcare and Life Sciences
• Ian Sutcliffe, Senior Solutions Architect, AWS Global Healthcare and Life Sciences

Further Reading

For additional information, see:
• AWS Compliance
• Healthcare & Life Sciences on AWS

Document Revisions
• March 2021 – Updated to include more elements of AWS Quality System information and updated guidance on the customer approach to GxP compliance on AWS.
• January 2016 – First publication.

Appendix: 21 CFR 11 Controls – Shared Responsibility for Use with AWS Services

Applicability of 21 CFR 11 to regulated medical products and GxP systems is the responsibility of the customer, as determined by the intended use of the system(s) or product(s). AWS has mapped some of these requirements based on the AWS Shared Responsibility Model; however, customers are responsible for meeting their own regulatory obligations. Below we have identified each subpart of 21 CFR 11 and clarified areas where AWS services and operations and the customer share responsibility in order to meet 21 CFR 11 requirements.

11.10 Controls for closed systems. Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following:

11.10(a) Validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records.

AWS responsibility: AWS services are built and tested to conform to IT industry standards, including SOC, ISO, PCI, and others (https://aws.amazon.com/compliance/programs/). AWS compliance programs and reports provide objective evidence that AWS has implemented several key controls, including but not limited to: control over the installation and operation of AWS product components, including both software components and hardware components; control over product changes and configuration management; a risk management program; management review, planning, and operational monitoring; security management of information availability, integrity, and confidentiality; and data protection controls, including mechanisms for data backup, restore, and archiving. All purchased materials and services intended for use in production processes are documented, and documentation is reviewed and approved prior to use and verified to be in conformance with the specifications. Final inspection and testing is performed on AWS services prior to their release to general availability. The final service release review procedure includes a verification that all acceptance data is present and that all product requirements were met. Once in production, AWS services undergo continuous performance
monitoring In addition AWS’s significant customer base authorization for use by government agencies AWS products are basic building blocks that allow you to create private virtualized infrastructure environments for your custom software applications and commercial offthe shelf applications In this way you remain responsible for enabling (ie installing) configuring and operating AWS products to meet your data application and industry specific needs like GxP software validation and GxP infrastructure qualification as well as validation to support 21 CFR Part 11 requirements AWS products are however unlike traditional infrastructure software products in that they are highly automatable allowing you to programmatically create qualified infrastructure via version controlled JSON[1] scripts instead of manually executed paper p rotocols where applicable This automation capability not only reduces effort it increases control and consistency of the infrastructure environment such that continuous qualification [2] is possible Installation qualification of AWS services into your environment operational and performance qualification (IQ/OQ/PQ) are your responsibility as are the validation activities to demonstrate that systems with GxP workloads managing electronic records are appropriate for the intended use and meet regulatory requirements Amazon Web Services GxP Systems on AWS 59 21 CFR Subpart AWS Responsibility Customer Responsibility and recognition by industry analysts as a leading cloud services provider are further evidence of AWS products delivering their documented functionality https://awsamazoncom/documentation/ Relevant SOC2 Common Criteria: CC12 CC14 CC32 CC71 CC72 CC73 CC74 1110(b) The ability to generate accurate and complete copies of records in both human readable and electronic form suitable for inspection review and copying by the agency Persons should contact the agency if there are any questions reg arding the ability of the agency to perform such review and copying of the electronic records Controls are implemented subject to industry best practices in order to ensure services provide complete and accurate outputs with expected performance committed to in SLAs; Relevant SOC2 Common Criteria: A11 AWS has a series of Security Best Practices (https://awsamazoncom/security/security resources/ ) and additional resources you may referen ce to help protect data hosted within AWS You ultimately will verify that electronic records are accurate and complete within your AWS environment and determine the format by which data is human and/or machine readable and is suitable for inspection by regulators per the regulatory requirements Amazon Web Services GxP Systems on AWS 60 (c) Protection of records to enable their accurate and ready retrieval throughout the records retention period Controls are implemented subject to industry best practices in order to ensure services provide com plete and accurate outputs with expected performance committed to in SLAs; Relevant SOC2 Common Criteria: A11 AWS has identified critical system components required to maintain the availability of our system and recover service in the event of outage Critical system components are backed up across multiple isolated locations known as Availability Zones and back ups are maintained Each Availability Zone is engineered to operate independently with high reliability Backups of critical AWS system components are monitored for successful replication across multiple Availability Zones Refer to the AWS SOC 2 Report C C 
A12 The AWS Resiliency Program encompasses the processes and procedures by which AWS identifies responds to and recovers from a major event or incident within our environment This program builds upon the traditional approach of addressing contingenc y management which incorporates elements of business continuity and disaster recovery plans and expands this to consider critical elements of proactive risk mitigation strategies such as engineering physically separate Availability Zones (AZs) and continu ous infrastructure capacity planning AWS service resiliency plans are periodically reviewed by members of the Senior Executive management team and the Audit Committee of the Board of Directors The AWS Business Continuity Plan outlines measures to avoid a nd lessen environmental disruptions It includes operational details AWS has a series of Security Best Practices (https://awsamazoncom/security/security resources/ ) and additional resources you may reference to help protect your data hosted within AWS You are responsible for implementation of appropriate security configurations for your environment to protect data integrity as well as ensure data and resources are only retrieved by appropriate permission You are also responsible for creating and testing record retention policies as well as backup and recovery processes You are responsible for properly configuring and using the Service Offerings and taking your own steps to maintain appropriate security protection and backup of your Customer Content which may include the use of encryption technology (to protect your content from unauthorized access) and routine archiving Using Service Offerings such as Amazon S3 Amazon Glacier and Amazon RDS in combination with replication and high availability configurations AWS's broad range of storage solutions for backup and reco very are designed for many customer workloads https://awsamazoncom/backup recovery/ AWS services provide you with capabilities to design for resiliency and maintain business continuity including the utilization of frequent server instance back ups data redundancy replication and the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region You need to architect your AWS usage to take advantage of multiple regions and availability zones Distributing applications across multiple availability zones provides the ability to remain Amazon Web Services GxP Systems on AWS 61 21 CFR Subpart AWS Responsibility Customer Responsibility about steps to take before during and after an event The Business Continuity Plan is supported by testing that includes simulations of different scenarios During and after testing AWS documents people and process performance corrective actions and lessons learned with the aim of continuous improvement AWS data centers are designed to anticipate and tolerate failure while maintaining service levels In case of failure automated pro cesses move traffic away from the affected area Core applications are deployed to an N+1 standard so that in the event of a data center failure there is sufficient capacity to enable traffic to be load balanced to the remaining sites Refer to the AWS S OC 2 Report CC31 CC32 A12 A13 resilient in the face of most failure modes including natural disasters or system failures The AWS cloud supports many popular disaster recovery (DR) architectures from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments 
that enable rapid failover You are responsible for DR planning and testing Amazon Web Services GxP Systems on AWS 62 (d) Limiting system access to authorized individuals AWS implements both physical and logical security controls Physical access to all AWS data centers housing IT infrastructure components is restricted to authorized data cent er employees vendors and contractors who require access in order to execute their jobs Employees requiring data center access must first apply for access and provide a valid business justification These requests are granted based on the principle of least privilege where requests must specify to which layer of the data center the individual needs access and are time bound Requests are reviewed and approved by authorized personnel and access is revoked after the requested time expires Once granted admittance individuals are restricted to areas specified in their permissions Access to data centers is regularly reviewed Access is automatically revoked when an employee’s record is terminated in Amazon’s HR system In addition when an employee or contractor’s access expires in accordance with the approved request duration his or her access is revoked even if he or she continues to be an employee of Amazon AWS restricts logical user access priv ileges to the internal Amazon network based on business need and job responsibilities AWS employs the concept of least privilege allowing only the necessary access for users to accomplish their job function New user accounts are created to have minimal access User access to AWS systems requires approval from the authorized personnel and validation of the active user Access privileges to AWS systems are reviewed on a regular AWS provides you with the ability to configure and use the AWS service offerings in order to maintain appropriate security prot ection and backup of content which may include the use of encryption technology to protect your content from unauthorized access You maintain full control and responsibility for establishing and verifying configuration of access to your data and AWS acc ounts as well as periodic review of access to data and resources Using AWS Identity and Access Management (IAM) a web service that allows you to securely control access to AWS resources you must control who can access and use your data and AWS resource s (authentication) and what data and resources they can use and in what ways (authorization) IAM is a feature of all AWS accounts offered at no additional charge You will be charged only for use of other AWS services by your users https://awsamazoncom/iam/ IAM Best Practices can be found here: http://docsawsamazoncom/IAM/latest/UserG uide/best pract iceshtml Maintaining physical access to your facilities and assets is solely your responsibility Amazon Web Services GxP Systems on AWS 63 21 CFR Subpart AWS Responsibility Customer Responsibility basis When an employee no longer requires these privileges his or her access is revoked Refer to the AWS SOC 2 Report C12 C13 and CC61 66 to verify the AWS physical and logical security controls Amazon Web Services GxP Systems on AWS 64 (e) Use of secure computer generated timestamped audit trails to independently record the date and time of operator entries and actions that create mod ify or delete electronic records Record changes shall not obscure previously recorded information Such audit trail documentation shall be retained for a period at least as long as that required for the subject electronic records and shall be available f or 
agency review and copying AWS maintains centralized repositories that provide core log archival functionality available for internal use by AWS service teams Leveraging S3 for high scalability durability and availability it allows service teams to collect archive and view service logs in a central log service Production hosts at AWS are equipped with logging for security purposes This service logs all human actions on hosts including logons failed logon attempts and logoffs These logs are stored and accessible by AWS security teams for root cause analysis in the event of a suspected security incident Logs for a given host are also available to the team that owns that host A frontend log analysis tool is available to service teams to search their logs for operational and security analysis Processes are implemented to protect logs and audit tools from unauthorized access modification and deletion Refer to the AWS SOC 2 Report CC51 CC71 Verification and implementation of audit trails as well as back up and retention procedures of your electronic records are your responsibility AWS provides you with the ability to properly configure and use the Service Offerings in order to maintain appropriate audit trail and logging of data access use and modification (including prohibiting disablement of audit trail functionality) Logs within your control (described below) can be used for monitoring and detection of unauthorized changes to your data Using Service Offerings such as AWS CloudTrail AWS CloudWatch Logs and VPC Flow Logs you can monitor your AWS data operations in the cloud by getting a history of AWS API calls for your account including API calls made via the AWS Management Console the AWS SDKs the command line t ools and higher level AWS services You can also identify which users and accounts called AWS APIs for services that support AWS CloudTrail the source IP address the calls were made from and when the calls occurred You can integrate AWS CloudTrail into applications using the API automate trail creation for your organization check the status of your trails and control how administrators turn logging services on and off AWS CloudTrail records two types of events: (1) Management Events: Represent stan dard API activity for AWS services For example AWS CloudTrail delivers management events for API calls such as launching EC2 instances or creating S3 buckets (2) Data Events: Represent S3 object level API activity such as Get Put Delete and List Amazon Web Services GxP Systems on AWS 65 21 CFR Subpart AWS Responsibility Customer Responsibility actions https://awsamazoncom/cloudtrail/ https://awsamazoncom/documentation/cloudtr ail/ http://docsawsamazoncom/AmazonVPC/late st/UserGuide/flow logshtml (f) Use of operational system checks to enforce permitted sequencing of steps and events as appropriate Not appl icable to AWS – this requirement only applies to the customer’s system You are responsible for configuring establishing and verifying enforcement of permitted sequencing of steps and events within the regulated environment (g) Use of authority checks to ensure that only authorized individuals can use the system electronically si gn a record access the operation or computer system input or output device alter a record or perform the operation at hand Not applicable to AWS – this requirement only applies to the customer’s system AWS provides you with the ability to configure and use the AWS service offerings in order to maintain appropriate security protection and backup of content which may 
include the use of encryption technology to protect your content from unauthorized access You maintain full control and responsibility for establishing and verifying configuration of access to your data and AWS accounts as well as periodic review of access to data and resources Using AWS Identity and Access Management (IAM) a web service that allows you to securely control access to A WS resources you must control who can access and use your data and AWS resources (authentication) and what data and resources they can use and in what ways (authorization) IAM is a feature of all AWS accounts offered at no additional charge You will be charged only for use of other AWS services by your users https://awsamazoncom/iam/ IAM Best Practices can be found here: http://docsawsamazoncom/IAM/latest/UserG uide/best practiceshtml Amazon Web Services GxP Systems on AWS 66 21 CFR Subpart AWS Responsibility Customer Responsibility (h) Use of device (eg terminal) checks to determine as appropriate the validit y of the source of data input or operational instruction Not applicable to AWS – this requirement only applies to the customer’s system You are responsible for establishing and verifying the source of the data input into your system is valid whether ma nually or for example by enforcing only certain input devices or sources are utilized (i) Determination that persons who develop maintain or use electronic record/electronic signature systems have the education training and experience to perform t heir assigned tasks AWS has implemented formal documented training policies and procedures that address purpose scope roles responsibilities and management commitment AWS maintains and provides security awareness training to all information system u sers on an annual basis The policy is disseminated through the internal Amazon communication portal to all employees Relevant SOC2 Common Criteria: CC13 CC14 CC22 CC23 You are responsible for ensuring your AWS users — including IT staff developers validation specialists and IT auditors —review the AWS product documentation and complete the product training programs you have determined are appropriate for your personnel AWS products are extensively documen ted online https://awsamazoncom/documentation/ and a wide range of user training and certification resources are available including introductory labs videos self paced online courses instructor lead training and AWS Certification https://awsamazoncom/training/ Adequacy of training programs for your personnel as well as maintenance of documentation of personnel training and qualifications (such as training record job description and resumes) are your responsibility (j) The establishment of and adherence to written policies that hold individuals accountable and responsible for actions initiated under their electronic signatures in order to d eter record and signature falsification Not applicable to AWS – this requirement only applies to the customer’s system Establishment and enforcement of policies to hold personnel accountable and responsible for actions initiated under their electronic signatures is your responsibility including training and associated documentation (k) Use of appropriate controls over systems documentation including: Amazon Web Services GxP Systems on AWS 67 21 CFR Subpart AWS Responsibility Customer Responsibility (1) Adequate controls over the distribution of access to and use of documentation for system operation and maintenance AWS maintains formal documented policies and procedures that 
provide guidance for operations and i nformation security within the organization and the supporting AWS environments Policies are maintained in a centralized location that is only accessible by employees Security p olicies are reviewed and approved on an annual basis by Security Leadership and are assessed by third party auditors as part of our audits Refer to SOC2 Common Criteria CC22 CC23 CC53 You are responsible to establish and maintain your own controls over the distribution access and use of documentation and documentation systems for system operation and maintenance Amazon Web Services GxP Systems on AWS 68 21 CFR Subpart AWS Responsibility Customer Responsibility (2) Revision and change control procedures to maintain an audit trail that documents timesequenced development and modification of systems documentation AWS policies and procedures go through processes for appro val version control and distribution by the appropriate personnel and/or members of management These documents are reviewed periodically and when necessary supporting data is evaluated to ensure the document fulfills its intended use Revisions are re viewed and approved by the team that owns the document unless otherwise specified Invalid or obsolete documents are identified and removed from use Internal policies are reviewed and approved by AWS leadership at least annually or following a significa nt change to the AWS environment Where applicable AWS Security leverages the information system framework and policies established and maintained by Amazon Corporate Information Security AWS service documentation is maintained in a publicly accessible online location so that the most current version is available by default https://awsamazoncom/documentation/ Refer to the AWS SOC 2 Report CC23 CC34 CC67 CC81 You are responsible for changes to your computerized systems running within your AWS accounts System components must be authorized designed developed configured documented tested approved and implemented according to your security and availability com mitments and system requirements Using Service Offerings such as AWS Config you can manage and record your AWS resource inventory configuration history and configuration change notifications to enable security and governance AWS Config Rules also enab les you to create rules that automatically check the configuration of AWS resources recorded by AWS Config https://awsamazoncom/documentation/config/ Change records and associated logs within your environment may be retained according to your record retention schedule You are responsible for storing managing and tracking electronic documents in your AWS account and as part of your overall quality management system including maintaining an audit trail that documents time sequenced development and modification of systems documentation Amazon Web Services GxP Systems on AWS 69 21 CFR Subpart AWS Responsibility Customer Responsibility §1130 Controls for open systems Persons who use open systems to create modify maintain or t ransmit electronic records shall employ procedures and controls designed to ensure the authenticity integrity and as appropriate the confidentiality of electronic records from the point of their creation to the point of their receipt Such procedures a nd controls shall include those identified in §1110 as appropriate and additional measures such as document encryption and use of appropriate digital signature standards to ensure as necessary under the circumstances record authenticity integrity an d 
confidentiality Industry standard controls and procedures are in place to protect and maintain the authenticity integrity and confidentiality of customer data Refer to the AWS SOC 2 Report C11 C12 You are responsible for determining whether your use of AWS services within your environment meets the definition of an open or closed system and whether these requirements apply Refer to the responsibilities in §1110 above for more information for recommended procedures and controls Additional measure s such as document encryption and use of appropriate digital signature standards are your responsibility to maintain data integrity authenticity and confidentiality §1150 Signature manifestations (a) Signed electronic records shall contain information associated with the signing that clearly indicates all of the following: (1) The printed name of the signer; (2) The date and time when the signature was executed; and (3) The meaning (such as review approval responsibility or authorship) as sociated with the signature (b) The items identified in paragraphs (a)(1) (a)(2) and (a)(3) of this section shall be subject to the same controls as for electronic records and shall be included as part of any human readable form of the electronic record (such as electronic display or printout) Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications meet the signed electronic records requirements iden tified Amazon Web Services GxP Systems on AWS 70 21 CFR Subpart AWS Responsibility Customer Responsibility §1170 Signature/ record linking Electronic signatures and handwritten signatures executed to electronic records shall be linked to their respective electronic records to ensure that the signatures cannot be excised copied or otherwise transferred to falsify an electronic record by ordinary means Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your application s/systems meet the signature/record linking requirements identified including any required policies and procedures Subpart C —Electronic Signatures §11100 General requirements (a) Each electronic signature shall be unique to one individual and shall no t be reused by or reassigned to anyone else Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the general electronic signature re quirements identified including any required policies and procedures to enforce electronic signature governance (b) Before an organization establishes assigns certifies or otherwise sanctions an individual's electronic signature or any element of su ch electronic signature the organization shall verify the identity of the individual Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the general electronic signature requirements identified including any required policies and procedures to verify individual identity prior to use of an electronic signature Amazon Web Services GxP Systems on AWS 71 21 CFR Subpart AWS Responsibility Customer Responsibility (c) Persons using electronic signatures shall prior to or at the time of such use certify to the agency that the electronic signatures in their system used on or after August 20 1997 are intended to be 
the legally binding equivalent of traditional handwritten signatures (1) The certification shall be submitted in paper fo rm and signed with a traditional handwritten signature to the Office of Regional Operations (HFC 100) 5600 Fishers Lane Rockville MD 20857 (2) Persons using electronic signatures shall upon agency request provide additional certification or testimony that a specific electronic signature is the legally binding equivalent of the signer's handwritten signature Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establis hing and verifying that your applications/systems meet the general electronic signature requirements identified including determining whether any required notification to the agency is required and documenting accordingly §11200 Electronic signature c omponents and controls (a) Electronic signatures that are not based upon biometrics shall: Not applicabl e to AWS – this requirement only applies to the customer’s applications Amazon Web Services GxP Systems on AWS 72 21 CFR Subpart AWS Responsibility Customer Responsibility (1) Employ at least two distinct identification components such as an identification code and password (i) When an individual executes a series of signings duri ng a single continuous period of controlled system access the first signing shall be executed using all electronic signature components; subsequent signings shall be executed using at least one electronic signature component that is only executable by a nd designed to be used only by the individual (ii) When an individual executes one or more signings not performed during a single continuous period of controlled system access each signing shall be executed using all of the electronic signature compone nts (2) Be used only by their genuine owners; and (3) Be administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals You are responsible for establishing and verifying that your applications/systems meet the electronic signature components and controls identified including establishing the procedu res for use of identifying components and use by genuine owners (b) Electronic signatures based upon biometrics shall be designed to ensure that they cannot be used by anyone other than their genuine owners Not applicable to AWS – this requirement only applies to the customer’s applications You are responsible for establishing and verifying that your applications/systems meet the electronic signature components and controls identified including establishing the procedures for use by genuine owners Amazon Web Services GxP Systems on AWS 73 21 CFR Subpart AWS Responsibility Customer Responsibility §11300 Controls for identification codes/passwords Persons who use electronic signatures based upon use of identification codes in combination with passwords shall employ controls to ensure their security and integrity Such controls shall include: (a) Maintaining the uniqueness of each combined identification code and password such that no two individuals have the same combination of identification code and password Not applicable to AWS – this requirement only applies to the customer’s applicatio ns You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified including establishing the procedures and controls for uniqueness of password and ID code 
combinations. (b) Ensuring that identification code and password issuances are periodically checked, recalled, or revised (e.g., to cover such events as password aging). Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls for periodic review of password issuance. (c) Following loss management procedures to electronically deauthorize lost, stolen, missing, or otherwise potentially compromised tokens, cards, and other devices that bear or generate identification code or password information, and to issue temporary or permanent replacements using suitable, rigorous controls. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls for loss management of compromised devices that generate ID codes or passwords. (d) Use of transaction safeguards to prevent unauthorized use of passwords and/or identification codes, and to detect and report in an immediate and urgent manner any attempts at their unauthorized use to the system security unit and, as appropriate, to organizational management. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls to prevent, detect, and report unauthorized use of ID codes and/or passwords. (e) Initial and periodic testing of devices, such as tokens or cards, that bear or generate identification code or password information to ensure that they function properly and have not been altered in an unauthorized manner. Not applicable to AWS – this requirement only applies to the customer's applications. You are responsible for establishing and verifying that your applications/systems meet the electronic signature controls identified, including establishing the procedures and controls to periodically test devices that generate ID codes or passwords for proper functionality. [1] In computing, JSON (JavaScript Object Notation) is the open standard syntax used for AWS CloudFormation templates: https://aws.amazon.com/documentation/cloudformation/ [2] https://www.continuousvalidation.com/what-is-continuous-validation/
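The responsibility table above repeatedly assigns access control to the customer, typically implemented through IAM. The following is a minimal sketch of creating a least-privilege IAM policy with the AWS SDK for Python (boto3); the bucket name, prefix, and policy name are hypothetical placeholders rather than values from this paper, and your own policies would reflect your validated system's resources.

```python
import json
import boto3

# Hypothetical names used for illustration only.
BUCKET = "example-gxp-records-bucket"
POLICY_NAME = "GxPRecordsReadWrite"

# Least-privilege policy: the record-keeping application may read and
# write electronic records under one bucket prefix and nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowRecordAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/records/*",
        }
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName=POLICY_NAME,
    PolicyDocument=json.dumps(policy_document),
    Description="Illustrative least-privilege access to record storage",
)
print(response["Policy"]["Arn"])
```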
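Similarly, the change-control responsibilities under §11.10(k)(2) point to AWS Config Rules for automated configuration checks. This is a sketch only, assuming the boto3 Config client and the AWS-managed rule identifier CLOUD_TRAIL_ENABLED; an AWS Config configuration recorder must already be running in the account before evaluations will occur, and the rule name shown is a placeholder.

```python
import boto3

config = boto3.client("config")

# Managed rule that continuously checks whether CloudTrail is enabled,
# supporting the audit-trail expectations discussed above.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "gxp-cloudtrail-enabled",  # hypothetical name
        "Description": "Checks that AWS CloudTrail is enabled in the account.",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "CLOUD_TRAIL_ENABLED",
        },
    }
)
```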
|
General
|
consultant
|
Best Practices
|
Core_Tenets_of_IoT
|
ArchivedCore Tenets of IoT July 2017 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchived© 2017 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its c ustomers ArchivedContents Overview 1 Core Tenets of IoT 2 Agility 2 Scalability and Global Footprint 2 Cost 3 Security 3 AWS Services for IoT Solutions 4 AWS IoT 4 Event Driven Services 6 Automation and DevOps 7 Administration and Security 8 Bringing Services and Solutions Together 9 Pragma Architecture 10 Summary 11 Contributors 12 Further Reading 12 ArchivedAbstract This paper outlines core tenets that should be consider ed when developing a strategy for the Internet of Things (IoT) The paper help s customers understand the benefits of Amazon Web Services (AWS) and how the AWS cloud platform can be the critical component supporting the core tenets of an IoT solution The paper also provides an overview of AWS services that should be part of an overall IoT strat egy This paper is intended for decision makers who are learning about Internet of Things platforms ArchivedAmazon Web Services – Core Tenets of IoT Page 1 Overview One of the value propositions of an Internet of Things (IoT) strategy is the ability to provide insight into context that was previously invisibl e to the business But before a business can develop a strategy for IoT it need s a platform that meets the foundational principles of an IoT solution AWS believes in some basic freedoms that are driving organizational and economic benefits of the cloud into businesses These freedoms are why more than a million customers already use the AWS platform to support virtually any cloud workload These freedoms are also why the AWS pla tform is proving itself as the primary catalyst to any Internet of Things strategy across commercial consumer and industrial solutions AWS customers working across such a spectrum of solutions have identified core tenets vital to the success of any IoT platform T hese core tenets are agility scale cost and security ; which have been shown as essential to the long term success of any IoT strategy This whitepaper defines the se tenets as: Agility – The freedom to quickly analyze execute and build business and technical initiatives in an unfettered fashion Scale – Seamlessly expand infrastructure regionally or globally to meet operational demands Cost – Understand and control the costs of operating an IoT platform Security – Secure communication from device through cloud while maintaining compliance and iterating rapidly By using the AWS platform companies are able to build agile solution s that can scale to meet exponential device growth with an ability to manage cost while 
building on top of s ome of the most secure computing infrastructure in the world A company that selects a platform that has these freedoms and promotes these core tenets will improve organizational focus on the differentiators of its business and the strategic value of imple menting solutions within the Internet of Things ArchivedAmazon Web Services – Core Tenets of IoT Page 2 Core Tenets of IoT Agility A leading benefit companies seek when creating an IoT solution is the ability to efficiently quantify opportunities These opportunities are derived from reliable sensor data remote diagnostics and remote command and control between users and devices Companies that can effectively collect these metrics open the door to explore different business hypotheses based on their IoT data For example manufacturers can build predic tive analytics solutions to measure test and tune the ideal maintenance cycle for their products over time The IoT lifecycle is comprised of multiple stages that are required to procure manufacture onboard test deploy and manage large fleets of phy sical devices When developing physical devices the waterfall like process introduces challenges and friction that can slow down business agility This friction coupled with the upfront hardware costs of developing and deploying physical assets at scale often result in the requirement to keep devices in the field for long periods of time to achieve the necessary return on investment (ROI) With the ever growing challenges and opportunities that face companies today a company’s IT division is a competiti ve differentiator that supports business performance product development and operations In order for a company’s IoT strategy to be a competitive advantage the IT organization relies on having a broad set of tools that promote interoperability througho ut the IoT solution and among a heterogeneous mix of devices Companies that can achieve a successful balance between the waterfall processes of hardware releases and the agile metho dologies of software development can continuously optimize the value that’s derived from their IoT strategy Scalability and Global Footprint Along with an exponential growth of connected devices each thing in the Internet of Things communicates packets of data that require reliable connectivity and durable storage Prior to cloud platforms IT departments would procure additional hardware and maintain underutilized overprovisioned capacity in order to handle the increasing growth of data emitted by devices also known as telemetry With IoT an organization is challenged with managing monitoring and securing the immense number of network connections from these dispersed connected devices ArchivedAmazon Web Services – Core Tenets of IoT Page 3 In addition to scaling and growing a solution in one regional location IoT solutions require the ability to scale globally and across different physical locations IoT solutions should be deployed in multiple physical locations to meet the business objectives of a global enterprise solution such as data compliance data sovereignty and lower communication latency for better respo nsiveness from devices in the field Cost Often the greatest value of an IoT solution is in the telemetric and context ual data that is generated and sent from devices Building onpremise infrastructure requires upfront capital purchase of hardware ; it can be a large fixed expense that does not directly correlate to the value of the telemetry that a device will produce sometime in the future To 
balance the need to receive telemetry today with an uncertain value derived from telemetr ic data in the future an IoT strategy should leverage an elastic and scalable cloud platform With the AWS platform a company pays only for the services it consumes without requiring a long term contract By leveraging a flexible consumption based pricing model the cost of a n IoT solution and the related infrastructure can be directly accessed alongside the business value delivered by ingesting processing storing and analyzing the telemetr y received by that same IoT solution Security The foundation of an IoT solution st arts and ends with security Since d evices may send large amounts of sensitive data and end users of IoT application s may also have the ability to directly control a device the security of things must be a pervasive design requirement IoT solutions shoul d not just be designed with security in mind but with security controls permeating every layer of the solution Security is not a static formula ; IoT applications must be able to continuously model monitor and iterate on security best practices In the Internet of Things the attack surface is different than traditional web infrastructure The pervasiveness of ubiquitous computing means that IoT vulnerabilities could lead to exploits that result in the loss of life for example from a compromised control system for gasoline pipelines or power grids A competing dynamic for IoT security is the lifecycle of a physical device and the constrained hardware for sensors microcontrollers actuators and embedded libraries These constrained factors may limit the security capabilities each ArchivedAmazon Web Services – Core Tenets of IoT Page 4 device can perform With these additional dynamics IoT solutions must continuously adapt their architecture firmware and software to stay ahead of the changing security landscape Although the constrained factors of devices can present increased risks hurdles and potential tradeoffs between security and cost building a secure IoT solution must be the primary objective for any organization AWS Services for IoT Solutions The AWS platform provides a foundation for executing an agile scalable secure and cost effective IoT strategy In order to achieve the business value that IoT can bring to an organization customers should evaluate the breadth and depth of AWS services that are common ly used in large scale distr ibuted IoT deployments AWS provides a range of services to accelerate time to market: from device SDKs for embedded software to real time data processing and event driven compute services In these sections we will cover the most common AWS services used in IoT applications and how these services correspond to the core tenets of an IoT solution AWS IoT The Internet of Things cannot exist without things Every IoT solution must first establish connectivity in order to begin interacting with devices AWS IoT is an AWS managed service that addresses the challenges of connecting managing and operating large fleets of devices for an application The combination of scalability of connectivity and security mechanisms for data transmission within AWS IoT provides a foundation for IoT communication as part of an IoT solution Once data has been sent to AWS IoT a solution is able to leverage an ecosystem of AWS services spanning databases mobile services big data analytics machine learning and more Device Gateway A device gateway is responsible for maintaining the sessions and subscriptions for all connected devices in an 
IoT solution. The AWS IoT Device Gateway enables secure, bidirectional communication between connected devices and the AWS platform over MQTT, WebSockets, and HTTP. Communication protocols such as MQTT and HTTP enable a company to utilize industry-standard protocols instead of using a proprietary protocol that would limit future interoperability. As a publish-and-subscribe protocol, MQTT inherently encourages scalable, fault-tolerant communication patterns and fosters a wide range of communication options among devices and the Device Gateway. These message patterns range from communication between two devices to broadcast patterns where one device can send a message to a large field of devices over a shared topic. In addition, the MQTT protocol exposes different levels of Quality of Service (QoS) to control the retransmission and delivery of messages as they are published to subscribers. The combination of publish and subscribe with QoS not only opens the possibilities for IoT solutions to control how devices interact in a solution but also drives more predictability in how messages are delivered, acknowledged, and retried in the event of network or device failures. (A minimal sketch of publishing telemetry and sending a command appears at the end of this paper.)

Shadows, Device Registry, and Rules Engine
AWS IoT consists of additional features that are essential to building a robust IoT application. The AWS IoT service includes the Rules Engine, which is capable of filtering, transforming, and forwarding device messages as they are received by the Device Gateway. The Rules Engine utilizes a SQL-based syntax that selects data from message payloads and triggers actions based on the characteristics of the IoT data. AWS IoT also provides a Device Shadow that maintains a virtual representation of a device. The Device Shadow acts as a message channel to send commands reliably to a device and to store the last known state of a device in the AWS platform. For managing the lifecycle of a fleet of devices, AWS IoT has a Device Registry. The Device Registry is the central location for storing and querying a predefined set of attributes related to each thing. The Device Registry supports the creation of a holistic management view for an IoT solution to control the associations between things, shadows, permissions, and identities.

Security and Identity
For connected devices, an IoT platform should utilize concepts of identity, least privilege, encryption, and authorization throughout the hardware and software development lifecycle. AWS IoT encrypts traffic to and from the service over Transport Layer Security (TLS), with support for most major cipher suites. For identification, AWS IoT requires a connected device to authenticate using an X.509 certificate. Each certificate must be provisioned, activated, and then installed on a device before it can be used as a valid identity with AWS IoT. In order to support this separation of identity and access for devices, AWS IoT provides IoT policies for device identities. AWS IoT also utilizes AWS Identity and Access Management (AWS IAM) policies for AWS users, groups, and roles. By using IoT policies, an organization has control over allowing and denying communications on IoT topics for each specific device's identity. AWS IoT policies, certificates, and AWS IAM are designed for explicit whitelist configuration of the communication channels of every device in a company's AWS IoT ecosystem.

Event Driven Services
In order to achieve the tenets of scalability and flexibility in an IoT
solution an organization should incorporate the techniques of an event driven architecture An e vent driven architecture fosters scalable and decoupled communication through the creat ion storage consumption and reaction to events of interest that occur in an IoT solution Messages that are generated in an IoT solution should first be categorized and mapped to a series of events A n IoT solution should then associate these events with business logic that execute s commands and possibly generate s additional events in the IoT system The AWS platform provides several application services for building a distributed event driven IoT architecture Foundationally event driven architectures rely on the ability to durably store and transfer events through an ecosystem of interested subscribers In order to support decoupled event orchestration the AWS platform has several application services that are designed for reliable event storage and highly scalable event driven computation An event driven IoT solution should utilize Amazon Simple Queue Service ( Amazon SQS) Amazon Simple Notification Service ( Amazon SNS ) and AWS Lambda as foundational applica tion components for creat ing simple and complex event workflow s Amazon SQS is a fast durable scalable and fully managed message queuing service Amazon SNS is a web service that publishes messages from an application and immediately delivers them to su bscribers or other applications AWS Lambda is designed to run code in response to events while the underlying computer resources are automatically managed AWS Lambda can receive and respond to notifications directly from other AWS services In an event driven IoT architecture AWS Lambda is where the business logic is executed to determine when events of interest have occurred in the context of an IoT ecosystem ArchivedAmazon Web Services – Core Tenets of IoT Page 7 AWS services such as Amazon SQS Amazon SNS and AWS Lambda can separate the consuming of events from the processing and business logic applied to t hose events This separation of responsibilities creates flexibility and agility in an end toend solution This separation enables the rapid modification of event trigger logic or the logic used t o aggregate contextual data between parts of a system Finally this separation allows changes to be introduce d in an IoT solution without blocking the continuous stream of data being sent between end devices and the AWS platform Automation and DevOps In IoT solutions the initial release of an application is the beginning of a long term approach to constant ly refine the business advantages of an IoT strategy After the first release of an application a majority of time and effort will be spent adding new features to the current IoT solution With the tenet of remaining agile throughout the solution lifecycle customers should evaluate services that enable rapid development and deployment as business needs change Unlike traditional web architectures where DevOps technologies only apply to the backend servers an IoT application will also require the ability to incrementally roll out changes to disparate globally connected devices With the AWS platfo rm a company can implement server side and device side DevOps practices to automate operation s Applications deployed in the AWS cloud platform can take advantage of several DevOps technologies on AWS For an overview of AWS DevOps we recommend reviewing the document Introduction to DevOps on AWS 1 Although most solutions will differ in deployment and operations 
requirements IoT solutions can utilize AWS CloudFormation to define th eir server side infrastructure as code Infrastructure treated as code h as the benefits of being reproducible testable and more easily deployable across other AWS regions Enterprise organizations that utilize AWS CloudFormation in addition to other DevOps tools greatly increase their agility and pace of application changes In order to design an IoT so lution that adheres to the tene ts of security and agility organizations must also update their connected devices after they have been deployed into the environment Firmware updates provide a company a mechanism to ad d new features to a device and are a critical path for delivering security patches during the lifetime of a device To implement firmware updates to connected devices an IoT solution should first store the firmware in a ArchivedAmazon Web Services – Core Tenets of IoT Page 8 globally accessible service such as Amazon Simple Storage Service (Amazon S3) for secure durable highly scalable cloud storage Then the IoT solution can implement Amazon CloudFront a global content delivery network (CDN) service to bring the the firmware stored in Amazon S3 to the lower latency points of presence for connected devices Finally a customer can leverage the AWS IoT Shadow to push a command to a device to request that it download the new version of firmware from a pre signed Amazon CloudFront URL that restricts access to the firmware objects available through the CDN Once the upgrade is complete the device should acknowledge success by sending a message back into the IoT solution By orchestrating this small set of services for firmware updates customers control their Device DevOps approach and can scale it in a way that aligns with their overall IoT strategy In IoT automation and DevOps procedures expand beyond the application services that are deployed in the AWS platform and include the connected devices that have been deployed as part of the overall IoT architecture By designing a system that can easily perform regular and global updates for new software changes and firmware changes organizations can iterate on ways to increase value from their IoT solution and t o continuously innovate as new market opportunities arise Administration and Security Security in IoT is more than data anonymization; it is the ability to have insight auditability and control throughout a system IoT security includes the capability to monitor events throughout the solution and react to those events to achieve the desired compliance and governance Security at AWS is our number one priority Through the AWS Shared Responsibility Model an organization has the flexibil ity agility and control to implement their security requirements 2 AWS manages the security of the cloud while customers are responsible for sec urity in the cloud Customers maintain control o ver what security mechanisms they implement to protect their data applications devices systems and networks In addition companies can leverage the broad set of security and administrative tools that AWS and AWS partners provide to create a strong logically isolated and secure IoT solution for a fleet of devi ces The first service that should be enabled for monitoring and visibility is AWS CloudTrail AWS CloudTrail is a web service that records AWS API calls for an account and delivers log files to Amazon S3 After enabling AWS CloudTrail a ArchivedAmazon Web Services – Core Tenets of IoT Page 9 solution should build security and governance processes 
that are based on the realtime input from API calls made across an AWS account AWS CloudTrail provides an additional level of visibility and flexibility in creating and iterating on operational openness in a system In addition to logging API calls customers should enable Amazon CloudWatch for all AWS services used in the system Amazon CloudWatch allows applications to monitor AWS metrics and create custom metrics generated by an application These metrics can th en trigger alerts based off of those events Along with Amazon CloudWatch metrics there are Amazon CloudWatch Logs which store additional logs from AWS services or customer application s and can then trigger events based off of those additional metrics AWS services such as AWS IoT directly integrate with Amazon CloudWatch Logs; these logs can be dynamically read as a stream of data and processed using the business logic and context of the system for real time anomaly detection or security threats By pairing services like Amazon CloudWatch and Amazon CloudTrail with the capabilities of AWS IoT identities and policies a company can immediately collect valuable data around security practices at the start of the IoT strategy and meet the need s for a proa ctive implementation of security within their IoT solution Bringing Services and Solutions Together To better understand customer usage predict future trends or run an IoT fleet more efficiently an organization needs to collect and process the potentia lly vast amount of data gathered from connected devices in addition to connecting with and managing large fleets of things AWS provides a breadth of services for collecting and analyzing large scale datasets often called big data These services may be in tegrated tightly within an IoT solution to support collecting processing and analyzing the solution’s data as well as proving or disproving hypotheses based upon IoT data The ability to formulate and answer questions with the same platform one is using to manage fleets of things ultimately empowers an organization to avoid undifferentiated work and to unlock business innovations in an agile fashion ArchivedAmazon Web Services – Core Tenets of IoT Page 10 The high level cohesive architectural perspective of an IoT solution that brings IoT big data and other services together is called the Pragma Architecture The Pragma Architecture is comprised of layers of solutions: Things The device and fleet of devices Control Layer The control point for access to the Speed Layer and the nexus for fleet management Speed Layer The inbound high bandwidth device telemetry data bus and the outbound device command bus Serving Layer The access point for systems and humans to interact with th e devices in a fleet to perform analysis archive and correlate data and to use realtime views of the fleet Pragma Architecture The Pragma Architecture is a single cohesive perspective of how the core tenets of IoT manifest as an IoT solution when using AWS services One scenario of a Pragma Architecture based IoT Solution is around processing of data emitted by devices; data also known as telemetry In the diagram above after a device authenticates using a device certificate obtained from the AWS IoT service in the control layer the device regularly sends telemetry data to the AWS IoT Device G ateway in the Speed Layer That telemetry data is then processed by the IoT Ru les Engine as an event to be output by Amazon Kinesis or AWS Lambda for use by web users interacting with the serving layer ArchivedAmazon Web Services – 
Another scenario of a Pragma Architecture-based IoT solution is to send a command to a device. In the diagram above, the user's application would write the desired command value to the target device's IoT Shadow. Then the AWS IoT Shadow and the Device Gateway work together to overcome an intermittent network to convey the command to the specific device. These are just two device-focused scenarios from a broad tapestry of solutions that fit the Pragma Architecture. Neither of these scenarios addresses the need to process the potentially vast amount of data gathered from connected devices; this is where having an integrated big data backend starts to become important. The big data backend in this diagram is congruent with the entire ecosystem of real-time and batch-mode big data solutions that customers already leverage the AWS platform to create. Simply put, from the big data perspective, IoT telemetry equals "ingested data" in big data solutions. If you'd like to learn more about big data solutions on AWS, please check below for a link to further reading. There is a colorful and broad tapestry of big data solutions that companies have already created using the AWS platform. The Pragma Architecture shows that by building an IoT solution on that same platform, the entire ecosystem of big data solutions is available.

Summary
Defining your Internet of Things strategy can be a truly transformational endeavor that opens the door for unique business innovations. As organizations start striving for their own IoT innovations, it is critical to select a platform that promotes the core tenets: business and technical agility, scalability, cost, and security. The AWS platform over-delivers on the core tenets of an IoT solution by not just providing IoT services but offering those services alongside a broad, deep, and highly regarded set of platform services across a global footprint. This over-delivery also brings freedoms that increase your business' control over its own destiny and enables your business' IoT solutions to more rapidly iterate toward the outcomes sought in your IoT strategy. As next steps in evaluating IoT platforms, we recommend the further reading section below to learn more about AWS IoT, big data solutions on AWS, and customer case studies on AWS.

Contributors
The following individuals authored this document: Olawale Oladehin, Solutions Architect, Amazon Web Services; Brett Francis, Principal Solutions Architect, Amazon Web Services.

Further Reading
For additional reading, please consult the following sources: AWS IoT Service,3 Getting Started with AWS IoT,4 AWS Case Studies,5 and Big Data Analytics Options on AWS.6

Notes
1 https://d0.awsstatic.com/whitepapers/AWS_DevOps.pdf
2 https://aws.amazon.com/compliance/shared-responsibility-model/
3 https://aws.amazon.com/iot/
4 https://aws.amazon.com/iot/getting-started/
5 https://aws.amazon.com/solutions/case-studies/
6 https://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf
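To make the two Pragma Architecture scenarios above concrete (telemetry flowing into the Speed Layer, and a command conveyed through the Device Shadow), here is a minimal sketch using the boto3 iot-data client. The thing name, topic, and firmware URL are hypothetical placeholders; a production device would instead authenticate over X.509-based MQTT with the AWS IoT Device SDK, as described earlier, rather than use the AWS SDK shown here.

```python
import json
import boto3

# Hypothetical identifiers used for illustration only.
THING_NAME = "pump-sensor-042"
TELEMETRY_TOPIC = "fleet/pump-sensor-042/telemetry"

# Data-plane client for AWS IoT (backend or test context).
iot_data = boto3.client("iot-data")

# Scenario 1: publish a telemetry reading into the Speed Layer, where the
# Rules Engine can filter, transform, and forward it.
iot_data.publish(
    topic=TELEMETRY_TOPIC,
    qos=1,  # at-least-once delivery
    payload=json.dumps({"temperature_c": 71.3, "vibration_hz": 12.4}),
)

# Scenario 2: send a command by writing desired state to the thing's shadow;
# the device applies it when connectivity allows (here, a pointer to new
# firmware staged behind CloudFront, as described in the DevOps section).
iot_data.update_thing_shadow(
    thingName=THING_NAME,
    payload=json.dumps(
        {"state": {"desired": {"firmware_url": "https://example.cloudfront.net/fw/v1.2.0.bin"}}}
    ),
)
```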
|
General
|
consultant
|
Best Practices
|
Cost_Management_in_the_AWS_Cloud
|
ArchivedCost Management in the AWS Cloud Marc h 201 8 This paper has been archived For the latest technical guidance on Cost Management see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or service s each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Cost Management in the Cloud 1 Creating a Cost Conscious Culture 1 Cost Governance Best Practices 2 Getting Started with Cost Management 3 AWS Cost Explorer 3 AWS Cost and Usage Report 5 AWS Budgets 5 Other Cost Related Metrics 6 Conclusion 7 Archived Abstract This is the second in a series of whitepapers designed to support your cloud journey This paper seeks to empower you to maximize value from your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and cont inuously measure your optimization status Amazon Web Services (AWS) provides a suite of cost management tools out of the box to help you get the most value from your AWS investment This paper provides an overview of many of these tools as well as organ izational best practices for creating a cost conscious mindset ArchivedAmazon Web Services – Cost Management Page 1 Cost Management in the Cloud Migrating to the cloud enhances your business’s ability to scale and flex to the demands of your company’s workloads Historically compu ting costs were tied to a quarterly or yearly hardware procurement investment With cloud technology you now have the flexibility to initialize resources and services at any time —you pay only for what you use This has shifted the way that costs are unde rstood managed and optimized In the past hardware costs were treated as a capital expense which led to predictable resource procurement and cost patterns You had to purchase enough servers to support your company’s most highly trafficked day which resulted in waste because many of these servers would lie idle for much of the year Because the cloud lets you scale on demand you pay only for the resources you use which minimizes waste but can result in variable cost patterns The ability to scale up a nd down on demand has allowed resource procurement to transition from sole ownership of the finance team to stakeholders across IT engineering finance and other teams This democratization of resource procurement has initiated an ever growing group of c ostconscious stakeholders who are now responsible for understanding managing and ultimately optimizing costs Creating a Cost Conscious Culture One of the first steps on your company’s cloud journey is to establish best pract ices for cloud cost management Your organization should create a Cloud Center of Excellence and designate key stakeholders to oversee technical and 
architectural quality and advance a cost conscious agenda This group often starts small and grows over time A typical journey might look something like this: • Cost awareness – An individual from the finance or engineering team allocates a few hours per week to learn the basics of cloud cost management using AWS training resources helps establish basic governance best practices and participates in organization wide cloud direction discussions This individual also tends to evangelize using out ofthebox AWS reports and tools ArchivedAmazon Web Services – Cost Management Page 2 • Cost management and optimization – Over time this individual or small group expands to a larger team w hose members define custom metrics adopt and disseminate advanced reporting methodologies and enforce cost allocation strategies (often via AWS resource tags) • Evangelism and process optimization – As financial and cost management needs become more compl ex a larger dedicated team with advanced skills supports cost management across the organization and establishes internal communities of interest to support education and collaboration on key cloud topics Cost Governance Best Practices To scale increasi ngly complex workloads that are run on AWS your organization should emphasize the creation of clear effective policies and governance mechanisms around cloud deployment usage and cost responsibility Keep in mind that executive support for cost ma nagem ent processes is critical • Resource controls (policy based and automated) govern who can deploy resources and the process for identifying monitoring and categorizing these new resources These controls can use tools such as AWS Service Catalog AWS Ident ity and Access Management (IAM) roles and permissions and AWS Organizations as well as third party tools such as ServiceNow • Cost allocation applies to teams using resources shifting the emphasis from the IT ascostcenter mentality to one of shared res ponsibility • Budgeting processes include reviewing budgets and realized costs and then acting on them • Architecture optimization focuses on the need to continually refine workloads to be more cost conscious to create better architected systems • Tagging an d tagging enforcement ensure cost tracking and visibility across organization lines Establishing effective processes ensures that the right information and controls are available to the right people This reinforces channels of communication for costrelated inquiries which strengthens your cost conscious culture ArchivedAmazon Web Services – Cost Management Page 3 Getting Started with Cost Management The best place to start with gaining insight and taking action on your costs is the monthly AWS bill which is accessible via the AWS Billing and Cost Management console Your AWS bill breaks down costs by service AWS Region and linked account Although this is a great place to start for high level cost information the AWS Management C onsole also comprises a suite of billing and cost management tools that give you fine grain access understanding and control over your AWS costs and usage These tools include AWS Cost Explorer the AWS Cost and Usage Reports and A WS Budgets AWS Cost Explorer AWS Cost Explorer helps you visualize understand and manage your AWS costs and usage over time This is done via an intuitive interface that enabl es you to quickly create custom reports that include charts and tabular data You can analyze your cost and usage data in aggregate (such as total costs and usage across all accounts) down to 
granular details (for example m22xlarge costs within the Dev a ccount tagged “project: Blackthorn”) Cost Explorer equips you with data exploration functionality such as the ability to group and filter your cost and usage information to help you quickly and easily get to the data you need to make data driven decisio ns You can also change the chart type and time frame as well as access advanced filters When you sign up for Cost Explorer AWS prepares the data about your costs for the current month and the last 3 months and then calculates the forecast for the next 3 months Cost Explorer can display up to 12 months of historical data data for the current month and the forecasted costs for the next 3 months To help you get started Cost Explorer provides a selection of default reports to help you pinpoint cost an d usage trends These reports include: • Monthly costs by AWS service – Visualize the costs and usage associated with the top five cost accruing AWS services and get a detailed breakdown on all services in a table view ArchivedAmazon Web Services – Cost Management Page 4 • Amazon EC2 monthly cost and usage – View all Amazon Elastic Compute Cloud (Amazon EC2) costs over the past three months as well as current month todate costs • Monthly costs by linked account – View the distribution of costs across your organizat ion To recreate this chart add Linked Account as the grouping dimension in Cost Explorer • Monthly running costs – See all running costs over the past three months and view forecasted costs for the coming month with a corresponding confidence interval • Reserved Instance (RI) reports – To learn more about the RI Utilization and Coverage reports see Reserved Instance (RI) Reporting To create and save persona lized reports you can use the following functionality: • Set time interval and granularity – Set a custom time interval and determine whether you would like to view your data monthly or daily • Filter/ group your data – Dig deeper into your data by taking advantage of filtering and grouping functionality using a variety of available dimensions • Forecast future costs and usage – Use forecasting to get a better idea of what your costs and usage may look like in the future Available filters in Cost Explorer in clude: • API Operation – Requests made to and tasks performed by a service • AWS Services – Individual AWS services such as Amazon EC2 or Amazon Simple Storage Service (Amazon S3) • AWS Regions – Geographic areas in whi ch AWS hosts your resources • Availability Zones – Distinct locations within an AWS R egion • Usage Types – The units that each service employs to measure the usage of a specific type of resource • Usage Type Groups – Predefined filters that collect specific cate gories of usage into a single filter (eg EC2 ELB – Running Hours) • Cost Allocation Tags – AWS resource tags that have been activated for cost allocation ArchivedAmazon Web Services – Cost Management Page 5 • Instance Types – The type you specified when launching an EC2 host • Linked Accounts – Members of a con solidated billing family • Purchase Option – Identify On Demand Spot and Reserved Instance usage Once you arrive at a helpful view you can save your progress as a new report that you can refer to in the future To learn more about AWS Cost Explorer see AWS Cost Explorer AWS Cost and Usage Report The AWS Cost and Usage Report tracks your AWS usage and provides estimated charges associated with that usage You can conf igure t his report to present the data hourly or daily It is updated at 
least once a day until it is finalized at the end of the billing period The AWS Cost and Usage Report gives you the most granular insight possible into your costs and usage and it is the source of truth for the billing pipeline It can be used to develop advanced custom metrics using business intelligence data analytics and third party cost optimization tools The AWS Cost and Usage Report is delivered automatically to an S3 bucket that you specify and it can be downloaded directly from there (standard S3 storage rates apply) It can also be ingested into Amazon Redshift or uploaded to Amazon QuickSight To learn more about th e AWS Cost and Usage Report see AWS Cost and Usage Report AWS Budgets AWS Budgets lets you set custom cost and usage budgets and receive alerts when you approach or exceed your budgeted amount You can create b udgets from the AWS Budgets Dashboard or programmatically via the AWS Budgets API Budgets can track cost or usage monthly quar terly or yearly You can create a b udget by using the same filters available in Cost Explorer You can monitor b udgets via the Budgets Dashboard in the AWS Management Console For both cost and usage budgets alerts can be set against actual or forecasted budgeted values ArchivedAmazon Web Services – Cost Management Page 6 From there you can further specify the percent accrual toward the cost or usage threshold For example specifying 100 % of the actual costs of a $1000 budget will alert you when the $1000 threshold is exceeded Creating a second alert that notifies you when 90 % of your $1000 budget has been reached will give you more time to take proactive action You can also supplement these alerts by setting one against forecasted cost or usage values (eg 105 %of your budgeted value) which will alert you of possible an omalies or changes in behavior Each budget can have up to five associated alerts Each alert can have up to 10 email subscribers and can optionally be published to an SNS topic Other Cost Related Metrics Creating cost related metr ics and then tracking them supports a data driven decision making culture This makes it easy to understand and manage your costs and identify opportunities for savings Some examples of cost related metrics that you can implement include percentage of: • Resource utilization • Instances turned off daily • Instances tagged • Amazon EC2 instances that have undergone EC2 Right Sizing Organizations taking advantage of AWS cost optimization offerings such as Reserved Instances and Spot Instances should develop metrics around them such as percentage of: • Reserved Instance coverage of key workloads • Aggregate utilization of EC2 Reserved Instances • Application of EC2 Spot Instances and any associated discounts As organizational needs evolve cost management requirements tend to evolve as well toward quantifying savings Savings can be realized as a result of cost optimization efforts: • Workload management – Gain elasticity by turning off dev elopment test and staging workloads when not in use A common approach is to ArchivedAmazon Web Services – Cost Management Page 7 mandate on/off for all such instances except those flagged manually as exceptions: On/off savings = (Highest hourly cost x hours per month) − actual monthly cost • Reserved Instance utilization – Maximize Reserved Instance utilization using the EC2 Reserved Instance Reports in AWS Cost Explorer A typical utilization target is 70 % of always on workloads • Reserved Instance Right Sizing – Apply a benchmark to a point in time and measure 
savings potential by right-sizing your EC2 instances. Over time you can measure savings achieved through right-sizing and compare that to your initial benchmark.
These are a few examples of possible metrics that you can implement in your cost optimization journey. You can further refine your metrics to track unit costs along the following dimensions:
• Number of customers or active subscribers
• Revenue generated
• Product or business unit
• Internal user
• Experiment
Using the cost metrics outlined above, you can link your cloud computing costs and usage to your business objectives.

Conclusion
AWS provides a set of cost management tools out of the box to help you manage, monitor, and ultimately optimize your costs. To get started, identify someone to set the standard for cloud excellence at your organization, get started using cost management tools for your needs, and define and track against a set of cost-related benchmarks for cost optimization. As your cost management capabilities grow, you can begin to use more advanced metrics, set budgets and alerts, and use advanced analytics to identify additional savings opportunities. To learn more about the tools that AWS provides to help you access, understand, allocate, control, and optimize your AWS costs and usage, see AWS Cost Management.
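To feed the kind of cost-related metrics described above with data rather than manual console lookups, the following is a minimal sketch, assuming the Cost Explorer API is enabled for the account, that pulls one month of unblended cost grouped by service using boto3. The dates are placeholders, and Cost Explorer API requests are billed per call.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

# One month of unblended cost, grouped by service (dates are examples).
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-03-01", "End": "2018-03-31"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```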
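The budget alerts discussed earlier (for example, a $1,000 monthly cost budget with a 90% actual-spend alert) can also be created programmatically. This is a minimal sketch assuming the AWS Budgets API via boto3; the budget name and subscriber email address are placeholders.

```python
import boto3

budgets = boto3.client("budgets")
account_id = boto3.client("sts").get_caller_identity()["Account"]

# A $1,000 monthly cost budget with an alert at 90% of actual spend.
budgets.create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "monthly-cloud-spend",  # hypothetical name
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 90,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```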
|
General
|
consultant
|
Best Practices
|
Creating_a_Culture_of_Cost_Transparency_and_Accountability
|
ArchivedCreating a Culture of Cost Transparency and Accountability March 2018 This paper has been archived For the latest technical guidance on Cost Management see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived © 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Abstract 4 From Cloud Cost to Cloud Value 1 Speed and Cost Tradeoffs 3 Cost is Everyone’s Responsibility 3 Promoting Visibility Transparency and Accountability 4 Determining Cost Allocation 5 Evangelizing Best Practices 6 Conclusion 6 Archived Abstract This is the fifth in a series of whitepapers designed to support your cloud journey This paper seeks to empower you to maximize value from your investments improve forecasting accuracy and cost predictability create a culture of ownership and cost transparency and continuously me asure your optimization status This paper discusses the tools best practic es and tips that your organization can use to create a lean cost culture and maximize the benefits of the cloud ArchivedAmazon Web Services – Creating a Culture of Cost Transparency and Accountability Page 1 From Cloud Cost to Cloud Value Migrating to the cloud is an iterative process that evolves as your organization develops new s kills processes tools and capabilities These skills build momentum and acc elerate your migration efforts The prospect of moving to the cloud does not need to be a daunting or arduous proposition Establishing the right cultural foundation to build on is key to a successful migration Because cloud services are purchased deployed and managed in fundamentally different ways from traditional IT adopting them successfully requires a cultural shift as well as a technolo gical one inside organizations Culture consists of the attitudes and behaviors that define how a business operates Organizations can improve cost optimization by promoting a culture where employees view change as normal and welcome responsibility in the interest of following best practice s and adapting to new technology This is what lean cost culture means In traditional environments IT infrastructure requires significant upfront investment and labor The decision to incur these costs typically must go through multiple layers of approva l In legacy IT models IT purchases are ordered and managed through a central services model at significant expense What’s more the sources of these costs are difficult to identify and allocate in part because of limited transparency The cloud present s an entirely different situation IT infrastructure requires more limited capital investments and labor can focus on undifferentiated work as opposed to managing infrastructure You can easily spin up cloud services without 
IT intervention using a depart mental credit card Specialist teams are not always required to get infrastructure to a functioning state and business units can more easily deploy their own technology needs While the initial costs might be lower they are also easier to incur Without the right infrastructure and processes in pl ace costs are not always easy to manage There’s also a major difference in how cloud services and data center infrastructure are paid for If you create a virtual machine on a physical server in a data center there’s no inherent way to measure the cost of that action If you create this machine in the cloud costs immediately begin to accrue Cloud ArchivedAmazon Web Services – Creating a Culture of Cost Transparency and Accountability Page 2 costs are tightly coupled with usage often down to the second Most actions have a hard dollar cost implication Because cloud resources are easier to deploy and incur usage based costs organizations must rely on good governance and user behavior to manage costs —in other words they need to create a lean cost culture This is especially important becaus e with the cloud and modern agile DevOps practices implementation is a continuous cycle with new resources services and projects being adopted regularly A lean cost culture is essential when architecting cloud based solutions and should be part of pla nning design and development Cost management should not be delegated only after the technology has been developed Fortunately in many ways creating a lean cost culture is much easier to do in the Amazon Web Services (AWS) Cloud than in the data center environment You can closely track the costs incurred by specific individuals groups projects or functions Your teams can share i nformation through consoles and reports Rich cost analytics and management tools are built into the platform and cost saving management automation is relatively easy to implement By u sing the tools best practices and tips detailed in this paper your organization can maximize the benefits of the cloud while keeping costs under control Ultimately the goal is to move from thinking about cloud costs to understanding cloud value —the return on investment ( ROI ) your organization obtains fr om various initiatives and workloads that leverage the cloud It’s important to understand not just what you’re spending but the value you’re getting in return A bigger bill doesn’t necessarily indicate a problem if it means you’re growing your business your margins or your capabilities Therefore your organization need s to clearly identify key performance indicators and success factors that are impacted by cloud adoption In the absence of well identified metrics determining success is complicated an d it can be difficult to derive value Examples of categories that can help define success are business agility operational resilienc y and total cost of ownership One example of how to evaluate cloud value is by looking at unit cost The unit can be any object of value in your organization such as subscribers API calls or page views The unit cost is the total cost of a service divided by the number of units By focusing on reducing unit cost over time and understanding how ArchivedAmazon Web Services – Creating a Culture of Cost Transparency and Accountability Page 3 spending and margins are re lated you can concentrate on getting more for your money Arriving at this level of understanding can be an incremental process Best practices that can help get you there are discussed below Speed and 
Cost Tradeoffs With cost optimization as with the o ther pillars in the AWS WellArchitected Framework there are trade offs to consider for example whether to optimize for speed tomarket or for cost In some cases it’s best to optimi ze for speed — going to market quickly shipping new features or simply meeting a deadline — rather than investing in upfront cost optimization Sometimes d esign decisions are directed by haste rather than data and the temptation always exists to overcompens ate just in case rather than spend time benchmarking for the most cost optimal deployment This might lead to overprovisioned and under optimized deployments However this is a reasonable choice when you need to lift and shift resources from your on premises environment to the clo ud and then optimize afterward Investing in a cost optimization strategy upfront allows you to realize the economic benefits of the cloud more readily by ensuring a consistent adherence to best practi ces and avoiding unnecessary overprovisioning Cost is Everyone’s Responsibility All teams can help manage cloud costs and cost optimization is everyone’s responsibility Many variables affect cost and different levers can be pulled to drive operational excellence The following are e xamples of different teams that need to consider cost optimization : • Engineering needs to know the cost of deploying resources and how to architect for cost optimization • Finance needs cost data for accounting reporting and decision making • Operations makes large scale decisions that affect IT costs • Business decision makers must track costs against budgets and understand ROI ArchivedAmazon Web Services – Creating a Culture of Cost Transparency and Accountability Page 4 • Executives need to understand the impact of cloud spending to help with divestitures acquisitions and organizational strategy In the past f ew of these roles were tasked with the responsibility of understanding let alone managing IT costs Now s takeholders need training policies and tools to do this effectively The best starting point is to crea te visibility into cloud costs Promoting Visibility Transparency and Accountability In the cloud it’s easy to get into a situation where the people watching costs are not the same people incurring them One of the goals of creating a lean cost culture is turning everybody into a cost watcher By providing alerts dashboards and reports relevant to each stakeholder you reduce the feedback loop between the data and the action that i s required to make corrections In addition to giving stakeholders visib ility it’s a good idea to encourage transparency —in other words let teams see how others are spending — showcasing trends best practices and opportunities for improvement This can help create a shared sense of ownership over cloud costs and incentivize people to minimize them You can even go so far as to encourage friendly rivalries between teams to achieve higher levels of optim ization through gamification To achieve true success cost optimization must become a cultural norm in your organization Get everyone involved Encourage everyone to track their cost optimization daily so they can establish a habit of efficiency and see the daily impact of their cost savings over time Although everyone shares the ownership of cost optimization best practices call for someone to take primary responsibility for cost optimization Typically this is someone from either the finance or IT department who is responsible for ensuring that cost controls are monitored so 
that business goals can be met The costoptimiza tion engineer makes sure that the organization is positioned to derive optimal value from the decision to adopt AWS As the organization matures this role can become a Cloud Center of Excellence responsible for continually driving cost optimization best p ractices For more on developing a Cloud Center of Excellence see the second whitepaper in this series ArchivedAmazon Web Services – Creating a Culture of Cost Transparency and Accountability Page 5 Determining Cost Allocation To help you understand your responsibility for cloud costs use AWS tools for resource allocation The two main mechanisms of cost allocation in AW S are linked accounts and tags Linked Accounts Linked accounts let you split the AWS bill by cost center or business unit while centralizing payment through the organizational account Linked accounts are managed through the consolidated billing feature in AWS Organizations With consolidated billing you can see a combined view of AWS charges incurred by all your accounts You also can get a cost report for each mem ber account that is assoc iated with your master account Tags To help you manage your instances images and other Amazon EC2 resources you can optionally assign your own metadata to each resource in the form of tags Tags enable you to categorize your AWS resources in different ways for example by purpose owner or environment You can use tags for many purposes and they are an especially powerful way to create a lean cost culture AWS Cost Explorer and detailed billing reports let you analyze your AWS costs by tag Typically you use business tags such as cost center/business unit customer or project to associate AWS costs with traditional cost allocation dimensions However a cost allocation report can include any tag which means you can easily associate costs with technical or security dimensions such as specific applications environments or compliance programs Using tags can make it easy to create usage reports specific to role business function application project a nd more Your o rganization should create a common taxonomy as early as possible —one that embodies the organizational structure and enables easy accountability for costs It is also important to track untagged resources because these can represent unallocat ed costs Many organizations enforce tagging programmatically and even implement a tag or ArchivedAmazon Web Services – Creating a Culture of Cost Transparency and Accountability Page 6 terminate rule With proper tagging people can easily see which costs they are responsible for Evangelizing Best Practices As with all cloud activities the key to developing best practices stems from infusing a business culture into everything you do When a culture of accountability and transparency becomes intrinsic to the way you conduct business you can see benefits quickly A cost conscious cloud culture does not come about on its own Changing processes and behaviors takes time and effort Clear policies around cost ownership deployment processes reporting and other best practices should be developed and evangelized across your organization Training can he lp staff understand how cloud costs work and steps th ey can take to eliminate waste Some fundamental policies to consider include: • Turning off unused resources • Using Amazon EC2 Spot Instances Amazon EC2 Reserved Instances and other service reservation types where appropriate • Using alerts no tifications and AWS Budgets to help teams stay on track • Reporting waste on 
a team and company level
• Applying showbacks and chargebacks to enable cost accountability
• Setting up dashboards to enable widespread monitoring of cloud usage
• Setting up communication cadences to ensure visibility of cost management issues to the right people
Conclusion
Every organization is different. Some organizations are used to rapid change and will adopt a lean cost culture quickly. Others have more entrenched processes and approaches and will require more time to get there. The key is to understand that cultural change is required and that it should be addressed early in the cloud adoption journey. More than any specific tool or approach, getting your people on board is the foundation of cost management success.
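The tagging and cost-allocation practices described above lend themselves to simple automation. The following Python/boto3 snippet is a minimal sketch, not part of the original whitepaper: it queries the AWS Cost Explorer API for one month of unblended cost grouped by a user-defined cost allocation tag. The tag key "cost-center", the date range, and the assumption that the tag has already been activated as a cost allocation tag in the billing console are illustrative choices rather than prescriptions from the paper.

```python
# Hypothetical sketch: report one month of unblended cost grouped by a
# user-defined cost allocation tag. Tag key and dates are placeholders.
import boto3

# Cost Explorer is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-02-01", "End": "2018-03-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "cost-center"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        tag_key = group["Keys"][0]  # e.g. "cost-center$analytics"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        # Groups with an empty value after the "$" separator are untagged
        # resources, i.e. the unallocated costs the paper recommends tracking.
        print(f"{tag_key}: ${amount:.2f}")
```

A report like this can feed the dashboards, alerts, and showback reports discussed earlier, giving each team visibility into the costs attributed to its own tag values.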
|
General
|
consultant
|
Best Practices
|
Criminal_Justice_Information_Service_Compliance_on_AWS
|
ArchivedCriminal J ustice Information Service Compliance on AWS (This document is part of the CJIS Workbook package which also includes CJIS Security Policy Requirements CJIS Security Policy Template and CJIS Security Policy Workbook ) March 2017 This paper has been archived For the latest compliance content see https://awsamazoncom/compliance/resources/ Archived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessmen t of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitme nts conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS a nd its customers Archived Contents Introduction 1 What is Criminal Justice Information? 1 What is the CJIS Security Policy 2 CJIS Security Addendums (Agreements) 2 AWS Approach on CJIS 3 CJIS and relationship to FedRAMP 3 AWS Shared Responsibility Model 4 Service Categories 4 AWS Regions Availability Zones and Endpoints 6 Security & Compliance OF the Cloud 7 Security & Compliance IN the Cloud 8 Creating a CJIS Environment on AWS 9 Auditing and Accountability 10 Identification and Authentication 11 Configuration Management 12 Media Protection & Information Integrity 13 System and Communication Protection and Information Integrity 14 Conclusion 15 Further Reading 16 Document Revisions 17 Archived Abstract There is a long and successful track record of AWS customers using the AWS cloud for a wide range of sensitive federal and state government workloads including Criminal Justice Information (CJI) data Law enforcement customers (and partners who manage CJI) are taking advantage of AWS services to dramatically improve the security and protection of CJI data using the advanced security services and features of AWS such as a ctivity logging ( AWS CloudTrail ) encryption of data in motion and at rest (Amazon S3’s Server Side Encryption with the option to bring your own key) comprehensive key management and protection ( AWS Key Management Service and AWS CloudHSM ) along with integrated permission management (IAM federated identity management multi factor authentication) To enable this AWS complies with Criminal Justice Information Services Division (CJIS) Security Policy requirements where applicable such as providing states with fingerprint cards for GovCloud administrators and signing CJIS security addendum agreements with our customers ArchivedAmazon Web Services – CJIS Compliance on AWS Page 1 Introduction Amazon Web Services (AWS) delivers a scalable cloud computing platform with high availability and dependability providing the tools that enable customers to run a wide range of applications Because AWS designed their cloud implementation with security in mind you can use AWS services to satisfy a wide range of regulatory requirements including the Criminal Justice Information Services (CJIS) Security Policy The CJIS Security Policy provides Criminal Justice Agencies (CJA) and Noncriminal Justice 
Agencies (NCJA) with a minimum set of security requirements for access to FBI CJIS systems and information for the protection and saf eguarding of CJI The essential premise of the CJIS Security Policy is to provide the appropriate controls to protect CJI from creation through dissemination whether at rest or in transit This minimum standard of security requirements ensures continuity of information protection What is Criminal Justice Information? Criminal Justice Information (CJI) refers to the FBI CJIS provided data necessary for law enforcement agencies to perform their mission and enforce the laws such as biometric identity his tory person organization property and case/incident history data CJI also refers to data necessary for civil agencies to perform their mission including data used to make hiring decisions CJIS Security Policy 52 A 3 defines CJI as: Criminal Justic e Information is the abstract term used to refer to all of the FBI CJIS provided data necessary for law enforcement agencies to perform their mission and enforce the laws including but not limited to: biometric identity history person organization property and case/incident history data In addition CJI refers to the FBI CJIS provided data necessary for civil agencies to perform their mission; including but not limited to data used to make hiring decisions — CJIS Security Policy 52 A 3 Law enforcement must be able to access CJI wherever and whenever is necessary in a timely and secure manner in order to reduce and stop crime ArchivedAmazon Web Services – CJIS Compliance on AWS Page 2 What is the CJIS Security Policy The intent of the CJIS Security Policy is to ensure the protection of the CJI until the information is 1) released to the public via authorized dissemination (eg within a court system presented in crime reports data or released in the interest of public safety) and 2) purged or destroyed in accordance with applicable record retention rules The Criminal Justice Information Services Division (CJIS) is a division of the United States Federal Bureau of Investigation (FBI) and is responsible for publishing the Criminal Justice Information Services (CJIS) Security Policy which is currently on version 55 The CJIS Security Policy outlines a minimum set of security requirements that create security controls for managing and maintaining Criminal Justice Information (CJI) data The CJIS Advisory Policy Board (APB) manages the policy with national oversight from the CJIS division of the FBI There is no centralized adjudication body for determining what is or isn’t compliant with the Security Policy in the way that FedRAMP has standardized security assessments across the federal government That means vendors/CS Ps wanting to provide CJIS compliant solutions to multiple law enforcement agencies must gain formal CJIS authorizations from city county or state level authority CJIS Security Addendums (Agreements) Unlike many of the compliance frameworks that AWS supports there is no central CJIS authorization body no accredited pool of independent assessors nor a standardized assessment approach to determining whether a particular solution is considered "CJIS compliant" Simply put a standardized "CJIS compliant” solution which works across all law enforcement agencies does not exist It is often falsely misunderstood and miscommunicated that a cloud service provider can be “CJIS certified” It is imperative to understand that delivering a CJIS compliant solution relies on a Shared Responsibility Model between the cloud service 
provider and the CJA Each law enforcement organization granting CJIS authorizations interprets solutions according to their own risk acceptance standard of what can be construed as compliant within the CJIS requirements Authorizations from one state do not necessarily find reciprocity within another state (or even necessarily ArchivedAmazon Web Services – CJIS Compliance on AWS Page 3 within the same state) Providers must submit solutions for review with each agency authorizing official(s) possibly to include duplicate fingerprint and background checks and other state/jurisdiction specific requirements Each authorization is an agreement with that particular organization; something that must be repeated locally at each law enforcement agency Thu s be wary of vendors that may represent themselves as having a nationally recognized or 50 state compliant CJIS service AWS Approach on CJIS AWS has evaluated the 13 Policy Areas along with the 131 security requirements and has determined that 10 controls can be directly inherited from AWS both AWS and the CJIS customer share 78 and 43 are customer specific controls AWS has documented these requirements with a detailed workb ook which can be downloaded at CJIS Security Policy Workbook The AWS CJIS Security Policy Workbook outlines the shared responsibility between AWS and the CJIS customer on how AWS directly supports the requirements within our FedRAMP accreditation (Note: the CJIS Advisory Policy Board (APB) also has mapping for CJIS to NIST 800 53rev4 requirements which are the base controls for Federal Risk and Authorization Management Program (FedRAMP) dated 6/1/2016) This document and our approach h as been reviewed by the CJIS APB subcommittee chairmen partners in the CJIS space with favorable support on the efficacy of our workbook and approach CJIS and relationship to FedRAMP All Federal Agencies including Criminal Justice Agencies (CJA’s) may leverage the AWS package completed as part of the Federal Risk and Management Program (FedRAMP) FedRAMP is a government wide program that provides a standardized approach to security assessment authorization and continuous monitoring for cloud service providers (CSP’s ) This approach utilizes a “do once use many times” model to ensure cloud based services have adequate information security eliminate duplication of effort reduce risk management costs and accelerate cloud adoption FedRAMP conforms to the National Institute of Science & Technology (NIST) 800 Series Publications to verify that ArchivedAmazon Web Services – CJIS Compliance on AWS Page 4 all authorizations are compliant with the Federal Information Security Management Act (FISMA) The CJIS Security Policy integrates presidential directives federal laws FBI directives the criminal justice community’s APB decisions along with nationally recognized guidance from the National Institute of Standards and Technology (NIST) and the National Crime Prevention and Privacy Compact Council (Compact Council ) AWS Shared Responsibility Model AWS offers a variety of different infrastructure and platform services For the purpose of understanding security and shared responsibility of these AWS services consider the following three main categories: • Infrastructure • Platform • Software Each category comes with a slightly different security ownership model based on how you interact and access the functionality The main focus of this document the CJIS Security Policy Template document the CJIS Security Policy Requirements document and the CJIS Security Policy 
Workbook is on the Infrastructure services The other categories are highlighted for awareness and can also be addressed by AWS services as outlined in the following sections Service Categories Infrastructure Services This category includes compute services such as Amazon EC2 and related services such as Amazon Elastic Block Store (Amazon EBS) AWS Auto Scaling and Amazon Virtual Private Cloud (Amazon VPC) With these services you can architect and build a cloud infrastructure using technologies similar to and largely compatible with on premises solutions You control the operating ArchivedAmazon Web Services – CJIS Compliance on AWS Page 5 system and you configure and operate any identity management system that provides access to the user layer of the virtualization stack Platform as a Service Services in this category typically run on separate Amazon EC2 or other infrastructure instances but sometimes you don’t manage the operating system or the platform layer AWS provides service for these application “c ontainers” You are responsible for setting up and managing network controls such as firewall rules and the underlying platform – eg level identity and access management separately from Identity and Access Management ( IAM ) Examples of container servic es include Amazon Relational Database Services ArchivedAmazon Web Services – CJIS Compliance on AWS Page 6 (Amazon RDS) Amazon Elastic Map Reduce (Amazon EMR) and AWS Elastic Beanstalk Software as a Service This category includes high level storage database and messaging services such as Amazon Simple Storage Service (Amazon S3) Amazon Glacier Amazon DynamoDB Amazon Simple Queuing Service (Amazon SQS) and Amazon Simple Email Service (Amazon SES) These services abstract the platform or management layer on which you can build and operate cloud applications You access the endpoints of these abstracted services using AWS APIs and AWS manages the underlying service components or the operating system on which they reside You share the underlying infrastructure and abstracted services provide a multi tenant platform which isolates your data in a secure fashion and provides for powerful integration with IAM AWS Regions Availability Zones and Endpoints AWS has datacenters in multiple locations around the world The recommended region for CJIS workloads is t he AWS GovCloud region Regions are designed with availability in mind and consist of at least two often more Availability Zones Availability Zones are designed for fault isolation They are connected to multiple Internet Service Providers (ISPs) and different power grids The y are interconnected using high speed links so applications ArchivedAmazon Web Services – CJIS Compliance on AWS Page 7 can rely on Local Area Network (LAN0) connectivity for communication between Availability Zones within the same region You are responsible for carefully selecting the Availability Zone(s) where your systems will reside Systems can span multiple Availability Zones and we recommend that you design your systems to survive temporary or prolonged failure of an Availability Zone in the case of a disaster AWS provides web access to services through t he AWS Management Console AWS provides programmatic access to services through Application Programming Interfaces (APLs) and command line interfaces (CLIs) Service endpoints which are managed by AWS provide management (“backplane”) access Security & C ompliance OF the Cloud One of the tenets within the CJIS Security Policy is the risk verse realism approach of applying 
risk based approaches that can be used to mitigate risks based on Every “shall” statement contained within the CJIS Security Policy has been scrutinized for risk versus the reality of resource constraints and realworld application The purpose of the CJIS Security Policy is to establish the minimum security requirements; therefore individual agencies are encouraged to implement additiona l controls to address agency specific risks Each agency faces risk unique to that agency It is quite possible that several agencies could encounter the same type of risk however depending on resources would mitigate that risk differently In that light a risk based approach can be used when implementing requirements” — 23 Risk Versus Realism In order to manage risk and security within the cloud a variety of processes and guidelines have been created to differentiate between the security of a cloud service provider and the responsibilities of a customer consuming the cloud services One of the primary concepts that have emerged is the increased understanding and documentation of shared inherited or dual (AWS & Customer) security controls in a cloud env ironment A common question for ArchivedAmazon Web Services – CJIS Compliance on AWS Page 8 AWS is: “how does leveraging AWS make my security and compliance activities easier?” This question can be answered by demonstrating the security controls that are met by approaching the AWS Cloud in two distinct ways: first reviewing compliance of the AWS Infrastructure gives an idea of “Security & Compliance OF the cloud”; and second reviewing the security of workloads running on top of the AWS infrastructure gives an idea of “Security & Compliance IN the cloud” AWS opera tes manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the AWS services operate Customers running workloads on the AWS infrastructure depend on AWS for a nu mber of security controls AWS has several additional whitepapers which provide additional information to assist AWS customers with integrating AWS into their existing security frameworks and to help design and execute security assessments of an organizat ion’s use of AWS For more information see the AWS Compliance Whitepapers Security & Compliance IN the Cloud Security & Compliance IN the Cloud refers to how the customer manages the secur ity of their workloads through the use of various applications and architecture (virtual private clouds security groups operating systems databases authentication etc) • Cross service security controls – are security controls which a customer needs to implement across all services within their AWS customer instance While each customer’s use of AWS services may vary along with their own risk posture and security control interpretation cross service controls will need to be documented within the customer’s use of AWS services Example: Multi factor authentication can be used to help secure Identity and Access Management (IAM) users groups and roles within the customer environment in order to meet CJIS Access Management Authentication and Authorization requirements for the particular agency or CJIS organization • Service Specific security controls – are service specific security implementation such as the Amazon S3 security access permission ArchivedAmazon Web Services – CJIS Compliance on AWS Page 9 settings l ogging event notification and/or encryption A customer may need to document service specific controls within their 
use of Amazon S3 in order to meet a specific security control objective related to criminal justice data and/or investigative related reco rds Example: Server Side Encryption (SSE) can be enabled for all objects classified as CJI and/or directory information related to the CJIS security • Optimized Network Operating Systems (OS) and Application Controls – controls a customer may need to docu ment in order to meet specific control elements related to the use of an Operating System and/or application deployed within AWS Example: Customer Server Secure hardening rules or an optimized private Amazon Machine Images (AMI) in order to meet specific security controls within Change Management Creating a CJIS Environment on AWS AWS has several partner solutions that collect transfer manage as well as share digital evidence (eg video and audio files) related to law enforcement interactions AWS is also working with several partners who are delivering electronic warrant services as well as other unique CJIS law enforcement applications and services directly or indirectly to CJIS customers a s illustrated above CJIS Agency/Customer CJIS Technology ArchivedAmazon Web Services – CJIS Compliance on AWS Page 10 Similar to other AWS compliance frameworks the CJIS Security Policy takes advantage of the shared responsibility model between you and AWS Using a cloud se rvice which aligns to CJIS security requirements doesn't mean that your environment automatically adheres to applicable CJIS requirements It’s up to you (or your AWS partner/systems integrator) to architect a solution that meets the applicable CJIS requirements outlined in the CJIS Security Policy One advantage of using AWS for CJIS workloads is that you inherit a significant portion of the security control implementation from AWS and the partner solution that address and meet CJIS security policy elem ents You and your AWS customers and partners should enable several applicable security features functions and utilize leading practices in order to create an AWS CJIS compliant environment within their use of AWS As such t he following section provides a high level overview of services and tools you and your partners should consider as part of your AWS CJIS implementation Auditing and Accountability (Ref CJIS Policy Area 4) • AWS CloudTrail – A service that records AWS API calls for your account and del ivers log files to you AWS CloudTrail logs all user activity within your AWS account You can see who performed what actions on each of your AWS resources The AWS API call history produced by AWS CloudTrail enables security analysis resource change tracking and compliance auditing For more information go here • Amazon CloudWatch – A service that monitors AWS cloud resources and the applications that you run on AWS You can use AWS CloudWatch to monitor your AWS resources in near real time including Amazon EC2 instances Amazon EBS volumes AWS Elastic Load Balancers and Amazon RDS DB instances For more information go here • AWS Trusted Advisor – This online resource provides best practices (or checks) in fo ur categories: cost optimization security fault tolerance and performance improvement For each check you can review a detailed description of the recommended best practice a set of alert criteria guidelines for action and a list of useful resources on the topic For more information go here ArchivedAmazon Web Services – CJIS Compliance on AWS Page 11 • Amazon SNS – You can use this service to send email or SMS based notifications to administrative and 
security staff Within an AWS account you can create Amazon SNS topics to which applications and AWS CloudFormation deployments can publish These push notifications can automatically be sent to individuals or groups within the organization who need to be notified of Amazon CloudWatch alarms resource deployments or other activity published by applications to Amazon SNS For more information go here Identification and Authentication (Ref CJIS Policy Area 6) • Access Control – IAM is central to securely controlling access to AWS resources Administrators can create users groups and roles with specific access policies to control the actions that users and applications can perform through the AWS Management Console or AWS API Federation allows IAM rol es to be mapped to permissions from central directory services • AWS Identity and Access Management ( IAM) configuration – Creating user groups and assignment of rights including creation of groups for internal auditors an IAM super user and application administrative groups segregated by functionality (eg database and Unix administrators) For more information go here • AWS Multi Factor Authentication (MFA) – A simple best practice that adds an extra l ayer of protection on top of your user name and password With MFA enabled when a user signs in to an AWS website they will be prompted for their user name and password (the first factor —what they know) as well as for an authentication code from their A WS MFA device (the second factor —what they have) For more information go here • AWS Account Password Policy Settings – Within the IAM console under account settings a password policy can be set which supports the password policy requirements as outlined within the CJIS security policy For more information go here ArchivedAmazon Web Services – CJIS Compliance on AWS Page 12 Configuration Management (Ref CJIS Policy Area 7) • Amazon EC2 – A web service that provides resizable compute capacity in the cloud It provides you with complete control of your computing resources and lets you run Amazon Machine Images (AMI) For more information go here • Amazon Machine Image (AMI) – An Amazon Machine Image (AMI) provides the information required to launch an instance which is a virtual server in the cloud You specify an AMI when you launch an instance and you can launch as many instances from the AMI as you need You can also launch instances from as many different AMIs as you need For more information go here • Amazon Machine Images (AMIs) management – Organizations commonly ensure security and complia nce by centrally providing workload owners with pre built AMIs These “golden” AMIs can be preconfigured with host based security software and hardened based on predetermined security guidelines Workload owners and developers can then use the AMIs as star ting images on which to install their own software and configuration knowing the images are already compliant For more information go here • Choosing an AMI – While AWS d oes provide images that can be used for deployment of host operating systems you need to develop and implement system configuration and hardening standards to align with all applicable CJIS requirements for your operating systems For more information go here • AWS EC2 Security Groups – You can control how accessible your virtual instances in EC2 are by configuring built in firewall rules (Security Groups) – from totally public to completely private or somewhere in between For more information go here • Resource Tagging – Almost all AWS resources 
allow the addition of user defined tags These tags are metadata and irrelevant to the functionality of the resource but are critical for cost management and access control When multiple groups of users or multiple workload owners exist within the same AWS account it is important to restrict access to resources based on tagging Regardless of account structure ArchivedAmazon Web Services – CJIS Compliance on AWS Page 13 you can use tag based IAM policies to place extra security restrictions on critical resources For more information go here • AWS Config – A fully managed service that provides you with an AWS resource inventory configuration history and configuration change notifications to enable security and governance With AWS Config service you can immediately discover all of your AWS resources and view the configuration of each You can receive notifications each time a configuration changes as well as dig into the configuration history to perform incident analysis For more information go here • CloudFormation Templates – Creating preapproved AWS CloudFormation templates for common use cases Using templates allows CJIworkload owners to inherit the security implementation of the approved template thereby lim iting their authorization documentation to the features that are unique to their application Templates can be reused to shorten the time required to approve and deploy new applications For more information go here • AWS Service Catalog – Allows CJIS IT administrators to create manage and distribute portfolios of approved products to end users who can then access the products they need in a personalized portal Typical products include servers databases websites or applications that are deployed using AWS resources (for example an Amazon EC2 instance or an Amazon RDS database) For more information go here Media Protection & Information Integrity (Ref CJIS Policy Area 8 & 10) • AWS Storage Gateway – A service that connects an on premises software appliance to cloud based storage providing seamless and secure integration between your on premises IT environment and AWS’s storage infrastructure For more information go here • Storage – AWS provides various options for storage of information including Amazon Elastic Block Store (Amazon EBS) Amazon Simple Storage Service (Amazon S3) and Amazon Relational Database Service (Amazon RDS) to allow you to make data easily accessible to your appl ications or for backup purposes Before you store sensitive data you should use CJIS requirements for restricting direct inbound and outbound data to select the correct storage option ArchivedAmazon Web Services – CJIS Compliance on AWS Page 14 For example Amazon S3 can be configured to encrypt your data at rest with server side encryption (SSE) In this scenario Amazon S3 will automatically encrypt your data on write and decrypt your data on retrieval When Amazon S3 SSE encrypts data at rest it uses Advanced Encryption Standard (AES) 256 bit symmetric keys If you choose server side encryption with Amazon S3 you can use one of the following methods: o AWS Key Management Service (KMS) – A service that makes it easy for you to create and control the encryption keys used to encrypt your data AWS KMS uses Hardware Security Modules (HSMs) to protect the security of your keys For customers who use encryption extensively and require strict control of their keys AWS KMS provides a convenient management option for creating and administering the keys used to encrypt yo ur data at rest For more information go here o KMS 
Service Integration – AWS KMS seamlessly integrates with Amazon EBS Amazon S3 Amazon RDS Amazon Redshift Amazon Elastic Transcoder Amazon WorkMail and Amazon EMR This integration means that you can use AWS KMS master encryption keys to encrypt the data you store with these services by simply selecting a check box in the AWS Management Console For more information go here o AWS CloudHSM Service – A service that helps you meet corporate contractual and regulatory compliance re quirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud AWS CloudHSM supports a variety of use cases and applications such as database encryption Digital Rights Management (DRM) and Public Key Infr astructure (PKI) including authentication and authorization document signing and transaction processing For more information go here System and Communication Protection and Information Integrity (Ref CJIS Policy Area 10) • AWS Virtual Private Cloud (VPC) – You can use VPC to connect existing infrastructure to a set of logically isolated AWS compute ArchivedAmazon Web Services – CJIS Compliance on AWS Page 15 resources via a Virtual Private Network (VPN) connection and to extend existing management capabilit ies such as security services firewalls and intrusion detection systems to include virtual resources built on AWS For more information go here • AWS Direct Connect (DX) – AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS For more information go here • Perfect Forward Secrecy – For even greater communication privacy several AWS services such as AWS Elastic Load Balancer and Amazon CloudFront offer newer stronger cipher suites SSL/TLS clients can use these cipher suites to use Perfect Forward Secrec y a technique that uses session keys that are ephemeral and not stored anywhere This prevents the decoding of captured data even if the secret long term key itself is compromised • Protect data in transit – You should implement SSL encryption on your server instances You will need a certificate from an external certification authority like VeriSign or Entrust The public key included in the certificate authenticates each session and serves as the basis for creating the shared session key used to encrypt the data AWS security engineers and solution architects have developed whitepapers and operational checklists to help you select the best options for your needs and recommend security best practices For example guidance on securely storing and rotating or changing secret keys and passwords Conclusion There are few key points to remember in supporting CJIS work loads: Security is a shared responsibility as AWS doesn't manage the customer environment or data this means you are responsible for implementing the applicable CJIS Security Policy requirements in your AWS environment over and above the AWS implementation of security requirements within the infrastructure Encryption of data in transit and at rest is critical AWS provides several "key" resources to help you achieve this imp ortant solution From Solutions Architect personnel available to assist you to our Encrypting Data at Rest Whitepaper as ArchivedAmazon Web Services – CJIS Compliance on AWS Page 16 well as multiple Encryption leading practices AWS strives to provide the resources you need to implement secure solutions AWS directly addresses the relevant CJIS Security Policy requirements applicable to the AWS infrastructure As AWS provides a self provisioned platform 
that customers wholly manage, AWS isn't directly subject to the CJIS Security Policy. However, we are absolutely committed to maintaining world class cloud security and compliance programs in support of our customer needs. AWS demonstrates compliance with applicable CJIS requirements as supported by our third party assessed frameworks (such as FedRAMP), incorporating on site data center audits by our FedRAMP accredited 3PAO. In the spirit of a shared responsibility philosophy, the AWS CJIS Requirements Matrix and the CJIS Security Policy Workbook (in a system security plan template) have been developed, which align to the CJIS Policy Areas. The Workbook is intended to support customers in systematically documenting their implementation of CJIS requirements alongside the AWS approach to each requirement (along with guidance on submitting the document for review and authorization). AWS provides multiple built in security features in support of CJIS workloads, such as:
• Secure access using AWS Identity and Access Management (IAM) with multi factor authentication
• Encrypted data storage with either AWS provided options or customer maintained options
• Logging and monitoring with Amazon S3 logging, AWS CloudTrail, Amazon CloudWatch, and AWS Trusted Advisor
• Centralized customer controlled key management with AWS CloudHSM and AWS Key Management Service (KMS)
Further Reading
For additional help see the following sources:
• AWS Compliance Center: http://aws.amazon.com/compliance
• AWS Security Center: http://aws.amazon.com/security
• AWS Security Resources: http://aws.amazon.com/security/security-resources
• FedRAMP FAQ: http://aws.amazon.com/compliance/fedramp-faqs/
• Risk and Compliance Whitepaper: https://d0.awsstatic.com/whitepapers/compliance/AWS_Risk_and_Compliance_Whitepaper.pdf
• Cloud Architecture Best Practices Whitepaper: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf
• AWS Products Overview: http://aws.amazon.com/products/
• AWS Sales and Business Development: https://aws.amazon.com/compliance/public-sector-contact/
Document Revisions
March 2017: Revised for CJIS Security Policy 5.5; combined the CJIS 5.4 Workbook and CJIS Whitepaper
July 2015: First publication
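To make the Media Protection guidance in this paper concrete, here is a minimal, hypothetical sketch of uploading an object to Amazon S3 with server-side encryption under an AWS KMS key, along the lines described for Policy Areas 8 and 10. The bucket name, object key, KMS key ARN, and the choice of the AWS GovCloud (US) region are illustrative assumptions, not requirements stated in the whitepaper.

```python
# Hypothetical sketch: store an object containing CJI with SSE-KMS.
# Bucket, object key, and KMS key ARN below are placeholders.
import boto3

# GovCloud is the recommended region for CJIS workloads; it requires
# separate AWS GovCloud (US) credentials.
s3 = boto3.client("s3", region_name="us-gov-west-1")

KMS_KEY_ARN = "arn:aws-us-gov:kms:us-gov-west-1:111122223333:key/example-key-id"

with open("incident-report.pdf", "rb") as body:
    s3.put_object(
        Bucket="example-cji-evidence-bucket",
        Key="cases/2017/incident-report.pdf",
        Body=body,
        ServerSideEncryption="aws:kms",   # AES-256 under a KMS master key
        SSEKMSKeyId=KMS_KEY_ARN,          # customer-managed key for stricter control
    )

# On retrieval, S3 decrypts transparently for principals whose IAM policies
# allow both s3:GetObject on the bucket and kms:Decrypt on the key.
```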
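Similarly, the Auditing and Accountability controls (Policy Area 4) can be bootstrapped with AWS CloudTrail. The sketch below is illustrative rather than an official configuration: the trail and bucket names are assumptions, and the destination bucket must already exist with a bucket policy that permits CloudTrail log delivery.

```python
# Hypothetical sketch: create a CloudTrail trail for API activity logging
# and start logging. Names and region are placeholders.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-gov-west-1")

cloudtrail.create_trail(
    Name="example-cjis-audit-trail",
    S3BucketName="example-cjis-audit-logs",
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,  # tamper-evident digest files for log integrity
)
cloudtrail.start_logging(Name="example-cjis-audit-trail")
```

The resulting API call history can then be monitored with Amazon CloudWatch alarms and Amazon SNS notifications, as outlined in the Auditing and Accountability section above.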
|
General
|
consultant
|
Best Practices
|
CrossDomain_Solutions_on_AWS
|
ArchivedCrossDomain Solutions on AWS December 2016 This paper has been archived For the latest technical content see https://docsawsamazoncom/whitepapers/latest/cross domainsolutions/welcomehtml Archived© 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 What is a CrossDomain Solution? 1 OneWay Transfer Device 1 Multidomain Data Guard 2 Traditional Deployment 2 How Is a CrossDomain Solution Different from Other Security Appliances? 3 When is a CrossDomain Solution Required? 4 Connecting OnPremises Infrastructure 4 Amazon VPC 4 AWS Direct Connect 5 Amazon EC2 5 Amazon S3 5 AWS Advantages for Secure Workloads 6 Cost 6 Elasticity 6 PurposeBuilt Infrastructure 6 Auditability 6 Security and Governance 7 Sample Architectures 7 Deploying a CDS via the Internet 7 Deploying a CDS via AWS Direct Connect 8 Deploying a CDS across Multiple Regions 9 Deploying a CDS in a Colocation Environment 11 Conclusion 11 Contributors 12 Further Reading 12 ArchivedNotes 12 ArchivedAbstract Many corporations government entities and institutions maintain multiple security domains as part of their information technology (IT) infrastructure For the purposes of this document a security domain is an environment with a set of resources accessible only by users or entities who have permitted access to those resources The resources are likely to include the resource’s network fabric as defined by the security domain’s policy Some organization’s users need to interact with multiple domains simultaneously or a system or user within one security domain needs to communicate directly or obtain data from a system or user in a separate security domain For security domains with highly sensitive data a crossdomain solution (CDS) can be deployed to allow data transfer between security domains while ensuring integrity of the domain’s security perimeter ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 1 Introduction To control access across security domains it’s common to employ a specialized hardware solution such as a crossdomain solution (CDS ) to manage and control the interactions between two security boundaries When security domains extend across data centers or expand into the cloud you can encounter additional challenges when including the hardware solution you want in your architecture You are not limited to any particular vendor solution to deploy a CDS on the AWS Cloud However one challenge is that you cannot place your own hardware within an AWS data center This requirement is part of the AWS commitment to maintain security within AWS data centers This whitepaper provides best practices for designing hybrid architectures where AWS services are incorporated as one or 
more security domains within a multidomain environment What is a CrossDomain Solution? The Committee on National Security Systems (CNSS) defines a CDS as a form of controlled interface that enables manual or automatic access or transfer of information between different security domains Two types of CDS are discussed in this whitepaper a o neway transfer (OWT) device and a multidomain data guard OneWay Transfer Device An OWT device allows data to flow in a single direction from one security domain to another A common implementation of an OWT device uses a fiber optic cable To ensure data flows only in one direction the OWT uses a single optical transmitter The optical transmitter is placed on only one end of the fiber optic cable (eg data producer) and the optical receiver is placed on the opposite end (eg data consumer) OWT devices are often referred to as diodes due to their ability to transfer data only in one direction similar to the semiconductor of the same name ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 2 Multidomain Data Guard A multidomain data guard enables bidirectional data flow between security domains A common implementation of a multidomain data guard is a single server running a trusted hardened operating system with multiple network interface cards (NICs) Each NIC provides a physical demarcation for a single security domain The multidomain data guard inspects all data transmitted between domains to ensure the data remains in compliance with a unique rule set that is specific to the guard’s deployment Traditional Deployment Figure 1 shows a traditional crossdomain solution deployment between two security domains Security Domain “A” is connected to Security Domain “B” using a CDS If the CDS is an OWT device resources deployed in Network “A” can communicate to resources deployed in Network “B” by sending data via the CDS If instead the CDS is a multi domain data guard resources in either security domain can communicate with the other security domain by sending data via the CDS In the following example the CDS is administrated and also physically located within the protections of Security Domain “B” ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 3 Figure 1: Traditional CDS deployment How Is a CrossDomain Solution Differ ent from Other Security Appliances? A CDS differs from other security appliances such as firewalls web application firewalls (WAFs) and intrusion detection or prevention systems In addition to providing physical network and logical isolation between domains cross domain solutions offer additional security mechanisms such as virus scanning auditing and logging and deep content inspection in a single solution In Security Domain “A” Network “A” Security Domain “B” Network “B” Cross Domain Solution ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 4 combination when the CDS is included in a larger security program these capabilities help prevent both exploitation and data leakage When is a CrossDomain Solution Required? 
A business decision to employ a CDS should evaluate the high cost of ownership involved with integration procurement and maintenance Be aware that a high degree of customization is often required for each individual CDS deployment You would often deploy a CDS due to regulatory or policy requirements or in situations where a data breach would be catastrophic to your organization Because of these reasons the CDS is an integral component of the architecture and may even be required to achieve an Authority to Operate (ATO) from your organization’s security and compliance program Once an ATO is achieved it can be cumbersome to make changes to a CDS configuration (eg alter the message rule set) without affecting the ATO ’s approval If these drawbacks outweigh the additional security provided by a CDS you should consider other options such as WAF s Connecting OnPremises Infrastructure AWS provides service offerings to help you connect your existing onpremises infrastructures The following sections describe some o f the key services that AWS offers including: Amazon Virtual Private Cloud (Amazon VPC) AWS Direct Connect Amazon Elastic Compute Cloud (Amazon EC2 ) and Amazon Simple Storage Service (Amazon S3) Amazon VPC Amazon VPC lets you provision a logically isolated section of your AWS environment so that you can launch resources in a virtual network you define You have complete control over your virtual networking environment including the selection of your own IP address range creation of subnets and configuration of route tables and network gateways The network configuration for a VPC is ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 5 easily customized using multiple layers of security including security groups and network access control lists The security layers control access to Amazon EC2 instances in each subnet Additionally you can create a hardware Virtual Private Network (VPN) connection between your corporate data center and your VPC and leverage AWS as an extension of your corporate data center AWS Direct Connect Using Direct Connect you can establish private connectivity between AWS and your data center office or colocation environment Direct Connect enables you to establish a dedicated network connection between your network and one of the Direct Connect locations Using industry standard 8021q VLANs this dedicated connection can be partitioned into multiple virtual interfaces This enables you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space and private resources such as Amazon EC2 instances running within Amazon VPC using private IP address space while maintaining network separation between the public and private environments You can reconfigure virtual interfaces at any time to meet your changing needs Amazon EC2 Amazon EC2 is a web service that provides resizable compute capacity in the cloud It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment Amazon S3 Amazon S3 provides costeffective object storage for a wide variety of use cases including cloud applications content distribution backup and archiving disaster recovery and big data analytics Objects stored in Amazon S3 can be protected in transit by using SSL or clientside encryption Data at rest in Amazon S3 can be protected by using serverside encryption (you request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it 
when you download the objects) and/or using clientside encryption (you encrypt data clientside and then upload the data to Amazon S3) Using clientside encryption you manage the encryption process the encryption keys and related tools ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 6 AWS Advantages for Secure Workloads The AWS Cloud provides several advantages if you want to deploy secure workloads using a CDS Cost Pay only for the storage and compute consumed for your workloads Amazon S3 offers multiple storage classes you can use to control the cost of storage objec ts based on the frequency and availability required at the object level Eliminate the costs associated with data duplication data fragmentation system maintenance and upgrades Provision compute resources for specific jobs and stop paying for the comp ute resources when the jobs are complete Elasticity Scale as workload volumes increase and decrease paying only for what you use Eliminate large capital expenditures by no longer guessing what levels of storage and compute are required for your workloads Scaling resources is not limited to just meeting demand Workload owners can also leverage the scalability value of AWS by scaling up compute resources for timesensitive jobs PurposeBuilt Infrastructure You tailor AWS purposebuilt tools to your requirements and scaling and audit objectives in addition to supporting realtime verification and reporting through the use of internal tools such as AWS CloudTrail1 AWS Config2 and Amazon CloudWatch3 These tools are built to help you maximize the protection of your services data and applications This means as an AWS customer you can spend less time on routine security and audit tasks and focus on proactive measures that can continue to enhance security and audit capabilities of your AWS environment Auditability AWS manages the underlying infrastructure and you manage the security of anything you deploy in AWS As a modern platform AWS enables you to ArchivedAmazon Web Services – CrossDomain Solutions: OnPremises to AWS Page 7 formalize the design of security as well as audit controls through reliable automated and verifiable technical and operational processes that are built into every AWS customer account The cloud simplifies system use for administrators and those running IT and makes your AWS environment much simpler to audit sample testing as AWS can shift audits toward a 100 percent verification versu s traditional sample testing Security and Governance AWS Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud As systems are built on top of AWS Cloud infrastructure compliance responsibilities are shared By tying together governancefocused auditfriendly service features with applicable compliance or audit standards AWS Compliance enablers build on traditional programs This helps you establish and operate in an AWS security control environment The IT infrastructure that AWS provides is designed and managed in alignment with security best practices and numerous security accreditations Sample Architectures You can set up your CDS in many ways The following examples describe some of the more common architectures in use Deploying a CDS via the Internet Figure 2 shows two onpremises customer networks that are connected by a CDS using the traditional deployment as shown earlier in Figure 1 In this configuration Security Domain “A” is extended to provide connectivity to an Amazon VPC in the AWS 
Sample Architectures

You can set up your CDS in many ways. The following examples describe some of the more common architectures in use.

Deploying a CDS via the Internet

Figure 2 shows two on-premises customer networks that are connected by a CDS using the traditional deployment as shown earlier in Figure 1. In this configuration, Security Domain “A” is extended to provide connectivity to an Amazon VPC in the AWS Cloud, while Security Domain “B” exists solely within the customer's data center.

Figure 2: Deploying a CDS via the Internet

The customer is using the Internet as a WAN to connect to the Amazon VPC. A secure IPsec tunnel encapsulates data crossing the Internet between on-premises infrastructure and the customer's VPC. Additional security mechanisms, such as a WAF or an intrusion detection system (IDS), can be deployed within Security Domain “A” for added protection from Internet-facing systems. Because Amazon VPC is an extension of Security Domain “A”, Amazon EC2 instances launched within Amazon VPC can communicate with resources in Security Domain “B” via the CDS.

Deploying a CDS via AWS Direct Connect

Figure 3 shows a similar deployment to Figure 2, but Direct Connect is used instead of the Internet to provide the WAN connectivity for extending Security Domain “A” to Amazon VPC.

Figure 3: Deploying a CDS via Direct Connect

Direct Connect gives you greater control and visibility of the WAN network path required to connect to Amazon VPC. Using Direct Connect also reduces the threat vector posed by the Internet: all data flowing between your data center and AWS Regions does so across your procured communication links.

Deploying a CDS across Multiple Regions

Figure 4 shows two individual security domains connected to two separate AWS Regions. As shown earlier in Figure 3, the security domains are extended by using a combination of Direct Connect and a secure IPsec VPN tunnel. All data flowing between the security domains flows from AWS to the customer's data center first, where it is inspected by the CDS before flowing back to AWS.

Figure 4: Deploying a CDS across multiple regions

You should implement a multi-region deployment when the unique capabilities of an individual AWS Region apply to only a single security domain. For example, an entity might choose to provision an Amazon Redshift data warehouse in one of the AWS Regions in the European Union (EU) to comply with data locality requirements, while also maintaining a production data processing cluster in a US-based region to comply with FedRAMP requirements. Even though these two systems are deployed in separate geographic locations to comply with separate compliance programs and regulations, they still might have a requirement to communicate and share an approved subset of data. Deploying a CDS between these two security domains might be an acceptable way to share data while maintaining the integrity of the security domain's boundary.
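To make the multi-region scenario above more concrete, the sketch below pins one boto3 client to an EU Region for the Redshift warehouse and another to a US Region for the processing cluster, then lists what is running in each. The Region names, and the assumption that the processing cluster is an Amazon EMR cluster, are illustrative only and are not taken from this whitepaper.

import boto3

# Hypothetical Region split: the EU Region holds the Redshift data warehouse and
# the US Region hosts the production data processing cluster.
eu_redshift = boto3.client("redshift", region_name="eu-west-1")
us_emr = boto3.client("emr", region_name="us-east-1")

# Enumerate the warehouse clusters in the EU security domain's Region.
for cluster in eu_redshift.describe_clusters().get("Clusters", []):
    print("EU warehouse:", cluster["ClusterIdentifier"], cluster["ClusterStatus"])

# Enumerate the active processing clusters in the US security domain's Region.
us_clusters = us_emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])
for cluster in us_clusters.get("Clusters", []):
    print("US processing:", cluster["Id"], cluster["Status"]["State"])

Note that nothing in these SDK calls moves data between the two domains; any approved data exchange would still traverse the on-premises CDS, as shown in Figure 4.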
Deploying a CDS in a Colocation Environment

Figure 5 depicts an additional potential configuration using space at colocation environments. In Figure 5, the CDS is still deployed in a customer-controlled area that is leased from the colocation facility provider. Figure 5 shows a fully off-premises implementation that includes a CDS.

Figure 5: Deploying a CDS in a colocation environment

Conclusion

Organizations with workloads across multiple security domains can leverage all the benefits that AWS services offer by using Direct Connect, VPN, cross-domain hardware, and a colocation facility. Organizations can select the hardware needed to meet their security domain transfer requirements and extend resources that live in other AWS Regions or on-premises locations. In addition to the ability to connect resources across security domains, AWS offers a wide variety of tools that you and your organization can leverage to meet the security and compliance requirements of workloads hosted within AWS.

Contributors

The following individuals and organizations contributed to this document:

Andrew Lieberthal, Solutions Architect, AWS Public Sector Sales

Further Reading

For additional help, please consult the following sources:

Amazon VPC Network Connectivity Options4
AWS Security Best Practices5
Intro to AWS Security6
Overview of AWS7

Notes

1 https://aws.amazon.com/cloudtrail/
2 http://aws.amazon.com/config
3 http://aws.amazon.com/cloudwatch
4 http://media.amazonwebservices.com/AWS_Amazon_VPC_Connectivity_Options.pdf
5 http://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf
6 https://d0.awsstatic.com/whitepapers/Security/Intro_to_AWS_Security.pdf
7 http://d0.awsstatic.com/whitepapers/aws-overview.pdf
CSA Consensus Assessments Initiative Questionnaire (CAIQ)

May 2022

Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2022 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents

Introduction
CSA Consensus Assessments Initiative Questionnaire
Further Reading
Document Revisions

Abstract

The CSA Consensus Assessments Initiative Questionnaire provides a set of questions the CSA anticipates a cloud consumer and/or a cloud auditor would ask of a cloud provider. It provides a series of security control and process questions which can then be used for a wide range of uses, including cloud provider selection and security evaluation. AWS has completed this questionnaire with the answers below. The questionnaire has been completed using the current CSA CAIQ standard v4.0.2 (06/07/2021 update).

Introduction

The Cloud Security Alliance (CSA) is a "not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing and to provide education on the uses of Cloud Computing to help secure all other forms of computing." For more information, see https://cloudsecurityalliance.org/about/. A wide range of industry security practitioners, corporations, and associations participate in this organization to achieve its mission.

CSA Consensus Assessments Initiative Questionnaire

Question ID | Question | CSP CAIQ Answer | SSRM Control Ownership | CSP Implementation Description (Optional/Recommended) | CSC Responsibilities (Optional/Recommended) | CCM Control ID | CCM Control Specification | CCM Control Title | CCM Domain Title

A&A 011 Are audit and assurance policies procedures and standards established documented approved communicated applied evaluated and maintained? Yes CSPowned AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer content. A&A01 Establish document approve communicate apply evaluate and maintain audit and assurance policies and procedures and standards Review and update the policies and procedures at least annually Audit and Assurance Policy and Procedures Audit & Assurance A&A 012 Are audit and assurance policies procedures and standards reviewed and updated at least annually?
Yes CSPowned Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis. A&A01 Establish document approve communicate apply evaluate and maintain audit and assurance policies and procedures and standards Review and update the policies and procedures at least annually Audit and Assurance Policy and Procedures Audit & Assurance A&A 021 Are independent audit and assurance assessments conducted according to relevant standards at least annually? Yes CSPowned AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment. Internal and external audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standards-based criteria like ISO/IEC 27001 and to identify improvement opportunities. Compliance reports from these assessments are made available to customers, enabling them to evaluate AWS. You can access assessments in AWS Artifact: https://aws.amazon.com/artifact The AWS Compliance reports identify the scope of AWS services and regions assessed, as well as the assessor's attestation of compliance. Customers can perform vendor or supplier evaluations by leveraging these reports and certifications. A&A02 Conduct independent audit and assurance assessments according to relevant standards at least annually Independent Assessments Audit & Assurance A&A 031 Are independent audit and assurance assessments performed according to risk based plans and policies? Yes CSPowned AWS internal and external audit and assurance functions use risk-based plans and approaches to conduct assessments at least annually. The AWS Compliance program covers sections including but not limited to assessment methodology, security assessment and results, and nonconforming controls. A&A03 Perform independent audit and assurance assessments according to riskbased plans and policies Risk Based Planning Assessment Audit & Assurance A&A 041 Is compliance verified regarding all relevant standards regulations legal/contractual and statutory requirements applicable to the audit? Yes CSPowned AWS maintains Security, Governance, Risk, and Compliance relationships with internal and external parties to verify and monitor legal, regulatory, and contractual requirements. Should a new security directive be issued, AWS has documented plans in place to implement that directive within designated timeframes. A&A04 Verify compliance with all relevant standards regulations legal/contractual and statutory requirements applicable to the audit Requirements Compliance Audit & Assurance A&A 051 Is an audit management process defined and implemented to support audit planning risk analysis security control assessments conclusions remediation schedules report generation and reviews of past reports and supporting evidence?
Yes CSPowned Internal and external audits are planned and performed according to the documented audit scheduled to review the continued performance of AWS against standardsbased criteria and to identify general improvement opportunities Standardsbased criteria includes but is not limited to the ISO/IEC 27001 Federal Risk and Authorization Management Program (FedRAMP) the American Institute of Certified Public Accountants (AICPA): AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] 16) and the International Standards for Assurance Engagements No3402 (ISAE 3402) professional standards A&A05 Define and implement an Audit Management process to support audit planning risk analysis security control assessment conclusion remediation schedules report generation and review of past reports and supporting evidence Audit Management Process Audit & Assurance A&A 061 Is a riskbased corrective action plan to remediate audit findings established documented approved communicated applied evaluated and maintained? Yes CSPowned In alignment with ISO 27001 AWS maintains a Risk Management program to mitigate and manage risk AWS management has a strategic business plan which includes risk identification and the implementation of controls to mitigate or manage risks AWS management re evaluates the strategic business plan at least biannually This process requires management to identify risks within its areas of responsibility and to implement appropriate measures designed to address those risks A&A06 Establish document approve communicate apply evaluate and maintain a riskbased corrective action plan to remediate audit findings review and report remediation status to relevant stakeholders Remediation Audit & Assurance Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title A&A 062 Is the remediation status of audit findings reviewed and reported to relevant stakeholders? 
Yes CSPowned AWS has established a formal audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment Internal and external audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standardsbased criteria like the ISO/IEC 27001 and to identify improvement opportunities External audits are planned and performed according to a documented audit schedule to review the continued performance of AWS against standardsbased criteria and to identify improvement opportunities Standardsbased criteria include but are not limited to Federal Risk and Authorization Management Program (FedRAMP) the American Institute of Certified Public Accountants (AICPA): AT 801 (formerly Statement on Standards for Attestation Engagements [SSAE] 18) the International Standards for Assurance Engagements No3402 (ISAE 3402) professional standards and the Payment Card Industry Data Security standard PCI DSS 321 Compliance reports from these assessments are made available to customers enabling them to evaluate AWS You can access assessments in AWS Artifact: https://awsamazoncom/artifact The AWS Compliance reports identify the scope of AWS services and regions assessed as well the assessor’s attestation of compliance Customers can perform vendor or supplier evaluations by leveraging these reports and certifications A&A06 Establish document approve communicate apply evaluate and maintain a riskbased corrective action plan to remediate audit findings review and report remediation status to relevant stakeholders Remediation Audit & Assurance Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title AIS 011 Are application security policies and procedures established documented approved communicated applied evaluated and maintained to guide appropriate planning delivery and support of the organization's application security capabilities? Yes CSPowned AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality integrity and availability of customers’ systems and content Maintaining customer trust and confidence is of the utmost importance to AWS AWS works to comply with applicable federal state and local laws statutes ordinances and regulations concerning security privacy and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer content AIS01 Establish document approve communicate apply evaluate and maintain policies and procedures for application security to provide guidance to the appropriate planning delivery and support of the organization's application security capabilities Review and update the policies and procedures at least annually Application and Interface Security Policy and Procedures Application & Interface Security AIS 012 Are application security policies and procedures reviewed and updated at least annually? 
Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis AIS01 Establish document approve communicate apply evaluate and maintain policies and procedures for application security to provide guidance to the appropriate planning delivery and support of the organization's application security capabilities Review and update the policies and procedures at least annually Application and Interface Security Policy and Procedures Application & Interface Security Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title AIS 021 Are baseline requirements to secure different applications established documented and maintained? Yes CSPowned AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure the quality and security requirements are met with each release The design of new services or any significant changes to current services follow secure software development practices and are controlled through a project management system with multidisciplinary participation Prior to launch each of the following requirements must be reviewed: • Security Risk Assessment • Threat modeling • Security design reviews • Secure code reviews • Security testing • Vulnerability/penetration testing AIS02 Establish document and maintain baseline requirements for securing different applications Application Security Baseline Requirement s Application & Interface Security AIS 031 Are technical and operational metrics defined and implemented according to business objectives security requirements and compliance obligations? Yes CSCowned See response to Question ID AIS021 AIS03 Define and implement technical and operational metrics in alignment with business objectives security requirements and compliance obligations Application Security Metrics Application & Interface Security AIS 041 Is an SDLC process defined and implemented for application design development deployment and operation per organizationally designed security requirements? Yes CSPowned See response to Question ID AIS021 AIS04 Define and implement a SDLC process for application design development deployment and operation in accordance with security requirements defined by the organization Secure Application Design and Developmen t Application & Interface Security Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title AIS 051 Does the testing strategy outline criteria to accept new information systems upgrades and new versions while ensuring application security compliance adherence and organizational speed of delivery goals? Yes CSPowned See response to Question ID AIS021 AIS05 Implement a testing strategy including criteria for acceptance of new information systems upgrades and new versions which provides application security assurance and maintains compliance while enabling organizational speed of delivery goals Automate when applicable and possible Automated Application Security Testing Application & Interface Security AIS 052 Is testing automated when applicable and possible? 
Yes CSPowned Where appropriate a continuous deployment methodology is conducted to ensure changes are automatically built tested and pushed to production with the goal of eliminating as many manual steps as possible Continuous deployment seeks to eliminate the manual nature of this process and automate each step allowing service teams to standardize the process and increase the efficiency with which they deploy code In continuous deployment an entire release process is a "pipeline" containing "stages” AIS05 Implement a testing strategy including criteria for acceptance of new information systems upgrades and new versions which provides application security assurance and maintains compliance while enabling organizational speed of delivery goals Automate when applicable and possible Automated Application Security Testing Application & Interface Security AIS 061 Are strategies and capabilities established and implemented to deploy application code in a secure standardized and compliant manner? Yes CSPowned Where appropriate a continuous deployment methodology is conducted to ensure changes are automatically built tested and pushed to production with the goal of eliminating as many manual steps as possible Continuous deployment seeks to eliminate the manual nature of this process and automate each step allowing service teams to standardize the process and increase the efficiency with which they deploy code In continuous deployment an entire release process is a "pipeline" containing "stages” AIS06 Establish and implement strategies and capabilities for secure standardized and compliant application deployment Automate where possible Automated Secure Application Deployment Application & Interface Security Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title AIS 062 Is the deployment and integration of application code automated where possible? Yes CSPowned Automated code analysis tools are run as a part of the AWS Software Development Lifecycle and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Refer to the AWS Overview of Security Processes for further details That whitepaper is located here https://d1awsstaticcom/whitepapers/Security /AWS_Security_Whitepaperpdf AIS06 Establish and implement strategies and capabilities for secure standardized and compliant application deployment Automate where possible Automated Secure Application Deployment Application & Interface Security AIS 071 Are application security vulnerabilities remediated following defined processes? 
Yes CSPowned Static code analysis tools are run as a part of the standard build process and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Refer to the Best Practices for Security Identity & Compliance website for further details https://awsamazoncom/architecture/security identitycompliance/?cardsallsort by=itemadditionalFieldssortDate&cards allsortorder=desc&awsfcontent type=*all&awsfmethodology=*all AIS07 Define and implement a process to remediate application security vulnerabilities automating remediation when possible Application Vulnerability Remediation Application & Interface Security AIS 072 Is the remediation of application security vulnerabilities automated when possible? Yes CSPowned Automated code analysis tools are run as a part of the AWS Software Development Lifecycle and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts Our security risk assessment reviews begin during the design phase and the engagement lasts through launch to ongoing operations Refer to the Best Practices for Security Identity & Compliance website for further details https://awsamazoncom/architecture/security identitycompliance/?cardsallsort by=itemadditionalFieldssortDate&cards allsortorder=desc&awsfcontent type=*all&awsfmethodology=*all AIS07 Define and implement a process to remediate application security vulnerabilities automating remediation when possible Application Vulnerability Remediation Application & Interface Security Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title BCR 011 Are business continuity management and operational resilience policies and procedures established documented approved communicated applied evaluated and maintained? Yes CSPowned The AWS business continuity policy is designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts which include • Activation and Notification • Recovery and • Reconstitution Phase AWS business continuity mechanisms are designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts AWS resiliency encompasses the processes and procedures to identify respond to and recover from a major event or incident within our environment BCR01 Establish document approve communicate apply evaluate and maintain business continuity management and operational resilience policies and procedures Review and update the policies and procedures at least annually Business Continuity Management Policy and Procedures Business Continuity Management and Operational Resilience BCR 012 Are the policies and procedures reviewed and updated at least annually? 
Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis BCR01 Establish document approve communicate apply evaluate and maintain business continuity management and operational resilience policies and procedures Review and update the policies and procedures at least annually Business Continuity Management Policy and Procedures Business Continuity Management and Operational Resilience BCR 021 Are criteria for developing business continuity and operational resiliency strategies and capabilities established based on business disruption and risk impacts? Yes Shared CSP and CSC AWS Business Continuity Policies and Plans have been developed and tested in alignment with ISO 27001 standards Refer to ISO 27001 standard annex A domain 17 for further details on AWS and business continuity BCR02 Determine the impact of business disruptions and risks to establish criteria for developing business continuity and operational resilience strategies and capabilities Risk Assessment and Impact Analysis Business Continuity Management and Operational Resilience BCR 031 Are strategies developed to reduce the impact of withstand and recover from business disruptions in accordance with risk appetite? Yes Shared CSP and CSC AWS Business Continuity Policies and Plans have been developed and tested in alignment with ISO 27001 standards Refer to ISO 27001 standard annex A domain 17 for further details on AWS and business continuity BCR03 Establish strategies to reduce the impact of withstand and recover from business disruptions within risk appetite Business Continuity Strategy Business Continuity Management and Operational Resilience Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title BCR 041 Are operational resilience strategies and capability results incorporated to establish document approve communicate apply evaluate and maintain a business continuity plan? Yes Shared CSP and CSC AWS Business Continuity Policies and Plans have been developed and tested in alignment with ISO 27001 standards Refer to ISO 27001 standard annex A domain 17 for further details on AWS and business continuity BCR04 Establish document approve communicate apply evaluate and maintain a business continuity plan based on the results of the operational resilience strategies and capabilities Business Continuity Planning Business Continuity Management and Operational Resilience BCR 051 Is relevant documentation developed identified and acquired to support business continuity and operational resilience plans? 
Yes CSPowned The AWS business continuity plan details the threephased approach that AWS has developed to recover and reconstitute the AWS infrastructure: • Activation and Notification Phase • Recovery Phase • Reconstitution Phase This approach ensures that AWS performs system recovery and reconstitution efforts in a methodical sequence maximizing the effectiveness of the recovery and reconstitution efforts and minimizing system outage time due to errors and omissions BCR05 Develop identify and acquire documentation that is relevant to support the business continuity and operational resilience programs Make the documentation available to authorized stakeholders and review periodically Documentati on Business Continuity Management and Operational Resilience BCR 052 Is business continuity and operational resilience documentation available to authorized stakeholders? Yes CSPowned Information System Documentation is made available internally to AWS personnel through the use of Amazon's Intranet site Refer to ISO 27001 Appendix A Domain 12 BCR05 Develop identify and acquire documentation that is relevant to support the business continuity and operational resilience programs Make the documentation available to authorized stakeholders and review periodically Documentati on Business Continuity Management and Operational Resilience BCR 053 Is business continuity and operational resilience documentation reviewed periodically? Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis BCR05 Develop identify and acquire documentation that is relevant to support the business continuity and operational resilience programs Make the documentation available to authorized stakeholders and review periodically Documentati on Business Continuity Management and Operational Resilience Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title BCR 061 Are the business continuity and operational resilience plans exercised and tested at least annually and when significant changes occur? Yes CSPowned AWS Business Continuity Policies and Plans have been developed and tested at least annually in alignment with ISO 27001 standards Refer to ISO 27001 standard annex A domain 17 for further details on AWS and business continuity at least annually BCR06 Exercise and test business continuity and operational resilience plans at least annually or upon significant changes Business Continuity Exercises Business Continuity Management and Operational Resilience BCR 071 Do business continuity and resilience procedures establish communication with stakeholders and participants? 
Yes CSPowned The AWS Business Continuity policy provides a complete discussion of AWS services roles and responsibilities and AWS processes for managing an outage from detection to deactivation AWS Service teams create administrator documentation for their services and store the documents in internal AWS document repositories Using these documents teams provide initial training to new team members that covers their job duties oncall responsibilities service specific monitoring metrics and alarms along with the intricacies of the service they are supporting Once trained service team members can assume oncall duties and be paged into an engagement as a resolver In addition to the documentation stored in the repository AWS also uses GameDay Exercises to train coordinators and Service Teams in their roles and responsibilities BCR07 Establish communication with stakeholders and participants in the course of business continuity and resilience procedures Communicat ion Business Continuity Management and Operational Resilience Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title BCR 081 Is cloud data periodically backed up? Yes Shared CSP and CSC AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services Critical AWS system components including audit evidence and logging records are replicated across multiple Availability Zones and backups are maintained and monitored Customers retain control and ownership of their content When customers store content in a specific region it is not replicated outside that region It is the customer's responsibility to replicate content across regions if business needs require that Backup and retention policies are the responsibility of the customer AWS offers best practice resources to customers including guidance and alignment to the Well Architected Framework Snapshots are AWS objects to which IAM users groups and roles can be assigned permissions so that only authorized users can access Amazon backups AWS Backup allows customers to centrally manage and automate backups across AWS services The service enables customers to centralize and automate data protection across AWS services For additional details refer to https://awsamazoncom /backup BCR08 Periodically backup data stored in the cloud Ensure the confidentiality integrity and availability of the backup and verify data restoration from backup for resiliency Backup Business Continuity Management and Operational Resilience BCR 082 Is the confidentiality integrity and availability of backup data ensured? Yes Shared CSP and CSC See response to Question ID BCR081 BCR08 Periodically backup data stored in the cloud Ensure the confidentiality integrity and availability of the backup and verify data restoration from backup for resiliency Backup Business Continuity Management and Operational Resilience Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title BCR 083 Can backups be restored appropriately for resiliency? 
Yes CSCowned AWS Backup allows customers to centrally manage and automate backups across AWS services For additional details refer to https://awsamazoncom /backup BCR08 Periodically backup data stored in the cloud Ensure the confidentiality integrity and availability of the backup and verify data restoration from backup for resiliency Backup Business Continuity Management and Operational Resilience BCR 091 Is a disaster response plan established documented approved applied evaluated and maintained to ensure recovery from natural and manmade disasters? Yes Shared CSP and CSC The AWS business continuity policy is designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts which include • Activation and Notification • Recovery and • Reconstitution Phase AWS business continuity mechanisms are designed to ensure minimum outage time and maximum effectiveness of the recovery and reconstitution efforts AWS resiliency encompasses the processes and procedures to identify respond to and recover from a major event or incident within our environment AWS maintains a ubiquitous security control environment across its infrastructure Each data center is built to physical environmental and security standards in an activeactive configuration employing an n+1 redundancy model to ensure system availability in the event of component failure Components (N) have at least one independent backup component (+1) so the backup component is active in the operation even if other components are fully functional In order to eliminate single points of failure this model is applied throughout AWS including network and data center implementation Data centers are online and serving traffic; no data center is “cold” In case of failure there is sufficient capacity to enable traffic to be loadbalanced to the remaining sites AWS provides customers with the capability to implement a robust continuity plan including the utilization of frequent server instance backups data redundancy replication and the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each region Customers are responsible for properly implementing contingency planning training and testing for their systems hosted on AWS BCR09 Establish document approve communicate apply evaluate and maintain a disaster response plan to recover from natural and man made disasters Update the plan at least annually or upon significant changes Disaster Response Plan Business Continuity Management and Operational Resilience Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title BCR 092 Is the disaster response plan updated at least annually and when significant changes occur? Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis BCR09 Establish document approve communicate apply evaluate and maintain a disaster response plan to recover from natural and man made disasters Update the plan at least annually or upon significant changes Disaster Response Plan Business Continuity Management and Operational Resilience BCR 101 Is the disaster response plan exercised annually or when significant changes occur? 
Yes CSPowned AWS tests the business continuity at least annually to ensure effectiveness of the associated procedures and the organization readiness Testing consists of gameday exercises that execute on activities that would be performed in an actual outage AWS documents the results including lessons learned and any corrective actions that were completed BCR10 Exercise the disaster response plan annually or upon significant changes including if possible local emergency authorities Response Plan Exercise Business Continuity Management and Operational Resilience BCR 102 Are local emergency authorities included if possible in the exercise? No CSPowned BCR10 Exercise the disaster response plan annually or upon significant changes including if possible local emergency authorities Response Plan Exercise Business Continuity Management and Operational Resilience BCR 111 Is businesscritical equipment supplemented with redundant equipment independently located at a reasonable minimum distance in accordance with applicable industry standards? Yes CSPowned AWS maintains a ubiquitous security control environment across its infrastructure Each data center is built to physical environmental and security standards in an activeactive configuration employing an n+1 redundancy model to ensure system availability in the event of component failure Components (N) have at least one independent backup component (+1) so the backup component is active in the operation even if other components are fully functional In order to eliminate single points of failure this model is applied throughout AWS including network and data center implementation Data centers are online and serving traffic; no data center is “cold” In case of failure there is sufficient capacity to enable traffic to be loadbalanced to the remaining sites BCR11 Supplement business critical equipment with redundant equipment independently located at a reasonable minimum distance in accordance with applicable industry standards Equipment Redundancy Business Continuity Management and Operational Resilience Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CCC 011 Are risk management policies and procedures associated with changing organizational assets including applications systems infrastructure configuration etc established documented approved communicated applied evaluated and maintained (regardless of whether asset management is internal or external)? 
Yes CSPowned AWS applies a systematic approach to managing change to ensure that all changes to a production environment are reviewed tested and approved The AWS Change Management approach requires that the following steps be complete before a change is deployed to the production environment: 1 Document and communicate the change via the appropriate AWS change management tool 2 Plan implementation of the change and rollback procedures to minimize disruption 3 Test the change in a logically segregated nonproduction environment 4 Complete a peerreview of the change with a focus on business impact and technical rigor The review should include a code review 5 Attain approval for the change by an authorized individual Where appropriate a continuous deployment methodology is conducted to ensure changes are automatically built tested and pushed to production with the goal of eliminating as many manual steps as possible Continuous deployment seeks to eliminate the manual nature of this process and automate each step allowing service teams to standardize the process and increase the efficiency with which they deploy code In continuous deployment an entire release process is a "pipeline" containing "stages” CCC01 Establish document approve communicate apply evaluate and maintain policies and procedures for managing the risks associated with applying changes to organization assets including application systems infrastructure configuration etc regardless of whether the assets are managed internally or externally (ie outsourced) Review and update the policies and procedures at least annually Change Management Policy and Procedures Change Control and Configuration Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CCC 012 Are the policies and procedures reviewed and updated at least annually? Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis CCC01 Establish document approve communicate apply evaluate and maintain policies and procedures for managing the risks associated with applying changes to organization assets including application systems infrastructure configuration etc regardless of whether the assets are managed internally or externally (ie outsourced) Review and update the policies and procedures at least annually Change Management Policy and Procedures Change Control and Configuration Management CCC 021 Is a defined quality change control approval and testing process (with established baselines testing and release standards) followed? Yes CSPowned See response to Question ID CCC011 CCC02 Follow a defined quality change control approval and testing process with established baselines testing and release standards Quality Testing Change Control and Configuration Management CCC 031 Are risks associated with changing organizational assets (including applications systems infrastructure configuration etc) managed regardless of whether asset management occurs internally or externally (ie outsourced)? 
Yes CSPowned See response to Question ID CCC011 CCC03 Manage the risks associated with applying changes to organization assets including application systems infrastructure configuration etc regardless of whether the assets are managed internally or externally (ie outsourced) Change Management Technology Change Control and Configuration Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CCC 041 Is the unauthorized addition removal update and management of organization assets restricted? Yes CSPowned Authorized staff must pass two factor authentication a minimum of two times to access data center floors Physical access points to server locations are recorded by closed circuit television camera (CCTV) as defined in the AWS Data Center Physical Security Policy CCC04 Restrict the unauthorized addition removal update and management of organization assets Unauthorize d Change Protection Change Control and Configuration Management CCC 051 Are provisions to limit changes that directly impact CSCowned environments and require tenants to authorize requests explicitly included within the service level agreements (SLAs) between CSPs and CSCs? No CSPowned AWS notifies customers of changes to the AWS service offering in accordance with the commitment set forth in the AWS Customer Agreement AWS continuously evolves and improves our existing services and frequently adds new services Our services are controlled using APIs If we change or discontinue any API used to make calls to the services we will continue to offer the existing API for 12 months Additionally AWS maintains a public Service Health Dashboard to provide customers with the realtime operational status of our services at http://statusawsamazoncom/ CCC05 Include provisions limiting changes directly impacting CSCs owned environments/tenant s to explicitly authorized requests within service level agreements between CSPs and CSCs Change Agreements Change Control and Configuration Management CCC 061 Are change management baselines established for all relevant authorized changes on organizational assets? Yes CSPowned See response to Question ID CCC011 CCC06 Establish change management baselines for all relevant authorized changes on organization assets Change Management Baseline Change Control and Configuration Management CCC 071 Are detection measures implemented with proactive notification if changes deviate from established baselines? Yes CSPowned See response to Question ID CCC081 CCC07 Implement detection measures with proactive notification in case of changes deviating from the established baseline Detection of Baseline Deviation Change Control and Configuration Management CCC 081 Is a procedure implemented to manage exceptions including emergencies in the change and configuration process? 
Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis CCC08 'Implement a procedure for the management of exceptions including emergencies in the change and configuration process Align the procedure with the requirements of GRC04: Policy Exception Process' Exception Management Change Control and Configuration Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CCC 082 'Is the procedure aligned with the requirements of the GRC04: Policy Exception Process?' Yes CSPowned See response to Question ID CCC081 CCC08 'Implement a procedure for the management of exceptions including emergencies in the change and configuration process Align the procedure with the requirements of GRC04: Policy Exception Process' Exception Management Change Control and Configuration Management CCC 091 Is a process to proactively roll back changes to a previously known "good state" defined and implemented in case of errors or security concerns? Yes CSPowned See response to Question ID CCC011 CCC09 Define and implement a process to proactively roll back changes to a previous known good state in case of errors or security concerns Change Restoration Change Control and Configuration Management CEK 011 Are cryptography encryption and key management policies and procedures established documented approved communicated applied evaluated and maintained? Yes Shared CSP and CSC Internally AWS establishes and manages cryptographic keys for required cryptography employed within the AWS infrastructure AWS produces controls and distributes symmetric cryptographic keys using NIST approved key management technology and processes in the AWS information system An AWS developed secure key and credential manager is used to create protect and distribute symmetric keys AWS credentials needed on hosts RSA public/private keys and X509 Certifications AWS customers are responsible for managing encryption keys within their AWS environments Customers can leverage AWS services such as AWS KMS and CloudHSM to manage the lifecycle of their keys according to internal policy requirements See following: AWS KMS https://awsamazoncom /kms/ AWS CloudHSM https://awsamazoncom /cloudhsm/ CEK01 Establish document approve communicate apply evaluate and maintain policies and procedures for Cryptography Encryption and Key Management Review and update the policies and procedures at least annually Encryption and Key Management Policy and Procedures Cryptography Encryption & Key Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CEK 012 Are cryptography encryption and key management policies and procedures reviewed and updated at least annually? Yes CSPowned Policies are reviewed approved by AWS leadership at least annually or as needed basis CEK01 Establish document approve communicate apply evaluate and maintain policies and procedures for Cryptography Encryption and Key Management Review and update the policies and procedures at least annually Encryption and Key Management Policy and Procedures Cryptography Encryption & Key Management CEK 021 Are cryptography encryption and key management roles and responsibilities defined and implemented? 
Yes CSCowned See response to CEK011 CEK02 Define and implement cryptographic encryption and key management roles and responsibilities CEK Roles and Responsibiliti es Cryptography Encryption & Key Management CEK 031 Are data atrest and intransit cryptographically protected using cryptographic libraries certified to approved standards? NA CSCowned AWS allows customers to use their own encryption mechanisms (for storage and intransit) for nearly all the services including S3 EBS and EC2 IPSec tunnels to VPC are also encrypted In addition customers can leverage AWS Key Management Systems (KMS) to create and control encryption keys (refer to https://awsamazoncom /kms/) Refer to AWS SOC reports for more details on KMS Refer to AWS: Overview of Security Processes Whitepaper for additional details available at: http://awsamazoncom/ security/security learning/ CEK03 Provide cryptographic protection to data atrest and intransit using cryptographic libraries certified to approved standards Data Encryption Cryptography Encryption & Key Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CEK 041 Are appropriate data protection encryption algorithms used that consider data classification associated risks and encryption technology usability? NA CSCowned This is a customer responsibility AWS customers are responsible for the management of the data they place into AWS services AWS has no insight as to what type of content the customer chooses to store in AWS and the customer retains complete control of how they choose to classify their content where it is stored used and protected from disclosure CEK04 Use encryption algorithms that are appropriate for data protection considering the classification of data associated risks and usability of the encryption technology Encryption Algorithm Cryptography Encryption & Key Management CEK 051 Are standard change management procedures established to review approve implement and communicate cryptography encryption and key management technology changes that accommodate internal and external sources? Yes Shared CSP and CSC See response to CEK011 AWS customers are responsible for managing encryption keys within their AWS environments according to their internal policy requirements CEK05 Establish a standard change management procedure to accommodate changes from internal and external sources for review approval implementation and communication of cryptographic encryption and key management technology changes Encryption Change Management Cryptography Encryption & Key Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CEK 061 Are changes to cryptography encryption and key management related systems policies and procedures managed and adopted in a manner that fully accounts for downstream effects of proposed changes including residual risk cost and benefits analysis? 
Yes Shared CSP and CSC See response to CEK011 AWS allows customers to use their own encryption mechanisms for nearly all the services including S3 EBS and EC2 IPSec tunnels to VPC are also encrypted In addition customers can leverage AWS Key Management Systems (KMS) to create and control encryption keys (refer to https://awsamazoncom /kms/) Refer to AWS SOC reports for more details on KMS Refer to AWS: Overview of Security Processes Whitepaper for additional details available at: http://awsamazoncom/ security/security learning/ CEK06 Manage and adopt changes to cryptography encryption and key managementrelated systems (including policies and procedures) that fully account for downstream effects of proposed changes including residual risk cost and benefits analysis Encryption Change Cost Benefit Analysis Cryptography Encryption & Key Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CEK 071 Is a cryptography encryption and key management risk program established and maintained that includes risk assessment risk treatment risk context monitoring and feedback provisions? Yes CSPowned AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization AWS management reviews and evaluates the risks identified in the risk management program at least annually The risk management program encompasses the following phases: Discovery – The discovery phase includes listing out risks (threats and vulnerabilities) that exist in the environment This phase provides a basis for all other risk management activities Research – The research phase considers the potential impact(s) of identified risks to the business and its likelihood of occurrence and includes an evaluation of internal control effectiveness Evaluate – The evaluate phase includes ensuring controls processes and other physical and virtual safeguards in place to prevent and detect identified and assessed risks Resolve – The resolve phase results in risk reports provided to managers with the data they need to make effective business decisions and to comply with internal policies and applicable regulations Monitor – The monitor phase includes performing monitoring activities to evaluate whether processes initiatives functions and/or activities are mitigating the risk as designed CEK07 Establish and maintain an encryption and key management risk program that includes provisions for risk assessment risk treatment risk context monitoring and feedback Encryption Risk Management Cryptography Encryption & Key Management Questi on Question CSP CAIQ Answer SSRM Control Ownership CSP Implementation Description (Optional/Recommended) CSC Responsibilities (Optional/Recommen ded) CCM Control ID CCM Control Specification CCM Control Title CCM Domain Title CEK 081 Are CSPs providing CSCs with the capacity to manage their own data encryption keys? 
CAIQ Answer: Yes. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-08, CSC Key Management Capability (Cryptography, Encryption & Key Management). Specification: CSPs must provide the capability for CSCs to manage their own data encryption keys.

CEK 09.1: Are encryption and key management systems, policies, and processes audited with a frequency proportional to the system's risk exposure and after any security event?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS has established a formal periodic audit program that includes continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM Control: CEK-09, Encryption and Key Management Audit (Cryptography, Encryption & Key Management). Specification: Audit encryption and key management systems, policies, and processes with a frequency that is proportional to the risk exposure of the system, with audit occurring preferably continuously but at least annually and after any security event(s).

CEK 09.2: Are encryption and key management systems, policies, and processes audited (preferably continuously but at least annually)?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS has established a formal periodic audit program that includes continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM Control: CEK-09, Encryption and Key Management Audit (Cryptography, Encryption & Key Management). Specification: Audit encryption and key management systems, policies, and processes with a frequency that is proportional to the risk exposure of the system, with audit occurring preferably continuously but at least annually and after any security event(s).
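As a concrete illustration of the customer key management capability referenced in CEK 08.1, the sketch below creates a customer-managed KMS key and attaches an alias using boto3. It is a minimal, assumed example; the description, tag values, and alias name are hypothetical and not part of this questionnaire.

```python
# Minimal sketch: creating and aliasing a customer-managed KMS key.
# The description, tag values, and alias below are hypothetical.
import boto3

kms = boto3.client("kms")

# Create a symmetric customer-managed key for encrypting application data.
key = kms.create_key(
    Description="Example data-at-rest key managed by the customer",
    KeyUsage="ENCRYPT_DECRYPT",
    Tags=[{"TagKey": "owner", "TagValue": "data-platform-team"}],
)
key_id = key["KeyMetadata"]["KeyId"]

# Attach a friendly alias so applications do not hard-code the key ID.
kms.create_alias(AliasName="alias/example-data-key", TargetKeyId=key_id)

print("Created key:", key_id)
```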
CEK 10.1: Are cryptographic keys generated using industry-accepted and approved cryptographic libraries that specify algorithm strength and random number generator specifications?
CAIQ Answer: Yes. SSRM Control Ownership: Shared CSP and CSC.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. AWS establishes and manages cryptographic keys for required cryptography employed within the AWS infrastructure. AWS produces, controls, and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect, and distribute symmetric keys, and is used to secure and distribute AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates. AWS cryptographic processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS, and ISO 27001.
CSC Responsibilities: AWS customers are responsible for managing encryption keys within their AWS environments according to their internal policy requirements.
CCM Control: CEK-10, Key Generation (Cryptography, Encryption & Key Management). Specification: Generate cryptographic keys using industry-accepted cryptographic libraries, specifying the algorithm strength and the random number generator used.

CEK 11.1: Are private keys provisioned for a unique purpose managed, and is cryptography secret?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: Customers determine whether they want to leverage AWS KMS to store encryption keys in the cloud, or use other mechanisms (on-premises HSMs or other key management technologies) to store keys within their on-premises environments.
CCM Control: CEK-11, Key Purpose (Cryptography, Encryption & Key Management). Specification: Manage cryptographic secret and private keys that are provisioned for a unique purpose.

CEK 12.1: Are cryptographic keys rotated based on a cryptoperiod calculated while considering information disclosure risks and legal and regulatory requirements?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-12, Key Rotation (Cryptography, Encryption & Key Management). Specification: Rotate cryptographic keys in accordance with the calculated cryptoperiod, which includes provisions for considering the risk of information disclosure and legal and regulatory requirements.
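For the customer-side rotation responsibility discussed under CEK 12.1, the sketch below shows one way a customer might enable automatic rotation on a customer-managed KMS key and verify its status with boto3. The key alias is a hypothetical placeholder, and the chosen rotation approach and cryptoperiod remain the customer's decision.

```python
# Minimal sketch: turning on automatic rotation for a customer-managed
# KMS key and checking its rotation status. The alias is hypothetical.
import boto3

kms = boto3.client("kms")

# Resolve the alias to a key ID (rotation APIs require a key ID or ARN).
key_id = kms.describe_key(KeyId="alias/example-data-key")["KeyMetadata"]["KeyId"]

# Once enabled, KMS rotates the underlying key material on a schedule;
# the key ID and alias stay the same, so applications are unaffected.
kms.enable_key_rotation(KeyId=key_id)

status = kms.get_key_rotation_status(KeyId=key_id)
print("Rotation enabled:", status["KeyRotationEnabled"])
```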
CEK 13.1: Are cryptographic keys revoked and removed before the end of the established cryptoperiod (when a key is compromised or an entity is no longer part of the organization) per defined, implemented, and evaluated processes, procedures, and technical measures that include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-13, Key Revocation (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to revoke and remove cryptographic keys prior to the end of their established cryptoperiod, when a key is compromised, or when an entity is no longer part of the organization, which include provisions for legal and regulatory requirements.

CEK 14.1: Are processes, procedures, and technical measures to destroy unneeded keys defined, implemented, and evaluated to address key destruction outside secure environments and revocation of keys stored in hardware security modules (HSMs), and do they include applicable legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-14, Key Destruction (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to destroy keys stored outside a secure environment and revoke keys stored in Hardware Security Modules (HSMs) when they are no longer needed, which include provisions for legal and regulatory requirements.
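Since CEK 13.1 and CEK 14.1 leave revocation and destruction of customer-managed keys to the customer, the sketch below shows one possible pattern with boto3: disable a compromised key immediately, then schedule deletion of its key material. The key ID is a hypothetical placeholder; the appropriate waiting period is a customer policy decision.

```python
# Minimal sketch: revoking a compromised customer-managed KMS key by
# disabling it immediately, then scheduling deletion after a waiting
# period. The key ID below is a hypothetical placeholder.
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical key ID

# Disabling takes effect immediately: the key can no longer be used
# for encrypt/decrypt operations, but its key material is retained.
kms.disable_key(KeyId=key_id)

# Schedule destruction of the key material; KMS enforces a waiting
# period (7 to 30 days) during which the deletion can still be cancelled.
resp = kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=7)
print("Deletion scheduled for:", resp["DeletionDate"])
```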
CEK 15.1: Are processes, procedures, and technical measures to create keys in a pre-activated state (i.e., when they have been generated but not authorized for use) being defined, implemented, and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-15, Key Activation (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to create keys in a pre-activated state when they have been generated but not authorized for use, which include provisions for legal and regulatory requirements.

CEK 16.1: Are processes, procedures, and technical measures to monitor, review, and approve key transitions (e.g., from any state to/from suspension) being defined, implemented, and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-16, Key Suspension (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to monitor, review, and approve key transitions from any state to/from suspension, which include provisions for legal and regulatory requirements.

CEK 17.1: Are processes, procedures, and technical measures to deactivate keys (at the time of their expiration date) being defined, implemented, and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-17, Key Deactivation (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to deactivate keys at the time of their expiration date, which include provisions for legal and regulatory requirements.

CEK 18.1: Are processes, procedures, and technical measures to manage archived keys in a secure repository (requiring least privilege access) being defined, implemented, and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-18, Key Archival (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to manage archived keys in a secure repository requiring least privilege access, which include provisions for legal and regulatory requirements.

CEK 19.1: Are processes, procedures, and technical measures to encrypt information in specific scenarios (e.g., only in controlled circumstances and thereafter only for data decryption and never for encryption) being defined, implemented, and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: CEK-19, Key Compromise (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to use compromised keys to encrypt information only in controlled circumstances, and thereafter exclusively for decrypting data and never for encrypting data, which include provisions for legal and regulatory requirements.

CEK 20.1: Are processes, procedures, and technical measures to assess operational continuity risks (versus the risk of losing control of keying material and exposing protected data) being defined, implemented, and evaluated to include legal and regulatory requirement provisions?
CAIQ Answer: Yes. SSRM Control Ownership: Shared CSP and CSC.
CSP Implementation Description: AWS establishes and manages cryptographic keys for required cryptography employed within the AWS infrastructure. AWS produces, controls, and distributes symmetric cryptographic keys using NIST-approved key management technology and processes in the AWS information system. An AWS-developed secure key and credential manager is used to create, protect, and distribute symmetric keys, and is used to secure and distribute AWS credentials needed on hosts, RSA public/private keys, and X.509 certificates. AWS cryptographic processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS, and ISO 27001. AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS.
CCM Control: CEK-20, Key Recovery (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to assess the risk to operational continuity versus the risk of the keying material, and the information it protects, being exposed if control of the keying material is lost, which include provisions for legal and regulatory requirements.

CEK 21.1: Are key management system processes, procedures, and technical measures being defined, implemented, and evaluated to track and report all cryptographic materials and status changes, including legal and regulatory requirements provisions?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSP Implementation Description: AWS allows customers to use their own encryption mechanisms for nearly all of the services, including S3, EBS, and EC2. IPSec tunnels to VPC are also encrypted. In addition, customers can leverage AWS Key Management Service (KMS) to create and control encryption keys (refer to https://aws.amazon.com/kms/). Refer to the AWS SOC reports for more details on KMS. In addition, refer to the AWS Cloud Security whitepaper for additional details, available at http://aws.amazon.com/security.
CCM Control: CEK-21, Key Inventory Management (Cryptography, Encryption & Key Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures in order for the key management system to track and report all cryptographic materials and changes in status, which include provisions for legal and regulatory requirements.
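For the customer-side tracking responsibility described in CEK 21.1, the sketch below shows one simple way to enumerate KMS keys in an account and report their state and rotation status with boto3. It is only an illustrative starting point, not a complete key inventory or reporting system.

```python
# Minimal sketch: building a simple inventory of KMS keys in the
# current account/region, with their state and rotation status.
import boto3

kms = boto3.client("kms")

paginator = kms.get_paginator("list_keys")
for page in paginator.paginate():
    for entry in page["Keys"]:
        meta = kms.describe_key(KeyId=entry["KeyId"])["KeyMetadata"]
        line = f'{meta["KeyId"]}  state={meta["KeyState"]}  manager={meta["KeyManager"]}'
        # Rotation status is reported here only for customer-managed keys.
        if meta["KeyManager"] == "CUSTOMER":
            rotation = kms.get_key_rotation_status(KeyId=meta["KeyId"])
            line += f'  rotation={rotation["KeyRotationEnabled"]}'
        print(line)
```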
DCS 01.1: Are policies and procedures for the secure disposal of equipment used outside the organization's premises established, documented, approved, communicated, enforced, and maintained?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Environments used for the delivery of AWS services are managed by authorized personnel and are located in AWS managed data centers. Media handling controls for the data centers are managed by AWS in alignment with the AWS Media Protection Policy. This policy includes procedures around access, marking, storage, transporting, and sanitization. Live media transported outside of data center secure zones is escorted by authorized personnel.
CCM Control: DCS-01, Off-Site Equipment Disposal Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the secure disposal of equipment used outside the organization's premises. If the equipment is not physically destroyed, a data destruction procedure that renders recovery of information impossible must be applied. Review and update the policies and procedures at least annually.

DCS 01.2: Is a data destruction procedure applied that renders information recovery impossible if equipment is not physically destroyed?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/.
CCM Control: DCS-01, Off-Site Equipment Disposal Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the secure disposal of equipment used outside the organization's premises. If the equipment is not physically destroyed, a data destruction procedure that renders recovery of information impossible must be applied. Review and update the policies and procedures at least annually.

DCS 01.3: Are policies and procedures for the secure disposal of equipment used outside the organization's premises reviewed and updated at least annually?
CAIQ Answer: Yes.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: DCS-01, Off-Site Equipment Disposal Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the secure disposal of equipment used outside the organization's premises. If the equipment is not physically destroyed, a data destruction procedure that renders recovery of information impossible must be applied. Review and update the policies and procedures at least annually.

DCS 02.1: Are policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location established, documented, approved, communicated, implemented, enforced, and maintained?
CAIQ Answer: Yes.
CSP Implementation Description: AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity, and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state, and local laws, statutes, ordinances, and regulations concerning security, privacy, and data protection of AWS services in order to minimize the risk of accidental or unauthorized access to, or disclosure of, customer content.
CCM Control: DCS-02, Off-Site Transfer Authorization Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location. The relocation or transfer request requires written or cryptographically verifiable authorization. Review and update the policies and procedures at least annually.

DCS 02.2: Does a relocation or transfer request require written or cryptographically verifiable authorization?
CAIQ Answer: Yes.
CSP Implementation Description: Environments used for the delivery of AWS services are managed by authorized personnel and are located in AWS managed data centers. Media handling controls for the data centers are managed by AWS in alignment with the AWS Media Protection Policy. This policy includes procedures around access, marking, storage, transporting, and sanitization. Live media transported outside of data center secure zones is escorted by authorized personnel.
CCM Control: DCS-02, Off-Site Transfer Authorization Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location. The relocation or transfer request requires written or cryptographically verifiable authorization. Review and update the policies and procedures at least annually.

DCS 02.3: Are policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: DCS-02, Off-Site Transfer Authorization Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the relocation or transfer of hardware, software, or data/information to an offsite or alternate location. The relocation or transfer request requires written or cryptographically verifiable authorization. Review and update the policies and procedures at least annually.

DCS 03.1: Are policies and procedures for maintaining a safe and secure working environment (in offices, rooms, and facilities) established, documented, approved, communicated, enforced, and maintained?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS engages with external certifying bodies and independent auditors to review and validate our compliance with compliance frameworks. The AWS SOC reports provide additional details on the specific physical security control activities executed by AWS. Refer to ISO 27001 standards, Annex A, domain 11 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: DCS-03, Secure Area Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for maintaining a safe and secure working environment in offices, rooms, and facilities. Review and update the policies and procedures at least annually.

DCS 03.2: Are policies and procedures for maintaining safe, secure working environments (e.g., offices, rooms) reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: DCS-03, Secure Area Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for maintaining a safe and secure working environment in offices, rooms, and facilities. Review and update the policies and procedures at least annually.

DCS 04.1: Are policies and procedures for the secure transportation of physical media established, documented, approved, communicated, enforced, evaluated, and maintained?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Environments used for the delivery of AWS services are managed by authorized personnel and are located in AWS managed data centers. Media handling controls for the data centers are managed by AWS in alignment with the AWS Media Protection Policy. This policy includes procedures around access, marking, storage, transporting, and sanitization. Live media transported outside of data center secure zones is escorted by authorized personnel.
CCM Control: DCS-04, Secure Media Transportation Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the secure transportation of physical media. Review and update the policies and procedures at least annually.

DCS 04.2: Are policies and procedures for the secure transportation of physical media reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: DCS-04, Secure Media Transportation Policy and Procedures (Datacenter Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the secure transportation of physical media. Review and update the policies and procedures at least annually.

DCS 05.1: Is the classification and documentation of physical and logical assets based on the organizational business risk?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: In alignment with ISO 27001 standards, AWS assets are assigned an owner, and are tracked and monitored by AWS personnel using AWS proprietary inventory management tools.
CCM Control: DCS-05, Assets Classification (Datacenter Security). Specification: Classify and document the physical and logical assets (e.g., applications) based on the organizational business risk.

DCS 06.1: Are all relevant physical and logical assets at all CSP sites cataloged and tracked within a secured system?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: In alignment with ISO 27001 standards, AWS hardware assets are assigned an owner, and are tracked and monitored by AWS personnel using AWS proprietary inventory management tools.
CCM Control: DCS-06, Assets Cataloguing and Tracking (Datacenter Security). Specification: Catalogue and track all relevant physical and logical assets located at all of the CSP's sites within a secured system.

DCS 07.1: Are physical security perimeters implemented to safeguard personnel, data, and information systems?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical security controls include, but are not limited to, perimeter controls such as fencing, walls, security staff, video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. The AWS SOC reports provide additional details on the specific control activities executed by AWS. Refer to ISO 27001 standards, Annex A, domain 11 for further information. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For more information on the design, layout, and operations of our data centers, please visit the AWS Data Center Overview site.
CCM Control: DCS-07, Controlled Access Points (Datacenter Security). Specification: Implement physical security perimeters to safeguard personnel, data, and information systems. Establish physical security perimeters between the administrative and business areas and the data storage and processing facilities areas.

DCS 07.2: Are physical security perimeters established between administrative and business areas and data storage and processing facilities?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical security controls include, but are not limited to, perimeter controls such as fencing, walls, security staff, video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. The AWS SOC reports provide additional details on the specific control activities executed by AWS. Refer to ISO 27001 standards, Annex A, domain 11 for further information. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. For more information on the design, layout, and operations of our data centers, please visit the AWS Data Center Overview site.
CCM Control: DCS-07, Controlled Access Points (Datacenter Security). Specification: Implement physical security perimeters to safeguard personnel, data, and information systems. Establish physical security perimeters between the administrative and business areas and the data storage and processing facilities areas.

DCS 08.1: Is equipment identification used as a method for connection authentication?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS manages equipment identification in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM Control: DCS-08, Equipment Identification (Datacenter Security). Specification: Use equipment identification as a method for connection authentication.

DCS 09.1: Are solely authorized personnel able to access secure areas, with all ingress and egress areas restricted, documented, and monitored by physical access control mechanisms?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical access is strictly controlled both at the perimeter and at building ingress points, and includes, but is not limited to, professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM Control: DCS-09, Secure Area Authorization (Datacenter Security). Specification: Allow only authorized personnel access to secure areas, with all ingress and egress points restricted, documented, and monitored by physical access control mechanisms. Retain access control records on a periodic basis, as deemed appropriate by the organization.

DCS 09.2: Are access control records retained periodically, as deemed appropriate by the organization?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Authentication logging aggregates sensitive logs from EC2 hosts and stores them on S3. The log integrity checker inspects logs to ensure they were uploaded to S3 unchanged, by comparing them with local manifest files. Access and privileged command auditing logs record every automated and interactive login to the systems, as well as every privileged command executed. External access to data stored in Amazon S3 is logged, and the logs are retained for at least 90 days, including relevant access request information such as the data accessor, IP address, object, and operation.
CCM Control: DCS-09, Secure Area Authorization (Datacenter Security). Specification: Allow only authorized personnel access to secure areas, with all ingress and egress points restricted, documented, and monitored by physical access control mechanisms. Retain access control records on a periodic basis, as deemed appropriate by the organization.

DCS 10.1: Are external perimeter datacenter surveillance systems and surveillance systems at all ingress and egress points implemented, maintained, and operated?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical access is strictly controlled both at the perimeter and at building ingress points, and includes, but is not limited to, professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM Control: DCS-10, Surveillance System (Datacenter Security). Specification: Implement, maintain, and operate datacenter surveillance systems at the external perimeter and at all the ingress and egress points to detect unauthorized ingress and egress attempts.

DCS 11.1: Are datacenter personnel trained to respond to unauthorized access or egress attempts?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Physical access is strictly controlled both at the perimeter and at building ingress points, and includes, but is not limited to, professional security staff utilizing video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access data center floors. Physical access points to server locations are recorded by closed circuit television cameras (CCTV), as defined in the AWS Data Center Physical Security Policy.
CCM Control: DCS-11, Unauthorized Access Response Training (Datacenter Security). Specification: Train datacenter personnel to respond to unauthorized ingress or egress attempts.

DCS 12.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure risk-based protection of power and telecommunication cables from interception, interference, or damage threats at all facilities, offices, and rooms?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS equipment is protected from utility service outages in alignment with the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. The AWS SOC reports provide additional details on controls in place to minimize the effect of a malfunction or physical disaster to the computer and data center facilities.
CCM Control: DCS-12, Cabling Security (Datacenter Security). Specification: Define, implement, and evaluate processes, procedures, and technical measures that ensure risk-based protection of power and telecommunication cables from a threat of interception, interference, or damage at all facilities, offices, and rooms.

DCS 13.1: Are data center environmental control systems, designed to monitor, maintain, and test that on-site temperature and humidity conditions fall within accepted industry standards, effectively implemented and maintained?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS data centers incorporate physical protection against environmental risks. AWS' physical protection against environmental risks has been validated by an independent auditor and has been certified as being in alignment with ISO 27002 best practices. Refer to the ISO 27001 standard, Annex A, domain 11, and to the data center controls overview at https://aws.amazon.com/compliance/data-center/controls/.
CCM Control: DCS-13, Environmental Systems (Datacenter Security). Specification: Implement and maintain data center environmental control systems that monitor, maintain, and test for continual effectiveness the temperature and humidity conditions within accepted industry standards.

DCS 14.1: Are utility services secured, monitored, maintained, and tested at planned intervals for continual effectiveness?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. The AWS SOC reports provide additional details on controls in place to minimize the effect of a malfunction or physical disaster to the computer and data center facilities. Please refer to the data center controls overview at https://aws.amazon.com/compliance/data-center/controls/.
CCM Control: DCS-14, Secure Utilities (Datacenter Security). Specification: Secure, monitor, maintain, and test utilities services for continual effectiveness at planned intervals.

DCS 15.1: Is business-critical equipment segregated from locations subject to a high probability of environmental risk events?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: The AWS Security Operations Center performs quarterly threat and vulnerability reviews of data centers and colocation sites. These reviews are in addition to an initial environmental and geographic assessment of a site performed prior to building or leasing. The quarterly reviews are validated by third parties during our SOC, PCI, and ISO assessments.
CCM Control: DCS-15, Equipment Location (Datacenter Security). Specification: Keep business-critical equipment away from locations subject to high probability of environmental risk events.

DSP 01.1: Are policies and procedures established, documented, approved, communicated, enforced, evaluated, and maintained for the classification, protection, and handling of data throughout its lifecycle, according to all applicable laws and regulations, standards, and risk level?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS has implemented data handling and classification requirements which provide specifications around:
• Data encryption
• Content in transit and during storage
• Access
• Retention
• Physical controls
• Mobile devices
• Handling requirements
AWS services are content agnostic, in that they offer the same high level of security to customers regardless of the type of content being stored. We are vigilant about our customers' security and have implemented sophisticated technical and physical measures against unauthorized access. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-01, Security and Privacy Policy and Procedures (Data Security and Privacy Lifecycle Management). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the classification, protection, and handling of data throughout its lifecycle, and according to all applicable laws and regulations, standards, and risk level. Review and update the policies and procedures at least annually.

DSP 01.2: Are data security and privacy policies and procedures reviewed and updated at least annually?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM Control: DSP-01, Security and Privacy Policy and Procedures (Data Security and Privacy Lifecycle Management). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the classification, protection, and handling of data throughout its lifecycle, and according to all applicable laws and regulations, standards, and risk level. Review and update the policies and procedures at least annually.

DSP 02.1: Are industry-accepted methods applied for secure data disposal from storage media so information is not recoverable by any forensic means?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in NIST 800-88 ("Guidelines for Media Sanitization") as part of the decommissioning process. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/.
CCM Control: DSP-02, Secure Disposal (Data Security and Privacy Lifecycle Management). Specification: Apply industry-accepted methods for the secure disposal of data from storage media such that data is not recoverable by any forensic means.

DSP 03.1: Is a data inventory created and maintained for sensitive and personal information (at a minimum)?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-03, Data Inventory (Data Security and Privacy Lifecycle Management). Specification: Create and maintain a data inventory, at least for any sensitive data and personal data.

DSP 04.1: Is data classified according to type and sensitivity levels?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-04, Data Classification (Data Security and Privacy Lifecycle Management). Specification: Classify data according to its type and sensitivity level.

DSP 05.1: Is data flow documentation created to identify what data is processed and where it is stored and transmitted?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-05, Data Flow Documentation (Data Security and Privacy Lifecycle Management). Specification: Create data flow documentation to identify what data is processed, stored, or transmitted where. Review data flow documentation at defined intervals, at least annually, and after any change.
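Because the classification duty in DSP 04.1 sits with the customer, the sketch below illustrates one possible customer-side mechanism: recording a classification label on stored S3 objects with object tags so it can feed a data inventory or data-flow review. The bucket, object key, and tag values are hypothetical, and tagging is only one of many ways a customer might implement classification.

```python
# Minimal sketch: recording a data classification on an S3 object via
# object tags. Bucket, key, and tag values below are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_object_tagging(
    Bucket="example-sensitive-data-bucket",
    Key="reports/2024/q1-summary.csv",
    Tagging={
        "TagSet": [
            {"Key": "classification", "Value": "confidential"},
            {"Key": "data-owner", "Value": "finance"},
        ]
    },
)

# Read the tags back, e.g. when compiling a data inventory.
tags = s3.get_object_tagging(
    Bucket="example-sensitive-data-bucket",
    Key="reports/2024/q1-summary.csv",
)
print(tags["TagSet"])
```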
DSP 05.2: Is data flow documentation reviewed at defined intervals, at least annually, and after any change?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-05, Data Flow Documentation (Data Security and Privacy Lifecycle Management). Specification: Create data flow documentation to identify what data is processed, stored, or transmitted where. Review data flow documentation at defined intervals, at least annually, and after any change.

DSP 06.1: Is the ownership and stewardship of all relevant personal and sensitive data documented?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-06, Data Ownership and Stewardship (Data Security and Privacy Lifecycle Management). Specification: Document ownership and stewardship of all relevant documented personal and sensitive data. Perform review at least annually.

DSP 06.2: Is data ownership and stewardship documentation reviewed at least annually?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-06, Data Ownership and Stewardship (Data Security and Privacy Lifecycle Management). Specification: Document ownership and stewardship of all relevant documented personal and sensitive data. Perform review at least annually.

DSP 07.1: Are systems, products, and business practices based on security principles by design and per industry best practices?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: AWS maintains a systematic approach to planning and developing new services for the AWS environment to ensure that quality and security requirements are met with each release. The design of new services, or any significant changes to current services, follows secure software development practices and is controlled through a project management system with multidisciplinary participation. Prior to launch, each of the following requirements must be reviewed:
• Security risk assessment
• Threat modeling
• Security design reviews
• Secure code reviews
• Security testing
• Vulnerability/penetration testing
CCM Control: DSP-07, Data Protection by Design and Default (Data Security and Privacy Lifecycle Management). Specification: Develop systems, products, and business practices based upon a principle of security by design and industry best practices.

DSP 08.1: Are systems, products, and business practices based on privacy principles by design and according to industry best practices?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-08, Data Privacy by Design and Default (Data Security and Privacy Lifecycle Management). Specification: Develop systems, products, and business practices based upon a principle of privacy by design and industry best practices. Ensure that systems' privacy settings are configured by default, according to all applicable laws and regulations.

DSP 08.2: Are systems' privacy settings configured by default and according to all applicable laws and regulations?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: AWS customers are responsible for adhering to the regulatory requirements of the jurisdictions in which their business is active.
CCM Control: DSP-08, Data Privacy by Design and Default (Data Security and Privacy Lifecycle Management). Specification: Develop systems, products, and business practices based upon a principle of privacy by design and industry best practices. Ensure that systems' privacy settings are configured by default, according to all applicable laws and regulations.

DSP 09.1: Is a data protection impact assessment (DPIA) conducted when processing personal data and evaluating the origin, nature, particularity, and severity of risks, according to any applicable laws, regulations, and industry best practices?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-09, Data Protection Impact Assessment (Data Security and Privacy Lifecycle Management). Specification: Conduct a Data Protection Impact Assessment (DPIA) to evaluate the origin, nature, particularity, and severity of the risks upon the processing of personal data, according to any applicable laws, regulations, and industry best practices.

DSP 10.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure any transfer of personal or sensitive data is protected from unauthorized access and only processed within scope (as permitted by respective laws and regulations)?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-10, Sensitive Data Transfer (Data Security and Privacy Lifecycle Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures that ensure any transfer of personal or sensitive data is protected from unauthorized access and only processed within scope, as permitted by the respective laws and regulations.

DSP 11.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable data subjects to request access to, modify, or delete personal data (per applicable laws and regulations)?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: This is a customer responsibility. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-11, Personal Data Access, Reversal, Rectification and Deletion (Data Security and Privacy Lifecycle Management). Specification: Define and implement processes, procedures, and technical measures to enable data subjects to request access to, modification of, or deletion of their personal data, according to any applicable laws and regulations.

DSP 12.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure personal data is processed per applicable laws and regulations and for the purposes declared to the data subject?
CAIQ Answer: Yes. SSRM Control Ownership: Shared CSP and CSC.
CSP Implementation Description: AWS has established a formal Data Subject Access Request (DSAR) process in accordance with the General Data Protection Regulation (GDPR). To initiate a request, customers contact AWS and a ticket is opened (via the internal Harbinger ticketing system) with a Customer Service Team Manager, who works with Legal to process it. AWS also maintains continual, independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CSC Responsibilities: AWS customers are responsible for the management of the data they place into AWS services, including adhering to applicable laws and regulations. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
CCM Control: DSP-12, Limitation of Purpose in Personal Data Processing (Data Security and Privacy Lifecycle Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to ensure that personal data is processed according to any applicable laws and regulations, and for the purposes declared to the data subject.
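Since protecting data transfers under DSP 10.1 is the customer's responsibility, the sketch below shows one possible customer-side measure: an S3 bucket policy that denies any request not made over TLS. The bucket name is hypothetical and the policy is illustrative only; it is one of several controls (alongside encryption and access management) a customer might choose.

```python
# Minimal sketch: a customer-side bucket policy that denies any S3
# request not using TLS. The bucket name is hypothetical and the
# policy is illustrative only.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-sensitive-data-bucket"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```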
DSP 13.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for the transfer and sub-processing of personal data within the service supply chain (according to any applicable laws and regulations)?
CAIQ Answer: NA.
CSP Implementation Description: Note: AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure. AWS does not utilize third parties to provide services to customers. There are no subcontractors authorized by AWS to access any customer-owned content that you upload onto AWS. To monitor subcontractor access year-round, please refer to https://aws.amazon.com/compliance/sub-processors/.
CCM Control: DSP-13, Personal Data Sub-processing (Data Security and Privacy Lifecycle Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures for the transfer and sub-processing of personal data within the service supply chain, according to any applicable laws and regulations.

DSP 14.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to disclose details to the data owner of any personal or sensitive data access by sub-processors before processing initiation?
CAIQ Answer: NA.
CSP Implementation Description: AWS does not utilize third parties to provide services to customers. There are no subcontractors authorized by AWS to access any customer-owned content that you upload onto AWS. To monitor subcontractor access year-round, please refer to https://aws.amazon.com/compliance/third-party-access/.
CCM Control: DSP-14, Disclosure of Data Sub-processors (Data Security and Privacy Lifecycle Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to disclose the details of any personal or sensitive data access by sub-processors to the data owner prior to initiation of that processing.

DSP 15.1: Is authorization from data owners obtained, and the associated risk managed, before replicating or using production data in non-production environments?
CAIQ Answer: NA.
CSP Implementation Description: Customer data is not used for testing.
CCM Control: DSP-15, Limitation of Production Data Use (Data Security and Privacy Lifecycle Management). Specification: Obtain authorization from data owners, and manage associated risk, before replicating or using production data in non-production environments.

DSP 16.1: Do data retention, archiving, and deletion practices follow business requirements, applicable laws, and regulations?
CAIQ Answer: Yes. SSRM Control Ownership: Shared CSP and CSC.
CSP Implementation Description: AWS maintains a retention policy applicable to AWS internal data and system components in order to continue operations of AWS business and services. Critical AWS system components, including audit evidence and logging records, are replicated across multiple Availability Zones, and backups are maintained and monitored.
CSC Responsibilities: AWS customers are responsible for the management of the data they place into AWS services, including retention, archiving, and deletion policies and practices.
CCM Control: DSP-16, Data Retention and Deletion (Data Security and Privacy Lifecycle Management). Specification: Data retention, archiving, and deletion is managed in accordance with business requirements, applicable laws, and regulations.
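For the customer side of DSP 16.1, the sketch below shows one way a retention schedule might be expressed on an S3 bucket with a lifecycle rule that archives objects and then deletes them. The bucket name, prefix, and retention periods are hypothetical; the actual schedule must come from the customer's own retention policy and legal obligations.

```python
# Minimal sketch: expressing a customer-defined retention schedule on an
# S3 bucket. Bucket name, prefix, and retention periods are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-sensitive-data-bucket",   # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-then-delete-reports",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                # Move objects to archival storage after 90 days...
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # ...and delete them once an assumed 7-year retention ends.
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```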
DSP 17.1: Are processes, procedures, and technical measures defined and implemented to protect sensitive data throughout its lifecycle?
CAIQ Answer: NA. SSRM Control Ownership: CSC-owned.
CSC Responsibilities: Customers control their customer content. With AWS, customers:
• Determine where their customer content will be stored, including the type of storage and the geographic region of that storage.
• Can replicate and back up their customer content in more than one region; we will not move or replicate customer content outside of the customer's chosen region(s), except as legally required and as necessary to maintain the AWS services and provide them to our customers and their end users.
• Choose the secured state of their customer content. We offer customers strong encryption for customer content in transit or at rest, and we provide customers with the option to manage their own encryption keys.
• Manage access to their customer content and AWS services and resources through users, groups, permissions, and credentials that customers control.
CCM Control: DSP-17, Sensitive Data Protection (Data Security and Privacy Lifecycle Management). Specification: Define and implement processes, procedures, and technical measures to protect sensitive data throughout its lifecycle.

DSP 18.1: Does the CSP have in place, and describe to CSCs, the procedure to manage and respond to requests for disclosure of Personal Data by Law Enforcement Authorities according to applicable laws and regulations?
CAIQ Answer: Yes. SSRM Control Ownership: CSP-owned.
CSP Implementation Description: We are vigilant about our customers' privacy. AWS policy prohibits the disclosure of customer content unless we're required to do so to comply with the law, or with a valid and binding order of a governmental or regulatory body. Unless we are prohibited from doing so, or there is clear indication of illegal conduct in connection with the use of Amazon products or services, Amazon notifies customers before disclosing customer content so they can seek protection from disclosure. It is also important to point out that our customers can encrypt their customer content, and we provide customers with the option to manage their own encryption keys. We know transparency matters to our customers, so we regularly publish a report about the types and volume of information requests we receive at https://aws.amazon.com/compliance/amazon-information-requests/.
CCM Control: DSP-18, Disclosure Notification (Data Security and Privacy Lifecycle Management). Specification: The CSP must have in place, and describe to CSCs, the procedure to manage and respond to requests for disclosure of Personal Data by Law Enforcement Authorities according to applicable laws and regulations. The CSP must give special attention to the notification procedure to interested CSCs, unless otherwise prohibited, such as a prohibition under criminal law to preserve confidentiality of a law enforcement investigation.
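To illustrate the "choose the secured state of their customer content" option described under DSP 17.1, the sketch below sets default encryption on an S3 bucket so that new objects are encrypted at rest with a customer-managed KMS key. The bucket name and key alias are hypothetical placeholders; this is one possible configuration, not a prescribed one.

```python
# Minimal sketch: setting default encryption on an S3 bucket so that
# new objects are encrypted at rest with a customer-managed KMS key.
# Bucket name and key alias are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-key",
                },
                # Use an S3 Bucket Key to reduce the number of KMS calls.
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```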
DSP-18.2: Does the CSP give special attention to the notification procedure to interested CSCs, unless otherwise prohibited, such as a prohibition under criminal law to preserve confidentiality of a law enforcement investigation?
Answer: Yes (shared CSP and CSC). See the response to question DSP-18.1.

CCM control DSP-19: Data Location (Data Security and Privacy Lifecycle Management)
Control specification: Define and implement processes, procedures and technical measures to specify and document the physical locations of data, including any locations in which data is processed or backed up.
DSP-19.1: Are processes, procedures and technical measures defined and implemented to specify and document physical data locations, including locales where data is processed or backed up?
Answer: NA (CSC-owned). This is a customer responsibility. Customers manage access to their customer content and AWS services and resources. We provide an advanced set of access, encryption and logging features to help you do this effectively (such as AWS CloudTrail). We do not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to our customers and their end users. Customers choose the region(s) in which their customer content will be stored. We will not move or replicate customer content outside of the customer's chosen region(s), except as legally required and as necessary to maintain the AWS services and provide them to our customers and their end users. Customers choose how their customer content is secured. We offer our customers strong encryption for customer content in transit or at rest, and we provide customers with the option to manage their own encryption keys.
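Because data location is CSC-owned under DSP-19.1, the customer pins content to a region when the storage resource is created. A minimal sketch, assuming a hypothetical bucket and an EU region chosen purely for illustration:

```python
import boto3

# Hypothetical bucket name and region; pick the region(s) your data
# residency requirements call for.
BUCKET = "example-eu-resident-data"
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)

# Creating the bucket with a location constraint pins the data to the chosen
# region; AWS does not move customer content out of that region on its own.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
```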
CCM control GRC-01: Governance Program Policy and Procedures (Governance, Risk and Compliance)
Control specification: Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for an information governance program, which is sponsored by the leadership of the organization. Review and update the policies and procedures at least annually.
GRC-01.1: Are information governance program policies and procedures sponsored by organizational leadership established, documented, approved, communicated, applied, evaluated and maintained?
Answer: Yes (CSP-owned). AWS has established formal policies and procedures to provide employees a common baseline for information security standards and guidance. The AWS Information Security Management System policy establishes guidelines for protecting the confidentiality, integrity and availability of customers' systems and content. Maintaining customer trust and confidence is of the utmost importance to AWS. AWS works to comply with applicable federal, state and local laws, statutes, ordinances and regulations concerning security, privacy and data protection of AWS services in order to minimize the risk of accidental or unauthorized access or disclosure of customer content.
GRC-01.2: Are the policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
CCM control GRC-02: Risk Management Program (Governance, Risk and Compliance)
Control specification: Establish a formal, documented and leadership-sponsored Enterprise Risk Management (ERM) program that includes policies and procedures for identification, evaluation, ownership, treatment and acceptance of cloud security and privacy risks.
GRC-02.1: Is there an established, formal, documented and leadership-sponsored enterprise risk management (ERM) program that includes policies and procedures for identification, evaluation, ownership, treatment and acceptance of cloud security and privacy risks?
Answer: Yes (CSP-owned). AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually. The risk management program encompasses the following phases:
Discovery: The discovery phase includes listing out risks (threats and vulnerabilities) that exist in the environment. This phase provides a basis for all other risk management activities.
Research: The research phase considers the potential impact(s) of identified risks to the business and the likelihood of their occurrence, and includes an evaluation of internal control effectiveness.
Evaluate: The evaluate phase includes ensuring that controls, processes and other physical and virtual safeguards are in place to prevent and detect identified and assessed risks.
Resolve: The resolve phase results in risk reports provided to managers with the data they need to make effective business decisions and to comply with internal policies and applicable regulations.
Monitor: The monitor phase includes performing monitoring activities to evaluate whether processes, initiatives, functions and/or activities are mitigating the risk as designed.

CCM control GRC-03: Organizational Policy Reviews (Governance, Risk and Compliance)
Control specification: Review all relevant organizational policies and associated procedures at least annually, or when a substantial change occurs within the organization.
GRC-03.1: Are all relevant organizational policies and associated procedures reviewed at least annually or when a substantial organizational change occurs?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.

CCM control GRC-04: Policy Exception Process (Governance, Risk and Compliance)
Control specification: Establish and follow an approved exception process as mandated by the governance program whenever a deviation from an established policy occurs.
GRC-04.1: Is an approved exception process, mandated by the governance program, established and followed whenever a deviation from an established policy occurs?
Answer: Yes (CSP-owned). Management reviews exceptions to security policies to assess and mitigate risks. AWS Security maintains a documented procedure describing the policy exception workflow on an internal AWS website. Policy exceptions are tracked and maintained within the policy tool, and exceptions are approved, rejected or denied based on the procedures outlined within the procedure document.
CCM control GRC-05: Information Security Program (Governance, Risk and Compliance)
Control specification: Develop and implement an Information Security Program, which includes programs for all the relevant domains of the CCM.
GRC-05.1: Has an information security program (including programs of all relevant CCM domains) been developed and implemented?
Answer: Yes (CSP-owned). AWS has established an information security management program with designated roles and responsibilities that are appropriately aligned within the organization. AWS management reviews and evaluates the risks identified in the risk management program at least annually; the phases of that program (Discovery, Research, Evaluate, Resolve, Monitor) are described in the response to question GRC-02.1.

CCM control GRC-06: Governance Responsibility Model (Governance, Risk and Compliance)
Control specification: Define and document roles and responsibilities for planning, implementing, operating, assessing and improving governance programs.
GRC-06.1: Are roles and responsibilities for planning, implementing, operating, assessing and improving governance programs defined and documented?
Answer: Yes (CSP-owned). See the response to question GRC-05.1.
CCM control GRC-07: Information System Regulatory Mapping (Governance, Risk and Compliance)
Control specification: Identify and document all relevant standards, regulations, legal/contractual and statutory requirements which are applicable to your organization.
GRC-07.1: Are all relevant standards, regulations, legal/contractual and statutory requirements applicable to your organization identified and documented?
Answer: Yes (CSP-owned). AWS documents, tracks and monitors its legal, regulatory and contractual agreements and obligations. In order to do so, AWS performs and maintains the following activities:
1) Identifies and evaluates applicable laws and regulations for each of the jurisdictions in which AWS operates
2) Documents and implements controls to help ensure its conformity with statutory, regulatory and contractual requirements relevant to AWS
3) Categorizes the sensitivity of information according to the AWS information security policies to help protect it from loss, destruction, falsification, unauthorized access and unauthorized release
4) Informs and continually trains personnel who must be made aware of information security policies to help protect sensitive AWS information
5) Monitors for nonconformities to the information security policies, with a process in place to take corrective actions and enforce appropriate disciplinary action
AWS maintains relationships with internal and external parties to monitor legal, regulatory and contractual requirements. Should a new security directive be issued, AWS creates and documents plans to implement the directive within a designated timeframe. AWS provides customers with evidence of its compliance with applicable legal, regulatory and contractual requirements through audit reports, attestations, certifications and other compliance enablers. Visit https://aws.amazon.com/artifact for information on how to review the AWS external attestation and assurance documentation.

CCM control GRC-08: Special Interest Groups (Governance, Risk and Compliance)
Control specification: Establish and maintain contact with cloud-related special interest groups and other relevant entities, in line with business context.
GRC-08.1: Is contact established and maintained with cloud-related special interest groups and other relevant entities?
Answer: Yes (CSP-owned). AWS personnel are part of special interest groups, including relevant external parties such as security groups. AWS personnel use these groups to improve their knowledge about security best practices and to stay up to date with relevant security information.
CCM control HRS-01: Background Screening Policy and Procedures (Human Resources)
Control specification: Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for background verification of all new employees (including but not limited to remote employees, contractors and third parties) according to local laws, regulations, ethics and contractual constraints, and proportional to the data classification to be accessed, the business requirements and acceptable risk. Review and update the policies and procedures at least annually.
HRS-01.1: Are background verification policies and procedures of all new employees (including but not limited to remote employees, contractors and third parties) established, documented, approved, communicated, applied, evaluated and maintained?
Answer: Yes (CSP-owned). Where permitted by law, AWS requires that employees undergo a background screening at hiring, commensurate with their position and level of access (Control AWSCA-9.2). AWS has a process to assess whether AWS employees who have access to resources that store or process customer data via permission groups are subject to a post-hire background check, as applicable with local law. AWS employees who have access to resources that store or process customer data will have a background check no less than once a year (Control AWSCA-9.9).
HRS-01.2: Are background verification policies and procedures designed according to local laws, regulations, ethics and contractual constraints, and proportional to the data classification to be accessed, business requirements and acceptable risk?
Answer: Yes (CSP-owned). AWS conducts criminal background checks, as permitted by applicable law, as part of pre-employment screening practices for employees, commensurate with the employee's position and level of access to AWS facilities. The AWS SOC reports provide additional details regarding the controls in place for background verification.
HRS-01.3: Are background verification policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.

CCM control HRS-02: Acceptable Use of Technology Policy and Procedures (Human Resources)
Control specification: Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets. Review and update the policies and procedures at least annually.
HRS-02.1: Are policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets established, documented, approved, communicated, applied, evaluated and maintained?
Answer: Yes (CSP-owned). AWS has implemented data handling and classification requirements that provide specifications around:
• Data encryption
• Content in transit and during storage
• Access
• Retention
• Physical controls
• Mobile devices
• Data handling requirements
Employees are required to review and sign off on an employment contract which acknowledges their responsibilities to overall Company standards and information security.
HRS-02.2: Are the policies and procedures for defining allowances and conditions for the acceptable use of organizationally owned or managed assets reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.

CCM control HRS-03: Clean Desk Policy and Procedures (Human Resources)
Control specification: Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures that require unattended workspaces to not have openly visible confidential data. Review and update the policies and procedures at least annually.
HRS-03.1: Are policies and procedures requiring unattended workspaces to conceal confidential data established, documented, approved, communicated, applied, evaluated and maintained?
Answer: Yes (CSP-owned). AWS roles and responsibilities for maintaining a safe and secure working environment are reviewed by independent external auditors during audits for our SOC, PCI DSS and ISO 27001 compliance.
HRS-03.2: Are policies and procedures requiring unattended workspaces to conceal confidential data reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.

CCM control HRS-04: Remote and Home Working Policy and Procedures (Human Resources)
Control specification: Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures to protect information accessed, processed or stored at remote sites and locations. Review and update the policies and procedures at least annually.
HRS-04.1: Are policies and procedures to protect information accessed, processed or stored at remote sites and locations established, documented, approved, communicated, applied, evaluated and maintained?
Answer: Yes (shared CSP and CSC). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response.
HRS-04.2: Are policies and procedures to protect information accessed, processed or stored at remote sites and locations reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.

CCM control HRS-05: Asset Returns (Human Resources)
Control specification: Establish and document procedures for the return of organization-owned assets by terminated employees.
HRS-05.1: Are return procedures of organizationally owned assets by terminated employees established and documented?
Answer: Yes (CSP-owned). Upon termination of employment or contracts, AWS assets in the individual's possession are retrieved on the date of termination. In the case of immediate termination, the employee's or contractor's manager retrieves all AWS assets (e.g., authentication tokens, keys, badges) and escorts them out of the AWS facility.
CCM control HRS-06: Employment Termination (Human Resources)
Control specification: Establish, document and communicate to all personnel the procedures outlining the roles and responsibilities concerning changes in employment.
HRS-06.1: Are procedures outlining the roles and responsibilities concerning changes in employment established, documented and communicated to all personnel?
Answer: Yes (CSP-owned). The AWS Human Resources team defines internal management responsibilities to be followed for termination and role change of employees and vendors. The AWS SOC reports provide additional details.

CCM control HRS-07: Employment Agreement Process (Human Resources)
Control specification: Employees sign the employee agreement prior to being granted access to organizational information systems, resources and assets.
HRS-07.1: Are employees required to sign an employment agreement before gaining access to organizational information systems, resources and assets?
Answer: Yes (CSP-owned). Personnel supporting AWS systems and devices must sign a non-disclosure agreement prior to being granted access. Additionally, upon hire, personnel are required to read and accept the Acceptable Use Policy and the Amazon Code of Business Conduct and Ethics (Code of Conduct) Policy.

CCM control HRS-08: Employment Agreement Content (Human Resources)
Control specification: The organization includes, within the employment agreements, provisions and/or terms for adherence to established information governance and security policies.
HRS-08.1: Are provisions and/or terms for adherence to established information governance and security policies included within employment agreements?
Answer: Yes (CSP-owned). In alignment with the ISO 27001 standard, AWS employees complete periodic role-based training that includes AWS Security training and requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies. Refer to the SOC reports for additional details.

CCM control HRS-09: Personnel Roles and Responsibilities (Human Resources)
Control specification: Document and communicate roles and responsibilities of employees as they relate to information assets and security.
HRS-09.1: Are employee roles and responsibilities relating to information assets and security documented and communicated?
Answer: Yes (CSP-owned). AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities and management commitment. All policies are maintained in a centralized location that is accessible by employees.

CCM control HRS-10: Non-Disclosure Agreements (Human Resources)
Control specification: Identify, document and review, at planned intervals, requirements for non-disclosure/confidentiality agreements reflecting the organization's needs for the protection of data and operational details.
HRS-10.1: Are requirements for non-disclosure/confidentiality agreements reflecting organizational data protection needs and operational details identified, documented and reviewed at planned intervals?
Answer: Yes (CSP-owned). Amazon Legal Counsel manages and periodically revises the Amazon NDA to reflect AWS business needs.
CCM control HRS-11: Security Awareness Training (Human Resources)
Control specification: Establish, document, approve, communicate, apply, evaluate and maintain a security awareness training program for all employees of the organization, and provide regular training updates.
HRS-11.1: Is a security awareness training program for all employees of the organization established, documented, approved, communicated, applied, evaluated and maintained?
Answer: Yes (CSP-owned). In alignment with the ISO 27001 standard, all AWS employees complete periodic Information Security training, which requires an acknowledgement to complete. Compliance audits are periodically performed to validate that employees understand and follow the established policies. AWS roles and responsibilities are reviewed by independent external auditors during audits for our SOC, PCI DSS and ISO 27001 compliance.
HRS-11.2: Are regular security awareness training updates provided?
Answer: Yes (CSP-owned). See the response to question HRS-11.1.

CCM control HRS-12: Personal and Sensitive Data Awareness and Training (Human Resources)
Control specification: Provide all employees with access to sensitive organizational and personal data with appropriate security awareness training and regular updates in organizational procedures, processes and policies relating to their professional function relative to the organization.
HRS-12.1: Are all employees granted access to sensitive organizational and personal data provided with appropriate security awareness training?
Answer: Yes (CSP-owned). See the response to question HRS-11.1.
HRS-12.2: Are all employees granted access to sensitive organizational and personal data provided with regular updates in procedures, processes and policies relating to their professional function?
Answer: Yes (CSP-owned). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the control and responsibility of their data and associated media assets. It is the responsibility of the customer to manage mobile security devices and the access to the customer's content.

CCM control HRS-13: Compliance User Responsibility (Human Resources)
Control specification: Make employees aware of their roles and responsibilities for maintaining awareness and compliance with established policies and procedures, and applicable legal, statutory or regulatory compliance obligations.
HRS-13.1: Are employees notified of their roles and responsibilities to maintain awareness and compliance with established policies, procedures and applicable legal, statutory or regulatory compliance obligations?
Answer: Yes (CSP-owned). AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner. These methods include orientation and training programs for newly hired employees, as well as electronic mail messages and the posting of information via the Amazon intranet. Refer to ISO 27001 standard Annex A, domains 7 and 8. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM control IAM-01: Identity and Access Management Policy and Procedures (Identity & Access Management)
Control specification: Establish, document, approve, communicate, implement, apply, evaluate and maintain policies and procedures for identity and access management. Review and update the policies and procedures at least annually.
IAM-01.1: Are identity and access management policies and procedures established, documented, approved, communicated, implemented, applied, evaluated and maintained?
Answer: Yes (CSP-owned). In alignment with ISO 27001, AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities and management commitment. Access control procedures are systematically enforced through proprietary tools. Refer to ISO 27001 Annex A, domain 9 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
IAM-01.2: Are identity and access management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.

CCM control IAM-02: Strong Password Policy and Procedures (Identity & Access Management)
Control specification: Establish, document, approve, communicate, implement, apply, evaluate and maintain strong password policies and procedures. Review and update the policies and procedures at least annually.
IAM-02.1: Are strong password policies and procedures established, documented, approved, communicated, implemented, applied, evaluated and maintained?
Answer: Yes (CSP-owned). AWS internal password policies and guidelines outline requirements for password strength and handling for passwords used to access internal systems. AWS Identity and Access Management (IAM) enables customers to securely control access to AWS services and resources for their users. Additional information about IAM can be found at https://aws.amazon.com/iam/. The AWS SOC reports provide details on the specific control activities executed by AWS.
IAM-02.2: Are strong password policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.
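On the customer side of IAM-02, IAM lets an account owner enforce password requirements for its own IAM users. The sketch below is illustrative only; the thresholds are assumptions, not AWS-mandated values.

```python
import boto3

iam = boto3.client("iam")

# Example account password policy for IAM users; tune the values to your
# own policy rather than treating these as recommended settings.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=24,
    AllowUsersToChangePassword=True,
)
```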
CCM control IAM-03: Identity Inventory (Identity & Access Management)
Control specification: Manage, store and review the information of system identities and level of access.
IAM-03.1: Is system identity information and levels of access managed, stored and reviewed?
Answer: Yes (shared CSP and CSC). Amazon personnel with a business need to access the management plane are required to first use multi-factor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. CSC responsibilities: AWS customers are responsible for access management within their AWS environments.

CCM control IAM-04: Separation of Duties (Identity & Access Management)
Control specification: Employ the separation of duties principle when implementing information system access.
IAM-04.1: Is the separation of duties principle employed when implementing information system access?
Answer: Yes (shared CSP and CSC). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. CSC responsibilities: Customers retain the ability to manage separation of duties for their AWS resources. AWS best practices for Identity & Access Management can be found at https://docs.aws.amazon.com/IAM/ (search for AWS best practices for Identity & Access Management).

CCM control IAM-05: Least Privilege (Identity & Access Management)
Control specification: Employ the least privilege principle when implementing information system access.
IAM-05.1: Is the least privilege principle employed when implementing information system access?
Answer: Yes (CSP-owned). See the response to question IAM-04.1.

CCM control IAM-06: User Access Provisioning (Identity & Access Management)
Control specification: Define and implement a user access provisioning process which authorizes, records and communicates access changes to data and assets.
IAM-06.1: Is a user access provisioning process defined and implemented which authorizes, records and communicates data and asset access changes?
Answer: Yes (CSP-owned). In alignment with ISO 27001, AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities and management commitment. Access control procedures are systematically enforced through proprietary tools. Refer to ISO 27001 Annex A, domain 9 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
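For the customer share of IAM-04 and IAM-05, least privilege and separation of duties are usually expressed as narrowly scoped IAM policies. A hedged sketch, with a hypothetical bucket and policy name:

```python
import json
import boto3

iam = boto3.client("iam")

# A narrowly scoped, least-privilege policy: read-only access to a single
# hypothetical bucket prefix, rather than broad s3:* permissions.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/reports/*",
        }
    ],
}

iam.create_policy(
    PolicyName="ReadOnlyReportsAccess",
    PolicyDocument=json.dumps(policy_document),
)
```

Attaching this policy to an analyst group, while reserving write and administrative permissions for a separate role, is one simple way to keep duties separated.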
CCM control IAM-07: User Access Changes and Revocation (Identity & Access Management)
Control specification: De-provision or respectively modify access of movers/leavers or system identity changes in a timely manner in order to effectively adopt and communicate identity and access management policies.
IAM-07.1: Is a process in place to de-provision or modify, in a timely manner, the access of movers/leavers or system identity changes, to effectively adopt and communicate identity and access management policies?
Answer: Yes (CSP-owned). Access privilege reviews are triggered upon job and/or role transfers initiated from the HR system. IT access privileges are reviewed on a quarterly basis by appropriate personnel on a regular cadence. IT access to AWS systems is terminated within 24 hours of termination or deactivation. The AWS SOC reports provide further details on user access revocation. In addition, the AWS Security Whitepaper section "AWS Access" provides additional information. Refer to ISO 27001 Annex A, domain 9 for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.

CCM control IAM-08: User Access Review (Identity & Access Management)
Control specification: Review and revalidate user access for least privilege and separation of duties with a frequency that is commensurate with organizational risk tolerance.
IAM-08.1: Are reviews and revalidation of user access for least privilege and separation of duties completed with a frequency commensurate with organizational risk tolerance?
Answer: Yes (CSP-owned). See the response to question IAM-07.1.
CCM control IAM-09: Segregation of Privileged Access Roles (Identity & Access Management)
Control specification: Define, implement and evaluate processes, procedures and technical measures for the segregation of privileged access roles such that administrative access to data, encryption and key management capabilities and logging capabilities are distinct and separated.
IAM-09.1: Are processes, procedures and technical measures for the segregation of privileged access roles defined, implemented and evaluated such that administrative data access, encryption, key management capabilities and logging capabilities are distinct and separate?
Answer: Yes (CSP-owned). AWS has a formal access control policy that is reviewed and updated on an annual basis (or when any major change to the system occurs that impacts the policy). The policy addresses purpose, scope, roles, responsibilities and management commitment. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. All access from remote devices to the AWS corporate environment is managed via VPN and MFA. The AWS production network is separated from the corporate network by multiple layers of security, documented in various control documents discussed in other sections of this response. Customers retain the control and responsibility of their data and associated media assets. It is the responsibility of the customer to manage mobile security devices and the access to the customer's content.

CCM control IAM-10: Management of Privileged Access Roles (Identity & Access Management)
Control specification: Define and implement an access process to ensure privileged access roles and rights are granted for a time-limited period, and implement procedures to prevent the culmination of segregated privileged access.
IAM-10.1: Is an access process defined and implemented to ensure privileged access roles and rights are granted for a limited period?
Answer: Yes (CSP-owned). Amazon personnel with a business need to access the management plane are required to first use multi-factor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. Refer to the SOC 2 report for additional details.
IAM-10.2: Are procedures implemented to prevent the culmination of segregated privileged access?
Answer: Yes (CSP-owned). Access to AWS systems is allocated based on least privilege and approved by an authorized individual prior to access provisioning. Duties and areas of responsibility (for example, access request and approval, change management request and approval, change development, testing and deployment, etc.) are segregated across different individuals to reduce opportunities for unauthorized or unintentional modification or misuse of AWS systems. Group or shared accounts are not permitted within the system boundary.

CCM control IAM-11: CSCs Approval for Agreed Privileged Access Roles (Identity & Access Management)
Control specification: Define, implement and evaluate processes and procedures for customers to participate, where applicable, in the granting of access for agreed, high risk (as defined by the organizational risk assessment) privileged access roles.
IAM-11.1: Are processes and procedures for customers to participate, where applicable, in granting access for agreed, high risk (as defined by the organizational risk assessment) privileged access roles defined, implemented and evaluated?
Answer: No.

CCM control IAM-12: Safeguard Logs Integrity (Identity & Access Management)
Control specification: Define, implement and evaluate processes, procedures and technical measures to ensure the logging infrastructure is read-only for all with write access, including privileged access roles, and that the ability to disable it is controlled through a procedure that ensures the segregation of duties and break glass procedures.
IAM-12.1: Are processes, procedures and technical measures to ensure the logging infrastructure is "read-only" for all with write access (including privileged access roles) defined, implemented and evaluated?
Answer: Yes (CSP-owned). AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure the auditing features to continuously record the security-related events in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the ensuing need for log storage grows. Audit records contain a set of data elements in order to support necessary analysis requirements. In addition, audit records are available for the AWS Security team or other appropriate teams to perform inspection or analysis on demand and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure. Audit processing failures include, for example, software/hardware errors. When alerted, on-call personnel issue a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors for our continued compliance with SOC, PCI DSS and ISO 27001.
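IAM-12 describes AWS's own logging infrastructure. On the customer side of the shared responsibility model, a commonly used analogue (not mandated by this questionnaire) is AWS CloudTrail, which is also mentioned under DSP-19.1. A minimal sketch, assuming a pre-existing log bucket whose bucket policy already permits CloudTrail delivery; all names are hypothetical.

```python
import boto3

# Hypothetical names; the target bucket must already exist and carry a
# bucket policy that allows CloudTrail to write to it.
TRAIL_NAME = "example-org-trail"
LOG_BUCKET = "example-cloudtrail-logs"

cloudtrail = boto3.client("cloudtrail")

# Record management API activity across all regions and validate log file
# integrity so delivered logs can later be checked for tampering.
cloudtrail.create_trail(
    Name=TRAIL_NAME,
    S3BucketName=LOG_BUCKET,
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name=TRAIL_NAME)
```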
IAM-12.2: Is the ability to disable the "read-only" configuration of logging infrastructure controlled through a procedure that ensures the segregation of duties and break glass procedures?
Answer: Yes (CSP-owned). See the response to question IAM-12.1.

CCM control IAM-13: Uniquely Identifiable Users (Identity & Access Management)
Control specification: Define, implement and evaluate processes, procedures and technical measures that ensure users are identifiable through unique IDs, or which can associate individuals to the usage of user IDs.
IAM-13.1: Are processes, procedures and technical measures that ensure users are identifiable through unique identification (or can associate individuals with user identification usage) defined, implemented and evaluated?
Answer: Yes (CSP-owned). AWS controls access to systems through authentication that requires a unique user ID and password. AWS systems do not allow actions to be performed on the information system without identification or authentication. User access privileges are restricted based on business need and job responsibilities. AWS employs the concept of least privilege, allowing only the necessary access for users to accomplish their job function. New user accounts are created to have minimal access. User access to AWS systems (for example, network, applications, tools, etc.) requires documented approval from the authorized personnel (for example, the user's manager and/or system owner) and validation of the active user in the HR system. Refer to the SOC 2 report for additional details.
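The customer-side counterpart to IAM-13 is to give every individual a unique identity rather than shared credentials. A hedged sketch with hypothetical user and group names; many organizations would use federated identities instead of long-lived IAM users.

```python
import boto3

iam = boto3.client("iam")

# Hypothetical identifiers; the group is assumed to already exist and carry
# the (least-privilege) permissions.
USER_NAME = "jane.doe"
GROUP_NAME = "read-only-analysts"

# One named user per person keeps API activity attributable to an individual.
iam.create_user(UserName=USER_NAME)

# Group membership carries the permissions, so the user itself stays minimal.
iam.add_user_to_group(GroupName=GROUP_NAME, UserName=USER_NAME)
```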
CCM control IAM-14: Strong Authentication (Identity & Access Management)
Control specification: Define, implement and evaluate processes, procedures and technical measures for authenticating access to systems, application and data assets, including multifactor authentication for at least privileged user and sensitive data access. Adopt digital certificates or alternatives which achieve an equivalent level of security for system identities.
IAM-14.1: Are processes, procedures and technical measures for authenticating access to systems, application and data assets, including multifactor authentication for at least privileged user and sensitive data access, defined, implemented and evaluated?
Answer: Yes (shared CSP and CSC). Amazon personnel with a business need to access the management plane are required to first use multi-factor authentication, distinct from their normal corporate Amazon credentials, to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured and hardened to protect the management plane. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems are revoked. Refer to the SOC 2 report for additional details.
IAM-14.2: Are digital certificates or alternatives that achieve an equivalent security level for system identities adopted?
Answer: Yes (CSP-owned). AWS Identity, Directory and Access Services enable you to add multi-factor authentication (MFA) to your applications.

CCM control IAM-15: Passwords Management (Identity & Access Management)
Control specification: Define, implement and evaluate processes, procedures and technical measures for the secure management of passwords.
IAM-15.1: Are processes, procedures and technical measures for the secure management of passwords defined, implemented and evaluated?
Answer: Yes (CSP-owned). AWS Identity and Access Management (IAM) enables customers to securely control access to AWS services and resources for their users. Additional information about IAM can be found at https://aws.amazon.com/iam/. The AWS SOC reports provide details on the specific control activities executed by AWS.

CCM control IAM-16: Authorization Mechanisms (Identity & Access Management)
Control specification: Define, implement and evaluate processes, procedures and technical measures to verify access to data and system functions is authorized.
IAM-16.1: Are processes, procedures and technical measures to verify that access to data and system functions is authorized defined, implemented and evaluated?
Answer: Yes (shared CSP and CSC). Controls are in place to limit access to systems and data and to provide that access to systems or data is restricted and monitored. In addition, customer data and server instances are logically isolated from other customers by default. Privileged user access controls are reviewed by an independent auditor during the AWS SOC, ISO 27001 and PCI audits. CSC responsibilities: AWS customers retain control and ownership of their data. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored, how it is used, and how it is protected from disclosure.
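For the customer share of strong authentication and authorization (IAM-14 through IAM-16), one hedged option is an IAM policy that denies sensitive actions when the request was not MFA-authenticated. The actions, resource and policy name below are assumptions for illustration only.

```python
import json
import boto3

iam = boto3.client("iam")

# Deny selected sensitive actions unless the caller authenticated with MFA.
# The bucket name is a placeholder for whatever you consider sensitive.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["s3:DeleteObject", "s3:PutBucketPolicy"],
            "Resource": "arn:aws:s3:::example-sensitive-data*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="DenySensitiveActionsWithoutMFA",
    PolicyDocument=json.dumps(mfa_policy),
)
```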
CCM control IPY-01: Interoperability and Portability Policy and Procedures (Interoperability & Portability)
Control specification: Establish, document, approve, communicate, apply, evaluate and maintain policies and procedures for interoperability and portability, including requirements for: a. Communications between application interfaces; b. Information processing interoperability; c. Application development portability; d. Information/Data exchange, usage, portability, integrity and persistence. Review and update the policies and procedures at least annually.
IPY-01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated and maintained for communications between application services (e.g., APIs)?
Answer: Yes (CSP-owned). Details regarding AWS APIs can be found on the AWS website at https://aws.amazon.com/documentation/
IPY-01.2: Are policies and procedures established, documented, approved, communicated, applied, evaluated and maintained for information processing interoperability?
Answer: Yes (CSP-owned). Details regarding the interoperability of each AWS service can be found on the AWS website at https://aws.amazon.com/documentation/
IPY-01.3: Are policies and procedures established, documented, approved, communicated, applied, evaluated and maintained for application development portability?
Answer: Yes (CSP-owned). Details regarding the interoperability of each AWS service can be found on the AWS website at https://aws.amazon.com/documentation/
IPY-01.4: Are policies and procedures established, documented, approved, communicated, applied, evaluated and maintained for information/data exchange, usage, portability, integrity and persistence?
Answer: Yes (CSP-owned). Details regarding the interoperability of each AWS service can be found on the AWS website at https://aws.amazon.com/documentation/
IPY-01.5: Are interoperability and portability policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis.

CCM control IPY-02: Application Interface Availability (Interoperability & Portability)
Control specification: Provide application interface(s) to CSCs so that they can programmatically retrieve their data to enable interoperability and portability.
IPY-02.1: Are CSCs able to programmatically retrieve their data via an application interface(s) to enable interoperability and portability?
Answer: Yes (CSC-owned). Details regarding the interoperability of each AWS service can be found on the AWS website at https://aws.amazon.com/documentation/
IPY-03: Secure Interoperability and Portability Management (Interoperability & Portability)
Control specification: Implement cryptographically secure and standardized network protocols for the management, import and export of data.
IPY-03.1: Are cryptographically secure and standardized network protocols implemented for the management, import and export of data?
Answer: Yes (CSP-owned). AWS APIs and the AWS Management Console are available via TLS-protected endpoints, which provide server authentication. Customers can use TLS for all of their interactions with AWS. AWS recommends that customers use secure protocols that offer authentication and confidentiality, such as TLS or IPsec, to reduce the risk of data tampering or loss. AWS enables customers to open a secure, encrypted session to AWS servers using HTTPS (Transport Layer Security [TLS]).
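IPY-02.1 and IPY-03.1 together say that customers can programmatically retrieve their data over cryptographically secure endpoints. A minimal sketch, assuming a hypothetical bucket and prefix; the SDK uses the HTTPS (TLS) S3 endpoint by default.

```python
import boto3

# Hypothetical bucket and prefix; substitute your own.
BUCKET = "example-exported-data"
PREFIX = "exports/2023/"

# boto3 talks to the TLS-protected S3 endpoint by default, so the export
# below is both programmatic (portable) and encrypted in transit.
s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        # Mirror each object locally, flattening the key into a file name.
        local_path = obj["Key"].replace("/", "_")
        s3.download_file(BUCKET, obj["Key"], local_path)
```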
IPY-04.1: Do agreements include provisions specifying CSC data access upon contract termination, covering the following?
a. Data format
b. Duration the data will be stored
c. Scope of the data retained and made available to the CSCs
d. Data deletion policy
Answer: Yes (shared: CSP and CSC). AWS customer agreements include data-related provisions that apply upon termination. Details regarding contract termination can be found in the example customer agreement (see Section 7, "Term; Termination"): https://aws.amazon.com/agreement/
CCM mapping: IPY-04, Data Portability Contractual Obligations (Interoperability & Portability). Specification: Agreements must include provisions specifying CSCs' access to data upon contract termination, and will include: (a) data format; (b) length of time the data will be stored; (c) scope of the data retained and made available to the CSCs; and (d) data deletion policy.

IVS-01.1: Are infrastructure and virtualization security policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM mapping: IVS-01, Infrastructure and Virtualization Security Policy and Procedures (Infrastructure & Virtualization Security). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for infrastructure and virtualization security. Review and update the policies and procedures at least annually.

IVS-01.2: Are infrastructure and virtualization security policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: IVS-01, Infrastructure and Virtualization Security Policy and Procedures (Infrastructure & Virtualization Security); specification as above.

IVS-02.1: Is resource availability, quality, and capacity planned and monitored in a way that delivers required system performance as determined by the business?
Answer: Yes (shared: CSP and CSC). AWS maintains a capacity planning model to assess infrastructure usage and demands at least monthly, and usually more frequently (e.g., weekly). The capacity planning model also supports the planning of future demands, so additional resources can be acquired and implemented based upon current resources and forecasted requirements.
CCM mapping: IVS-02, Capacity and Resource Planning (Infrastructure & Virtualization Security). Specification: Plan and monitor the availability, quality, and adequate capacity of resources in order to deliver the required system performance as determined by the business.
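On the customer side of the shared capacity-planning responsibility in IVS-02.1, utilization trends can be pulled from Amazon CloudWatch as one input to a capacity review. The sketch below is a hypothetical example using boto3; the instance ID, look-back window, and period are assumptions.

# Minimal sketch: review recent average CPU utilization for one EC2 instance
# as an input to capacity planning. The instance ID and window are illustrative.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str, days: int = 14):
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=start,
        EndTime=end,
        Period=3600,           # hourly datapoints
        Statistics=["Average"],
    )
    # Sort chronologically so the trend is easy to chart or eyeball.
    return sorted(stats["Datapoints"], key=lambda d: d["Timestamp"])

if __name__ == "__main__":
    for point in average_cpu("i-0123456789abcdef0"):   # hypothetical instance ID
        print(point["Timestamp"], round(point["Average"], 1))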
IVS-03.1: Are communications between environments monitored?
Answer: Yes (shared: CSP and CSC). Monitoring and alarming are configured by service owners to identify and notify operational and management personnel of incidents when early-warning thresholds are crossed on key operational metrics.
CCM mapping: IVS-03, Network Security (Infrastructure & Virtualization Security). Specification: Monitor, encrypt, and restrict communications between environments to only authenticated and authorized connections, as justified by the business. Review these configurations at least annually, and support them by a documented justification of all allowed services, protocols, ports, and compensating controls.

IVS-03.2: Are communications between environments encrypted?
Answer: NA (CSC-owned). AWS APIs are available via TLS-protected endpoints, which provide server authentication. Customers can use TLS for all of their interactions with AWS and between their multiple environments. AWS provides open encryption methodologies and enables customers to encrypt and authenticate all traffic and to enforce the latest standards and ciphers.
CCM mapping: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.

IVS-03.3: Are communications between environments restricted to only authenticated and authorized connections, as justified by the business?
Answer: Yes (shared: CSP and CSC). AWS implements least privilege throughout its infrastructure components. AWS prohibits all ports and protocols that do not have a specific business purpose, and follows a rigorous approach of implementing only those features and functions that are essential to use of the device. Network scanning is performed, and any unnecessary ports or protocols in use are corrected.
CSC responsibilities: Customers maintain information related to their data and individual architecture. Customers retain control of, and responsibility for, their data and associated media assets, and it is the responsibility of the customer to manage their AWS environments and associated access.
CCM mapping: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.
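On the customer side of IVS-03.3, restricting communications to authenticated and authorized connections is commonly expressed as least-privilege security group rules with a recorded business justification. The sketch below is a hypothetical boto3 example; the security group ID, port, and CIDR range are assumptions.

# Minimal sketch: allow only HTTPS from an approved network range into an
# application security group. The group ID and CIDR are hypothetical.
import boto3

ec2 = boto3.client("ec2")

APP_SECURITY_GROUP = "sg-0123456789abcdef0"   # hypothetical group ID
APPROVED_CIDR = "203.0.113.0/24"              # documentation/test address range

ec2.authorize_security_group_ingress(
    GroupId=APP_SECURITY_GROUP,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {
                    "CidrIp": APPROVED_CIDR,
                    "Description": "TLS only, from approved network; business justification recorded",
                }
            ],
        }
    ],
)

Re-running the call against an identical existing rule raises a duplicate-permission error, so configuration tooling would normally check current rules first.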
IVS-03.4: Are network configurations reviewed at least annually?
Answer: Yes (shared: CSP and CSC). Regular internal and external vulnerability scans are performed on the host operating system, web applications, and databases in the AWS environment using a variety of tools. Vulnerability scanning and remediation practices are regularly reviewed as part of AWS's continued compliance with PCI DSS and ISO 27001.
CSC responsibilities: AWS customers are responsible for configuration management within their AWS environments.
CCM mapping: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.

IVS-03.5: Are network configurations supported by the documented justification of all allowed services, protocols, ports, and compensating controls?
Answer: Yes (shared: CSP and CSC). AWS implements least privilege throughout its infrastructure components. AWS prohibits all ports and protocols that do not have a specific business purpose, and implements only those features and functions that are essential to use of the device. Network scanning is performed, and any unnecessary ports or protocols in use are corrected.
CSC responsibilities: Customers maintain information related to their data and individual architecture. AWS customers are responsible for network management within their AWS environments.
CCM mapping: IVS-03, Network Security (Infrastructure & Virtualization Security); specification as above.

IVS-04.1: Is every host and guest OS, hypervisor, or infrastructure control plane hardened (according to their respective best practices) and supported by technical controls as part of a security baseline?
Answer: Yes (shared: CSP and CSC). Regular internal and external vulnerability scans are performed on the host operating system, web applications, and databases in the AWS environment using a variety of tools. Vulnerability scanning and remediation practices are regularly reviewed as part of AWS's continued compliance with PCI DSS and ISO 27001.
CSC responsibilities: AWS customers are responsible for server and system management within their AWS environments.
CCM mapping: IVS-04, OS Hardening and Base Controls (Infrastructure & Virtualization Security). Specification: Harden host and guest OS, hypervisor, or infrastructure control plane according to their respective best practices, supported by technical controls, as part of a security baseline.
IVS-05.1: Are production and non-production environments separated?
Answer: Yes (CSP-owned). The development, test, and production environments emulate the production system environment and are used to properly assess and prepare for the impact of a change to the production system environment. To reduce the risks of unauthorized access or change to the production environment, the development, test, and production environments are logically separated.
CCM mapping: IVS-05, Production and Non-Production Environments (Infrastructure & Virtualization Security). Specification: Separate production and non-production environments.

IVS-06.1: Are applications and infrastructures designed, developed, deployed, and configured such that CSP and CSC (tenant) user access and intra-tenant access is appropriately segmented, segregated, monitored, and restricted from other tenants?
Answer: Yes (CSP-owned). Customer environments are logically segregated to prevent users and customers from accessing resources not assigned to them. Customers maintain full control over who has access to their data. Services that provide virtualized operational environments to customers (i.e., Amazon EC2) ensure that customers are segregated from one another and prevent cross-tenant privilege escalation and information disclosure via hypervisors and instance isolation. Different instances running on the same physical machine are isolated from each other via the hypervisor. In addition, the Amazon EC2 firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer; thus, an instance's neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical random-access memory (RAM) is separated using similar mechanisms.
CCM mapping: IVS-06, Segmentation and Segregation (Infrastructure & Virtualization Security). Specification: Design, develop, deploy, and configure applications and infrastructures such that CSP and CSC (tenant) user access and intra-tenant access is appropriately segmented, segregated, monitored, and restricted from other tenants.

IVS-07.1: Are secure and encrypted communication channels, including only up-to-date and approved protocols, used when migrating servers, services, applications, or data to cloud environments?
Answer: Yes (CSC-owned). AWS offers a wide variety of services and partner tools to help customers migrate data securely. AWS migration services such as AWS Database Migration Service and AWS Snowmobile are integrated with AWS KMS for encryption. Learn more about AWS cloud migration services at https://aws.amazon.com/cloud-data-migration/
CCM mapping: IVS-07, Migration to Cloud Environments (Infrastructure & Virtualization Security). Specification: Use secure and encrypted communication channels when migrating servers, services, applications, or data to cloud environments. Such channels must include only up-to-date and approved protocols.
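As a narrower, hypothetical illustration of the encrypted migration channels referenced in IVS-07.1 (this is not the Database Migration Service or Snowmobile API, just a generic pattern), the sketch below uploads a file to Amazon S3 over the default TLS endpoint and encrypts it at rest with a customer-managed KMS key; the bucket name, key alias, and file names are assumptions.

# Minimal sketch: move a file into S3 over a TLS endpoint and encrypt it at
# rest with a customer-managed KMS key. All names here are hypothetical.
import boto3

s3 = boto3.client("s3")  # HTTPS (TLS) endpoint by default

MIGRATION_BUCKET = "example-migration-landing"   # hypothetical bucket
KMS_KEY_ALIAS = "alias/example-migration-key"    # hypothetical key alias

with open("legacy-export.tar.gz", "rb") as data:
    s3.put_object(
        Bucket=MIGRATION_BUCKET,
        Key="batch-001/legacy-export.tar.gz",
        Body=data,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=KMS_KEY_ALIAS,
    )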
IVS-08.1: Are high-risk environments identified and documented?
Answer: NA (CSC-owned). AWS customers retain responsibility for managing their own network segmentation in adherence with their defined requirements. Internally, AWS network segmentation is aligned with the ISO 27001 standard, and AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: IVS-08, Network Architecture Documentation (Infrastructure & Virtualization Security). Specification: Identify and document high-risk environments.

IVS-09.1: Are processes, procedures, and defense-in-depth techniques defined, implemented, and evaluated for protection, detection, and timely response to network-based attacks?
Answer: Yes (CSP-owned). AWS Security regularly scans all Internet-facing service endpoint IP addresses for vulnerabilities (these scans do not include customer instances) and notifies the appropriate parties to remediate any identified vulnerabilities. In addition, external vulnerability threat assessments are performed regularly by independent security firms, and the findings and recommendations resulting from these assessments are categorized and delivered to AWS leadership. The AWS control environment is also subject to regular internal and external risk assessments. AWS engages with external certifying bodies and independent auditors to review and test the AWS overall control environment, and AWS security controls are reviewed by independent external auditors during audits for our SOC, PCI DSS, and ISO 27001 compliance.
CCM mapping: IVS-09, Network Defense (Infrastructure & Virtualization Security). Specification: Define, implement, and evaluate processes, procedures, and defense-in-depth techniques for protection, detection, and timely response to network-based attacks.

LOG-01.1: Are logging and monitoring policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees.
CCM mapping: LOG-01, Logging and Monitoring Policy and Procedures (Logging and Monitoring). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for logging and monitoring. Review and update the policies and procedures at least annually.

LOG-01.2: Are policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: LOG-01, Logging and Monitoring Policy and Procedures (Logging and Monitoring); specification as above.
LOG-02.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to ensure audit log security and retention?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM mapping: LOG-02, Audit Logs Protection (Logging and Monitoring). Specification: Define, implement, and evaluate processes, procedures, and technical measures to ensure the security and retention of audit logs.

LOG-03.1: Are security-related events identified and monitored within applications and the underlying infrastructure?
Answer: NA (CSC-owned). This is a customer responsibility; AWS customers are responsible for the applications within their AWS environment.
CCM mapping: LOG-03, Security Monitoring and Alerting (Logging and Monitoring). Specification: Identify and monitor security-related events within applications and the underlying infrastructure. Define and implement a system to generate alerts to responsible stakeholders based on such events and corresponding metrics.

LOG-03.2: Is a system defined and implemented to generate alerts to responsible stakeholders based on security events and their corresponding metrics?
Answer: Yes (shared: CSP and CSC). AWS security metrics are monitored and analyzed in accordance with the ISO 27001 standard; refer to ISO 27001 Annex A, domain 16, for further details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CSC responsibilities: AWS customers are responsible for incident management within their AWS environments (see the alerting sketch after LOG-04.1 below).
CCM mapping: LOG-03, Security Monitoring and Alerting (Logging and Monitoring); specification as above.

LOG-04.1: Is access to audit logs restricted to authorized personnel, and are records maintained to provide unique access accountability?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM mapping: LOG-04, Audit Logs Access and Accountability (Logging and Monitoring). Specification: Restrict audit logs access to authorized personnel and maintain records that provide unique access accountability.
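Because alerting inside the customer's own environment is CSC-owned under LOG-03.x, one common customer-side pattern is a CloudWatch Logs metric filter over CloudTrail data plus an alarm that notifies an SNS topic. The sketch below is a hypothetical example and assumes CloudTrail already delivers events to the named log group; the log group name, metric namespace, and topic ARN are assumptions.

# Minimal sketch: alert responsible stakeholders when unauthorized API calls
# appear in a CloudTrail log group. Names and ARNs are hypothetical.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "example-cloudtrail-log-group"                        # hypothetical
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:sec-alerts"   # hypothetical

# 1) Turn matching CloudTrail events into a custom metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnauthorizedAPICalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[
        {
            "metricName": "UnauthorizedAPICalls",
            "metricNamespace": "Example/Security",
            "metricValue": "1",
        }
    ],
)

# 2) Alarm on the metric and notify the security distribution list via SNS.
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="Example/Security",
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)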
LOG-05.1: Are security audit logs monitored to detect activity outside of typical or expected patterns?
Answer: Yes (CSP-owned). AWS provides near-real-time alerts when AWS monitoring tools show indications of compromise or potential compromise, based upon threshold alarming mechanisms determined by AWS service and security teams. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. Upon assessment and discovery of risk, Amazon disables accounts that display atypical usage matching the characteristics of bad actors. The AWS Security team extracts all log messages related to system access and provides reports to designated officials. Log analysis is performed to identify events based on defined risk management parameters.
CCM mapping: LOG-05, Audit Logs Monitoring and Response (Logging and Monitoring). Specification: Monitor security audit logs to detect activity outside of typical or expected patterns. Establish and follow a defined process to review and take appropriate and timely actions on detected anomalies.

LOG-05.2: Is a process established and followed to review and take appropriate and timely actions on detected anomalies?
Answer: Yes (CSP-owned). See response to LOG-05.1.
CCM mapping: LOG-05, Audit Logs Monitoring and Response (Logging and Monitoring); specification as above.

LOG-06.1: Is a reliable time source being used across all relevant information processing systems?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, AWS information systems utilize internal system clocks synchronized via NTP (Network Time Protocol). AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: LOG-06, Clock Synchronization (Logging and Monitoring). Specification: Use a reliable time source across all relevant information processing systems.
LOG-07.1: Are logging requirements for information meta/data system events established, documented, and implemented?
Answer: Yes (CSP-owned). AWS has identified auditable event categories across systems and devices within the AWS system. Service teams configure the auditing features to continuously record security-related events in accordance with requirements. The log storage system is designed to provide a highly scalable, highly available service that automatically increases capacity as the need for log storage grows. Audit records contain a set of data elements to support the necessary analysis requirements, and are available for the AWS Security team or other appropriate teams to perform inspection or analysis on demand and in response to security-related or business-impacting events. Designated personnel on AWS teams receive automated alerts in the event of an audit processing failure (for example, software or hardware errors); when alerted, on-call personnel issue a trouble ticket and track the event until it is resolved. AWS logging and monitoring processes are reviewed by independent third-party auditors as part of our continued SOC, PCI DSS, and ISO 27001 compliance.
CCM mapping: LOG-07, Logging Scope (Logging and Monitoring). Specification: Establish, document, and implement which information meta/data system events should be logged. Review and update the scope at least annually or whenever there is a change in the threat environment.

LOG-07.2: Is the scope reviewed and updated at least annually, or whenever there is a change in the threat environment?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: LOG-07, Logging Scope (Logging and Monitoring); specification as above.

LOG-08.1: Are audit records generated, and do they contain relevant security information?
Answer: Yes (CSP-owned). See response to LOG-07.1: AWS has identified auditable event categories across systems and devices, service teams configure auditing features to continuously record security-related events, and audit records contain the data elements needed to support analysis and are available to the AWS Security team and other appropriate teams on demand.
CCM mapping: LOG-08, Log Records (Logging and Monitoring). Specification: Generate audit records containing relevant security information.
LOG-09.1: Does the information system protect audit records from unauthorized access, modification, and deletion?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM mapping: LOG-09, Log Protection (Logging and Monitoring). Specification: The information system protects audit records from unauthorized access, modification, and deletion.

LOG-10.1: Are monitoring and internal reporting capabilities established to report on cryptographic operations, encryption, and key management policies, processes, procedures, and controls?
Answer: Yes (shared: CSP and CSC). See response to LOG-07.1 for how AWS records, stores, protects, and reviews audit events covering these operations; AWS logging and monitoring processes are reviewed by independent third-party auditors as part of our continued SOC, PCI DSS, and ISO 27001 compliance.
CSC responsibilities: AWS customers are responsible for key management within their AWS environments.
CCM mapping: LOG-10, Encryption Monitoring and Reporting (Logging and Monitoring). Specification: Establish and maintain a monitoring and internal reporting capability over the operations of cryptographic, encryption, and key management policies, processes, procedures, and controls.

LOG-11.1: Are key lifecycle management events logged and monitored to enable auditing and reporting on cryptographic keys' usage?
Answer: NA (CSC-owned). This is a customer responsibility.
CCM mapping: LOG-11, Transaction/Activity Logging (Logging and Monitoring). Specification: Log and monitor key lifecycle management events to enable auditing and reporting on usage of cryptographic keys.
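Since logging and monitoring of key lifecycle events is CSC-owned (LOG-11.1) and key management is a customer responsibility under LOG-10.1, a customer might, for example, confirm automatic rotation on a customer-managed KMS key and pull recent KMS API activity recorded by CloudTrail into key-lifecycle reporting. The sketch below is hypothetical; the key ID is an assumption.

# Minimal sketch: check key rotation and list recent KMS API events recorded
# by CloudTrail for audit and reporting purposes. The key ID is hypothetical.
import boto3

kms = boto3.client("kms")
cloudtrail = boto3.client("cloudtrail")

KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical customer-managed key

# Enable and verify annual automatic rotation for the customer-managed key.
kms.enable_key_rotation(KeyId=KEY_ID)
status = kms.get_key_rotation_status(KeyId=KEY_ID)
print("Rotation enabled:", status["KeyRotationEnabled"])

# Recent KMS management events (CreateKey, ScheduleKeyDeletion, and so on) as
# recorded by CloudTrail, for inclusion in key-lifecycle reporting.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource", "AttributeValue": "kms.amazonaws.com"}],
    MaxResults=20,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"])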
LOG-12.1: Is physical access logged and monitored using an auditable access control system?
Answer: Yes (CSP-owned). Access to data centers is logged, and only authorized users are allowed into data centers. Visitors follow the visitor access process, and their relevant details, along with the business purpose, are logged in the data center access log system. The access log is retained for 90 days unless longer retention is legally required.
CCM mapping: LOG-12, Access Control Logs (Logging and Monitoring). Specification: Monitor and log physical access using an auditable access control system.

LOG-13.1: Are processes and technical measures for reporting monitoring-system anomalies and failures defined, implemented, and evaluated?
Answer: Yes (CSP-owned). In alignment with ISO 27001 standards, audit logs are appropriately restricted and monitored. AWS SOC reports provide details on the specific control activities executed by AWS. Refer to the AWS Overview of Security Processes whitepaper for additional details, available at http://aws.amazon.com/security/security-learning/
CCM mapping: LOG-13, Failures and Anomalies Reporting (Logging and Monitoring). Specification: Define, implement, and evaluate processes, procedures, and technical measures for the reporting of anomalies and failures of the monitoring system, and provide immediate notification to the accountable party.

LOG-13.2: Are accountable parties immediately notified about anomalies and failures?
Answer: Yes (CSP-owned). See response to LOG-05.1: AWS provides near-real-time alerts when AWS monitoring tools show indications of compromise or potential compromise, correlates information from logical and physical monitoring systems, and performs log analysis against defined risk management parameters, with reports provided to designated officials.
CCM mapping: LOG-13, Failures and Anomalies Reporting (Logging and Monitoring); specification as above.
SEF-01.1: Are policies and procedures for security incident management, e-discovery, and cloud forensics established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS's incident response program, plans, and procedures have been developed in alignment with the ISO 27001 standard, and AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. In addition, the AWS Overview of Security Processes whitepaper provides further details, available at http://aws.amazon.com/security/security-learning/
CCM mapping: SEF-01, Security Incident Management Policy and Procedures (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for security incident management, e-discovery, and cloud forensics. Review and update the policies and procedures at least annually.

SEF-01.2: Are policies and procedures reviewed and updated annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: SEF-01, Security Incident Management Policy and Procedures (Security Incident Management, E-Discovery & Cloud Forensics); specification as above.

SEF-02.1: Are policies and procedures for the timely management of security incidents established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). See response to SEF-01.1.
CCM mapping: SEF-02, Service Management Policy and Procedures (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the timely management of security incidents. Review and update the policies and procedures at least annually.

SEF-02.2: Are policies and procedures for the timely management of security incidents reviewed and updated at least annually?
Answer: Yes (CSP-owned). See response to SEF-01.2.
CCM mapping: SEF-02, Service Management Policy and Procedures (Security Incident Management, E-Discovery & Cloud Forensics); specification as above.

SEF-03.1: Is a security incident response plan that includes relevant internal departments, impacted CSCs, and other business-critical relationships (such as supply chain) established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). See response to SEF-01.1.
CCM mapping: SEF-03, Incident Response Plans (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain a security incident response plan, which includes but is not limited to relevant internal departments, impacted CSCs, and other business-critical relationships (such as supply chain) that may be impacted.
SEF-04.1: Is the security incident response plan tested and updated for effectiveness, as necessary, at planned intervals or upon significant organizational or environmental changes?
Answer: Yes (CSP-owned). AWS incident response plans are tested at least annually.
CCM mapping: SEF-04, Incident Response Testing (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Test and update as necessary incident response plans at planned intervals, or upon significant organizational or environmental changes, for effectiveness.

SEF-05.1: Are information security incident metrics established and monitored?
Answer: Yes (CSP-owned). AWS security metrics are monitored and analyzed in accordance with the ISO 27001 standard; refer to ISO 27001 Annex A, domain 16, for further details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: SEF-05, Incident Response Metrics (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Establish and monitor information security incident metrics.

SEF-06.1: Are processes, procedures, and technical measures supporting business processes to triage security-related events defined, implemented, and evaluated?
Answer: Yes (CSP-owned). See response to SEF-01.1.
CCM mapping: SEF-06, Event Triage Processes (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Define, implement, and evaluate processes, procedures, and technical measures supporting business processes to triage security-related events.

SEF-07.1: Are processes, procedures, and technical measures for security breach notifications defined and implemented?
Answer: Yes (CSP-owned). AWS employees are trained on how to recognize suspected security incidents and where to report them; when appropriate, incidents are reported to relevant authorities. AWS maintains the AWS Security Bulletins webpage, located at https://aws.amazon.com/security/security-bulletins/, to notify customers of security and privacy events affecting AWS services, and customers can subscribe to the Security Bulletin RSS feed to keep abreast of security announcements on that page. The customer support team maintains a Service Health Dashboard webpage, located at http://status.aws.amazon.com/, to alert customers to any broadly impacting availability issues.
CCM mapping: SEF-07, Security Breach Notification (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Define and implement processes, procedures, and technical measures for security breach notifications. Report security breaches and assumed security breaches, including any relevant supply chain breaches, as per applicable SLAs, laws, and regulations.
SEF-07.2: Are security breaches and assumed security breaches reported (including any relevant supply chain breaches) as per applicable SLAs, laws, and regulations?
Answer: Yes (CSP-owned). See response to SEF-07.1: notifications are published through the AWS Security Bulletins webpage (https://aws.amazon.com/security/security-bulletins/), its RSS feed, and the Service Health Dashboard (http://status.aws.amazon.com/).
CCM mapping: SEF-07, Security Breach Notification (Security Incident Management, E-Discovery & Cloud Forensics); specification as above.

SEF-08.1: Are points of contact maintained for applicable regulation authorities, national and local law enforcement, and other legal jurisdictional authorities?
Answer: Yes (CSP-owned). AWS maintains contacts with industry bodies, risk and compliance organizations, local authorities, and regulatory bodies as required by the ISO 27001 standard. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: SEF-08, Points of Contact Maintenance (Security Incident Management, E-Discovery & Cloud Forensics). Specification: Maintain points of contact for applicable regulation authorities, national and local law enforcement, and other legal jurisdictional authorities.

STA-01.1: Are policies and procedures implementing the shared security responsibility model (SSRM) within the organization established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). Security and compliance is a shared responsibility between AWS and the customer. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model at https://aws.amazon.com/compliance/shared-responsibility-model/
CCM mapping: STA-01, SSRM Policy and Procedures (Supply Chain Management, Transparency, and Accountability). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for the application of the Shared Security Responsibility Model (SSRM) within the organization. Review and update the policies and procedures at least annually.
STA-01.2: Are the policies and procedures that apply the SSRM reviewed and updated annually?
Answer: Yes (CSP-owned). Security and compliance is a shared responsibility between AWS and the customer. AWS Information Security Management System policies that are in scope for the SSRM are reviewed and updated annually, and as necessary. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model at https://aws.amazon.com/compliance/shared-responsibility-model/
CCM mapping: STA-01, SSRM Policy and Procedures (Supply Chain Management, Transparency, and Accountability); specification as above.

STA-02.1: Is the SSRM applied, documented, implemented, and managed throughout the supply chain for the cloud service offering?
Answer: NA (CSP-owned). AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/
CCM mapping: STA-02, SSRM Supply Chain (Supply Chain Management, Transparency, and Accountability). Specification: Apply, document, implement, and manage the SSRM throughout the supply chain for the cloud service offering.

STA-03.1: Is the CSC given SSRM guidance detailing information about SSRM applicability throughout the supply chain?
Answer: NA (CSP-owned). See response to STA-02.1.
CCM mapping: STA-03, SSRM Guidance (Supply Chain Management, Transparency, and Accountability). Specification: Provide SSRM guidance to the CSC detailing information about SSRM applicability throughout the supply chain.

STA-04.1: Is the shared ownership and applicability of all CSA CCM controls delineated according to the SSRM for the cloud service offering?
Answer: Yes (CSP-owned). Security and compliance is a shared responsibility between AWS and the customer, and the split varies by the cloud services used. The shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. Refer to the shared responsibility model at https://aws.amazon.com/compliance/shared-responsibility-model/
CCM mapping: STA-04, SSRM Control Ownership (Supply Chain Management, Transparency, and Accountability). Specification: Delineate the shared ownership and applicability of all CSA CCM controls according to the SSRM for the cloud service offering.
STA-05.1: Is SSRM documentation for all cloud services the organization uses reviewed and validated?
Answer: Yes (CSP-owned). See response to STA-01.1.
CCM mapping: STA-05, SSRM Documentation Review (Supply Chain Management, Transparency, and Accountability). Specification: Review and validate SSRM documentation for all cloud services offerings the organization uses.

STA-06.1: Are the portions of the SSRM the organization is responsible for implemented, operated, audited, or assessed?
Answer: Yes (CSP-owned). AWS has established a formal, periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM mapping: STA-06, SSRM Control Implementation (Supply Chain Management, Transparency, and Accountability). Specification: Implement, operate, and audit or assess the portions of the SSRM for which the organization is responsible.

STA-07.1: Is an inventory of all supply chain relationships developed and maintained?
Answer: NA (CSP-owned). AWS performs periodic reviews of SSRM service and colocation providers to validate adherence with AWS security and operational standards, and maintains standard contract review and signature processes that include legal reviews with consideration of protecting AWS resources. AWS proactively informs customers of any subcontractors who have access to customer-owned content uploaded onto AWS, including content that may contain personal data; there are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS.
CCM mapping: STA-07, Supply Chain Inventory (Supply Chain Management, Transparency, and Accountability). Specification: Develop and maintain an inventory of all supply chain relationships.

STA-08.1: Are risk factors associated with all organizations within the supply chain periodically reviewed by CSPs?
Answer: NA (CSP-owned). See response to STA-07.1.
CCM mapping: STA-08, Supply Chain Risk Management (Supply Chain Management, Transparency, and Accountability). Specification: CSPs periodically review risk factors associated with all organizations within their supply chain.
STA-09.1: Do service agreements between CSPs and CSCs (tenants) incorporate at least the following mutually agreed-upon provisions and/or terms?
• Scope, characteristics, and location of the business relationship and services offered
• Information security requirements (including SSRM)
• Change management process
• Logging and monitoring capability
• Incident management and communication procedures
• Right to audit and third-party assessment
• Service termination
• Interoperability and portability requirements
• Data privacy
Answer: Yes (shared: CSP and CSC). AWS service agreements include multiple provisions and terms. For additional details, refer to the sample AWS Customer Agreement online: https://aws.amazon.com/agreement/
CCM mapping: STA-09, Primary Service and Contractual Agreement (Supply Chain Management, Transparency, and Accountability). Specification: Service agreements between CSPs and CSCs (tenants) must incorporate at least the mutually agreed-upon provisions and/or terms listed above.

STA-10.1: Are supply chain agreements between CSPs and CSCs reviewed at least annually?
Answer: Yes (CSP-owned). AWS's third-party agreement processes include periodic review and reporting and are reviewed by independent auditors.
CCM mapping: STA-10, Supply Chain Agreement Review (Supply Chain Management, Transparency, and Accountability). Specification: Review supply chain agreements between CSPs and CSCs at least annually.

STA-11.1: Is there a process for conducting internal assessments at least annually to confirm the conformance and effectiveness of standards, policies, procedures, and SLA activities?
Answer: Yes (CSP-owned). AWS has established a formal, periodic audit program that includes continual independent internal and external assessments to validate the implementation and operating effectiveness of the AWS control environment.
CCM mapping: STA-11, Internal Compliance Testing (Supply Chain Management, Transparency, and Accountability). Specification: Define and implement a process for conducting internal assessments, at least annually, to confirm the conformance and effectiveness of standards, policies, procedures, and service level agreement activities.

STA-12.1: Are policies that require all supply chain CSPs to comply with information security, confidentiality, access control, privacy, audit, personnel policy, and service level requirements and standards implemented?
Answer: Yes (CSP-owned). AWS's third-party agreement processes include periodic review and reporting and are reviewed by independent auditors.
CCM mapping: STA-12, Supply Chain Service Agreement Compliance (Supply Chain Management, Transparency, and Accountability). Specification: Implement policies requiring all CSPs throughout the supply chain to comply with information security, confidentiality, access control, privacy, audit, personnel policy, and service level requirements and standards.
STA-13.1: Are supply chain partner IT governance policies and procedures reviewed periodically?
Answer: NA (CSP-owned). AWS does not utilize third parties to provide services to customers, but does utilize colocation providers in a limited capacity to house some AWS data centers. These controls are audited twice annually in our SOC 1/2 audits and annually in our ISO 27001/17/18 audits. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/
CCM mapping: STA-13, Supply Chain Governance Review (Supply Chain Management, Transparency, and Accountability). Specification: Periodically review the organization's supply chain partners' IT governance policies and procedures.

STA-14.1: Is a process to conduct periodic security assessments for all supply chain organizations defined and implemented?
Answer: NA (CSP-owned). See response to STA-13.1.
CCM mapping: STA-14, Supply Chain Data Security Assessment (Supply Chain Management, Transparency, and Accountability). Specification: Define and implement a process for conducting security assessments, periodically, for all organizations within the supply chain.

TVM-01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained to identify, report, and prioritize the remediation of vulnerabilities to protect systems against vulnerability exploitation?
Answer: Yes (CSP-owned). The AWS Security team notifies and coordinates with the appropriate service teams when conducting security-related activities within the system boundary; activities include vulnerability scanning, contingency testing, and incident response exercises. AWS performs external vulnerability assessments at least quarterly, and identified issues are investigated and tracked to resolution. Additionally, AWS performs unannounced penetration tests by engaging independent third parties to probe the defenses and device configuration settings within the system.
CCM mapping: TVM-01, Threat and Vulnerability Management Policy and Procedures (Threat & Vulnerability Management). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to identify, report, and prioritize the remediation of vulnerabilities in order to protect systems against vulnerability exploitation. Review and update the policies and procedures at least annually.
TVM-01.2: Are threat and vulnerability management policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: TVM-01, Threat and Vulnerability Management Policy and Procedures (Threat & Vulnerability Management); specification as above.

TVM-02.1: Are policies and procedures to protect against malware on managed assets established, documented, approved, communicated, applied, evaluated, and maintained?
Answer: Yes (CSP-owned). AWS's program, processes, and procedures for managing antivirus and malicious software are in alignment with ISO 27001 standards; the AWS SOC reports provide further details. In addition, refer to ISO 27001 Annex A, domain 12, for additional details. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard.
CCM mapping: TVM-02, Malware Protection Policy and Procedures (Threat & Vulnerability Management). Specification: Establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures to protect against malware on managed assets. Review and update the policies and procedures at least annually.

TVM-02.2: Are asset management and malware protection policies and procedures reviewed and updated at least annually?
Answer: Yes (CSP-owned). Policies are reviewed and approved by AWS leadership at least annually, or on an as-needed basis.
CCM mapping: TVM-02, Malware Protection Policy and Procedures (Threat & Vulnerability Management); specification as above.

TVM-03.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable scheduled and emergency responses to vulnerability identifications (based on the identified risk)?
Answer: Yes (CSP-owned). See response to TVM-01.1.
CCM mapping: TVM-03, Vulnerability Remediation Schedule (Threat & Vulnerability Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to enable both scheduled and emergency responses to vulnerability identifications, based on the identified risk.
TVM-04.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to update detection tools, threat signatures, and compromise indicators on a weekly (or more frequent) basis?
Answer: Yes (CSP-owned). See response to TVM-02.1: AWS's program, processes, and procedures for managing antivirus and malicious software are in alignment with ISO 27001 standards, as detailed in the AWS SOC reports and ISO 27001 Annex A, domain 12.
CCM mapping: TVM-04, Detection Updates (Threat & Vulnerability Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to update detection tools, threat signatures, and indicators of compromise on a weekly, or more frequent, basis.

TVM-05.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to identify updates for applications that use third-party or open source libraries (according to the organization's vulnerability management policy)?
Answer: Yes (CSP-owned). AWS implements open source software or custom code within its services. All open source software, including binary or machine-executable code from third parties, is reviewed and approved by the Open Source Group prior to implementation and has source code that is publicly accessible. AWS service teams are prohibited from implementing code from third parties unless it has been approved through the open source review. All code developed by AWS is available for review by the applicable service team as well as AWS Security, and by its nature, open source code is available for review by the Open Source Group prior to granting authorization for use within Amazon.
CCM mapping: TVM-05, External Library Vulnerabilities (Threat & Vulnerability Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures to identify updates for applications that use third-party or open source libraries, according to the organization's vulnerability management policy.

TVM-06.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for periodic independent third-party penetration testing?
Answer: Yes (CSP-owned). AWS Security regularly performs penetration testing; these engagements may include carefully selected industry experts and independent security firms. AWS does not share the results directly with customers. AWS's third-party auditors review the results to verify the frequency of penetration testing and the remediation of findings.
CCM mapping: TVM-06, Penetration Testing (Threat & Vulnerability Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures for the periodic performance of penetration testing by independent third parties.

TVM-07.1: Are processes, procedures, and technical measures defined, implemented, and evaluated for vulnerability detection on organizationally managed assets at least monthly?
Answer: No (CSP-owned). AWS Security performs regular vulnerability scans on the host operating system, web applications, and databases in the AWS environment using a variety of tools. External vulnerability assessments are conducted by an AWS-approved third-party vendor at least quarterly.
CCM mapping: TVM-07, Vulnerability Identification (Threat & Vulnerability Management). Specification: Define, implement, and evaluate processes, procedures, and technical measures for the detection of vulnerabilities on organizationally managed assets at least monthly.
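Under the shared model, vulnerability detection on customer-managed assets is the customer's to operate (see TVM-07.1 above and the customer responsibility noted for TVM-10.1 below). As one hypothetical illustration, the sketch below retrieves image scan findings from Amazon ECR and tallies them by severity; the repository and tag names are assumptions, and it presumes scanning is enabled for the repository.

# Minimal sketch: retrieve vulnerability scan results for a container image in
# Amazon ECR and summarize finding counts by severity. Names are hypothetical.
import boto3

ecr = boto3.client("ecr")

REPOSITORY = "example-app"   # hypothetical repository
IMAGE_TAG = "release-1.4"    # hypothetical image tag

# Assumes a scan has already run for this image (for example, scan-on-push).
findings = ecr.describe_image_scan_findings(
    repositoryName=REPOSITORY,
    imageId={"imageTag": IMAGE_TAG},
)

# findingSeverityCounts maps severities (CRITICAL, HIGH, ...) to counts and can
# feed a risk-based remediation queue or monthly vulnerability metrics.
severity_counts = findings["imageScanFindings"].get("findingSeverityCounts", {})
for severity, count in sorted(severity_counts.items()):
    print(f"{severity}: {count}")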
Yes. CSP-owned. AWS Security performs regular vulnerability scans on the host operating system, web applications, and databases in the AWS environment using a variety of tools. (CCM TVM-08, Vulnerability Prioritization, Threat & Vulnerability Management: use a risk-based model for effective prioritization of vulnerability remediation using an industry-recognized framework.)

TVM-09.1: Is a process defined and implemented to track and report vulnerability identification and remediation activities that include stakeholder notification?
Yes. CSP-owned. The AWS Security team notifies and coordinates with the appropriate service teams when conducting security-related activities within the system boundary. Activities include vulnerability scanning, contingency testing, and incident response exercises. AWS performs external vulnerability assessments at least quarterly, and identified issues are investigated and tracked to resolution. Additionally, AWS performs unannounced penetration tests by engaging independent third parties to probe the defenses and device configuration settings within the system. (CCM TVM-09, Vulnerability Management Reporting, Threat & Vulnerability Management: define and implement a process for tracking and reporting vulnerability identification and remediation activities that includes stakeholder notification.)

TVM-10.1: Are metrics for vulnerability identification and remediation established, monitored, and reported at defined intervals?
Yes. Shared CSP and CSC. AWS tracks metrics for internal process measurements and improvements that align with our policies and standards. AWS customers are responsible for vulnerability management within their AWS environments. (CCM TVM-10, Vulnerability Management Metrics, Threat & Vulnerability Management: establish, monitor, and report metrics for vulnerability identification and remediation at defined intervals.)

UEM-01.1: Are policies and procedures established, documented, approved, communicated, applied, evaluated, and maintained for all endpoints?
Yes. CSP-owned. AWS implements formal, documented policies and procedures that provide guidance for operations and information security within the organization and the supporting AWS environments. Policies address purpose, scope, roles, responsibilities, and management commitment. All policies are maintained in a centralized location that is accessible by employees. (CCM UEM-01, Endpoint Devices Policy and Procedures, Universal Endpoint Management: establish, document, approve, communicate, apply, evaluate, and maintain policies and procedures for all endpoints; review and update the policies and procedures at least annually.)

UEM-01.2: Are universal endpoint management policies and procedures reviewed and updated at least annually?
Yes. CSP-owned. Policies are reviewed and approved by AWS leadership at least annually or on an as-needed basis. (CCM UEM-01, Endpoint Devices Policy and Procedures, Universal Endpoint Management; control specification as above.)

UEM-02.1: Is there a defined, documented, applicable, and evaluated list containing approved services, applications, and the sources of applications (stores) acceptable for use by endpoints when accessing or storing organization-managed data?
Yes. CSP-owned. Amazon has established baseline infrastructure standards in alignment with industry best practices. All software installations are monitored by AWS Security, and mandatory security controls and software are always required. Users cannot continue to use their laptop or desktop if required software is not installed; the device is quarantined from network access until the nonconformance is resolved. (CCM UEM-02, Application and Service Approval, Universal Endpoint Management: define, document, apply, and evaluate a list of approved services, applications, and sources of applications (stores) acceptable for use by endpoints when accessing or storing organization-managed data.)

UEM-03.1: Is a process defined and implemented to validate endpoint device compatibility with operating systems and applications?
Yes. CSP-owned. Amazon has established baseline infrastructure standards in alignment with industry best practices. This includes endpoint compatibility with operating systems and applications. (CCM UEM-03, Compatibility, Universal Endpoint Management: define and implement a process for the validation of the endpoint device's compatibility with operating systems and applications.)

UEM-04.1: Is an inventory of all endpoints used to store and access company data maintained?
Yes. CSP-owned. Amazon has established baseline infrastructure standards in alignment with industry best practices. This includes endpoint inventory management. (CCM UEM-04, Endpoint Inventory, Universal Endpoint Management: maintain an inventory of all endpoints used to store and access company data.)

UEM-05.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enforce policies and controls for all endpoints permitted to access systems and/or store, transmit, or process organizational data?
NA. AWS employees do not access, process, or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. Only approved users can be granted access from CORP to PROD. That access is managed by a separate permission system, requires an approved ticket, requires MFA, is time-limited, and all activities are tracked. (CCM UEM-05, Endpoint Management, Universal Endpoint Management: define, implement, and evaluate processes, procedures, and technical measures to enforce policies and controls for all endpoints permitted to access systems and/or store, transmit, or process organizational data.)

UEM-06.1: Are all relevant interactive-use endpoints configured to require an automatic lock screen?
Yes. CSP-owned. Amazon has established baseline infrastructure standards in alignment with industry best practices. These include automatic lockout after a defined period of inactivity. (CCM UEM-06, Automatic Lock Screen, Universal Endpoint Management: configure all relevant interactive-use endpoints to require an automatic lock screen.)

UEM-07.1: Are changes to endpoint operating systems, patch levels, and/or applications managed through the organizational change management process?
Yes. CSP-owned. Amazon has established baseline infrastructure standards in alignment with industry best practices. All software installations are monitored by AWS Security, and mandatory security controls and software are always required. Users cannot continue to use their laptop or desktop if required software is not installed; the device is quarantined from network access until the nonconformance is resolved. (CCM UEM-07, Operating Systems, Universal Endpoint Management: manage changes to endpoint operating systems, patch levels, and/or applications through the company's change management processes.)

UEM-08.1: Is information protected from unauthorized disclosure on managed endpoints with storage encryption?
NA. CSP-owned. AWS employees do not access, process, or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. Only approved users can be granted access from CORP to PROD. That access is managed by a separate permission system, requires an approved ticket, requires MFA, is time-limited, and all activities are tracked. Additionally, customers are provided tools to encrypt data within the AWS environment to add additional layers of security; the encrypted data can only be accessed by authorized customer personnel with access to the encryption keys. (CCM UEM-08, Storage Encryption, Universal Endpoint Management: protect information from unauthorized disclosure on managed endpoint devices with storage encryption.)

UEM-09.1: Are anti-malware detection and prevention technology services configured on managed endpoints?
Yes. CSP-owned. AWS's program, processes, and procedures for managing antivirus and malicious software align with the ISO 27001 standard. Refer to the AWS SOC reports for further details; in addition, refer to ISO 27001 Annex A, domain 12. AWS has been validated and certified by an independent auditor to confirm alignment with the ISO 27001 certification standard. (CCM UEM-09, Anti-Malware Detection and Prevention, Universal Endpoint Management: configure managed endpoints with anti-malware detection and prevention technology and services.)

UEM-10.1: Are software firewalls configured on managed endpoints?
Yes. CSP-owned. Amazon assets (for example, laptops) are configured with antivirus software that includes email filtering, software firewalls, and malware detection. (CCM UEM-10, Software Firewall, Universal Endpoint Management: configure managed endpoints with properly configured software firewalls.)

UEM-11.1: Are managed endpoints configured with data loss prevention (DLP) technologies and rules per a risk assessment?
NA. AWS employees do not access, process, or change customer data in the course of providing our services. AWS has separate CORP and PROD environments, which are separated from each other via physical and logical controls. AWS customers are responsible for the management of the data they place into AWS services. AWS has no insight as to what type of content the customer chooses to store in AWS, and the customer retains complete control of how they choose to classify their content, where it is stored and used, and how it is protected from disclosure. (CCM UEM-11, Data Loss Prevention, Universal Endpoint Management: configure managed endpoints with data loss prevention (DLP) technologies and rules in accordance with a risk assessment.)

UEM-12.1: Are remote geolocation capabilities enabled for all managed mobile endpoints?
No. CSP-owned. No response is required, as we have indicated no. (CCM UEM-12, Remote Locate, Universal Endpoint Management: enable remote geolocation capabilities for all managed mobile endpoints.)

UEM-13.1: Are processes, procedures, and technical measures defined, implemented, and evaluated to enable remote company data deletion on managed endpoint devices?
Yes. CSP-owned. The AWS scope for mobile devices covers iOS- and Android-based mobile phones and tablets. AWS maintains a formal mobile device policy and associated procedures. Specifically, AWS mobile devices are only allowed access to AWS corporate fabric resources and cannot access the AWS production fabric where customer content is stored. The AWS production fabric is separated from the corporate fabric by boundary protection devices that control the flow of information between fabrics. Approved firewall rule sets and access control lists between network fabrics restrict the flow of information to specific information system services. Access control lists and rule sets are reviewed and approved, and are automatically pushed to boundary protection devices on a periodic basis (at least every 24 hours) to ensure rule sets and access control lists are up to date. Consequently, mobile devices are not relevant to AWS customer content access. (CCM UEM-13, Remote Wipe, Universal Endpoint Management: define, implement, and evaluate processes, procedures, and technical measures to enable the deletion of company data remotely on managed endpoint devices.)

UEM-14.1: Are processes, procedures, and technical and/or contractual measures defined, implemented, and evaluated to maintain proper security of third-party endpoints with access to organizational assets?
NA. AWS does not utilize third parties to provide services to customers, but does utilize colocation providers in a limited capacity to house some AWS data centers. These controls are audited twice annually in our SOC 1/2 audits and annually in our ISO 27001/17/18 audits. There are no subcontractors authorized by AWS to access any customer-owned content that customers upload onto AWS. To monitor subcontractor access year-round, refer to https://aws.amazon.com/compliance/third-party-access/. (CCM UEM-14, Third-Party Endpoint Security Posture, Universal Endpoint Management: define, implement, and evaluate processes, procedures, and technical and/or contractual measures to maintain proper security of third-party endpoints with access to organizational assets.)

End of Standard

Further Reading
For additional information, see the following sources:
AWS Compliance Quick Reference Guide
AWS Answers to Key Compliance Questions
AWS Cloud Security Alliance (CSA) Overview

Document Revisions
April 2022: Updated CAIQ template and updated responses to individual questions based on CAIQ v4.0.2
July 2018: 2018 validation and update
January 2018: Migrated to new template
January 2016: First publication
|
General
|
consultant
|
Best Practices
|
Data_Warehousing_on_AWS
|
Data Warehousing on AWS

January 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Introducing Amazon Redshift
Modern Analytics and Data Warehousing Architecture
AWS Analytics Services
Analytics Architecture
Data Warehouse Technology Options
Row-Oriented Databases
Column-Oriented Databases
Massively Parallel Processing (MPP) Architectures
Amazon Redshift Deep Dive
Integration with Data Lake
Performance
Durability and Availability
Elasticity and Scalability
Operations
Redshift Advisor
Interfaces
Security
Cost Model
Ideal Usage Patterns
Anti-Patterns
Migrating to Amazon Redshift
One-Step Migration
Two-Step Migration
Wave-Based Migration
Tools and Additional Help for Database Migration
Designing Data Warehousing Workflows
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
Enterprises across the globe want to migrate data warehousing to the cloud to improve performance and lower costs. This whitepaper discusses a modern approach to analytics and data warehousing architecture. It outlines services available on Amazon Web Services (AWS) to implement this architecture and provides common design patterns to build data warehousing solutions using these services. This whitepaper is aimed at data engineers, data analysts, business analysts, and developers.

Introduction
Data is an enterprise's most valuable asset. To fuel innovation, which fuels growth, an enterprise must:
• Store every relevant data point about their business
• Give data access to everyone who needs it
• Have the ability to analyze the data in different ways
• Distill the data down to insights

Most large enterprises have data warehouses for reporting and analytics purposes. They use data from a variety of sources, including their own transaction processing systems and other databases. In the past, building and running a data warehouse (a central repository of information coming from one or more data sources) was complicated and expensive. Data warehousing systems were complex to set up, cost millions of dollars in upfront software and hardware expenses, and took months of planning, procurement, implementation, and deployment processes. After making the initial investments and setting up the data warehouse, enterprises had to hire a team of database administrators to keep their queries running fast and protect against data loss.

Traditional data warehouse architectures and on-premises data warehousing pose many challenges:
• They are difficult to scale and have long lead times for hardware procurement and upgrades.
• They have high overhead costs for administration.
• Proprietary formats and siloed data make it costly and complex to access, refine, and join data from different sources.
• They cannot separate cold (infrequently used) and warm (frequently used) data, which results in bloated costs and wasted capacity.
• They limit the number of users and the amount of accessible data, which leads to anti-democratization of data.
• They inspire other legacy architecture patterns, such as retrofitting use cases to accommodate the wrong tools for the job instead of using the correct tool for each use case.

In this whitepaper, we provide the information you need to take advantage of the strategic shift happening in the data warehousing space from on-premises to the cloud:
1. Modern analytics architecture
2. Data warehousing technology choices available within that architecture
3. A deep dive on Amazon Redshift and its differentiating features
4. A blueprint for building a complete data warehousing system on AWS with Amazon Redshift and other AWS services
5. Practical tips for migrating from other data warehousing solutions and tapping into our partner ecosystem

Introducing Amazon Redshift
In the past, when data volumes grew or an enterprise wanted to make analytics and reports available to more users, they had to choose between accepting slow query performance or investing time and effort in an expensive upgrade process. In fact, some IT teams discourage augmenting data or adding queries to protect existing service level agreements. Many enterprises struggled with maintaining a healthy relationship with traditional database vendors. They were often forced to either upgrade hardware for a managed system or enter a protracted negotiation cycle for an expired term license. When they hit the scaling limit on one data warehouse engine, they were forced to migrate to another engine from the same vendor with different SQL semantics.

Cloud data warehouses like Amazon Redshift changed how enterprises think about data warehousing by dramatically lowering the cost and effort associated with deploying data warehouse systems, without compromising on features, scale, and performance. Amazon Redshift is a fast, fully managed, petabyte-scale data warehousing solution that makes it simple and cost-effective to analyze large volumes of data using existing business intelligence (BI) tools. With Amazon Redshift, you can get the performance of columnar data warehousing engines that perform massively parallel processing (MPP) at a tenth of the cost. You can start small for $0.25 per hour with no commitments and scale to petabytes for $1,000 per terabyte per year. You can grow to exabyte-scale storage by storing data in an Amazon Simple Storage Service (Amazon S3) data lake and taking a lake house approach to data warehousing with the Amazon Redshift Spectrum feature. With this setup, you can query data directly from files on Amazon S3 for as low as $5 per terabyte of data scanned.

Since launching in February 2013, Amazon Redshift has been one of the fastest growing AWS services, with tens of thousands of customers across many industries and company sizes. Enterprises such as NTT DOCOMO, FINRA, Johnson & Johnson, McDonald's, Equinox, Fannie Mae, Hearst, Amgen, and NASDAQ have migrated to Amazon Redshift.
Modern Analytics and Data Warehousing Architecture
Data typically flows into a data warehouse from transactional systems and other relational databases, and typically includes structured, semi-structured, and unstructured data. This data is processed, transformed, and ingested at a regular cadence. Users, including data scientists, business analysts, and decision makers, access the data through BI tools, SQL clients, and other tools.

So why build a data warehouse at all? Why not just run analytics queries directly on an online transaction processing (OLTP) database, where the transactions are recorded? To answer the question, let's look at the differences between data warehouses and OLTP databases:
• Data warehouses are optimized for batched write operations and reading high volumes of data.
• OLTP databases are optimized for continuous write operations and high volumes of small read operations.

Data warehouses generally employ denormalized schemas, like the Star schema and Snowflake schema, because of high data throughput requirements, whereas OLTP databases employ highly normalized schemas, which are more suited for high transaction throughput requirements (a minimal star-schema sketch appears below).

To get the benefits of using a data warehouse managed as a separate data store with your source OLTP or other source system, we recommend that you build an efficient data pipeline. Such a pipeline extracts the data from the source system, converts it into a schema suitable for data warehousing, and then loads it into the data warehouse.
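To make the denormalized schema discussion concrete, the following sketch shows a small star schema expressed in Amazon Redshift SQL. The table names, columns, and key choices (dim_customer, fact_sales, DISTKEY on customer_key, SORTKEY on sale_date) are hypothetical illustrations rather than recommendations from this paper.

CREATE TABLE dim_customer (
    customer_key  BIGINT       NOT NULL,   -- surrogate key referenced by the fact table
    customer_name VARCHAR(100),
    region        VARCHAR(30)
)
DISTSTYLE ALL;                             -- small dimension: copy to every node to avoid join shuffles

CREATE TABLE fact_sales (
    customer_key  BIGINT       NOT NULL,   -- join key to dim_customer
    sale_date     DATE         NOT NULL,
    quantity      INTEGER,
    amount        DECIMAL(12,2)
)
DISTKEY (customer_key)                     -- co-locate fact rows on the frequent join key
SORTKEY (sale_date);                       -- enable range-restricted scans on date predicates

Distributing the fact table on the join key and sorting it on the most common filter column lets a columnar engine skip blocks that fall outside a query's date range.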
In the next section, we discuss the building blocks of an analytics pipeline and the different AWS services you can use to architect the pipeline.

AWS Analytics Services
AWS analytics services help enterprises quickly convert their data to answers by providing mature and integrated analytics services, ranging from cloud data warehouses to serverless data lakes. Getting answers quickly means less time building plumbing and configuring cloud analytics services to work together. AWS helps you do exactly that by giving you:
1. An easy path to build data lakes and data warehouses and start running diverse analytics workloads
2. A secure cloud storage, compute, and network infrastructure that meets the specific needs of analytic workloads
3. A fully integrated analytics stack with a mature set of analytics tools, covering all common use cases and leveraging open file formats, standard SQL language, open source engines, and platforms
4. The best performance, the most scalability, and the lowest cost for analytics

Many enterprises choose cloud data lakes and cloud data warehouses as the foundation for their data and analytics architectures. AWS is focused on helping customers build and secure data lakes and data warehouses in the cloud within days, not months. AWS Lake Formation enables secured, self-service discovery and access for users. Lake Formation provides easy, on-demand access to specific resources that fit the requirements of each analytics workload. The data is curated and cataloged, already prepared for any type of analytics. Related records are matched and deduplicated with machine learning.

AWS provides a diverse set of analytics services that are deeply integrated with the infrastructure layers. This enables you to take advantage of features like intelligent tiering and Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances to reduce cost and run analytics faster. When you're ready for more advanced analytic approaches, use our broad collection of machine learning (ML) and artificial intelligence (AI) services against that same data in S3 to gain even more insight, without the delays and costs of moving or transforming your data.

Analytics Architecture
Analytics pipelines are designed to handle large volumes of incoming streams of data from heterogeneous sources such as databases, applications, and devices. A typical analytics pipeline has the following stages:
1. Collect data
2. Store the data
3. Process the data
4. Analyze and visualize the data

Figure 1: Analytics Pipeline

Data Collection
At the data collection stage, consider that you probably have different types of data, such as transactional data, log data, streaming data, and Internet of Things (IoT) data. AWS provides solutions for data storage for each of these types of data.

Transactional Data
Transactional data, such as e-commerce purchase transactions and financial transactions, is typically stored in relational database management systems (RDBMS) or NoSQL database systems. The choice of database solution depends on the use case and application characteristics:
• A NoSQL database is suitable when the data is not well structured to fit into a defined schema, or when the schema changes often.
• An RDBMS solution is suitable when transactions happen across multiple table rows and the queries require complex joins.

Amazon DynamoDB is a fully managed NoSQL database service that you can use as an OLTP store for your applications. Amazon Aurora and Amazon Relational Database Service (Amazon RDS) enable you to implement an SQL-based relational database solution for your application:
• Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud.
• Amazon RDS is a service that enables you to easily set up, operate, and scale relational databases on the cloud.
For more information about the different AWS database services, see Databases on AWS.

Log Data
Reliably capturing system-generated logs helps you troubleshoot issues, conduct audits, and perform analytics using the information stored in the logs. Amazon S3 is a popular storage solution for non-transactional data, such as log data, that is used for analytics. Because it provides 99.999999999 percent durability, S3 is also a popular archival solution.

Streaming Data
Web applications, mobile devices, and many software applications and services can generate staggering amounts of streaming data (sometimes terabytes per hour) that need to be collected, stored, and processed continuously. Using Amazon Kinesis services, you can do that simply and at a low cost. Alternatively, you can use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to run applications that use Apache Kafka to process streaming data. With Amazon MSK, you can use native Apache Kafka application programming interfaces (APIs) to populate data lakes, stream changes to and from databases, and power ML and analytics applications.

IoT Data
Devices and sensors around the world send messages continuously. Enterprises today need to capture this data and derive intelligence from it. Using AWS IoT, connected devices interact easily and securely with the AWS Cloud. Use AWS IoT to leverage AWS services like AWS Lambda, Amazon Kinesis services, Amazon S3, Amazon Machine Learning, and Amazon DynamoDB to build applications that gather, process, analyze, and act on IoT data, without having to manage any infrastructure.
Data Processing
The collection process provides data that potentially has useful information. You can analyze the extracted information for intelligence that will help you grow your business. This intelligence might, for example, tell you about your user behavior and the relative popularity of your products. The best practice to gather this intelligence is to load your raw data into a data warehouse to perform further analysis.

There are two types of processing workflows to accomplish this: batch processing and real-time processing. The most common forms of processing, online analytic processing (OLAP) and OLTP, each use one of these types. OLAP processing is generally batch based. OLTP systems are oriented toward real-time processing and are generally not well suited for batch-based processing. If you decouple data processing from your OLTP system, you keep the data processing from affecting your OLTP workload.

First, let's look at what is involved in batch processing.

Batch Processing
• Extract, Transform, Load (ETL): ETL is the process of pulling data from multiple sources to load into data warehousing systems. ETL is normally a continuous, ongoing process with a well-defined workflow. During this process, data is initially extracted from one or more sources. The extracted data is then cleansed, enriched, transformed, and loaded into a data warehouse. For batch ETL, use AWS Glue or Amazon EMR. AWS Glue is a fully managed ETL service. You can create and run an ETL job with a few clicks in the AWS Management Console. Amazon EMR is for big data processing and analysis. EMR offers an expandable, low-configuration service as an easier alternative to running in-house cluster computing.
• Extract, Load, Transform (ELT): ELT is a variant of ETL where the extracted data is loaded into the target system first. Transformations are performed after the data is loaded into the data warehouse. ELT typically works well when your target system is powerful enough to handle transformations. Amazon Redshift is often used in ELT pipelines because it is highly efficient in performing transformations (a minimal sketch follows this list).
• Online Analytical Processing (OLAP): OLAP systems store aggregated historical data in multidimensional schemas. Used widely for query, reporting, and analytics, OLAP systems enable you to extract data and spot trends on multiple dimensions. Because it is optimized for fast joins, Amazon Redshift is often used to build OLAP systems.
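The following sketch illustrates the ELT pattern described above: raw data is first copied from Amazon S3 into a staging table, then transformed inside the warehouse with plain SQL. The bucket, IAM role, and table names are placeholders for illustration only, and the target table fact_page_views is assumed to already exist.

-- Staging table for raw events (columns are hypothetical)
CREATE TABLE staging_events (
    event_type VARCHAR(50),
    event_date DATE,
    page_id    BIGINT
);

-- Extract and Load: copy raw JSON objects from S3 (bucket and role are hypothetical)
COPY staging_events
FROM 's3://example-bucket/raw/events/2021/01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
JSON 'auto'
TIMEFORMAT 'auto';

-- Transform: aggregate inside the warehouse into a reporting table
INSERT INTO fact_page_views (view_date, page_id, view_count)
SELECT event_date, page_id, COUNT(*)
FROM staging_events
WHERE event_type = 'page_view'
GROUP BY event_date, page_id;

Because the transformation runs as set-based SQL on the MPP cluster, it scales with the number of nodes rather than with a separate ETL server.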
Now let's look at what's involved in real-time processing of data.

Real-Time Processing
We talked about streaming data earlier and mentioned Amazon Kinesis services and Amazon MSK as solutions to capture and store streaming data. You can process this data sequentially and incrementally, on a record-by-record basis or over sliding time windows. Use the processed data for a wide variety of analytics, including correlations, aggregations, filtering, and sampling. This type of processing is called real-time processing. Information derived from real-time processing gives companies visibility into many aspects of their business and customer activity, such as service usage (for metering or billing), server activity, website clicks, and geolocation of devices, people, and physical goods. This enables them to respond promptly to emerging situations. Real-time processing requires a highly concurrent and scalable processing layer.

To process streaming data in real time, use AWS Lambda. Lambda can process the data directly from AWS IoT or Amazon Kinesis Data Streams. Lambda enables you to run code without provisioning or managing servers.

Amazon Kinesis Client Library (KCL) is another way to process data from Amazon Kinesis Streams. KCL gives you more flexibility than Lambda to batch your incoming data for further processing. You can also use KCL to apply extensive transformations and customizations in your processing logic.

Amazon Kinesis Data Firehose is the easiest way to load streaming data into AWS. It can capture streaming data and automatically load it into Amazon Redshift, enabling near-real-time analytics with existing BI tools and dashboards you're already using today. Define batching rules with Kinesis Data Firehose, and it takes care of reliably batching the data and delivering it to Amazon Redshift.

Amazon MSK is an easy way to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open source platform for building real-time streaming data pipelines and applications. With Amazon MSK, you can use native Apache Kafka APIs to populate data lakes, stream changes to and from databases, and power machine learning and analytics applications.

AWS Glue streaming jobs enable you to perform complex ETL on streaming data. Streaming ETL jobs in AWS Glue can consume data from streaming sources like Amazon Kinesis Data Streams and Amazon MSK, clean and transform those data streams in flight, and continuously load the results into S3 data lakes, data warehouses, or other data stores. As you process streaming data in an AWS Glue job, you have access to the full capabilities of Spark Structured Streaming to implement data transformations, such as aggregating, partitioning, and formatting, as well as joining with other data sets to enrich or cleanse the data for easier analysis.

Data Storage
You can store your data in a lake house, data warehouse, or data mart.
• Lake house: A lake house is an architectural pattern that combines the best elements of data warehouses and data lakes. Lake houses enable you to query data across your data warehouse, data lake, and operational databases to gain faster and deeper insights that are not possible otherwise. With a lake house architecture, you can store data in open file formats in your data lake and query it in place while joining with data warehouse data. This enables you to make this data easily available to other analytics and machine learning tools, rather than locking it in a new silo.
• Data warehouse: Using data warehouses, you can run fast analytics on large volumes of data and unearth patterns hidden in your data by leveraging BI tools. Data scientists query a data warehouse to perform offline analytics and spot trends. Users across the enterprise consume the data using SQL queries, periodic reports, and dashboards, as needed, to make critical business decisions.
• Data mart: A data mart is a simple form of data warehouse focused on a specific functional area or subject matter. For example, you can have specific data marts for each division in your enterprise, or segment data marts based on regions. You can build data marts from a large data warehouse, operational stores, or a hybrid of the two. Data marts are simple to design, build, and administer. However, because data marts are focused on specific functional areas, querying across functional areas can become complex because of distribution (a small example follows at the end of this section).

You can use Amazon Redshift to build lake houses, data marts, and data warehouses. Redshift enables you to easily query data in your data lake and write data back to your data lake in open formats. You can use familiar SQL statements to combine and process data across all your data stores and execute queries on live data in your operational databases, without requiring any data loading and ETL pipelines.
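As one illustration of carving a data mart out of a central warehouse, the following sketch uses CREATE TABLE AS to materialize a region-specific subset of the hypothetical fact_sales and dim_customer tables introduced earlier; the names, keys, and region filter are assumptions, not prescriptions.

-- Build a regional data mart table from the central fact and dimension tables
CREATE TABLE mart_sales_emea
DISTKEY (customer_key)
SORTKEY (sale_date)
AS
SELECT s.customer_key, s.sale_date, s.quantity, s.amount
FROM fact_sales s
JOIN dim_customer c ON s.customer_key = c.customer_key
WHERE c.region = 'EMEA';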
Analysis and Visualization
After processing the data and making it available for further analysis, you need the right tools to analyze and visualize the processed data. In many cases, you can perform data analysis using the same tools you use for processing data. You can use tools such as MySQL Workbench to analyze your data in Amazon Redshift with ANSI SQL. Amazon Redshift also works well with popular third-party BI solutions available on the market, such as Tableau and MicroStrategy.

Amazon QuickSight is a fast, cloud-powered BI service that enables you to create visualizations, perform analysis as needed, and quickly get business insights from your data. Amazon QuickSight offers native integration with AWS data sources such as Amazon Redshift, Amazon S3, and Amazon RDS. Amazon Redshift sources can be autodetected by Amazon QuickSight and can be queried using either a direct query or SPICE mode. SPICE is the in-memory optimized calculation engine for Amazon QuickSight, designed specifically for fast, as-needed data visualization. You can improve the performance of database datasets by importing the data into SPICE instead of using a direct query to the database.

If you are using S3 as your primary storage, you can use the Amazon Athena/QuickSight integration to perform analysis and visualization. Amazon Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL. You can run SQL queries using Athena on data stored in S3 and build business dashboards within QuickSight (a brief example follows this section).

For another visualization approach, Apache Zeppelin is an open source BI solution that you can run on Amazon EMR to visualize data in S3 using Spark SQL. You can also use Apache Zeppelin to visualize data in Amazon Redshift.

Analytics Pipeline with AWS Services
AWS offers a broad set of services to implement an end-to-end analytics platform. Figure 2 shows the services we discussed and where they fit within the analytics pipeline.

Figure 2: Analytics Pipeline with AWS Services
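For readers new to Athena, the following sketch shows the general shape of defining a table over files in S3 and querying it with standard SQL. The table, columns, bucket path, and partition layout are hypothetical, and existing partitions still have to be registered (for example, with MSCK REPAIR TABLE) before they are queryable.

-- Athena DDL: a table over Parquet files in S3 (bucket and columns are hypothetical)
CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
    request_time STRING,
    client_ip    STRING,
    status_code  INT,
    url          STRING
)
PARTITIONED BY (log_date STRING)
STORED AS PARQUET
LOCATION 's3://example-bucket/processed/weblogs/';

MSCK REPAIR TABLE weblogs;   -- register partitions already present in S3

-- Standard SQL against S3, with no cluster to manage
SELECT status_code, COUNT(*) AS requests
FROM weblogs
WHERE log_date = '2021-01-15'
GROUP BY status_code
ORDER BY requests DESC;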
Data Warehouse Technology Options
In this section, we discuss options available for building a data warehouse: row-oriented databases, column-oriented databases, and massively parallel processing architectures.

Row-Oriented Databases
Row-oriented databases typically store whole rows in a physical block. High performance for read operations is achieved through secondary indexes. Databases such as Oracle Database Server, Microsoft SQL Server, MySQL, and PostgreSQL are row-oriented database systems. These systems have been traditionally used for data warehousing, but they are better suited for transactional processing (OLTP) than for analytics.

To optimize performance of a row-based system used as a data warehouse, developers use a number of techniques, including:
• Building materialized views
• Creating pre-aggregated rollup tables
• Building indexes on every possible predicate combination
• Implementing data partitioning to leverage partition pruning by the query optimizer
• Performing index-based joins

Traditional row-based data stores are limited by the resources available on a single machine. Data marts alleviate the problem to an extent by using functional sharding. You can split your data warehouse into multiple data marts, each satisfying a specific functional area. However, when data marts grow large over time, data processing slows down.

In a row-based data warehouse, every query has to read through all of the columns for all of the rows in the blocks that satisfy the query predicate, including columns you didn't choose. This approach creates a significant performance bottleneck in data warehouses where your tables have more columns, but your queries use only a few.

Column-Oriented Databases
Column-oriented databases organize each column in its own set of physical blocks instead of packing the whole rows into a block. This functionality allows them to be more input/output (I/O) efficient for read-only queries, because they have to read only those columns accessed by a query from disk (or from memory). This approach makes column-oriented databases a better choice than row-oriented databases for data warehousing.

Figure 3 illustrates the primary difference between row-oriented and column-oriented databases. Rows are packed into their own blocks in a row-oriented database, and columns are packed into their own blocks in a column-oriented database.

Figure 3: Row-oriented vs. column-oriented databases

After faster I/O, the next biggest benefit to using a column-oriented database is improved compression. Because every column is packed into its own set of blocks, every physical block contains the same data type. When all the data is the same data type, the database can use extremely efficient compression algorithms. As a result, you need less storage compared to a row-oriented database. This approach also results in significantly less I/O because the same data is stored in fewer blocks. Some column-oriented databases that are used for data warehousing include Amazon Redshift, Vertica, Greenplum, Teradata Aster, Netezza, and Druid.

Massively Parallel Processing (MPP) Architectures
An MPP architecture enables you to use all the resources available in the cluster for processing data, which dramatically increases performance of petabyte-scale data warehouses. MPP data warehouses allow you to improve performance by simply adding more nodes to the cluster. Amazon Redshift, Druid, Vertica, Greenplum, and Teradata Aster are some of the data warehouses built on an MPP architecture. Open source frameworks such as Hadoop and Spark also support MPP.

Amazon Redshift Deep Dive
As a columnar MPP technology, Amazon Redshift offers key benefits for performant, cost-effective data warehousing, including efficient compression, reduced I/O, and lower storage requirements. It is based on ANSI SQL, so you can run existing queries with little or no modification. As a result, it is a popular choice for enterprise data warehouses. Amazon Redshift delivers fast query and I/O performance for virtually any data size by using columnar storage and by parallelizing and distributing queries across multiple nodes. It automates most of the common administrative tasks associated with provisioning, configuring, monitoring, backing up, and securing a data warehouse, making it easy and inexpensive to manage. Using this automation, you can build petabyte-scale data warehouses in minutes instead of the weeks or months taken by traditional on-premises implementations. You can also run exabyte-scale queries by storing data on S3 and querying it using Amazon Redshift Spectrum.

Amazon Redshift also enables you to scale compute and storage separately using Amazon Redshift RA3 nodes. RA3 nodes come with Redshift Managed Storage (RMS), which leverages your workload patterns and advanced data management techniques, such as automatic fine-grained data eviction and intelligent data prefetching. You can size your cluster based on your compute needs only and pay only for the storage used.
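In Amazon Redshift, the columnar compression described above is exposed through per-column encodings. The sketch below shows one way to declare encodings explicitly and to ask Redshift to report suggested encodings for an existing table; the table, columns, and encoding choices are illustrative assumptions, and Redshift can also apply automatic compression when data is first loaded with COPY.

CREATE TABLE fact_clicks (
    click_time TIMESTAMP      ENCODE AZ64,   -- AZ64: purpose-built encoding for numeric and date/time types
    user_id    BIGINT         ENCODE AZ64,
    page_url   VARCHAR(2048)  ENCODE ZSTD,   -- ZSTD tends to work well for free-form text
    referrer   VARCHAR(2048)  ENCODE ZSTD
)
DISTKEY (user_id)
SORTKEY (click_time);

-- Sample the table and report suggested encodings per column
ANALYZE COMPRESSION fact_clicks;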
Integration with Data Lake
Amazon Redshift provides a feature called Redshift Spectrum that makes it easier to both query data and write data back to your data lake in open file formats. With Spectrum, you can query open file formats such as Parquet, ORC, JSON, Avro, CSV, and more directly in S3 using familiar ANSI SQL. To export data to your data lake, you simply use the Redshift UNLOAD command in your SQL code and specify Parquet as the file format, and Redshift automatically takes care of data formatting and data movement into S3. To query data in S3, you create an external schema if the S3 object is already cataloged, or create an external table. You can write data to external tables by running CREATE EXTERNAL TABLE AS SELECT or INSERT INTO an external table. This gives you the flexibility to store highly structured, frequently accessed data in a Redshift data warehouse, while also keeping up to exabytes of structured, semi-structured, and unstructured data in S3. Exporting data from Amazon Redshift back to your data lake enables you to analyze the data further with AWS services like Amazon Athena, Amazon EMR, and Amazon SageMaker.
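The following sketch ties these pieces together: an external schema over an AWS Glue Data Catalog database, a query that joins S3 data with a local table, and an UNLOAD that writes warehouse data back to the lake as Parquet. The schema, external table, bucket, and IAM role names are placeholders, and spectrum_lake.sales_history is assumed to already be cataloged.

-- Map a Glue Data Catalog database into Redshift as an external schema
CREATE EXTERNAL SCHEMA spectrum_lake
FROM DATA CATALOG
DATABASE 'lake_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

-- Join Parquet files in S3 with a local dimension table
SELECT c.region, SUM(s.amount) AS total_sales
FROM spectrum_lake.sales_history s
JOIN dim_customer c ON s.customer_key = c.customer_key
GROUP BY c.region;

-- Export warehouse data back to the data lake in Parquet format
UNLOAD ('SELECT * FROM fact_sales WHERE sale_date < ''2020-01-01''')
TO 's3://example-bucket/lake/sales_archive/'
IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleSpectrumRole'
FORMAT AS PARQUET;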
Performance
Amazon Redshift offers fast, industry-leading performance with flexibility. Amazon Redshift offers multiple features to achieve this superior performance, including:
• High-performing hardware: The Amazon Redshift service offers multiple node types to choose from based on your requirements. The latest-generation RA3 instances are built on the AWS Nitro System and feature high-bandwidth networking and performance indistinguishable from bare metal. These Amazon Redshift instances maximize speed for performance-intensive workloads that require large amounts of compute capacity, with the flexibility to pay by usage for storage and pay separately for compute by specifying the number of instances you need.
• AQUA (preview): AQUA (Advanced Query Accelerator) is a distributed and hardware-accelerated cache that enables Amazon Redshift to run up to ten times faster than any other cloud data warehouse. AQUA accelerates Amazon Redshift queries by running data-intensive tasks, such as filtering and aggregation, closer to the storage layer. This avoids networking bandwidth limitations by eliminating unnecessary data movement between where data is stored and compute clusters. AQUA uses AWS-designed processors to accelerate queries. This includes AWS Nitro chips adapted to speed up data encryption and compression, and custom analytics processors, implemented in field-programmable gate arrays (FPGAs), to accelerate operations such as filtering and aggregation. AQUA can process large amounts of data in parallel across multiple nodes and automatically scales out to add more capacity as your storage needs grow over time.
• Efficient storage and high-performance query processing: Amazon Redshift delivers fast query performance on datasets ranging in size from gigabytes to petabytes. Columnar storage, data compression, and zone maps reduce the amount of I/O needed to perform queries. Along with industry-standard encodings such as LZO and Zstandard, Amazon Redshift also offers a purpose-built compression encoding, AZ64, for numeric and date/time types to provide both storage savings and optimized query performance.
• Materialized views: Amazon Redshift materialized views enable you to achieve significantly faster query performance for analytical workloads, such as dashboarding, queries from BI tools, and ELT data processing jobs. You can use materialized views to store frequently used precomputations to speed up slow-running queries. Amazon Redshift can efficiently maintain the materialized views incrementally to speed up ELT and provide low-latency performance benefits. For more information, see Creating materialized views in Amazon Redshift.
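As a small illustration of the materialized view pattern, the sketch below precomputes a daily revenue rollup over the hypothetical fact_sales table from earlier examples; whether a given refresh can run incrementally depends on the query the view is defined on.

CREATE MATERIALIZED VIEW mv_daily_revenue AS
SELECT sale_date, SUM(amount) AS revenue, COUNT(*) AS order_count
FROM fact_sales
GROUP BY sale_date;

-- Dashboards query the precomputed rollup instead of scanning the fact table
SELECT sale_date, revenue
FROM mv_daily_revenue
ORDER BY sale_date DESC
LIMIT 30;

-- Pick up new fact rows, incrementally where the view definition allows it
REFRESH MATERIALIZED VIEW mv_daily_revenue;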
workloads Amazon Redshift provides two forms of compute elasticity : • Elastic resize — With the elastic resize feature you can quickly resize your Amazon cluster by adding nodes to get the resources needed for demanding workloads and to remove nodes when the job is complete to save cost Additional nodes are added or removed in minutes with minimal disruption to on going read and write queries Elastic resize can be automated using a schedule you define to accommodate changes in w orkload that occur on a regular basis Resize can be scheduled with a few clicks in the console or programmatically using the AWS command line interface ( AWS CLI) or an API call • Concurrency Scaling — With the Concurrency Scaling feature you can support virtually unlimited concurrent users and concurrent queries with consistently fast query performance When concurrency scaling is enabled Amazon Redshift automaticall y adds additional compute capacity when you need it to process an increase in concurrent read queries Write operations continue as normal on your main cluster Users always see the most current data whether the queries run on the main cluster or on a con currency scaling cluster Amazon Web Services Data Warehousing on AWS 16 Amazon Redshift enables you to start with as little as a single 160 GB node and scale up all the way to multiple petabytes of compressed user data using many nodes For more information see About Clusters and Nodes in the Amazon Redshift Cluster Management Guide Amazon Redshift Managed Storage Amazon Redshift managed storage enables you to scale and pay for compute and storage independently so you c an size your cluster based only on your compute needs It automatically uses high performance solidstate drive ( SSD)based local storage as tier1 cache and takes advantage of optimizations such as data block temperature data block age and workload pat terns to deliver high performance while scaling storage automatically when needed without requiring any action Operations As a managed service Amazon Redshift completely automates many operational tasks including : • Cluster Performance — Amazon Redshift p erforms Auto A NALYZE to maintain accurate table statistics It als o performs Auto VACUUM to ensure that the database storage is efficient and de leted data blocks are reclaimed • Cost Optimization — Amazon Redshift enables you to pause and resume the clusters that need to be available only at a specific time enabling you to suspend ondemand billing while the cluster is not being used Pause and resume can also be automated using a schedule you define to match your operational needs Cost controls can be defined on Amazon R edshift clusters to monitor and control your usage and associated cost for Amazon Redshift Spectrum and Concurrency Scaling features Redshift Advisor To help you improve performance and decrease the operating costs for your cluster Amazon Redshift has a feature called Amazon Redshift Advisor Amazon Redshift Advisor offers you specific recommendations about changes to make Advisor develops its customized recommendations by analyzing workload and usage metrics for your cluster These tailored recommendations relate to operations and cluster s ettings To help you prioritize your optimizations Advisor ranks recommendations by order of impact You can view Amazon Redshift Advisor analysis results and recommendations on the AWS Management Console Amazon Web Services Data Warehousing on AWS 17 Interfaces Amazon Redshift has custom Java Database Connectivity (JDBC) and Open 
Interfaces
Amazon Redshift has custom Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers you can download from the Connect Client tab of the console, which means you can use a wide range of familiar SQL clients. You can also use standard PostgreSQL JDBC and ODBC drivers. For more information about Amazon Redshift drivers, see Amazon Redshift and PostgreSQL in the Amazon Redshift Database Developer Guide.

Amazon Redshift provides a built-in Query Editor in the web console. The Query Editor is an in-browser interface for running SQL queries on Amazon Redshift clusters directly from the AWS Management Console. It's a convenient way for a database administrator (DBA) or a user to run queries as needed or diagnose queries.

You can also find numerous examples of validated integrations with many popular BI and ETL vendors. In these integrations, loads and unloads execute in parallel on each compute node to maximize the rate at which you can ingest or export data to and from multiple resources, including S3, Amazon EMR, and DynamoDB. You can easily load streaming data into Amazon Redshift using Amazon Kinesis Data Firehose, enabling near-real-time analytics with existing BI tools and dashboards you're already using today. You can locate metrics for compute utilization, memory utilization, storage utilization, and read/write traffic to your Amazon Redshift data warehouse cluster by using the console or Amazon CloudWatch API operations.

Security
To help provide data security, you can run Amazon Redshift inside a virtual private cloud based on the Amazon Virtual Private Cloud (Amazon VPC) service. You can use the software-defined networking model of the VPC to define firewall rules that restrict traffic based on the rules you configure. Amazon Redshift supports SSL-enabled connections between your client application and your Amazon Redshift data warehouse cluster, which enables data to be encrypted in transit. You can also leverage Enhanced VPC Routing to manage data flow between your Amazon Redshift cluster and other data sources. Data traffic is routed within the AWS network instead of the public internet. The Amazon Redshift compute nodes store your data, but the data can be accessed only from the cluster's leader node. This isolation provides another layer of security. Amazon Redshift integrates with AWS CloudTrail to enable you to audit all Amazon Redshift API calls.

To help keep your data secure at rest, Amazon Redshift supports encryption and can encrypt each block using hardware-accelerated Advanced Encryption Standard (AES) 256 encryption as each block is written to disk. This encryption takes place at a low level in the I/O subsystem; the I/O subsystem encrypts everything written to disk, including intermediate query results. The blocks are backed up as is, which means that backups are also encrypted. By default, Amazon Redshift takes care of key management, but you can choose to manage your keys using your own hardware security modules or manage your keys through AWS Key Management Service (AWS KMS).

Database security management is controlled by managing user access, granting the proper privileges to tables and views to user accounts or groups, and leveraging column-level grant and revoke to meet your security and compliance needs with finer granularity. In addition, Amazon Redshift provides multiple means of authentication to secure and simplify data warehouse access. You can use AWS Identity and Access Management (AWS IAM) within your AWS account. Use federated authentication if you already manage user identities outside of AWS via SAML 2.0-compatible identity providers, to enable your users to access the data warehouse without managing database users and passwords. Amazon Redshift also supports multi-factor authentication (MFA) to provide additional security.
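As a brief illustration of the column-level grant and revoke capability mentioned above, the following sketch restricts a hypothetical analyst user to two columns of the dim_customer table used in earlier examples; the user is assumed to already exist.

-- Allow the analyst to read only non-sensitive columns of the dimension table
GRANT SELECT (customer_key, region) ON dim_customer TO analyst_jane;

-- Ensure the same user has no privileges on the detailed fact table
REVOKE ALL ON fact_sales FROM analyst_jane;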
Cost Model
Amazon Redshift requires no long-term commitments or upfront costs. This pricing approach frees you from the capital expense and complexity of planning and purchasing data warehouse capacity ahead of your needs. Charges are based on the size and number of nodes in your cluster. If you use Amazon Redshift managed storage (RMS) with an RA3 instance, you pay separately for the amount of compute and RMS that you use.

If you need additional compute power to handle workload spikes, you can enable concurrency scaling. For every 24 hours that your main cluster runs, you accumulate one hour of credit to use this feature for free. Beyond that, you will be charged the per-second on-demand rate.

There is no additional charge for backup storage up to 100 percent of your provisioned storage. For example, if you have an active cluster with two XL nodes for a total of four terabytes (TB) of storage, AWS provides up to four TB of backup storage on S3 at no additional charge. Backup storage beyond the provisioned storage size, and backups stored after your cluster is terminated, are billed at standard Amazon S3 rates. There is no data transfer charge for communication between S3 and Amazon Redshift. If you use Redshift Spectrum to access data stored in your data lake, you pay for the query cost based on how much data the query scans. For more information, see Amazon Redshift Pricing.

Ideal Usage Patterns
Amazon Redshift is ideal for OLAP using your existing BI tools. Enterprises use Amazon Redshift to do the following:
• Run enterprise BI and reporting
• Analyze global sales data for multiple products
• Store historical stock trade data
• Analyze ad impressions and clicks
• Aggregate gaming data
• Analyze social trends
• Measure clinical quality, operational efficiency, and financial performance in health care

With the Amazon Redshift Spectrum feature, Amazon Redshift supports semi-structured data and extends your data warehouse to your data lake. This enables you to:
• Run as-needed analysis on large-volume event data, such as log analysis and social media
• Offload infrequently accessed history data out of the data warehouse
• Join external datasets with the data warehouse directly, without loading them into the data warehouse

Anti-Patterns
Amazon Redshift is not ideally suited for the following usage patterns:
• OLTP: Amazon Redshift is designed for data warehousing workloads, delivering extremely fast and inexpensive analytic capabilities. If you require a fast transactional system, you might want to choose a relational database system such as Amazon Aurora or Amazon RDS, or a NoSQL database such as Amazon DynamoDB.
• Unstructured data: Data in Amazon Redshift must be structured by a defined schema. Amazon Redshift doesn't support an arbitrary schema structure for each row. If your data is unstructured, you can perform ETL on Amazon EMR to get the data ready for loading into Amazon Redshift. For JSON data, you can store key-value pairs and use the native JSON functions in your queries (a short example follows this list).
• BLOB data: If you plan to store binary large object (BLOB) files, such as digital video, images, or music, you might want to store the data in S3 and reference its location in Amazon Redshift. In this scenario, Amazon Redshift keeps track of metadata (such as item name, size, date created, owner, location, and so on) about your binary objects, but the large objects themselves are stored in S3.
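For the JSON case noted above, the sketch below stores a JSON document in a VARCHAR column and extracts fields at query time with Redshift's native JSON functions; the table, column names, and paths are hypothetical.

CREATE TABLE raw_events (
    event_id BIGINT,
    payload  VARCHAR(65535)   -- JSON document kept as a string
);

SELECT event_id,
       JSON_EXTRACT_PATH_TEXT(payload, 'device', 'os') AS device_os,
       JSON_EXTRACT_PATH_TEXT(payload, 'page')         AS page
FROM raw_events
WHERE IS_VALID_JSON(payload);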
location and so on) about your binary objects but the large objects themselves are stored in S3 Amazon Web Services Data Warehousing on AWS 20 Migrating to Amazon Redshift If you decide to migrate from an existing data warehouse to Amazon Redshift which migration strategy you should choose depends on several factors: • The size of the database and its tables and objects • Network bandwidth between the source server and AWS • Whether the migration and switchover to AWS will be done in one step or a sequence of steps over time • The data change rate in the source system • Transformations during migration • The partner tool that you plan to use for migration and ETL OneStep Migration Onestep migration is a good option for small databases that don’t require continuous operation Customers can extract existing databases as comma separated value (CSV) files or columnar format like Parquet then use services such as AWS Snowball to deliver datasets to S3 for loading into Amazon Redshift Customers then test the destination Amazon Redshift database for data cons istency with the source After all validations have passed the database is switched over to AWS TwoStep Migration Twostep migration is commonly used for databases of any size: 1 Initial data migration — The data is extracted from the source databas e preferably during non peak usage to minimize the impact The data is then migrated to Amazon Redshift by following the one step migration approach described previously 2 Changed data migration — Data that changed in the source database after the initial data migration is propagated to the destination before switchover This step synchronizes the source and destination databases After all the changed data is migrated you can validate the data in the destination database perform necessary tests and i f all tests are passed switch over to the Amazon Redshift data warehouse Amazon Web Services Data Warehousing on AWS 21 Wave based Migration Large scale MPP data warehouse migration presents a challenge in terms of project complexity and is riskier Taking precaution s to break a complex migration project into multiple logical and systematic waves can significantly reduce the complexity and risk Starting from a workload that covers a good number of data sources and subject areas with medium complexity then add more data sources and subject areas in each subsequent wave See Develop an application migration methodology to modernize your data wareh ouse with Amazon Redshift for a description of how to migrate from the source MPP data warehouse to Amazon Redshift using the wave based migration approach Tools and Additional Help for Database Migration Several tools and technologies for data migrati on are available You can use some of these tools interchangeably or you can use other third party or open source tools available in the market 1 AWS Database Migration Service supports both the one step and the two step migration processes To follow the two step migration process you enable supplemental logging to capture changes to the source system You can enable supplemental lo gging at the table or database level 2 AWS Schema Conversion Tool (SCT) is a free tool that can convert the source database schema and a majority of the database code objects including vie ws stored procedures and functions to a format compatible with the target databases SCT can scan your application source code for embedded SQL statements and convert them as part of a database schema conversion project After schema conversion is compl ete 
SCT can help migrate a range of data warehouses to Amazon Redshift using built in data migration agents 3 Additional data integration partner tools include : • Informatica • Matillion • SnapLogic • Talend • BryteFlow Ingest • SQL Server Integration Services (SSIS) Amazon Web Services Data Warehousing on AWS 22 For more information on data integration and consulting partners see Amazon Redshift Partners We provide technical advice migration support and financial assistance to help eligible customer s quickly and cost effectively migrate from legacy data warehouses to Amazon Redshift the most popular and fastest cloud data warehouse Qualifying customers receive advice on application architecture migration strategies program management proof ofconcept and employee training that are customized for their technology landscape and migration goals We offer migration assistance through Amazon Database M igration Accelerator AWS Professional Services or our network of Partners These teams and organizations specialize in a range of data warehouse and analytics technologies and bring a wealth of experience acquired by migrating thousands of data warehouses and applications to AWS We also offer service credits to minimize the financial impact of the migration For more information see Migrate to Amazon Redshift Designing Data Warehousing Workflows In the previous sections we discussed the features of Amazon Redshift that make it ideally suited for data warehousing To understand how to design data warehousing workflows with Amazon Redshift let’s look at the most common design pattern along with an example use case Suppose that a multinational clothing maker has more than a thousand retail stores sells certain clothing lines through department and discount stores and has an online presence From a technical standpoint these three channels currently operate independently They have different management point ofsale systems and accounting departments No single system merges all the related datasets together to p rovide the CEO with a 360 degree view across the entire business Suppose the CEO wants to get a company wide picture of these channels and perform analytics such as the following: • What trends exist across channels? • Which geographic regions do better across channels? • How effective are the company’s advertisements and promotions? • What trends exist across each clothing line? • Which external forces have impacts on the company’s sales ; for example the unemployment rate and weather conditions? • What onli ne ads are most effective? Amazon Web Services Data Warehousing on AWS 23 • How do store attributes affect sales ; for example tenure of employees and management strip mall versus enclosed mall location of merchandise in the store promotion endcaps sales circulars and in store displays? 
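To make questions like these concrete, the sketch below shows the kind of cross-channel query an analyst could eventually run against the consolidated Amazon Redshift warehouse over JDBC. It is illustrative only and not part of the original paper: the cluster endpoint, the credentials, and the sales.channel_sales table (with channel, sale_date, and revenue columns) are hypothetical, and the Amazon Redshift JDBC driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CrossChannelTrends {
    public static void main(String[] args) throws SQLException {
        // Hypothetical cluster endpoint and credentials.
        String url = "jdbc:redshift://examplecluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev";
        // Monthly revenue per sales channel from a hypothetical consolidated fact table.
        String sql = "SELECT channel, DATE_TRUNC('month', sale_date) AS sales_month, SUM(revenue) AS revenue "
                   + "FROM sales.channel_sales "
                   + "GROUP BY 1, 2 "
                   + "ORDER BY 2, 1";
        try (Connection conn = DriverManager.getConnection(url, "analyst_user", "analyst_password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%-12s %s %12.2f%n",
                        rs.getString("channel"), rs.getString("sales_month"), rs.getDouble("revenue"));
            }
        }
    }
}

Because the warehouse merges all three channels into one schema, a single query of this shape can answer a question that previously required pulling data from three separate systems.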
An enterprise data warehouse solves this problem. It collects data from each of the three channels' various systems and from publicly available data such as weather and economic reports. Each data source sends data daily for consumption by the data warehouse. Clickstream data are streamed continuously and stored on S3. Because each data source might be structured differently, an ETL process is performed to reformat the data into a common structure. Then analytics can be performed across data from all sources simultaneously. To do this, we use the following data flow architecture:

Figure 4: Enterprise data warehouse workflow

1. The first step is getting the data from different sources into S3. S3 provides a highly durable, inexpensive, and scalable storage platform that can be written to in parallel from many different sources at a low cost.
2. For batch ETL, you can use either Amazon EMR or AWS Glue. AWS Glue is a fully managed ETL service that simplifies ETL job creation and eliminates the need to provision and manage infrastructure. You pay only for the resources used while your jobs are running. AWS Glue also provides a centralized metadata repository. Simply point AWS Glue to your data stored in AWS, and AWS Glue discovers your data and stores the associated table definition and schema in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, can be queried, and is available for ETL.
3. Amazon EMR can transform and cleanse the data from the source format into the destination format. Amazon EMR has built-in integration with S3, which allows parallel threads of throughput from each node in your Amazon EMR cluster to and from S3. Typically, a data warehouse gets new data on a nightly basis. Because there is usually no need for analytics in the middle of the night, the only requirement around this transformation process is that it finishes by the morning, when the CEO and other business users need to access reports and dashboards. You can use the Amazon EC2 Spot market to further bring down the cost of ETL. A good Spot strategy is to start bidding at a low price at midnight and continually increase your price over time until capacity is granted. As you get closer to the deadline, if Spot bids have not succeeded, you can fall back to On-Demand prices to ensure you still meet your completion time requirements. Each source might have a different transformation process on Amazon EMR, but with the AWS pay-as-you-go model you can create a separate Amazon EMR cluster for each transformation and tune each cluster to be exactly the right capacity to complete all data transformation jobs without contending for resources with the other jobs.
4. Each transformation job loads formatted, cleaned data into S3. We use S3 here again because Amazon Redshift can load the data in parallel from S3 using multiple threads from each cluster node. S3 also provides a historical record and serves as the formatted source of truth between systems. Data on S3 is cataloged by AWS Glue, and the metadata is stored in the AWS Glue Data Catalog, which allows it to be consumed by other tools for analytics or machine learning if additional requirements are introduced over time.
5. Amazon Redshift loads, sorts, distributes, and compresses the data into its tables so that analytical queries can execute efficiently and in parallel. If you leverage an RA3 instance with Amazon Redshift managed storage, Amazon Redshift can automatically scale storage as your data increases. As the business expands,
you can enable Amazon Redshift concurrency scaling to handle more and more user requests and keep near linear performance With new workload s are added you can increase data warehouse capacity in minutes by adding more nodes via Amazon Redshift elastic resize 6 Clickstream data is stored o n S3 via Kinesis Data F irehose hourly or even more frequently Because Amazon Redshift can query S3 external data via Spectrum without having to load them into a data warehouse you can track the customer online journey in near real time and join it with sales data in your data warehouse to understand customer behavior better This provides a more complete picture of customers and enables business users to get insight sooner and take action Amazon Web Services Data Warehousing on AWS 25 7 To visualize the analytics you can use Amazon QuickSight or one of the many partner visualization platforms that connect to Amazon Redshift using ODBC or JDBC This point is where the CEO and their staff view reports dashboards and charts Now executives can use t he data for making better decisions about company resources which ultimately increase s earnings and value for shareholders You can easily expand this flexible architecture when your business expands opens new channels launches additional customer specific mobile applications and brings in more data sources It takes just a few clicks in the Amazon Redshift Management Console or a few API calls Conclusion There is a strategic shift in data warehousing as enterprises migrate their analytics datab ases and solutions from on premises solutions to the cloud to take advantage of the cloud’s simplicity performance elasticity and cost effectiveness This whitepaper offers a comprehensive account of the current state of data warehousing on AWS AWS provides a broad set of services and a strong partner ecosystem that enable customers to easily build and run enterprise data warehousing in the cloud The result is a highly performant cost effective analytics architecture that can scale with your busines s on the AWS global infrastructure Contributors Contributors to this document include: • Anusha Challa Sr Analytics SSA Amazon Web Services • Corina Radovanovich Sr Product Marketing M anager Amazon Web Services • Juan Yu Sr Analytics SSA Amazon Web Services • Lucy Friedmann Product Marketing M anager Amazon Web Services • Manan Goel Principal Product Manager Amazon Web Services Further Reading For additional information see: • Amazon Redshift FAQs • Amazon Redshift lake house architecture • Amazon Redshift customer success Amazon Web Services Data Warehousing on AWS 26 • Amazon Redshift best practices • Implementing workload management • Querying external data using Amazon Redshift Spectrum • Amazon Redshift Documentation • Amazon Redshift system overview • What is Amazon Redshift? • AWS Key Management Service (KMS) • Amazon Redshift JSON functions • Amazon Redshift pricing • Amazon Redshift Partners • AWS Database Migration Service • Develop an application migration methodology to modernize your data warehouse with Amazon Redshift (blog entry) • What is Streaming Data? • Colum noriented DBMS Document Revisions Date Description January 2021 Updated to include latest features and capabilities March 2016 First publication
|
General
|
consultant
|
Best Practices
|
Database_Caching_Strategies_Using_Redis
|
ArchivedDatabase Caching Strategies Using Redis May 2017 This paper has been archived For the latest technical content see https://docsawsamazoncom/whitepapers/latest/database cachingstrategiesusingredis/welcomehtmlArchived Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2017 Amazon Web Services Inc or its affiliates All rights reserved Archived Contents Database Challenges 1 Types of Database Caching 1 Cach ing Patterns 3 Cache Aside (Lazy Loading) 4 Write Through 5 Cache Validity 6 Evictions 7 Amazon ElastiCache and Self Managed Redis 8 Relational Database Caching Techniques 9 Cache the Database SQL ResultSet 10 Cache Select Fields and Values in a Custom Format 13 Cache Select Fields and Values into an Aggregate Redis Data Structure 14 Cache Serialized Applicati on Object Entities 15 Conclusion 17 Contributors 17 Further Reading 17 Archived Abstract Inmemory data caching can be one of the most effective strategies for improving your overall application performance and reducing your database costs You can apply c aching to any type of database including relational databases such as Amazon Relational Database Service (Amazon RDS) or NoSQL databases such as Amazon DynamoDB MongoDB and Apache Cassandra The best part of caching is that it’s easy to implement and it dramatically improves the speed and scalability of your application This w hitepaper describes some of the caching strategies and implementation approaches that address the limitations and challenges associated with disk based databases ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 1 Database Challenges When you’re building distributed applications that require low latency and scalability disk based databases can pose a number of challenges A few common ones include the following : • Slow processing queries: There are a number of query optimization techniques and schema designs that help boost query performance However the data retrieval speed from disk plus the added query processing times generally put your query response times in double digit millis econd speeds at best This assumes that you ha ve a steady load and your da tabase is performing optimally • Cost to scale: Whether the data is distributed in a disk based NoSQL database or vertically scaled up in a relational database scaling for extremely high reads can be costly It also can require several database read replicas to match what a single in memory cache node can deliver in terms of requests per second • The need to simplify data access: Although relational databases provide an excellent means to data model relationships they aren’t optimal for data access There are instances where your applications may want to access the data in a particular structure or view to simplify data retrieval and increase application performance Before implementing database caching many architects and engine ers spend 
great effort trying to extract as much performance as they can from their database s However there is a limit to the performance that you can achieve with a disk based database and it’s counterproductive to try to solve a problem with the wrong tools For example a large portion of the latency of your database query is dictated by the physics of retrieving data from disk Types of Database Caching A database cache supplements your primary database by removing unnecessary pressure on it typically in the form of frequently accessed read data The cache itself can live in several areas including in your database in the applic ation or as a standalon e layer The following are the three most common types of database caches: ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 2 • Database integrated caches: Some databases such as Amazon Aurora offer an integrated cache that is managed within the database engine and has built in write through capabilities1 The database updates its cache automatically when the underlying data changes Nothing in the application tier is required to use this cache The downside of integrated caches is their size and capabilities Integrated caches are typically limited to the available memory that is allocated to the cache by the database instance and can’t be used for other purposes such as sha ring data with other instances • Local caches: A local cache stores your frequently used data within your application This makes data retrieval faster than other caching architectures because it removes network traffic that is associated with retrieving data A major disadvantage is that amo ng your applications each node has its own resident cache working in a disconnected manner The information that is stored in an individual cache node whether it ’s cached database rows web content or session data can’t be shared with other local cache s This creates challenges in a distributed environment where information sharing is critical to support scalable dynamic environments Because most applications use multiple application servers coordinating the values across them becomes a major challenge if each server has its own cache In addition when outages occur the data in the local cache is lost and must be rehydrated which effectively negat es the cache The majority of these disadv antages are mitigated with remote caches • Remote caches: A remote cache (or “side cache”) is a separate instance (or instances) dedicated for sto ring the cached data in memory Remote caches are stored on dedicated servers and are typically built on key/va lue NoSQL stores such as Redis2 and Memcached 3 They provide hundreds of thousands and up to a million requests per second per cache node Many solutions such as Amazon ElastiCache for Redis also provide the high availability need ed for critical workloads4 ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 3 The average latency of a request to a remote cache is on the sub millisecond timescale which is orders of magnitude faster than a request to a diskbased database At these spe eds local caches are seldom necessary Remote caches are ideal for distributed environment s because they work as a connected cluster that all your disparate systems can use Howev er when network latency is a concern you can apply a two tier caching strategy that uses a local and remote cache together This paper doesn’t describe this strategy in detail but it’s typically used only when needed because of the complexity it adds With remote 
caches the orchestration between caching the data and managing the validity of the data is managed by your applications and/or processes that use it The cache itself is not directly connected to the database but is used adjacently to it The remainder of this paper focus es on using remote caches and specifically Amazon ElastiCache for Redis for caching relational database data Caching Patterns When you are caching data from your database t here are caching patterns for Redis5 and Memcached6 that you can implement including proactive and reactive approaches Th e patterns you choose to implement should be directly related to your caching and application objectives Two common approaches are cache aside or lazy loading (a reactive approach) and write through (a proactive approach) A cache aside cache is updated after the data is requested A writethrough cache is updated immediately when the primary database is updated With both approaches the application is essentia lly managing what data is being cached and for how long The following diagram is a typical representation of an architecture that uses a remote distributed cache ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 4 Figure 1: Architecture using remote distributed cache Cache Aside (Lazy Loading) A cache aside cache is the most common caching strategy available The fundamental data retrieval logic can be summarized as fo llows: 1 When your application needs to read data from the database it checks the cache first to determine whether the data is available 2 If the data is available (a cache hit) the cached data is returned and the response is issued to the caller 3 If the data isn’t available (a cache miss) the database is queried for the data The cache is then populated with the data that is retrieved from the database and the data is returned to the caller Figure 2: A cache aside cache This approach has a couple of advantages: ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 5 • The cach e contains only data that the application actually requests which helps keep the cache size cost effective • Implementing this approach is straightforward and produces immediat e performance gains whether you use an application framework that encapsulates lazy caching or your own custom application logic A disadvantage when using cache aside as the only caching pattern is that because the data is loaded into the cache only after a cache miss some overhead is added to the initial response time because additional roundtrips to the cache and database are needed Write Through A write through cache reverses the order of how the cache is populated Instead of lazyloading the data in the cache after a cache miss the cac he is proactively updated immediately following the primary database update The fundamental data retrieval logic can be summarized as follows : 1 The a pplication batch or backend proces s updates the primary database 2 Immediately afterward the dat a is also updated in the cache Figure 3: A write through cache The write through pattern is almost always implemented along with lazy loading If the application gets a cache miss because the data is not present or has expired the lazy loading pattern is performed to update the cache The write through approach has a couple of advantages: ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 6 • Because the cache is uptodate with the primary database there is a much greater likelihood that the data will be found in the cache 
This in turn result s in better overall application performance and user experience • The performance of your d atabase is optimal because fewer database reads are performed A disadvantag e of the write through approach is that infrequently requested data is also written to the cache resulting in a larger and more expensive cache A proper caching strategy includes effective use of both write through and lazy loading of your data and setting an appropriate expiration for the data to keep it relevant and lean Cache Validity You can control the freshness of your cached data by applying a time to live (TTL) or “expiration” to your cached keys After the set time has passed the key is deleted from the cache and access to the origin data store is required along with reaching the updated data Two principles can help you determine the appropriate TTLs to appl y and the type of caching patterns to implement First it’s important that you understand the rate of change of the underlying data Second it’s important that you evaluate the risk of outdated data being returned back to your application instead of its updated counterpart For example it might make sense to keep static or reference data (that is data that is seldom updated ) valid for longer periods of time with write throughs to the cache when th e underlying data gets updated With dynamic data that changes often you might want to apply lower TTLs that expire the data at a rate of change that matches that of the primary dat abase This lowers the risk of returning outdated data while still providing a buffe r to offload database requests It’s also important to recognize that even if you are only caching data for minutes or seconds versus longer durations appropriately apply ing TTLs to ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 7 your cached keys can result in a huge performance boost and an overall better user experience with your application Another best practice when applying TTLs to your cache keys is to add some time jitter to your TTLs This reduces the possibili ty of heavy database load occurring when your cached data expires Take for example the scenario of caching product information If all your product data expires at the same time and your application is under heavy load then your backend database has to fulfill all the product requests Depending on the load that could generate too much pressure on your database resulting in poor performance By adding slight jitter to your TTLs a random ly generated time value (eg TTL = your initial TTL value in seconds + jitter) would reduce th e pressure on your backend database and also reduce the CPU use on your cache engine as a result of deleting expired keys Evictions Evictions occur when cache memory is overfilled or is greater than the maxmemory setting for the cache causing the engine selecting keys to evict in order to manage its memory The keys that are chosen are based on the eviction policy you select By default Amazon ElastiCache for Redis sets the volatile lru eviction policy to your Redis c luster When t his policy is select ed the least recently used keys that have an expiration (TTL) value set are evicted Other eviction policies are available and can be applied in the config urable maxmemory policy parameter The following table summarizes e viction policies: Eviction Policy Description allkeys lru The cache evicts the least recently used (LRU) keys regardless of TTL set allkeys lfu The cache evicts the least frequently used (LFU) keys regardless of 
TTL set volatile lru The cache evicts the least recently used (LRU) keys from those that have a TTL set ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 8 Eviction Policy Description volatile lfu The cache evicts the least frequently used (LFU) keys from those th at have a TTL set volatile ttl The cache evicts the keys with the shortest TTL set volatile random The cache randomly evicts keys with a TTL set allkeys random The cache randomly evicts keys regardless of TTL set noeviction The cache doesn’t evict keys at all This blocks future writes until memory frees up A good strategy in selecting an appropriate eviction policy is to consider the data stored in your cluster and the outcome of keys being evicted Generally least recently used ( LRU)based policies are more common for basic caching use cases However depending on your objectives you might want to use a TTL or random based eviction policy that better suits your requirements Also if you are experiencing evictions with your cluster it is usually a sign that you should scale up (that is use a node with a larger memory footprint ) or scale out (that is add more nodes to your cluster) to accommodate the additional data An exce ption to this rule is if you are purposefully relying on the cache engine to manage your keys by means of eviction also referred to an LRU cache 7 Amazon ElastiCache and Self Managed Redis Redis is an open source inmemory data store that has become the most popular key/value engine in the market Much of its popularity is due to its support for a variety of data structures as well as other features including Lua scripting support8 and Pub/Sub messaging capability Other added benefits include high availab ility topologies with support for read replicas and the ability to persist data Amazon ElastiCache offers a fully manage d service for Redis This means that all the administrative tasks associated with managing your Redis cluster including monitoring patching backups and automatic failover are managed ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 9 by Amazon This lets you focus on your business and your data instea d of your operations Other benefits of using Amazon ElastiCache for Redis over self managing your cache environment include the following : • An enhanced Redis engine that is fully compatible with the open source version but that also provides added stabilit y and robustness • Easily modifiable parameters such as eviction policies buffer limits etc • Ability to scale and resize your cluster to terabytes of data • Hardened security that lets you isolate your cluster within Amazon Virtual Private Cloud (Amazon VPC)9 For more information about Redis or Amazon ElastiCache see the Further Reading section at the end of this whitepaper Relational Da tabase Caching Techniques Many of the caching techniques that are described in this section can be applied to any type of database However this paper focuses on relational databases because they are the most common database caching use case The basic paradigm when you query data from a relational database includes executing SQL statements and iterating over the returned ResultSet object cursor to retrieve the database rows There are several techniques you can apply when you want to cache the returned data However it’s best to choose a method that simplifies your data access pattern and/or optimizes the architectur al goals that you have for your application To visualize this we’ll examine snippets of Java code to 
explain the logic You can find additional information on the AWS caching site10 The examples use the Jedis Redis client library11 for connecting to Redis although you can use any Java Redis library including Lettuce12 and Redisson 13 Assume that you issued the following SQL statement against a customer database for CUSTOMER_ID 1001 We’ll examine the various cachi ng strategies that you can use SELECT FIRST_NAME LAST_NAME EMAIL CITY STATE ADDRESS COUNTRY FROM CUSTOMERS WHERE CUSTOMER_ID = “1001”; ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 10 The query returns this record: … Statement stmt = connectioncreateStatement(); ResultSet rs = stmtexecuteQuery(query); while (rsnext()) { Customer customer = new Customer(); customersetFirstName(rsgetString("FIRST_NAME")); customersetLastName(rsgetString("LAST_NAME")); and so on … } … Iterating over the ResultSet cursor lets you retrieve the fields and values from the database rows From that point the application can choose where and how to use that data Let’s also assume that your application framework can ’t be used to abstract your caching implementation How do you best cac he the returned database data? Given this scenario you have many options The following sections evaluate some options with focus on the caching logic Cache the Database SQL ResultSet Cache a serialized ResultSet object that conta ins the fetched database row • Pro: When data retrieval logic is abstracted (eg as in a Data Access Object14 or DAO layer) the consuming code expects only a ResultSet object and does not need to be made aware of its origination A ResultSet object can be iterated over r egardless of w hether it originated from the database or was deserialized from the cache which greatly reduc es integration logic This pattern can be appli ed to any relational database • Con: Data retrieval still requires extracting values from the ResultSet object cursor and does not further simplify data access; it only reduces data retrieval latency ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 11 Note : When you cach e the row it’s important that it’s serializable The following example uses a CachedRowSet implementation for this purpose When you are using Redis this is stored as a byte array value The following code converts the CachedRowSet object into a byte arra y and then stores that byte array as a Redis byte array value The actual SQL statement is stored as the key and converted into bytes … // rs contains the ResultSet key contains the SQL statement if (rs != null) { //lets write through to the cache CachedRowSet cachedRowSet = new CachedRowSetImpl(); cachedRowSetpopulate(rs 1); ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutput out = new ObjectOutputStream(bos); outwriteObject(cachedRowSet); byte[] red isRSValue = bos toByteArray(); jedis set(keygetBytes() redisRSValue); jedis expire(keygetBytes() ttl); } … The nice thing about storing the SQL statement as the key is that it enable s a transparent caching abstraction layer that hides the implementation details The other added benefit is that you don’t need to create any additional mappings between a custom key ID and the executed SQL statement The last statement executes an expire command to apply a TTL to the stored key This code follows our write through logic in that upon querying the database the cached value is stored immediately afterward For lazy caching you would initially query the cache before executing the query again st the database To 
hide the implementation details use the DAO pattern and expose a generic method for your application to retrieve the data For example because your key is the actual SQL statement your method signature could look like the following: public ResultSet getResultSet(String key); // key is sql statement ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 12 The code that calls (consum es) this method expects only a ResultSet object regardless of what the underlying implementation details are for the interface Under the hood the getResultSet method execute s a GET command for the SQL key which if present is deserialize d and convert ed into a ResultSet object public ResultSet getResultSet(String key) { byte [] redisResultSet = null ; redisResultSet = jedis get(keygetBytes()); ResultSet rs = null ; if (redisResultSet != null ) { // if cached value exists deserialize it and return it try { cachedRowSet = new CachedRowSetImpl(); ByteArrayInputStream bis = new ByteArrayInputStream(redisResultSe t); ObjectInput in = new ObjectInputStream(bis); cachedRowSetpopulate((CachedRowSet) inreadObject()); rs = cachedRowSet; } … } else { // get the ResultSet from the database store it in the rs object then cache it … } … return rs; } If the data is not present in the cache query the database for it and cache it before returning As mentioned earlier a best practice would be to apply an appropriate TTL on the keys as well For all other caching techniques that we’ll review you should establish a naming convention for your Redis keys A good naming convention is one that is easily predictable to applications and developers A hierarchical structure separated by colons is a common naming convention for keys such as object:type:id ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 13 Cache Select Fields and Values in a Custom Format Cache a subset of a fetched database row into a cust om structure that can b e consumed by your applications • Pro: This approach is easy to implement You essentially store specific retrieved fields and values into a structure such as JSON or XML and then SET that structure into a Redis string The format you choose should be something that conforms to your application ’s data access pattern • Con: Your application is using different types of objects when querying for particular data ( eg Redis string and database results) In addition you are required to parse through the entire structure to retrieve the individual attributes associated with it The following code stores specific customer attributes in a customer JSON object and caches that JSON object into a Redis string : … // rs contains the ResultSet while (rsnext()) { Customer customer = new Customer(); Gson gson = new Gson(); JsonObject customerJSON = new JsonObject(); customersetFirstName(rsgetString("FIRST_NAME")); customerJSONadd(“first_name” gsontoJsonTree(customergetFirstName() ); customersetLastName(rsgetStri ng("LAST_NAME")); customerJSONadd(“last_name” gsontoJsonTree(customergetLastName() ); and so on … jedisset(customer:id:"+customergetCustomerID() customerJSONtoString() ); } … For data retrieval you can implement a generic method through an interface that accepts a customer key (eg customer:id:1001) and a n SQL statement ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 14 string argument It will also return whatever structure your application requires (eg JSON XML) and abstract the underlying det ails Upon initial request the application execute s a GET 
command on the customer key and if the value is present return s it and complete s the call If the value is not present it queries the database for the record write sthrough a JSON representation of the data to the cache and return s Cache Select Fields and Values into an Aggregate Redis Data Structure Cache the fetched database row into a specific data structure that can simplif y the application ’s data access • Pro: When converting the ResultSet object into a format that simplifies access such as a Redis Hash your application is able to use that data more effectively This technique simplifies your data access pattern by reducing the need to iterate over a ResultSet object or by parsing a structure like a JSON object stored in a string In addition working with aggregate data structures such as Redis Lists Sets and Hashes provide various attrib ute level commands associated with setting and getting data eliminating the overhead associated with processing the data before being able to leverage it • Con: Your application is using different t ypes of objects when querying for particular data ( eg Redis Hash and database results) The following code creates a HashMap object that is used to store the customer data The map is populated with the database data and SET into a Redis … // rs contai ns the ResultSet while (rsnext()) { Customer customer = new Customer(); Map<String String> map = new HashMap<String String>(); customersetFirstName(rsgetString("FIRST_NAME")); mapput("firstName" customergetFirstName()); customersetLastName(rsgetString("LAST_NAME")); mapput("lastName" customergetLastName()); and so on … ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 15 jedishmse t(customer:id:"+customergetCustomerID() map); } … For data retrieval you can implement a generic method through an interface that accepts a customer ID (the key) and a n SQL statement argument It return s a HashMap to the caller Just as in the other examples you can hide the details of where the map is originating from First your application can query the cache for the customer data using the customer ID key If the data is not present the SQL statement execute s and retrieve s the data from the dat abase Upon retrieval you may also store a hash representation of that customer ID to lazy load Unlike JSON the added benefit of storing your data as a hash in Redis is that you can query for individual attributes within it Say that for a given request you only want to respond with specific attributes associated with the customer Hash such as the customer name and address This flexibility is supported in Redis along with various other features such as adding and deleting individ ual attributes in a map Cache Serialized Application Object Entities Cache a subset of a fetched database row into a custom structure that can b e consumed by your applications • Pro: Use application objects in their native application state with simple serializing and deserializing techniques This can rapidly accelerate application performance by minimizing data transformation logic • Con: Advanced application development use case The following code converts the customer object into a byte array and then stores that value in Redis: … // key contains customer id Customer customer = (Customer) object; ByteArrayOutputStream bos = new ByteArrayOutputStream(); ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 16 ObjectOutput out = null ; try { out = new Object OutputStream(bos); outwriteObject(customer); outflush(); 
byte [] objectValue = bostoByteArray(); jedis set(keygetBytes() objectValue); jedis expire(keygetBytes() ttl); } … The key identifier is also stored as a byte representation and can be represented in the customer:id:1001 format As the other examples show you can create a generic method through an application interface that hides the underlying details method details In this example when instantiating an object or hydrating one with state the method accepts the customer ID (the key) and either returns a customer object from the cache or constructs one after querying the backend database First your application queries the cache for the serialized customer object using the customer ID If the data is not present the SQL statement execute s and the application consume s the data hydrate s the customer entity ob ject and then lazy load s the serialized representation of it in the cache public Customer getObject(String key) { Customer customer = null ; byte [] redisObject = null ; redisObject = jedis get(keygetBytes()); if (redisObject != null ) { try { ByteArrayInputStream in = new ByteArrayInputStream(redisObject); ObjectInputStream is = new ObjectInputStream(in); customer = (Customer) isreadObject(); } … ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 17 } … return customer; } Conclusion Modern applications can’t afford poor performance Today’s users have low tolerance for slow running applications and poor user experiences When low latency and scaling databases are critical to the success of your applications it’s imperative that you use database caching Amazon ElastiCache provides two managed in memory key value stores that you can use for database caching A managed service further simplifies using a cache in that it removes the administrative tasks associated with support ing it Contributors The following individuals and organizations contributed to this document: • Michael Labib Specialist Solutions Architect AWS Further Reading For more information see the following resources : • Performance at Scale with Amazon ElastiCache (AWS whitepaper)15 • Full Redis command list16 1 https://awsamazoncom/rds/aurora/ 2 https://redisio/download 3 https://memcachedorg/ 4 https://awsamazoncom/elasticache/redis/ 5 https://docsawsamazoncom/AmazonElastiCache/latest/red ug/Strategieshtml Notes ArchivedAmazon Web Services – Database Caching Strategies using Redis Page 18 6 https://docsawsamazoncom/AmazonElastiCache/latest/mem ug/Strategieshtml 7 https://redisio/topics/lru cache 8 https://wwwluaorg/ 9 https://awsamazoncom/vpc/ 10 https://awsamazoncom/caching/ 11 https://githubcom/xetorthio/jedis 12 https://githubcom/wg/lettuce 13 https://githubcom/redisson/redisson 14 http://wwworaclecom/technetwork/java/dataaccessobject 138824html 15 https://d0awsstaticcom/whitepapers/performance atscale withamazon elasticachepdf 16 https://redisio/commands
|
General
|
consultant
|
Best Practices
|
Demystifying_the_Number_of_vCPUs_for_Optimal_Workload_Performance
|
ArchivedDemystifying the Number of vCPUs for Optimal Workload Performance September 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 2 © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its a ffiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 3 Contents Abstract 4 Introduction 5 Methodology 6 Discussion by Example 8 Best Practices 10 Conclusion 13 Contributors 13 ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 4 Abstract Following industry standard rules of thumb when migrating physical servers or desktops into a virtual environment doesn’t ensure optimal CPU performance after consolidation especially for CPU intensive workloads This paper describes a proven scientific methodology for benc hmarking CPU performance for different CPU generations with detailed examples to achieve optimal performance Learn how to choose Amazon EC2 instance types based on CPU resources and apply best practices for CPU selection with Amazon EC2 ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 5 Introduction When you migrate physical servers or desktops to a virtual environment using a hypervisor (such as ESX Hyper V KVM Xen etc) you’re typically advised to follow industry standard rules of thumb for high workload consolidation For example you might b e advised to use 1 CPU core for every 2 virtual machines (VMs) However this ratio might not provide a realistic estimate for CPUs with high clock speeds such as thos e running at 16 GHz to 33 GHz You should use a higher consolidation ratio with faster CPUs New generation CPUs provide better performance even when running at the same clock speed or with the same number of CPU cores compared with prior generation CPUs The price performance ratio w ith new CPUs is better as well So how do we benchmark the CPU performance for different CPU generations to get the optimal performance after VM consolidation? 
As part of the answer, and to ensure predictable results, we should have a scientific approach to determining the most appropriate CPU sizing. Remember that undersizing a CPU resource can cause a poor user experience, and oversizing a CPU resource can cause wasted resources and higher Operating Expenses (OPEX), yielding a higher Total Cost of Ownership (TCO). This paper examines a proven methodology for choosing the right Amazon Elastic Compute Cloud (EC2) instance types based on CPU resources and includes detailed examples. In addition, some best practices for CPU selection with Amazon EC2 are discussed.

Methodology

Step 1: Normalize the CPU performance index (Pi) for different generation CPUs using the Moore's Law equation (1):

P_i(t) = 2^{0.05556 t}   (1)

where P_i(t) is the CPU performance index t months after the reference month (t = 0). In other words, if we are trying to migrate a system with CPU A, first sold in January 2015, to CPU B, first sold in June 2016, then the performance index for CPU A is P_i(0) = 1 and for CPU B is P_i(18) = 2. (Footnote 1: In the mid-1960s, Gordon Moore, the co-founder of Intel, made the observation that computer power, measured by the number of transistors that could be fit onto a chip, doubled every 18 months. This law has performed extremely well over the preceding years.)

Step 2: Determine the normalized CPU utilization, in terms of clock speed (GHz), of the current workload by inserting Equation (1) into Equation (2). The normalized CPU utilization (CPU Utilization (Norm)) equation is shown below:

CPU Utilization (Norm) = #CPU × #Core × CPU Freq × CPU Utilization × P_i(t)   (2)

Where
▪ #CPU = Current number of CPU sockets per physical server. If it is a VM, it should be equivalent to 1.
▪ #Core = Current number of CPU cores per physical server. If it is a VM, it should be equivalent to the number of currently deployed vCPUs (we are assuming that there is no oversubscription in this case). If hyper-threading is enabled, the number of CPU cores or vCPUs should be doubled.
▪ CPU Freq = Current CPU clock speed in GHz.
▪ CPU Utilization = Current CPU utilization as a percentage.
▪ P_i(t) = Performance index of the current CPU at month t.

Step 3: Determine the estimated CPU utilization by reserving a sufficient buffer for a workload spike. This is calculated by inserting the required headroom, in terms of percentage (%), into Equation (3), which gives a conservative estimate of the CPU sizing to avoid suboptimal performance. The estimated CPU utilization (CPU Utilization (Est)) equation is shown below:

CPU Utilization (Est) = CPU Utilization (Norm) × (1 + Headroom)   (3)

Where
▪ Headroom = Percentage of CPU resource reserved as a buffer for a workload spike.

Step 4: Refer to Amazon EC2 Instance Types to find the most appropriate CPU type for particular instance classes by using Equation (4):

CPU Utilization (Est) ≤ CPU Capacity (new) = (#vCPU (new) / 2) × CPU Freq (new) × P_i(new)(t)   (4)

Where
▪ #vCPU (new) = Newly selected number of vCPUs for the Amazon EC2 instance. It is divided by 2 since hyper-threading is used on the Amazon EC2 instance.
▪ CPU Freq (new) = Newly designated CPU clock speed (GHz) for the Amazon EC2 instance.
▪ P_i(new)(t) = Performance index for the new vCPUs at month t.
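The four equations above reduce to straightforward arithmetic. The following minimal Java sketch, which is not part of the original paper, applies them to a hypothetical source server and a hypothetical candidate instance; every figure in it is an illustrative assumption.

public class CpuSizingExample {

    // Equation (1): the performance index doubles every 18 months (0.05556 is approximately 1/18).
    static double perfIndex(double monthsSinceReference) {
        return Math.pow(2.0, 0.05556 * monthsSinceReference);
    }

    public static void main(String[] args) {
        // Hypothetical source server: 2 sockets x 6 cores at 2.4 GHz,
        // 50% peak utilization, CPU first sold at the reference month (t = 0).
        double normalized = 2 * 6 * 2.4 * 0.50 * perfIndex(0);   // Equation (2), in GHz
        double estimated  = normalized * (1 + 0.15);             // Equation (3), 15% headroom

        // Hypothetical candidate instance: 8 vCPUs (hyperthreaded, so divided by 2)
        // at 2.3 GHz, with a CPU first sold 18 months after the reference CPU.
        double capacity = (8 / 2.0) * 2.3 * perfIndex(18);       // Equation (4), in GHz

        System.out.printf("Estimated need: %.1f GHz, candidate capacity: %.1f GHz -> %s%n",
                estimated, capacity, estimated <= capacity ? "candidate fits" : "undersized");
    }
}

The divide-by-two term mirrors the paper's assumption that each vCPU on the target instance is a hyperthread rather than a full physical core.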
Discussion by Example

Step 1: Table 1 shows the performance index, calculated using Equation (1), for various CPU models. The oldest CPU model, the Xeon E5640, is used as the benchmark. Both the Xeon E5640 and E5647 models belong to the current state of usage.

Table 1: CPU performance index for various CPU models

Step 2: Table 2 shows the total CPU utilization in GHz, after using Equation (2), for all the physical servers' workloads that will be migrated to Amazon EC2.

Table 2: Normalized CPU utilization in GHz

Step 3: Table 3 shows the estimated CPU utilization in GHz after we include the buffer using Equation (3).

Table 3: Estimated CPU utilization in GHz

Step 4: After reviewing Amazon EC2 Instance Types, we decided to deploy M4 instances. Table 4 shows the performance index that is calculated using Equation (1), taking the CPU model Xeon E5-2686 v4 as the reference (t = 0).

Table 4: Performance index for M4 class instances
CPU Model         CPU Frequency (GHz)   # Cores   First Sold   Performance Index   Performance Index Per Core
Xeon E5-2686 v4   2.30                  18        Jun-16       17.96               1.00

Table 5 illustrates the CPU capacity of M4 instances after normalization.

Table 5: M4 class instances' CPU capacity after normalization
Model         vCPU*   CPU Freq (GHz)   Mem (GiB)   SSD Storage (GB)   Perf Index Per Core   CPU Capacity new (GHz)
m4.large      2/2     2.3              8           EBS-only           1.00                  2.30
m4.xlarge     4/2     2.3              16          EBS-only           1.00                  4.60
m4.2xlarge    8/2     2.3              32          EBS-only           1.00                  9.20
m4.4xlarge    16/2    2.3              64          EBS-only           1.00                  18.40
m4.10xlarge   40/2    2.3              160         EBS-only           1.00                  46.00
m4.16xlarge   64/2    2.3              256         EBS-only           1.00                  73.60

* The number of vCPUs is divided by 2 because each vCPU in an Amazon EC2 instance is a hyperthread of an Intel Xeon CPU core.

By comparing the results that you obtain from Steps 3 and 4, Table 6 demonstrates the CPU selection mapping against each source machine that is being migrated to Amazon EC2.

Table 6: Recommended instance type
Host Name   CPU Model    Recommended AWS Instance Type
Server01    Xeon E5640   m4.large
Server02    Xeon E5640   m4.xlarge
Server03    Xeon E5647   m4.xlarge
Server04    Xeon E5647   m4.2xlarge

This example didn't take into account memory, storage, or I/O factors. For actual scenarios, we should consider taking a more holistic view to optimally balance performance and TCO savings. Amazon EC2 has many different classes of instance types, such as Compute Optimized, Memory Optimized, Storage Optimized, IO Optimized, and GPU Optimized; see https://aws.amazon.com/ec2/instance-types for more detailed information. These different classes of instance types are optimized to deliver the best performance and TCO savings depending on your application's behavior and usage characteristics.

Best Practices

1. Assess the requirements of your applications and select the appropriate Amazon EC2 instance family as a starting point for application performance testing. Amazon EC2 provides you with a variety of instance types, each with one or more size options, organized into distinct instance families that are optimized for different types of applications. You should start evaluating the performance of your applications by:
a) Identifying how your application compares to different instance families (for example, is the application compute bound, memory bound, or I/O bound?)
b) Sizing your w orkload to identify the appropriate instance size There is no substitute for measuring the performance of your entire application because application performance can be impacted by the underlying infrastructure or by software and architectural limitation s We recommend application level testing including the use of application profiling and load testing tools and services 2 Normalize generations of CPUs by using Moore’s Law Processing performance is usually bound to the number of CPU cores clock speed and type of CPU hardware instances that an application runs on A new CPU model will usually outperform the models it precedes even with the same number of cores and clock speed Therefore you should normalize different generations of CPUs by using Moore’s Law as shown earlier in Methodology to obtain more realistic comparison results 3 Have a data collection period that is long enough to capture the workload utilization pattern Workload changes in accordance with time shifting For analysis y our data collection period should be long enough to show you the peak and trough utilization across your business cycle (for example monthly or quarterly) You should include peak utiliza tion instead of average utilization for the purposes of CPU sizing This will ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 11 ensure that you provide a consistent user experience when workloads are under peak utilization 4 Deploy discovery tools For large scale environments (more than a few hundred mach ines) deploy automated discovery tools such as the AWS Application Discovery Service to perform data collection It’s critical to ensure that the discovery tools includ e basic inventory capabilities to collect the required CPU inventory and utilization (maximum average and minimum) that are specified in Methodology Determine whether the discovery tool requires specific user permissions or secure/compliant port s to be open ed Also investigate whether the discovery tool requires the source machines to be rebooted to install agents In many critical production environments server rebooting is not permissible 5 Allocate enough buffer for spikes When you perform the CPU sizing and capacity planning always include a reasonable buffer of 10 –15% of total required capacity This buffer is crucial to avoid any overlap of scheduled and unscheduled processing that may cause unexpected spikes 6 Monitor continuously Carry out the performance benchmarks before and after migration to investigate user experience acceptance levels Deploy a cloud monitoring tool such as Amazon CloudWatch to monitor CPU performance The cl oud monitoring tool should use monitoring to send alerts if the CPU utilization exceeds the predefined threshold level The tool also should provide reporting capability that generate s relevant reports for short and long term capacity planning purpose s 7 Determine the right VM sizing A VM is considered undersized or stressed when the amount of CPU demand peaks above 70% for more than 1% of any 1 hour A VM is considered oversized when the amount of CPU demand is below 30% for more than 1% of the entire ra nge of 30 days Figure 1 and Figure 2 give a good illustration of determining stress analysis for undersized and oversized conditions ArchivedAmazon Web Services – Demystifying the Number of vCPUs for Optimal Workload Performance Page 12 Figure 1: CPU Undersized condition Figure 2: CPU Oversized condition 8 Deploy single threaded appli cations on 
8. Deploy single-threaded applications on uniprocessor virtual machines, instead of on SMP virtual machines, for the best performance and resource use. Single-threaded applications can take advantage of only a single CPU. Deploying such applications on dual-processor virtual machines does not speed up the application; instead, it causes the second virtual CPU to unnecessarily hold physical resources that other VMs could otherwise use. The uniprocessor operating system versions are for single-core machines. If used on a multi-core machine, a uniprocessor operating system will recognize and use only one of the cores. The SMP versions, while required to fully utilize multi-core machines, can also be used on single-core machines. However, due to their extra synchronization code, SMP operating systems used on single-core machines run slightly slower than a uniprocessor operating system on the same machine.
9. Consider using Amazon EC2 Dedicated Instances and Dedicated Hosts if you have compliance requirements. Dedicated Instances and Dedicated Hosts don't share hardware with other AWS accounts. To learn more about the differences between them, see aws.amazon.com/ec2/dedicated-hosts.
Conclusion
The methodology and best practices discussed in this paper give a pragmatic result for optimal performance regarding selected CPU resources. This methodology has been applied to many enterprises' cloud transformation projects and has delivered more predictable performance with significant TCO savings. Additionally, this methodology can be adopted for capacity planning and helps enterprises establish strong business justifications for platform expansion. Actual performance sizing in a cloud environment should include memory, storage, I/O, and network traffic performance metrics to give a holistic performance sizing overview.
Contributors
The following individuals and organizations contributed to this document:
Tan Chin Khoon, Enterprise Migration Architect - APAC
For a more comprehensive and holistic example and discussion of cloud environment consolidation, please contact Tan Chin Khoon.
Document Revisions
Date | Description
September 2018 | Updated formulas and instructions
August 2016 | First publication
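As a companion to best practices 6 and 7 above, the following Python sketch shows one way to pull peak CPU utilization from Amazon CloudWatch and flag instances that look undersized or oversized against the stated thresholds. It is a minimal illustration, assuming boto3 credentials are configured and using a hypothetical instance ID; the 70%/30% thresholds come from best practice 7, but the aggregation here is deliberately simplified (hourly maximums over 30 days) rather than a full stress analysis.

# Sketch: flag EC2 instances as undersized or oversized from CloudWatch CPU data.
# Thresholds follow best practice 7; the aggregation is deliberately simplified.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is illustrative

def classify_cpu(instance_id, days=30):
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(days=days)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=start,
        EndTime=end,
        Period=3600,            # hourly datapoints
        Statistics=["Maximum"],
    )
    peaks = [p["Maximum"] for p in stats["Datapoints"]]
    if not peaks:
        return "no data"
    total = len(peaks)
    over_70 = sum(1 for p in peaks if p > 70) / total
    under_30 = sum(1 for p in peaks if p < 30) / total
    if over_70 > 0.01:
        return "possibly undersized (stressed)"
    if under_30 > 0.01:
        return "possibly oversized"
    return "within thresholds"

for instance_id in ["i-0123456789abcdef0"]:   # hypothetical instance ID
    print(instance_id, classify_cpu(instance_id))

Basing the oversized check on hourly maximums is conservative; a production version would use finer-grained data and evaluate the 1-hour and 30-day windows separately.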
|
General
|
consultant
|
Best Practices
|
Deploying_Microsoft_SQL_Server_on_AWS
|
ArchivedDeploying Microsoft SQL Server on Amazon Web Services November 2019 This paper has been archived For the latest technical content about the AWS Cloud see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Amazon RDS for SQL Server 1 SQL Serv er on Amazon EC2 1 Hybrid Scenarios 2 Choosing Between Microsoft SQL Server Solutions on AWS 2 Amazon RDS for Microsoft SQL Server 4 Starting an Amazon RDS for SQL Server Instance 5 Security 6 Performance Management 11 High Availability 15 Monitoring and Management 17 Managing Cost 21 Microsoft SQL Server on Amazon EC2 23 Starting a SQL Server Instance on Amazon EC2 23 Amazon EC2 Security 25 Performance Management 26 High Availability 29 Monitoring and Management 32 Managing Cost 34 Caching 36 Hybrid Scenarios and Data Migration 37 Backups to the Cloud 38 SQL Server Log Shipping Between On Premises and Amazon EC2 39 SQL Server Always On A vailability Groups Between On Premises and Amazon EC2 40 AWS Database Migration Service 42 Comparison of Microsoft SQL Server Feature Availability on AWS 42 ArchivedConclusion 46 Contributors 46 Further Reading 47 Document Revisions 47 ArchivedAbstract This whitepaper explain s how you can run SQL Server databases on either Amazon Relational Database Service (Amazon RDS) or Amazon Elastic Compute Cloud (Amazon EC2) and the advantages of each approach We review in detail how to provision and monitor your SQL Server database and how to manage scalability performance backup and recovery high availability and securi ty in both Amazon RDS and Amazon EC2 We also describe how you can set up a disaster recovery solution between an on premises SQL Server environment and AWS using native SQL Server features like log shipping replication and Always On availability groups This whitepaper helps you make an educated decision and choose the solution that best fits your needs ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 1 Introduction AWS offers a rich set of features to enable you to run Microsoft SQL Server –based workloads in the cloud These f eatures offer a variety of controls to effectively manage scale and tune SQL Server deployments to match your needs This whitepaper discusses these features and controls in greater detail in the following pages You can run Microsoft SQL Server versions on AWS using the following services: • Amazon RDS • Amazon EC2 Note: Some versions of SQL Server are dependent on Microsoft licensing For current supported versions see Amazon RDS for SQL Server and Microsoft SQL Server on AWS Amazon RDS for SQL Server Amazon RDS is a service that makes it eas y to set up operate and scale a relational database in the cloud Amazon RDS automates 
installation disk provisioning and management patc hing minor and major version upgrades failed instance replacement and backup and recovery of your SQL Server databases Amazon RDS also offers automated Multi AZ (Availability Zone) synchronous replication allowing you to set up a highly available and scalable environment fully managed by AWS Amazon RDS is a fully managed service and your database s run on their own SQL Server instance with the compute and storage resources you specify Backups high availability and failover are fully automated Becau se of these advantages we recommend customers consider Amazon RDS for SQL Server first SQL Server on Amazon EC2 Amazon Elastic Compute Cloud ( Amazon EC2 ) is a service that provides computing capacity in the clou d Using Amazon EC2 is similar to running a SQL Server database onpremises You are responsible for administering the database including backups and recovery patching the operating system and the database tuning of the operating system and database par ameters managing security and configuring high availability ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 2 or replication You have full control over the operating system database installation and configuration With Amazon EC2 you can quickly provision and configure DB instances and storage and you can scale your instances by changing the size of your instances or amount of storage You can provision your databases in AWS Regions across the world to provide low latency to your end users worldwide You are responsible for data replication and recovery across your instances in the same or different Regions Running your own relational database on Amazon EC2 is the ideal scenario if you require a maximum level of control and configurability Hybrid Scenarios You can also run SQL Server workloads in a hybrid environment For example you might have pre existing commitments on hardware or data center space that makes it impractical to be all in on cloud all at once Such commitments don’t mean you can’t take advantage of the scalability availability and cost benefits of running a portion of your workload on AWS Hybrid designs make this possible and can take many forms from leveraging AWS for long term SQL Server backups to running a secondary replica in a SQL Server Always On Availability Group Choosing Between Microsoft SQL Server Solutions on AWS For SQL Server databases both Amazon RDS and Amazon EC2 have advantages and certain limitations Amazon RDS for SQL Server is easier to set up manage and maintain Using Amazon RDS can be more cost effective than running SQL Server in Amazon EC2 and lets you focus on more important tasks such as schema and index maintenance rather than the day today administration of SQL Server and the underlying operating system Alternatively running SQL Server in Amazon EC2 gives you more control flexibility and choice Depending on your application and your requirements you might prefer one over the other Start by considering the capabilities and limitations of your proposed solution as follows: ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 3 • Does your workload fit within the features and capabilities offered by Amazon RDS for SQL Server? We will discuss these in greater detail later in this whitepaper • Do you need high availability and automated failover capabilities? 
If you are running a production workload high availability is a recommended best practice • Do you have the resources to manage a cluster on an ongoing basis? These activities include backups restores software updates availability data durability optimization a nd scaling Are the same resources better allocated to other business growth activities? Based on your answers to the preceding considerations Amazon RDS might be a better choice if the following is true: • You want to focus on business growth tasks such a s performance tuning and schema optimization and outsource the following tasks to AWS: provisioning of the database management of backup and recovery management of security patches upgrades of minor SQL Server versions and storage management • You need a highly available database solution and want to take advantage of the push button synchronous Multi AZ replication offered by Amazon RDS without having to manually set up and maintain database mirroring failover clusters or Always On Availability Gro ups • You don’t want to manage backups and most importantly point intime recoveries of your database and prefer that AWS automates and manages these processes However running SQL Server on Amazon EC2 might be the better choice if the following is true : • You need full control over the SQL Server instance including access to the operating system and software stack • Install third party agents on the host • You want your own experienced database administrators managing the databases including backups repli cation and clustering • Your database size and performance needs exceed the current maximums or other limits of Amazon RDS for SQL Server • You need to use SQL Server features or options not currently supported by Amazon RDS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 4 • You want to run SQL Server 2017 on the Linux operating system For a detailed side byside comparison of SQL Server features available in the AWS environment see the Comparison of Microsoft SQL Server Feature Availability on AWS section Amaz on RDS for Microsoft SQL Server For the list of currently Amazon RDS currently supported versions and features see Microsoft SQL Server on Amazon RDS Amazon RDS for SQL Server supports the following editions of Microsoft SQL Server : • Express Edition : This edition is available at no additional licensing cost and is suitable for small workloads or proof ofconcept deployments Microsoft limits the amount of memory and size of the individual databases that can be run on the Express edition This edition is not available in a Multi AZ deployment • Web Edition : This edition is suitable for public internet accessible web workloads This edition is not available in a Multi AZ deployment • Standard Edition : This edition is suitable for most SQL S erver workloads and can be deployed in Multi AZ mode • Enterprise Edition : This edition is the most feature rich edition of SQL Server is suitable for most workloads and can be deployed in Multi AZ mode For a detailed feature comparison between the dif ferent SQL Server editions see Editions and supported features of SQL Server on the Microsoft Developer Network (MSDN) website In Amazon RDS for SQL Server the following features and options are supported depending on the edition of SQL Server: For the most current supported features see Amazon RDS f or SQL Server features • Core database engine features • SQL Server development tools: Visual Studio integration and IntelliSense • SQL Server management 
tools: SQL Server Management Studio (SSMS) sqlcmd SQL Server Profiles (for client side traces) SQL Server Migration Assistant (SSMA) Database Engine Tuning Advisor and SQL Server Agent ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 5 • Safe Common Language Runtime (CLR) for SQL Server 2016 and below versions • Service Broker • Fulltext search (except semantic search) • Secure Sockets Layer (SSL) connection support • Transparent Data Encryption (TDE) • Encryption of storage at rest using the AWS Key Management Service (AWS KMS) fo r all SQL Server license types • Spatial and location features • Change tracking • Change Data Capture • Always On or Database mirroring (used to provide the Multi AZ capability) • The ability to use an Amazon RDS SQL DB instance as a data source for reporting anal ysis and integration services • Local Time Zone support • Custom Server Collations AWS frequently improve s the capabilities of Amazon RDS for SQL Server For the latest information on supported versions features and options see Version and Feature Support on Amazon RDS Starting an Amazon RDS for SQL Server Instance You can start a SQL Server instance on Amazon RDS in several ways : • Interactively using the AWS Management Console • Programmatically using AWS CloudFormation templates • AWS SDKs and the AWS Command Line Interface (AWS CLI) • Using the PowerShell After the instance has been deployed you can connect to it using standard SQL Server tools Amazon RDS provides you with a Domain Name Service (DNS) endpoint for the server as shown in the following figure To connect to the database u se this DNS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 6 endpoint as the SQL Server hostname along with the master user name and password configured for the instance Always use the DNS endpoint to connect to the instance because the underlying IP address might change Amazon RDS exposes the Always On AGs availability group listener endpoint for the SQL Server Multi AZ deployment The endpoint is visible in the console and is returned by the DescribeDBInstances API as an entry in the endpoints field You can easily connect to the listener endpoint in order to have faster fa ilover times Figure 1: Amazon RDS DB instance properties Security You can use several features and sets of controls to manage the security of your Amazon RDS DB instance These controls are as follows: • Network controls which determine the network configuration underlying your DB instance • DB instance access controls which determine administrative and management access to your RDS resources • Data access controls which determine access to the data stored in your RDS DB instance databases ArchivedAmazon Web Services Deploying Microsoft SQL Se rver on Amazon Web Services Page 7 • Data at rest protection which affects the security of the data stored in your RDS DB instance • Data in transit protection which affects the security of data connections to and from your RDS DB instance Network Controls At the network layer controls are on th e deployed instance EC2VPC level EC2VPC allows you to define a private isolated section of the AWS Cloud and launch resources within it You define the network topology the IP addressing scheme and the routing and traffic access control patterns Newe r AWS accounts have access only to this networking platform In EC2 VPC DB subnet groups are also a security control They allow you to narrowly control the subnets in which Amazon RDS is allowed 
to deploy your DB instance You can control the flow of net work traffic between subnets using route tables and network access control lists (NACLs) for stateless filtering You can designate certain subnets specifically for database workloads without default routes to the internet You can also deny non database traffic at the subnet level to reduce the exposure footprint for these instances Security groups are used to filter traffic at the instance level Security groups act like a stateful firewall similar in effect to host based firewalls such as the Microso ft Windows Server Firewall The rules of a security group define what traffic is allowed to enter the instance (inbound) and what traffic is allowed to exit the instance (outbound) VPC security groups are used for DB instances deployed in a VPC They can be changed and reassigned without restarting the instances associated with them For improve d security we recommend restricting inbound traffic to only database related traffic (port 1433 unless a custom port number is used) and only traffic from known s ources Security groups can also accept the ID of a different security group (called the source security group) as the source for traffic This approach makes it easier to manage sources of traffic to your RDS DB instance in a scalable way In this case y ou don’t have to update the security group every time a new server needs to connect to your DB instance; you just have to assign the source security group to it Amazon RDS for SQL Server can make DB instances publicly accessible by assigning internet routable IP addresses to the instances In most use cases this approach is not needed or desired and we recommend setting this option to No to limit the potential threat In cases where direct access to the database over the public internet is needed ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 8 we rec ommend limiting the sources that can connect to the DB instance to known hosts by using their IP addresses For this option to be effective the instance must be launched in a subnet that permits public access and the security groups and NACLs must permit inbound traffic from those sources DB instances that are exposed publicly over the internet and have open security groups accepting traffic from any source might be subject to more frequent patching Such instances can be force patched when security pat ches are made available by the vendors involved This patching can occur even outside the defined instance maintenance window to ensure the safety and integrity of customer resources and our infrastructure Although there are many ways to secure your data bases we recommend using private subnet(s) within a VPC no possible direct internet access DB Instance Access Controls Using AWS Identity and Access Management (IAM) you can manage access to your Amazon RDS for SQL Server instances For example you can authorize administrators under your AWS account (or deny them the ability) to create describe modify or delete an Ama zon RDS database You can also enforce mult ifactor authentication (MFA) For more information on using IAM to manage administrative access to Amazon RDS see Authe ntication and Access Control for Amazon RDS in the Amazon RDS User Guide Data Access Controls Amazon RDS for SQL Server supports both SQL Authentication and Windows Authentication and access control for authenticated users should be configured using the principle of least privilege A master account is created automatically when an 
instance is launched This master user is granted several permissions For det ails see Master User Account Privileges This login is typically used for administrative purposes only and is granted the roles of processadmin setupa dmin SQLAgentUser Alter on SQLAgentOperator and public at the server level Amazon RDS manages the master user as a login and creates a user linked to the login in each customer database with the db_owner permission You can create additional users and databases after launch by connecting to the SQL Server instance using the tool of your choice (for example SQL Server Management Studio) These users should be assigned only the permissions needed for the workload or application that they are supporting t o operate correctly For example if you as the master user create a user X who then creates a database user X will be a member of the db_owner role for this new database not the master user Later on if you reset the master password the master user wi ll be added to db_owner for this new database ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 9 You can also integrate with your existing identity infrastructure based on Microsoft Active Directory and authenticate against Amazon RDS for SQL Server databases using the Windows Authentication method Using Windows Authentication allows you to keep a single set of credentials for all your users and save time and effort by not having to update these credentials in multiple places To use the Windows Authentication method with your Amazon RDS for SQL Server instance sign up for the AWS Directory Service for Microsoft Active Directory If you don’t already have a directory running you can create a new one You can then associate directories with both new and existing DB instances You can use Active Directory to manage users and groups with access privileges to your SQL Server DB instance and also join other EC2 instances to that domain You can also establish a one way forest trust from an external exi sting Active Directory deployment to the directory managed by AWS Directory Service Doing so will give you the ability to authenticate already existing Active Directory users and groups you have established in your organization with Amazon RDS SQL Server instances You can also create SQL Server Windows logins on domain joined DB instances for users and groups in your directory domain or the trusted domain if applicable Logins can be added using a SQL client tool such as SQL Server Management Stud io using the following command CREATE LOGIN [<user or group>] FROM WINDOWS WITH DEFAULT_DATABASE = [master] DEFAULT_LANGUAGE = [us_english]; More information on configuring Windows Authentication with Amazon RDS for SQL Server can be found in the Using Windows Authentication topic in the Amazon R DS User Guide Unsupported SQL Server Roles and Permissions in Amazon RDS The following server level roles are not currently available in Amazon RDS: bulkadmin dbcreator diskadmin securityadmin serveradmin and sysadmin See Features Not Supported and Features with limited support Also the following server level permissions are not available on a SQL Server DB instance: • ADMINISTER BULK OPERATIONS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 10 • ALTER ANY CREDENTIAL • ALTER ANY EVENT NOTIFICATION • ALTER RESOURCES • ALTER SETTINGS (you can use the DB parameter group API actions to modify parameters) • AUTHENTICATE SERVER • CREATE DDL EVENT NOTIFICATION • CREATE ENDPOINT • 
CREATE TRACE EVENT NOTIFICATION • EXTERNAL ACCESS ASSEMBLY • SHUTDOWN (you can use the RDS reboot option instead) • UNSAFE ASSEMBLY • ALTER ANY AVAILABILITY GROUP • CREATE ANY AVAILABILITY GROUP Data at Rest Protection Amazon RDS for SQL Server supports the encryption o f DB instances with encryption keys managed in AWS KMS Data that is encrypted at rest includes the underlying storage for a DB instance its automated backups and snapshots You can also encrypt existing DB instances and share encrypted snapshots with ot her accounts within the same Region Amazon RDS encrypted instances use the open standard AES 256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS instance Once your data is encrypted Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance You don’t need to modify your database client applications to use encryption Amazon RDS encrypted instances also help secure your data from unauthorized access to the underlying storage You can use Amazon RDS encryption to increase data protection of your applications deployed in the cloud and to fulfill compliance requirements for data at rest encryption To manage the keys used for encrypting and decrypting your Ama zon RDS resources use AWS KMS ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 11 Amazon RDS also supports encryption of data at rest using the Transparent Data Encryption ( TDE) feature of SQL Server This feature is only available in the Enterprise Edition You can enable TDE by setting up a custom optio n group with the TDE option enabled (if such a group doesn’t already exist) and then associating the DB instance with that group You can find more details on Amazon RDS support for TDE on the Options for the Microsoft SQL Server Database Engine topic in the Amazon RDS User Guide If full data encryption is not feasible or not desired for your workload you can selectively encrypt table data using SQL S erver column level encryption or by encrypting data in the application before it is saved to the DB instance Data in Transit Protection Amazon RDS for SQL Server fully supports encrypted connections to the instances using SSL SSL support is available in all AWS Regions for all supported SQL Server editions Amazon RDS creates an SSL certificate for your SQL Server DB instance when the instance is created The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificat e to help guard against spoofing attacks You can find more details on how to use SSL encryption in Using SSL with a Microsoft SQL Server DB Instance in the Amazon RDS User Guide Performance Management The performance of your SQL Server DB instance is determined primarily by your workload Depending on your workload you need to select the right instance type which affects the compute capacity amount of memory and network capacity available to your database Instance type is also determined by the storage size and type you select when you provision the database Instance Sizing The amount of memory and compute capa city available to your Amazon RDS for SQL Server instance is determined by its instance class Amazon RDS for SQL Server offers a range of DB instance classes from 1 vCPU and 1 GB of memory to 96 vCPUs and 488 GB of memory Not all instance classes are ava ilable for all SQL Server editions however The i nstance class availability also varies based on the version Amazon RDS for SQL Server 
supports various DB instance classes for the various SQL Server editions. For the most up-to-date list of supported instance classes, see Amazon RDS for SQL Server instance types. Previous-generation DB instance classes are superseded, in terms of both cost effectiveness and performance, by the current-generation classes. For the previous-generation instance types, see Previous Generation Instances for more information. Understanding the performance characteristics of your workload is important when identifying the proper instance class. If you are unsure how much CPU you need, we recommend that you start with the smallest appropriate instance class and then monitor CPU utilization using Amazon CloudWatch. You can modify the instance class for an existing Amazon RDS for SQL Server instance, giving you the flexibility to scale the instance up or down depending on the performance characteristics required. If you are in a Multi-AZ high availability configuration, making the change involves a server reboot or a failover. To modify a SQL Server instance, see Modifying a DB Instance Running the Microsoft SQL Server Database Engine, and for the list of modification settings, see Settings for Microsoft SQL Server DB Instances. The settings are similar to the ones you configure when launching a new DB instance. By default, changes (including a change to the DB instance class) are applied during the next specified maintenance window. Alternatively, you can use the apply-immediately flag to apply the changes immediately.
Disk I/O Management
Amazon RDS for SQL Server simplifies the allocation and management of database storage for instances. You decide the type and amount of storage to use, and also the level of provisioned I/O performance if applicable. You can change the amount of storage or provisioned I/O on an RDS for SQL Server instance after the instance has been deployed. You can also enable storage auto scaling so that Amazon RDS automatically increases the storage when needed, to avoid having your instance run out of storage space. We recommend that you enable storage auto scaling to handle growth from the onset. Amazon RDS for SQL Server supports two types of storage, each having different characteristics and recommended use cases:
• General Purpose (SSD) (also called GP2) is an SSD-backed storage solution with predictable performance and burst capabilities. This option is suitable for workloads that run in larger batches, such as nightly report processing. Credits are replenished while the instance is largely idle and are then available for bursts of batch jobs.
• Provisioned IOPS storage (or PIOPS storage) is designed to meet the needs of I/O-intensive workloads that are sensitive to storage performance and consistency in random-access I/O throughput.
The following table compares the Amazon RDS storage performance characteristics.
Table 1: Amazon RDS storage performance characteristics
Storage Type | Min Volume Size | Max Volume Size | Baseline Performance | Burst Capability | Storage Technology | Pricing Criteria
General Purpose | 20 GiB (100 GiB recommended) | 16 TiB* | 3 IOPS/GiB | Yes; up to 3,000 IOPS per volume, subject to accrued credits | SSD | Allocated storage
Provisioned IOPS | 20 GiB (for Enterprise and Standard editions; 100 GiB for Web and Express editions) | 16 TiB* | 10 IOPS/GiB, up to a maximum of 64,000 IOPS | No; fixed allocation | SSD | Allocated storage and Provisioned IOPS
* Maximum IOPS of 64,000 is guaranteed only on Nitro-based instances that are on m5 instance types.
Although performance characteristics of instances change over time as technology and capabilities improve, there are several metrics that can be used to assess performance and help plan deployments. Different workloads and query patterns affect these metrics in different ways, making it difficult to establish a practical baseline reference in a typical environment. We recommend that you test your own workload to determine how these metrics behave in your specific use case. For Amazon RDS, we provision and measure I/O performance in units of input/output operations per second (IOPS). We count each I/O operation per second that is 256 KiB or smaller as one IOPS. The average queue depth, a metric available through Amazon CloudWatch, tracks the number of I/O requests in the queue that are waiting to be serviced. These requests have been submitted by the application but haven't been sent to the storage device because the device is busy servicing other I/O requests. Time spent in the queue increases I/O latency, and large queue sizes can indicate an overloaded system from a storage perspective. As a result, depending on the storage configuration selected, your overall storage subsystem throughput will be limited either by the maximum IOPS or by the maximum channel bandwidth at any time. If your workload is generating a lot of small I/O operations (for example, 8 KiB), you are likely to reach maximum IOPS before the overall bandwidth reaches the channel maximum. However, if I/O operations are large in size (for example, 256 KiB), you might reach the maximum channel bandwidth before maximum IOPS. As specified in Microsoft documentation, SQL Server stores data in 8 KiB pages but uses a complex set of techniques to optimize I/O patterns, with the general effect of reducing the number of I/O requests and increasing the I/O request size. This approach results in better performance by reading and writing multiple pages at the same time. Amazon RDS accommodates these multipage operations by counting every read or write operation on up to 32 pages as a single I/O operation to the storage system, based on the variable size of IOPS. SQL Server also attempts to optimize I/O by reading ahead and attempting to keep the queue length nonzero. Therefore, queue depth values that are very low or zero indicate that the storage subsystem is underutilized and potentially overprovisioned from an I/O capacity perspective. Using small storage sizes (less than 1 TB) with General Purpose (GP2) SSD storage can also have a detrimental impact on instance performance. If your storage size needs are low, you must ensure that the storage subsystem provides enough I/O performance to match your workload needs. Because IOPS are allocated at a ratio of 3 IOPS for each 1 GiB of allocated GP2 storage, small storage sizes will also provide small amounts of baseline IOPS. When created, each instance comes with an initial allocation of I/O credits. This allocation provides for burst capabilities of up to 3,000 IOPS from the start. Once the initial burst credit allocation is exhausted, you must ensure that your ongoing workload needs fit within the baseline I/O performance of the storage size selected.
High Availability
Amazon RDS provides high availability and failover support
for DB instances using Multi AZ deployments Multi AZ deployments provide increased availability data durability and fault tolerance for DB instances Multi AZ high availability option uses SQL Server database mirroring or Always On availability g roups configuration options with additional improvements to meet the requirements of enterprise grade production workloads running on SQL Server The Multi AZ deployment option provides enhanced availability and data durability b y automatically replicating database updates between two AWS Availability Zones Availability Zones are physically separate locations with independent infrastructure engineered to be insulated from failures in other Availability Zones When you set up SQL Server Multi AZ RDS automatically configures all databases on the instance to use database mirroring or availability groups Amazon RDS handles the primary the witness and the secondary DB instance for you Because configuration is automatic RDS selec ts database mirroring or Always On availability group s based on the version of SQL Server that you deploy Amazon RDS supports Multi AZ with database mirroring or availability group s for the following SQL Server versions and editions (exceptions noted) : See Multi AZ Deployments for Microsoft SQL Server for more information • SQL Server 2017: Enterprise Editions (Always On availability group s are supported in Ent erprise Edition 140030491 or later) • SQL Server 2016: Enterprise Editions (Always On availability group s are supported in En terprise Edition 130052160 or later) Amazon RDS supports Multi AZ with database mirroring for the following SQL Server versions and editions except for the versions of Enterprise Edition noted previously: • SQL Server 2017: Standard and Enterprise Editions • SQL Server 2016: Standard and Enterprise Editions • SQL Server 2014: Standard and Enterprise Editions • SQL Server 2012: Standard and Enterprise Editions Amazon RDS supports Multi AZ for SQL Server in all AWS Regions with the following exceptions: ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 16 • US West (N California): Neither database mirroring nor Always On availability group s are supported • South America (São Paulo): Supported on all DB instance classes except m1 or m2 • EU (Stockholm): Neither database mirroring nor Always On availability group s are supported When you create or modify your SQL Server DB instance to run using Multi AZ Amazon RDS will automatically provision a primary database in one Availability Zone and maintain a synchronous secondary replica in a different Avail ability Zone In the event of planned database maintenance or unplanned service disruption Amazon RDS will automatically fail over the SQL Server database s to the up todate secondary so that database operations can resume quickly without any manual inter vention If an Availability Zone failure or instance failure occurs your availability impact is limited to the time that automatic failover takes to complete typically 60 120 seconds for database mirroring and 10 15 seconds for availability groups When failing over Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point to the secondary which is in turn promoted to become the new primary The canonical name record (or endpoint name) is an entry in DNS We recommend that you implement retry logic for database connection errors in your application layer by using the canonical name rather than attempt to connect directly to the IP address 
of the DB instance. We recommend this approach because during a failover the underlying IP address will change to reflect the new primary DB instance.
Amazon RDS automatically performs a failover in the event of any of the following:
• Loss of availability in the primary Availability Zone
• Loss of network connectivity to the primary DB node
• Compute unit failure on the primary DB node
• Storage failure on the primary DB node
Amazon RDS Multi-AZ deployments don't fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors. For example, suppose that a customer workload causes high resource usage on an instance and that SQL Server times out and triggers failover of individual databases. In this case, RDS recovers the failed databases back to the primary instance. When operations such as instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, they are applied first on the secondary instance, prior to the automatic failover of the primary instance, for enhanced availability. Due to failover optimization of SQL Server, certain workloads can generate greater I/O load on the mirror than on the principal, particularly for DBM deployments. This can result in higher IOPS on the secondary instance. We recommend that you consider the maximum IOPS needs of both the primary and the secondary when provisioning the storage type and IOPS of your RDS for SQL Server instance.
Monitoring and Management
Amazon CloudWatch collects many Amazon RDS-specific metrics. You can look at these metrics using the AWS Management Console, the AWS CLI (using the mon-get-stats command), the AWS API, or PowerShell (using the Get-CWMetricStatistics cmdlet). In addition to the system-level metrics collected for Amazon EC2 instances (such as CPU usage, disk I/O, and network I/O), the Amazon RDS metrics include many database-specific metrics, such as database connections, free storage space, read and write I/O per second, read and write latency, read and write throughput, and available RAM. For a full, up-to-date list, see Amazon RDS Dimensions and Metrics in the Amazon CloudWatch Developer Guide. In Amazon CloudWatch you can also configure alarms on these metrics to trigger notifications when the state changes. An alarm watches a single metric over a time period you specify and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. Notifications are sent to Amazon Simple Notification Service (Amazon SNS) topics or AWS Auto Scaling policies. You can configure these alarms to notify database administrators by email or SMS text message when they get triggered. You can also use notifications as triggers for custom automated response mechanisms or workflows that react to alarm events; however, you need to implement such event handlers separately.
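As one illustration of the alarm setup described above, the following Python sketch creates a CloudWatch alarm on an RDS DB instance's CPUUtilization metric and sends notifications to an SNS topic. It is a minimal example, not an AWS-prescribed configuration; the instance identifier, SNS topic ARN, threshold, and evaluation periods are placeholders you would replace with your own values.

# Sketch: alarm when an RDS for SQL Server instance averages > 80% CPU for 15 minutes.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is illustrative

cloudwatch.put_metric_alarm(
    AlarmName="rds-sqlserver-high-cpu",                 # placeholder name
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-sqlserver-instance"}],
    Statistic="Average",
    Period=300,                                         # 5-minute datapoints
    EvaluationPeriods=3,                                # 3 consecutive breaches = 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:dba-alerts"],  # placeholder topic ARN
    AlarmDescription="Notify DBAs when CPU stays above 80% for 15 minutes",
)

The same pattern applies to the other RDS metrics mentioned above, such as FreeStorageSpace or DatabaseConnections.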
Amazon RDS for SQL Server also supports Enhanced Monitoring. Amazon RDS provides metrics in near-real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your instance using the console, or consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. Enhanced Monitoring gathers its metrics from an agent on the instance. Enhanced Monitoring gives you deeper visibility into the health of your Amazon RDS instances in near-real time, providing a comprehensive set of 26 new system metrics and aggregated process information at a detail level of up to 1 second. These monitoring metrics cover a wide range of instance aspects, such as the following:
• General metrics, like uptime and instance and engine version
• CPU utilization, such as idle, kernel, or user time percentage
• Disk subsystem metrics, including utilization, read and write bytes, and number of I/O operations
• Network metrics, like interface throughput and read and write bytes
• Memory utilization and availability, including physical, kernel, commit charge, system cache, and SQL Server footprint
• System metrics, consisting of number of handles, processes, and threads
• Process list information, grouped by OS processes, RDS processes (management, monitoring, and diagnostics agents), and RDS child processes (SQL Server workloads)
Because Enhanced Monitoring delivers metrics to CloudWatch Logs, this feature incurs standard CloudWatch Logs charges. These charges depend on a number of factors:
• The number of DB instances sending metrics to CloudWatch Logs
• The level of detail of metrics sampling (finer detail results in more metrics being delivered to CloudWatch Logs)
• The workload running on the DB instance (more compute-intensive workloads have more OS process activity to report)
More information and instructions on how to enable the feature can be found in Viewing DB Instance Metrics in the Amazon RDS User Guide. In addition to CloudWatch metrics, you can use Performance Insights and native SQL Server performance monitoring tools, such as dynamic management views, the SQL Server error log, and both client-side and server-side SQL Server Profiler traces. Performance Insights expands on existing Amazon RDS monitoring features to illustrate your database's performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. More information can be found in Using Amazon RDS Performance Insights in the Amazon Relational Database Service User Guide.
Amazon RDS for SQL Server provides two administrative windows of time designed for effective management, described following. The service will assign default time windows to each DB instance if these aren't customized.
• Backup window: The backup window is the period of time during which your instance is going to be backed up. Because backups might have a small performance impact on the operation of the instance, we recommend you set the window for a time when this has minimal impact on your workload.
• Maintenance window: The maintenance window is the period of time during which instance modifications (such as implementing pending changes to storage or CPU class for the instance) and software patching occur. Your instance might be restarted during this window if there is a scheduled activity pending and that activity requires a restart, but that is not always the case. We recommend scheduling the maintenance window for a time when your instance has the least traffic or a potential restart is least disruptive.
Amazon RDS for SQL Server comes with several built-in management features:
• Automated backup and recovery: Amazon RDS automatically backs up all databases of your instances. You can set the backup retention period when you create an instance. If you don't set the backup retention period, Amazon RDS uses a default retention period of one day. You can modify the backup retention period; valid values are 0 (for no backup retention) to a maximum of 35 days. Automated backups occur daily during the backup window. If you select zero days of backup retention, point-in-time log backups are not taken. Amazon RDS uses these periodic data backups in conjunction with your transaction logs (backed up every 5 minutes) to enable you to restore your DB instance to any second during your retention period, up to the LatestRestorableTime, typically up to the last 5 minutes.
• Push-button scaling: With a few clicks, you can change the instance class to increase or decrease the size of your instance's compute capacity, network capacity, and memory. You can choose to make the change immediately or schedule it for your next maintenance window.
• Automatic host replacement: Amazon RDS automatically replaces the compute instance powering your deployment in the event of a hardware failure.
• Automatic minor version upgrade: Amazon RDS keeps your database software up to date. You have full control over whether Amazon RDS deploys such patching automatically, and you can disable this option to prevent it. Regardless of this setting, publicly accessible instances with open security groups might be force patched when security patches are made available by vendors, to ensure the safety and integrity of customer resources and our infrastructure. The patching activity occurs during the weekly 30-minute maintenance window that you specify when you provision your database (and that you can alter at any time). Such patching occurs infrequently, and your database might become unavailable during part of your maintenance window when a patch is applied. You can minimize the downtime associated with automatic patching if you run in Multi-AZ mode. In this case, the maintenance is generally performed on the secondary instance. When it is complete, the secondary instance is promoted to primary. The maintenance is then performed on the old primary, which becomes the secondary.
• Preconfigured parameters and options: Amazon RDS provides a default set of DB parameter groups, and also option groups, for each SQL Server edition and version. These groups contain configuration parameters and options, respectively, which allow you to tune the performance and features of your instance. By default, Amazon RDS provides an optimal configuration set suitable for most workloads, based on the class of the instance that you selected. You can create your own parameter and option groups to further tune the performance and features of your instance.
You can administer Amazon RDS for SQL Server databases using the same tools you use with on-premises SQL Server instances, such as SQL Server Management Studio. However, to provide you with a more secure and stable managed database experience, Amazon RDS doesn't provide desktop or administrator access to instances, and it restricts access to certain system procedures and tables that require advanced privileges, such as those granted to sa. Commands to create users, rename users, grant and revoke permissions, and set passwords work as they do in Amazon EC2 (or on-premises) databases. The administrative commands that RDS doesn't support are listed in Unsupported SQL Server Roles and Permissions in Amazon RDS.
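The backup retention period, backup window, and maintenance window described above can all be adjusted after launch through the Amazon RDS API. The following Python sketch is one possible way to do this with boto3; the instance identifier and window values are placeholders, and you should plan such changes for a low-traffic period because some modifications are applied only at the next maintenance window or can briefly affect the instance.

# Sketch: adjust backup retention and the backup/maintenance windows of an RDS instance.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is illustrative

rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-instance",    # placeholder identifier
    BackupRetentionPeriod=7,                         # 0-35 days; 0 disables point-in-time recovery
    PreferredBackupWindow="03:00-03:30",             # UTC; must not overlap the maintenance window
    PreferredMaintenanceWindow="sun:04:00-sun:04:30",
    ApplyImmediately=False,                          # queue the change for the next maintenance window
)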
Even though direct file-system-level access to the RDS SQL Server instance is not available, you can always migrate your data out of RDS instances. You can use tools like the Microsoft SQL Server Database Publishing Wizard to download the contents of your databases into flat T-SQL files. You can then load these files into any other SQL Server instances, or store them as backups in Amazon Simple Storage Service (Amazon S3), Amazon S3 Glacier, or on premises. In addition, you can use the AWS Database Migration Service to move data to and from Amazon RDS. You can also use native backup and restore through S3. You can use native backups to migrate databases to Amazon RDS for SQL Server instances, or back up your RDS for SQL Server instances to S3 to copy to another SQL Server instance or to retain offline. For more details on how this works and the permissions required, see Importing and Exporting SQL Server Databases.
Managing Cost
Managing the cost of the IT infrastructure is often an important driver for cloud adoption. AWS makes running SQL Server on AWS a cost-effective proposition by providing a flexible, scalable environment and pricing models that allow you to pay for only the capacity you consume at any given time. Amazon RDS further reduces your costs by reducing the management and administration tasks that you have to perform. Generally, the cost of operating an Amazon RDS instance depends on the following factors:
• The AWS Region the instance is deployed in
• The instance class and storage type selected for the instance
• The Multi-AZ mode of the instance
• The pricing model
• How long the instance is running during a given billing period
You can optimize the operating costs of your RDS workloads by controlling the factors listed above. AWS services are available in multiple Regions across the world. In Regions where our costs of operating our services are lower, we pass the savings on to you. Thus, Amazon RDS hourly prices for the different instance classes vary by Region. If you have the flexibility to deploy your SQL Server workloads in multiple Regions, the potential savings from operating in one Region as compared to another can be an important factor in choosing the right Region. Amazon RDS also offers different pricing models to match different customer needs:
• On-Demand Instance pricing allows you to pay for Amazon RDS DB instances by the hour with no term commitments. You incur a charge for each hour a given DB instance is running. If your workload doesn't need to run 24/7, or you are deploying temporary databases for staging, testing, or development purposes, On-Demand Instance pricing can offer significant advantages.
• Reserved Instances (RIs) allow you to lower costs and reserve capacity. Reserved Instances can save you up to 60 percent over On-Demand rates when used in steady state, which tends to be the case for many databases. They can be purchased for 1- or 3-year terms. If your SQL Server database is going to be running more than 25 percent of the time each month, you will most likely benefit financially from using a Reserved Instance. Overall savings are greater when committing to a 3-year term compared to running the same workload using On-Demand Instance pricing for the same period of time. However, the length of the term needs to be balanced against projections of growth, because the commitment is for a specific instance class. If you expect that your compute and memory needs are going to grow over time for a given DB instance, you might want to opt for a
shorter 1 year term and weigh the savings from the Reserved Instance against the overhead of being over provisione d for some part of that term The following pricing options are available for RDS Reserved Instances : • With All Upfront Reserved Instances you pay for the entire Reserved Instance with one upfront payment This option provides you with the largest discount compared to On Demand Instance pricing • With Partial Upfront Reserved Instances you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term • With No Upfront Reserved Instanc es you don’t make any upfront payments but are charged a discounted hourly rate for the instance for the duration of the Reserved Instance term This option still provides you with a significant discount compared to On Demand Instance pricing but the di scount is usually less than for the other two Reserved Instance pricing options Note that like in Amazon EC2 in Amazon RDS you can issue a stop command to a standalone DB instance and keep the instance in a stopped state to avoid incurring compute charge s You can't stop an Amazon RDS for SQL Server DB instance in a Multi AZ configuration instead you can terminate the instance take a final snapshot prior to termination and recreate a new Amazon RDS instance from the snapshot when ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 23 you need it or remov e the Multi AZ configuration first and then stop the instance Note that after 7 days your stopped instance will re start so that any pending maintenance can be applied Additionally you can use several other strategies to help optimize costs: • Terminate DB instances with a last snapshot when they are not needed then reprovision them from that snapshot when they need to be used again For example some development and test databases can be terminated at night and on weekends and reprovisioned on weekdays in the morning Alternatively use the stop feature mentioned above to turn off the database for the weekend • Scale down the size of your DB instance during off peak times by using a smaller instance class See the Amazon RDS for SQL Server Pricing webpage for up todate pricing information for all pricing models and instance classes Microsoft SQL Server on Amazon EC2 You can also choose to run a Microsoft SQL Server on Amazon EC2 as described in the following sections Starting a SQL Server Instance on Amazon EC2 You can start a SQL Server DB instance on Amazon EC2 in several ways : • Interactively using the AWS Manageme nt Console • Programmatically using AWS CloudFormation templates • Using AWS SDKs and the AWS Command Line Interface (AWS CLI) • Using the PowerShell For the procedure to launch Amazon EC2 using the AWS Management Consol e see Launch an Instan ce Check the below useful bullets for launching Amazon EC2 for running SQL Server instance ArchivedAmazon Web Services Deploying Micros oft SQL Server on Amazon Web Services Page 24 • You can deploy a SQL Server instance on Amazon EC2 using an Amazon Machine Image (AMI) An AMI is simply a packaged environment that includes all the necessary software to set up and boot your instance Some AMIs have just the operating system (for example Windows Server 2019 ) and others have the operating system and a version and edition of SQL Server (Windows Server 2019 and SQL Server 201 7 Standard Edition SQL Server 2017 on Ubuntu and so on) We recommend that you use the AMIs available at Windows A MIs 
These are available in all AWS Regions Some AMIs include an installation of a specific version and edition of SQL Server When running an Amazon EC2 instance based on one of these AMIs the SQL Server licensing costs are included in the hourly pri ce to run the Amazon EC2 instance • Other AMIs install just the Microsoft Windows operating system This type of AMI allows you the flexibility to perform a separate custom installation of SQL Server on the Amazon EC2 instance and bring your own license (B YOL) of Microsoft SQL Server if you have qualifying licenses For additional information on BYOL qualification criteria see License Mobility • Consider all five performance charact eristics (vCPU Memory Instance Storage Network Bandwidth and EBS Bandwidth) of Amazon EC2 instances when selecting the EC2 instance See Amazon EC2 Instance Types for more information • Depending on the type of SQL Server deployment for example stand alone Windows Failover Clustering and Always On Availability Groups SQL Server on Linux and so on you might decide to assign one or multiple static IP addresses to your Amazon EC2 instan ce You can do this assignment in the Network interface section of Configure Instance Details • Add the appropriate storage volumes depending on your workload needs For more details on select the appropriate volume types see the Disk I/O Management section • Assign the appropriate tags to the Amazon EC2 instance We recommend that you assign tags to other Amazon resources for example Amazon Elastic Block Store (Amazon EBS) volumes to allow for more control over resou rcelevel permissions and cost allocation For best practices on tagging AWS resources see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide ArchivedAmazon Web Services Deploying Microsoft SQL Server on Amazon Web Services Page 25 Amazon EC2 Security When you run SQL Server on Amazon EC2 instances you have the responsibility to effectively protect network access to your instances with security groups adequate operating system settings and best practices such as limiting access to open port s and using strong passwords In addition you can also configure a hostbased firewall or an intrusion detection and prevention system (IDS/IPS) on your instances As with Amazon RDS in EC2 security controls start at the network layer with the network d esign itself in EC2 VPC along with subnets security groups and network access control lists as applicable For a more detailed discussion of these features review the preceding Amazon RDS Security section Using AWS Identity and Access Management (IAM) you can control access to your Amazon EC2 resources and authorize (or deny) users the ability to manage your instances running the SQL Server database and the corresponding EBS volumes For example you can r estrict the ability to start or stop your Amazon EC2 instances to a subset of your administrators You can also assign Amazon EC2 roles to your instances giving them privileges to access other AWS resources that you control For more information on how to use IAM to manage administrative access to your instances see Controlling Access to Amazon EC2 Resources in the Amazon EC2 User Guide In an Amazon EC2 deployment of SQL Server you are also responsible for patching the OS and application stack of your instances when Microsoft or other third party vendors release new security or functional patches This patching includes work for additional support services and instances such as Active Directory servers You can encrypt the EBS data volumes of 
Amazon EC2 Security
When you run SQL Server on Amazon EC2 instances, you have the responsibility to effectively protect network access to your instances with security groups, adequate operating system settings, and best practices such as limiting access to open ports and using strong passwords. In addition, you can also configure a host-based firewall or an intrusion detection and prevention system (IDS/IPS) on your instances. As with Amazon RDS, in EC2 security controls start at the network layer with the network design itself in EC2 VPC, along with subnets, security groups, and network access control lists as applicable. For a more detailed discussion of these features, review the preceding Amazon RDS Security section.
Using AWS Identity and Access Management (IAM), you can control access to your Amazon EC2 resources and authorize (or deny) users the ability to manage your instances running the SQL Server database and the corresponding EBS volumes. For example, you can restrict the ability to start or stop your Amazon EC2 instances to a subset of your administrators. You can also assign Amazon EC2 roles to your instances, giving them privileges to access other AWS resources that you control. For more information on how to use IAM to manage administrative access to your instances, see Controlling Access to Amazon EC2 Resources in the Amazon EC2 User Guide.
In an Amazon EC2 deployment of SQL Server, you are also responsible for patching the OS and application stack of your instances when Microsoft or other third-party vendors release new security or functional patches. This patching includes work for additional support services and instances, such as Active Directory servers.
You can encrypt the EBS data volumes of your SQL Server instances in Amazon EC2. This option is available to all editions of SQL Server deployed on Amazon EC2 and is not limited to the Enterprise Edition, unlike transparent data encryption (TDE). When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host Amazon EC2 instances, transparently to your instance, providing encryption of data in transit from EC2 instances to EBS storage as well. Note that encryption of boot volumes is not supported yet. Your data and associated keys are encrypted using the open standard AES-256 algorithm. EBS volume encryption integrates with AWS KMS. This integration allows you to use your own customer master key (CMK) for volume encryption. Creating and leveraging your own CMK gives you more flexibility, including the ability to create, rotate, and disable keys, define access controls, and audit the encryption keys used to protect your data.
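A minimal sketch of creating and attaching a KMS-encrypted data volume is shown below, assuming boto3 and a customer-managed key. The key alias, volume size and IOPS, instance ID, and device name are illustrative assumptions only.

# Hedged sketch: create a KMS-encrypted EBS data volume for SQL Server data files
# and attach it to a running instance. The Availability Zone, KMS key alias,
# instance ID, and device name are placeholders, not values from the whitepaper.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",          # must match the instance's AZ
    Size=500,                               # GiB, sized for data files
    VolumeType="io1",
    Iops=10000,                             # provisioned IOPS for the data volume
    Encrypted=True,
    KmsKeyId="alias/sqlserver-ebs",         # placeholder customer-managed key
    TagSpecifications=[{"ResourceType": "volume",
                        "Tags": [{"Key": "Name", "Value": "sqlserver-data-01"}]}],
)

ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",  # placeholder instance
                  Device="xvdf")                     # device name shown to the OS may differ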
Performance Management
The performance of a relational DB instance on AWS depends on many factors, including the Amazon EC2 instance type, the configuration of the database software, the application workload, and the storage configuration. The following sections describe various options that are available to you to tune the performance of the AWS infrastructure on which your SQL Server instance is running.
Instance Sizing
AWS has many different Amazon EC2 instance types available, so you can choose the instance type that best fits your needs. These instance types vary in size, ranging from the smallest instance, the t2.micro with 1 vCPU, 1 GB of memory, and EBS-only storage, to the largest instance, the d2.8xlarge with 36 vCPUs, 244 GB of memory, 48 TB of local storage, and 10 gigabit network performance. We recommend that you choose Amazon EC2 instances that best fit your workload requirements and have a good balance of CPU, memory, and I/O performance. SQL Server workloads are typically memory bound, so look at the r5 or r5d instances, also referred to as memory-optimized instances. If your workload is more CPU bound, look at the latest compute-optimized instances of the c5 instance family. See Amazon EC2 Instance Types for more information.
You can customize the number of CPU cores for the instance. You might do this to potentially optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores. See Optimizing CPU Options for more information.
One of the differentiators among all these instance types is that the m5, r5, and c5 instance types are EBS-optimized by default, whereas older instance types such as the r3 family can be optionally EBS-optimized. You can find a detailed explanation of EBS-optimized instances in the Disk I/O Management section following.
If your workload is network bound, again look at instance families that support 25 gigabit network performance, because these instance families also support Enhanced Networking. These include the r5, z1d, m5, and c5 instance families. The i3en and c5n instance types even support 100 gigabit network performance. Enhanced Networking enables you to get significantly higher packet per second (PPS) performance, lower network jitter, and lower latencies by using single root I/O virtualization (SR-IOV). This feature uses a new network virtualization stack that provides higher I/O performance and lower CPU utilization compared to traditional implementations. See Enhanced Networking on Windows in the Amazon EC2 User Guide.
Disk I/O Management
The same storage types available for Amazon RDS are also available when deploying SQL Server on Amazon EC2. Additionally, you also have access to instance storage. Because you have fine-grained control over the storage volumes and strategy to use, you can deploy workloads that require more than 4 TiB in size or 64,000 IOPS in Amazon EC2. Multiple EBS volumes or instance storage disks can even be striped together in a software RAID configuration to aggregate both the storage size and usable IOPS beyond the capabilities of a single volume. The two main Amazon EC2 storage options are as follows:
• Instance store volumes: Several Amazon EC2 instance types come with a certain amount of local (directly attached) storage, which is ephemeral. These include the R5d, M5d, i3, i3en, and x1e instance types. Any data saved on instance storage is no longer available after you stop and restart that instance, or if the underlying hardware fails, which causes an instance restart to happen on a different host server. This characteristic makes instance storage a challenging option for database persistent storage. However, instance store volumes can have the following benefits:
o Instance store volumes offer good performance for sequential disk access and don't have a negative impact on your network connectivity. Some customers have found it useful to use these disks to store temporary files to conserve network bandwidth.
o Instance types with large amounts of instance storage offer unmatched I/O performance and are recommended for database workloads as long as you implement a backup or replication strategy that addresses the ephemeral nature of this storage.
• EBS volumes: Similar to Amazon RDS, you can use EBS for persistent block-level storage volumes. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. Amazon EBS volume data is mirrored across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. You can back them up to Amazon S3 by using snapshots. These attributes make EBS volumes suitable for data files, log files, and the flash recovery area. Although the maximum size of an EBS volume is 16 TB, you can address larger database sizes by striping your data across multiple volumes. See EBS volume characteristics for more information.
EBS-optimized instances enable Amazon EC2 instances to fully utilize the Provisioned IOPS on an EBS volume. These instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, depending on the instance type. When attached to EBS-optimized instances, Provisioned IOPS volumes are designed to deliver within 10 percent of their provisioned performance 99.9 percent of the time. The combination of EBS-optimized instances and Provisioned IOPS volumes helps to ensure that instances are capable of consistent and high EBS I/O performance. See EBS optimized by default for more information. Most databases with high I/O requirements should benefit from this feature. You can also use EBS-optimized instances with standard EBS volumes if you need predictable bandwidth between your instances and EBS. For up-to-date information about the availability of EBS-optimized
instances, see Amazon EC2 Instance Types.
To scale up random I/O performance, you can increase the number of EBS volumes your data resides on, for example by using eight 100 GB EBS volumes instead of one 800 GB EBS volume. However, remember that using striping generally reduces the operational durability of the logical volume by a degree inversely proportional to the number of EBS volumes in the stripe set. The more volumes you include in a stripe, the larger the pool of data that can get corrupted if a single volume fails, because the data on all other volumes of the stripe gets invalidated also. EBS volume data is natively replicated, so using RAID 0 (striping) might provide you with sufficient redundancy and availability. No other RAID mechanism is supported for EBS volumes.
Data, logs, and temporary files benefit from being stored on independent EBS volumes or volume aggregates because they present different I/O patterns. To take advantage of additional EBS volumes, be sure to evaluate the network load to help ensure that your instance size is sufficient to provide the network bandwidth required.
I3 and i3en instances with instance storage are optimized to deliver tens of thousands of low-latency random I/O operations per second (IOPS) to applications from direct-attached SSD drives. These instances provide an alternative to EBS volumes for the most I/O-demanding workloads.
Amazon EC2 offers many options to optimize and tune your I/O subsystem. We encourage you to benchmark your application on several instance types and storage configurations to select the most appropriate configuration. For EBS volumes, we recommend that you monitor the CloudWatch average queue length metric of a given volume and target an average queue length of 1 for every 500 IOPS for volumes up to 2,000 IOPS, and a length between 4 and 8 for volumes with 2,000 to 4,000 IOPS. Lower metrics indicate overprovisioning, and higher numbers usually indicate your storage system is overloaded.
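One way to apply this queue-length guidance is to pull the metric programmatically. The following sketch, assuming boto3 and a placeholder volume ID, retrieves the average VolumeQueueLength for the last hour.

# Hedged sketch: retrieve the average EBS VolumeQueueLength from CloudWatch to
# compare against the queue-length guidance above. The volume ID and time window
# are placeholders; VolumeQueueLength is a standard Amazon EBS CloudWatch metric.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed Region

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                  # 5-minute data points
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))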
High Availability
High availability is a design and configuration principle to help protect services or applications from single points of failure. The goal is for services and applications to continue to function even if underlying physical hardware fails or is removed or replaced. We will review three native SQL Server features that improve database high availability and ways to deploy these features on AWS.
Log Shipping
Log shipping provides a mechanism to automatically send transaction log backups from a primary database on one DB instance to one or more secondary databases on separate DB instances. Although log shipping is typically considered a disaster recovery feature, it can also provide high availability by allowing secondary DB instances to be promoted as the primary in the event of a failure of the primary DB instance.
Log shipping offers you many benefits to increase the availability of log-shipped databases. Besides the benefits of disaster recovery and high availability already mentioned, log shipping also provides access to secondary databases to use as read-only copies of the database; this access is available between restore jobs. It can also allow you to configure a lag delay, or a longer delay time, which can allow you to recover accidentally changed data on the primary database before these changes are shipped to the secondary database. We recommend running the primary and secondary DB instances in separate Availability Zones and optionally deploying a monitor instance to track all the details of log shipping. Backup, copy, restore, and failure events for a log shipping group are available from the monitor instance.
Database Mirroring
Database mirroring is a feature that provides a complete or almost complete mirror of a database, depending on the operating mode, on a separate DB instance. Database mirroring is the technology used by Amazon RDS to provide Multi-AZ support for Amazon RDS for SQL Server. This feature increases the availability and protection of mirrored databases and provides a mechanism to keep mirrored databases available during upgrades.
In database mirroring, SQL Servers can take one of three roles: the principal server, which hosts the read/write principal version of the database; the mirror server, which hosts the mirror copy of the principal database; and an optional witness server. The witness server is only available in high-safety mode and monitors the state of the database mirror and automates the failover from the primary database to the mirror database. A mirroring session is established between the principal and mirror servers, which act as partners. They perform complementary roles, as one partner assumes the principal role while the other partner assumes the mirror role. Mirroring performs all inserts, updates, and deletes that are executed against the principal database on the mirror database. Database mirroring can either be a synchronous or asynchronous operation. These operations are performed in the two mirroring operating modes:
• High-safety mode uses synchronous operation. In this mode, the database mirror session synchronizes the inserts, updates, and deletes from the principal database to the mirror database as quickly as possible using a synchronous operation. As soon as the database is synchronized, the transaction is committed on both partners. This mode has increased transaction latency, as each transaction needs to be committed on both the principal and mirror databases. Because of this increased latency, we recommend that partners be in the same or different Availability Zones hosted within the same AWS Region when you use this operating mode.
• High-performance mode uses asynchronous operation. Using this mode, the database mirror session synchronizes the inserts, updates, and deletes from the principal database to the mirror database using an asynchronous process. Unlike a synchronous operation, this mode can result in a lag between the time the principal database commits the transactions and the time the mirror database commits the transactions. This mode has minimum transaction latency and is recommended when partners are in different AWS Regions.
SQL Server Always On Availability Groups
Always On availability groups is an enterprise-level feature that provides high availability and disaster recovery to SQL Server databases. Always On availability groups uses advanced features of Windows Failover Cluster and the Enterprise Edition of all versions of SQL Server from SQL Server 2012. Starting in SQL Server 2016 SP1, basic availability groups are available for Standard Edition SQL Server as well (as a replacement for database mirroring). These availability groups support the failover of a set of user databases as one distinct unit or group. User databases defined within an availability group consist of primary read/write databases, along with
multiple sets of related secondary databases. These secondary databases can be made available to the application tier as read-only copies of the primary databases, thus providing a scale-out architecture for read workloads. You can also use the secondary databases for backup operations.
You can implement SQL Server Always On availability groups on Amazon Web Services using services like Windows Server Failover Clustering (WSFC), Amazon EC2, Amazon VPC, Active Directory, and DNS. Always On clusters require multiple subnets and need the MultiSubnetFailover=True parameter in the connection string to work correctly. See How do I create a SQL Server Always On availability group cluster in the AWS Cloud? for how to deploy SQL Server Always On availability groups. For details on how to deploy SQL Server Always On availability groups in AWS using CloudFormation, see the SQL Server on the AWS Cloud: Quick Start Reference Deployment.
Figure 2: SQL Server Always On availability group
Monitoring and Management
Amazon CloudWatch is an AWS instance monitoring service that provides detailed CPU, disk, and network utilization metrics for each Amazon EC2 instance and EBS volume. Using these metrics, you can perform detailed reporting and management. This data is available in the AWS Management Console and also through the API. Using the API allows for infrastructure automation and orchestration based on load metrics. Additionally, Amazon CloudWatch supports custom metrics, such as memory utilization or disk utilization, which are metrics visible only from within the instance. You can publish your own relevant metrics to the service to consolidate monitoring information.
You can also push custom logs to CloudWatch Logs to monitor, store, and access your log files for Amazon EC2 SQL Server instances. You can then retrieve the associated log data from CloudWatch Logs using the Amazon CloudWatch console, the CloudWatch Logs commands in the AWS CLI, or the CloudWatch Logs SDK. This approach allows you to track log events in real time for your SQL Server instances.
As with Amazon RDS, you can configure alarms on Amazon EC2, Amazon EBS, and custom metrics to trigger notifications when the state changes. An alarm tracks a single metric over a time period you specify and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. Notifications are sent to Amazon SNS topics or AWS Auto Scaling policies. You can configure these alarms to notify database administrators by email or SMS text message when they get triggered.
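As a hedged sketch of this approach, the following Python code publishes a custom metric collected inside the instance and creates an alarm on it that notifies an SNS topic. The namespace, metric name, threshold, and topic ARN are assumptions, not prescriptions from this paper.

# Hedged sketch: publish a custom metric (for example, memory utilization collected
# inside the instance) and alarm on it with SNS notification. The namespace, metric,
# instance ID, threshold, and topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed Region
INSTANCE_ID = "i-0123456789abcdef0"

# Publish one data point; in practice an agent or scheduled task would do this.
cloudwatch.put_metric_data(
    Namespace="Custom/SQLServer",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": INSTANCE_ID}],
        "Value": 87.5,
        "Unit": "Percent",
    }],
)

# Alarm when average memory utilization stays above 90% for three 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="sqlserver-memory-high",
    Namespace="Custom/SQLServer",
    MetricName="MemoryUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:dba-alerts"],  # placeholder topic
)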
In addition, you can use Microsoft and any third-party monitoring tools that have built-in SQL Server monitoring capabilities. Amazon EC2 SQL Server monitoring can be integrated with System Center Operations Manager (SCOM). Open source monitoring frameworks such as Nagios can also be run on Amazon EC2 to monitor your whole AWS environment, including your SQL Server databases.
The management of a SQL Server database on Amazon EC2 is similar to the management of an on-premises database. You can use SQL Server Management Studio, SQL Server Configuration Manager, SQL Server Profiler, and other Microsoft and third-party tools to perform administration or tuning tasks. AWS also offers the AWS Add-ins for Microsoft System Center to extend the functionality of your existing Microsoft System Center implementation to monitor and control AWS resources from the same interface as your on-premises resources. These add-ins are currently available at no additional cost for SCOM versions 2007 and 2012 and System Center Virtual Machine Manager (SCVMM).
Although you can use Amazon EBS snapshots as a mechanism to back up and restore EBS volumes, the service does not integrate with the Volume Shadow Copy Service (VSS). You can take a snapshot of an attached volume that is in use. However, VSS integration is required to ensure that the disk I/O of SQL Server is temporarily paused during the snapshot process. Any data that has not been persisted to disk by SQL Server or the operating system at the time of the EBS snapshot is excluded from the snapshot. Lacking coordination with VSS, there is a risk that the snapshot will not be consistent and the database files can potentially get corrupted. For this reason, we recommend using third-party backup solutions that are designed for SQL Server workloads.
Managing Cost
AWS elastic and scalable infrastructure and services make running SQL Server on Amazon a cost-effective proposition by tracking demand more closely and reducing overprovisioning. As with Amazon RDS, the costs of running SQL Server on Amazon EC2 depend on several factors. Because you have more control over your infrastructure and resources when deploying SQL Server on Amazon EC2, there are a few additional dimensions to optimize cost on compared to Amazon RDS:
• The AWS Region the instance is deployed in
• Instance type and EBS optimization
• The type of instance tenancy selected
• The high availability solution selected
• The storage type and size selected for the EC2 instance
• The Multi-AZ mode of the instance
• The pricing model
• How long it is running during a given billing period
• The underlying operating system (Windows or Linux)
As with Amazon RDS, Amazon EC2 hourly instance costs vary by Region. If you have flexibility about where you can deploy your workloads geographically, we recommend deploying your workload in the Region with the cheapest EC2 costs for your particular use case.
Different instance types have different hourly charges. Generally, current generation instance types have lower hourly charges compared to previous generation instance types, along with better performance due to newer hardware architectures. We recommend that you test your workloads on new instance types as these become available, and plan to migrate your workloads to new instance types if the cost vs. performance ratio makes sense for your use case.
Many EC2 instance types are available with the EBS-optimized option. This option is available for an additional hourly surcharge and provides additional dedicated networking capacity for EBS I/O. This dedicated capacity ensures a predictable amount of networking capacity to sustain predictable EBS I/O. Some current generation instance types, such as the C4, M4, and D2 instance types, are EBS-optimized by default and don't have an additional surcharge for the optimization.
Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from your instances that aren't Dedicated Instances and from instances that belong to other AWS accounts. We recommend deploying EC2 SQL Server instances in dedicated
tenancy if you have certain regulatory needs. Dedicated tenancy has a per-Region surcharge for each hour a customer runs at least one instance in dedicated tenancy. The hourly cost for instance types operating in dedicated tenancy is different from that for standard tenancy. Up-to-date pricing information is available on the Amazon EC2 Dedicated Instances pricing page.
You also have the option to provision EC2 Dedicated Hosts. These are physical servers with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses. For more information, see Amazon EC2 Dedicated Hosts and Bring license to AWS.
Amazon EC2 Reserved Instances allow you to lower costs and reserve capacity. Reserved Instances can save you up to 70 percent over On-Demand rates when used in steady state. They can be purchased for one- or three-year terms. If your SQL Server database is going to be running more than 60 percent of the time, you will most likely benefit financially from using a Reserved Instance. Unlike with On-Demand pricing, the capacity reservation is made for the entire duration of the term, whether a specific instance is using the reserved capacity or not. The following pricing options are available for EC2 Reserved Instances:
• All Upfront Reserved Instances: You pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing.
• Partial Upfront Reserved Instances: You make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.
• No Upfront Reserved Instances: You don't make any upfront payments, but will be charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. This option still provides you with a significant discount compared to On-Demand Instance pricing, but the discount is usually less than for the other two Reserved Instance pricing options.
Additionally, the following options can be combined to reduce your cost of operating SQL Server on EC2:
• Use the Windows Server with SQL Server AMIs, where licensing is included. The cost of the SQL Server license is included in the hourly cost of the instance, so you are only paying for the SQL Server license when the instance is running. This approach is especially effective for databases that are not running 24/7 and for short projects.
• Shut down DB instances when they are not needed. For example, some development and test databases can be shut down at night and on weekends and restarted on weekday mornings.
• Scale down the size of your databases during off-peak times.
• Use the Optimizing CPU Options feature.
Caching
Whether using SQL Server on Amazon EC2 or Amazon RDS, SQL Server users confronted with heavy workloads should look into reducing this load by caching data so that the web and application servers don't have to repeatedly access the database for common or repeat datasets. Deploying a caching layer between the business logic layer and the database is a common architectural design pattern to reduce the amount of read traffic and connections to the database itself. The effectiveness of the cache depends largely on the following aspects:
• Generally, the more read-heavy the query patterns of the application are on the database, the more effective caching can be.
• Commonly, the more repetitive the query patterns are, with queries returning infrequently changing datasets, the more you can benefit from caching.
Leveraging caching usually requires changes to applications. The logic of checking, populating, and updating a cache is normally implemented in the application data and database abstraction layer, or Object Relational Mapper (ORM).
Several tools can address your caching needs. You have the option to use a managed service, similar to Amazon RDS but for caching engines. You can also choose from different caching engines that have slightly different feature sets:
• Amazon ElastiCache: In a similar fashion to Amazon RDS, ElastiCache allows you to provision fully managed caching clusters supporting both Memcached and Redis. ElastiCache simplifies and offloads the management, monitoring, and operation of a Memcached or Redis environment, enabling you to focus on the differentiating parts of your applications.
• Memcached: An open source, high-performance, distributed, in-memory object caching system. Memcached is an in-memory object store for small chunks of arbitrary data (strings, objects) such as results of database calls. Memcached is widely adopted and mostly used to speed up dynamic web applications by alleviating database load.
• Redis: An open source, high-performance, in-memory, key-value NoSQL data engine. Redis stores structured key-value data and provides rich query capabilities over your data. The contents of the data store can also be persisted to disk. Redis is widely adopted to speed up a variety of analytics workloads by storing and querying more complex or aggregate datasets in memory, relieving some of the load off backend SQL databases.
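A minimal cache-aside sketch of this check-populate-expire logic is shown below, assuming the redis-py client and an ElastiCache for Redis endpoint. The endpoint, key scheme, TTL, and the query_database() helper are hypothetical placeholders.

# Hedged sketch of the cache-aside (check, populate, expire) logic described above,
# using the redis-py client against an ElastiCache for Redis endpoint. The endpoint,
# key naming, TTL, and the query_database() helper are hypothetical placeholders.
import json
import redis

cache = redis.Redis(host="my-cache.abcdef.0001.use1.cache.amazonaws.com", port=6379)

def query_database(product_id):
    # Placeholder for the real data-access call (for example, via pyodbc to SQL Server).
    return {"product_id": product_id, "name": "example", "price": 9.99}

def get_product(product_id, ttl_seconds=300):
    key = f"product:{product_id}"
    cached = cache.get(key)            # 1. check the cache first
    if cached is not None:
        return json.loads(cached)
    row = query_database(product_id)   # 2. fall back to the database on a miss
    cache.setex(key, ttl_seconds, json.dumps(row))  # 3. populate with an expiry
    return row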
Hybrid Scenarios and Data Migration
Some AWS customers already have SQL Server running in their on-premises or colocated data center, but want to use the AWS Cloud to enhance their architecture to provide a more highly available solution or one that offers disaster recovery. Other customers are looking to migrate workloads to AWS without incurring significant downtime; these efforts often can stretch over a significant amount of time. AWS offers several services and tools to assist customers in these use cases, and SQL Server has several replication technologies that offer high availability and disaster recovery solutions. These features differ depending on the SQL Server version and edition.
Amazon RDS on VMware lets you deploy managed databases in on-premises VMware environments using the Amazon RDS technology enjoyed by hundreds of thousands of AWS customers. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks, including hardware provisioning, database setup, patching, and backups, freeing you to focus on your applications. RDS on VMware brings these same benefits to your on-premises deployments, making it easy to set up, operate, and scale databases in VMware vSphere private data centers, or to migrate them to AWS. RDS on VMware allows you to utilize the same simple interface for managing databases in on-premises VMware environments as you would use in AWS. You can easily replicate RDS on VMware databases to RDS instances in AWS, enabling low-cost hybrid deployments for disaster recovery, read replica bursting, and optional long-term backup retention in Amazon Simple Storage Service (S3). Amazon RDS on VMware supports Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB databases, with Oracle to follow in the future.
Backups to the Cloud
AWS storage solutions allow you to pay for only what you need. AWS doesn't require capacity planning, purchasing capacity in advance, or any large upfront payments. You get the benefits of AWS storage solutions without the upfront investment and hassle of setting up and maintaining an on-premises system.
Amazon Simple Storage Service (Amazon S3)
Using Amazon S3, you can take advantage of the flexibility and pricing of cloud storage. S3 gives you the ability to back up SQL Server databases to a highly secure, available, durable, and reliable storage solution. Many third-party backup solutions are designed to securely store SQL Server backups in Amazon S3. You can also design and develop a SQL Server backup solution yourself by using AWS tools like the AWS CLI, AWS Tools for Windows PowerShell, or a wide variety of SDKs for .NET or Java, as well as the AWS Toolkit for Visual Studio.
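As a hedged example of the do-it-yourself approach, the following sketch uses boto3 (rather than the AWS CLI or PowerShell mentioned above) to upload a native backup file to Amazon S3. The bucket, key prefix, and local path are placeholders, and producing the .bak file itself is outside the snippet.

# Hedged sketch: upload a native SQL Server backup file to Amazon S3 as part of a
# do-it-yourself backup approach. The bucket name, key prefix, and local backup path
# are placeholders; producing the .bak file (for example, via a BACKUP DATABASE job)
# is outside this snippet.
import boto3
from datetime import datetime

s3 = boto3.client("s3", region_name="us-east-1")  # assumed Region

BUCKET = "example-sqlserver-backups"              # placeholder bucket
local_backup = r"D:\Backups\SalesDB.bak"          # placeholder path written by SQL Server
key = f"salesdb/{datetime.utcnow():%Y/%m/%d}/SalesDB.bak"

s3.upload_file(
    Filename=local_backup,
    Bucket=BUCKET,
    Key=key,
    ExtraArgs={"ServerSideEncryption": "aws:kms"},  # encrypt the backup at rest
)
print(f"Uploaded to s3://{BUCKET}/{key}")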
AWS Storage Gateway
AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. The service allows you to securely store data in the AWS Cloud for scalable and cost-effective storage. AWS Storage Gateway supports open standard storage protocols that work with your existing applications. It provides low-latency performance by maintaining frequently accessed data on premises while securely storing all of your data encrypted in Amazon S3. AWS Storage Gateway enables your existing on-premises-to-cloud backup applications to store primary backups on Amazon S3's scalable, reliable, secure, and cost-effective storage service.
SQL Server Log Shipping Between On-Premises and Amazon EC2
Some AWS customers have already deployed SQL Server using a Windows Server Failover Cluster design in an on-premises or colocated facility. This approach provides high availability in the event of component failure within a data center, but doesn't protect against a significant outage impacting multiple components or the entire data center. Other AWS customers have been using SQL Server synchronous mirroring to provide a high availability solution in their on-premises data center. Again, this provides high availability in the event of component failure within the data center, but doesn't protect against a significant outage impacting multiple components or the entire data center.
You can extend your existing on-premises high availability solution and provide a disaster recovery solution with AWS by using the native SQL Server feature of log shipping. SQL Server transaction logs can ship from on-premises or colocated data centers to a SQL Server instance running on an Amazon EC2 instance within a VPC. This data can be securely transmitted over a dedicated network connection using AWS Direct Connect, or over a secure VPN tunnel. Once shipped to the Amazon EC2 instance, these transaction log backups are applied to secondary DB instances. You can configure one or multiple databases as secondary databases. An optional third Amazon EC2 instance can be configured to act as a monitor, an instance that monitors the status of backup and restore operations and raises events if these operations fail.
Figure 3: Hybrid SQL Server Log Shipping
SQL Server Always On Availability Groups Between On-Premises and Amazon EC2
SQL Server Always On availability groups is an advanced, enterprise-level feature to provide high availability and disaster recovery solutions. This feature is available when deploying the Enterprise Edition of SQL Server 2012, 2014, 2016, or 2017 within the AWS Cloud on Amazon EC2, or on physical or virtual machines deployed in on-premises or colocated data centers. SQL Server 2016 and SQL Server 2017 Standard Edition provide basic high availability: two-node, single-database failover with a non-readable secondary. You can also set up Always On availability groups on Linux-based SQL Server by using Pacemaker for clustering instead of Windows Server Failover Clustering (WSFC).
If you have existing on-premises deployments of SQL Server Always On availability groups, you might want to use the AWS Cloud to provide an even higher level of availability and disaster recovery. To do so, you can extend your data center into a VPC by using a dedicated network connection like AWS Direct Connect, or by setting up secure VPN tunnels between these two environments. Consider the following points when planning a hybrid implementation of SQL Server Always On availability groups:
• Establish a secure, reliable, and consistent network connection between on-premises and AWS (using AWS Direct Connect or VPN).
• Create a VPC based on the Amazon VPC service.
• Use Amazon VPC route tables and security groups to enable the appropriate communication between the new environments.
• Extend Active Directory domains into the VPC by deploying domain controllers as Amazon EC2 instances, or by using the AWS Directory Service AD Connector service.
• Use synchronous mode between SQL Server instances within the same environment (for example, all instances on premises or all instances in AWS).
• Use asynchronous mode between SQL Server instances in different environments (for example, an instance in AWS and one on premises).
Figure 4: Always On availability groups
You can also use distributed availability groups. This type of availability group is supported in SQL Server 2016 and later versions. Distributed availability groups span two separate availability groups, and you can use them with AWS as a DR solution or for migrating on-premises databases to Amazon EC2.
Figure 5: Hybrid Windows Server Failover Cluster
AWS Database Migration Service
AWS Database Migration Service helps you migrate databases to AWS easily and securely. When you use the AWS Database Migration Service, the source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. You can begin a database migration with just a few clicks in the AWS Management Console. Once the migration has started, AWS manages many of the complexities of the migration process, like data type transformation, compression, and parallel transfer (for faster data transfer), while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target. The service is intended to support migrations to and from AWS-hosted databases where both the source and destination engine are the same, as well as heterogeneous data sources.
Comparison of Microsoft SQL Server Feature Availability on AWS
The following table shows a side-by-side comparison of available features of SQL Server in the AWS environment.
Table 2: SQL Server features on AWS

Supported versions by edition | Amazon RDS | Amazon EC2
Express | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017
Web | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017
Standard | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017
Enterprise | 2012, 2014, 2016, 2017 | 2012, 2014, 2016, 2017

Installation method by edition | Amazon RDS | Amazon EC2
Express | N/A | AMI, manual install
Web | N/A | AMI, manual install
Standard | N/A | AMI, manual install
Enterprise | N/A | AMI, manual install

Manageability benefit | Amazon RDS | Amazon EC2
Managed automated backups | Yes | No (need to configure and manage maintenance plans or use third-party solutions)
Multi-AZ with automated failover | Yes | Enterprise Edition only (with manual configuration of Always On Availability Groups)
Built-in instance and database monitoring and metrics | Yes | No (push your own metrics to CloudWatch or use a third-party solution)
Automatic software patching | Yes | No
Preconfigured parameters | Yes | No (default SQL Server installation only)
DB event notifications | Yes | No (manually track and manage DB events)

SQL Server feature | Amazon RDS | Amazon EC2
SQL Authentication | Yes | Yes
Windows Authentication | Yes | Yes
TDE (encryption at rest) | Yes (Enterprise Edition only) | Yes (Enterprise Edition only)
Encrypted storage using AWS KMS | Yes (all editions except Express) | Yes
SSL (encryption in transit) | Yes | Yes
Database replication | No (limited push subscription) | Yes
Log shipping | No | Yes
Database mirroring | Yes (Multi-AZ) | Yes
Always On Availability Groups | Yes | Yes
Max number of DBs per instance | Depends on the instance size and Multi-AZ configuration | None
Rename existing databases | Yes (Single-AZ only) | Yes (not available for databases in availability groups or enabled for mirroring)
Max size of DB instance | 16 TiB | None
Min size of DB instance | 20 GB (Web, Express); 200 GB (Standard, Enterprise) | None
Increase storage size | Yes | Yes
BACKUP command | Yes | Yes
RESTORE command | Yes | Yes
SQL Server Analysis Services | Data source only* | Yes
SQL Server Integration Services | Data source only* | Yes
SQL Server Reporting Services | Data source only* | Yes
Data Quality Services | No | Yes
Master Data Services | No | Yes
Custom set time zones | Yes | Yes
SQL Server Management Studio | Yes | Yes
Sqlcmd | Yes | Yes
SQL Server Profiler | Yes (client-side traces) | Yes
SQL Server Migration Assistant | Yes | Yes
DB Engine Tuning Advisor | Yes | Yes
SQL Server Agent | Yes | Yes
Safe CLR | Yes | Yes
Full-text search | Yes (except semantic search) | Yes
Spatial and location features | Yes | Yes
Change Data Capture | Yes (Enterprise Edition, all versions; Standard Edition for 2016/2017) | Yes
Change Tracking | Yes | Yes
Columnstore indexes | 2012 and later (Enterprise) | 2012 and later (Standard, Enterprise)
Flexible server roles | 2012 and later | 2012 and later
Partially contained databases | 2012 and later | 2012 and later
Sequences | 2012 and later | 2012 and later
THROW statement | 2012 and later | 2012 and later
UTF-16 support | 2012 and later | 2012 and later
New query optimizer | 2014 and later | 2014 and later
Delayed transaction durability (lazy commit) | 2014 and later | 2014 and later
Maintenance plans | No** | Yes
Database Mail | Yes | Yes
Linked servers | Yes | Yes
MSDTC | No | Yes
Service Broker | Yes (except endpoints) | Yes
Performance Data Collector | No | Yes
WCF Data Services | No | Yes
FILESTREAM | No | Yes
Policy-Based Management | No | Yes
SQL Server Audit | Yes | Yes
BULK INSERT | No | Yes
OPENROWSET | Yes | Yes
Data Quality Services | No | Yes
Buffer Pool Extensions | No | Yes
Stretch Database | No | Yes
Resource Governor | No | Yes
PolyBase | No | Yes
Machine Learning & R Services | No | Yes
FileTables | No | Yes

* Amazon RDS SQL Server DB instances can be used as data sources for SSRS.
** Amazon RDS provides a separate set of features to facilitate backup and recovery of databases.
*** We encourage our customers to use Amazon Simple Email Service (Amazon SES) to send outbound emails originating from AWS resources and ensure a high degree of deliverability.
For a detailed list of features supported by the editions of SQL Server, see High Availability in the Microsoft documentation.
Conclusion
AWS provides two deployment platforms to deploy your SQL Server databases: Amazon RDS and Amazon EC2. Each platform provides unique benefits that might be beneficial to your specific use case, but you have the flexibility to use one or both depending on your needs. Understanding how to manage performance, high availability, security, and monitoring in these environments, as outlined in this whitepaper, is key to choosing the best approach for your use case.
Contributors
Contributors to this document include:
• Jugal Shah, Solutions Architect, Amazon Web Services
• Richard Waymire, Outbound Principal Architect, Amazon Web Services
• Russell Day, Solutions Architect, Amazon Web Services
• Darryl Osborne, Solutions Architect, Amazon Web Services
• Vlad Vlasceanu, Solutions Architect, Amazon Web Services
Further Reading
For additional information, see:
• Microsoft Products on AWS
• Active Directory Reference Architecture: Implementing Active Directory Domain Services on AWS
• Remote Desktop Gateway on AWS
• Securing the Microsoft Platform on AWS
• Implementing Microsoft Windows Server Failover Clustering and SQL Server Always On Availability Groups in the AWS Cloud
• AWS Directory Service
• SQL Server Database Restore to Amazon EC2 Linux
Document Revisions
Date | Description
November 2019 | Updated with information on new features and changes: release of SQL Server 2016 and 2017 in RDS, RDS backup, and SQL Server on EC2 Linux; new instance classes; updated screen captures, architecture diagrams, Optimize CPU, Hybrid Scenarios, and other minor corrections and content updates
June 2016 | Updated with information on new features and changes: release of Amazon RDS SQL Server Windows Authentication; availability of SQL Server 2014 in Amazon RDS; new RDS Reserved DB Instance pricing model; availability of the AWS Database Migration Service; other minor corrections and content updates
May 2015 | First publication
|
General
|
consultant
|
Best Practices
|
Designing_MQTT_Topics_for_AWS_IoT_Core
|
This version has been archived. For the latest version, refer to: https://docs.aws.amazon.com/whitepapers/latest/designing-mqtt-topics-aws-iot-core/designing-mqtt-topics-aws-iot-core.html?did=wp_card&trk=wp_card
Designing MQTT Topics for AWS IoT Core
May 2019
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.
|
General
|
consultant
|
Best Practices
|
Determining_the_IOPS_Needs_for_Oracle_Database_on_AWS
|
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/determining-iops-needs-oracle-db-on-aws/determining-iops-needs-oracle-db-on-aws.html
Determining the IOPS Needs for Oracle Database on AWS
First Published December 2018
Updated November 17, 2021
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.
Contents
Introduction
Storage options for Oracle Database
IOPS basics
Estimating IOPS for an existing database
Estimating IOPS for a new database
Considering throughput
Verifying your configuration
Conclusion
Contributors
Further reading
Document revisions
Abstract
Amazon Web Services (AWS) provides a comprehensive set of services and tools for deploying Oracle Database on the AWS Cloud infrastructure, one of the most reliable and secure cloud computing services available today. Many businesses of all sizes use Oracle Database to handle their data needs. Oracle Database performance relies heavily on the performance of the storage subsystem, but storage performance always comes at a price. This whitepaper includes information to help you determine the input/output operations per second (IOPS) necessary for your database storage system to have the best performance at optimal cost.
Introduction
AWS offers customers the flexibility to run Oracle Database on either Amazon Relational Database Service (Amazon RDS), which is a managed database service in the cloud, or on Amazon Elastic Compute Cloud (Amazon EC2). Many customers prefer to use Amazon RDS for Oracle Database because it provides an easy managed option to run Oracle Database on AWS without having to think about infrastructure provisioning or installing and maintaining database software. You can also run Oracle Database directly on Amazon EC2, which allows you full control over setup of the entire infrastructure and database environment.
To get the best performance from your database, you must configure the storage tier to provide
the IOPS and throughput that the database needs. This is a requirement for both Oracle Database on Amazon RDS and Oracle Database on Amazon EC2. If the storage system does not provide enough IOPS to support the database workload, you will have sluggish database performance and transaction backlog. However, if you provision much higher IOPS than your database actually needs, you will have unused capacity. The elastic nature of the AWS infrastructure allows you to increase or decrease the total IOPS available for Oracle Database on Amazon EC2, but doing this could have a performance impact on the database, requires extra effort, and might require database downtime.
Storage options for Oracle Database
For Oracle Database storage on AWS, you must use Amazon Elastic Block Store (Amazon EBS) volumes, which offer the consistent low-latency performance required to run your Oracle Database. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, which provides high availability and durability. Amazon EBS provides these volume types:
• General Purpose solid state drive (SSD) (gp2)
• General Purpose SSD (gp3)
• Provisioned IOPS SSD (io1)
• Provisioned IOPS SSD (io2)
• Throughput Optimized hard disk drive (HDD) (st1)
• Cold HDD (sc1)
Volume types differ in performance characteristics and cost. For the high and consistent IOPS required for Oracle Database, Amazon EBS General Purpose SSD or Amazon EBS Provisioned IOPS SSD volumes are the best fit.
For gp2 volumes, IOPS performance is directly related to the provisioned capacity. gp2 volumes can deliver a consistent baseline of 3 IOPS/GB, up to a maximum of 16,000 IOPS (based on 16 KB/IO) for a 16 TB volume. Input/output (I/O) is included in the price of gp2 volumes, so you pay only for each gigabyte of storage that you provision. gp2 volumes also have the ability to burst to 3,000 IOPS per volume, independent of volume size, to meet the periodic spikes in performance that most applications need. This is useful for databases whose normal IOPS needs you can predict well, but that might still experience an occasional higher spike based on specific workloads. Currently, gp3 is available for Oracle databases running on Amazon EC2. It has the same qualities as gp2, but also increases the throughput from 250 MiB/s to 1,000 MiB/s.
gp2 volumes are sufficient for most Oracle Database workloads. If you need more IOPS and throughput than gp2 can provide, Provisioned IOPS (PIOPS) is the best choice. io2 volumes can provide up to 64,000 IOPS per volume for AWS Nitro-based instances and 32,000 IOPS per volume for other instance families.
Throughput Optimized HDD volumes (st1) offer a low-cost HDD volume designed for intensive workloads that require fewer IOPS but high throughput. Oracle databases used for data warehouse and data analytics purposes can use st1 volumes. Any log processing or data staging areas that require high throughput, such as Oracle external tables or external BLOB storage, can use st1 volumes. st1 volumes can handle a maximum of 500 IOPS per volume.
Cold HDD volumes (sc1) are suitable for legacy systems that you retain for occasional reference or archive purposes. These systems are accessed less frequently, and only a few scans are performed
each day on the volume.
You can create striped volumes (areas of free space on multiple volumes) for more IOPS and larger capacity. The maximum IOPS an EC2 instance can support across all EBS volumes is 260,000. The maximum IOPS an RDS instance can support is 256,000. Use only Amazon EBS-optimized instances with gp2 and PIOPS volumes. You can use multiple EBS volumes individually for different data files, but striped volumes allow better throughput balancing, scalability, and burstable performance (for gp2).
IOPS basics
IOPS is the standard measure of I/O operations per second on a storage device. It includes both read and write operations. The amount of I/O used by Oracle Database can vary greatly in a time period, based on the server load and the specific queries running. If you are migrating an existing Oracle Database to AWS, to ensure that you get the best performance regardless of load, you must determine the peak IOPS used by your database and provision Amazon EBS volumes on AWS accordingly. If you choose an IOPS number based on the average IOPS used by your existing database, you should have sufficient IOPS for the database in most cases, but database performance will suffer at peak load. You can mitigate this issue to some extent by using Amazon EBS gp2 volumes, which have the ability to burst to higher IOPS for small periods of time.
Customers sometimes assume that they need much more IOPS than they actually do. This assumption occurs if customers confuse storage system IOPS with database IOPS. Most enterprises use storage area network (SAN) systems that can provide 100,000–200,000 or more IOPS for storage. The same SAN storage is usually shared by multiple databases and file systems, which means the total IOPS provided by the storage system is used by many more applications than a single database. Most Oracle Database production systems in domains such as enterprise resource planning (ERP) and customer relationship management (CRM) are in the range of 3,000–30,000 IOPS. Your individual application might have different IOPS requirements. A performance test environment's IOPS needs are generally identical to those of production environments, but for other test and development environments the range is usually 200–2,000 IOPS. Some online transaction processing (OLTP) systems use up to 60,000 IOPS. There are Oracle databases that use more than 60,000 IOPS, but that is unusual. If your environment shows numbers outside these parameters, you should complete further analysis to confirm your numbers.
Estimating IOPS for an existing database
The best way to estimate the actual IOPS necessary for your database is to query the system tables over a period of time and find the peak IOPS usage of your existing database. To do this, you measure IOPS over a period of time and select the highest value. You can get this information from the GV$SYSSTAT dynamic performance view, which is a special view in Oracle Database that provides database performance information. This view is
continuously updated while the database is open and in use. Oracle Enterprise Manager and Automatic Workload Repository (AWR) reports also use these views to gather data. There is a GV$ view for almost all V$ views. GV$ views contain data for all nodes in a Real Application Cluster (RAC), identified by an instance ID. You can also use GV$ views for non-RAC systems, which have only one row for each performance criterion.
To determine IOPS, you can modify the following sample Oracle PL/SQL script for your needs and run the script during peak database load in your environment. For better accuracy, run this during the same peak period for a few days and then choose the highest value as the peak IOPS. Because the sample script captures data and stores it in the PEAK_IOPS_MEASUREMENT table, you must first create the table with this code:

CREATE TABLE peak_iops_measurement (
  capture_timestamp  date,
  total_read_io      number,
  total_write_io     number,
  total_io           number,
  total_read_bytes   number,
  total_write_bytes  number,
  total_bytes        number
);

The following script runs for an hour (run_duration := 3600) and captures data every five seconds (capture_gap := 5). It then calculates the average I/O and throughput per second for those 5 seconds and stores this information in the table. To best fit your needs, you can modify the run_duration and capture_gap values to change the number of seconds that the script runs and the frequency, in seconds, at which data is captured.

DECLARE
  run_duration  number := 3600;
  capture_gap   number := 5;
  loop_count    number := run_duration / capture_gap;
To prepare for any unforeseen performance spikes, we recommend that you add an additional 10 percent to this peak IOPS number to arrive at the actual IOPS that your database needs. This is the total number of IOPS you should provision for your Amazon EBS volume (gp2 or io1).
Estimating IOPS for a new database
If you are setting up a database for the first time on AWS and you don't have any existing statistics, you can use an IOPS number based on the expected number of application transactions per second. Though the IOPS necessary per transaction can vary widely (based on the amount of data involved, the number of queries in a transaction, and the query complexity), generally 30 IOPS per transaction is a good number to consider. For example, if you are expecting 100 transactions per second, you can start with 3,000 IOPS Amazon EBS volumes. Because the amount of data in a new database is usually small, changing the IOPS associated with Amazon EBS will be relatively simple, whether your database is on Amazon RDS or Amazon EC2.
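If you prefer to script these calculations, the following Python sketch applies the two rules of thumb described above (10 percent headroom over a measured peak, and 30 IOPS per expected transaction per second). It is only a convenience wrapper around that arithmetic; the function names and example figures are illustrative and are not part of any AWS tooling.
# Minimal sizing helper based on the guidance in this paper. The 10 percent
# headroom and the 30 IOPS-per-transaction rule of thumb come from the text
# above; the function names and example numbers are illustrative only.

def iops_from_measured_peak(peak_iops: float, headroom: float = 0.10) -> int:
    """Provisioned IOPS for an existing database: measured peak plus headroom."""
    return int(round(peak_iops * (1 + headroom)))

def iops_from_expected_tps(transactions_per_second: float,
                           iops_per_transaction: float = 30) -> int:
    """Starting IOPS estimate for a new database with no measured statistics."""
    return int(round(transactions_per_second * iops_per_transaction))

# Example: a measured peak of 8,200 IOPS suggests provisioning about 9,020 IOPS;
# an expected 100 transactions per second suggests starting at about 3,000 IOPS.
print(iops_from_measured_peak(8200))   # 9020
print(iops_from_expected_tps(100))     # 3000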
Considering throughput
In addition to determining the right IOPS, it is also important to make sure that your instance configuration can handle the throughput needs of your database. Throughput is the measure of the transfer of bits across the network between the EC2 instance running your database and the Amazon EBS volumes that store the data. The amount of available throughput relates directly to the network bandwidth available to the EC2 instance and the capability of Amazon EBS to receive data. Amazon EBS-optimized instances consistently achieve the given level of performance. For more information, refer to Instance Types that Support EBS Optimization in the Amazon EC2 User Guide for Linux Instances. You can find more about Amazon EC2 and Amazon EBS configuration in the Amazon EC2 User Guide.
In addition to bandwidth availability, there are other considerations that affect which EC2 instance you should choose for your Oracle Database. These considerations include your database license, the virtual CPUs available, and memory size.
Verifying your configuration
After you configure your environment based on the IOPS and throughput numbers necessary for your environment, you can verify the configuration before you install the database by using the Oracle Orion tool, which is available from Oracle. Oracle Orion simulates Oracle Database I/O workloads using the same I/O software stack as Oracle Database, which provides a measurement of IOPS and throughput that approximates what your database will experience. For more details about this tool and to download it, refer to the Oracle website.
Conclusion
AWS provides the option to run Oracle Database on Amazon RDS or Amazon EC2. Choose Amazon RDS for a fully managed service, or Amazon EC2 if you prefer full control. AWS offers various storage services that allow the workload to be optimized for cost or performance. As workloads and requirements change, the solution can scale up or down elastically.
Contributors
The following individuals and organizations contributed to this document:
• Jayaraman Vellore Sampathkumar, Amazon Web Services
• Abdul Sathar Sait, Amazon Web Services
• Jinyoung Jung, Amazon Web Services
• Jason Massie, Amazon Web Services
Further reading
For additional information about using Oracle Database with AWS services, refer to the following resources.
Oracle Database on AWS
• Advanced Architectures for Oracle Database on Amazon EC2 whitepaper
• Strategies for Migrating Oracle Database to AWS whitepaper
• Choosing the Operating System for Oracle Workloads on Amazon EC2 whitepaper
• Best Practices for Running Oracle Database on AWS whitepaper
Oracle on AWS
• Oracle and Amazon Web Services
• Amazon RDS for Oracle Database
• Oracle in the Amazon Web Services Cloud FAQ
Oracle Reference Architecture
• Oracle quick start on AWS
Oracle licensing on AWS
• Licensing Oracle Software in the Cloud Computing Environment
Getting started with Oracle RMAN backups and Amazon S3
• Getting Started: Backup Oracle databases directly to AWS with Oracle RMAN
AWS service details and pricing
• AWS Cloud Products
• AWS Documentation
• AWS Whitepapers
• AWS Pricing
• AWS Pricing Calculator
Document revisions
November 17, 2021: Updates for technical accuracy
December 2018: First publication
|
General
|
consultant
|
Best Practices
|
Development_and_Test_on_AWS
|
This paper has been archived For the latest technical content refer t o the HTML version : https://docsawsamazoncom/whitepapers/latest/ developmentandtestonaws/developmentandtest onawshtml Development and Test on Amazon Web Services First Published November 2 2012 Updated June 29 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 Development phase 2 Source code repository 3 Project mana gement tools 3 Ondemand development environments 6 Integrating with AWS APIs and IDE enhancements 9 Build phase 10 Schedule builds 10 Ondemand builds 10 Storing build artifacts 12 Testing phase 13 Automating test environments 13 Load testing 15 User acceptance testing 18 Sidebyside testing 19 Fault tolerance testing 20 Resource management 21 Cost allocatio n and multiple AWS accounts 21 Conclusion 22 Contributors 23 Further reading 23 Document revisions 23 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract This whitepaper describes how Amazon Web Services (AWS) adds value in the various phases of the software development cycle with specific focus on development and test For the development phase this whitepaper: • Shows you how to use AWS for managing version control • Describes project management tools the build process and environments hosted on AWS • Illustrates best practices For the test phase this whitepaper describes how to manage test environments and run various kinds of tests including load testing acceptance testing fault tolerance testing and so on AWS provides unique advantages in each of these scenarios and phases enabling you to choose the ones most appropriate for your software development project The intended audiences for this paper are project managers developers testers systems architects or anyone involved in software production activities This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 1 Introduction Organiz ations write software for various reasons ranging from core business needs (when the organization is a software vendor ) to customizing or integrating software Organizations also create different types of software: web applications standalone application s automated agents and so on In all such cases development teams are pushed to deliver software of high quality as quickly as possible to reduce the time to 
market or time to production In this document “development and test” refers to the various to ols and practices applied when producing software Regardless of the type of software to be developed a proper set of development and test practices is key to success However producing applications not only requires software engineers but also IT resou rces which are subject to constraints like time money and expertise The software lifecycle typically consists of the following main elements: Elements of the software lifecycle This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 2 This whitepaper covers aspects of the development build and test phase s For each of these phases you need different types of IT infrastructure AWS provides multiple benefits to software development teams AWS offers on demand access to a wide range of cloud infrastructure services charging only for the resources that are used AWS helps eliminate both the need for costly hardware and the administrative pain that goes with owning and operating it Owning hardware and IT infrastructure usually involves a capital expenditure for a 3 5 year period where most development and test teams need compute or storage for hours days weeks or months This difference in timescales can cause friction due to the difficulty for IT operations to satisfy simultaneous requests from project teams even as they are constrained by a fixed set of resources The result is that project teams spend a lot of time justifying sourcing and holding on to resources This time could be spent focusing on the main job By provisioning only the resources needed for the duration of development phases test runs or complete test campaigns your company can achieve important savings compared to investing up front in traditional hardware With the right level of granularity you can allocate resources depending on each project’s needs and budget In addition to those economic benefits AWS also offers significant operational advantages such as the ability to set up a development and test infrastructure in a matter of minutes rather than weeks or months and to scale capacity up and down to provide the IT resources needed only when they are needed This document highlights some of the best practices and recommendations around development and test on AWS For example for the development phase this document discuss es how to securely and durably set up tools an d processes such as version control collaboration environments and automated build processes For the testing phase this document discuss es how to set up test environments in an automated fashion and how to run various types of test s including side byside tests load tests stress tests resilience tests and more Development phase Regardless of team size software type being developed o r project duration development tools are mandatory to rationalize the process coordinate efforts and centralize production Like any IT system development tools require proper administration and maintenance Operating such tools on AWS not only relieve s your development team from low level system maintenance tasks such as network configuration hardware setup and so on but also facilitates the completion of more This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on 
Amazon Web Services 3 complex tasks The following sections describe how to operate the main components of devel opment tools on AWS Source code repository The source code repository is a key tool for development teams As such it needs to be available and the data it contains (source files under version control) needs to be durably stored with proper backup poli cies Ensuring these two characteristics — availability and durability —requires resources expertise and time investment that typically aren’t a core competency of a software development team Building a source code repository on AWS involves creating an AWS CodeCommit repository AWS CodeCommit is a secure highly scalable managed source control service that hosts private Git Hub repositories It eliminates the need for you to operate your own source control system and there is no hardware to provision and scale or software to install configure and operate You can use CodeCommit to store anything from code to binaries and it supports the standard functionality of Git Hub allowing it to work seamlessly with your existing GitHubbased tools Your team can also use CodeCommit’s online code tools to browse edit and collaborate on projects CodeCommit enables you to store any number of files and the re are no repository size limits In a few simple steps you can find information about a repository and clone it to your computer creating a local repo sitory where you can make changes and then push them to the CodeCommit repository You can work from th e command line on your local machines or use a GUI based editor Project management tools In addition to the source code repository teams often use additional tools such as issue tracking project tracking code quality analysis collaboration content sh aring and so on Most of the time those tools are provided as web applications Like any other classic web application they require a server to run and frequently a relational database The web components can be installed on Amazon Elastic Compute Cloud (Amazon EC2) with the database using Amazon R elational Database Service (Amazon RDS) for data storage Within minutes you can create Amazon EC2 instances which are virtual machines over which you have complete control A variety of different operating systems and distributions are available as Amazon Machine Images (AMIs) An AMI is a template This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 4 that contains a software configuration (operating system application server and applications) that you can run on Amazon EC2 After you’ve properly installed and configured the project management tool AWS recommend s you create an AMI from this setup so you can quickly recreate that instance without having to reinstall and reconfigur e the software Project management tools have the same needs as source code repositories: they need to be available and data has to be durably stored While you can mitigate the loss of code analysis reports by recreating them against the desired repository version losing project or issue tracking infor mation might have more serious consequences You can address the availability of the project management web application service by using AMIs to create replacement Amazon EC2 instances in case of failure You can store the application’s data separately fr om the host system to simplify maintenance or migration operati ons Amazon Elastic Block 
Store (Amazon EBS) provides off instance storage volumes that persist independently from the life of an instance After you create a volume you can attach it to a running Amazon EC2 instance As such an Amazon EBS volume is provisioned and attached to the instance to store the data of the version control repository You achieve durability by taking point intime snapshots of the EBS volume containing the repository data EBS snapshots are stored in Amazon Simple Storage Service (Amazon S3) a highly durable and scalable data store Objects in Amazon S3 are redundantly stored on mul tiple devices across multiple facilities in an AWS Region You can automate the creation and management of snapshots using Amazon Data Lifecycle Manager These snapshots can be used as the starting point for new Amazon EBS volumes and can protect your data for long term durability In case of a failure you can recreate the application data volume from the snapshots and recreate the application instance from an AMI To facilitate proper durability and restoration Amazon Relational Database Service (Amazon RDS) offers an easy way to set up operate and scale a relational database in AWS It provides cost efficient and resizable capacity while managing time consuming database administration tasks freeing the project team from this responsibility Amazon RDS Database instances (DB instances) can be provisioned in a matter of minutes Optionally Amazon RDS will ensure that the relational database software stays up to date with the latest patches The automated backup feature of Amazon RDS enables point intime recovery for DB instances allowing restoration of a DB instance to any point in time within the backup retention period This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 5 An Elastic IP address provi des a static endpoint to an Amazon EC2 instance and can be used in combination with DNS (for example behind a DNS CNAME ) This helps teams to access their hosted services such as the project management tool in a consistent way even if infrastructure is changed underneath ; for example when scaling up or down or when a replacement instance is provisioned An Elastic IP Address provides a static endpoint to an Amazon EC2 instance Note : For even quicker and easier deployment many project management tools are available from the AWS Marketplace or as Amazon Machine Images As your development team grows or adds more tools to the project management instance you might require extra capacity for both the web application instance and the DB instance In AWS scaling instances vertically is an easy and straightforward operation You simply stop the EC2 instance change the instance type and start the instance Alternatively you can create a new web application server from the AMI on a more powerful Amazon EC2 instan ce type and replace the previous server You can use horizontal scaling by using Elastic Load Balancing adding more instances to the system by using AWS Auto Scaling In this case as you have more than one node you can use Elastic Load Balancing to distribute the load across all application nodes Amazon RDS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 6 DB instances can scale compute and memory resources with a few clicks on 
the AWS Management Console Use Elastic Load Balancing to distribute the load across all application nodes When you want to quickly set up a software development project on AWS and don’t want to configure custom p roject management tools on EC2 you can use AWS CodeStar AWS Code Star comes with a unified project dashboard and integration with Atlassian JIRA software a third party issue tracking and project management tool With the AWS CodeStar project dashboard you can easily t rack your entire software development process from a backlog work item to production code deployment Ondemand development environments Developers primarily use their local laptops or desktops to run their development environments This is typically wher e the integrated development environment (IDE) is installed where unit tests are run where source code is checked in and so on However there are a few cases where on demand development environments hosted in AWS are helpful AWS Cloud9 is a cloud based IDE that enables you to write run and debug your code with just a browser It includes a code editor debugger and terminal AWS Cloud9 comes prepackaged with essential tools for popular programming langu ages including JavaScript Python PHP Ruby Go C++ and more so you don’t need to install files or configure your development machine to start new projects Because your AWS Cloud9 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 7 IDE is cloud based you can work on your projects from your office ho me or anywhere using an internet connected machine With AWS Cloud9 you can quickly share your development environment with your team enabling you to pair program and track each other's inputs in real time Some development projects may use specialized sets of tools that would be cumbersome or resource intensive to install and maintain these on local machines especially if the tools are used infrequently For such cases you can prepare and configure development environments with required tools (develop ment tools source control unit test suites IDEs and so on ) and then bundle them as AMI s You can easily start the right environment and have it up and running in minimal time and with minimal effort When you no longer need the environment you can s hut it down to free up resources This can also be helpful if you need to switch context in the middle of having code checked out and work in progress Instead of managing branches or dealing with partial check ins you can spin up a new temporary environm ent On AWS you have access to a variety of different instance types some with very specific hardware configurations If you are developing specifically for a given configuration it may be helpful to have a development environment on the same platform where the system is going to run Amazon WorkSpaces enables you to provision virtual cloud based Microsoft Windows or Amazon Linux desktops for your users to run IDE s using your favorite applications such as Visual Studio IntelliJ Eclipse AWS CLI AWS SDK tools Visual Studio Code Eclipse Atom and many more The concept of hosted desktops is not limited to development environments ; it can apply to other roles or functions as well For more complex worki ng environments AWS CloudFormation makes it easy to set up collections of AWS resources This topic is discussed further in the Testing section of this document In many cases such environments are 
set up within the Amazon Virtual Private Cloud (Amazon VPC) which enables you to extend your on premise s private network to the cloud You can then provision the development en vironments as if they were on the local network but instead they are running in AWS This can be helpful if such environments require any onpremise s resources such as Lightweight Directory Access Protocol ( LDAP) The following diagram shows a deployment where development environments are running on Amazon EC2 instances within an Amazon VPC Those instances are remotely accessed from an enterprise network through a secure VPN connection This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 8 Development environments running on Amazon EC2 instances within a n Amazon VPC Stopping vs ending Amazon EC2 instances Whenever development environments are not used ; for example during the hours when you are not working or when a specific project is on hold you can easily shut them down to save resources and cost There are two possibilities: • Stopping the instances which is roughly equivalent to hibernating the o perating system • Ending the instances which is roughly equivalent to discarding the operating system When you stop an instance (possible for Amazon EBS−backed AMIs) the compute resources are released and no further hourly charges for the instance apply T he Amazon EBS volume stores the state and next time you start the instance it will have the working data as it did before you stopped it Note : any data stored on ephemeral drives will not be available after a stop/start sequence When you end an insta nce the root device and any other devices attached during the instance launch are automatically deleted (unless the DeleteOnTermination flag of a volume is set to “ false ”) meaning that data may be lost if there is no backup or snapshot available for the deleted volumes A n ended instance doesn’t exist anymore This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 9 and must be recreated from an AMI if needed You would typically end the instance of a development environment if all work has been completed and/or the specific environment will not be used anymore If you use AWS Cloud9 IDE the EC2 instance that AWS Cloud9 connects to by default stops 30 minutes after you close the IDE and restart s automatically when you open the IDE As a result you typically only i ncur EC2 instance charges for when you are actively working If you chose to run your development environments on EC2 instances you can use AWS Instance Scheduler to auto matically stop your instances during weekends or non working schedules This can help reduce the instance utilization and overall spend Integrating with AWS APIs and IDE enhancements With AWS you can now code against and control IT infrastructure either if the target platform of your project is AWS or if the project is about orchestrating resources in AWS For such cases you can use the various AWS SDKs to easily integrate their applications with AWS APIs taking the complexity out of coding directly against a web service interface and dealing with details around authentication retries error handling and so on The AWS SDK tools are available for multiple languages: C++ Go JavaScript Nodejs Python 
Java Net PHP Ruby and for mobile platforms Android and iOS AWS also offers IDE tools that make it easier for you to interact with AWS from within your IDEs such as: • AWS Toolkit for Visual Studio • AWS Toolkit for VS Code • AWS Toolkit for Eclipse • AWS Toolkit for IntelliJ IDEA • AWS Toolkit for PyCharm • AWS Toolkit for Azure DevOps • AWS Toolkit for Rider • AWS Toolkit for WebStorm This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 10 For developing and building Serverless application s AWS offers the Serverless Application Model (AWS SAM) open source framework which can be used with the AWS toolkits mentioned previously Build phase The process of building an application involves many steps includi ng compilation resource generation and packaging For large applications each step involves multiple dependencies such as building internal libraries using helper applications generating resources in different formats generating the documentation and so on Some projects might require building the deliverables for multiple CPU architectures platforms or operating systems The complete build process can take many hours which has a direct impact on the agility of the software development team This impact is even stronger on teams adopting approaches like continuous integration where every commit to the source repository triggers an automated build followed by test suites Schedu le builds To mitigate this problem teams working on projects with lengthy build times often adopt the “nightly build ” (or neutral build) approach or break the project into smaller sub projects (or a combination of both) Doing nightly builds involves a build machine checking out the latest source code from the repository and building the project deliverables overnight Development teams may not build as many versions as they would l ike and the build should be completed in time for testing to begin the next day Breaking down a project into smaller more manageable parts might be a solution if each sub project builds faster independently However an integration step combining all the different sub projects is still often necessary for the team to keep an eye on the overall project and to ensure the different parts still work well together Ondemand builds A more practical solution is to use more computational power for the build pr ocess On traditional environments where the build server runs on hardware acquired by the organization this option might not be viable due to economic constraints or provisioning delays A build server running on an Amazon EC2 instance can be scaled up vertically in a matter of minutes reducing build time by providing more CPU or memory capacity when needed This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 11 For teams with multiple builds triggered within the same day a single Amazon EC2 instance might not be able to produce the builds quickly enough A solution would be to take advantage of the on demand and pay asyougo nature of AWS CodeBuild to run multiple builds in parallel Every time a new build is requested by the development team or triggered b y a new commit to the source code repository AWS CodeBuild creates a temporary compute container of the class defined in the build project and 
immediately processes each build as submitted You can run separate builds concurrently without waiting in a queue This also enables you to schedule automated builds at a specific time window If you use a build tool on EC2 instances running as a fleet of worker nodes the task distribution to the worker nodes can be done using a queue holding all the builds to process Worker nodes pick the next build to process as they are free To implement this system Amazon Simple Queue Service (Amazon SQS) offers a reliable highly scalable hosted queue service Amazon SQS makes it easy to create an automated build workflow working in close conjunction with Amazon EC2 an d other AWS infrastructure services In this setup developers commit code to the source code repository which in turn pushes a build message into an Amazon SQS queue The worker nodes poll this queue to pull a message and run the build locally according to the parameters contained in the message (for example the branch or source version to use) You can further enhance this setup by dynamically adjusting the pool of worker nodes consuming the queue Auto Scaling is a service that makes it easy to scale the number of worker nodes up or down automatically according to predefined conditions With Auto Scaling worker nodes ’ capacity can increase seamlessly during demand spikes to maintain quick build gene ration and decrease automatically during demand lulls to minimize costs You can define scaling conditions using Amazon CloudWatch a monitoring service for AWS Cloud resources For example Amazon CloudWatch can monitor the number of messages in the build queue and notify Auto Scaling that more or less capacity is needed depending on the number of messages in the queue The following diagram summarizes this scena rio: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 12 Amazon CloudWatch can monitor the number of messages in the build queue and notify Auto Scaling that more or less capacity is needed Storing build artifacts Every time you produce a build you need to store the output somewhere Amazon S3 is an appropriate service for this Initially the amount of data to be stored for a given project is small but it grows over time as you produce more builds Here the pay as yougo and capacity characteristics of S3 are particularly attractive When you no longer need the build output you can delete it or use S3’s lifecycle policies to delete or archive the objects to Amazon S3 Glacier storage class AWS CodeBuild by default uses S3 bucket s to store the build outputs To distribute the build output (for example to be deployed in test staging or production or to be downloa ded to clients ) AWS offers several options You can distribute build output packages directly out of S3 by configuring bucket policies and/or ACLs to restrict the distribution You can also share the output object using an S3 presigned URL Another option is to use Amazon CloudFront a web service for content delivery which makes it easy to distribute pack ages to end users with low latency and high data transfer speeds thereby improving the end user experience This can be helpful for example when a large number of clients are downloading install packages or updates Amazon CloudFront offers several options; for example to authorize and/or restrict access though a full discussion of this is out of scope for this document This paper has been 
archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 13 Testing phase Tests are a critical part of software development They ensure software quality but more importantly they help find issues early in the development phase lowering the cost of fixing them later during the project Tests come in many forms: unit tests performance tests user acceptance tests integration tests and so on and all require IT resources to run Test teams face the same challenges as development teams: the need for enough IT resources but only during the limited duration of the test runs Test environments change frequently and are different from project to project and may require different IT infrastructure or have varying capacity needs The AWS on demand and pay asyougo value propositions are well adapted to those constraints AWS enables your test teams to eliminate both the need for costly hardware and the administrative pain that goes along with owning and operating it AWS also offers significant operational advantages for testers Test environments can be set u p in minutes rather than weeks or months and a variety of resources including different instance types are available to run tests whenever they are needed Automating test environments There are many software tools and frameworks available for automatin g the process of running tests but proper infrastructure must be in place This involves provisioning infrastructure resources initializing the resources with a sample dataset deploying the software to be tested orchestrating the test runs and collect ing results The challenge is not only to have enough resources to deploy the complete application with all the different servers or services it might require but to be able to initialize the test environment with the right software and the right data ove r and over Test environments should be identical between test runs; otherwise it is more difficult to compare results Another important benefit of running tests on AWS is the ability to automate them in various ways You can create and manage test environments programmatically using the AWS APIs CLI tools or AWS SDKs Tasks that require human intervention in classic env ironments (allocating a new server allocating and attaching storage allocating a database and so on ) can be fully automated on AWS using AWS CodePipeline and AWS Cloud Formation For testers designing tests suites on AWS means being able to automate a test down to the operation of the components which are traditionally static hardware devices Automation makes test teams more efficient by removin g the effort of creating and initializing test environments and less error prone by limiting human intervention during This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 14 the creation of those environments An automated test environment can be linked to the build process following continuous integration principles Every time a successful build is produced a test environment can be provisioned and automated tests run on it The following sections describe how to automatically provision Amazon EC2 instances databases and complete environments Provisioning instances You can easily provision Amazon EC2 instances from AMIs An AMI encapsulates the operating system and any other 
software or configuration files pre installed on the instance When you launch the instance all the applications are already loaded from the AMI and ready to run For information about creating AMIs refer to the Amazon EC2 documentation The challenge with AMI based deployments is that each time you need to upgrade software you have to create a new AMI Although the process of creating a new AMI (and deleting an old one) can be completely automated using EC2 Image Builder you must define a strategy for managing and maintaining multiple versions of AMIs An alternative approach is to include only components into the AMI that don’t change often (operating sys tem language platform and low level libraries application server and so on ) More volatile components like the application under development can be fetched and deployed to the instance at runtime For more details on how to create self bootstrapped in stances see Bootstrapping Provisioning databases Test databases can be efficiently implemented as Amazon RDS database instances Your test teams can instantiate a fully op erational database easily and load a test dataset from a snapshot To create this test dataset you first provision an Amazon RDS instance After injecting the dataset you create a snapshot of the instance From that time every time you need a test database for a test environment you can create one as an Amazon RDS instance from that initial snapshot See Restoring from a DB snapshot Each Amazon RDS instance started from the same snapshot will contain the same dataset which helps ensure that your tests are consistent Provisioning complete environments While you can create complex test environments containing multiple instances using the AWS APIs command line tools or the AWS Management Console AWS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 15 CloudFormation makes it even easier to create a collection of related AWS resources and provision them in an orderly and predictable fashion AWS CloudFormation uses templates to create and delete a collection of resources together as a single unit (a stack ) A complete test environment ru nning on AWS can be described in a template which is a text file in JSON or YAML format Because templates are just text files you can edit and manage them in the same source code repository you use for your software development project That way the te mplate will mirror the status of the project and test environments matching older source versions can be easily provisioned This is particularly useful when dealing with regression bugs In just a few steps you can provision the full test environment enabling developers and testers to simulate a bug detected in older versions of the software AWS CloudFormation templates also support parameters that can be used to specify a specific software version to be loaded the Amazon EC2 instance sizes for the t est environment the dataset to be used for the databases and so on Provisioning cloud applications can be a challenging process that requires you to perform manual actions write custom scripts maintain templates or learn domain specific languages Yo u can now use the AWS Cloud Development Kit (AWS CDK) an open source software development framework for defining cloud infrastructure ascode with modern programming languages and deploying it through AWS Cloud Formation AWS CDK uses familiar programming 
languages such as TypeScript JavaScript Python Java C# / Net and Go for modeling your applications For more information about how to create and automate deployments on AWS using AWS CloudFormation see AWS CloudFormation Resources Load testing Functionality tests running in controlled environments are valuable tools to ensure software quality but they give lit tle information on how an application or a complete deployment will perform under heavy load For example some websites are specifically created to provide a service for a limited time: ticket sales for sports events special sales limited edition launch es and so on Such websites must be developed and architected to perform efficiently during peak usage periods In some cases the project requirements clearly state the minimum performance metrics to be met under heavy load conditions ( for example search results must be returned in under 100 milliseconds ( ms) for up to 10000 concurrent requests) and load tests are exercised to ensure that the system can sustain the load within those limits This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 16 For other cases it is not possible or practical to spe cify the load a system should sustain In such cases load tests are performed to measure the behavior under heavy load conditions The objective is to gradually increase the load of a system to determine the point where the performance degrades in such a way that the system cannot operate anymore Load tests simulate heavy inputs that exercise and stress a system Depending on the project inputs can be a large number of concurrent incoming requests a huge dataset to process and so on One of the main d ifficulties in load testing is generat ing large enough amounts of inputs to push the tested system to its limits Typically you need large amounts of IT resources to deploy the system to test and to generate the test input which requires further infrast ructure Because load tests generally don’t run for more than a couple of hours the AWS pay asyougo model nicely fits this use case You can also automate load tests using the techniques described in the previous section enabling your testers to exerci se them more frequently to ensure that each major change to the project doesn’t adversely affect system performance and efficiency Conversely by launching automated load tests you can discover whether a new algorithm caching layer or architecture desi gn is more efficient and benefits the project Note : For quick and easy setup testing tools and solutions are also available from the AWS Marketplace In Serverless architectures using AWS services such as AWS Lambda Amazon API Gateway AWS Step Function s and so on load testing can help identify custom code in Lambda functions that may not run efficiently as traffic scales up It also helps to determine an optimum timeout value by analyzing your functions ’ running duration to identify problems with a dependency service One of the most popular tools to perform this task is Artillery Community Edition which is an open source tool for testing serverless APIs You can also use Distributed Load Testing on AWS to automate application testing understand how it performs at scale and fix bottlenecks befor e releasing your application Network load testing Testing an application or service for network load involves sending large numbers of requests to the system being 
tested There are many software solutions available to simulate request scenarios but us ing multiple Amazon EC2 instances may be necessary to generate enough traffic Amazon EC2 instances are available on demand and are charged by the hour which makes them well suited for network load testing This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 17 scenarios Keep in mind the characteristics of di fferent instance types Generally larger instance types provide more input / output ( I/O) network capacity the primary resource consumed during network load tests With AWS test teams can also perform network load testing on applications that run outsid e of AWS Having load test agents dispersed in different Regions of AWS enables testing from different geographies ; for example to get a better understanding of the end user experience In that scenario it makes sense to collect log information from the instances that simulate the load Those logs contain important information such as response times from the tested system By running the load agents from different Regions the response time of the tested application can be measured for different geographi es This can help you understand the worldwide user experience Because you can end loadtesting Amazon EC2 instances right after the test you should transfer log data to S3 for storage and later analysis When you plan to run high volume network load te sts directly from your EC2 instances to other EC2 instances follow the Amazon EC2 Testing Policy Load testing for AWS Load testing an application running on AWS is useful to make sure that elasticity features are correctly implemented Testing a system for network load is important to make sure that for web front ends Auto Scaling and Elast ic Load Balancing configurations are correct Auto Scaling offers many parameters and can use multiple conditions defined with Amazon CloudWatch to scale the n umber of front end instances up or down These parameters and conditions influence how fast an Auto Scaling group will add or remove instances An Amazon EC2 instance’s post provisioning time might also affect an application’s ability to scale up quickly enough After initialization of the operating system running on Amazon EC2 instances additional services are initialized such as web servers application servers memory caches middleware services and so on The initialization time of these different s ervices affects the scale up delay especially when additional software packages need to be pulled down from a repository Load testing provide s valuable metrics on how fast additional capacity can be added into a particular system Auto Scaling is not onl y used for front end systems You might also use it for scaling internal groups of instances such as consumers polling an Amazon SQS queue or workers and deciders participating in an Amazon Simple Workflow Service (Amazon This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 18 SWF) workflow In both cases load testing the system can help ensure you’ve correctly implemented and configured Auto Sca ling groups or other automated scaling techniques to make your final application as costeffective and scalable as possible Cost optimization with Spot instances Load testing can 
require many instances especially when exercising systems that are designed to support a high amount of load While you can provision Amazon EC2 instances on demand and discard them when the test is completed while only paying by the hour there is an even more cost effective way to perform those tests using Amazon EC2 Spot Instances Spot Instances enable customers to bid for unused Amazon EC2 capacity Instances are charged th e Spot Price set by Amazon EC2 which fluctuates depending on the supply of and demand for Spot Instance capacity To use Spot Instances place a Spot Instance request specifying the instance type the desired Availability Zone the number of Spot Instances to run and the maximum price to pay per instance hour The Spot Price history for the past 90 days is available via the Amazon EC2 API or the AWS Management Console If the maximum price bid exceeds the current Spot Price the request is fulfilled and instances are started The instances run until either they are ended or the Spot Price increases above the maximum price whichever is sooner See Testimonials and Case Studies to read about other customers ’ case studies and testimonials on EC2 Spot instances User acceptance testing The objective of user acceptance testing is to present the current release to a testing team representing the final user base to determine if the project requirements and specification are met When users can test the software earlier they can spot conceptual weaknesses that have been introduced during the analysis phase or clarify gray areas in the project requirements By testing the software more frequently users can identify functional implementation errors and user interface or application flow misconceptions earlier lowering the cost and impact of correcting them Flaws detected by user acceptance testing may be very difficult to detect by other means The more often you conduct acceptance tests the better for the project because end users provide valuable feedback to development teams as requirements evolve This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 19 However like any other test practice acceptance tests req uire resources to run the environment where the application to be tested will be deployed As described in previous sections AWS provides on demand capacity as needed in a cost effective way which is also appropriate for acceptance testing Using some of the techniques described previously AWS enables complete automation of the process of provisioning new test environments and of disposing environments no longer needed Test environments can be provided for certain times only or continuously from the la test source code version or for every major release By deploying the acceptance test environment within Amazon VPC internal users can transparently access the application to be tested Such an application can also be integrated with other production ser vices inside the company such as LDAP email servers and so on offering a test environment to the end users that is even closer to the real and final production environment Side byside testing Sidebyside testing is a method used to compare a control system to a test system The goal is to assess whether changes applied to the test system improve a desired metric compared to the control system You can use this technique to optimize the performance of complex systems where a multitude of 
different par ameters can potentially affect the overall efficiency Knowing which parameter will have the desired effect is not always obvious especially when multiple components are used together and influence the performance of each other You can also use this tec hnique when introducing important changes to a project such as new algorithms caches different database engines or third party software In such cases the objective is to ensure your changes positively affect the global performance of the system After you’ve deployed the test and control systems send the same input to both using loadtesting techniques or simple test inputs Finally collect performance metrics and logs from both systems and compare them to determine if the changes you introduced in the test system present an improvement over the control system By provisioning complete test environments on demand you can perform side byside tests efficiently While you can do side byside testing without automated environment provisioning using t he automation techniques described above makes it easier to perform those tests whenever needed taking advantage of the pay asyougo model of AWS In contrast with traditional hardware it may not be possible to run multiple test environments for multip le projects simultaneously This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Development and Test on Amazon Web Services 20 Sidebyside tests are also valuable from a cost optimization point of view By comparing two environments in different AWS accounts you can easily come up with cost / performance ratios to compare both environments By continuously testing architecture changes for cost performance you can optimize your architectures for efficiency Fault tolerance testing When AWS is the target production environment for the application you’ve developed some specific t est practices provide insights into how the system will handle corner cases such as component failures AWS offers many options for building fault tolerant systems Some services are inherently fault tolerant for example Amazon S3 Amazon DynamoDB Amaz on SimpleDB Amazon SQS Amazon Route 53 Amazon CloudFront and so on Other services such as Amazon EC2 Amazon EBS and Amazon RDS provide features that help architect fault tolerant and highly available systems For example Amazon RDS offers the Multi Availability Zone option that enhances database availability by automatically provisioning and managing a replica in a different Availability Zone For more in formation on how to build fault tolerant architectures running on AWS read Building Fault Tolerant Applications on AWS and see the resources available in the AWS Architecture Center Many AWS customers run mission critical applications on AWS and they need to make sure their architecture is fault tolerant As a result an important practice for all sys tems is to test their fault tolerance capability While a test scenario exercises the system (using similar techniques to load testing) some components are taken down on purpose to check if the system is able to recover from such simulated failure You ca n use the AWS Management Console or the CLI to interact with the test environment For example you might end Amazon EC2 instances and the n test whether an Auto Scaling group is working as expected and a replacement instance automatically provisioned Yo u can also automate this kind of test by integrating AWS Fault Injection 
Simulator with your CI/CD pipeline. It is a best practice to use automated tools that, for example, occasionally and randomly disrupt Amazon EC2 instances. With Fault Injection Simulator, you can stress an application by creating disruptive events, such as a sudden increase in CPU or memory consumption, to observe how the system responds and implement improvements.
Resource management
With AWS, your development and test teams can have their own resources, scaled according to their own needs. Provisioning complex environments or platforms composed of multiple resources can be done using AWS CloudFormation stacks or some of the other automation techniques described in this whitepaper. In large organizations comprising multiple teams, it is a good practice to create an internal role or service responsible for centralizing and managing IT resources running on AWS. This role typically consists of:
• Promoting the internal development and test practices described here
• Developing and maintaining template AMIs and template AWS CloudFormation stacks with the different tools and platforms used in your organization
• Collecting resource requests from project teams and provisioning resources on AWS according to your organization's policies, including network configuration (such as Amazon VPC) and security configurations (such as security groups and IAM credentials)
• Monitoring resource usage and charges using AWS Cost Explorer and allocating these to team budgets
You can use AWS Service Catalog to achieve the tasks above, or you might want to develop your own internal provisioning and management portal for tighter integration with internal processes. You can do this by using one of the AWS SDKs, which allow programmatic access to resources running on AWS.
Cost allocation and multiple AWS accounts
Some customers have found it helpful to create specific accounts for development and test activities. This can be important when your production environment also runs on AWS and you need to separate teams and responsibilities. Separate accounts are isolated from each other by default, so that, for example, development and test users do not interfere with production resources. To enable collaboration, AWS offers a number of features that enable sharing of resources across accounts, such as Amazon S3 objects, AMIs, and Amazon EBS snapshots.
To separate out and allocate the cost for the various activities and phases of the development and test cycle, AWS offers various options. One option is to use separate accounts (for example, for development, testing, staging, and production), and each account will have its own bill. You can also consolidate multiple accounts using consolidated billing for AWS Organizations to simplify costs and take advantage of quantity discounts with a single bill. Another option is to make use of the monthly cost allocation report, which enables you to organize and track your AWS costs by using resource tagging. In the context of development and test, tags can represent the various stages or teams of the development cycle, though you are free to choose the dimensions you find most helpful.
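For example, you can apply such tags automatically when development and test resources are provisioned. The following Python (boto3) sketch shows one possible approach; the tag keys, instance ID, and Region are placeholders, and tags must also be activated as cost allocation tags in the Billing console before they appear in cost reports.
import boto3

# Apply cost-allocation tags to development and test instances.
# The tag keys (Project, Stage, Team), the Region, and the instance ID are
# illustrative placeholders; choose dimensions that match your reporting needs.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # replace with your instance IDs
    Tags=[
        {"Key": "Project", "Value": "payments-api"},
        {"Key": "Stage", "Value": "test"},
        {"Key": "Team", "Value": "qa"},
    ],
)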
Conclusion

Development and test practices require certain resources at certain times in the development cycle. In traditional environments, those resources might not be available at all, or not in the necessary timeframe. When those resources are available, they provide a fixed amount of capacity that is either insufficient (especially in variable activities like testing) or wasted (but paid for) when the resources are not used.

AWS offers a cost-effective alternative to traditional development and test infrastructures. Instead of waiting weeks or even months for hardware, you can instantly provision resources, scale up as the workload grows, and release resources when they are no longer needed. For more information, see the Auto Scaling documentation. Whether development and test environments consist of a few instances or hundreds, and whether they are needed for a few hours or 24/7, you still pay only for what you use. AWS is a programming language and operating system agnostic platform, and you can choose the development platform or programming model used in your business. This flexibility enables you to focus on your project, not on operating and maintaining your infrastructure.

AWS also enables possibilities that are difficult to realize with traditional hardware. You can fully automate resources on AWS so that environments can be provisioned and decommissioned without human intervention. You can start development environments on demand; kick off builds when needed, unconstrained by the availability of resources; provision test resources; and automatically orchestrate entire test runs or campaigns.

AWS offers you the ability to experiment and iterate with a rapidly changeable infrastructure. Your project teams are free to use inexpensive capacity to perform any kind of test or to experiment with new ideas, with no upfront expenses or long-term commitments, making AWS a platform of choice for development and test.

Contributors

The following individuals and organizations contributed to this document:

• Rakesh Singh, Sr. Technical Account Manager, AMER World Wide Public Sector
• Carlos Conde
• Attila Narin

Further reading

• Developer Tools on AWS
• How AWS Pricing Works (whitepaper)
• AWS Architecture Center
• AWS Technical Whitepapers

Document revisions

• June 29, 2021: Updates
• November 2, 2012: First publication
|
General
|
consultant
|
Best Practices
|
Digital_Transformation_Checklist_Using_Technology_to_Break_Down_Innovation_Barriers_in_Government
|
ArchivedDigital Transformation Checklist Using Technology to Break Down Innovation Barriers in Government December 201 7 This paper has been archived For the latest technical guidan ce on Public Sector Digital Transformation refer to https://awsamazoncom/government education/digitaltransformation/Archived © 201 7 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AW S agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Introduction 1 Transforming Vision 1 Checklist 1 Shifting Culture 2 Checklist 2 Change the Cost Model 3 Go Cloud Native 3 Track Progress 4 Data Driven Civic Innovation 4 Create the Environment for Digital Transformation 5 Deliver an Exceptional User Experience 5 Collaborate for Improved Worker Productivity 6 Expedite New Service Delivery 8 Global Reach 8 Key Takeaway 9 Contributors 9 Further Reading 9 Archived Abstract Innovation requires many ingredients: a great idea creativity persistence the right data and technology Governments around the world are taking advantage of the cloud to reduce cost and transform the way they deliver on the ir mission The exp ectations of an increasingly digital citizenry are high yet all levels of government face budgetary and human resource constraints Cloud computing (on demand delivery of IT resources via the Internet with pay asyougo pricing) can help government organi zations increase innovation agility and resiliency all while reducing costs This whitepaper provides guidelines that governments can use to break down innovation barriers and achieve a digital transformation that helps them engage and serve citizens ArchivedAmazon Web Services – Digital Transformation Checklist Page 1 Introduction Digital transformation is more than simply digitizing data It requires evolving from rigid legacy platforms to an IT environment that is designed to adapt to the changing needs of an organization It calls for innovation in addition to changes in policy procurement talent and culture to take full advantage of new opportunities that come with new breakthrough technologies Governments around the world are embracing the cloud to deliver services faster to citizens and to spur economic development At the same time this transformation can help them better cope with budgetary and human resource constraints This whitepaper offers a checklist of strategies and tactics governments worldwide are using to break down innovation barriers and tackle mission critical operations with the cloud Transforming Vision True digital transformation employs an innovative approach —one that combines technology and organizational processes for developing and delivering new services This requires a clear vision of where to start Active participation in the definition of a cloud strategy make s it easier to implement new 
ideas on an ongoing basis Establishing a new mindset is also critical in the digital transformation process Updating technologies is not enough To improve citizen engagement and staff productivity and accelerate service delivery this change is essential across all levels of the organization It’s about rethinkin g the approach and how new technology can help it materialize An agile development environment cultural shift and the right technology model can help governments further their modernization efforts Checklist Communicate a vision for what success looks like Define a clear governance strategy including the framework for achieving goals and the decision makers responsible for creating them ArchivedAmazon Web Services – Digital Transformation Checklist Page 2 Build a cross functional team to execute activities that support the strategy and goals Identify technology partners with the expertise to help meet these goals Move to a flexible IT system that supports rapid change Shifting Culture The idea of change can be daunting To successfully navigate a digital transformation it is imperative to reshape the culture accordingly This starts with shifting the organizational structure from traditional hierarchi es and silos to smaller teams that are empowe red to make decisions Collaboration between development staff IT and other strategic unit s eliminate s the “throw it over the wall” mentality and can ultimately translate to improved public service Note To keep up with the changes in technology it’s important to build an IT workforce that understands the latest trends and help them stay ahead of inherent learning curves Innovation works best with a bottom up approach where incentives are structured to recognize teams rather than individuals And by rewarding experiment ation you can remov e barriers and eliminate the fear of failure To drive c ultural change do the following: Checklist Reorganize staff into smaller teams to empower decision making Train staff on new policies and best practices Give permission to deviate from traditional rules Build a cloud development environment that exists as a place to play and build confidence with new skills Shift to a shortterm planning mindset and continuously iterate on the plan (agile project management) Consider hiring consultants to help with initial projects ArchivedAmazon Web Services – Digital Transformation Checklist Page 3 Change the Cost Model Small budgets can drive innovation because teams will take creative steps to build new processes to address problems C loud services can positively impact cost with the ability to modernize infrastructures without substantial capital investments Circumventing the long up front procurement process makes it possible to undertake more projects through immediate access to compute resources In addition cloud computing provides the option to spin up and spin down instances to accommodat e seasonal services and dev/test cycle s while only paying for the compute resources that you use Approach the cost model incrementally Start with cost containment shift to cost avoidance and then focus on cost reduction With a pay peruse model it’s possible to return long term budget bac k to the organization and reallocate funds to new projects Go Cloud Native While some organizations prefer to initially move individual licenses and projects to the cloud others opt for a cloud native approach Developing and running applications in this manner takes full advantage of the cloud computing model And by using DevOps processes 
that promote collaboration across small teams it’s possible to accelerate the delivery of new services with greater reliability DevOps tools provide sustainable processes through infrastructure automation continuous integration and delivery monitoring and auto remediation With a DevOps model it is possible to eliminate disparate development stovepipes and drive efficiencies Checklist Adopt the philosophy of a cohesive unit across developer s and operations and quality assurance and security functions Encourage an ownership mindset throughout the entire development and infrastructure lifecycle irrespective of roles Provide your team with standardized DevOps tools and training ArchivedAmazon Web Services – Digital Transformation Checklist Page 4 Build a unified code repository Add b uiltin security Perform frequent but small updates to remain agile and make deployment less risky Creat e an automated solution (drives consistency regardless of workflow or service ) By adopting a DevOps model organization s have more flexibility to experiment and develop solutions to long standing challenges creating a culture that enable s future innovation Track Progress During the digital transformation journey it is essential to establish metrics to track progress With early indicators in place it’s possible to take immediate action if something goes wrong or needs to be corrected Checklist Create a data driven metrics system Evaluate improvements and progress toward goals Assess whether the organization is p lanning and deliver ing consistently on goals within specified timeframes Data Driven Civic Innovation The AWS engine of innovatio n has long been embraced by the startup community They are now joined by governments who seek to power innovative solutions for large societal problems As government data becomes more widely available more people can use AWS comput e and big data analysis services to tackle problems that were until recently exclusively the domain of government projects Scientists developers and curious citizens are more equipped than ever to find forward thinking and new solutions to some of the world’s biggest challenges These opportunities for innovation are improving lives and creating opportunities for a new class of civic tech entrepreneur s ArchivedAmazon Web Services – Digital Transformation Checklist Page 5 Create the Environment for Digital Transformation Drawing from Amazon’s own experience as an innovator AWS helps guide organizations toward techniques and tools to create a forward leaning digital enterprise But c loud computing is only half of the answer —the other half comes from an organization’s commitment to making a change So what else should governments be thinki ng about on the road to digital transformation? 
The following sections provide a framework for leveraging AWS in your organization Deliver an Exceptional User Experience High user satisfaction results from ready access to information when and wherever needed However an agency’s user experience sho uld not just focus on citizens —it has to start with its own staff Selfservice web applications enable your users to find information without human intervention regardless of time zone or operating hours For example: • Citizens can conduct business on the ir time remov e dependence on service centers with long wait ing periods to reach repr esentatives • Employees gain access to convenient on demand information from any location which makes it easy to share data with coworkers • Governments can leverage expertise from private companies and other government s to accelerate innovation with new services • Organizations can collect analyze and predict trends based on how web services are used With a flexible system it’s no longer a hassle to modify services to better meet the demands of users How AWS D elivers Governments are leading the way in driving innovation for citizens The cloud offers not only cost savings and agility but also the opportunity to develop breakthroughs in citizen en gagement ArchivedAmazon Web Services – Digital Transformation Checklist Page 6 Whether through open data initiatives public safety modernization education reform citizen service improvements or infrastructure programs more government organizations are increasingly turning to AWS to provide the cost effective scalable secure and flexible infrastructure necessary to transform With a focus on delivering value from taxpayer dollars all levels of government look to manage costs while maintaining the performance and capacity citizens require In a cloud computing environment new IT resources are just a click away This reduc es the time it takes to make those resources available to developers from weeks to just minutes Trimming cost and time for experimentation and develop ment results in a dramatic increase in agility for the organization With cloud computing it’s not necessary to make large upfront investments in hardware or in time spent managing it Instead it’s possible to provision exactly the right type and size of computing resou rces ne cessary to test new ideas or operate the IT department You can access as many resources as need ed almost instantly and only pay for what gets use d Collaborate for Improved Worker Productivity Agencies can quickly achieve business goals by lever aging experience across multiple organizations By facilitating real time communication to share information between teams efficiency increases In addition the sharing of information fosters a culture of trust and innovative thinking And with improved access to information workers are able to make better informed decisions to achieve business results Checklist Pool limited resources to reduce cost and redundant efforts Evaluate whether incremental change s produce h igher quality results Be specific about how to improve communication How AWS Delivers AWS provides a host of services that can integrate into your existing processes and help transform the workplace into a collaborative environment ArchivedAmazon Web Services – Digital Transformation Checklist Page 7 Ama zon WorkDocs Ama zon WorkDocs is a fully managed secure enterprise storage and sharing service offering strong administrative controls and feedback capabilities Users can comment on files share them with others and seamlessly 
upload new versions Users have access from any place or device including PCs Macs tablets and mobile devices IT administrators can integrate with existing corporate directories enjoy flexible sharing policies and control where data is store d Identity and Access Management AWS Identity and Access Management (IAM) enables secure control led access to AWS services and resources for users IAM creates and manage s AWS users and groups and provides permission s to give them access to AWS resources DevOps and AW S Rapidly and reliably build and deliver citizen services using AWS and DevOps practices These services simplify provisioning and managing infr astructure deploying application code automating software release processes and monitoring application and infrastructure performance Running development and test workloads on AWS enables the elimination of hardware based resource constraints to quickl y create developer environments and expand testing machine fleet It offers instant access to machines with flexible configuration while only charging for what is used This enables faster onboarding of new developers the ability to try out configuration changes in parallel and run as large a test pass as needed Built in Security Government agencies are s tewards of citizens’ data and it is imperative to have the right controls in place to m aintain availability and integrity of that data Cloud security at AWS is the highest priority AWS customers can benefit from a data center and network architecture built to meet the requirements of the most security sensitive organizations With built in security it’s possible to : • React to incidents quickly • Run security scans daily • Monitor and track systems • Receive alerts if any changes are made to systems or services ArchivedAmazon Web Services – Digital Transformation Checklist Page 8 Data Protection Highly resilient disaster recovery is often viewed as complex and cost prohibitive but it’s affordable and easy to use in the cloud Agencies are using the AWS C loud to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site If an incident occurs AWS provides rapid recovery of IT infrastructure and data to ensure business continuity Expedite New Service Delivery Speed and agility have become basic requirements for conducting business Today agencies must design flexibility into new services from the start to make it easy to adapt as the mission evolves This is also paramount for transforming IT infrastructure M oving to an on demand computing environme nt delivers the requisite flexibility and scalab ility to support a collaborative work environment This approach minimizes costs and r eliably adapts resources to meet the needs of the business How AWS Delivers The AWS Cloud Adoption Framework offers structure to help agencies develop an efficient and effective plan for their digital transformation Guidance and best practices prescribed within the framework offer a compr ehensive approach to cloud computing across the organization throughout the IT lifecycle Agencies no longer need to plan for and procure IT infrastructure (that is network data storage system resources data centers and supporting hardware and software) weeks or months in advance Instead it’s possible to instantly configure and launch hundreds or thousands of servers in minutes and deliver results faster Global Reach By combining expertise across agencies to work on common problems organizations around the globe can 
share best practices take advantage of economies of scale to reduce costs provide better quality deliver more effective services and reduce ri sk ArchivedAmazon Web Services – Digital Transformation Checklist Page 9 How AWS Delivers AWS is organized into AWS Regions and Availability Zones that allow for high throughput and low latency communication This design also enables fault isolation An outage of one AWS Region or local Availability Zone does not affect the remaining AWS infrastructure Each Availability Zone has an identical IaaS cloud services system that enables mission owners to cost effectively deploy applications and services with great flexibility scalability and reliability Key Take away Digital transformation requires strong leadership to drive change as well as a clear vision Organizations are experimenting with and benefiting from cloud technology to achieve digital transformation The result of this transformation is a more resilient and innovative government that can deliver services to citizens through the medium they now demand and can help retain innovative talent within agencies As an added bonus this creates job opportunities because new talent is needed to solve new problems and the entrepreneurship this brings can spur economic development Whether it is transforming how individuals collaborate or the way in which organization s execute large scale processes digital transformation offers signif icant upside for all agencies regardless of their size or mission Contributors The following individuals and organizations contributed to this document: • Carina Veksler Public Sector Solutions AWS Public Sector • Doug VanDyke General Manager Federal Government AWS Public Sector Further Reading For additional information see the following : • How Cities Can Stop Wasting Money Move Faster and Innovate • AWS Cloud Adoption Framework ArchivedAmazon Web Services – Digital Transformation Checklist Page 10 • 10 Considerations for Cloud Procur ement • Maximizing Value with AWS
|
General
|
consultant
|
Best Practices
|
Docker_on_AWS_Running_Containers_in_the_Cloud
|
ArchivedDocker on AWS Running Containers in the Cloud First Published April 1 2015 Updated July 26 2021 This version has been archived For the latest version of this document refer to https://docsawsamazoncom/whitepapers/latest/dockeron aws/dockeronawshtml ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Container Benefits 2 Speed 2 Consistency 2 Density and Resource Efficiency 3 Portability 3 Containers orchestrations on AWS 4 Key components 9 Container Enabled AMIs 9 Scheduling 9 Container Repositories 11 Logging and Monitoring 12 Storage 12 Networking 13 Security 14 CI/CD 16 Infrastructure as Code 16 Scaling 17 Conclusion 18 Contributors 18 Further reading 18 Document revisions 19 ArchivedAbstract This whitepaper provides guidance and options for running Docker on AWS Docker is an open platform for developing shipping and running applications in a loosely isolated environment called a container Amazon Web Services ( AWS ) is a natural complement to containers and offers a wide range of scalable infrastructure services upon which containers can be deployed You will find various options such as AWS Elastic Beanstalk Amazon Elastic Container Service (Amazon ECS) Amazon Elastic Kubernetes Service (Amazon EKS) AWS Fargate and AWS App Runner This paper cover details of each option and key components of the container orchestration ArchivedAmazon Web Services Docker on AWS 1 Introduction Prior to the introduction of containers developers and administrators were often faced with the challenges of compatibility restrictions with applications workloads having to be built specifically for its pre determined environment If this workload needed to be migrated for example from bare metal to a virtual machine (VM) or from a VM to the cloud or between service providers this typically meant rebuilding the application or the workload entirely to ensure compatibility with the new environment Container was introduced to overcome these incompatibilities by providing a common interface With the release of Docker the interest in containers technology has rapidly increased Docker is an open source project that uses several resource isolation features of the Linux kernel to sandbox an application its dependencies configuration files and interfaces inside of an atomic unit called a container This allows a container to run on any host with the appropriate kernel components while shielding the application from behavioral inconsistencies due to varianc es in software installed on the host Containers use operating system level virtualization compared to VMs which use hardware level virtualization using hypervisor which is a software or a firmwar e that creates and runs VMs Multiple containers can run o n a single host OS without needing a hypervisor while being isolated 
from neighboring containers This layer of isolation allows consistency flexibility and portability that enable rapid software deployment and testing There are many ways in which usin g containers on AWS can benefit your organization Docker has been widely employed in use cases such as distributed applications batch jobs continuous deployment pipelines and etc The use cases for Docker continue to grow in areas like distributed dat a processing machine learning streaming media delivery and genomics The following examples show how AWS services can integrate with Docker: • Amazon SageMaker provides pre built Docker Images for Deep Le arning through TensorFlow and PyTorch or lets you bring your custom pre trained models through Docker images • Amazon EMR on Amazon EKS provides a deployment option to run open source big data frameworks on Amazon EKS • Bioinformatics applications for Genomics within Docker containers on Amazon ECS provide a consistent reproducible run time envi ronment ArchivedAmazon Web Services Docker on AWS 2 • For many SaaS providers the profile of Amazon EKS represents a good fit with their multi tenant microservices development and architectural goals Container Benefits The rapid growth of Docker contain ers is being fueled by the many benefits that it provide s If you have applications that run on VMs or bare metal servers today you should consider containerizing them to take advantage of the benefits that come from Docker containers These benefits can be seen across your organization from developers and operations to Q uality Assurance (QA) The primary benefits of Docker are speed consistency density and portability Speed Because of their lightweight and modular nature containers can enable rapid iteration of your applications Development speed is improved by the ability to deconstruct applications into smaller unit s This reduces shared resources between application components le ading to fewer compatibility issues between required libraries or packages Operational speed is improved because code built in a container on a developer’s local machine can be easily moved to a test server by simply moving the container The container s tartup time primarily depends on the size of the container image cache and the time to pull the image and start the container on host To improve the container startup time you must keep the size of image as small as possible using techniques like mult istage builds and local cache when applicable For more information see Best practices for writing Dockerfiles Consistency The consistency and fidelity of a modula r development environment provide predictable results when moving code between development test and production systems By ensuring that the container encapsulates exact versions of necessary libraries and packages it is possible to minimize the risk of bugs due to slightly different dependency revisions This concept easily lends itself to a disposable system approach in which patching individual containers is less preferable than building new containers in parallel testing and replacing the old Thi s practice helps avoid drift of packages across a fleet of containers versions of your application or dev/test/prod environments; the result is more consistent predictable and stable applications ArchivedAmazon Web Services Docker on AWS 3 Density and Resource Efficiency Containers facilitate enhanced resource efficiency by allowing multiple containers to run on a single system Resource efficiency is a natural result of the isolation and 
allocation techniques that containers use Containers can easily be restricted to a c ertain number of CPUs and allocated specific amounts of memory By understanding what resource a container needs and what resource is available to your VM or underlying host server it’s possible to maximize the containers running on a single host result ing in higher density increased efficiency of compute resources and less wastage on excess capacity Amazon ECS achieves this through placement strategies The binpack placement strategy tries to optimize placement of containers to be cost efficient as possible Containers in ECS are part of ECS tasks placed on compute instances to leave the least amount of unused CPU or memory This in turn minimizes the number of computed instances in use resulting in better resource efficiency The placement strategi es can be supported by placement constraints which lets you place tasks by constraints like the instance type or the availability zone This further enables you to efficiently utilize resources by ensuring that your tasks are running on instance types suitable for your workload by logically separating your tasks using task groups Amazon EKS uses the native Kubernetes scheduling and placement strategy which tries to place pods on nodes to best match the requirements of your workloads across nodes and no t to place pods on nodes where there aren’t sufficient resources Kubernetes allows you to limit the resources like CPU and memory to Kubernetes namespaces pods or containers For more information see Scheduling Portabilit y The flexibility of Docker containers is based on their portability ease of deployment and smaller size compared to virtual machines Like Git Docker provides a simple mechanism for developers to download and install Docker containers and their subsequ ent applications using the command docker pull Because Docker provides a standard interface it makes containers easy to deploy wherever you like providing portability among different versions of Linux your laptop or the cloud The images Docker builds are compliant with OCI (Open Container Initiative) which was created to support fully interoperable container standards Docker can build images by reading the instructions from a Dockerfile which is a text based manifest You can run the same Docker container on any supported version of Linux if you have the Docker stack installed on the host Additionally Docker supports Windows containers which can run on supported Windows versions Con tainers also provide flexibility by making a micro ArchivedAmazon Web Services Docker on AWS 4 services architecture possible In contrast to common infrastructure models in which a virtual machine runs multiple services packaging services inside their own container on top of a host OS allows a ser vice to be moved between hosts isolated from failure of other adjacent services and protected from errant patches or software upgrades on the host system Because Docker provides clean reproducible and modular environments it streamlines both code dep loyment and infrastructure management Docker offers numerous benefits for a variety of use cases whether in development testing deployment or production Containers orchestrations on AWS Amazon Web Services (AWS) is an elastic secure flexible and developer centric ecosystem that serves as an ideal platform for Docker deployments AWS offers the scalable infrastructure APIs and SDKs that integrate tightly into a development lifecycle and accentuate the benefit s of the lightweight and 
portable containers that Docker offers to its users In this section we will discuss the different possibilities for container deployments using AWS services such as AWS Elastic Beanstalk Amazon Elastic Container Service Amazon Elastic Kubernetes Service AWS Fargate and other additional services • AWS Elastic Beanstalk supports the deployment of web applications from Docker containers With Docker containers you can define your own runtime environment You can also choose your own platform programming language and any application dependencies (such as package managers or tools) which typically aren't supported by other platforms By using Docker with Elastic Beanstalk you have an infrastructure that handles all the details of capacity provisioning load balancing scaling and application health monitoring Elastic Beanstalk can deploy a Docker image and source code to EC2 instances running the Elastic Beanstalk Docker platform The platform offers multi container (and singlecontainer) support You can also leverage the Docker Compose tool on the Docker platform to simplify your application configuration testing and deployment In situations where you want to use the benefits of containers and want the simplicity of deployi ng applications to AWS by uploading a container image AWS Elastic Beanstalk may be the right choice While it is useful for deploying a limited number of containers the way to run and operate containerized applications with more flexibility at scale is b y using Amazon ECS ArchivedAmazon Web Services Docker on AWS 5 • Amazon ECS is a fully managed container orchestration service and the easiest way to rapidly launch thousands of containers across AWS’ broad range of compute options using your preferred CI/CD and automation tools Amazon ECS with EC 2 launch mode provides an easy lift for your applications that run on VMs The powerful simplicity of Amazon ECS enables you to grow from a single Docker container to managing your entire enterprise application portfolio across availability zones in the cl oud and onpremises using Amazon ECS Anywhere without the complexity of managing a control plane addons and nodes ECS Clusters are made up of Container Instances which are Amazon EC2 instances running the Amazon ECS container agent which communicates instance and container state information to the cluster manager; and pre configured dockerd the Docker d aemon The Amazon ECS container agent is included in the Amazon ECS optimized AMI but you can also install it on any EC2 instance that supports the Amazon ECS specification Your containers are defined in a task definition that you use to run individual t asks or tasks within an ECS service that enables you to run and maintain a specified number of tasks simultaneously in a cluster The task definition can be thought of as a blueprint for your application that you can specify various parameters such as the Docker image to use which ports should be open amount of CPU and memory to use with each task or container within a task and the IAM role the task should use We will discuss ECS Task and Service use cases in depth in the scheduling part under the key components section ArchivedAmazon Web Services Docker on AWS 6 • Amazon EKS provides a natural migration path if you are using Kubernetes already and want to continue to make use of those skills on AWS for your container applications EKS is a managed service that you can use to run Kubernetes on AWS without needing to install operate and maintain your own Kubernetes control plane or nodes It 
provides highly available and secure clusters and automates key tasks such as patching node provisioning and updates EKS runs a single tenant Kubernetes con trol plane for each cluster The control plane infrastructure is not shared across clusters or AWS accounts Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes that are responsible scheduling containers managing the availability of applications storing cluster data and other key tasks EKS runs upstream Kubernetes certified conformant for a predictable experience You can easily migrate any standard Kubernetes application to EKS without needing to refa ctor your code This allows you to deploy and manage workloads on your Amazon EKS cluster the same way that you would with any other Kubernetes environment Amazon EKS Anywhere is a new deployment option (coming in 2021 ) that enables you to easily create a nd operate Kubernetes clusters on premises including on your own virtual machines and bare metal servers EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on premises and automation tooling for cluster l ifecycle support As new Kubernetes versions are released and validated for use with Amazon EKS we will support three stable Kubernetes versions as part of the update process at any given time The container runtime used in EKS clusters may change in the future but your Docker containers will still work and you shouldn’t notice it EKS will eventually move to containers as the runtime for the EKS optimized Amazon Linux 2 AMI You can follow the containers roadmap issue for more details ArchivedAmazon Web Services Docker on AWS 7 • AWS Fargate provides a way to run containers in a serverless manner with both ECS and EKS AWS Fargate allows you to deliver auton omous container operations which reduces the time spent on configuration patching and security AWS Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment This enables your application to hav e workload isolation and improved security by design With AWS Fargate there is no over provisioning and paying for additional servers It allocates the right amount of compute eliminating the need to choose instances and scale cluster capacity When you run your Amazon ECS tasks and services with the Fargate launch type you package your application in containers specify the CPU and memory requirements define networking and IAM policies and launch the application Each Fargate task has its own isolati on boundary and does not share the underlying kernel CPU resources memory resources or elastic network interface with another task For Amazon EKS AWS Fargate integrates with Kubernetes using controllers that are built by AWS using the extensible model provided by Kubernetes These controllers run as part of the Amazon EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto Fargate • AWS App Runner is a fully ma naged service that makes it easy to quickly deploy containerized web applications and APIs at scale without any prior experience of running infrastructure on AWS You can go from an existing container image container registry source code repository or existing CI/CD workflow to a fully running containerized web application on AWS in minutes AWS App Runner supports full stack development including both front end and back end web applications that use HTTP and HTTPS protocols App Runner automatically b 
uilds and deploys the web application and load balances traffic with encryption It monitors the number of concurrent requests sent to your application and automatically adds additional instances based on request volume AWS App runner is ideal for you if you want to run and scale your application on AWS without configuring or managing infrastructure services This means you will not have any orchestrators to configure build pipelines to set up load balancers to optimize or TLS certificates to rotate This really makes it the simplest way to build and run your containerized web application in AWS • Other options: There are additional Docker specific offerings available on AWS which can be useful based on the nature of your workloads It is beyond the scope of this whitepaper to look at these offerings in detail but we have extensive official AWS documentation and blog posts available for each of these offerings ArchivedAmazon Web Services Docker on AWS 8 o AWS App2Container (A2C) is a command line tool which can analyze and build an inventory of all NET and Java applications running in virtual machines on premises or in the cloud A2C packages the applica tion artifact and identified dependencies into container images configures the network ports and generates a Dockerfile ECS task definition or Kubernetes deployment YAML by integrating with various AWS services o Amazon LightSail is a highly scalable compute and networking resource on which you can deploy run and manage containers When you deploy your images to your LightSail container service t he service automatically launches and runs your containers in the AWS infrastructure o AWS Batch helps you to run batch computing workloads on the AWS Cloud You can define job definitions that specify which Do cker container images to run your jobs which run as a containerized applications on AWS Fargate or Amazon EC2 resources in your compute environment o AWS Lambda : You can packa ge and deploy Lambda functions as container images of up to 10 GB in size This allows you to easily build and deploy larger workloads that rely on sizable dependencies such as machine learning or data intensive workloads Just like functions packaged as ZIP archives functions deployed as container images benefit from the same operational simplicity automatic scaling high availability and native integrations with many services that you get with Lambda o Red H at OpenShift Service on AWS (ROSA) : If you are presently running Docker containers in OpenShift ROSA can accelerate your application development process by leveraging familiar OpenShift APIs and tools for deployments on AWS ROSA comes with pay asyougo hourly and annual billing a 9995% SLA and joint support from AWS and Red Hat o AWS Proton is a fully managed delivery service for container and serverless applications for Platform engineering teams to connect and coordinate all the different tools needed for infrastructure provisioning code deployments monitoring and updates Your choice i s usually driven by how much control you want to retain at the expense of additional management effort versus how much AWS can manage for you in the environment the containers run in For most use cases you may want to consider starting on the fully man aged end of the spectrum (App Runner or Fargate) and work backwards towards more of a self managed experience based on the demands of your workload The self managed experience can go to the extent of managing Docker ArchivedAmazon Web Services Docker on AWS 9 containers on EC2 VMs without 
the use o f any AWS managed services so you have the flexibility to pick the orchestration solution that works best for your needs Key components Container Enabled AMIs AWS has developed a streamlined purpose built operating system for use with Amazon EC2 Contain er Service The Amazon ECS Optimized AMI built on top of Amazon Linux 2 is pre configured with the Amazon ECS container agent a docker daemon with docker runtime dependencies which is the simplest way for you to get started and to get your containers ru nning on AWS quickly The Amazon EKS optimized Amazon Linux AMI is also built on top of Amazon Linux 2 configured to work with Amazon EKS and it includes Docker kubelet and the AWS IAM Authenticator Although you can create your own container instance AM I that meets the basic specifications needed to run your containerized workloads the Amazon ECS and EKS optimized AMIs are pre configured with requirements and recommendations tested by AWS engineers You can also use the Bottlerocket a Linux based open source operating system purpose built by AWS for running containers It includes only the essential software required to run containers and focuses on security and maintainability providing a reliable consistent and safe platform for container based workloads Scheduling When applications are scaled out across multiple hosts the ability to manage each host node docker containers and abstract away the complexity of the underlying platform becomes important In this environment scheduling refers to the ability to schedule containers on the most appropriate host in a scalable automated way In this section we will review key scheduling aspects of various AWS container orchestration servic es • Amazon ECS provides flexible scheduling capabilities by leveraging the same cluster state information provided by the Amazon ECS APIs to make appropriate placement decision Amazon ECS provides two scheduler options The service scheduler and the RunTa sk ArchivedAmazon Web Services Docker on AWS 10 o The service scheduler is suited for long running stateless applications that ensures an appropriate number of tasks are constantly running (replica) and automatically reschedules if tasks fail Services also let you deploy updates such as changing th e number of running tasks or the task definition version that should be running The daemon scheduling strategy deploys exactly one task on each active container instance o The Run Task is suited for batch jobs scheduled jobs or a single job that perform work and stop You can allow the default task placement strategy to distribute tasks randomly across your cluster which minimizes the chances that a single instance gets a disproportionate number of tasks Alternatively you can customize how the scheduler places tasks using task placement strategies and constraints • Amazon EKS: Kubernetes scheduler ( kube scheduler ) becomes responsible for finding the best node for every newly created pod or any unscheduled pods that have no node assigned It assigns the pod to the node with the highest ranking based on the filtering and ranking system If there is more than one nod e with equal scores kube scheduler selects one of these at random you can constrain a pod so that it can only runon set of nodes The scheduler will automatically do a reasonable placement but there are some circumstances where you may want to control w hich node the pod deploys to for example to ensure that a pod ends up on a machine with SSD storage attached to it or to co locate pods from two different 
services that communicate a lot into the same availability zone o NodeSelector is the simplest recommended form of node selection constraint For the pod to be eligible to run on a node the node must have each of the indicated key value pairs as l abels o Topology spread constraints are to control how Pods are spread across your cluster among failure domains such as regions zones nodes and other use rdefined topology domains This can help to achieve high availability as well as efficient resource utilization o Node affinity is a property of Pods that attrac ts them to a set of nodes (either as a preference or a hard requirement) Taints are the opposite that allow a node to repel a set of pods Tolerations are applied to pods and allow (but do not require) the pods to schedule onto nodes with matching taints ArchivedAmazon Web Services Docker on AWS 11 o Pod Priority indicates the importance of a Pod relative to other Pods If a Pod cannot be scheduled the scheduler tries to preempt (evict) lower priority Pods make scheduling of the pending Pod possible • Lambda is serverless so you don’t need to manage where or how t o scheduler your containers After you create a container image in the Amazon ECR you can simply create and run the Lambda function • Elastic Beanstalk can deploy a Dock er image and source code to EC2 instances running the Elastic Beanstalk Docker platform Compared to EKS or ECS Elastic Beanstalk’s container scheduling features are less for the sake of the managed infrastructure provisioning For more information on sam ples and help getting started with a Docker environment see Using the Docker platform Container Repositories Docker containers are distributed in the form of Docker images Docker images are a compile time construct defined by the Dockerfile manifest with a set of instructions to create the containers Docker images are stored in container registries for delivery to applications that need them Within a registry a collec tion of related images is grouped together as repositories Amazon Elastic Container Registry (Amazon ECR) is the AWS native managed container registry for Open Container Initiative (OCI) images which provides a convenient option with native integration t o the AWS ecosystem With ECR you can share container images privately within your organization using a private repository by default only accessible within your AWS account by IAM users with the necessary permissions Public repositories are available w orldwide for anyone to discover and download Amazon ECR comes with features like encryption at rest using AWS Key Management Service (AWS KMS) and in transit using Transport Layer Security (TLS) endpoints Amazon ECR image scanning helps in identifying software vulnerabilities in your container images by using CVEs database from the Clair project and provides a list of scan findings Additionally you can use VPC interface endpoints for ECR to res trict the network traffic between your VPC and ECR to Amazon network without a need for an internet gateway NAT gateway or a VPN/Direct Connect You can also use a registry of your choice such as DockerHub or any other cloud of self hosted container reg istry and integrate seamlessly with AWS container services For developers starting out with containers DockerHub API limits 100 image requests every six hours for anonymous usage but with ECR public you get 1 unauthenticated pull every second providing a less restrictive option to get started Your limits increase significantly when you authenticate to ECR and this 
is the recommended way to work with container registries as your adoption increases ArchivedAmazon Web Services Docker on AWS 12 Logging and Monitoring Treating logs as a continuous st ream of events instead of static files allows you to react to the continuous nature of log generation You can capture store and analyze real time log data to get meaningful insights into the application’s performance network and other characteristics An application must not be required to manage its own log files You can specify the awslogs log driver for containers in your task definition under the logConfiguration object to ship the stdout and stderr I/O streams to a designated log group in Amazon CloudWatch logs for viewing and archival Additionally FireLens for Amazon ECS enables you to use task definition parameters with the awsfirelens log driver to route logs to other AWS services or third party log aggregation tools for log storage and anal ytics FireLens works with Fluentd and Fluent Bit a fully compatible with Docker and Kubernetes Using the Fluent Bit daemonset you can send container logs from your EKS clusters to CloudWatch logs Amazon CloudWatch is a monitoring service for that you can use to collect various system application wide metrics and logs and set alarms CloudWatch Container Insights helps you explore aggregate and summarize your container metrics application logs and performance log events at the cluster node pod task and service level through automated dashboards in the CloudWatch console Container Insights also provides diagnostic information such as container restart failures crashloop backoffs in an EKS cluster to help you isolate issues and resolve them qu ickly Container Insights is available for Amazon Elastic Container Service (Amazon ECS including Fargate) Amazon Elastic Kubernetes Service (Amazon EKS) and Kubernetes platforms on Amazon EC2 During AWS re:Invent 2020 AWS launched Amazon Managed Service for Prometheus (AMP) and Amazon Managed Service for Grafana (AMG) two new open source based managed serv ices providing additional options to choose from AWS also provides the option to discover and ingest Prometheus custom metrics to CloudWatch Container Insights to reduce the number of monitoring tools Given the pace at which new services and features are being launched in this space AWS launched the One Observability Demo Workshop to help customers to get hands on experience with AWS instrumentation options and the latest capabilities of AWS observability services in a self paced guided sandbox environment Storage By default all files created inside a container are stored on a writable container layer This means the data doesn’t persist when that container no longer exists and is tightly ArchivedAmazon Web Services Docker on AWS 13 coupled to the host where a container is running Amazon ECS supports the following data volume options for containers • BindMounts : A file or directory on a host can be mounted into one or more containers For tasks hosted on Amazon EC2 the data can be tied to the lifecycle of the host by specifying a host and optional sourcePath value in your task definition Within the container writes to ward the containerPath are persisted to the underlying volume defined in the sourcePath independ ently from the container’s lifecycle You can also share data from a source container with other containers in the same task For tasks hosted on AWS Fargate us ing platform version 140 or later they receive a minimum of 20 GB of ephemeral storage for bind mounts which 
can be increased to a maximum of 200 GB • Docker Volumes : With the support for Docker volumes you can have the flexibility to configure the life cycle of the Docker volume and specify whether it’s a scratch space volume specific to a single instantiation of a task or a persistent volume that persists beyond the lifecycle of a unique instantiation of the task • Amazon EFS: It provides simple scala ble and persistent file storage for use with your Amazon ECS tasks With Amazon EFS storage capacity is elastic growing and shrinking automatically as you add and remove files Your applications can have the storage they need when they need it Amazon EFS volumes are supported for tasks hosted on Fargate or Amazon EC2 instances Kubernetes supports many types of volumes Ephemeral volume types have a lifetime of a pod but persistent volumes exist beyond the lifetime of a pod When a pod ceases to exist Kubernetes destroys ephemeral volumes; however Kubernetes does not destroy p ersistent volumes For any kind of volume in a given pod data is preserved across container restarts For Amazon EKS Container Storage Interface (CSI) driver provides a CSI interface to manage the lifecycle of Amazon EBS EFS FSx for Lustre for Persiste nt Volume For more information see Kubernetes Volumes Networking AWS container services take advantage of the native networking features of Amazon Virtual Private Cloud (Amazon VPC) T his allows the hosts running your containers to be in different subnets across Availability Zones providing high availability Additionally you can take advantage of VPC features like Network Access Control Lists (NACL) and Security Groups to ensure tha t only network traffic you want to allow to come in or leave your containe r For ECS the main networking modes are ones that operate at a task level using the awsvpc network mode or the traditional bridge network ArchivedAmazon Web Services Docker on AWS 14 mode which runs a built in virtual network inside each Amazon EC2 instance awsvpc is the only network available for AWS Fargate Amazon EKS uses Amazon VPC Container Network Interface (CNI) plugin for Kuberne tes for the default native VPC networking to attach network interfaces to Amazon EC2 worker nodes Amazon VPC network policies restrict traffic between control plane components to within a single cluster Control plane components for a cluster can't view o r receive communication from other clusters or other AWS accounts except as authorized with Kubernetes RBAC policies The pods receive IP addresses from the private IP ranges of your VPC When the number of pods running on the node exceeds the number of a ddresses that can be assigned to a single network interface the plugin starts allocating a new network interface if the maximum number of network interfaces for the instance aren't already attached Using CNI customer networking you can assign IP addres ses from a different CIDR block than the subnet that the primary network interface is connected to You also have the option to set network policies through third party libraries for Calico so you have the options to control network communication inside y our Kubernetes cluster at a very granular level More details on EKS networking are available in the AWS documentation Security The shared responsibility of security applies to AWS container services as well AWS manages the security of the infrastructure that runs your containers However controlling access for your users and your container applications is your responsibility as the customer AWS 
Identity and Access Management ( IAM) plays an important role in the security of AWS container services The permissions provided by the IAM policies attached to the different principals in your AWS account determines what capabilities they have You should avoid using long lived credentials like access keys and secret access keys with your container applications IAM roles provide you with temporary security credentials for your role session You can use roles to delegate access to users applications or services that don't normally have ac cess to your AWS resources There are usually IAM roles at two different levels the first determines what a user can do within AWS Container services and the second is a role which determines which other AWS services your container applications running i n your cluster can interact with For EKS the IAM roles works together with Kubernetes RBAC to control access at multiple levels IAM roles for service accounts (IRSA) with EKS enables you to associate an IAM role with a Kubernetes service account This service account can then provide AWS permissions to the containers in any pod that uses that service account ArchivedAmazon Web Services Docker on AWS 15 With this feature you no longer need to over provision permissions to the IAM role associated with the Amazon EKS node so that pods on that node can call AWS APIs Other aspects of security are network security audit capability and secrets management The container services take advantage of dif ferent constructs provided by Amazon VPC By applying the right controls for IP addresses and ports at different levels you can ensure that only desired traffic enters and leaves your container applications For your audit needs you can use AWS CloudTrai l a service that provides a record of actions taken by a user role or another AWS service in AWS container services Using the information collected by CloudTrail you can determine the request made to Amazon ECS the IP address from which the request w as made who made the request when it was made and additional details AWS Secrets Manager and AWS Systems Manager Parameter Store are two services that can be used to secure sensitive data used within container applications Systems Manager Parameter Store provides secure hierarchical storage of data with no servers to manage Secrets Manager provides additional capabilities that includes random password generation and automatic password rotation Data stored within Systems Manager Parameter can be en crypted using AWS KMS and Secrets Manager uses it to encrypt the protected text of a secret as well AWS container services can integrate with either Systems Manager Parameter Store or Secrets Manager to use process sensitive data securely Kubernetes secr ets enables you to store and manage sensitive information such as passwords docker registry credentials and TLS keys using the Kubernetes API Kubernetes Secrets are by default stored as unencrypted base64 encoded strings They can be retrieved in pla in text by anyone with API access or anyone with access to Kubernetes' underlying data store You can apply native encryption atrest configuration provided by Kubernetes to encrypt the secrets at rest However this involves storing the raw encryption ke y in the encryption configuration which is not the most secure way of storing encryption keys Kubernetes stores all secret object data within etcd encrypted at the disk level using AWS managed encryption keys You can further encrypt Kubernetes secrets using a unique data encryption key (DEK) 
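As a minimal, hedged illustration of this envelope-encryption pattern, the sketch below uses Python and boto3 to create a customer managed AWS KMS key and associate it with an existing EKS cluster so that Kubernetes secrets stored in etcd are protected by a data encryption key wrapped by that KMS key. The cluster name is a placeholder, and error handling, key policies, and waiting for the association to complete are omitted.

```python
import boto3

kms = boto3.client("kms")
eks = boto3.client("eks")

# Create a customer managed KMS key to serve as the root of the
# envelope encryption hierarchy for Kubernetes secrets.
key = kms.create_key(Description="EKS secrets envelope encryption")
key_arn = key["KeyMetadata"]["Arn"]

# Associate the key with an existing cluster; Kubernetes secrets are
# then encrypted with a data encryption key protected by this KMS key.
eks.associate_encryption_config(
    clusterName="my-cluster",  # placeholder cluster name
    encryptionConfig=[
        {
            "resources": ["secrets"],
            "provider": {"keyArn": key_arn},
        }
    ],
)
```

Once the association completes, newly created and updated secrets are encrypted under the customer managed key, and access to that key can be constrained through its key policy and IAM.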
You are responsible for applying necessary RBAC based controls to ensure that only the right roles in your Kubernetes cluster have access to the secrets and the IAM permi ssions for the AWS KMS key is restricted to authorized principals ArchivedAmazon Web Services Docker on AWS 16 CI/CD Containers have become a feature component of continuous integration (CI) and continuous deployment (CD) workflows Because containers can be built programmatically using Dockerfiles containers can be automatically rebuilt anytime a new code revision is committed Immutable deployments are natural with Docker Each deployment is a new set of containers and it’s easy to rollback by deploying containers that reference previous images AWS container services provide APIs that make deployments easy by providing the complete state of the cluster and the ability to deploy containers using one of the built in schedulers or a custom scheduler AWS Code Services in AWS Developer Tools provide a convenient AWS native stack to perform CI/CD for your container applications It provide s tooling to pull the source code from the source code repository build the container image push the container image to the container registry and deploy the image as a running container in one of the container services AWS CodeBuild uses Docker images to provision the build environments which makes it flexible to adapt to the needs of the applicat ion you are building A build environment represents a combination of operating system programming language runtime and tools that CodeBuild uses to run a build NonAWS tooling for CI/CD like GitHub Jenkins DockerHub and many others can also integrate with the AWS container services using the APIs Infrastructure as Code You should define your cloud resources as code so that you can spend less time creating and managing the infrastructure As with other AWS services AWS CloudFormation provides you a way to model and set up your container resources formatted text files in JSON or YAML describ ing the resources that you want to provision If you're unfamiliar with JSON or YAML AWS also provides other options to script your container environments AWS Copilot CLI is a tool for developers to build release and operate production read y containerized applications on Amazon ECS and AWS Fargate Copilot takes best practices from infrastructure to continuous delivery and makes them available to customers from the comfort of their command line You can also monitor the health of your serv ice by viewing your service's status or logs scale up or down production services and spin up a new environment for automated testing For EKS eksctl is a simple CLI tool for creating and managing clusters on EKS It uses CloudFormation under the covers but allows you to specify your cluster configuration information using a config file with sensible defaults for configuration that is not specified If you prefer to use a familiar programming language to define cloud ArchivedAmazon Web Services Docker on AWS 17 resources you can use AWS Cloud Development Kit (CDK) CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation Today CDK s upports TypeScript JavaScript Python Java C#/Net and (in developer preview) Go Alternately if your organization already uses Terraform or similar tools that have modules for AWS container services you can use them to define your infrastructure as code too Scaling Amazon ECS is a fully managed container orchestration service with no 
control planes to manage scaling at all Amazon ECS provides options to auto scale container instances and ECS services Amazon ECS cluster auto scaling (CAS) enables you to have more control over how you scale the Amazon EC2 instances within a cluster The core responsibility of CAS to ensure that the right number of instances are running in an Auto Scaling Group to meet the needs of the tasks including tasks already r unning as well as tasks the customer is trying to run that don’t fit on the existing instances Amazon ECS Service Auto Scaling is the ability to automatically increase or decrease the desired count of tasks in your Amazon ECS service for Both EC2 and Farg ate based clusters You can use services’ CPU and memory utilization or other CloudWatch metrics Amazon ECS Service Auto Scaling supports target tracking step scaling and scheduled scaling policies For more information see Service auto scaling Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes Amazon EKS supports following Kubernetes auto scaling options • Cluster Autoscaler automatically adjusts the number of worker nodes in your cluster when pods fail or are rescheduled onto other nodes Amazon EKS node groups are provisioned as part of an Amazon EC2 Auto Scaling group which are compatible with the Cluster Autoscaler • Horizontal Pod Autoscaler automatically scales the number of pods in a deployment replication controller or stateful set based on CPU utilization or with custom metrics This can help your applications scale out to meet increased demand or scale in when resources are not needed thu s freeing up your nodes for other applications similar to Amazon ECS Service Autoscaling ArchivedAmazon Web Services Docker on AWS 18 • Vertical Pod Autoscaler frees the users from necessity of setting up todate resource limits and requests for the containers in their pods By default it provides the calculated recommendation without automatically changing resource requirements of the pods but when auto mode is configured it will set the requests automatic ally based on usage and thus allow proper scheduling onto nodes so that appropriate resource amount is available for each pod It will also maintain ratios between limits and requests that were specified in initial containers configuration For more inform ation on large clusters see considerations for large clusters Conclusion Using Docker containers in conjunction with AWS can accelerate your software development by creating s ynergy between your development and operations teams The efficient and rapid provisioning the promise of build once run anywhere the separation of duties via a common standard and the flexibility of portability that containers provide offer advantages to organizations of all sizes By providing a range of services that support containers along with an ecosystem of complimentary services AWS makes it easy to get started with containers while providing the necessary tools to run containers at scale Contributors Contributors to this document include : • Chance Lee Solutions Architect Amazon Web Services • Sushanth Mangalore Solutions Architect Amazon Web Services Further reading For additional information see: • Container Migration Methodology • Best Practices for writ ing Dockerfiles • Deploying AWS Elastic Beanstalk Applications from Docker Containers • Introducing AWS App Runner ArchivedAmazon Web Services Docker on AWS 19 • Twelve Factor Apps using Amazon ECS and AWS Fargate • Blue/Green deployment with 
CodeDeploy
• IAM roles for Kubernetes service accounts
• Amazon EKS Networking
• Amazon ECS using AWS Copilot
• Amazon EKS Best Practices Guides
• Amazon ECS Workshop
• Amazon EKS Workshop

Document revisions

Date – Description
July 26, 2021 – Whitepaper updated for technical accuracy
April 2015 – First publication
DoD-Compliant Implementations in the AWS Cloud
DoDCompliant Implementations in AWS First Published April 2015 Updated November 3 2021 Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved Contents Overview 1 Getting started 1 Shared responsibil ities and governance 2 Shared responsibility model 2 Compliance and governance 13 AWS global infrastructure 17 Architecture 19 Traditional DoD data center 19 DoD compliant cloud environment 20 AWS services 26 Compute 26 Networking 30 Storage 35 Management 40 Services in scope 44 Reference architecture 45 Impact lev el 2 45 Impact level 4 49 Impact level 5 51 Conclusion 53 Contributors 54 Further reading 54 Document revisions 54 Abstract This whitepaper is intended for Department of Defense ( DoD) mission owners who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS) It provides security best practices and architectural recommendations that can help you properly design and deploy DoD compliant infrastructure to host your mission applications and protect your data and assets in the AWS Cloud The paper is designed for Information Technology ( IT) decision makers and security personnel and assumes that mission owners are familiar with basic security concepts in the areas of networking operating systems data encryption and operational controls AWS provides a secure hosting environment for mission owners in which to deploy their applications Mission owners retain the responsibility to sec urely deploy manage and monitor their systems and applications in accordance with DoD security and compliance policies When operating an application or system on AWS the mission owner is responsible for network configuration and security of their AWS en vironment including Amazon Elastic Compute Cloud (Amazon EC2) guest operating system s and management of user access Amazon Web Services DoDCompliant Implementations in AWS 1 Overview In January 2015 the Defense Information Systems Agency (DISA) released the DoD Cloud Computing (CC) Security Requirements Guide (SRG) which provided guidance for cloud service providers and for DoD mission owners in support of running workloads in cloud environments The DoD CC SRG is the primary guidance for cloud computing in the DoD community This whitepaper provides highlevel guid ance for DoD mission owners and partners in designing and deploying solutions in the AWS Cloud that are able to be accredited at Impact Level (IL) 2 IL 4 and IL 5 Although t here are many design permutations that can meet CC SRG requirements on AWS this document presents sample reference architectures to consider that will address many of the common use cases for IL2 IL4 and IL5 Getting started When considering a n applicat ion deployment or migration to the AWS Cloud DoD mission owners must first make sure that their IT plans align with their 
organization’s business model A solid understanding of the mission and core competencies of your organization will help you identify opportunities for modernization and innovation by migrating to the AWS Cloud You must think through key technology questions includin g: • How can the AWS C loud advance your mission objectives? • Do you have legacy applications and systems that need greater scalability reliability or security than you can afford to maintain in your own environment? • What are your compute storage and network capacity requirements? • How will you be prepared to scale up (and down) to support the mission ? As you answer each question apply the lenses of flexibility cost effectiveness scalability elasticity and security Taking advantage of AWS services allow s you to focus on your co re competencies and leverage the resources and experience that AWS provides Amazon Web Services DoDCompliant Implementations in AWS 2 Shared responsibilities and governance As mission owners build systems on top of AWS Cloud infrastructure the responsibility for implementing operational maintenance and securit y measures are shared : mission owners provide operational maintenance and security support for their software defined cloud components and AWS provide s operational maintenance and security for its infrastructure Mission owners can also inherit or use securi ty controls provided by AWS Shared responsibility model Security and compliance are shared responsibilit ies between AWS and mission owners This shared model can help relieve your operational burden because AWS operates manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates The mission owner assumes responsibility and management of the guest operating system (incl uding updates and security patches) and other associated application software as well as the configuration of the AWS provided security group firewall Mission owners should carefully consider the services they choose as their responsibilities vary depend ing on the services used the integration of those services into their IT environment and applicable laws and regulations1 Security responsibilities in the Cloud and of the Cloud Amazon Web Services DoDCompliant Implementation s in AWS 3 It is possible for mission owners to enhance security and/or meet their more stringent compliance requirements by leveraging AWS services like Amazon GuardDuty AWS Key Management Service (AWS KMS ) and encrypted Amazon Simple Storage Service (Amazon S3) buckets as well as network firewalls and centralized log aggregation The nature of this shared responsibility also provides the flexibility and mission owner control that permits the deployment of solutions that meet industry specific certification requirements This mission owner and AWS shared responsibility model also extends to compliance contro ls Just as the responsibility to operate the IT environment is shared between AWS and its mission owners so is the management operation maintenance and verification of shared compliance controls AWS manages security controls associated with AWS physi cal infrastructure Mission owners can then use the AWS control and compliance documentation available to them at AWS Artifact to perform their control evaluation and verification procedures AWS offers ser vices and features t hat can ease management of the customer’s portion of the shared responsibility model Refer to AWS Cloud Security 
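As a small, hedged example of exercising the customer side of this shared responsibility model, the sketch below uses Python and boto3 to apply two of the controls mentioned above: default AWS KMS encryption and a public access block on an Amazon S3 bucket, and enabling Amazon GuardDuty in the account and Region. The bucket name, account ID, and key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")
guardduty = boto3.client("guardduty")

BUCKET = "example-mission-bucket"  # placeholder bucket name

# Require server-side encryption with a customer managed KMS key
# for every object written to a mission-owned bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-gov-west-1:111122223333:key/EXAMPLE",
                }
            }
        ]
    },
)

# Block all forms of public access to the same bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Turn on GuardDuty threat detection for the account in this Region.
guardduty.create_detector(Enable=True)
```

In practice, controls like these are typically codified in infrastructure-as-code templates and applied across accounts through organization-wide guardrails rather than run as one-off API calls.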
Mission owner responsibilities Service instance management Mission owners are responsible for managing their instantiations of Amazon S3 bucket storage and objects Amazon Relational Database Service (Amazon RDS) database instances EC2 compute instances and their associ ated storage and Virtual Private Cloud (VPC) network environments This includes mission owner installed operating systems databases and applications running on EC2 instances that are within their authorization boundary Mission owners are also respons ible for managing specific controls relating to shared interfaces and services within the ir security authorization boundary such as customized security control solutions Examples include but are not limited to configuration and patch management vulner ability scanning disaster recovery protecting data in transit and at rest host firewall management credential management identity and access management and VPC network configurations Mission owners provision and configure their AWS compute storage and network resources using API calls to AWS API endpoints or by using the AWS Management Console Using these methods the mission owner is able to launch and shut down EC2 Amazon Web Services DoDCompliant Implementations in AWS 4 and RDS instances change firewall parameters and perform other management functions Application management Applications that run on AWS services are the responsibility of each mission owner to configure and maintain Mission owners should address the controls relevant to each application in the applicable System Security Plan (SSP) Operating system maintenance AWS provides Amazon Machine Images (AMIs) for standard OS releases that include Amazon Linux 2 Microsoft Windows Server R ed Hat Enterprise Linux SUSE Linux and Ubuntu Linux with no additional configuration applied to the image An AMI provides the information required to launch an EC2 instance which is a virtual server in the cloud The miss ion owner specifies the AMI used to launch an instance and the mission owner can launch as many instances from the AMI as needed An AMI includes the following: • A template for the root volume for the instance The root volume of an instance is either an Amazon Elastic Block Store (Amazon EBS) volume or an instance store volume • Launch permissions that control which AWS accounts can use the AMI to launch instances • A block device mapping that specifies the volumes to attach to the instance when it’s launched The OS that is installed on an AMI provided by AWS is patched to a point in time In general AMIs include a minimal install of a guest operating system AWS does not perform any systems administration operati ons or maintenance duties such as patching DoD mission owners are responsible for properly hardening patching and maintaining their AMIs in accordance with DoD Security Technical Implementation Guides (STIGs) and the Information Assurance Vulnerability Management process To aid mission owners in compliance and configuration manag ement consider implementing AWS Systems Manager AWS Systems Manager can scan your instances against your patch configuration and custom policies You can define patch baselines maintain up todate antivirus definitions and enforce firewall policies You can also remotely manage your servers at scale without manually loggi ng in to each server Systems Manager also provides a centralized store to manage your Amazon Web Services DoDCompliant Implementations in AWS 5 configuration data whether i n plaintext such as database strings or secrets 
such as passwords This allows you to separate your secrets and configuration data from c ode Amazon EC2 provides an AWS Systems Manager (SSM ) document AWSEC2 ConfigureSTIG to apply Security Technical Information Guide ( STIG ) controls to an instance to help you quickly build compliant images following STIG standards The STIG SSM document scans for misconfigurations and runs a remediation script The STIG SSM document installs InstallRoot on Windows AMIs which is a utility produced by the Department of Defense (DoD) designed to instal l and update DoD certificates and remove unnecessary certificates to maintain STIG compliance There are no additional charges for using the STIG SSM document For more information refer to AWSEC2 ConfigureSTIG In 2019 AWS release d new AMIs for Microsoft Windows Server to help you meet STIG compliance standards Amazon EC2 Windows Server AMIs for STIG Compliance are preconfigured with more than 160 req uired security settings STIG compliant operating systems include Windows Server 2012 R2 Windows Server 2016 and Windows Server 2019 The STIG compliant AMIs include updated DoD certificates to help you get started and achieve STIG compliance For instru ctions on how to deploy these AMIs consult Amazon EC2 documentation or search on the AWS Marketplace AWS does not guarantee a specific patch level or control configuration settings Mission owner responsibility includes updating any EC2 instance to a recent patch level and configuring the instance to suit specific mission needs Upon deployment of EC2 instances the mission owner can assume full administrator access and is responsible for performing additional configuration patching security hardening vulnerability scanning and application installation AWS does not maintain administrator access to mission owner EC2 instances Mission owners can customize the instance launched from a public AMI and then save that configuration as a custom AMI for the mission owner’s own use After mission owners create and register an AMI they can use it to launch new instances This concept is analog ous to creating virtual machine templates in a traditional data center environment Instances launched from this customized AMI contain all of the customizations that mission owner has made The mission owner can deregister the AMI when finished After the AMI is deregistered mission owners cannot use it to launch new instances Amazon Web Services DoDCompliant Implementations in AWS 6 Creating custom Amazon Machine Images (AMIs) Workload migration Mission owners also have several options to assist in bulk virtual machine migration to AWS commonly referred to as lift andshift One such option is the AWS Server Migration Service (SMS) AWS SMS is an agentless service which makes it easier and faster for mission owners to migrate thousa nds of on premises workloads to AWS AWS SMS lets mission owners automate schedule and track incremental replications of live server volumes making it easier to coordinate large scale server migrations Although agentless SMS does require privileged acc ess to the source servers' hypervisor A second option is AWS Application Migration Service formerly known as CloudEndure Migration which is an agent based approach AWS Application Migration Service simplifies expedites and reduces the cost of cloud m igration by offering a highly automated lift andshift solution With AWS Application Migration Service you can maintain normal business operations throughout the replication process It nearly continuously replicates source 
servers which means little to no performance impact When you’re ready to launch the production machines your machines are automatically converted from their source infrastructure into the AWS infrastructure so they can boo t and run natively in AWS Security group configuration Mission owners are responsible for properly configuring their security groups in accordance with their organization’s networking policies A security group acts as a virtual firewall for an instance t o control inbound and outbound traffic As part of ongoing operations and maintenance mission owners must regularly review their security group Amazon Web Services DoDCompliant Implementations in AWS 7 configuration and instance assignment to maintain a secure baseline Security groups are not a solution that ca n be deployed using a one sizefitsall approach They should be carefully tailored to the intended functionality of each class of instance deployed within the mission owner’s AWS environment VPC configuration Amaz on Virtual Private Cloud (VPC) provides enhanced capabilities that AWS mission owners can use to secure their AWS environment through the deployment of traditional networking constructs such as demilitarized zones ( DMZs ) Virtual Local Area Networks (VLANs) and subnets that are segregated by functionality Network Access Control Lists (N ACLs ) provide stateless filtering that can be used similar ly to a firewall to defend against malicious traffic at the subnet level This adds another layer of network security in addition to the mission owner’s security group implementation Inbound traffic into VPC Backups Mission owners are responsible for establishing a backup strategy using AWS services or third party tools that meet the retention goals identified for their application Through the use of Amazon EBS snapshots mission owners can ensure their data is backed up to Amazon S3 on a regular basis Mission owners are responsible for setting and maintaining proper a ccess permissions to their Amazon EBS volumes and Amazon S3 Amazon Web Services DoDCompliant Implementations in AWS 8 buckets and objects Amazon S3 objects and Amazon EBS snapshots can also be configured with lifecycle policies to meet retention requirements and can be aged off to Amazon S3 Glacier for lower cost long term deep storage Host based security tools Mission owners should install and manage anti malware and host based intrusion detection systems in accordance with their organization’s security policies Host based security tools can be included withi n the mission owner’s AMI installed via bootstrapping services when the instance is launched or deployed using configuration management and automation tools like AWS Systems Manager Vulnerability scanning and penetration testing Mission owners are respo nsible for conducting regular vulnerability scanning and penetration testing of their systems in accordance with their organization’s security policies All vulnerability and penetration testing must be properly coordinated with AWS Security in accordance with AWS policy For more information refer to AWS Penetration Testing page Vulnerability scanning of EC2 instances can be accomplished using third party tools or via Amazon Inspector Amazon Inspector is an automated security assessment service that assesses applications for exposure vulnerabilities and deviations from best practices After performing an assessment Amazon Inspector produces a detailed list of security findings prioritized by level of severity These findings can be reviewed directly 
or as part of detailed assessment reports which are available via the Amazon Inspector console or API High availability and disaster recovery Mission owners also have the responsibility to architect their applications and systems so they are highly available and are routinely backed up Applications and systems should use multiple Availability Zones within an AWS Region for fault tolerance Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes includi ng natural disasters or system failures Mission owners also have the option of automating recovery in case of failures of systems or processes With APIs and automation in place mission owners can launch and test the Disaster Recovery (DR) solution on a recurring periodic basis to endure proper functionality of the solution and be prepared ahead of time Mission owners can reduce recovery times by quickly provisioning pre configured resources (such as AMIs ) when they are needed Amazon Web Services DoDCompliant Implementations in AWS 9 or cutover to already pr ovisioned DR site (and then scaling gradually as you need) Security best practices can be enumerated within an AWS CloudFormation template and provision resources within a VPC2 AWS Identity and Access Management Mission owners are responsible for properly managing their AWS account s including AWS account credentials as well as any IAM users groups or roles that they have associated with their account s This includes configuring multi factor author ization (MFA) password complexity and password retention requirements as applicable by accreditation policy Through the use of the AWS Identity and Access Management (IAM) service mission owners can implement rolebased access control that properly separates users by their identified roles and responsibilities thereby establishing least privilege and helping to ensure that users have only the permissions necessary to perform their assigned tasks To manage multiple AWS accounts mission owners should leverage AWS Organizations AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organi zation that you create and centrally manage AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet your mission’s budgetary security and compliance needs Identity federation AWS offers multipl e options for federating your identities in AWS You can use IAM to enable users to sign in to their AWS accounts with their existing corporate credentials AWS supports identify federation with on premises authentication stores such as Lightweight Directo ry Access Protocol (LDAP) and Active Directory With federation you can use single sign on (SSO) to access your AWS accounts using credentials from your organization’s directory Federation uses open standards such as Security Assertion Markup Language 2 0 (SAML) to exchange identity and security information between an identity provider (IdP) and an application Multi factor and CAC authentication At a minimum AWS mission owners should implement multi factor authentication (MFA) for the ir AWS account credentials as well as any privileged IAM accounts associated with AWS accounts MFA can be used to add an additional layer of security in Amazon S3 through activation of the MFA delete feature Amazon Web Services DoDCompliant Implementations in AWS 10 The DoD has standardized on MFA through the use of t he Common Access Card (CAC) or US 
Government Personal Identity Verification (PIV) token You can require your AWS users to authenticate to the AWS Management Console with a smart card by implementing SAML identity federation You can also implement a RAD IUS server to handle authentication requests to an AWS Managed Microsoft Active Directory instance Mission owner applications that are migrated to AWS that currently require CAC authentication at the application layer operate exactly the same as they do w ithin an on premises data center environment Privileged remote access Mission owners should implement privileged remote access for application and systems administrators to manage their AWS environment s There are several options for privileged remote access: • Amazon WorkSpaces • Amazon AppStream 20 • AWS Sys tems Manager Session Manager • EC2based bastion hosts Amazon Work Spaces is a managed secure Desktop asaService (DaaS) solution that helps you decrease the complexity in managing hardware inventory OS versions and patches and Virtual Desktop Infrastructure (VDI) Amazon Work Spaces can be configured to restrict access to specific resources within designated VPC subnets and specific AWS services controlled by the user’s IAM policy or role Users are able to access their Work Spaces through an installable desktop client or using the Remote Desktop client Both of these deployment options can be configured for CAC/PIV authentication A mazon Work Spaces is accredited at IL2 IL4 and IL5 Amazon AppStream 20 is a fully managed application streaming service You centrally manage your desktop applications on AppStream 20 and securely deliver them to any computer For example you are able to stream database management tools such as SQL Server Management Studio web browsers such as Firefox and Chrome (restricted to certain URLs if desired) as well as common office software Applications and data are not stored on users' computers Your applications are streamed as encrypted pixels and access data secured within your network Users are Amazon Web Services DoDCompliant Implementations in AWS 11 able to authenticate with their CAC/PIV tokens through the use of identity federation The Amazon AppStream 20 service is accredited at IL2 IL4 and IL5 AWS Systems Manager Session Manager is a fully managed AWS Systems Manager capability that lets you manage your EC2 instances on premises instances and virtual machines (VMs) through an interactive one click browser based shell or through the AWS CLI S ession Manager provides secure and auditable instance management without the need to open inbound ports maintain bastion hosts or manage SSH keys Session Manager also allows you to comply with security policies that require controlled access to instances strict security practices and fully auditable logs with instance access details while still providing end users with simple one click access to your managed instances across multiple operating system s Users are able to authenticate with their CAC/PIV tokens through the use of AWS Management Console identity federation Session Manager is a feature of AWS Systems Manager which is accredited at IL2 IL4 IL5 and IL6 EC2based b astion hosts are hardene d instances used for administrative tasks within the AWS environment Rather than allowing shell access to all EC2 instances from the public internet access can be restricted to a single EC2 instance thereby limiting the attack surface fr om possible comp romise Access to the bastion host should be through whiteliste d IP addresses within the mission owner’s 
organization require valid SSH keys and require multi factor authentication Auditing capabilities within the OS of the bastion host should be config ured to record all administrative activity These bastion hosts must be patched hardened and scanned in the same way as all other EC2 instances deployed within the mission environment Auditing Mission owners are responsible for properly configuring the ir AWS services to ensure that required audit logs are generated Audit logs should be forwarded to a dedicated log server instance or tool located within the mission owner’s VPC management subnet or written to a secured and encrypted Amazon S3 bucket ensuring that sensitive data is properly protected The mission owner should enable the use of AWS CloudTrail a managed service that enables governance compliance operational auditing and risk auditing of your AWS account With CloudTrail you can log nearly continuously monitor and retain account activity related to actions across your AWS infrastructure CloudTrail provides event history of your AWS account activity including actions taken through th e AWS Management Console AWS SDKs command line tools and other AWS services This event history Amazon Web Services DoDCompliant Implementations in AWS 12 simplifies security analysis resource change tracking and troubleshooting In addition you can use CloudTrail to detect unusual activity in your AWS accou nts Data protection and spillage Following the Shared Responsibility Model AWS customers are responsible for encryption and access control of their data within their AWS environments According to published DISA guidance all data at rest must also be encrypted You can use AWS Key Management Service (AWS KMS) to help ensure that your data is encrypted at rest For more information refer to AWS KMS Keys To provide protection against data spills all mission owner data stored on Amazon EBS volumes and Amazon S3 must be encrypted using AES 256 encryption in accordance with DoD guidance The mission owner is resp onsible for implementing FIPS 140 2 validated encryption for data at rest with customer managed encryption keys in accordance with DoD policy The combination of the mission owner’s encryption and the automated wipe functionality that AWS provides can ens ure that any spilled data is illegible ciphertext greatly limiting the risk of accidental disclosure AWS degausses and destroys all decommissioned media in accordance with National Institute of Standards and Technology ( NIST ) and National Security Agency (NSA) standards Intrusion detection Mission owners are responsible for properly implementing host based intrusion detection systems on their instances as well as any required network based intrusion detection To assist mission owners in this endeavor AWS provides native services like Amazon GuardDuty Amazon GuardDuty is a threat detection service that nearly continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads With the cloud the collection and aggregation of account and network activities is simplified but it can be time consuming for security teams to nearly continuously analyze event log data for potential threats With GuardDuty you now have an intelligent and cost effective option for nearly continuous threat detection in the AWS Cloud GuardDuty is accredited at IL2 IL4 and IL5 Mission owners are responsible for coordinating deployment of their intrusion detection capabilities with their Cyber Security Service Provider (CSSP) Mission 
owners can implement the Secure Cloud Computing Architecture (SCCA) within AWS to help them meet their compliance and security requirements More information about the SCCA architecture can be foun d in the IL2 IL4 and IL5 sample reference architecture section of this document Amazon Web Services DoDCompliant Implementations in AWS 13 Compliance and governance Mission owners are required to maintain strong governance over the entire IT environment regardless of whether it is deployed in a traditional da ta center or in the AWS C loud Best practices include : • Understanding your workload’s required compliance objectives • Establishing a control environment that meets those objectives and requirements • Understanding the requirements for validation based on the organization’s risk tolerance • Verifying the operating effectiveness of the control environment Deployment of workloads in the AWS Cloud gives you options to apply various types of controls and utilize multiple verification methods To help mission owners meet DoD compliance and governance requirements a mission owner must perform the following basic steps: 1 Review information from AWS and other sources to understand how their cloud environment is architected and configured 2 Document all relevant DoD compliance requirements that may be in scope for their workloads in the cloud 3 Design and implement control objectives to meet the organization’s security and compliance requirements 4 Identify and document controls owned by outside or third parties 5 Verify that all control objectives are met and all key controls are designed and operatin g effectively Approaching compliance and governance in this manner will help mission owners gain a better understanding of their environment and will help clearly delineate any verification activities that need to be performed FedRAMP The Federal Risk an d Authorization Management Program (FedRAMP) is a US government wide program that provides a standardized approach to security assessment authorization and nearly continuous monitoring for cloud products and services3 Amazon Web Services DoDCompliant Implementations in AWS 14 The DoD SRG uses the FedRAMP pro gram to establish a standardized approach for DoD entities that are utilizing commercial cloud services AWS has been assessed and approved under FedRAMP and has been issued two Agency Authority to Operate (ATO) authorizations covering all 48 Contiguous S tates and the District of Columbia (CONUS) Regions which include AWS GovCloud (US) US East and US West For more information on FedRAMP compliance of the AWS Cloud visit our FedRAMP FAQ page All cloud service providers must demonstrate compliance with FedRAMP standards before they can be considered for a provisional authorization under the CC SRG by DoD Cloud Computing Security Requirements Guide The DoD CC SRG provides a formalized assessment and authorization process for cloud service provider s to obtain a DoD Provisional Authorization (PA) which can then be leveraged by mission owners These provisional auth orizations provide reusable certification s that attest to the compliance of specific AWS Regions and services in alignment with DoD standards reducing the time necessary for a DoD mission owner to assess and authorize their workloads for migration to AWS The CC SRG supports the overall goal of the US federal government to increase the utilization of commercial cloud computing and it provides a means for the DoD to support this goal The CC SRG requires the categorization of mission systems 
and their workloads at one of four (4) Impact Levels Each level represents a determination of the data sensitivity of a particular system and the controls required to protect it starting at level 2 (lowest) through level 6 (highest) The following table summarizes th e impact levels with a description of a typical workload connectivity restrictions Boundary Cloud Access Point (BCAP) requirements and Computer and Network Defense (CND) requirements Table 1 – Security requirements Impact Level Information Sensitivity Security Controls Location OffPremises Connectivity Separation Personnel Requirements 2 PUBLIC or non critical mission information FedRAMP Moderate US / US outlying areas or DoD on premises Internet Virtual / Logical Public Community National Agency Check and Inquiries (NACI) Amazon Web Services DoDCompliant Implementations in AWS 15 Impact Level Information Sensitivity Security Controls Location OffPremises Connectivity Separation Personnel Requirements 4 CUI or Non CUI Noncritical mission information NonNational Security Systems Level 2 + CUIspecific Tailored Set US / US outlying areas or DoD on premises NIPRNet via CAP Virtual / Logical Limited “Public” Com munity Strong virtual separation between tenant systems and information US Persons ADP1 Single Scope Background Information (SSBI) ADP2 National Agency Check with Law and Credit (NACLC) Nondisclosure Agreement (NDA) 5 Higher Sensitivity CUI Mission critical information National Security Systems Level 4 + NSS and CUIspecific Tailored Set US / US outlying areas or DoD on premises NIPRNet via CAP Virtual / Logical Federal Government Community Dedicated multi tenant infrastructure physically separate from non federal systems Strong virtual separation between tenant systems and information 6 Classified SECRET National Security Systems Level 5 + Classified Overlay US / US outlying areas or DoD on premises CLEARED/CL ASSIFIED FACILITIES SIPRNET DIRECT With DoD SIPRNet Enclave Connection Approval Virtual / Logical Federal Government Community Dedicated multi tenant infrastructure physically separate from non federal and unclassified systems US citizens with favorably adjudicated SSBI and SECRET clearance NDA AWS hold s a provisional authorization for Impact Level 2 workloads within US East and US West which permits mission owners to deploy public unclassified information in these AWS Regions with both the AWS authorization and the mission application’s ATO AWS GovCloud holds a provisional authorization for Impact Levels 2 4 and 5 and permits mission own ers to deploy the full range of controlled unclassified information categories covered by these levels The AWS Secret Region holds a provisional authorization for Impact Level 6 and permits workloads up to and including Secret classification Amazon Web Services DoDCompliant Implementations in AWS 16 To begin pl anning for the deployment of a DoD mission system in AWS it is critical that the CC SRG impact level categorization be made in advance Systems designated at Impact Level 2 can begin deployments relatively quickly Conversely a designation at Impact Level 4 or 5 requires that the mission application on AWS be connected to the Nonsecure Internet Protocol Router Network (NIPRNet) by means of AWS Direct Connect Internet Protocol Security (IPsec) virtual private network (VPN) or both This NIPRNet connecti on also requires that the traversal of all in bound and outbound traffic to and from the mission owner’s VPC be routed through a Border Cloud Access Point (BCAP) or equivalent DoD 
CIO approved boundary and its associated CND suite The provisioning of circu its for an AWS Direct Connect to NIPRNet connection typically has a substantial lead time so mission owners should plan accordingly Mission owners can also take advantage of existing Cloud Access Points that have been set up by various DoD agencies or CS SPs including DISA For more information regarding the Department of Defense CC SRG refer to the DISA cybermil website for the latest Cloud Security announcements and requirements or the latest CC SRG v13 document For more information on the DISA Cloud Access Point refer to the DISA Cloud Connection Process Guide FedRAMP + CC SRG compliance = the path to AWS For DoD application owners to obtain an Authority to Operate ( ATO) for their cloud deployed applications from their approving authority they must select a cloud service provider that has obtained a provisional authorization from DoD Gaining authorization under FedRAMP is the first step toward gaining authorization from DoD There are four paths into the FedRAMP repository ; the Joint Authorization Board (JAB) and Agency ATO paths are the most common If a CSP wants to go beyond FedRAMP and become a DoD CSP the CSP must go through the DoD CC SRG assessment process Curr ently attaining a FedRAMP Moderate authorization enables a CSP to be considered for Impact Level 2 of the CC SRG while an additional assessment is required against the FedRAMP+ controls of Impact Levels 4 and 5 prior to being granted a provisional author ization at those levels Regardless of whether the Designated Accrediting Authority (DAA) is using the DoD Information Assurance and Certification Accreditation Process (DIACAP) or the Risk Management Framework (RMF) process the DAA has the ability to lev erage and inherit the Provisional Authorization package(s) as part of its assessment toward a final ATO which only it grants (not the Defense Information Systems Agency (DISA)) The RMF process has been formally adopted by the DoD Mission owners can reque st the AWS Amazon Web Services DoDCompliant Implementations in AWS 17 FedRAMP package to get a better understanding of compliance and security processes that AWS abides by Controls inheritance and responsibilities AWS global infrastructure AWS provides facilities and hardware in support of mission owners with security features controlled by AWS at the infrastructure level In the infrastructure as a service (IaaS) model AWS is responsible for applicable service delivery layers including: • Infrastructure (hardware and software that comprise the infrastructure) • Service management processes (the operation and management of the infrastructure and the system and software engineering lifecycles) Mission owners use AWS to manage the cloud infrastructure includi ng the network data storage system resources data centers security reliability and supporting hardware and software Across the globe the infrastructure of AWS is organized into Regions Each Region contains Availability Zones which are located within a particular geographic area that allows for low latency communication between the zones Customer data resides within a particular Region and does not move to a different Region unless the customer explicitly takes this action Amazon Web Services DoDCompliant Implementations in AWS 18 Currently there are seven Regions available within CONUS that are permitted for use by the DoD They are: • US East (IL2) o useast1 (Northern Virginia) o useast2 (Ohio) • US West (IL2) o uswest1 (Northern California) o 
uswest2 (Oregon) • AWS GovCloud (IL4 IL5 ITAR and export controlled workloads) o usgovwest1 (Oregon) o usgoveast1 (Ohio) • AWS Secret Region (IL6) Each Availability Zone has an identical cloud services offering compute storage and networking among other functionality that enables mission own ers to deploy applications and services with flexibility scalability and reliability AWS provides mission owners with the option to choose only the services they require and the ability to provision or release them as needed Amazon Web Services DoDCompliant Implementations in AWS 19 Architecture Traditional D oD data center Traditional three tier data center architecture A typical DoD three tier data center architecture might consist of the following: • Two data center locations ; one hosting the production environment and one hosting the COOP or DR environment • Each system consists of three distinct tiers or network enclaves Each enclave is defined by separate subnets or VLANS • Network isolation and control between enclaves is maintained by a firewall Th is isolation allows the web tier to communicate with the application tier and the application tier to communicate with the database tier Direct external access to the application and web tiers is prohibited • A load balancer is used to distribute traffic across the web servers and may also provide SSL /TLS offloading Because of the distance between these systems and the network connectivity the data replication between databases is asynchronous In addition to the three tier web application and database components additional “shared” or c ommon services are needed to support the infrastructure as a whole These services may be dedicated to this application or may be leveraged to support multiple applications Amazon Web Services DoDCompliant Implementations in AWS 20 DoD compliant cloud environment Migrating mission workloads to a DoD compliant e nvironment in the AWS Cloud is achievable through the following high level steps Step 1 – Find a “ home” in the AWS Cloud Planning migration to AWS Regions and Availability Zones Concepts • AWS Region • AWS Availability Zone Amazon Web Services DoDComplian t Implementations in AWS 21 Step 2 – Define your network in AWS Configuring VPC subnets NACLs and route tables Concepts • Virtual Private Cloud (VPC) • VPC Subnet • Network Access Control List (Network ACL) • VPC Route Table Amazon Web Services DoDCompliant Implementations in AWS 22 Step 3 – deploy servers (or containers or serverless infrastructure) Deploying Amazon EC2 instances in your subnets Concepts • Amazon Elastic Compute Cloud (EC2) Step 4 – Add storage Creating and attaching Amazon EBS volumes to your Amazon EC2 instances Amazon Web Services DoDCompliant Implementations in AWS 23 Concepts • Amazon EBS • Amazon S3 • Amazon S3 Glacier Step 5 – Add scalability redundancy and failover Adding ELB to handle traffic coming to your EC2 instances Concepts • Multi AZ Architecture • Elastic Load Balancing (ELB) Amazon Web Services DoDCompliant Implementations in AWS 24 Adding an Auto Scaling Group to incr ease and decrease your compute capacity Concepts • Amazon EC2 Auto Scaling Step 6 – Implement network traffic filtering Adding security groups to your VPC Amazon Web Services DoDCompliant Implementations in AWS 25 Concepts • AWS security groups • Defense in depth Recap Comparison of AWS Cloud architecture and onpremises data centers Availability Zones are analogous to data centers Subnets are analogous to layer 3 VLANs EC2 instances are analogous to servers 
or virtual machines Security groups are analogous to stateful firewalls Shared services There are se veral other components that are required to support a DoD compliant environment in AWS The DoD CC SRG stipulates that IL4+ workloads require protection by a web application firewall including network intrusion detection/prevention full packet capture fun ctionality vulnerability scanning endpoint protection identity and access control (including public key infrastructure ( PKI)) common services (DNS/NTP) as well as log management and patching capabilities Amazon Web Services DoDCompliant Implementations in AWS 26 Additional components required to support a DoDcompliant environment in AWS AWS services Compute Amazon Elastic Compute Cloud (EC2) Amazon EC2 is a web service that provides virtual server instances that can be used to build and host software systems Amaz on EC2 facilitates web scale computing by enabling mission owners to deploy virtual machines on demand The simple web service interface allows mission owners to obtain and configure capacity with minimal friction and it provides complete control over computing resources Amazon EC2 changes the economics of computing by allowing organizations to avoid large capital expenditures and instead pay only for capacity that is actually used Amazon EC2 functionality and features include : • Elastic – Amazon EC2 reduces the time required to obtain and boot new server instances to minutes allowing mission owners to quickly scale capacity both up and down as computing requirements change Amazon Web Services DoDCompliant Implementations in AWS 27 • Flexible – The mission owner can choose among various options f or number of CPUs memory size and storage size A highly reliable and fault tolerant system can be built using multiple EC2 instances EC2 instances are very similar to traditional virtual machines or hardware servers EC2 instances use operating systems such as Windows or Linux They can accommodate most software that can run on those operating systems EC2 instances have IP addresses so the usual methods of interacting with a remote machine such as Secure Shell (SSH) and Remote Desktop Protocol (RDP) can be used • Amazon Machine Image (AMI) – AMI templates are used to define an EC2 server instance Each AMI contains a software configuration including operating system application server and applications applied to an instance type Instance types in Amazon EC2 are essentially hardware archetypes matched to the amount of memory (RAM) and computing power (number of CPUs) needed for the application Using AMI template s to launch Amazon EC2 instances • Custom AMI – The first step toward building applicati ons in AWS is to create a library of customized AMIs Starting an application then becomes a matter of launching the AMI For example if an application is a website or web service the AMI should be configured with a web server ( for example Apache Nginx or Microsoft Internet Information Serv ices) the associated static content and the code for all dynamic pages Amazon Web Services DoDCompliant Implem entations in AWS 28 Alternatively the AMI could be configured to install all required software components and content by running a bootstrap script as soon as the instance is launched As a result after launching the AMI the web server will start and the application can begin accepting requests After an AMI has been created replacing a failing instance is very simple; a replacement instance can easily be launched tha t uses the same AMI as its templ ate 
• EC2 local instance store volumes – These volumes provide temporary block-level storage for EC2 instances. When an EC2 instance is created from an AMI, in most cases it comes with a preconfigured block of preattached disk storage. Unlike Amazon EBS volumes, data on instance store volumes persists only during the life of the associated EC2 instance, and they are not intended to be used as durable disk storage. Data on EC2 local instance store volumes is persistent across orderly instance reboots (following the OS vendor procedure for rebooting the underlying operating system), but not in situations where the EC2 instance shuts down or goes through a failure/restart cycle. Local instance store volumes should not be used for any data that must persist over time, such as permanent file or database storage. Although local instance store volumes are not persistent, the data can be persisted by periodically copying or backing it up to Amazon EBS or Amazon S3.

• Mission-owner controlled – Mission owners have complete control of their instances. They have root access to each one and can interact with them as they would any machine. Mission owners can stop an instance while retaining the data on its boot partition and then subsequently restart the same instance using web service APIs. Instances can also be rebooted remotely using web service APIs. Mission owners also have access to the AWS Management Console to view and control their instances.

• API management – Instances can be managed through an API call, scriptable command line tools, or the AWS Management Console.

Being able to quickly launch replacement instances based on a custom AMI is a critical first step toward fault tolerance. The next step is storing the persistent data that these server instances use.

• Multiple Availability Zones – Amazon EC2 provides the ability to place instances in multiple Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones. They provide inexpensive, low-latency network connectivity to other zones in the same Region. By launching instances in separate Availability Zones, mission owners can protect their applications from failure of a single location. Regions consist of one or more Availability Zones.

• Reliable – The Amazon EC2 Service Level Agreement (SLA) commitment is 99.95% availability for each EC2 Region.

• Elastic IP addresses – Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with a mission owner account and not with a particular instance, so mission owners control that address until they choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses can be programmatically remapped to any instance in the account in the event of an instance or Availability Zone failure. Mission owners don't need to wait for a network technician to reconfigure or replace a host, or wait for the Domain Name System (DNS) to propagate. In addition, mission owners can optionally configure the reverse DNS record of any of their Elastic IP addresses.

• Scalable (durability) – AWS Auto Scaling is a web service that enables mission owners to automatically launch or terminate Amazon EC2 instances based on user-defined policies, health status checks, and schedules.
For applications configured to run on a cloud infrastructure, scaling is an important part of cost control and resource management. Scaling is the ability to increase or decrease the compute capacity of an application, either by changing the number of servers (horizontal scaling) or by changing the size of the servers (vertical scaling). In a typical situation, when a web application starts to get more traffic, the mission owner either adds more servers or increases the size of existing servers to handle the additional load. Similarly, if traffic to the web application starts to slow down, underutilized servers can be shut down or the size of existing servers can be decreased. Depending on the infrastructure involved, vertical scaling might involve changes to server configurations every time the application scales. With horizontal scaling, AWS simply increases or decreases the number of servers according to the application's demands.

The decision when to scale vertically and when to scale horizontally depends on factors such as the mission owner's use case, cost, performance, and infrastructure. When using Auto Scaling, mission owners can automatically increase the number of servers in use when user demand goes up, to ensure that performance is maintained, and decrease the number of servers when demand goes down, to minimize costs. Auto Scaling helps make efficient use of compute resources by automatically doing the work of scaling for the mission owner; this automatic scaling is the core value of the service. Auto Scaling is well suited for applications that experience hourly, daily, or weekly variability in usage and need to scale horizontally to keep up with changes in usage. It frees users from having to predict traffic spikes accurately and plan for provisioning resources in advance of them. With Auto Scaling, mission owners can build a fully scalable and affordable infrastructure in the cloud.
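The following is a minimal boto3 (AWS SDK for Python) sketch of the pattern described above: an Auto Scaling group built from a launch template that wraps a custom AMI, plus a target-tracking policy that holds average CPU near a target. The AMI ID, security group ID, subnet IDs, and resource names are illustrative placeholders, not values taken from this paper.

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # Launch template that wraps a hardened custom AMI (placeholder IDs).
    ec2.create_launch_template(
        LaunchTemplateName="mission-app-template",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",            # placeholder custom AMI
            "InstanceType": "m5.large",
            "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
        },
    )

    # Auto Scaling group spread across two subnets in two Availability Zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="mission-app-asg",
        LaunchTemplate={"LaunchTemplateName": "mission-app-template", "Version": "$Latest"},
        MinSize=2, MaxSize=6, DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholders
    )

    # Target-tracking policy: add or remove instances to keep average CPU near 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="mission-app-asg",
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 50.0,
        },
    )

Replacing a failed instance is then automatic: the group launches a new instance from the same template, as described in the feature list above.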
Networking

Amazon Virtual Private Cloud (VPC)

AWS enables a mission owner to create the equivalent of a "virtual private enclave" with the Amazon VPC service. Amazon VPC is used to provision a logically isolated section of the AWS Cloud where a customer can launch AWS resources in a virtual network that is defined by the mission owner. This logically separate space within AWS contains compute and storage resources that can be connected to a mission owner's existing infrastructure through a virtual private network (VPN) connection, an AWS Direct Connect (private) connection, and/or the internet. With Amazon VPC it is then possible to extend existing DoD directory services, management tools, monitoring/security scanning solutions, and inspection capabilities, thus maintaining a consistent means of protecting information whether it resides on internal DoD IT resources or in AWS.

Network isolation and the ability to demonstrate separation of infrastructure and data are applicable at Impact Levels 2, 4, and 5, and they are a key requirement of the CC SRG for Impact Levels 4 and 5. The CC SRG requires Impact Level 4 and 5 mission applications to be connected to NIPRNet without direct internet access from the VPC.

Mission owners have complete control over the definition of the virtual networking environment within their VPC, including the selection of a private (RFC 1918) address range of their choice (for example, 10.0.0.0/16), the creation of subnets, the configuration of route tables, and the inclusion or exclusion of network gateways. Further, mission owners can define the subnets within their VPC in a way that enables them to group similar kinds of instances based on IP address range.

Mission owners can use VPC functionality and features in the following ways:

• Mission owners can define a VPC on scalable infrastructure and specify its private IP address range from any range they choose.
• Mission owners can subdivide a VPC's private IP address space further into one or more public or private subnets according to application requirements and security best practices. This can facilitate running applications and services in a customer's VPC.
• Mission owners define inbound and outbound access to and from individual subnets using network access control lists.
• Data can be stored in Amazon S3 with set permissions, ensuring that the data can only be accessed from within a mission owner's VPC.
• An Elastic IP address can be attached to any instance in a mission owner's VPC so it can be reached directly from the internet (Impact Level 2 only).
• A mission partner's VPC can be bridged with their on-site DoD IT infrastructure (encapsulated in an encrypted VPN connection) to extend existing security and management policies to the VPC instances as if they were running within the mission partner's physical infrastructure.

Amazon VPC provides advanced security features, such as security groups and network access control lists, to enable inbound and outbound filtering at the instance level and subnet level. When building a VPC, mission owners must define the subnets, routing rules, security groups, and network access control lists (NACLs) that comply with the networking and security requirements of the DoD and their organization.

Subnets

VPCs can span multiple Availability Zones. After creating a VPC, mission owners can add one or more subnets in each Availability Zone. Each subnet must reside entirely within one Availability Zone, cannot span zones, and is assigned a unique ID by AWS.

Routing

By design, each subnet must be associated with a route table that specifies the allowed routes for outbound traffic leaving the subnet. Every subnet is automatically associated with the main route table for the VPC; by updating the association, mission owners can change the contents of the main route table. Mission owners should know the following basic things about VPC route tables:

• The VPC has an implicit router.
• The VPC comes with a main route table that mission owners can modify.
• Mission owners can create additional custom route tables for their VPC.
• Each subnet must be associated with a route table, which controls the routing for the subnet. If a mission owner does not associate a subnet with a particular route table, the subnet uses the main route table.
• Mission owners can replace the main route table with a custom table that they have created (this table becomes the default table each new subnet is associated with).
• Each route in a table specifies a destination Classless Inter-Domain Routing (CIDR) block and a target (for example, traffic destined for 172.16.0.0/12 is targeted for the virtual private gateway).

Amazon VPC uses the most specific route that matches the traffic to determine how to route the traffic.
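As a concrete illustration of the subnet and routing concepts above, the following boto3 sketch creates a VPC with an RFC 1918 range, one subnet, and a custom route table associated with that subnet. The CIDR ranges, Availability Zone, and gateway ID are illustrative placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # VPC with a private RFC 1918 address range (placeholder CIDR).
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # One subnet in a single Availability Zone; a subnet cannot span zones.
    subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-gov-west-1a"
    )["Subnet"]["SubnetId"]

    # Custom route table; until it is associated, the subnet uses the main route table.
    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)

    # Example route: send traffic for an on-premises range to a virtual private gateway.
    ec2.create_route(
        RouteTableId=rtb_id,
        DestinationCidrBlock="172.16.0.0/12",
        GatewayId="vgw-0123456789abcdef0",   # placeholder virtual private gateway ID
    )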
Security groups and network ACLs

AWS provides two features that mission owners can use to increase security in their VPC: security groups and network access control lists (NACLs). Both features enable mission owners to control the inbound and outbound traffic for their instances. Security groups work at the instance level, and access control lists (ACLs) work at the subnet level. Security groups default to deny all and must be configured by the mission owner to permit traffic.

Security groups provide stateful filtering at the instance level and can meet the network security needs of many AWS mission owners. However, VPC users can choose to use both security groups and network ACLs to take advantage of the additional layer of security that network ACLs provide. An ACL is an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet. Mission owners can set up network ACLs with rules similar to those implemented in security groups to add a layer of stateless filtering to their VPC.

Mission owners should know the following basic things about network ACLs:

• A network ACL is a numbered list of rules that is evaluated in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest rule number available for use is 32766. We suggest that mission owners start by creating rules with rule numbers that are multiples of 100, so that new rules can be inserted later on.
• A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
• Each VPC automatically comes with a modifiable default network ACL; by default, it allows all inbound and outbound traffic.
• Each subnet must be associated with a network ACL; if mission owners don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
• Mission owners can create custom network ACLs; each custom network ACL starts out closed (permits no traffic) until the mission owner adds a rule.
• Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

The following table summarizes the basic differences between security groups and network ACLs. Inbound traffic is first processed according to the rules of the network ACL applied to a subnet and subsequently by the security group applied at the instance level.

Table 2 — Differences between security groups and network ACLs

Security group | Network ACL
Operates at the instance level (first layer of defense) | Operates at the subnet level (additional layer of defense)
Supports allow rules only | Supports allow rules and deny rules
Stateful: return traffic is automatically allowed, regardless of any rules | Stateless: return traffic must be explicitly allowed by rules
All rules are evaluated before deciding whether to allow traffic | Rules are processed in order when deciding whether to allow traffic
Applies to an instance only if someone specifies the security group when launching the instance, or associates the security group with the instance after launch | Automatically applies to all instances in the subnets it is associated with (a backup layer of defense, so you don't have to rely on someone specifying the security group)
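A short boto3 sketch of the two controls compared in Table 2: a stateful security group rule that admits HTTPS, and a stateless network ACL entry pair that must allow both the inbound request and the outbound ephemeral-port response. The group ID, ACL ID, and CIDR are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Security group (stateful): allowing inbound 443 is enough; return traffic is implicit.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",   # placeholder
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
        }],
    )

    # Network ACL (stateless): the inbound request and the outbound response
    # must both be allowed explicitly.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0", RuleNumber=100, Protocol="6",
        RuleAction="allow", Egress=False, CidrBlock="10.0.0.0/16",
        PortRange={"From": 443, "To": 443},
    )
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0", RuleNumber=100, Protocol="6",
        RuleAction="allow", Egress=True, CidrBlock="10.0.0.0/16",
        PortRange={"From": 1024, "To": 65535},
    )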
The following diagram illustrates the layers of security provided by security groups and network ACLs. For example, traffic from an internet gateway is routed to the appropriate subnet using the routes in the routing table. The rules of the network ACL associated with the subnet control which traffic is allowed to the subnet. The rules of the security group associated with an instance control which traffic is allowed to the instance.

Security layers provided by security groups and network ACLs

Storage

There are three common storage options for instances and/or resources that can be utilized in conjunction with a system hosted within an Amazon VPC: Amazon S3, Amazon EBS, and instance storage, each of which has distinct use cases.

Amazon S3

Amazon S3 is a highly durable repository designed for mission-critical and primary data storage for mission owner data. It enables mission owners to store and retrieve any amount of data at any time, from within Amazon EC2 or anywhere on the web. Amazon S3 stores data objects redundantly on multiple devices across multiple facilities and allows concurrent read or write access to these data objects by many separate clients or application threads. Amazon S3 is designed to protect data and allow access to it even in the case of a failure of a data center. Additionally, mission owners can use the redundant data stored in Amazon S3 to recover quickly and reliably from instance or application failures.

The Amazon S3 versioning feature allows the retention of prior versions of objects stored in Amazon S3 and also protects against accidental deletions initiated by staff or software error. Versioning can be enabled on any Amazon S3 bucket.

Mission owners should know the following basic things about Amazon S3 functionality and features:

• Mission owners can write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects mission owners can store is unlimited.
• Each object is stored in an Amazon S3 bucket and retrieved via a unique, developer-assigned key.
• Objects stored in an AWS Region never leave the Region unless the mission owner transfers them out.
• Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access. Objects can be made private or public, and rights can be granted to specific users.
• Options for secure data upload and download and encryption of data at rest are provided for additional data protection.
• Amazon S3 uses standards-based REST and SOAP interfaces designed to work with any internet development toolkit.
• Amazon S3 is built to be flexible, so that protocol or functional layers can easily be added.
• Amazon S3 includes options for performing recurring and high-volume deletions. For recurring deletions, rules can be defined to remove sets of objects after a predefined time period. For efficient one-time deletions, up to 1,000 objects can be deleted with a single request.

For more information on these Amazon S3 features, consult the Amazon S3 documentation.
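A minimal boto3 sketch of two of the Amazon S3 capabilities noted above: enabling versioning on a bucket and uploading an object under a developer-assigned key. The bucket name and key are placeholders, and the bucket is assumed to already exist with appropriate bucket policies.

    import boto3

    s3 = boto3.client("s3")

    # Turn on versioning so prior object versions are retained (placeholder bucket name).
    s3.put_bucket_versioning(
        Bucket="mission-owner-artifacts",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Upload an object; it is retrieved later with the same developer-assigned key.
    s3.put_object(
        Bucket="mission-owner-artifacts",
        Key="backups/app-config.json",
        Body=b'{"setting": "value"}',
    )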
Amazon Elastic Block Store

Amazon EBS provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. Amazon EBS volumes that are attached to an Amazon EC2 instance are exposed as storage volumes that persist independently from the life of the instance. With Amazon EBS, users pay only for what they use.

Amazon EBS is recommended when data changes frequently and requires long-term persistence. Amazon EBS volumes are particularly well suited for use as the primary storage for file systems, databases, or any applications that require fine-granular updates and access to raw, unformatted, block-level storage. Amazon EBS is particularly helpful for database-style applications that frequently encounter many random reads and writes across the dataset. Mission owners can attach multiple volumes to the same instance, within the limits specified by their AWS account. Currently, an AWS account is limited to 300 TiB of total storage within EBS volumes.

Amazon EBS volumes store data redundantly, making them more durable than a typical hard drive. The annual failure rate for an Amazon EBS volume is 0.1% to 0.5%, compared to 4% for a commodity hard drive.

Amazon EBS and Amazon EC2 are often used in conjunction with one another when building an application on AWS. Any data that needs to persist can be stored on Amazon EBS volumes, not on the temporary storage associated with each EC2 instance. If the EC2 instance fails and needs to be replaced, the Amazon EBS volume can simply be attached to the new EC2 instance. Because this new instance is a duplicate of the original, there is no loss of data or functionality.

EBS volumes are highly reliable, but to further mitigate the possibility of a failure, backups of these volumes can be created using a feature called snapshots. A robust backup strategy will include an interval between backups, a retention period, and a recovery plan. Snapshots are stored for high durability in Amazon S3. Snapshots can be used to create new EBS volumes, which are an exact copy of the original volume at the time the snapshot was taken. These EBS operations can be performed through API calls.

Mission owners should know the following basic things about Amazon EBS functionality and features:

• Amazon EBS allows mission owners to create storage volumes from 1 GB to 16 TB that can be mounted as devices by EC2 instances. Multiple volumes can be mounted to the same instance.
• Storage volumes behave like raw, unformatted block devices, with user-supplied device names and a block device interface. Mission owners can create a file system on top of EBS volumes or use them in any other way they would use a block device (like a hard drive).
• Amazon EBS volumes are placed in a specific Availability Zone and can then be attached to instances also in that same Availability Zone.
• Each storage volume is automatically replicated within the same Availability Zone. This prevents data loss due to failure of any single hardware component.
• Amazon EBS also provides the ability to create point-in-time snapshots of volumes, which are persisted to Amazon S3. These snapshots can be used as the starting point for new Amazon EBS volumes and protect data for long-term durability. The same snapshot can be used to instantiate as many volumes as desired.

For more information on these Amazon EBS features, refer to the Amazon EBS documentation.
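The snapshot workflow described above can be scripted. The following boto3 sketch takes a point-in-time snapshot of a volume and later restores it to a new volume in the same Availability Zone as a replacement instance. The volume, instance, and zone identifiers are placeholders; a production backup strategy would also define the interval and retention period mentioned above.

    import boto3

    ec2 = boto3.client("ec2")

    # Point-in-time snapshot of an existing volume (placeholder ID); persisted to Amazon S3.
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly backup of application data volume",
    )

    # Wait until the snapshot completes before relying on it.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Restore: create a new volume from the snapshot and attach it to a replacement instance.
    vol = ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-gov-west-1a",   # must match the target instance's zone
    )
    ec2.attach_volume(
        VolumeId=vol["VolumeId"],
        InstanceId="i-0123456789abcdef0",    # placeholder replacement instance
        Device="/dev/sdf",
    )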
Instance storage

An instance store provides volatile, temporary block-level storage for use with an EC2 instance and consists of one or more instance store volumes. Instance store volumes must be configured using block device mapping at launch time and mounted on the running instance before they can be used. Instances launched from an instance store-backed AMI have a mounted instance store volume for the virtual machine's root device volume and can have other mounted instance store volumes, depending on the instance type.

The data in an instance store is temporary and only persists during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data on instance store volumes is lost under the following circumstances:

• Failure of an underlying drive
• Stopping an Amazon EBS-backed instance
• Ending an instance

Therefore, AWS mission owners should not rely on instance store volumes for important, long-term data. Instead, keep data safe by using a replication strategy across multiple instances, storing data in Amazon S3, or using Amazon EBS volumes.

Encryption

AWS supports multiple encryption mechanisms for data stored within a mission owner's VPC. The following is a summary of the encryption methods (a short sketch follows this section):

• Amazon EBS encryption — For Amazon EBS volumes, encryption is managed by OS-level encryption (for example, BitLocker or Encrypted File System (EFS)), by third-party products, or by Amazon EBS encryption. For Amazon EBS encryption, when customers create an encrypted Amazon EBS volume and attach it to a supported instance type, the data stored at rest on the volume, the disk I/O, and the snapshots created from the volume are all encrypted. The encryption occurs on the servers that host Amazon EC2 instances, providing encryption of data in transit from EC2 instances to Amazon EBS storage.

• Amazon S3 encryption — Provides added security for object data stored in buckets in Amazon S3. Mission owners can encrypt data on the client side and upload the encrypted data to Amazon S3; in this case, mission owners manage the encryption process, the encryption keys, and related tools. Optionally, mission owners can use the Amazon S3 server-side encryption feature: Amazon S3 encrypts object data before saving it on disks in its data centers and decrypts the object data when objects are downloaded, freeing mission owners from the tasks of managing encryption, encryption keys, and related tools. Mission owners can also use their own encryption keys with the Amazon S3 server-side encryption feature.

• AWS Key Management Service (AWS KMS) — AWS KMS is a managed service that makes it easy for mission owners to create and control the encryption keys used to encrypt their data. Learn more about AWS KMS in the Management section of this paper.

• AWS CloudHSM — AWS CloudHSM is a cloud-based hardware security module (HSM) that allows you to easily add secure key storage and high-performance cryptographic operations to your AWS applications. CloudHSM has no upfront costs and provides the ability to start and stop HSMs on demand, allowing you to provision capacity quickly and cost-effectively, when and where it is needed. CloudHSM is a managed service that automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, and backups.

CloudHSM is one of several AWS services, including AWS KMS, that offer a high level of security for your cryptographic keys. AWS KMS provides an easy, cost-effective way to manage encryption keys on AWS that meets the security needs for the majority of customer data. CloudHSM offers customers the option of single-tenant access and control over their HSMs.
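To make the first two options above concrete, the following boto3 sketch creates an encrypted Amazon EBS volume and uploads an object to Amazon S3 using server-side encryption with an AWS KMS key. The KMS key alias, bucket name, and Availability Zone are placeholders.

    import boto3

    ec2 = boto3.client("ec2")
    s3 = boto3.client("s3")

    # Encrypted EBS volume; data at rest, disk I/O, and snapshots are encrypted.
    ec2.create_volume(
        AvailabilityZone="us-gov-west-1a",
        Size=100,                        # GiB
        Encrypted=True,
        KmsKeyId="alias/mission-data",   # placeholder KMS key alias (a key ID or ARN also works)
    )

    # S3 server-side encryption with a KMS key (SSE-KMS).
    s3.put_object(
        Bucket="mission-owner-artifacts",   # placeholder bucket
        Key="reports/2021-q4.csv",
        Body=b"col1,col2\n",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/mission-data",
    )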
Management

AWS Identity and Access Management (IAM)

IAM is a web service that enables mission owners to manage users and permissions in AWS. The service is targeted at organizations with multiple users or systems that use products such as Amazon EC2, Amazon RDS, and the AWS Management Console. With IAM, mission owners can centrally manage users, security credentials such as access keys, and the permissions that control which AWS resources users can access.

Without IAM, organizations with multiple users and systems must either create multiple AWS accounts, each with its own billing and subscriptions to AWS products, or employees must all share the security credentials of a single AWS account. Also, without IAM, mission owners have no control over the tasks a particular user or system can do and what AWS resources they might use. IAM addresses this issue by enabling organizations to create multiple users (each user is a person, system, or application) who can use AWS products, each with individual security credentials, all controlled by and billed to a single AWS account. With IAM, each user is allowed to do only what they need to do as part of the user's job.

IAM includes the following features:

• Central control of users and security credentials — Mission owners control creation, rotation, and revocation of each user's AWS security credentials (such as access keys).
• Central control of user access — Mission owners control what data users can access and how they access it.
• Shared resources — Users can share data for collaborative projects.
• Permissions based on organizational groups — Mission owners can restrict users' AWS access based on their job duties (for example, admin, developer, etc.) or departments. When users move inside the organization, mission owners can easily update their AWS access to reflect the change in their role.
• Central control of AWS resources — A mission owner's organization maintains central control of the data the users create, with no breaks in continuity or lost data as users move around within or leave the organization.
• Control over resource creation — Mission owners can help make sure that users create data only in sanctioned places.
• Networking controls — Mission owners can restrict user access to AWS resources to only from within the organization's corporate network, using SSL.

AWS Key Management Service (AWS KMS)

AWS Key Management Service allows mission owners to create and control the encryption keys used to encrypt their data. It utilizes FIPS 140-2 validated cryptographic modules. AWS KMS works with other AWS services, like AWS CloudTrail, to provide mission owners with logs of all key usage to help meet regulatory and compliance needs. AWS KMS gives mission owners more control over access to data that is encrypted: mission owners control who can use the AWS KMS keys and gain access to encrypted data.

AWS KMS uses envelope encryption to protect data. Envelope encryption is the practice of encrypting plaintext data with a data key and then encrypting the data key with another key. Envelope encryption offers several benefits; for example, when rotating keys, instead of re-encrypting the raw data multiple times with different keys, mission owners can re-encrypt only the data keys that protect the raw data.
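The envelope pattern just described can be sketched with boto3 and the third-party Python "cryptography" package: AWS KMS generates a data key, the plaintext copy encrypts the payload locally, and only the encrypted copy of the data key is stored alongside the ciphertext. The key alias is a placeholder, and this is a minimal illustration rather than a complete implementation.

    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")

    # 1. Ask AWS KMS for a data key protected by a KMS key (placeholder alias).
    data_key = kms.generate_data_key(KeyId="alias/mission-data", KeySpec="AES_256")

    # 2. Encrypt the payload locally with the plaintext data key (AES-256-GCM).
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, b"sensitive payload", None)

    # 3. Persist only the ciphertext, nonce, and the *encrypted* data key;
    #    discard the plaintext key from memory as soon as possible.
    stored = {"ciphertext": ciphertext, "nonce": nonce, "wrapped_key": data_key["CiphertextBlob"]}

    # 4. To decrypt later, unwrap the data key through KMS and reverse the operation.
    plaintext_key = kms.decrypt(CiphertextBlob=stored["wrapped_key"])["Plaintext"]
    recovered = AESGCM(plaintext_key).decrypt(stored["nonce"], stored["ciphertext"], None)

Rotating the KMS key then only requires re-wrapping data keys, not re-encrypting the raw data, which is the benefit noted above.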
AWS KMS includes the following features:

• AWS KMS keys are the primary resource within AWS KMS. AWS KMS keys are used to generate, encrypt, and decrypt the data keys that are used outside of AWS KMS to encrypt data. AWS KMS stores, tracks, and protects AWS KMS keys; when an individual wants to use an AWS KMS key, the key is accessed through AWS KMS. An AWS KMS key never leaves AWS KMS unencrypted, nor does AWS KMS store, manage, or track data keys.
• There are two types of AWS KMS keys within a mission owner's AWS account:
o Customer managed AWS KMS keys, in which the mission owner creates, manages, and uses the AWS KMS keys. In this case, the mission owner is responsible for enabling and disabling AWS KMS keys and for establishing IAM and key policies that grant others permission to use the keys.
o AWS managed AWS KMS keys. In this case, keys are managed by the AWS service that works with AWS KMS.
• Data keys are encryption keys for encrypting data, including large amounts of data. AWS KMS is used to generate, encrypt, and decrypt data keys.
• Mission owners can import their own key material from their own infrastructure and use it to encrypt their data. They can also use AWS KMS to manage the lifecycle of the key material.
• Key policies are used to control access to AWS KMS keys. Each AWS KMS key has its own policy that defines permissions and enables access to the key. For a user to access a resource, he or she must have access to the key and permission to use the key.
• Mission owners can add an additional layer of security by limiting permissions to AWS KMS using encryption context. The encryption context is another key-value pair of data that can be associated with the information protected by AWS KMS.
• AWS also offers an Encryption Software Development Kit (SDK), a library for implementing encryption and following best practices within an application.

AWS CloudTrail

AWS CloudTrail is a service that enables governance, compliance, and operational and risk auditing of a mission owner's AWS account. CloudTrail nearly continuously logs and monitors actions taken by a user, role, or another AWS service within an account. CloudTrail records actions taken in the AWS Management Console, AWS CLI, SDKs, and APIs as events. Mission owners can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across the mission owner's AWS infrastructure. Mission owners have the granularity to identify who or what took the action, what resources were acted upon, when the event occurred, and other details.

CloudTrail includes the following features:

• Mission owners have the ability to aggregate all logs in Amazon S3 and restrict access to the Amazon S3 buckets to prevent tampering and deletion of log data.
• Mission owners can turn on CloudTrail in all AWS Regions, even if they aren't operating in other Regions. This way, suspicious activity in an account is always logged.
• CloudTrail can be enabled to audit usage of AWS KMS keys.

Amazon CloudWatch

Amazon CloudWatch is a monitoring service for AWS Cloud resources and applications running within AWS. CloudWatch provides a near real-time stream of system events and can be used to monitor for specific events and perform actions in an automated manner. Amazon CloudWatch is different from AWS CloudTrail: the latter records API calls for an AWS account and delivers logs.

Amazon CloudWatch includes the following features:

• Mission owners can collect and track metrics, like CPU usage and disk reads/writes of Amazon EC2 instances, or other key performance indicators (KPIs).
• CloudWatch alarms send notifications or automatically make changes to the resources that mission owners are monitoring, based on rules they have defined (a brief sketch follows this list).
• Mission owners can create custom metrics to monitor application resources to gain visibility into resource utilization, application performance, and operational health.
• The CloudWatch service also includes CloudWatch Logs, which can be used to monitor, store, and access log files from Amazon EC2 instances, AWS CloudTrail, Route 53, Lambda, and other sources. This can be used for log aggregation and consolidation to support log reduction and auditing of security operations functions.
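A minimal boto3 sketch of the logging and monitoring practices above: a multi-Region CloudTrail trail delivering to an S3 bucket, and a CloudWatch alarm on EC2 CPU utilization. The bucket, instance, and SNS topic identifiers are placeholders, and the bucket is assumed to already carry a policy that permits CloudTrail delivery.

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    cloudwatch = boto3.client("cloudwatch")

    # Multi-Region trail so activity in any Region is captured (placeholder bucket).
    cloudtrail.create_trail(
        Name="org-audit-trail",
        S3BucketName="mission-owner-cloudtrail-logs",
        IsMultiRegionTrail=True,
    )
    cloudtrail.start_logging(Name="org-audit-trail")

    # Alarm when average CPU on an instance stays above 80% for 10 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-web-01",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws-us-gov:sns:us-gov-west-1:111122223333:ops-alerts"],  # placeholder
    )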
AWS Config

AWS Config is a service that lets mission owners assess, audit, and evaluate the configurations of their AWS resources. AWS Config monitors AWS resources and captures their configuration. It can automatically evaluate the recorded configuration against desired configurations, and it helps simplify compliance auditing, security analysis, change management, and operational troubleshooting. Mission owners can use AWS Config to keep an inventory of their AWS resources, as well as software configurations within EC2 instances.

AWS Config includes the following features:

• Mission owners can keep track of all of their AWS resources and determine when a change to a certain resource has been made.
• AWS Config can be used to assess overall compliance. Mission owners can define rules for provisioning AWS resources, for example, only allowing Amazon EBS volumes to be created if they are encrypted.
• Mission owners can also track the relationships among resources and review dependencies prior to making changes.
• AWS Config can also capture a comprehensive history of AWS resource configuration. Mission owners can obtain the details of the event (API call) that invoked a change.
• AWS Config also allows viewing of compliance status across the enterprise, over multiple accounts and multiple Regions. This makes it easier to identify non-compliant accounts or resources and view the data from the AWS Config console in a central account.

Services in scope

As stated previously, hosting a workload requires classification of data and determination of the DoD SRG Impact Level of the data. The impact level may require the mission owner to choose AWS Regions carefully for their workloads. CONUS Regions (US East/US West) within the US have a provisional authorization to host IL2 data, whereas the AWS GovCloud (US) Regions may be used to host IL4 and IL5 data. Within each Region there are also a variety of services that mission owners can use that have gone through the DoD SRG accreditation process. AWS is constantly working with third-party auditors and with DoD accreditation agencies to get more services accredited at different impact levels. For an updated list of services that are currently undergoing or have already undergone various accreditation processes, refer to the AWS Services in Scope page.

Reference architecture

Impact Level 2

CC SRG Impact Level 2 (IL2) systems are appropriate for hosting public or limited-access information. IL2 systems are not required to be fully segregated from internet traffic, and they can connect directly to the internet. The following is a sample reference architecture for an IL2 system with a recovery time objective (RTO) of greater than or equal to one day.

Sample Impact Level 2 architecture with RTO >= 1 day(s)

The following is an IL2 sample reference architecture with a recovery time objective (RTO) of less than or equal to one hour. This reference architecture is an example of how to both meet application RTO requirements and maintain CC SRG compliance.

Sample Impact Level 2 architecture with RTO <= 1 hour

The following are some key attributes:

• Access to and from the internet traverses an internet gateway.
• A layer 7 reverse web proxy may reside in the DMZ for protection against application-level attacks targeting web infrastructures. Similarly, mission owners have the option of using native AWS services like AWS Web Application Firewall and AWS Shield to protect against web-based attacks.
• Web and application instances are deployed in Auto Scaling groups across multiple Availability Zones.
• Each Impact Level 2 infrastructure should be adequately stratified to limit access to the web/application and database assets to either authorized traffic (by strata) or to administrative traffic initiated from an authorized bastion host contained within the infrastructure.
• Static, web-addressable content is stored in secured Amazon S3 buckets (using bucket policies) and is directly addressable from the internet.
• Infrastructure backups, images, and volume snapshots are securely stored in the Amazon S3 infrastructure in separate buckets, so they are not publicly addressable from the internet.
• The application database utilizes Amazon RDS, which is a managed offering for many flavors of commercial databases. The Amazon RDS instance is deployed in a multi-AZ configuration, with primary and secondary databases and synchronous replication between the two.

By default, the AWS infrastructure operates in a "zero trust" security model. Access to an instance, regardless of the strata on which it resides, must be explicitly allowed. The enforcement of this model is enabled through the use of security groups (SGs), which are addressable by other security groups.

For administrative access to any instance in the infrastructure, a bastion host is defined as the only host instance authorized to access infrastructure assets within a designated infrastructure. These hosts are typically Windows Server instances (RDP via port 3389), Remote Desktop Gateway servers, and/or Linux instances for SSH access to Linux hosts. Any instance designated as a bastion host should be included in a bastion security group. This should be the only security group granted access to the reverse web proxy, web/application instances, and database instances (via ports 22 and/or 3389). Additionally, to further bolster the defensive posture of the infrastructure, the bastion host(s) should be powered off when administration activities are not being performed.

The following table is a sample summary of security group behavior by traffic flow (a sketch of the corresponding API call follows the table):

Table 3 — Security group behavior by traffic flow

Traffic from security group (SG) | Traffic to SG | Security group rule
Internet | Reverse web proxy (reverseproxy-SG) | Allow 80/443 from the internet (all)
Reverse web proxy (reverseproxy-SG) | Web/application server(s) (webserver-SG) | Allow 80/443 from reverseproxy-SG
Web/application server(s) (webserver-SG) | Database server(s) (dbserver-SG) | Allow the appropriate database port
Administrator (trusted internet admin IP) | Bastion host (bastionhost-SG) | Allow 3389/22 from the trusted remote administration host (host IP address range)
Bastion host (bastionhost-SG) | Proxy, web/application, and database instances (reverseproxy-SG, webserver-SG, dbserver-SG) | Allow 3389/22 from bastionhost-SG
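Table 3's security-group-to-security-group model maps directly to the API. The following boto3 sketch allows RDP and SSH into the web tier only from the bastion host security group; both group IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow 3389 (RDP) and 22 (SSH) into webserver-SG only from bastionhost-SG.
    for port in (3389, 22):
        ec2.authorize_security_group_ingress(
            GroupId="sg-0aaaaaaaaaaaaaaaa",   # placeholder: webserver-SG
            IpPermissions=[{
                "IpProtocol": "tcp", "FromPort": port, "ToPort": port,
                "UserIdGroupPairs": [{"GroupId": "sg-0bbbbbbbbbbbbbbbb"}],  # placeholder: bastionhost-SG
            }],
        )

Because the rule references a security group rather than an IP range, any instance placed in the bastion security group is automatically covered, and no rule changes are needed when bastion hosts are replaced.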
Impact Level 4

DoD systems hosting data categorized at IL4 and IL5 of the CC SRG must attain complete separation from systems hosting non-DoD data and must route traffic entirely through dedicated connections to the DoD Information Network (DoDIN), through a VPN or an AWS Direct Connect connection. To achieve full separation of network traffic, the current approved DoD reference architecture is to establish an AWS Direct Connect connection from the DoDIN to AWS, including a BCAP with a Computer Network Defense (CND) suite hosted in a colocation facility associated with AWS.

The following illustration is a sample reference architecture for an IL4 system. The architecture follows best practices according to the DoD SRG, which provides guidelines for a Secure Cloud Computing Architecture (SCCA). In addition, AWS has published additional SCCA reference architecture guidance.

Sample Impact Level 4 architecture

The following list contains the CC SRG requirements for IL4 that are added to those already defined for IL2:

• No direct access to/from the public internet – All traffic in and out of AWS must traverse the DoDIN through a virtual private gateway.
• Security and management of the environment are separated from the application environment using different VPCs, in accordance with the SCCA architecture.
• A Virtual Data Center Security Stack (VDSS) VPC is utilized for performing security functionality in accordance with the SCCA, and all traffic flows through the VDSS VPC before reaching the mission owner VPC, which contains the application. The VDSS may contain approved third-party security components to meet the security requirements of the mission owner (for example, performing full packet capture or adding intrusion detection or prevention services).
• A Virtual Data Center Management Stack (VDMS) VPC is established for performing management functionality and offering shared services. This VPC may host shared services for multiple mission owner application VPCs. The VDMS may also perform host management via bastion hosts, security scans, and other services deemed necessary by the mission owner.
• Connection to the DoDIN – This can be accomplished through the use of AWS Direct Connect, IPsec VPN, or a combination of the two (a brief sketch follows this list). All traffic traversing between the DoDIN and the DoD application must use a BCAP.
• Access to Amazon S3 is restricted to AWS Direct Connect – Although Amazon S3 is internet addressable by default, access is permitted only through a private route introduced as part of the AWS Direct Connect service.
• All traffic to/from the VPC is scanned on the DoDIN – All traffic entering and/or exiting the Amazon VPC is required to pass through a hardware-based Computer Network Defense suite of tools. This infrastructure is both owned and operated by the government (or on behalf of the government by a Mission Partner organization).
• Host Based Security System (HBSS) servers are deployed in the VDMS VPC – All DoD EC2 instances will have HBSS installed, and they will communicate with an orchestrator server hosted in the VDMS VPC.
• Assured Compliance Assessment Solution (ACAS) tool is deployed in the VDMS VPC – All DoD instances will be scanned by an ACAS tool that is located in the VDMS, with full access to the subnets of the mission owner VPC.
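As one hedged illustration of the "connection to the DoDIN" item above, the boto3 calls below stand up an IPsec VPN between a VPC and an on-premises gateway. A real IL4 deployment would route through a BCAP and would typically use AWS Direct Connect, which is provisioned outside of these API calls; the BGP ASN, public IP, and VPC ID are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Virtual private gateway attached to the mission owner's VPC (placeholder VPC ID).
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw, VpcId="vpc-0123456789abcdef0")

    # Customer gateway describing the on-premises (DoDIN-side) VPN endpoint.
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000,                 # placeholder ASN
        PublicIp="203.0.113.10",      # placeholder documentation address
        Type="ipsec.1",
    )["CustomerGateway"]["CustomerGatewayId"]

    # IPsec VPN connection between the two gateways, using static routes.
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw,
        VpnGatewayId=vgw,
        Options={"StaticRoutesOnly": True},
    )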
Impact Level 5

Data that is classified at IL5 has additional controls that must be placed on top of the Impact Level 4 controls. One of the controls required for IL5 is that all data must be encrypted in flight and at rest. Any component of the architecture that processes IL5 data requires physical separation while that data is unencrypted. Within AWS, all IL5 workloads must be deployed in the AWS GovCloud (US) Region within an Amazon VPC, and network traffic must flow through an approved CAP or a DoD SCCA-compliant solution.

As previously stated, all data must be encrypted in flight and at rest. Decryption of data at certain points of the traffic flow (for example, decrypting to perform compute operations) requires an Amazon EC2 Dedicated Host or Dedicated Instance to meet the requirements for physical separation. The AWS services that require dedicated tenancy are Amazon EC2, Amazon EMR, AWS Elastic Beanstalk, Amazon WorkSpaces, Amazon Elastic Kubernetes Service without AWS Fargate, and Amazon Elastic Container Service without AWS Fargate. If architecting a three-tier web application like the examples used so far, all three tiers of the application's compute must use Dedicated Hosts or Dedicated Instances. It is also possible to run the web tier instances as On-Demand Instances (IL4) if the web servers only pass encrypted traffic; the application and database tiers will always require Dedicated Instances or Dedicated Hosts.

Multi-tenant

By default, EC2 instances are multi-tenant. The mission owner pays for the compute capacity by the hour or second and can increase or decrease capacity based on demand.

Dedicated Instances

Dedicated Instances are EC2 instances that run inside a VPC on hardware that is dedicated to a single customer. The Dedicated Instances of a mission owner are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. More information can be found on the Dedicated Instances pricing page.

Dedicated Hosts

A Dedicated Host is a physical server with Amazon EC2 instance capacity fully reserved for one AWS account. Dedicated Hosts are designed to meet compliance requirements and allow mission owners to utilize their server-bound software licenses.

The following diagram is an example of an IL5 architecture hosted in AWS.

Sample Impact Level 5 architecture hosted in AWS

The following are some key attributes:

• All EC2 instances must use Dedicated Instances or Dedicated Hosts if handling unencrypted data (a launch sketch follows this list).
• Mission owners can use AWS KMS for managing their encryption keys, or they may bring their own encryption keys. Key policies must be utilized to control and grant other resources or individuals access to encryption keys.
• This architecture also follows the SCCA guidelines by incorporating a VDSS and VDMS, as in the IL4 environment.
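A brief boto3 sketch of the dedicated-tenancy requirement in the first bullet above: launching an instance whose placement tenancy is "dedicated" so that it does not share hardware with other AWS accounts. The AMI and subnet IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Dedicated Instance: single-tenant hardware for tiers handling unencrypted IL5 data.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",       # placeholder hardened AMI
        InstanceType="m5.xlarge",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",   # placeholder subnet in the mission owner VPC
        Placement={"Tenancy": "dedicated"},    # or "host" to target a Dedicated Host
    )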
Conclusion

AWS provides a number of important benefits to DoD mission owners, including flexibility, elasticity, utility billing, and reduced time to market. It provides a range of security services and features that you can use to manage the security of your assets and data in AWS. Although AWS provides an excellent service management layer for infrastructure or platform services, mission owners are still responsible for protecting the confidentiality, integrity, and availability of their data in the cloud, and for meeting specific mission requirements for information protection. Conventional security and compliance concepts still apply in the cloud. Using the various best practices highlighted in this whitepaper, we encourage you to build a set of security policies and processes for your organization so you can deploy applications and data.

Contributors

The following individuals contributed to this document:

• Paul Bockelman, Lead Architect, AWS Worldwide Public Sector
• Andrew McDermott, Solutions Architect
• Nabil Merchant, Security Consultant, AWS Worldwide Public Sector
• Jim Collins, Principal Consultant, AWS Professional Services
• Michael Alpaugh, Senior Security Architect, AWS Worldwide Public Sector

Further reading

For additional information, refer to the following:

• AWS Whitepapers
• AWS Documentation

Document revisions

• November 3, 2021 – Major structural update and additional content; updated diagrams; compliance updates
• April 2018 – Updated diagrams; IL5 reference architecture section added; added descriptions of additional services
• April 2015 – First publication

Notes

1. Department of Defense Cloud Computing Security Requirements Guide
2. Using AWS for Disaster Recovery
3. FedRAMP: About Us
Encrypting Data at Rest

Ken Beer, Ryan Holland

November 2014

This paper has been archived. For the latest security information, see the AWS Cloud Security Learning page on the AWS website at: https://aws.amazon.com/security/security-learning

Contents

Contents 2
Abstract 2
Introduction 2
The Key to Encryption: Who Controls the Keys? 3
Model A: You control the encryption method and the entire KMI 4
Model B: You control the encryption method; AWS provides the storage component of the KMI while you provide the management layer of the KMI 11
Model C: AWS controls the encryption method and the entire KMI 12
Conclusion 17
References and Further Reading 19

Abstract

Organizational policies, or industry or government regulations, might require the use of encryption at rest to protect your data. The flexible nature of Amazon Web Services (AWS) allows you to choose from a variety of different options that meet your needs. This whitepaper provides an overview of the different methods for encrypting your data at rest that are available today.

Introduction

Amazon Web Services (AWS) delivers a secure, scalable cloud computing platform with high availability, offering the flexibility for you to build a wide range of applications. If you require an additional layer of security for the data you store in the cloud, there are several options for encrypting data at rest, ranging from completely automated AWS encryption solutions to manual client-side options. Choosing the right solution depends on which AWS service you're using and your requirements for key management. This whitepaper provides an overview of various methods for encrypting data at rest in AWS. Links to additional resources are provided for a deeper understanding of how to actually implement the encryption methods discussed.

The Key to Encryption: Who Controls the Keys?
Encryption on any system requires three components: (1) data to encrypt, (2) a method to encrypt the data using a cryptographic algorithm, and (3) encryption keys to be used in conjunction with the data and the algorithm. Most modern programming languages provide libraries with a wide range of available cryptographic algorithms, such as the Advanced Encryption Standard (AES). Choosing the right algorithm involves evaluating security, performance, and compliance requirements specific to your application. Although the selection of an encryption algorithm is important, protecting the keys from unauthorized access is critical.

Managing the security of encryption keys is often performed using a key management infrastructure (KMI). A KMI is composed of two subcomponents: the storage layer that protects the plaintext keys and the management layer that authorizes key usage. A common way to protect keys in a KMI is to use a hardware security module (HSM). An HSM is a dedicated storage and data processing device that performs cryptographic operations using keys on the device. An HSM typically provides tamper evidence, or resistance, to protect keys from unauthorized use. A software-based authorization layer controls who can administer the HSM and which users or applications can use which keys in the HSM.

As you deploy encryption for various data classifications in AWS, it is important to understand exactly who has access to your encryption keys or data and under what conditions. As shown in Figure 1, there are three different models for how you and/or AWS provide the encryption method and the KMI:

• You control the encryption method and the entire KMI.
• You control the encryption method; AWS provides the storage component of the KMI, and you provide the management layer of the KMI.
• AWS controls the encryption method and the entire KMI.

Figure 1: Encryption models in AWS

Model A: You control the encryption method and the entire KMI

In this model, you use your own KMI to generate, store, and manage access to keys, as well as control all encryption methods in your applications. The physical location of the KMI and the encryption method can be outside of AWS or in an Amazon Elastic Compute Cloud (Amazon EC2) instance you own. The encryption method can be a combination of open source tools, AWS SDKs, or third-party software and/or hardware. The important security property of this model is that you have full control over the encryption keys and the execution environment that utilizes those keys in the encryption code. AWS has no access to your keys and cannot perform encryption or decryption on your behalf. You are responsible for the proper storage, management, and use of keys to ensure the confidentiality, integrity, and availability of your data. Data can be encrypted in AWS services as described in the following sections.

Amazon S3

You can encrypt data using any encryption method you want and then upload the encrypted data using the Amazon Simple Storage Service (Amazon S3) API. Most common application languages include cryptographic libraries that allow you to perform encryption in your applications. Two commonly available open source tools are Bouncy Castle and OpenSSL. After you have encrypted an object and safely stored the key in your KMI, the encrypted object can be uploaded to Amazon S3 directly with a PUT request. To decrypt this data, you issue the GET request in the Amazon S3 API and then pass the encrypted data to your local application for decryption.
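The following is a minimal sketch of that client-side pattern, using the third-party Python "cryptography" package for AES-GCM and boto3 for the S3 calls. The key here is generated locally only for illustration; in the model described above it would come from, and remain protected by, your own KMI. The bucket and object key names are placeholders.

    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    s3 = boto3.client("s3")
    key = os.urandom(32)   # in practice, retrieved from your KMI and never stored with the data

    # Encrypt locally, then PUT only ciphertext to Amazon S3.
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"payroll-record", None)
    s3.put_object(Bucket="example-bucket", Key="records/0001", Body=nonce + ciphertext)

    # Later: GET the object and decrypt locally with the key from your KMI.
    blob = s3.get_object(Bucket="example-bucket", Key="records/0001")["Body"].read()
    plaintext = AESGCM(key).decrypt(blob[:12], blob[12:], None)

At no point does AWS see the key or the plaintext, which is the defining property of Model A.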
AWS provides an alternative to these open source encryption tools with the Amazon S3 encryption client, which is an open source set of APIs embedded into the AWS SDKs. This client lets you supply a key from your KMI that can be used to encrypt or decrypt your data as part of the call to Amazon S3. The SDK leverages Java Cryptography Extensions (JCE) in your application to take your symmetric or asymmetric key as input and encrypt the object prior to uploading to Amazon S3. The process is reversed when the SDK is used to retrieve an object: the downloaded encrypted object from Amazon S3 is passed to the client along with the key from your KMI, and the underlying JCE in your application decrypts the object.

The Amazon S3 encryption client is integrated into the AWS SDKs for Java, Ruby, and .NET, and it provides a transparent drop-in replacement for any cryptographic code you might have used previously with your application that interacts with Amazon S3. Although AWS provides the encryption method, you control the security of your data because you control the keys for that engine to use. If you're using the Amazon S3 encryption client on premises, AWS never has access to your keys or unencrypted data. If you're using the client in an application running in Amazon EC2, a best practice is to pass keys to the client using secure transport (e.g., Secure Sockets Layer (SSL) or Secure Shell (SSH)) from your KMI to help ensure confidentiality. For more information, see the AWS SDK for Java documentation and Using Client-Side Encryption in the Amazon S3 Developer Guide.

Figure 2 shows how these two methods of client-side encryption work for Amazon S3 data.

Figure 2: Amazon S3 client-side encryption from an on-premises system or from within your Amazon EC2 application

There are third-party solutions available that can simplify the key management process when encrypting data to Amazon S3. CloudBerry Explorer PRO for Amazon S3 and CloudBerry Backup both offer a client-side encryption option that applies a user-defined password to the encryption scheme to protect files stored on Amazon S3. For programmatic encryption needs, SafeNet ProtectApp for Java integrates with the SafeNet KeySecure KMI to provide client-side encryption in your application. The KeySecure KMI provides secure key storage and policy enforcement for keys that are passed to the ProtectApp Java client compatible with the AWS SDK. The KeySecure KMI can run as an on-premises appliance or as a virtual appliance in Amazon EC2. Figure 3 shows how the SafeNet solution can be used to encrypt data stored on Amazon S3.

Figure 3: Amazon S3 client-side encryption from an on-premises system or from within your application in Amazon EC2 using SafeNet ProtectApp and SafeNet KeySecure KMI

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are network-attached and persist independently from the life of an instance. Because Amazon EBS volumes are presented to an instance as a block device, you can leverage most standard encryption tools for file system-level or block-level encryption. Some common block-level open source encryption solutions for Linux are Loop-AES, dm-crypt (with or without LUKS), and TrueCrypt. Each of these operates below the file system layer, using kernel-space device drivers to perform encryption and decryption of data.
These tools are useful when you want all data written to a volume to be encrypted, regardless of what directory the data is stored in. Another option would be to use file system-level encryption, which works by stacking an encrypted file system on top of an existing file system. This method is typically used to encrypt a specific directory. eCryptfs and EncFS are two Linux-based open source examples of file system-level encryption tools. These solutions require you to provide keys, either manually or from your KMI.

An important caveat with both block-level and file system-level encryption tools is that they can only be used to encrypt data volumes that are not Amazon EBS boot volumes. This is because these tools don't allow you to automatically make a trusted key available to the boot volume at startup.

Encrypting Amazon EBS volumes attached to Windows instances can be done using BitLocker or Encrypted File System (EFS), as well as open source applications like TrueCrypt. In either case, you still need to provide keys to these encryption methods, and you can only encrypt data volumes.

There are AWS partner solutions that can help automate the process of encrypting Amazon EBS volumes as well as supplying and protecting the necessary keys. Trend Micro SecureCloud and SafeNet ProtectV are two such partner products that encrypt Amazon EBS volumes and include a KMI. Both products are able to encrypt boot volumes in addition to data volumes. These solutions also support use cases where Amazon EBS volumes attach to auto-scaled Amazon EC2 instances. Figure 4 shows how the SafeNet and Trend Micro solutions can be used to encrypt data stored on Amazon EBS using keys managed on premises, via software as a service (SaaS), or in software running on Amazon EC2.

Figure 4: Encryption in Amazon EBS using SafeNet ProtectV or Trend Micro SecureCloud

AWS Storage Gateway

AWS Storage Gateway is a service connecting an on-premises software appliance with Amazon S3. It can be exposed to your network as an iSCSI disk to facilitate copying data from other sources. Data on disk volumes attached to the AWS Storage Gateway will be automatically uploaded to Amazon S3 based on policy. You can encrypt source data on the disk volumes using any of the file encryption methods described previously (e.g., Bouncy Castle or OpenSSL) before it reaches the disk. You can also use a block-level encryption tool (e.g., BitLocker or dm-crypt/LUKS) on the iSCSI endpoint that AWS Storage Gateway exposes to encrypt all data on the disk volume. Alternatively, two AWS partner solutions, Trend Micro SecureCloud and SafeNet StorageSecure, can perform both the encryption and key management for the iSCSI disk volume exposed by AWS Storage Gateway. These partners provide an easy, check-box solution to both encrypt data and manage the necessary keys that is similar in design to how their Amazon EBS encryption solutions work.

Amazon RDS

Encryption of data in Amazon Relational Database Service (Amazon RDS) using client-side technology requires you to consider how you want data queries to work. Because Amazon RDS doesn't expose the attached disk it uses for data storage, transparent disk encryption using the techniques described in the previous Amazon EBS section is not available to you. However, selective encryption of database fields in your application can be done using any of the standard encryption libraries mentioned previously (e.g., Bouncy Castle, OpenSSL) before the data is passed to your Amazon RDS instance.
encryption libraries mentioned previously (eg Bouncy Castle OpenSSL) before the data is passed to your Amazon RDS instance While this specific field data would not easily support range queries in the database queries based on unencrypted fields can still return useful results The encrypted fields of the returned results can be decrypted by your local application for presentation To support more efficient querying of encrypted data you can store a keyed hash message authentication code (HMAC) of an encrypted field in your schema and you can supply a key for the hash function Subsequent queries of protected fields that contain the HMAC of the data being sought would not disclose the plaintext values in the query This allows the database to perform a query against the encrypted data in your database without disclosing the plaintext values in the query Any of the encryption methods you choose must be performed on your own application instance before data is sent to the Amazon RDS instance CipherCloud and Voltage Secur ity are two AWS partners with solutions that simplify protecting the confidentiality of data in Amazon RDS Both vendors have the ability to encrypt data using format preserving encryption (FPE) that allows ciphertext to be inserted into the database without bre aking the schema They also support tokenization options with integrated lookup tables In either case your data is encrypted or tokenized in your application before being written to the Amazon RDS instance These partners provide options to index and sear ch against databases with encrypted or tokenized fields The unencrypted or untokenized data can be read from the database by other applications without needing to distribute keys or mapping tables to those applications to unlock the encrypted or tokenized fields For example you could move data from Amazon RDS to the Amazon Redshift data warehousing solution and run queries against the non sensitive fields while keeping sensitive fields encrypted or tokenized Figure 5 shows how the Voltage solution can be used within Amazon EC2 to encrypt data before being written to the Amazon RDS instance The encryption keys are pulled from the Voltage KMI located in your data center by the Voltage Security client running on your applications on Amazon EC2 ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 9 of 20 Figure 5: Encrypting data in your Amazon EC2 applications before writing to Amazon RDS using Voltage SecureData CipherCloud for Amazon Web Services is a solution that works in a way that is similar to the way the Voltage Security client works for applications running in Amazon EC2 that need to send encrypted data to and from Amazon RDS CipherCloud provides a JDBC driver that can be installed on the application regardless of whether it’s running in Amazon EC2 or in your data center In addition the CipherCloud for Any App solution can be deployed as an inline gateway to intercept data as it is being sent to and from your Amazon RDS instance Figure 6 shows how the CipherCloud solution can be deployed this way to encrypt or tokenize data leaving your data center before being written to the Amazon RDS instance ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 10 of 20 Figure 6: Encrypting data in your data center before writing to Amazon RDS using CipherCloud Encryption Gateway Amazon EMR Amazon Elastic MapReduce (Amazon EMR) provides an easy touse Hadoop implementation running on Amazon EC2 Performing encryption throughout 
the MapReduce operation involves encryption and key management at four distinct points: 1 The source data 2 Hadoop Distributed File System (HDFS) 3 Shuffle phase 4 Output data If the source data is not encrypted th en this step can be skipped and SSL can be used to help protect data in transit to the Amazon EMR cluster If the source data is encrypted then your MapReduce job will need to be able to decrypt the data as it is ingested If your job flow uses Java and the source data is in Amazon S3 you can use any of the client decryption methods described in the previous Amazon S3 sections The storage used for the HDFS mount point is the ephemeral storage of the cluster nodes Depending on the instance type there m ight be more than one mount Encrypting these mount points requires the use of an Amazon EMR bootstrap script that will do the following: • Stop the Hadoop service • Install a file system encryption tool on the instance • Create an encrypted directory to mount the encrypted file system on top of the existing mount points • Restart the Hadoop service ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 11 of 20 You could for example perform these steps using the open source eCryptfs package and an ephemeral key generated in your code on each of the HDFS mounts You don’t need to worry about persistent storage of this encryption key because the data it encrypts does not persist beyond the life of the HDFS instance The shuffle phase involves passing data between cluster nodes before the reduce step To encrypt this data in transit you can enable SSL with a configure Hadoop bootstrap option when you create your cluster Finally to enable encryption of the output data your MapReduce job should encrypt the output using a key sourced from your KMI This data can be sent to Amazon S3 for storage in encrypted form Model B: You control the encryption method AWS provides the KMI storage component and you provide the KMI management layer This model is similar to Model A in that you manage the encryption method but it differs from Model A in that the keys are stored in an AWS CloudHSM appliance rather than in a key storage system that you m anage on premises While the keys are stored in the AWS environment they are inaccessible to any employee at AWS This is because only you have access to the cryptographic partitions within the dedicated HSM to use the keys The AWS CloudHSM appliance has both physical and logical tamper detection and response mechanisms that trigger zeroization of the appliance Zeroization erases the HSM’s volatile memory where any keys in the process of being decrypted were stored and destroys the key that encrypts stor ed objects effectively causing all keys on the HSM to be inaccessible and unrecoverable When you determine whether using AWS CloudHSM is appropriate for your deployment it is important to understand the role that an HSM plays in encrypting data An HSM can be used to generate and store key material and can perform encryption and decryption operations but it does not perform any key lifecycle management functions (eg access control policy key rotation) This means that a compatible KMI m ight be needed in addition to the AWS CloudHSM appliance before deploying your application The KMI you provide can be deployed either on premises or within Amazon EC2 and can communicate to the AWS CloudHSM instance securely over SSL to help protect data and encryption keys Because the AWS CloudHSM service uses SafeNet Luna appliances any key management server 
that supports the SafeNet Luna platform can also be used with AWS CloudHSM. Any of the encryption options described for AWS services in Model A can work with AWS CloudHSM as long as the solution supports the SafeNet Luna platform. This allows you to run your KMI within the AWS compute environment while maintaining a root of trust in a hardware appliance to which only you have access.

Applications must be able to access your AWS CloudHSM appliance in an Amazon Virtual Private Cloud (Amazon VPC). The AWS CloudHSM client provided by SafeNet interacts with the AWS CloudHSM appliance to encrypt data from your application. Encrypted data can then be sent to any AWS service for storage. Database, disk volume, and file encryption applications can all be supported with AWS CloudHSM and your custom application. Figure 7 shows how the AWS CloudHSM solution works with your applications running on Amazon EC2 in an Amazon VPC.

Figure 7: AWS CloudHSM deployed in an Amazon VPC

To achieve the highest availability and durability of keys in your AWS CloudHSM appliance, we recommend deploying multiple AWS CloudHSM appliances across Availability Zones, or in conjunction with an on-premises SafeNet Luna appliance that you manage. The SafeNet Luna solution supports secure replication of keying material across appliances. For more information, see AWS CloudHSM on the AWS website.

Model C: AWS controls the encryption method and the entire KMI

In this model, AWS provides server-side encryption of your data, transparently managing both the encryption method and the keys.

AWS Key Management Service (KMS)

AWS Key Management Service (AWS KMS) is a managed encryption service that lets you provision and use keys to encrypt your data in AWS services and in your applications. Master keys in AWS KMS are used in a fashion similar to the way master keys in an HSM are used: after master keys are created, they are designed to never be exported from the service. Data can be sent into the service to be encrypted or decrypted under a specific master key in your account. This design gives you centralized control over who can access your master keys to encrypt and decrypt data, and it gives you the ability to audit this access. AWS KMS is natively integrated with other AWS services, including Amazon EBS, Amazon S3, and Amazon Redshift, to simplify encryption of your data within those services. AWS SDKs are integrated with AWS KMS to let you encrypt data in your custom applications. For applications that need to encrypt data, AWS KMS provides global availability, low latency, and a high level of durability for your keys. Visit https://aws.amazon.com/kms/ or download the KMS Cryptographic Details whitepaper to learn more.

AWS KMS and other services that encrypt your data directly use a method called envelope encryption to provide a balance between performance and security. Figure 8 describes envelope encryption:

1. A data key is generated by the AWS service at the time you request that your data be encrypted.
2. The data key is used to encrypt your data.
3. The data key is then encrypted with a key-encrypting key unique to the service storing your data.
4. The encrypted data key and the encrypted data are then stored by the AWS storage service on your behalf.

Figure 8: Envelope encryption
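The same envelope pattern is available to your own applications through the AWS KMS API. The sketch below assumes the AWS SDK for Java and an existing KMS master key; the alias alias/my-app-key is a placeholder. It requests a data key, encrypts locally with JCE, and retains only the encrypted copy of the data key alongside the ciphertext.

    import java.nio.ByteBuffer;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    import com.amazonaws.services.kms.AWSKMS;
    import com.amazonaws.services.kms.AWSKMSClientBuilder;
    import com.amazonaws.services.kms.model.DecryptRequest;
    import com.amazonaws.services.kms.model.GenerateDataKeyRequest;
    import com.amazonaws.services.kms.model.GenerateDataKeyResult;

    public class EnvelopeEncryptionExample {
        public static void main(String[] args) throws Exception {
            AWSKMS kms = AWSKMSClientBuilder.defaultClient();

            // 1. Ask KMS for a data key under your master key (placeholder alias).
            GenerateDataKeyResult dataKey = kms.generateDataKey(new GenerateDataKeyRequest()
                    .withKeyId("alias/my-app-key")
                    .withKeySpec("AES_256"));

            // 2. Encrypt locally with the plaintext data key (AES-GCM via JCE).
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(toBytes(dataKey.getPlaintext()), "AES"),
                    new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal("sensitive record".getBytes("UTF-8"));

            // 3. Store ciphertext + iv + the *encrypted* data key; discard the plaintext key.
            byte[] encryptedDataKey = toBytes(dataKey.getCiphertextBlob());

            // 4. Later: ask KMS to decrypt the data key, then decrypt the data locally.
            ByteBuffer plaintextKey = kms.decrypt(new DecryptRequest()
                    .withCiphertextBlob(ByteBuffer.wrap(encryptedDataKey))).getPlaintext();
            Cipher decrypt = Cipher.getInstance("AES/GCM/NoPadding");
            decrypt.init(Cipher.DECRYPT_MODE, new SecretKeySpec(toBytes(plaintextKey), "AES"),
                    new GCMParameterSpec(128, iv));
            System.out.println(new String(decrypt.doFinal(ciphertext), "UTF-8"));
        }

        private static byte[] toBytes(ByteBuffer buffer) {
            byte[] bytes = new byte[buffer.remaining()];
            buffer.duplicate().get(bytes);
            return bytes;
        }
    }

Only the encrypted data key needs to be stored with the object; KMS can decrypt it on demand, so the plaintext key exists in memory only for as long as the cryptographic operation requires.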
The key-encrypting keys used to encrypt data keys are stored and managed separately from both the data and the data keys. Strict access controls are placed on these encryption keys, designed to prevent unauthorized use by AWS employees. When you need access to your plaintext data, this process is reversed: the encrypted data key is decrypted using the key-encrypting key, and the data key is then used to decrypt your data.

The following AWS services offer a variety of server-side encryption features to choose from.

Amazon S3

There are three ways of encrypting your data in Amazon S3 using server-side encryption:

1. Server-side encryption: You can set an API flag or select a check box in the AWS Management Console to have data encrypted before it is written to disk in Amazon S3. Each object is encrypted with a unique data key. As an additional safeguard, this key is encrypted with a periodically rotated master key managed by Amazon S3. Amazon S3 server-side encryption uses 256-bit Advanced Encryption Standard (AES) keys for both object and master keys. This feature is offered at no additional cost beyond what you pay for using Amazon S3.

2. Server-side encryption using customer-provided keys: You can use your own encryption key while uploading an object to Amazon S3. Amazon S3 uses this encryption key to encrypt your data using AES-256. After the object is encrypted, the encryption key you supplied is deleted from the Amazon S3 system that used it to protect your data. When you retrieve this object from Amazon S3, you must provide the same encryption key in your request. Amazon S3 verifies that the encryption key matches, decrypts the object, and returns the object to you. This feature is offered at no additional cost beyond what you pay for using Amazon S3.

3. Server-side encryption using AWS KMS: You can encrypt your data in Amazon S3 by defining an AWS KMS master key within your account that you want to use to encrypt the unique object key (referred to as a data key in Figure 8) that will ultimately encrypt your object. When you upload your object, a request is sent to AWS KMS to create an object key. AWS KMS generates this object key and encrypts it using the master key that you specified earlier; it then returns this encrypted object key along with the plaintext object key to Amazon S3. The Amazon S3 web server encrypts your object using the plaintext object key, stores the now-encrypted object (with the encrypted object key), and deletes the plaintext object key from memory. To retrieve this encrypted object, Amazon S3 sends the encrypted object key to AWS KMS, which decrypts the object key using the correct master key and returns the decrypted (plaintext) object key to Amazon S3. With the plaintext object key, Amazon S3 decrypts the encrypted object and returns it to you. For pricing of this option, refer to the AWS Key Management Service pricing page.

Amazon EBS

When creating a volume in Amazon EBS, you can choose to encrypt it using an AWS KMS master key within your account; that master key encrypts the unique volume key that will ultimately encrypt your EBS volume. After you make your selection, the Amazon EC2 server sends an authenticated request to AWS KMS to create a volume key. AWS KMS generates this volume key, encrypts it using the master key, and returns the plaintext volume key and the encrypted volume key to the Amazon EC2 server. The plaintext volume key is stored in memory to encrypt and decrypt all data going to and from your attached EBS volume. When the encrypted volume (or any encrypted snapshots
derived from that volume) needs to be reattached to an instance a call is made to AWS KMS to decrypt the encrypted volume key AWS KMS decrypts this encrypted volume key with the correct master key and returns the decrypted volume key to Amazon EC2 Amazon Glacier Before it’s written to disk d ata are always automatically encrypted using 256 bit AES keys unique to the Amazon Glacier service that are stored in separate systems under AWS control This feature is offered at no additional cost beyond what you pay for using Amazon Glacier AWS Storage Gateway The AWS Storage Gateway transfers your data to AWS over SSL and stores data encrypted at rest in Amazon S3 or Amazon Glacier using their respective server side encryption schemes Amazon EMR S3DistCp is an Amazon EMR feature that moves large amounts of data from Amazon S3 into HDFS from HDFS to Amazon S3 and between Amazon S3 buckets S3DistCp supports the ability to request Amazon S3 to use server side encryp tion when it writes EMR data to an Amazon S3 bucket you manage This feature is offered at no additional cost beyond what you pay for using Amazon S3 to store your Amazon EMR data ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 16 of 20 Oracle on Amazon RDS You can choose to license the Oracle Advanced Security option for Oracle on Amazon RDS to leverage the native Transparent Data Encryption (TDE) and Native Network Encryption (NNE) features The Oracle encryption module creates data and key encrypting keys to encrypt the database The key encrypting keys specific to your Oracle instance on Amazon RDS are themselves encrypted by a periodically rotated 256 bit AES master key This master key is unique to the Amazon RDS service and is stored in separate systems under AWS control Microsoft SQL Server on Amazo n RDS You can choose to provision Transparent Data Encryption (TDE) for Microsoft SQL Server on Amazon RDS The SQL Server encryption module creates data and key encrypting keys to encrypt the database The key encrypting keys specific to your SQL Server i nstance on Amazon RDS are themselves encrypted by a periodically rotated regional 256 bit AES master key This master key is unique to the Amazon RDS service and is stored in separate systems under AWS control This feature is offered at no additional cos t beyond what you pay for using Microsoft SQL Server on Amazon RDS Amazon Redshift When creating an Amazon Redshift cluster you can optionally choose to encrypt all data in user created tables There are three options to choose from for server side encry ption of an Amazon Redshift cluster 1 In the first option data blocks (included backups) are encrypted using random 256 bit AES keys These keys are themselves encrypted using a random 256 bit AES database key This database key is encrypted by a 256 bit AES cluster master key that is unique to your cluster The cluster master key is encrypted with a periodically rotated regional master key unique to the Amazon Redshift service that is stored in separate systems under AWS control This feature is offered at no additional cost beyond what you pay for using Amazon Redshift 2 With the second option the 256 bit AES cluster master key used to encrypt your database keys is generated in your AWS CloudHSM or by using a SafeNet Luna HSM appliance on premises This cluster master key is then encrypted by a master key that never leaves your HSM When the Amazon Redshift cluster starts up the cluster master key is decrypted in your HSM and used to decrypt the database key which 
is sent to the Amazon Redshift hosts to reside only in memory for the life of the cluster If the cluster ever restarts the cluster master key is again retrieved from your HSM —it is never stored on disk in plaintext This option lets you more tightly control the hierarchy and lifecycle of the keys used to encrypt your data This feature is offered at no additional cost beyond what you pay for using Amazon Redshift (and AWS CloudHSM if you choose that option for storing keys) ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 17 of 20 3 In the third option the 256 bit AES cluster master key used to encrypt your database keys is generated in AWS KMS This cluster master key is then encrypted by a master key within AWS KMS When the Amazon Redshift cluster starts up the cluster master key is decrypted in AWS KMS and used to decrypt the database key which is sent to the Amazon Redshift hosts to reside only in memory for the life of the cluster If the cluster ever restarts the cluster master key is again retrieved from the hardened security appliance in AWS KMS— it is never stored on disk in plaintext This option lets you define fine grained controls over the access and usage of your master keys and audit these controls through AWS CloudTrail For pricing of this option please refer to the AWS Key Manageme nt Service pricing page In addition to encrypting data generated within your Amazon Redshift cluster you can also load encrypted data into Amazon Redshift from Amazon S3 that was previously encrypted using the Amazon S3 Encryption Client and keys you provide Amazon Redshift supports the decryption and re encryption of data going between Amazon S3 and Amazon Redshift to protect the full lifecycle of your data These server side encryption features across multiple services in AWS enable you to easily encr ypt your data simply by making a configuration setting in the AWS Management Console or by making a CLI or API request for the given AWS service The authorized use of encryption keys is automatically and securely managed by AWS Because unauthorized ac cess to those keys could lead to the disclosure of your data we have built systems and processes with strong access controls that minimize the chance of unauthorized access and had these systems verified by third party audits to achieve security certifications including SOC 1 2 and 3 PCI DSS and FedRAMP Conclusion We have presented three different models for how encryption keys are managed and where they are used If you take all responsibility for the encryption method and the KMI you can have granu lar control over how your applications encrypt data However that granular control comes at a cost —both in terms of deployment effort and an inability to have AWS services tightly integrate with your applications’ encryption methods As an alternative yo u can choose a managed service that enables easier deployment and tighter integration with AWS cloud services This option offers check box encryption for several services that store your data control over your own keys secured storage for your keys and auditability on all data access attempts Table 1 summarizes the available options for encrypting data at rest across AWS We recommend that you determine which encryption and key management model is most appropriate for your data classifications in the context of the AWS service you are using ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 18 of 20 Encryption Method and KMI Model A Model B Model C AWS 
Service Client Side Solutions Using Customer Managed Keys Client Side Partner Solutions with KMI for Customer Managed Keys Client Side Solutions for Customer Managed Keys in AWS CloudHSM Server Side Encryption Using AWS Managed Keys Amazon S3 Bouncy Castle OpenSSL Amazon S3 encryption client in the AWS SDK for Java SafeNet ProtectApp for Java Custom Amazon VPCEC2 application integrated with AWS CloudHSM client Amazon S3 server side encryption server side encryption with customer provided keys or server side encryption with AWS Key Management Service Amazon Glacier N/A N/A Custom Amazon VPCEC2 application integrated with AWS CloudHSM client All data is automatically encrypted using server side encryption AWS Storage Gateway Linux Block Level: Loop AES dm crypt (with or without LUKS) and TrueCrypt Linux File System: eCryptfs and EncFs Windows Block Level: TrueCrypt Windows File System: BitLocker Trend Micro SecureCloud SafeNet StorageSecure N/A Amazon S3 server side encryption Amazon EBS Linux Block Level: Loop AES dm crypt+LUKS and TrueCrypt Linux File System: eCryptfs and EncFs Windows Block Level: TrueCrypt Windows File Syste m: BitLocker EFS Trend Micro SecureCloud SafeNet ProtectV Custom Amazon VPCEC2 application integrated with AWS CloudHSM client Amazon EBS Encryption with AWS Key Management Service Oracle on Amazon RDS Bouncy Castle OpenSSL CipherCloud Database Gateway and Voltage SecureData Custom Amazon VPCEC2 application integrated with AWS CloudHSM client Transparent Data Encryption (TDE) and Native Network Encryption (NNE) with optional Oracle Advanced Security license TDE for Microsoft SQL Serve r Microsoft SQL Server on Amazon RDS Bouncy Castle OpenSSL CipherCloud Database Gateway and Voltage SecureData Custom Amazon VPCEC2 application integrated with AWS CloudHSM client N/A Amazon Redshift N/A N/A Encrypted Amazon Redshift clusters with your master key managed in AWS CloudHSM or on premises Safenet Luna HSM Encrypted Amazon Redshift clusters with AWS managed master key Amazon EMR eCryptfs Custom Amazon VPCEC2 application integrated with AWS CloudHSM client S3DistCp using Amazon S3 server side encryption to protect persistently stored data ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 19 of 20 Table 1: Summary of data at rest encryption options References and Further Reading • Bouncy Castle Java crypto library http://wwwbouncycastleorg/ • OpenSSL crypto library http://wwwopensslorg/ • CloudBerry Explorer PRO for Amazon S3 encryption http://wwwcloudberrylabcom/amazon s3explorer procloudfront IAMaspx • Client Side Data Encryption with the AWS SDK for Java and Amazon S3 http://awsamazoncom/articles/2850096021478074 • SafeNet encryption products for Amazon S3 Amazon EBS and AWS CloudHSM http://wwwsafenet inccom/ • Trend Micro SecureCloud http://wwwtrendmicrocom/us/enterprise/cloud solutions/secure cloud/indexhtml • CipherCloud for AWS and CipherCloud for Any App http://wwwciphercloudcom/ • Voltage Security SecureData Enterprise http://wwwvoltagecom/products/securedata enterprise/ • AWS CloudHSM https://awsamazoncom/cloudhsm/ • AWS Key Management Service https://awsamazoncom/kms/ • Key Management Service Cryptographic Details White Paper https://d0awsstaticcom/whitepapers/KMS Cryptographic Detailspdf • Amazon EMR S3DistCp to encrypt data in Amazon S3 http://docsawsamazoncom/ElasticMapReduce/latest/DeveloperGuide/UsingEM R_s3distcphtml • Transparent Data Encryption for Oracle on Amazon RDS 
http://docsawsamazoncom/AmazonRDS/latest/UserGuide/AppendixOracleOp tionshtml#AppendixOracleOptionsAdvSecurity ArchivedAmazon Web Services – Encrypting Data at Rest in AWS November 2014 Page 20 of 20 • Transparent Data Encryption for Microsoft SQL Server on Amazon RDS http://docsawsamazoncom/AmazonRDS/latest/UserGuide/CHAP_SQLServerh tml#SQLServerConceptsGeneralOptions • Amazon Redshift encryption http://awsamazoncom/redshift/faqs/#0210 • AWS Security Bl og http://blogsawsamazoncom/security Document Revisions November 2013: First Version November 2014: • Introduced section on AWS Key Management Service (KMS) and Amazon EBS in Model C • Updated sections in Model C for Amazon S3 Amazon Redshift
ArchivedEncrypting File Data with Amazon Elastic File System Encryption of Data at Rest and in Transit April 2018 This paper has been archived For the most recent version of this paper see https://docsawsamazoncom/whitepapers/latest/ efsencryptedfilesystems/efsencryptedfile systemshtmlArchived© 2018 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AW S agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedContents Introduction 1 Basic Concepts and Terminology 1 Encryption of Data at Rest 3 Managing Keys 3 Creating an Encrypted File System 4 Using an Encrypted File System 7 Enforcing Encryption of Data at Rest 7 Detecting Unencrypted File Systems 7 Encryption of Data in Transit 10 Setting up Encryption of Data in Transit 10 Using Encryption of Data in Transit 12 Conclusion 13 Contributors 13 Further Reading 13 Document Revisions 13 ArchivedAbstract In today’s world of cybercrime hacking attacks and the occasional security breach securing data has become increasingly important to organizations Government regulations and industry or company compliance policies may require data of different classifications to be secured by using proven encryption policies cryptographic algorithms and proper key management This paper outlines best practices for encrypting shared file systems on AWS using Amazon Elastic File System ( Amazon EFS) ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 1 Introduction Amazon Elastic File System ( Amazon EFS)1 provides simple scalable highly available and highly durable shared file system s in the cloud The file systems you create using Amazon EFS are elastic allowing them to grow and shrink automatically as you add and remove data They can grow to petabytes in size distributing data across an unconstrained number of storage servers in multiple Availability Zones Data stored in these file systems can be encrypted at rest and in transit using Amazon EFS For encryption of data at re st you can create encrypted file systems through the AWS Management Console or the AWS Command Line Interface ( AWS CLI ) Or you can create encrypted file systems programmatically through the Amazon EFS API or one of the AWS SDK s Amazon EFS integrates with AWS Key Management Service ( AWS KMS)2 for key management You can also enable encryption of data in transit by mounting the file system and transferring all NFS traffic over an encrypted Transport Layer Security (TLS) tunnel This paper outlines best practices for encrypting shared file systems on AWS using Amazon EFS It describes how to enable encryption of data in transit at the client connection layer and how to create an encrypted file system in the AWS Management Console and in the AWS CLI Using the APIs and SDKs to create an encrypted file system is outside 
the scope of this paper but you can learn more about how this is done by readin g Amazon EFS API in the Amazon EFS User Guide3 or the SDK documentation4 Basic Concepts and Terminology This section defines concepts and terminology referenced in this whitepaper • Amazon Elastic File System (Amazon EFS ) – A highly available and highly durable service that provides simple scalable shared file storage in the AWS C loud Amazon EFS provides a standard file system interface and file system semantics You can store virtually an unlimited amount of data across an unconstrained number of storage servers in multiple Availability Zones • AWS Identity and Access Management (IAM) 5 – A service that enables you to securely co ntrol fine grained access to AWS service APIs Policies are created and used to limit access to individual users groups and roles You can manage your AWS KMS keys t hrough the IAM console ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 2 • AWS KMS – A managed service that makes it easy for you to create and manage the encryption keys used to encrypt your data It is fully integrated with AWS CloudTrail to provide logs of API calls made by AWS KMS on your behalf to help meet compliance or regulatory requirements • Customer master key (CMK) – Represents the top of your key hierarchy It contains key material to encrypt and decrypt data AWS KMS can generate this key material or you can generate it and then import it into AWS KMS CMKs are specific to an AWS account and AWS Region and can be customer managed or AWS managed o AWS managed CMK – A CMK that is generated by AWS on your behalf An AWS managed CMK is created when you enable encryption for a resource of an integrated AWS service AWS managed CMK key policies are managed by AWS and you cannot change th em There is no charge for the creation or storage of AWS managed CMKs o Customer managed CMK – A CMK you create by using the AWS Management Console or API AWS CLI or SDKs You can use a customer managed CMK when you need more granular control over the CM K • KMS permissions – Permissions that control a ccess to a customer managed CMK These permissions are defined using the key policy or a combination of IAM policies and the key policy For more information see Overview of Managing Access in the AWS KMS Developer Guide6 • Data keys – Cryptographic keys generated by AWS KMS to encrypt data outside of AWS KMS AWS KMS allows authorized entities to obtain data keys protected by a CMK • Transport Layer Security ( TLS formerly called Secure Sockets Layer [SSL]) – Cryptographic protocols essential for encrypting information that is exchanged over the wire • EFS mount helper – A Linux client agent (amazon efsutils) used to simplify the mounting of EFS file systems It can be used to setup maintain and route all NFS traffic over a TLS tunnel ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 3 For more information about basic concepts and terminology see AWS Key Management Service Concepts in the AWS KMS Developer Guide7 Encryption of Data at Rest You can create an encrypted file system so all your data and metadata is encrypted at rest usi ng an industry standard AES 256 encryption algorithm Encryption and decryption is handled automatically and transparently so you don’t have to modify your applications If your organization is subject to corporate or regulatory policies that require encryption of data and metadata at rest we recommend creating an encrypted file system 
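Although a full SDK walkthrough is outside the scope of this paper, the CreateFileSystem call itself is small. The following minimal sketch assumes the AWS SDK for Java; the KMS key alias is a placeholder, and omitting the KMS key ID falls back to the AWS managed CMK for Amazon EFS (aws/elasticfilesystem).

    import java.util.UUID;

    import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystem;
    import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystemClientBuilder;
    import com.amazonaws.services.elasticfilesystem.model.CreateFileSystemRequest;
    import com.amazonaws.services.elasticfilesystem.model.CreateFileSystemResult;

    public class CreateEncryptedFileSystemExample {
        public static void main(String[] args) {
            AmazonElasticFileSystem efs = AmazonElasticFileSystemClientBuilder.defaultClient();

            CreateFileSystemResult fileSystem = efs.createFileSystem(new CreateFileSystemRequest()
                    .withCreationToken(UUID.randomUUID().toString())  // idempotency token
                    .withPerformanceMode("generalPurpose")
                    .withEncrypted(true)                              // can only be set at creation time
                    .withKmsKeyId("alias/my-efs-cmk"));               // placeholder customer managed CMK

            System.out.println("Created " + fileSystem.getFileSystemId()
                    + " (encrypted=" + fileSystem.getEncrypted() + ")");
        }
    }

As with the console and CLI, the encryption setting is immutable after creation, which is why the detective controls described later in this paper focus on catching unencrypted file systems at creation time.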
Managing Keys Amazon EFS is integrated with AWS KMS which manages the encryption keys for encrypted file systems AWS KMS also supports encryption by other AWS services such as Amazon Simple Storage Service ( Amazon S3 ) Amazon Elastic Block Store ( Amazon EBS ) Amazon Relational Database Service ( Amazon RDS ) Amazon Aurora Amazon Redshift Amazon WorkMail Amazon WorkSpaces etc To encrypt file system contents Amazon EFS uses the Advanced Encryption Standard algorithm with XTS Mode and a 256 bit key (XTS AES 256) There are three important questions to answer when considering how to secu re data at rest by adopting any encryption policy These questions are equally valid for data stored in managed and unmanaged services Where are keys stored? AWS KMS stores your master keys in highly durable storage in an encrypted format to help ensure that they can be retrieved when needed Where are keys used? Using an encrypted Amazon EFS file system is transparent to clients mounting the file system All cryptographic operations occur within the EFS service as data is encrypted before it is written to disk and decrypted after a client issues a read request ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 4 Who can use the keys? AWS KMS key policies control access to encryption keys You can combine them with IAM policies to provide another layer of control Each key has a key policy If the key is a n AWS managed CMK AWS manages the key policy If the key is a customer managed CMK you manage the key policy These key policies are the primary way to control access to CMKs They define the permissions that govern the use and management of key s When you create an encr ypted file system you grant the EFS service access to use the CMK on your behalf The calls that Amazon EFS makes to AWS KMS on your behalf appear in your CloudTrail logs as though they originated from your AWS account For more information about AWS KMS and how to manage access to encryption keys see Overview of Managing Access to Your AWS KMS Resources in the AWS KMS Developer Guide8 For more information about how AWS KMS manages cryptography see the AWS KMS Cryptographic Details whitepaper 9 For more information about how to create an administrator IAM user and group see Creating Your First IAM Admin User and Group in the IAM User Guide 10 Creating an Encrypted File S ystem You can create an encrypted file system using the AWS Management Console AWS CLI Amazon EFS API or AWS SDKs You can only enable encryption for a file system when you create it Amazon EFS integrates with AWS KMS for key management and uses a CMK to encrypt the file system File system metadata such as file names directory names and directory contents are encrypted and decrypted using an EFS managed key The contents of your files or file data is encrypted and decrypted using a CMK that you choose The CMK can be one of thre e types : • An AWS managed CMK for Amazon EFS • A customer managed CMK from your AWS account • A customer managed CMK from a different AWS account ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 5 All users have an AWS mana ged CMK for Amazon EFS whose alias is aws/elasticfilesystem AWS manages this CMK ’s key policy and you cannot change it There is no cost for creating and storing AWS managed CMKs If you decide to use a customer managed CMK to encrypt your file system select the key alias of the customer managed CMK that you own or enter the Amazon Resource Name ( ARN ) of a customer 
managed CMK that is owned by a different account. With a customer managed CMK that you own, you control which users and services can use the key through key policies and key grants. You also control the life span and rotation of these keys by choosing when to disable, re-enable, delete, or revoke access to them. AWS KMS charges a fee for creating and storing customer managed CMKs. For information about managing access to keys in other AWS accounts, see Allowing External AWS Accounts to Access a CMK in the AWS KMS Developer Guide [11]. For more information about how to manage customer managed CMKs, see AWS Key Management Service Concepts in the AWS KMS Developer Guide [12].

The following sections discuss how to create an encrypted file system using the AWS Management Console and using the AWS CLI.

Creating an Encrypted File System Using the AWS Management Console

To create an encrypted Amazon EFS file system using the AWS Management Console, follow these steps:

1. On the Amazon EFS console, select Create file system to open the file system creation wizard.
2. For Step 1: Configure file system access, choose your VPC, create your mount targets, and then choose Next Step.
3. For Step 2: Configure optional settings, add any tags, choose your performance mode, select the box to enable encryption for your file system, select a KMS master key, and then choose Next Step.

Figure 1: Enabling encryption through the AWS Management Console

4. For Step 3: Review and create, review your settings and choose Create File System.

Creating an Encrypted File System Using the AWS CLI

When you use the AWS CLI to create an encrypted file system, you use additional parameters to set the encryption status and the customer managed CMK. Be sure you are using the latest version of the AWS CLI. For information about how to upgrade your AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide [13].

In the CreateFileSystem operation, the --encrypted parameter is a Boolean and is required for creating encrypted file systems. The --kms-key-id parameter is required only when you use a customer managed CMK, and you include the key's alias or ARN. Do not include this parameter if you're using the AWS managed CMK.

    $ aws efs create-file-system \
    --creation-token $(uuidgen) \
    --performance-mode generalPurpose \
    --encrypted \
    --kms-key-id alias/customer-managed-cmk-alias

For more information about creating Amazon EFS file systems using the AWS Management Console, AWS CLI, AWS SDKs, or Amazon EFS API, see the Amazon EFS User Guide [14].

Using an Encrypted File System

Encryption has minimal effect on I/O latency and throughput. Encryption and decryption are transparent to users, applications, and services. All data and metadata is encrypted by Amazon EFS on your behalf before it is written to disk and is decrypted before it is read by clients. You don't need to change client tools, applications, or services to access an encrypted file system.
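Because encryption can only be enabled at creation time, it is often useful to verify the setting after the fact. The sketch below, assuming the AWS SDK for Java, lists file systems and flags any that are not encrypted at rest; it complements the event-driven detection described next. Accounts with many file systems would also need to follow the response's pagination marker, which is omitted here for brevity.

    import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystem;
    import com.amazonaws.services.elasticfilesystem.AmazonElasticFileSystemClientBuilder;
    import com.amazonaws.services.elasticfilesystem.model.DescribeFileSystemsRequest;
    import com.amazonaws.services.elasticfilesystem.model.FileSystemDescription;

    public class EfsEncryptionAudit {
        public static void main(String[] args) {
            AmazonElasticFileSystem efs = AmazonElasticFileSystemClientBuilder.defaultClient();

            // Inventory check: report the encryption status of each file system in this Region.
            for (FileSystemDescription fs :
                    efs.describeFileSystems(new DescribeFileSystemsRequest()).getFileSystems()) {
                if (Boolean.TRUE.equals(fs.getEncrypted())) {
                    System.out.println(fs.getFileSystemId() + " is encrypted with key " + fs.getKmsKeyId());
                } else {
                    System.out.println("WARNING: " + fs.getFileSystemId() + " is not encrypted at rest");
                }
            }
        }
    }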
Enforcing Encryption of Data at Rest

Your organization might require the encryption of all data that meets a specific classification or is associated with a particular application, workload, or environment. You can enforce data encryption policies for Amazon EFS file systems by using detective controls that detect the creation of a file system and verify that encryption is enabled. If an unencrypted file system is detected, you can respond in a number of ways, ranging from deleting the file system and mount targets to notifying an administrator. Be aware that if you want to delete the unencrypted file system but want to retain the data, you should first create a new encrypted file system, copy the data over to the new encrypted file system, and only then delete the unencrypted file system.

Detecting Unencrypted File Systems

You can create an Amazon CloudWatch alarm to monitor CloudTrail logs for the CreateFileSystem event and trigger an alarm to notify an administrator if the file system that was created was unencrypted.

Creating a Metric Filter

To create a CloudWatch alarm that is triggered when an unencrypted Amazon EFS file system is created, follow this procedure. You must have an existing trail created that is sending CloudTrail logs to a CloudWatch Logs log group. For more information, see Sending Events to CloudWatch Logs in the AWS CloudTrail User Guide [15].

1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Logs.
3. In the list of log groups, choose the log group that you created for CloudTrail log events.
4. Choose Create Metric Filter.
5. On the Define Logs Metric Filter page, choose Filter Pattern, and then type the following:

    { ($.eventName = CreateFileSystem) && ($.responseElements.encrypted IS FALSE) }

6. Choose Assign Metric.
7. For Filter Name, type UnencryptedFileSystemCreated.
8. For Metric Namespace, type CloudTrailMetrics.
9. For Metric Name, type UnencryptedFileSystemCreatedEventCount.
10. Choose Show advanced metric settings.
11. For Metric Value, type 1.
12. Choose Create Filter.

Creating an Alarm

After you create the metric filter, follow this procedure to create an alarm.

1. On the Filters for Log_Group_Name page, next to the UnencryptedFileSystemCreated filter name, choose Create Alarm.
2. On the Create Alarm page, set the parameters shown in Figure 2.

Figure 2: Create a CloudWatch alarm

3. Choose Create Alarm.

Testing the Alarm for Unencrypted File System Created

You can test the alarm by creating an unencrypted file system as follows:

1. Open the Amazon EFS console at https://console.aws.amazon.com/efs.
2. Choose Create File System.
3. From the VPC list, choose your default VPC.
4. Select the check boxes for all the Availability Zones. Be sure that they all have the default subnets, automatic IP addresses, and the default security groups chosen. These are your mount targets.
5. Choose Next Step.
6. Name your file system, and keep Enable encryption unchecked to create an unencrypted file system.
7. Choose Next Step.
8. Choose Create File System.

Your trail logs the CreateFileSystem operation and delivers the event to your CloudWatch Logs log group. The event triggers your metric alarm, and CloudWatch Logs sends you a notification about the change.
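The same metric filter and alarm can be created programmatically, which makes this detective control easier to roll out across accounts and Regions. A sketch with the AWS SDK for Java follows; the CloudTrail log group name and the SNS topic ARN used for notification are placeholders for your own values.

    import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
    import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
    import com.amazonaws.services.cloudwatch.model.ComparisonOperator;
    import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;
    import com.amazonaws.services.cloudwatch.model.Statistic;
    import com.amazonaws.services.logs.AWSLogs;
    import com.amazonaws.services.logs.AWSLogsClientBuilder;
    import com.amazonaws.services.logs.model.MetricTransformation;
    import com.amazonaws.services.logs.model.PutMetricFilterRequest;

    public class UnencryptedEfsDetection {
        public static void main(String[] args) {
            // Metric filter on the CloudWatch Logs log group that receives your CloudTrail events.
            AWSLogs logs = AWSLogsClientBuilder.defaultClient();
            logs.putMetricFilter(new PutMetricFilterRequest()
                    .withLogGroupName("CloudTrail/DefaultLogGroup")    // placeholder log group
                    .withFilterName("UnencryptedFileSystemCreated")
                    .withFilterPattern("{ ($.eventName = CreateFileSystem) && ($.responseElements.encrypted IS FALSE) }")
                    .withMetricTransformations(new MetricTransformation()
                            .withMetricNamespace("CloudTrailMetrics")
                            .withMetricName("UnencryptedFileSystemCreatedEventCount")
                            .withMetricValue("1")));

            // Alarm that notifies an administrator whenever the metric is non-zero.
            AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();
            cloudWatch.putMetricAlarm(new PutMetricAlarmRequest()
                    .withAlarmName("UnencryptedFileSystemCreated")
                    .withNamespace("CloudTrailMetrics")
                    .withMetricName("UnencryptedFileSystemCreatedEventCount")
                    .withStatistic(Statistic.Sum)
                    .withPeriod(300)
                    .withEvaluationPeriods(1)
                    .withThreshold(1.0)
                    .withComparisonOperator(ComparisonOperator.GreaterThanOrEqualToThreshold)
                    .withAlarmActions("arn:aws:sns:us-east-1:111122223333:efs-alerts")); // placeholder topic
        }
    }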
Encryption of Data in Transit

You can mount a file system so that all NFS traffic is encrypted in transit using Transport Layer Security 1.2 (TLS, formerly called Secure Sockets Layer [SSL]) with an industry-standard AES-256 cipher. TLS is a set of industry-standard cryptographic protocols used for encrypting information that is exchanged over the wire. AES-256 is a 256-bit encryption cipher used for data transmission in TLS. If your organization is subject to corporate or regulatory policies that require encryption of data and metadata in transit, we recommend setting up encryption in transit on every client accessing the file system.

Setting up Encryption of Data in Transit

The recommended method to set up encryption of data in transit is to download the EFS mount helper on each client. The EFS mount helper is an open source utility that AWS provides to simplify using EFS, including setting up encryption of data in transit. The mount helper uses the EFS recommended mount options by default.

1. Install the EFS mount helper.
• Amazon Linux: sudo yum install -y amazon-efs-utils
• Other Linux distributions: download from GitHub (https://github.com/aws/efs-utils) and install.
• Supported Linux distributions:
  o Amazon Linux 2017.09+
  o Amazon Linux 2+
  o Debian 9+
  o Red Hat Enterprise Linux / CentOS 7+
  o Ubuntu 16.04+
• The amazon-efs-utils package automatically installs the following dependencies:
  o NFS client (nfs-utils)
  o Network relay (stunnel)
  o Python

2. Mount the file system:

    sudo mount -t efs -o tls file-system-id efs-mount-point

• mount -t efs invokes the EFS mount helper.
• Using the DNS name of the file system or the IP address of a mount target is not supported when mounting with the EFS mount helper; use the file system ID instead.
• The EFS mount helper uses the AWS recommended mount options by default. Overriding these default mount options is not recommended, but we provide the flexibility to do so when the occasion arises. We recommend thoroughly testing any mount option overrides so you understand how these changes impact file system access and performance.
• The default mount options used by the EFS mount helper are:
  o nfsvers=4.1
  o rsize=1048576
  o wsize=1048576
  o hard
  o timeo=600
  o retrans=2

3. Use the fstab file to automatically remount your file system after any system restart.
• Add the following line to /etc/fstab:

    file-system-id efs-mount-point efs _netdev,tls 0 0

Using Encryption of Data in Transit

If your organization is subject to corporate or regulatory policies that require encryption of data in transit, we recommend using encryption of data in transit on every client accessing the file system. Encryption and decryption are configured at the connection level and add another layer of security. Mounting the file system using the EFS mount helper sets up and maintains a TLS 1.2 tunnel between the client and the Amazon EFS service, and routes all NFS traffic over this encrypted tunnel. The certificate used to establish the encrypted TLS connection is signed by the Amazon certificate authority (CA) and trusted by most modern Linux distributions. The EFS mount helper also spawns a watchdog process that monitors all secure tunnels to each file system and ensures they are running.

After using the EFS mount helper to establish encrypted connections to Amazon EFS, no other user input or configuration is required. Encryption is transparent to user connections and applications accessing the file system. After successfully mounting and establishing an encrypted connection to an EFS file system using the EFS mount helper, the output of a mount command shows that the file system is mounted and that an encrypted tunnel has been established using the localhost (127.0.0.1) as the network relay. See the sample output below.

127001:/ on efs mountpoint type nfs4 (rwrelatimevers=41rsize=1048576wsize=1048576namlen=255har
dproto=tcpport=20059timeo=600retrans=2sec=sysclientaddr=12 7001local_lock=noneaddr=127001) To map an efsmount point to an EFS file system query the mountlog file in /var/log/amazon/efs and find the last successful mount operation This can be done using a simple grep command like the one below grep E "Successfully mounted* efsmountpoint" /var/log/amazon/efs/mountlog | tail 1 The output of this grep command will return the DNS name of the mounted EFS file system See sample output below ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 13 20180315 07:03:42363 INFO Successfully mounted filesystemidefsregionamazonawscom at efs mountpoint Conclusion Amazon EFS file system data can be encrypted at rest and in transit You can encrypt data at rest by using CMKs that you can control and manage using AWS KMS Creating an encrypted file system is as simple as selecting a check box in the Amazon EFS file system cr eation wizard in the AWS Management Console or adding a single parameter to the CreateFileSystem operation in the AWS CLI AWS SDKs or Amazon EFS API Using an encrypted file system is also transparent to services applications and users with minimal e ffect on the file system’s performance You can encrypt data in transit by using the EFS mount helper to establish an encrypted TLS tunnel on each client encrypting all NFS traffic between the client and the mounted EFS file system Encryption of both data at rest and in transit is available to you at no additional cost Contributors The following individuals and organizations contributed to this document: • Darryl S Osborne storage specialist solutions architect AWS • Joseph Travaglini sr product manager Amazon EFS Further Reading For additional information see the following : • AWS KMS Cryptographic Details Whitepaper16 • Amazon EFS User Guide17 Document Revisions Date Description April 2018 Added encryption of data in transit ArchivedAmazon Web Services – Encrypting File Data with Amazon Elastic File System Page 14 Date Description September 2017 First publication 1 https://awsamazoncom/efs/ 2 https://awsamazoncom/kms/ 3 https://docsawsamazoncom/efs/latest/ug/API_CreateFileSystemhtml 4 https://awsamazoncom/tools/ sdk 5 https://awsamazoncom/iam/ 6 https://docsawsamazoncom/kms/latest/developerguide/control access overviewhtml 7 https://docsawsamazoncom/kms/latest/developerguide/conceptshtml 8 https://docsawsamazoncom/kms/latest/developerguide/control access overviewhtml managing access 9 https://d0awsstaticcom/whitepapers/KMS Cryptographic Detailspdf 10 https://docsawsamazoncom/IAM/la test/UserGuide/getting started_create admin grouphtml 11 https://docsawsamazoncom/kms/latest/developerguide/key policy modifyinghtml keypolicy modifying external accounts 12 https://docsawsamazoncom/kms/latest/developerguide/conceptshtml master_keys 13 https://docsawsamazoncom/cli/latest/userguide/installinghtml 14 https://docsawsamazoncom/efs/latest/ug/whatisefshtml 15 https://docsawsamazoncom/awscloudtrail/latest/userguide/send cloudtrail events tocloudwatch logshtml 16 https://awsamazoncom/whitepapers/ 17 https://docsawsamazoncom/efs/latest/ug/whatisefshtml Notes
ArchivedEstablishing Enterprise Architecture on AWS March 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2020 Amazon Web Services Inc or its affiliates All rights reserved Archived Contents Abstract 4 Introduction 1 Enterprise Architecture Tenets 2 Enterprise Architecture Domains 4 AWS Services that Support Enterprise Architecture Activities 6 Roles and Actors 7 Application Portfolio 8 Governance and Auditability 9 Change Management 10 Enterprise Architecture Repository 10 Conclusion 11 Contributors 12 Document Revisions 12 Archived Abstract This whitepaper outlines AWS practices and services that support enterprise architecture (EA) activities It is written for IT leaders and enterprise architects in large organizations Enterprise architecture guide s organizations in the delivery of the target production landscape to realize their business vision in the cloud There are many established enterprise architectu re frameworks and methodologies In this whitepape r we will focus on the AWS services and practices that you can use to deliver common enterprise architecture artifacts and tools and provide business benefit to your organization This whitepaper uses terms and definitions that are familiar to The Open Group Architecture Framework (TOGAF ) practitioners but it is not restricted to TOGAF or any other EA framework 1 ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 1 Introduction A key challenge facing many organizations is demonstrat ing the business value of their IT assets Enterprise arc hitecture aims to define the target IT landscape that realizes the business vision and drives value The k ey goals of enterprise architecture are to: • Analyze and evolve the organization’s business vision and strategy • Describe the business vision and strategy in a common ma nner (for example business capabilities functions and processes) • Provide tools frameworks and specifications to support governance in all the architectural practices • Enable trace ability across the IT landscape • Define the programs and architectures nee ded to realize the target IT state A key value proposition of a mature enterprise architecture practi ce is being able to do better “W hat if?” analysis or impact analysis B eing able to identify what application s realize what business capabilities lets you make informed decisions on delivering your organization’s business vision For example : • “What is the impact on our IT landscape if we decide to outsource a certain business service ?” • “What business capabilities and processes are impacted if we retire a certain IT system ?” • “What is the cost of realizing this aspect of our bus iness vision ?” This whitepaper will help you create endtoend traceability 
of IT a ssets which is one of the main goals of enterprise architecture teams Traceability audit and capture of “current state” is a perpetual challenge in a world of vendor specific hardware and legacy systems Often it is simply not possible for enterprises to catalog all of their assets In this scenario they cannot determine the business value of their IT landscape Moving to the cloud ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 2 gives enterprises an opportunity to achieve traceability of their assets in the cloud Enterprise Architecture Tenets Enterprise architecture tenets are general rules and guidelines that inform and support the way in which an organization sets about fulfilling its mission They are intended to be enduring and seldom amended You should use tenets to guide your architecture design and cloud adoption Tenets can be used through the entire lifecycle of an application in your IT landscape —from conception to delivery —and to support ongoing maintenance and continuous releases Tenets are used in application design and should guide application governance and architectural reviews We highly recommend creatin g cloud based tenets to guide you in creat ing applications and workloads that will help you realize and govern your enterprise’s target landscape and business vision Examples of tenets might be: “Maximize Cost Benefit for the Enterprise” A cost centric tenet encourage s architects application teams IT stakeholders and business owners to always consider the cost effectiveness of their workloads It encourage s your enterprise to focus on projects that differentiate the business (value) not the infrastruct ure Your enterprise should examine capital expenditure and operational expenditure for each workload It will result in customer centric solutions that are most cost effective These savings benefit both your organization and your customer s “Business Con tinuity” A business continuity tenet inform s and drive s the non functional requirements for all current and future workloads in your enterprise The geographic footprint and wide range of AWS services support s the realization of this tenet The AWS Cloud i nfrastructure is built around AWS Regions and Availability Zones Each AWS Region is a separate geographic area Each Region has multiple physically separated and isolated locations know as Availability Zones Availability Zones ArchivedAmazon Web Services – Establishing Enterprise Architecture on AWS Page 3 are connected with low latency high throughput a nd highly redundant networking This tenet guide s the architecture and application teams to leverage the reliability and availability of the AWS Cloud “Agility and Flexibility” This tenet enforces the need for all applications t o be “future proof ” In a cloud computing environment new IT resources are only ever a click away which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes This results in a dramatic increas e in agility for your organization since the cost and time it takes to experiment and develop is significantly lower Being f lexib le and agil e also mean that your enterprise respond s rapidly to business requirements as customer behaviors evolve The AWS Cloud enables teams to implement continuous integration and delivery practices across all development stages DevOps DevSecOps and methodologies such as Scrum become easier to set up Teams can quickly compare and evaluate architectures and practices ( eg 
"Cloud First Strategy"
Such a tenet is key to an organization that wishes to migrate to the cloud. It prescribes that new applications should be in the cloud. This governance prohibits the deployment of new applications on non-approved infrastructure. Architectural and review boards can closely examine why a workload should be granted an exception and not deployed in the cloud.

"All Users, Services, and Applications Belong in an Organizational Unit"
An enterprise may use this tenet to ensure that its target landscape reflects the enterprise's organizational structure. It mandates that all cloud activities belong in an AWS organizational unit, which lets your enterprise govern the business vision globally but gives autonomy, when necessary, to various local business units.

"Security First"
This tenet describes the security values of the organization, for example, "Data is secured in transit and at rest," "All infrastructure should be described as code," or "All workloads are approved by the security organization." Using this tenet, your architecture team can determine what level of trust they have in the cloud. Enterprises vary from zero trust to total trust. In a zero trust scenario, the enterprise would control all encryption keys, for example. They would decide to use customer managed keys with AWS Key Management Service.2 They would manage key rotation themselves and store the keys in their own hardware security module (HSM). In a total trust scenario, the enterprise would choose to allow AWS to manage the encryption keys and key rotation. They would also choose to use AWS CloudHSM.3 AWS can support your enterprise in both zero trust and total trust scenarios. The security tenet guides you in deciding where your enterprise sits on that scale.

Tenets should be used to guide architectural design and decisions that drive the target landscape in the cloud. They provide a firm foundation for making architecture and planning decisions, for framing policies, procedures, and standards, and for supporting resolution of contradictory situations. Tenets should also be heavily leveraged during the architectural review phases of applications and workloads, before they go live, to ensure the correct target landscape is being realized.
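As an illustration of how such a tenet can be carried into day-to-day engineering, the following sketch (not part of the original whitepaper) shows one way to provision a customer managed key in AWS KMS with boto3 and tag it with the tenet that motivated it. The alias, tag key, and rotation choice are illustrative assumptions; a strict zero trust posture might instead manage rotation and key material itself.

```python
# A minimal sketch of encoding a "Security First" tenet decision in code:
# create a customer managed key in AWS KMS and turn on automatic rotation.
# Assumes boto3 is installed and AWS credentials/Region are configured;
# the alias name and tag key are illustrative only.
import boto3

kms = boto3.client("kms")

# Create a symmetric customer managed key, recording the driving tenet as a tag.
key = kms.create_key(
    Description="Customer managed key required by the Security First tenet",
    KeyUsage="ENCRYPT_DECRYPT",
    Tags=[{"TagKey": "ea:tenet", "TagValue": "security-first"}],
)
key_id = key["KeyMetadata"]["KeyId"]

# Enable automatic key rotation for the key.
kms.enable_key_rotation(KeyId=key_id)

# Give the key a friendly alias so application teams can reference it.
kms.create_alias(AliasName="alias/ea-security-first-example", TargetKeyId=key_id)
```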
Enterprise Architecture Domains
Enterprise architecture guides your organization's business, information, process, and technology decisions to enable it to execute its business strategy and meet customer needs. There are typically four architecture domains:
• Business architecture domain – describes how the enterprise is organizationally structured and what functional capabilities are necessary to deliver the business vision. Business architecture addresses the questions WHAT and WHO: WHAT is the organization's business vision, strategy, and objectives that guide creation of business services or capabilities? WHO is executing defined business services or capabilities?
• Application architecture domain – describes the individual applications, their interactions, and their relationships to the core business processes of the organization. Application architecture addresses the question HOW: HOW are previously defined business services or capabilities implemented?
• Data architecture domain – describes the structure of an organization's logical and physical data assets and data management resources. Knowledge about your customers from data analytics lets you improve and continuously evolve business processes.
• Technology architecture domain – describes the software and hardware needed to implement the business, data, and application services.

Each of these domains has well-known artifacts, diagrams, and practices. Enterprise architects focus on each domain and how they relate to one another to deliver an organization's strategy. In addition, enterprise architecture tries to answer WHERE and WHY as well:
• WHERE are assets located?
• WHY is something being changed?

Figure 1 shows how these domains fit together.
Figure 1: The four domains of an enterprise architecture

AWS Services that Support Enterprise Architecture Activities
Several AWS services can support your enterprise architecture activities:
• AWS Organizations
• AWS Identity & Access Management (IAM)
• AWS Service Catalog
• AWS CloudTrail
• Amazon CloudWatch
• AWS Config
• AWS Tagging and Resource Grouping

Figure 2 shows how these services support your enterprise architecture.
Figure 2: AWS services that support an enterprise architecture

The following sections discuss many of the enterprise architecture activities and AWS services shown in Figure 2.

Roles and Actors
In the business architecture domain there are actors and roles. An actor can be a person, organization, or system that has a role that initiates or interacts with activities. Actors belong to an enterprise and, in combination with the role, perform the business function. Understanding the actors in your organization enables you to create a definitive listing of all participants that interact with IT, including users and owners of IT systems. Understanding actor-to-role relationships is necessary to enable organizational change management and organizational transformation.

The actors and roles of your enterprise can be modeled on two levels. Typically, an organization has a corporate directory (e.g., Active Directory) that reflects its actors and roles. On a different level, you can enforce these components with AWS Identity and Access Management (IAM).4 IAM achieves the actor-role relationship while complementing AWS Organizations. In IAM, an actor is known as a user. An AWS account within an organizational unit (OU) defines the users for that account and the corresponding roles that users can adopt. With IAM, you can securely control access to AWS services and resources for your users. You can also create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. Service control policies (SCPs) put bounds around the permissions that IAM policies can grant to entities in an account, such as IAM users and roles. The AWS account inherits the SCPs defined in, or inherited by, the OU. Then, within the AWS account, you can write even more granular policies to define how and what the user or role can access. You can apply these policies at the user or group level. In this manner, you can create very granular permissions for the actors and roles of your organization. Key business relationships between OUs, actors (users), and roles can be reflected in IAM.
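To make the actor-to-role mapping concrete, the following minimal boto3 sketch (not from the whitepaper) creates an IAM group for one business role and attaches a narrowly scoped, read-only policy to it. The group name, policy name, and permitted actions are hypothetical examples; SCPs attached to the OU would still bound whatever these IAM policies grant.

```python
# A minimal, illustrative sketch of modeling an actor-to-role relationship in
# IAM with boto3: create a group for a business role and attach a narrowly
# scoped policy. Names and permissions are hypothetical examples.
import json
import boto3

iam = boto3.client("iam")

# A group representing a business role, e.g. application portfolio reviewers.
iam.create_group(GroupName="ea-portfolio-reviewers")

# A granular, read-only policy covering AWS Config and resource tags.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "config:Describe*",
                "config:Get*",
                "tag:GetResources",
                "tag:GetTagKeys",
            ],
            "Resource": "*",
        }
    ],
}

policy = iam.create_policy(
    PolicyName="ea-portfolio-reviewer-readonly",
    PolicyDocument=json.dumps(policy_document),
)

# Bind the role (group) to its permissions.
iam.attach_group_policy(
    GroupName="ea-portfolio-reviewers",
    PolicyArn=policy["Policy"]["Arn"],
)
```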
Application Portfolio
Application portfolio management is an important part of the application architecture domain in an enterprise architecture. It covers managing an organization's collection of software applications and software-based services that are used to attain its business goals or objectives. An agreed application portfolio allows a standard set of applications to be used in an organization.

You can use AWS Service Catalog to manage your enterprise's application portfolio in the cloud5 and centrally manage commonly deployed applications. It helps you achieve consistent governance and meet your compliance requirements. AWS Service Catalog ensures compliance with corporate standards by providing a single location where organizations can centrally manage catalogs of their applications. With AWS Service Catalog, you can control which applications and versions are available, the configuration of the available services, and permission access by an individual, group, department, or cost center. AWS Service Catalog lets you:
• Define your own application catalog. End users of your organization can quickly discover and deploy applications using a self-service portal.
• Centrally manage the lifecycle of applications. You can add new application versions as necessary, as well as control the use of applications by specifying constraints, such as the AWS Region in which a product can be launched.
• Grant access at a granular level. You can grant a user access to a portfolio to let that user browse and launch the products.
• Constrain how your AWS resources are deployed. You can restrict the ways that specific AWS resources can be deployed for a product. You can use constraints to apply limits to products for governance or cost control. For example, you can let your marketing users create campaign websites but restrict their access to provision the underlying databases.

Governance and Auditability
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.6 With CloudTrail, you can log every API call made. This enables compliance with governance bodies internal and external to your organization. CloudTrail gives your organization transparency across its entire AWS landscape. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

Amazon CloudWatch is a monitoring service for AWS Cloud resources and the applications you run on AWS.7 You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. CloudWatch monitors and logs the behavior of your application landscape. CloudWatch can also trigger events based on the behavior of your application. While CloudTrail tracks usage of AWS, CloudWatch monitors your application landscape. In combination, these two services help with architecture governance and audit functions.
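As a hedged example of the audit workflow described above, the boto3 sketch below queries the CloudTrail event history for recent activity recorded against a single service. The seven-day window and the AWS Config event source are arbitrary illustrative choices, not prescriptions from the whitepaper.

```python
# A minimal sketch of querying the CloudTrail event history with boto3 for a
# quick audit view, here listing recent API activity recorded for AWS Config.
# Assumes boto3 and credentials are configured; the filter is one example.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "config.amazonaws.com"}
    ],
    StartTime=start,
    EndTime=end,
)

for page in pages:
    for event in page["Events"]:
        # Each event records who did what, and when.
        print(event["EventTime"], event.get("Username", "-"), event["EventName"])
```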
Change Management
Enterprise architects manage transition architectures. Transition architectures are the incremental releases in production that bring the current state to the target state architecture. The goal of transition architectures is to ensure that the evolving architecture continues to deliver the target business strategy. Therefore, you need to manage changes to the architecture in a cohesive way.

AWS Config is a service that lets you assess, audit, and evaluate the configurations of your AWS resources.8 AWS Config continuously monitors and records your AWS resource configurations and lets you automate the evaluation of recorded configurations against desired configurations. With AWS Config you can review changes in configurations and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

Enterprise Architecture Repository
An enterprise architecture repository is a collection of artifacts that describes an organization's current and target IT landscape. The goal of the enterprise architecture repository is to reflect the organization's inventory of technology, data, application, and business artifacts, and to show the relationships between these components. Traditionally, in a non-cloud environment, organizations were restricted to choosing expensive off-the-shelf products to meet their enterprise architecture repository needs. You can avoid these expenses with AWS services.

AWS Tagging and Resource Groups let you organize your AWS landscape by applying tags at different levels of granularity.9 Tags allow you to label, collect, and organize resources and components within services. The Tag Editor lets you manage tags across services and AWS Regions.10 Using this approach, you can globally manage all the application, business, data, and technology components of your target landscape. A Resource Group is a collection of resources that share one or more tags.11 It can be used to create an enterprise architecture "view" of your IT landscape, consolidating AWS resources into a per-project (that is, the ongoing programs that realize your target landscape), per-entity (that is, capabilities, roles, processes), and per-domain (that is, Business, Application, Data, Technology) view.

You can use AWS Config, Tagging, and Resource Groups to see exactly what cloud assets your company is using at any moment. These services make it easier to detect when a rogue server or shadow application appears in your target production landscape. You may wish to continue using a traditional repository tool, perhaps due to existing licensing commitments or legacy processes. In this scenario, the enterprise repository can run natively on an EC2 instance and be maintained as before.12
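The tag-based "view" described above can also be assembled programmatically. The following sketch (an illustration, not part of the original paper) uses the Resource Groups Tagging API to group resources by a hypothetical ea:domain tag; substitute whatever tagging scheme your organization adopts.

```python
# A minimal sketch of building a tag-driven enterprise architecture "view"
# with the Resource Groups Tagging API via boto3. The "ea:domain" tag key and
# its values are hypothetical; adapt them to your own tagging scheme.
from collections import defaultdict

import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Collect every tagged resource whose "ea:domain" tag marks it as belonging
# to one of the four architecture domains.
view = defaultdict(list)
paginator = tagging.get_paginator("get_resources")
pages = paginator.paginate(
    TagFilters=[
        {"Key": "ea:domain", "Values": ["business", "application", "data", "technology"]}
    ]
)

for page in pages:
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"]: t["Value"] for t in resource["Tags"]}
        view[tags["ea:domain"]].append(resource["ResourceARN"])

# Print a simple per-domain inventory.
for domain, arns in sorted(view.items()):
    print(f"{domain}: {len(arns)} resources")
```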
Conclusion
The role of an enterprise architect is to enable the organization to be innovative and respond rapidly to changing customer behavior. The enterprise architect holds the long-term business vision of the organization and is responsible for the journey it has to take to reach this target landscape. They support an organization to achieve its objectives by successfully evolving across all domains: Business, Application, Technology, and Data. This is no different when moving to the cloud. The enterprise architect role is key in successful cloud adoption.

Enterprise architects can use AWS services as architectural building blocks to guide the technology decisions of the organization to realize the enterprise's business vision. It has been challenging for enterprise architects to measure their goals and demonstrate their value with on-premises architectures. With AWS Cloud adoption, enterprise architects can use AWS services to create traceability and relationships across the enterprise architecture domains, allowing the architect to correctly track how their organization is changing and improving. AWS lets the enterprise architect address end-to-end traceability, operational modeling, and governance. It is easier to gather data on transition architectures in the cloud as the organization moves to its target state. The wide breadth of AWS services and agility means it is also easier for architects and application teams to respond rapidly when architectural deviations are identified and changes need to take place. Using AWS services, you can more easily execute and realize the value of enterprise architecture practices.

Contributors
The following individuals and organizations contributed to this document:
• Margo Cronin, Solutions Architect, AWS
• Nemanja Kostic, Solutions Architect, AWS

Document Revisions
Date – Description
April 2020 – Removed AWS Organizations section
March 2018 – First publication

Notes
1 http://www.opengroup.org/subjectareas/enterprise/togaf
2 https://aws.amazon.com/kms/
3 https://aws.amazon.com/cloudhsm/
4 https://aws.amazon.com/iam/
5 http://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html
6 https://aws.amazon.com/cloudtrail/
7 https://aws.amazon.com/cloudwatch/
8 http://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
9 http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/what-are-resource-groups.html
10 http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html
11 http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/what-are-resource-groups.html
12 https://aws.amazon.com/ec2/
|
General
|
consultant
|
Best Practices
|
Estimating_AWS_Deployment_Costs_for_Microsoft_SharePoint_Server
|
Estimating AWS Deployment Costs for Microsoft SharePoint Server March 201 6 This paper has been archived For the latest technical content about this subject see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 2 of 27 © 201 6 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 3 of 27 Contents Abstract 4 Introduction 5 AWS Regions and Availability Zones 5 Windows Server in Amazon EC2 6 Amazon EBS 6 Amazon S3 7 Amazon VPC 7 Elastic Load Balancing 7 AWS Direct Connect 8 AWS Simple Monthly Calculator 8 Reviewing the SharePoint Reference Architecture 9 Licensing and Tenancy Options 10 License Included 10 BYOL 10 Using the Simple Monthly Calculator 12 Process Overview 12 Estimating Compute Costs 13 Estimating Storage Costs 17 Using Elastic IP 19 Estimating Data Transfers 19 Estimating Load Balancing 19 Choosing AWS Direct Connect and Amazon VPC 20 Reviewing the Estimate 21 MoneySaving Ideas 22 ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 4 of 27 AWS Directory Service 22 Reserved Instances and Spot Instances 23 Auto Scaling 23 NAT Alternatives 24 ThirdParty Solutions 24 Conclusion 25 Contributors 25 Further Reading 25 Abstract This whitepaper is intended for IT managers systems integrators presales engineers and Microsoft Windows IT professionals who want to learn how to use the Amazon Web Services (AWS) Simple Monthly Calculator to estimate the cost of their cloud infrastructure on AWS1 A scalable and highly available Microsoft SharePoint Server 2013 architecture is given as an example and its various components are plugged into the calculator to estimate the monthly cost Although SharePoint is highlighted the techniques described can easily be applied to other Windows workloads on AWS such as Dynamics CRM or Skype for Business Server The cost estimates include licenses for Windows Server and SQL Server but exclude licenses for SharePoint Server as will be explained A few ways to save money on the SharePoint Server deployment are also described This paper focuses on Amazon Elastic Compute Cloud (Amazon EC2) and AWS storage services that are common to most Microsoft infrastructure deployments on AWS and briefly mentions how AWS Directory Service and NAT gateways can be very beneficial in your architecture ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 5 of 27 Introduction AWS currently offers over 50 cloud computing services with new services 
being added frequently You won’t need to be familiar with all the services to deploy SharePoint Server on AWS but the key point is that at th e end of each month you pay only for what you use and you can start or stop using a service at any time There are no minimum commitments or longterm contracts required This pricing model helps replace upfront capital expenses for your IT projects with a low variable cost For compute resources you pay on an hourly basis from the time you launch a resource until the time you terminate it For storage and data transfer you pay on a pergigabyte basis For additional information on how AWS pricing works see the following sources: How AWS Pricing Works whitepaper2 AWS Cloud Pricing Principles on the AWS website3 Before we get into t he calculator let’s briefly review a few of the key features and services that will come into play in a SharePoint architecture on AWS AWS Regions and Availability Zones Amazon EC2 is hosted in multiple Regions around the world Each Region is a separate geographic area and has multiple isolated locations known as Availability Zones You can think of Availability Zones as very large data centers Using redundant Availability Zones in your architecture enables you to achieve high availability AWS does not move your data or replicate your resources across Regions unless you do so specifically Figure 1 shows the relationship between Regions and Availability Zones ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 6 of 27 Figure 1: Each AWS Region Contains at Least Two Availability Zones Windows Server in Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) provides a secure global infrastructure to run Windows Server workloads in the cloud including Internet Information Services (IIS) SQL Server Exchange Server SharePoint Server Skype Server for Business Dynamics CRM System Center and custom NET applications 4 Preconfigured Amazon Machine Images (AMIs) enable you to start running fully supported Windows Server virtual machine instances in minutes You can choose from a number of server operating system versions and decide whether or not to include preinstalled SQL Server in the hourly cost Amazon EBS Amazon Elastic Block Storage (Amazon EBS) provides persistent blocklevel storage volumes for use with Amazon EC2 instances5 Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure offering high availability and durability Amazon EBS volumes provide consistent lowlatency performance On Windows Server instances Amazon EBS volumes are mounted to appear as regular drive letters to the operating system and applications Amazon EBS volumes can be up to 16 TiB in size and you can mount up to 20 volumes on a single Windows instance After writing data to an EBS volume you can periodically create a snapshot of the volume to use as a baseline for new volumes or for data backup Snapshots are ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 7 of 27 incremental so that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot Snapshots are automatically saved in Amazon Simple Storage Service (Amazon S3) which stores three redundant copies across multiple Availability Zones so you have peace of mind knowing that your data is immediately backed up “off site” Amazon S3 Amazon Simple Storage Service (Amazon S3) provides developers and 
IT teams with secure durable highly scalable costeffective object storage6 Amazon S3 is easy to use and includes a simple web services interface to store and retrieve any amount of data from anywhere on the web Object storage is not appropriate for workloads that require incremental data insertions such as databases However Amazon S3 is an excellent service for storing snapshots of Amazon EBS volumes While Amazon EBS duplicates your volume synchronously in the same Availability Zone snapshots to Amazon S3 are replicated across multiple zones substantially increasing the durability of your data Amazon VPC Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you define7 This virtual network closely resembles a traditional network that you ’d operate in your own data center with the benefits of using the scalable infrastructure of AWS Your VPC is logically isolated from other virtual networks in the AWS cloud You can configure your VPC; you can select its IP address range create subnets and configure route tables network gateways and security settings With the AWS Direct Connect service you can effectively make your VPC function as an extension of your own onpremises network Elastic Load Balancing Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances8 It enables you to achieve greater levels of fault tolerance in your applications seamlessly providing the required amount of load balancing capacity needed to distribute application traffic Elastic Load Balancing ensures that only healthy Amazon EC2 instances receive traffic by ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 8 of 27 detecting unhealthy instances and rerouting traffic across the remaining healthy instances Elastic Load Balancing automatically scales its request handling capacity to meet the demands of application traffic Additionally Elastic Load Balancing offers integration with Auto Scaling to ensure that you have backend capacity to meet varying levels of traffic levels without requiring manual intervention9 For SharePoint Server you can create an internal (nonInternet facing) load balancer to route traffic between your web tier and your application tier using private IP addresses within your Amazon VPC You can also implement a multi tiered architecture using internal and Internetfacing load balancers to route traffic between application tiers With this multitier architecture your application infrastructure can use private IP addresses and security groups allowing you to expose only the Internetfacing tier with public IP addresses AWS Direct Connect AWS Direct Connect makes it easy to establish a dedicated private network connection from your premises to AWS10 In many cases this can reduce your network costs increase bandwidth throughput and provide a more consistent network experience than Internetbased connections This dedicated connection can be partitioned into multiple virtual interfaces This enables you to use the same connection to access public resources such as objects stored in Amazon S3 and private resources such as Amazon EC2 instances running within an Amazon VPC while maintaining network separation between the public and private environments AWS Simple Monthly Calculator The AWS Simple Monthly Calculator is an easy touse online tool that enables you to estimate the monthly cost of AWS services for your project based on your expected usage 
The Simple Monthly Calculator is continuously updated with the latest pricing for all AWS services in all AWS Regions Before continuing with this guide please take a few minutes to watch this video for an introduction to the Simple Monthly Calculator: Video: Getting Started with the AWS Simple Monthly Calculator11 ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 9 of 27 Reviewing the SharePoint Reference Architecture AWS provides several Quick Starts which consist of detailed deployment guides and deployment code12 Quick Starts help you understand and quickly deploy reference architectures on AWS In this whitepaper we will be using the reference architecture for SharePoint Server 2013 as an example to explore the Amazon Simple Monthly Calculator Figure 2 is copied from the AWS SharePoint Server 2013 Quick Start 13 It includes several AWS services that we will enter into the calculator Figure 2: Reference Architecture for SharePoint Server 2013 ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 10 of 27 Licensing and Tenancy Options On Amazon EC2 you can choose to run instances that include the relevant license fees in their cost (“license included”) or use the Bring Your Own License (BYOL) licensing model License Included When you are launching an EC2 instance there are two ways to find an AMI for the licenseincluded model: Choose a Quick Start AMI that includes Windows Server or SQL Server The license cost is included in the hourly instance charge At this time only Windows Server and SQL Server (excluding SQL Server Enterpris e Edition) are available with this option Choose an AMI from the AWS Marketplace A much wider selection of software is available here including SQL Server Enterprise Edition SharePoint Enterprise Edition and many other Windowsbased applications from other vendors Windows Server Client Access Licenses (CALs) are not required with any of these AMIs BYOL Many vendors offer cloud licenses for their software There are three ways you can take advantage of your Microsoft software licenses on AWS: BYOL wit h License Mobility (shared tenancy) This option does not cover Windows Server BYOL with Dedicated Hosts (dedicated tenancy) This option allows you to comply with Microsoft’s 90 day rule for Windows Server cloud licenses With Dedicated Hosts you can import your own virtual machine images with Windows Server and pay Amazon EC2 Linux rates AWS has a qwikLAB that demonstrates this process 14 MSDN with Dedicated Hosts or Dedicated Instances All Microsoft products covered by MSDN can be run on AWS for dev/test environments per MSDN terms ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 11 of 27 For more information see the AWS Software Licensing FAQ 15 If you use the BYOL option for Windows Server the license cost is not included in the instance cost Instead you pay the same rate as EC2 instances with Amazon Linux pricing which is lower than the cost of instances with Windows Server pre installed When you use BYOL you are responsible for managing your own licenses but AWS has features that help you maintain license compliance throughout the lifecycle of your licenses such as instance affinity 16 targeted placement available through Amazon EC2 Dedicated Hosts17 and the AWS Key Management Service (AWS KMS)18 Microsoft License Mobility is a benefit for Microsoft Volume Licensing customers with eligible 
server applications covered by active Microsoft Software Assurance (SA) Microsoft License Mobility allows you to move eligible Microsoft software to AWS for use on EC2 instances with default tenancy (which means that instances might share server space with instances from other customers) But if you are bringing your own Microsoft licenses into EC2 Dedicated Hosts or EC2 Dedicated Instances (instead of using default tenancy) then Microsoft Software Assurance is not required You should use Dedicated Hosts for BYOL license scenarios that are server bound (eg Windows Server SQL Server) and that require you to license against the number of sockets or physical cores on a dedicated server If you have SQL Server Enterprise Edition licenses that you want to use on AWS there are two significant advantages to using Dedicated Hosts: Licensing on a Dedicated Host is per physical core (instead of vCPU) This means that when you use large instances you can license the entire host instead of licensing the instances separately For a r38xlarge instance (which is wellsuited for SQL Server) that means you would consume only 20 of your SQL Server licenses instead of 32 For disaster recovery deployments i f a failover instance is dedicated to you you don’t need licenses for it F or a cluster of two r38xlarge instances that means you would consume only 20 licenses instead of 64 ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 12 of 27 Using the Simple Monthly Calculator Process Overview The following is a suggested process to help you estimate the costs of deploying your IT project on AWS We’ll discuss each step in subsequent sections 1 The first choice you need to make is usually an easy one: Which AWS Region do you want to run your SharePoint farm in? 
AWS pricing varies slightly by Region.
2. Now sketch a high-level diagram of your project, including each server you will need, labeling the servers with the software functions they will perform, e.g., Web Front End. For this whitepaper we'll use the diagram in Figure 2 from the AWS Quick Start reference deployment for SharePoint. After you're happy with your sketch, make a list of each server and load balancer in your diagram. This list will be a key input to the calculator.
3. Think about whether you will use On-Demand Instances or Reserved Instances. On-Demand Instances make it easy to get started, but when you are ready to make a commitment you can save significantly (up to 75%) by purchasing Reserved Instances.19
4. Determine if you have unused software licenses available, and if you have the appropriate agreements with those software vendors to use those licenses in the cloud (e.g., Microsoft License Mobility through Software Assurance). See the Licensing and Tenancy Options section earlier in this paper for more information.
5. Examine or estimate the volume of your current SharePoint storage that you intend to migrate to the cloud, and estimate your monthly growth (this storage will go to Amazon EBS). Also estimate the volume and growth of your data backups (this storage will go to Amazon S3). One nice thing about the cloud is that you don't need to over-provision capacity in advance to handle demand spikes. You can scale up almost instantly as you grow and pay only for what you actually consume.
6. Estimate the monthly data transfers for an average user, and then multiply that by the number of users of your system to determine a ballpark total for data transfers. You'll also need to estimate data transfers between Availability Zones when synchronization or replication is included in your architecture.
7. Determine if you will use AWS Direct Connect or a virtual private network (VPN) to connect your on-premises network to your VPC, or neither option (for example, if you plan to have all employees and customers access your AWS resources over the Internet).
8. Finally, decide what level of AWS Support you will need. For a business-class SharePoint deployment you should choose the Business Support plan at the minimum. But you should also consider the Enterprise Support plan, which adds 15-minute response times for critical questions and a dedicated technical account manager.

Estimating Compute Costs
Now let's follow the steps outlined previously to begin estimating our AWS monthly costs for the SharePoint farm depicted in Figure 2.

Building Your Server List
Working from the sketch of our architecture, we can create the following list of servers and the Amazon EC2 instance types that we think might be suitable for each server role. We needn't worry about getting the instance type exactly right at this stage because this is just an estimate. If you have particular service-level agreements that you must deliver, then picking the right instance types may require some experimentation and budget analysis. For additional information about Amazon EC2 instance types, see Amazon EC2 Instance Types on the AWS website.20 At this point you're just making a list of what you need before you use the calculator. After you enter and save the data in the calculator, you can also go back to edit it anytime.

Server | Description | Quantity | Operating System | Instance Type | vCPUs | RAM (GiB)
NAT | Network Address Translation | 2 | Amazon Linux | t2.micro | 1 | 1
RDGW | Remote Desktop Gateway | 2 | Windows Server 2012 R2 | t2.medium | 2 | 4
WFE | Web front-end servers | 2 | Windows Server 2012 R2 | c3.2xlarge | 8 | 15
APP | Application servers | 2 | Windows Server 2012 R2 | c3.2xlarge | 8 | 8
SQL | SQL Server | 2 | Windows Server 2012 R2 | r3.2xlarge | 8 | 61
AD | Active Directory | 2 | Windows Server 2012 R2 | m4.large | 2 | 8

We set the quantity to two for each server because we want to use two Availability Zones to deploy a high-availability design. The NAT instance runs Amazon Linux because NAT is a basic function and Amazon Linux is less expensive than Windows. It's simple to set up a Linux NAT instance on AWS, but an even better option is to use the NAT Gateway service.21 This service isn't available in the calculator yet, so for the purposes of this whitepaper we'll try to stick to the design from the SharePoint Quick Start that's shown in Figure 2.
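If you want to double-check the vCPU and memory figures in the table above before entering them in the calculator, a short boto3 call can pull them from the EC2 API. This sketch is not part of the original paper; note that previous-generation types such as c3 and r3 may not be offered in every Region, in which case the call returns an error for those types.

```python
# A minimal sketch of sanity-checking the instance types listed above against
# the EC2 DescribeInstanceTypes API via boto3. Previous-generation types may
# not be offered in every Region.
import boto3

ec2 = boto3.client("ec2")

candidate_types = ["t2.micro", "t2.medium", "c3.2xlarge", "r3.2xlarge", "m4.large"]

response = ec2.describe_instance_types(InstanceTypes=candidate_types)
for itype in response["InstanceTypes"]:
    vcpus = itype["VCpuInfo"]["DefaultVCpus"]
    mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
    print(f"{itype['InstanceType']}: {vcpus} vCPUs, {mem_gib:.1f} GiB RAM")
```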
Licensing Considerations
SQL Server AlwaysOn Availability Groups, which come with SQL Server Enterprise Edition, are a good solution to achieve a highly available deployment across two Availability Zones. So the SharePoint Quick Start recommends using SQL Server Enterprise in your SharePoint deployment on AWS. You have two choices here: you can either purchase the SQL Server Enterprise licenses from AWS (in which case license costs will be included in your hourly charges for those Amazon EC2 instances), or you can utilize Microsoft License Mobility through Software Assurance to bring your own licenses into the cloud.22

If you choose to purchase SQL Server Enterprise from AWS, when you launch your EC2 instances you will need to select the AMI from AWS Marketplace. (Other editions of SQL Server are offered as Quick Start AMIs, but Enterprise Edition is currently offered only through AWS Marketplace.) This will save you time because you won't need to install SQL Server yourself. On the other hand, if you plan to use the BYOL model, you need to install your own bits or import your virtual machine with SQL Server installed (using the VM Import/Export service).23

For BYOL, the first trick to estimating your costs in the calculator is to choose Amazon Linux (not Windows Server!)
for each instance for which you plan to bring your own Windows Server license In the calculator you can alternatively choo se Windows Server without SQL Server if you plan to purchase Windows Server from AWS but use the BYOL model for SQL Server Enterprise; or you can choose Windows Server with SQL Server Enterprise if you don’t want to use BYOL for either The second trick for entering BYOL in the calculator comes up when you open the dialog box to pick the instance type In this dialog box you can choose Show (advanced options) to see check boxes for Detailed Monitoring (for Amazon CloudWatch) and Dedicated Instances At thi s time the calculator doesn’t offer Dedicated Hosts Remember you might use Dedicated Instances to bring your own license of SQL Server if your license is not based on the number of sockets or physical cores If you bring your own SQL Server licenses that are based on the number of sockets or physical cores then you must use Dedicated Hosts not Dedicated Instances For this exercise we will purchase all the Windows Server and SQL Server Enterprise licenses from AWS so we won’t be using Dedicated Hosts or Dedicated Instances Just to be clear if you plan to bring your own license your monthly cost will be significantly lower than the cost estimate that the calculator will give us in this example EBS Optimization There is one more detail to be aware of: For SQL Server instances we recommend that you select the EBSOptimized option An EBSoptimized instance uses an optimized configuration stack and provides additional dedicated capacity for Amazon EBS I/O This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance The hourly price for EBSoptimized instances is added to the hourly usage fee for supported instance types In the calculator when you select the r32xlarge instance type for SQL Server be sure to ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 16 of 27 select the EBSO ptimized check box See the documentation for EBSoptimized instances for more information24 Entering Your Data Now we’re ready to enter the table above into the calculator Open your browser to the AWS Simple Monthly Calculator and begin entering the data Our partial result looks like Figure 3 If you prefer not to enter all the data from scratch you can use the configuration that I’ve already shared 25 Note The prices shown in this whitepaper reflect data from the Simple Monthly Calculator at the time of writing and are provided for illustration purposes only Depending on pricing changes regional factors and special offers the cost s you get from the calculator may be different Figure 3: Entering the Amazon EC2 Instances into the Calculator For now we’ve entered all the instances as On Demand Instances running 100% of the time Later we’ll discuss saving money by using Auto Scaling to shut down some instances on weekends for example or by changing the purchase option from On Demand to Reserved Instances for 1year or 3year terms Another thing to keep in mind is that you might want to use OnDemand Instances only in development and QA environments and use Reserved Instances in your production environment ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 17 of 27 Now that you’ve entered all that data it’s a good idea to save it before proceeding Switch to the Estimate tab at the top of the calculator 
and then choose Save and Share. You can optionally give your estimate a name and description; choose OK, and the calculator will generate a hyperlink for you (see Figure 4). Now copy and paste that hyperlink into an email to yourself. That way you can return to the calculator anytime to continue editing the data for your SharePoint farm.

Figure 4: Saving Your Data in the Calculator

Estimating Storage Costs
The next step in the calculator is to put in the proper size for the boot volume on each instance and enter any additional Amazon EBS volumes that we need to attach to each instance. When launching a Windows instance in Amazon EC2, the default boot volume is 30 GiB, but the SharePoint Quick Start recommends setting it to 100 GiB. That provides extra space for installing SharePoint Server and other applications you may want. We won't add any storage to the Linux NAT instances, and we'll leave the boot volumes for the RDGW and AD instances at the default size of 30 GiB. If you are migrating your existing SharePoint farm to AWS, you can examine your current storage needs to help estimate your future capacity requirements. For the purposes of this whitepaper, let's enter one additional 5 TiB volume for SharePoint storage in each Availability Zone.

You also need to think about I/O throughput. For this basic exercise we're going to skip over this consideration and simply use General Purpose SSD for all the EBS volumes. AWS also offers Magnetic volumes (which are less expensive than General Purpose) and Provisioned IOPS SSD volumes (for consistent performance). For additional information about Amazon EBS, see Amazon EBS Product Details.26

The final factor for Amazon EBS is the amount of backup storage that you require (backup copies are stored in Amazon S3). This value depends on the backup method, backup frequency, system size, and backup retention. Accurately calculating the amount of backup storage required can get quite involved and is beyond the scope of this guide. For now, let's take a very simplistic approach and estimate that the snapshot storage for each volume will equal the size of the volume itself. Once you enter the EBS volumes, the calculator should look like Figure 5. Go ahead and save the data in the calculator again.

Figure 5: Entering the Amazon EBS Volumes into the Calculator

Elastic IP addresses, data transfers, and Elastic Load Balancing are three features closely related to Amazon EC2 that are optional in the Simple Monthly Calculator. We'll talk about those next.

Using Elastic IP
Elastic IP addresses are a limited resource but very useful for instances in a public subnet. AWS only charges for Elastic IP addresses that you allocate but don't assign to running instances, and the cost is only a few dollars per month if you allocate one and never utilize it. If you think you will have idle Elastic IP addresses, you can enter them here, but for this example we'll ignore that option in the calculator.

Estimating Data Transfers
Inbound data transfer to Amazon EC2 is free. Charges do apply for data that is transferred out from Amazon EC2 to the Internet, to another AWS Region, or to another Availability Zone. For details on AWS data transfer pricing, see the "Data Transfer" section at https://aws.amazon.com/ec2/pricing/

Just for illustration, let's say we plan to have 1,000 users on SharePoint and each user will transfer out 0.5 GB per day (including weekends). So that's 1,000 users * 0.5 GB * 30 days = 15,000 GB/month. Let's enter that in the calculator on the row for Data Transfer Out.
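If you prefer to keep this arithmetic in a small script so you can rerun it as your assumptions change, a sketch like the following works. The per-GB rate is a placeholder for illustration only; use the current rates from the calculator or the EC2 pricing page.

```python
# A quick sketch of the back-of-the-envelope estimate above, so you can vary
# the inputs (user count, GB per user per day) and see how the Data Transfer
# Out figure changes. The per-GB price is an illustrative assumption only.
users = 1000
gb_per_user_per_day = 0.5
days_per_month = 30

data_transfer_out_gb = users * gb_per_user_per_day * days_per_month
print(f"Estimated Data Transfer Out: {data_transfer_out_gb:,.0f} GB/month")  # 15,000 GB/month

assumed_price_per_gb = 0.09  # USD, placeholder; check current pricing
print(f"Rough monthly transfer cost: ${data_transfer_out_gb * assumed_price_per_gb:,.2f}")
```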
Estimating Load Balancing
The SharePoint reference architecture uses one ELB load balancer. When we enter that in the calculator, we also need to estimate how much traffic will pass through it. We estimated 15,000 GB/month for egress in the previous section, so let's double that to cover ingress and egress. Typically egress may exceed ingress, but this is only an estimate; please see Elastic Load Balancing Pricing for more information.27 You will see that Elastic Load Balancing is typically a very small part of the total cost. At this stage, the section of the calculator below Amazon EBS looks like Figure 6.

Figure 6: Entering Data Transfer and Elastic Load Balancing into the Calculator

Switch to the Estimate tab at the top of the calculator and save your interim data again. You can browse through the detail rows and see the line-item cost of each section.

Choosing AWS Direct Connect and Amazon VPC
Another factor you may want to enter in the calculator is the cost for AWS Direct Connect or Amazon VPC. If you decide to go with either option, you may want to revise your data transfer estimates for Elastic Load Balancing, because these options tend to replace or reduce the ordinary Internet traffic to your VPC. There is no additional charge for using Amazon VPC aside from the standard Amazon EC2 usage charges. If a secure connection is required between your on-premises network and Amazon VPC, you can choose a hardware VPN connection or a private network connection, as described in the following sections.

Hardware VPN Connection
When you use hardware VPN connections to your Amazon VPC, you are charged for each VPN connection-hour for which your VPN connection is provisioned and
this example We’ll survey those in the next section MoneySaving Ideas AWS Directory Service AWS Directory Service is a managed service that makes it easy to set up and run Microsoft Active Directory (AD) in the AWS cloud or connect your AWS ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 23 of 27 resources with an existing onpremises Microsoft Active Directory Once your directory is created you can use it to manage users and groups provide single sign on to applications and services create and apply group policy domainjoin Amazon EC2 instances and simplify the deployment and management of cloud based Linux and Microsoft Windows workloads If cost and simplified administration are important to you you should consider using AWS Directory Service instead of running two EC2 instances with the Active Directory role installed in Windows Server See AWS Directory Service Product Details for more information28 Reserved Instances and Spot Instances Another way to save money in Amazon EC2 is to use Reserved or Spot Instances Spot Instances work well for intermittent workloads such as highperformance computing and may not be applicable to SharePoint in general But depending on the size and cost of your compute instances and the nature of your workload you should consider using Spot Instances to incrementally process and save data computations Once you get your pilot SharePoint farm up and running on AWS consider making a 1 year or 3year commitment to take advantage of Reserved Instance pricing You can save up to 75% Auto Scaling Auto Scaling helps you maintain application availability and allows you to scale your Amazon EC2 capacity up or down automatically according to conditions you define You can use Auto Scaling to help ensure that you are running your desired number of Amazon EC2 instances Auto Scaling can also automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs Auto Scaling is well suited both to applications that have stable demand patterns and to applications that experience hourly daily or weekly variability in usage If you have dev/test SharePoint farms that aren’t used on weekends or if you anticipate less network traffic to your production SharePoint farm on weekends you may be able to realize significant cost savings by shutting down certain ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 24 of 27 instances periodically For example weekends account for about 33% of the total monthly cost There may be some complications to autoscaling your SharePoint farm but the savings may be worth it It’s beyond the scop e of this paper to go into the details but you’ll want to consider how to save patch and use your own SharePoint AMI with Auto Scaling And bear in mind that booting and domain joining can take a few minutes See Auto Scaling Product Details for more information29 NAT Alternatives Finally le t’s talk about alternatives to Network Address T ranslation (NAT) In the calculator we chose to deploy two Linux instances dedicated to running NAT Amazon Linux is a lowcost option and there are recipes for running NAT in Amazon EC2 that make it pretty simple But there are other options that might be less costly and even easier to administer The AWS SharePoint 2013 Quick Start was written before the launch of the NAT Gateway service This is a managed service that greatly 
simplifies the task of providing NAT for your VPC and you should consider it as your first option See the blog post Managed NAT (Network Address Translation) Gateway for AWS on the AWS blog for more information 30 If NAT Gateway isn’t appropriate for you there are other options Notice in our network diagram ( Figure 2 ) that we have an RDGW instance running Windows Server in each public subnet Since we’re already paying for those instances there’s no reason we couldn’t install the Windows Routing and Remote Access Service (RRAS) and make the instances dualuse for NAT and RDGW Finally we have another NAT option if we choose to add a virtual private network or AWS Direct Connect We could s et up the route tables in the VPC to route all outbound traffic through the onpremises network This would eliminate the need for NAT instances in the VPC ThirdParty Solutions AWS has a vast partner network of consulting and technology partners A few partners are worthy of mention here You could use AvePoint31 or Metalogix32 to offload storage of uploaded files (binary large objects or BLOBs) from ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 25 of 27 SharePoint (which goes in SQL Server) to Amazon S3 That can substantially reduce the size of the database which may in turn reduce your software license costs reduce your backup storage space and require less maintenance Additionally you may consider using SIOS33 or SoftNAS34 sharedstorage options to possibly remove the need for SQL Server AlwaysOn Availability Groups Conclusion This paper outlined a process you can follow to estimate the cost of running your IT workloads on AWS As an example we entered a SharePoint Server 2013 reference architecture into the AWS Simple Monthly Calculator We explored various AWS services relevant to an enterprise SharePoint deployment We also discussed how you can use your existing Microsoft software licenses on AWS There is often more than one way to design and deploy your architecture in AWS so we also provided alternative ideas that may help you save money on AWS Contributors The following individuals and organizations contributed to this document: Scott Zimmerman partner solutions architect AWS Bill Timm partner solutions architect AWS Julien Lepine solutions architect AWS Further Reading For additional information please consult the following sources: Getting Started with Amazon EC2 Windows Instances http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/EC2Win_G etStartedhtml Quick Start: Microsoft SharePoint Server 2013 on AWS https://docsawsamazoncom/quickstart/latest/sharepoint/ ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 26 of 27 Notes 1 http://calculators3amazonawscom/indexhtml 2 http://mediaamazonwebservicescom/AWS_Pricing_Overviewpdf 3 http://awsamazoncom/pricing/ 4 https://awsamazoncom/ec2/ 5 https://awsamazoncom/ebs/ 6 https://awsamazoncom/s3/ 7 https://awsamazoncom/vpc/ 8 https://awsamazoncom/elasticloadbalancing/ 9 https://awsamazoncom/autoscaling/ 10 https://awsamazoncom/directconnect/ 11 http://bitly/1mwA12X 12 http://awsamazoncom/quickstart/ 13 https://docsawsamazoncom/quickstart/latest/sharepoint/ 14 https://runqwiklabscom/ 15 https://awsamazoncom/windows/faq/ 16 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/dedicated hostsinstanceplacementhtml#dedicatedhostsaffinity 17 http://docsawsamazoncom/AWSEC2/latest/WindowsGuide/dedicated 
hostsinstanceplacementhtml#dedicatedhoststargetedplacement 18 http://docsawsamazoncom/kms/latest/developerguide/ 19 http://docsawsamazoncom/AWSEC2/latest/UserGuide/instance purchasingoptionshtml 20 http://awsamazoncom/ec2/instancetypes/ 21 http://docsawsamazoncom/AmazonVPC/latest/UserGuide/vpcnat gatewayhtml 22 http://awsamazoncom/windows/resources/licensemobility/ ArchivedAmazon Web Services – Estimating AWS Deployment Costs for Microsoft SharePoint Server March 2016 Page 27 of 27 23 https://awsamazoncom/ec2/vmimport/ 24 http://docsawsamazoncom/AWSEC2/latest/UserGuide/EBSOptimizedhtml 25 http://calculators3amazonawscom/indexhtml#r=IAD&s=EC2&key= calc176211163ED74E669A4D86681BBB4462 26 https://awsamazoncom/ebs/details/ 27 https://awsamazoncom/elasticloadbalancing/pricing/ 28 https://awsamazoncom/directoryservice/details/ 29 https://awsamazoncom/autoscaling/details/ 30 https://awsamazoncom/blogs/aws/newmanagednatnetworkaddress translationgatewayforaws/ 31 http://wwwawspartner directorycom/PartnerDirectory/PartnerDetail?Name=AvePoint 32 http://wwwawspartner directorycom/PartnerDirectory/PartnerDetail?Name=metalogix 33 http://wwwawspartner directorycom/PartnerDirectory/PartnerDetail?Name=SIOS+Technology+Corp 34 http://wwwawspartner directorycom/PartnerDirectory/PartnerDetail?Name=AvePoint Archived
|
General
|
consultant
|
Best Practices
|
Extend_Your_IT_Infrastructure_with_Amazon_Virtual_Private_Cloud
|
ArchivedExtend Your IT Infrastructure with Amazon Virtual Private Cloud December 2018 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers/Archived © 201 8 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers Archived Contents Notices 2 Contents 3 Abstract 4 Introduction 1 Understanding Amazon Virtual Private Cloud 1 Different Levels of Network Isolation 2 Example Scenarios 7 Host a PCI Compliant E Commerce Website 7 Build a Development and Test Environment 8 Plan for Disaster Recovery and Business Continuity 10 Extend Your Data Center into the Cloud 10 Create Branch Office and Business Unit Networks 12 Best Practices for Using Amazon VPC 13 Automate the Deployment of Your Infrastructure 14 Use Multi AZ Deployments in VPC for High Availability 14 Use Security Groups and Network ACLs 15 Control Access with IAM Users and Policies 15 Use Amazon CloudWatch to Monitor the Health of Your VPC Instances and VPN Link 16 Conclusion 17 Further Reading 17 Document Revisions 18 Archived Abstract Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network you define This paper provides an overview of how you can connect an Amazon V PC to your existing IT infrastructure while meeting security and compliance requirements This allows you to access AWS resources as though they are a part of your existing networkArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 1 Introduction With Amazon Virtual Private Cloud (Amazon VPC) you can provision a private isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define With Amazon VPC you can define a virtual network topology that closely resembles a traditional network th at you might operate in your own data center You have complete control over your virtual networking environment including selection of your own IP v4 address range creation of subnets and configuration of route tables and network gateways For example with VPC you can: • Expand the capacity of existing on premises infrastructure • Launch a backup stack of your environment for disaster recovery purposes • Launch a Payment Card Industry Data Security Standard (PCI DSS) compliant website that accepts secure pa yments • Launch isolated development and testing environments • Serve virtual desktop applications within your corporate network In a traditional approach to these use cases you would need a lot of upfront investment to build your own data center provision the required hardware acquire the necessary security certifications hire 
system administrators and keep everything running With VPC on AWS you have little upfront investment and you can scale your infrastructure in or out as necessary You get all of the benefits of a secure environment at no extra cost; AWS security controls certifications accreditations and features me et the security criteria required by some of the most discerning and security conscious customers in large enterprise as well as governmental agencies For a full list of certifications and accreditations see the AWS Compliance Center This paper highlights common use cases and best practices for Amazon VPC and related services Understanding Amazon Virtual Private Cloud Amazon VPC is a secure private and isolated section of the AWS cloud where you can launch AWS resources in a virtual network topology that you define When you create a VPC you provide a set of private IP v4 addresses that you want instances in your VPC to use You specify this set of addresses in the form of a Classless Inter Domain ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 2 Routing (CIDR) block for example 10000/16 You can assign block sizes of between /28 (16 IP v4 addresses) and /16 (65536 IP v4 addresses) You can also add a set of IPv6 addresses to your VPC IPv6 addresses are allocated from an Amazon owned range of add resses and the VPC receives a /56 (more than 1021 IPv6 addresses) In Amazon VPC each Amazon Elastic Compute Cloud (Amazon EC2) instance has a default network interface that is assigned a primary private IP address on your Amazon VPC network You can cre ate and attach additional elastic network interfaces (ENI) to any Amazon EC2 instance in your VPC Each ENI has its own MAC address It can have multiple IPv6 or private IP v4 addresses and it can be assigned to a specific security group The total number of supported ENIs and private IP addresses per instance depends on the instance type The ENIs can be created in different subnets within the same Availability Zone a nd attached to a single instance to build for example a low cost management network or network and security appliances The secondary ENIs and private IP addresses can be moved within the same subnet to other instances for lowcost high availability sol utions To each private IP v4 address you can associate a public elastic IP v4 address to make the instance reachable from the Internet IPv6 addresses are the same whether inside the VPC or on the public Internet (if the subnet is public ) You can also con figure your Amazon EC2 instance to be assigned a public IPv4 address at launch Public IP v4 addresses are assigned to your instances from the Amazon pool of public IP v4 addresses; they are not associated with your account With support for multiple IPv6 addresses private IPv4 addresses and Elastic IP addresse s you can among other things use multiple SSL certificates on a single server and associate each certificate with a specific IP address There are some default limits on the number of compon ents you can deploy in your VPC as documented in Amazon VPC Limits To request an increase in any of these limits fill out the Amazon VPC Limits form Different Levels of Network Isolation You can set up your VPC subnets as public private or VPN only In order to set up a public subnet you have to configure its routing table so that traffic from that subnet to the Internet is routed through an Internet gateway associated with the VPC as shown in Figure 1 By assigning EIP addresses to instances in that subnet you can make them 
reachable from the Internet over IPv4 as well It is a best prac tice to restrict both ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 3 ingress and egress traffic for these instances by leveraging stateful security group rules for your instances You can also use network a ddress translation ( NAT ) gateways (for IPv4 traffic) and egress only gateways (for IPv6 traffic) on private subnets to enable them to reach Internet addresses without allowing inbound traffic Stateless network filtering can also be applied for each subnet by setting up network access control lists (ACLs) for the subnet Figure 1: Example of a VPC with a public subnet only For private subnets traffic to the Internet can be routed through a NAT gateway or NAT instance with a public EIP that resides in a public subnet This configuration allows your resources in the private subnet to connect outbound traffic to the Internet without allocating Elastic IP addresse s or accepting direct inbound conne ctions AWS provides a managed NAT gateway or you can use your own Amazon EC2 based NAT appliance Figure 2 shows an example of a VPC with both public and private subnets using an AWS NAT gateway ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 4 Figure 2: Example of a VPC with public and private subnets By attaching a virtual private gateway to your VPC you can create a VPN connection between your VPC and your own data center for IPv4 traffic as shown in Figure 3 The VPN connection uses industry standard IPsec tunnels (IKEv1 PSK with encryption using AES256 and HMAC SHA2 with various Diffie Hellman groups ) to mutually authenticate each gateway and to protect against eavesdropping or tampering while your data is in transit For redundancy each VPN connection has two tunnels with each tunnel using a unique virtual private gateway public IP v4 address ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 5 Figure 3: Example of a VPC isolated from the Internet and connected through VPN to a corporate data center You have two routin g options for setting up a VPN connection: dynamic routing using Border Gateway Protocol (BGP) or static routing For BGP you need the IP v4 address and the BGP autonomous system number (ASN) of the customer gateway before attaching it to a VPC Once you ha ve provided this information you can download a configuration template for a number of different VPN devices and configure both VPN tunnels For devices that do not support BGP you may set up one or more static routes back to your on premises network by providing the corresponding CIDR ranges when you configure your VPN connection You then configure static routes on your VPN customer gateway and on other internal network devices to route traffic to your VPC via the IPsec tunnel If you choose to have onl y a virtual private gateway with a connection to your on premises network you can route your Internet bound traffic over the VPN and control all egress traffic with your existing security policies and network controls You can also use AWS Direct Connect to establish a private logical connection from your on premises network directly to your Amazon VPC AWS Direct Connect provides a private high bandwidth network connection between your network and your VPC You can use multiple logical connection s to establish private connectivity to multiple VPCs while maintaining network isolation With AWS Direct Connect you can establish 1 Gbps or 10 Gbps dedicated network 
connections between AWS and any of the AWS Direct Connect locations A dedicated connection can be partitioned into multiple logical connections by using industry standard 8021Q VLANs In this way you can use the same connection to access public ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 6 resources such as objects stored in Amazon Simple Storage Service (Amazon S3) that use public ly accessible IPv4 and IPv6 address es and private resources such as Amazon EC2 instances that are running within a VPC using Amazon owned IPv6 space or private IPv4 space —all while maint aining network separation between the public and private environments You can choose a partner from the AWS Partner Network (APN) to integrate the AWS Direct Connect endpoint in an AWS Direc t Connect location with your remote networks Figure 4 shows a typical AWS Direct Connect setup Figure 4: Example of using VPC and AWS Direct Connect with a customer remote network Finally you may combine all of these diffe rent options in any combination that make the most sense for your business and security policies For example you could attach a VPC to your existing data center with a virtual private gateway and set up an addit ional public subnet to connect to other AWS services that do not run within the VPC such as Amazon S3 Amazon Simple Queue Service (Amazon SQS) or Amazon Simple Notification Service (Amazon SNS) In this situation you could also leverage IAM Roles for Amazon EC2 for accessing these services and configure IAM policies to only allow access from the Elastic IP address of the NAT server ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 7 Example Scenarios Becau se of the inherent flexibility of Amazon VPC you can design a virtual network topology that meets your business and IT security requirements for a variety of different use cases To understand the true potential of Amazon VPC let’s take a few of the most common use cases: • Host a PCI compliant e commerce website • Build a development and test environment • Plan for disaster recovery and business continuity • Extend your data center into the cloud • Create branch office and business unit networks Host a PCI Complia nt ECommerce Website Ecommerce websites often handle sensitive data such as credit card information user profiles and purchase history As such they require a Payment Card Industry Data Security Standard (PCI DSS) compliant infrastructure in order to protect sensitive customer data Because AWS is accredited as a Level 1 service provider under PCI DSS you can run your application on PCI compliant technology infrastructure for storing processing and transmitting credit card information in the cloud As a merchant you still have to manage your own PCI certification but by using an accredited infrastructure service provider you don’t need to put additional effort into PCI compliance at the infrastructure level For more information about PCI complia nce see the AWS Compliance Center For example you can create a VPC to host the customer database and manage the checkout process of your ecommerce website To offer high availability you set up private subnets in each Availability Zone within the same region and then deploy your customer and order management databases in each Availability Zone Your checkout servers will be in an Auto Sca ling group over several private subnets in different Availability Zones Those servers will be behind an elastic load balancer that spans public subnets across all 
used Availability Zones and the elastic load balancer can be protected by a n AWS w eb applic ation firewall (WAF) By combining VPC subnets network ACLs and security groups you have fine grained control over access to your AWS infrastructure You’ll be prepared for the main challenges —scalability security ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 8 elasticity and availability —for the most sensitive part of commerce websites Figure 5 shows an example of a n ECommerce architecture Figure 5: Example of a n ECommerce architecture Build a Development and Test Environment Software environments are in constant flu x with new versions features patches and updates Software changes must often be deployed rapidly with little time to carry out regression testing Your ideal test environment would be an exact replica of your production environment where you would ap ply your updates and then test them against a typical workload When the update or new version passes all tests you can roll it into production with greater confidence To build such a test environment in house you would have to provision a lot of hardwa re that would go unused most of the time Sometimes this unused hardware is subsequently repurposed leaving you without your test environment when you need it Amazon VPC can help you build an economical functional and isolated test environment that sim ulates your live production environment that can be launched when you need it and shut down when you’re finished testing You don’t have to buy expensive hardware; you are more flexible and agile when your environment changes; your test environment can tra nsparently interact within your on premises network by using LDAP messaging and monitoring; and you pay AWS only for what you actually ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 9 use This process can even be fully automated and integrated into your software development process Figure 6 shows an example of development test and production environment s within different VPCs Figure 6: Example of development test and production environment s The same logic applies to experimental applications When you are eval uating a new software package that you want to keep isolated from your production environment you can install it on a few Amazon EC2 instances inside your test environment within a VPC and then give access to a selected set of internal users If all goes well you can transition these images into production and terminate unneeded resources ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 10 Plan for Disaster Recovery and Business Continuity The consequences of a disaster affecting your data center can be devastating for your business if you are not prepared for such an event It is worth spending time devising a strategy to minimize the impact on your operations when these events happen Trad itional approaches to disaster recovery usually require labor intensive backups and expensive standby equipment Instead consider including Amazon VPC in your disaster recovery plan The elastic dynamic nature of AWS is ideal for disaster scenarios where there are sudden spikes in resource requirements Start by identifying the IT assets that are most critical to your business As in the test environment described previously in this paper you can automate the replication of your production environment to duplicate the functionality of your critical assets Using automated processes you can back up your 
production data to Amazon Elastic Block Store (Amazon EBS) volumes or Amazon S3 buckets Database contents can be continually replicated to your AWS infra structure using AWS Database Migration Service (AWS DMS) You can write declarative AWS CloudFormation templates to describe your VPC infrastructure stack which you can launch automatically in any AWS region or Availability Zone In the event of a disaste r you can quickly launch a replication of your environment in the VPC and then direct your business traffic to those servers If a disaster involves only the loss of data from your in house servers you can recover it from the Amazon EBS data volumes that you’ve been using as backup storage For more information read Using Amazon Web Services for Disaster Recovery which is available at the AWS Architecture Center Extend Your Data Center into the Cloud If you have invested in building your own data center you may be facing challenges to keep up with constantly changing capacity requirements Occasional spikes in demand may exceed your total capacity If your enterprise is successful even routine operations will eventually reach the capacity limits of your data center and you’ll have to decide how to extend that capacity Building a new data center is one way but it is expensive and slow and the risk of underprovisioning or overprovisioning is high In both of these cases Amazon VPC can help you by serving as an extension of your own data center ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 11 Amazon VPC allows you to specify your own IP address range so you can ext end your network into AWS in much the same way you would extend an existing network into a new physical data center or branch office VPN and AWS Direct Connect connectivity options allow these networks to be seamlessly and securely integrated to create a single corporate network capable of supporting your users and applications regardless of where they are physically located And just like a physical extension of a data center IT resources hosted in VPC will be able to leverage existing centralized IT systems like user authentication monitoring logging change management or deployment services without the need to change how users or systems administrators access or manage your applications External connectivity from this extended virtual data cente r is also completely up to you You may choose to direct all VPC traffic to traverse your existing network infrastructure to control which existing internal and external networks your Amazon EC2 instances can access This approach for example allows you to leverage all of your existing Internet based network controls for your entire network Figure 7 shows an example of a data center that has been extended into AWS Figure 7: Example of a data center extended into AWS that leverages a customer’s existing connection to the Internet Additionally you could also choose to leverage the extensive Internet connectivity of AWS to offload traffic from on premises firewalls and load balancers and selectively present IPv6 endpoints ev en if your on premises network only supports IPv4 You can deploy an AWS WAF to protect your infrastructure against attacks leverage an application load balancer in your VPC to direct traffic to a mix of AWS based and on premises resources using a VPN con nection to provide a seamless end user experience as shown in Figure 8 ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 12 Figure 8: Example of a data center 
extended into AWS that leverages multiple connections to the Internet Create Branch Office and Business Unit Networks If you have branch offices that require separate but interconnected local networks consider deploying separate VPCs for each branch office Applications can easily communicate with each other using VPC peering subject to VPC security group rules that you app ly The VPCs can even be in different AWS accounts and different regions which can help reduce latency enhance resource isolation and enable cost allocation controls If you need to limit network communication within or across subnets you can configure security groups or network ACL rules to define which instances are permitted to communicate with each other You could also use this same idea to group applications according to business unit functions Applications specific to particular business units c an be installed in separate VPCs one for each unit Figure 9 shows an example of using VPC s and VPN s for branch office scenarios ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 13 Figure 9: Example of using VPC and VPN for branch office scenarios The main advantages of using Amazon VPC over provisioning dedicated on premises hardware in a branch office are similar to those described elsewhere: you can elastically scale resources up down in and out to meet demand ensuring that you don’t underprovision or overprovision Adding capacity is easy: launch additional Amazon EC2 instances from your custom Amazon Machine Images (AMIs) When the time comes to decrease capacity simply terminate the unneeded instances manually or automatically using Auto Scaling policies Althou gh the operational tasks may be the same to keep assets running properly you won’t need dedicated remote staff and you’ll save money with the AWS pay asyouuse pricing model Best Practices for Using Amazon VPC When using Amazon VPC there are a few bes t practices you should follow : • Automate the deployment of your infrastructure • Use Multi AZ deployments in VPC for high availability • Use security groups and network ACLs • Control access with IAM users and policies • Use Amazon CloudWatch to monitor the health of your VPC instances and VPN link ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 14 Automate the Deployment of Your Infrastructure Managing your infrastructure manually is tedious error prone slow and expensive For example in the case of a disaster recovery your plan should include only a limited number of manual steps because they slow down the process Even in less critical use cases such as development and test environments we recommend that you ensure that your standby environment is an exact replica of the production environment Manually re plicating your production environment can be very challenging and it increases the risk of introducing or not discovering bugs related to dependencies in your deployment By automating the deployment with AWS CloudFormation you can describe your infrastructure in a declarative way by writing a template You can use the template to deploy predefined stacks within a very short time in any AWS region The template can fully a utomate creation of subnets routing information security groups provisioning of AWS resources —whatever you need By using AWS CloudFormation helper scripts you can use standard Amazon Machine Images (AMIs) that will upon startup of Amazon EC2 instance s install all of the software at the right version required for your deployment 
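As a minimal illustration of the declarative approach described above, the following sketch uses boto3 to deploy a small CloudFormation template that creates a VPC with one subnet. The template body, stack name, region, and CIDR ranges are examples only, not a recommended design.

import boto3

# Illustrative template: one VPC containing a single subnet.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  DemoSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref DemoVPC
      CidrBlock: 10.0.0.0/24
"""

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(StackName="vpc-demo", TemplateBody=TEMPLATE)

# Block until stack creation completes (the waiter raises an error if creation fails).
cloudformation.get_waiter("stack_create_complete").wait(StackName="vpc-demo")

Because the template is plain text, it can be kept in version control and promoted through the same build, test, and deploy phases as application code.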
Automated infrastructure deployment should be fully integrated into your processes You should treat your automation scripts like software that needs to be tested and maintai ned according to your standards and policies A continuous deployment methodology using services such as AWS CodePipeline to orchestrate the full process through build test and deploy phases can help make infrastructure deployment a regular and well tested business process Thoroughly tested automated processes are often faster cheaper more reliable and more secure than processes that rely on many manual steps Use Multi AZ Deployments in VPC for High Availability Architectures designed for high ava ilability typically distribute AWS resources redundantly across multiple Availability Zones within the same region If a service disruption occurs in one Availability Zone you can redirect traffic to the other Availability Zone to limit the impact of the disruption This general best practice also applies to architectures that include Amazon VPC ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 15 Although a VPC can span multiple Availability Zones each subnet within the VPC is restricted to a single Availability Zone In order to deploy a multi AZ Amazon Relational Database Service (Amazon RDS) instance for example you first have to configure VPC subnets in each Availability Zone within the region where the database instances will be launched Likewise Auto Scaling groups and elastic load balancers can span multiple Availability Zones by being deployed across VPC subnets that have been created for each zone Use Security Groups and Network ACLs Amazon VPC security groups allow you to control both ingress and egress traffic and you can define rules for a ll IP protocols and ports For a full overview of the features available with Amazon VPC security groups see Security Groups for Your VPC Amazon VPC security groups are stateful firewalls allowing return traffic for permitted TCP connections A network access control list ( ACL) is an additional layer of security that acts as a firewall to control traffic into and out of a subnet You can define access control rules for each of your subnets Although a VPC security group operates at the instance level a network ACL operates at the subnet level For a network ACL you can specify both allow and deny rules for both ingress and egress Network ACLs are stateless firewalls ; return traffic for TCP connections must be explicitly allowed on the TCP ephemeral ports (typically 32768 65535) As a best practice you should secure your infrastructure with multiple layers of defense By running your infrastructure in a VPC you can control which instances are exposed to the Internet in the first place and you can define both security groups and network ACLs to further protect your infrastructure at the infrastructure and subnet levels Additionally you should secure your i nstances with a firewall at the operating system level and follow other security best practices as outlined in AWS Security Resources Control Access with IAM Users and Policies With AWS Identity and Access Management (IAM) you can create and manage users in your AWS account A user can be either a person or an application that needs to interact with AWS With IAM you can centrally manage your users their security credentials such as access credentials and permissions that control which AWS ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 16 resources the users can access You 
typically create IAM users for users and use IAM roles for applications We recommend that you use IAM to implement a least privilege security strategy For exam ple you should not use a single AWS IAM user to manage all aspects of your AWS infrastructure Instead we recommend that you define user groups (or roles if using federated logins) for the different tasks that have to be performed on AWS and restrict each user to exactly the functionality he or she requires to perform that role For example you can create a network admin group of users in IAM and then give only that group the rights to create and modify the VPC For each user group define restrictive p olicies that grant each user access only to those services he or she needs Make sure that only authorized people in your organization have access to these users Use services such as Amazon GuardDuty to detect anomalous access patterns Implement strong a uthentication requirements such as minimum password length and complexity and consider multifactor authentication to reduce the risk of compromising your infrastructure For more information on how to define IAM users and policies see Controlling Access to Amazon VPC Resources Use Amazon CloudWatch to Monitor the Health of Your VPC Instances and VPN Link Just as you do with public Amazon EC2 instances you can use Amazo n CloudWatch to monitor the performance of the instances running inside your VPC Amazon CloudWatch provides visibility into resource utilization operational performance and overall demand patterns including CPU utilization disk reads and writes and n etwork traffic The information is displayed on the AWS Management Console and is also available through the Amazon CloudWatch API so you can integrate into your existing management tools You can also view the status of your VPN connections by using eithe r the AWS Management Console or making an API call The status of each VPN tunnel will include the state (up/down) of each VPN tunnel and the amount of traffic seen across the VPN tunnels ArchivedAmazon Web Services – Extend Your IT Infrastructure with Amazon V PC Page 17 Conclusion Amazon VPC offers a wide range of tools that give you mo re control over your AWS infrastructure Within a VPC you can define your own network topology by defining subnets and routing tables and you can restrict access at the subnet level with network ACLs and at the resource level with VPC security groups Yo u can isolate your resources from the Internet and connect them to your own data center through a VPN You can assign elastic IP v4 and public IPv6 addresses to some instances and connect them to the public Internet through an Internet gateway while keeping the rest of your infrastructure in private subnets Amazon VPC makes it easier to protect your AWS resources while you keep the benefits of AWS with regard to flexibility scalability elasticity performance availability and the pay asyouuse pricing model Further Reading • Amazon VPC product page: https://awsamazoncom/vpc/ • Amazon VPC documentati on: https://awsamazoncom/documentation/vpc/ • AWS Direct Connect product page: https://awsamazoncom/directconnect/ • Getting started with AWS Direct Connect: https://awsamazoncom/directconnect/getting started/ • AWS Security Center: https://awsamazoncom/security/ • Ama zon VPC Connectivity Options: https://mediaamazonwebservicescom/AWS_Amazon_VPC_Connectivity_Opti onspdf • AWS VPN CloudHub: https://docsawsamazoncom/AmazonVPC/latest/UserGuide/VPN_CloudHub html • AWS Security Best Practices: 
https://aws.amazon.com/whitepapers/aws-security-best-practices/ • Architecting for the Cloud: Best Practices: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

Document Revisions

December 2018: Added IPv6 features. Removed references to EC2-Classic. Added AWS DMS, AWS CodePipeline, and Amazon GuardDuty. Changed the multiple-subnet strategy to multiple VPCs, VPC peering, and CloudHub. Removed the recommendation to change credentials regularly (no longer NIST recommended); added complexity and MFA.

December 2013: Major revision to reflect new functionality of Amazon VPC. Added new use cases for Amazon VPC. Added the section "Understanding Amazon Virtual Private Cloud". Added the section "Best Practices for Using Amazon VPC".

January 2010: First publication
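To illustrate the layered security group and network ACL guidance in the best practices section above, here is a brief boto3 sketch. The security group ID, network ACL ID, and CIDR range are placeholders; the rule numbers and ports are examples only.

import boto3

ec2 = boto3.client("ec2")

SG_ID = "sg-0123example"    # placeholder security group ID
ACL_ID = "acl-0456example"  # placeholder network ACL ID

# Stateful security group rule: allow inbound HTTPS from one corporate range;
# return traffic for the permitted connection is allowed automatically.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate range (example)"}],
    }],
)

# Stateless network ACL rules: because ACLs do not track connections, return
# traffic must be allowed explicitly on the TCP ephemeral port range.
ec2.create_network_acl_entry(
    NetworkAclId=ACL_ID, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="203.0.113.0/24", PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId=ACL_ID, RuleNumber=110, Protocol="6", RuleAction="allow",
    Egress=True, CidrBlock="203.0.113.0/24", PortRange={"From": 32768, "To": 65535},
)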
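Similarly, the monitoring practice described above (checking VPN tunnel state and instance metrics through the API) can be sketched as follows, again assuming boto3; the instance ID is a placeholder and the one-hour window is arbitrary.

import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Report the state (UP/DOWN) of each tunnel for every VPN connection,
# using the telemetry returned by the EC2 API.
for vpn in ec2.describe_vpn_connections()["VpnConnections"]:
    for tunnel in vpn.get("VgwTelemetry", []):
        print(vpn["VpnConnectionId"], tunnel["OutsideIpAddress"], tunnel["Status"])

# Average CPU utilization over the last hour for one instance in the VPC.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123example"}],  # placeholder ID
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)
print(sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]))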
|
General
|
consultant
|
Best Practices
|
Federal_Financial_Institutions_Examination_Council_FFIEC_Audit_Guide
|
ArchivedPage 1 of 23 Federal Financial Institutions Examination Council (FFIEC) Audit Guide October 201 5 THIS WHITEPAPER HAS BEEN ARCHIVED For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapersArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Page 2 of 23 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Page 3 of 23 Contents Executive Summary 4 Approaches for using AWS Audit Guides 4 Examiners 4 AWS Provided Evidence 4 FFIEC Audit Checklist for AWS 5 1 Governance 5 2 Network Configuration and Management 7 3 Asset Configuration and Management 9 4 Logical Access Control 10 5 Data Encryption 13 6 Security Logging and Monitoring 13 7 Security Incident Response 15 8 Disaster Recovery 15 9 Inherited Controls 17 ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Page 4 of 23 Executive Summary This AWS Federal Financial Institutions Examination Council (FFIEC) audit guide has been designed by AWS to guide financial institutions that are subject to audits by members of the FFIEC on the use and security architecture of AWS services This document is intended for use by AWS financial institution customers their examiners and audit advisors to understand the scope of AWS services and to provide guidance for implementation and examination when using AWS services as part of the financial institutions environment for customer data Approaches for using AWS Audit Guides Examiners When assessing organizations that use AWS services it is critical to understand the “ Shared Responsibility” model between AWS and the customer This audit guide organizes the requirements into common security program controls and control areas Each control references the applicable audit requirements For more detail on each control reference the applicable regulatory requirements examiner activities and AWS evidence of compliance please refer to the Coalfire FFIEC Compliance on AWS whitepaper In general AWS services should be treated similar to onpremise infrastructure services that have been traditionally used by customers for their operating services and applications Policies and processes that apply to onpremise devices and servers should also apply when supplied by AWS services Controls pertaining solely to policy or procedure are generally entirely a responsibility of the customer Similarly the management of access to AWS services either via the AWS Console or Command Line API should be treated like other privileged administrator access AWS Provided Evidence AWS services are regularly assessed against industry standards and requirements In an attempt to support a variety of industries including 
federal agencies retailers international organizations health care providers and financial institutions AWS elects to have a variety of assessments performed ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 against the services and infrastructure For a complete list and information on assessments performed by third parties please refer to the AWS Compliance web site FFIEC Audit Checklist for AWS The AWS compliance program ensures that AWS services are regularly audited against applicable standards Some control statements may be satisfied by the customer’s use of AWS (for instance physical access to sensitive data) However most controls have either shared responsibilities between AWS and the customer or are entirely the customer’s responsibility This audit checklist describes the customer ’s responsibilities for compliance with the FFIEC IT Handbook when utilizing AWS services 1 Governance Definition: Governance includes the elements required to provide senior management assurance that its direction and intent are reflected in the security posture of the customer This is achieved by utilizing a structured approach to implementing an information security program For the purposes of this audit plan it means understanding which AWS services the customer has purchased what kinds of systems and information the customer plans to use with the AWS service and what policies procedures and plans apply to these services Major audit focus: Understand what AWS services and resources are being used by the customer and ensure that the customer’s security or risk management program has taken into account their use of the public cloud environment Audit approach: As part of this audit determine who within the customer’s organization is an AWS account owner and resource owner and what kinds of AWS services and resources they are using Verify that the customer’s policies plans and procedures include cloud concepts and that cloud is included in the scope of the customers audit program ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Governance Checklist Checklist Item IT Security Program and Policy Access the security policy and program related to the use of AWS services Ensure that the program is properly document ed for oversight changes in service IT secu rity policies incident reporting and security roles Verify that there is appropriate approval for the use of AWS and the services are appropriately addressed within the information security program Confirm that an employee is assigned as authority for the use and security of AWS services and there are defined roles for those noted key roles Verify that any customer changes in AWS services are reflected in the security program Review the customer’s IT security policies and ensure that they cover AWS services and take size and complexity into consideration Review management oversight and ensure that they assess and approve the use and configuration of AWS services Ensure the customer has integrated AWS services into their SIEM tools and has a process for monitoring and addressing non compliance Review the customer’s use of any AWS reporting tools such as: Amazon CloudWatch AWS Trusted Advisor Verify that there is a policy in place for the appropriate disclosure of client information within AWS Information Security Oversight Verify that the customer has conducted oversight and annual IT assessments including any remediation(s) related to AWS services Include a review of management and B oard of Directors (B OD) oversight Risk Assessment 
Assess and review the customer’s risk assessment for AWS services including: adherence to the customer’ s risks assessment policy and procedures AWS deployed data inclusion into the cu stomer’s risks assessment and BOD oversight Verify that AWS services were included in risk assessment and privacy impact assessment Personnel Controls Verify that there are proper segregation of duties background checks and training conducted for IT operations staff Verify that the level of access for AWS services is comparable to the level of secure information and comprehensive screening including signed statements of understanding for non disclosure ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Checklist Item Systems Development Lifecycle Verify that the use of AWS development tools are documented and follow the customers SDLC process including security requirements and configuration changes Service Provider Oversight Ensure that the customer documents and follows a defined process to evaluate on board and maintain security safeguards including AWS Ensure that internal procedures include onboarding shared security responsi bility and communication process with AWS Verify that the customer’s contract with AWS includes a requirement to implement and maintain privacy and security safeguards Verify adherence to appropriate due diligence standards security program management and monitoring of service capabilities and reliability Documentation and Inventory Verify that the customer’s AWS network is fully documented and all AWS critical systems are included in inventory documentation with limited access to this documentation Review AWS Config reports for AWS resource inventory configuration history and configuration change notifications (Example API Call 1) 2 Network Configuration and Management Definition: Network management in AWS is very similar to network management onpremises except that network components such as firewalls and routers are virtual Customers must ensure that their network architecture follows the security requirements of their organization including the use of DMZs to separate public and private (untrusted and trusted) resources the segregation of resources using subnets and routing tables the secure configuration of DNS additional transmission protection in the form of a VPN and limits on inbound and outbound traffic Customers who must perform monitoring of their network can do so using hostbased intrusion detection and monitoring systems Major audit focus: Missing or inappropriately configured security controls related to external access/network security that could result in a security exposure ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Audit approach: Understand the network architecture of the customer’s AWS resources and how the resources are configured to allow external access from the public Internet and the customer’s private networks Note: AWS Trusted Advisor can be leveraged to validate and verify AWS configurations settings Network Configuration and Management Checklist Checklist Item Network Controls Identify how network segmentation is applied within the customer’s AWS environment (Example API Call 2 5) Review the customer’s overall infrastructure including use of AWS services to ensure there is no single point of failure Review AWS Security Group implementation AWS Direct Connect and Amazon VPN configuration for proper implementation of network segmentation and ACL and f irewall setting for AWS services Ensure that the customer’s procedures for 
governing the daily activities of personnel include the administration of the AWS services Confirm the customer has established appropriate logging and monitoring for Amazon EC 2 instances to ensure th at any possible security related events are identified Verify that the customer has a procedure for granting remote internet or VPN access to employees for AWS Console access and remote access to Amazon EC2 networks and systems Malicious Code Controls Assess the customer’s implementation and management of antimalware for Amazon EC2 instances in a similar manner as with physical systems Firewall Controls Review the customer’s defined process of firewall rules management within AWS and include Security Group configuration changes VPN configuration and management approval along with m aintenance of documentation of approval s Verify that the host based or other firewall configuration is properly hardened Verify if AWS Security Groups are the primary firewall solution If other firewall technologies are used the examiner should review the technology to ensure that it is properly configured to hide internal addresses block malicious code and has logging enable d Ensure AWS Security Group administration is performed from secure workstations and via HTTPS for either the AWS Console or c ommand line API Additionally ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Checklist Item ensure that multi factor authentication is enabled for any user that is assigned general administrative rights or rights to manage security groups within the AWS Console or through command line APIs Verify internal policies for restricting AWS Security Group management to select IT staff Verify that the customer’s training records include AWS security such as Amazon IAM usage EC2 Security Groups and remote access to EC2 instances 3 Asset Configuration and Management Definition: AWS customers are responsible for maintaining the security of anything they install on or connect to their AWS resources Secure management of the customer ’s AWS resources means knowing what resources the customer is using (asset inventory) securely configuring the guest OS and applications on the customers resources (secure configuration settings patching and antimalware) and controlling changes to the customers resources (change management) Major audit focus: Customers must manage their operating system and application security vulnerabilities to protect the security stability and integrity of the asset Audit approach: Validate that the customer ’s OS and applications are designed configured patched and hardened in accordance to the customer’s policies procedures and standards All OS and application management practices can be common between onpremise and AWS systems and services Asset Configuration and Management Checklist Checklist Item Change Management Controls Ensure the customer ’s use of AWS services follows the same change control processes as internal ser vices including testing back out procedures training and logs related to changes Verify that AWS services are included within the customer’s internal patch management process ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Checklist Item Ensure that patch management strategies include establishing version control o f all operation systems Amazon Machine Images and application software used within the AWS service environment Ensure that polic ies and procedures related to client information within AWS is secured in accordance with the customer’s IT Security Policies 
Operating System Access Ensure the customer’s internal policies and procedures call for restricting and monitoring privileged access to AWS services and Amazon EC2 instances to de signated administrator Review the Amazon EC2 instances in use within the customer’s organization If AWS monitoring tools are used such as AWS CloudWatch review its use for logical security Application Access Controls Review controls for applications implemented on Amazon EC2 instances to ensure they are appropriate for the risk of the application and the needs of the customer users Ensure that authentication and authorization methods application access controls and assessment event logging for applications implemented on Amazon EC2 instances is conducted in a similar manner as with physical systems Database Security Controls Review access and data modification activity for Amazon RDS or customer databases in a similar manner as with internal systems Determine if production data is utilized in test environment using AWS database services and if so ensure that the security policies and controls are configured to match production controls 4 Logical Access Control Definition: Logical access controls determine not only who or what can have access to a specific system resource but the type of actions that can be performed on the resource (read write etc) As part of controlling access to AWS resources users and processes must present credentials to confirm that they are authorized to perform specific functions or have access to specific resources The credentials required by AWS vary depending on the type of service and the access method and include passwords cryptographic keys and certificates Access to ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 AWS resources can be enabled through the AWS account individual AWS Identify and Access Management (IAM) user accounts created under the AWS account or identity federation with the customer’s corporate directory (single sign on) AWS Identity and Access Management (IAM) enables a customer’s users to securely control access to AWS services and resources Using IAM a customer can create and manage AWS users and groups and use permissions to allow and deny access to AWS resources Major audit focus: This portion of the audit focuses on identifying how users and permissions are set up in AWS for the services being used by the customer It is also important to ensure that the credentials associated with all of the customer’s AWS accounts are being managed securely by the customer Audit approach: Validate that permissions for AWS assets are being managed in accordance with customer’s internal policies procedures and processes Note: AWS Trusted Advisor can be leveraged to validate and verify IAM Users Groups and Role configurations Logical Access Control Checklist Checklist Item Access Management Authentication and Authorization Ensure there are internal policies and procedures for managing access to AWS services and Amazon EC2 instances Federated Access Controls: Ensure that the mechanisms properly apply internal role assignment to AWS permission and understand the processes and methods to authorize access levels to ensure a least privilege model has been implemented Native AWS Access C ontrols: Compare Amazon IAM roles and user assignment to functional roles and responsibilities Temporary credentials should also be considered to ensure that these credentials are only assigned limited privileges (Example A PI Call 6 7) Instant Access Controls: For Amazon EC2 instances 
review implemented roles and assignments based on the local operating systems access controls mechanisms and/or any federation that the customer has established for managing access to the EC2 virtual machines Review the records for granting access the type of access control in use within the customer’s organization as it related to AWS services and user account policy and password complexities and validate that they extend to AWS services ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Checklist Item Ensure that multi factor identification is enabled for users and no shared accounts exist as it relates to AWS services Remote Access Ensure internal policies and procedures are followed for managing remote access to AWS services and Amazon EC2 instances Note: All access to AWS and Amazon EC2 instances is “remote acces s” by definition unless Direct Connect has been configured Review access logging and Amazon IAM configuration Amazon IAM accounts for network access should be configured for multi factor authentication (Example API Call 8) Ensure that Security groups are configured to allow for direct access to common management ports for Amazon instances (Example API Call 9) Ensure that multi factor authentication mechanisms and encryption configuratio n have been implemented on the system in a similar manner as with physical systems Personnel Control & Segregation of Duties Ensure that the IT staff are aware of the informa tion security program applicable to AWS services and how it relates to their job functions Review the customer’s type of access control in use within the ir organization as it relates to AWS services : Federated Access Controls: Review internal role assignments to AWS permissions and understand the processes and methods to authorize Native AWS Access Controls: Compare Amazon IAM roles and user assignment to functional roles and responsibilities (Example API Cal l 10) Instance Access Controls: Review implemented roles and assignments based on the local operating systems access controls mechanisms and/or any federation that the customer has established for managing access to the EC2 virtual machines (Example API Call 11) Verify internal policies and procedures for managing access to AWS services and Amazon EC2 instances Individuals monitoring security administrator logs should function independently from individua ls responsible for operations administrators Verify that information security awareness training includes AWS security such as Amazon IAM usage EC2 Security Group s and remote access to EC2 instances ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 5 Data Encryption Definition: Data stored in AWS is secure by default; only AWS owners have access to the AWS resources they create However some customers who have sensitive data may require additional protection which they can enable by encrypting the data when it is stored on AWS Only Amazon S3 service currently provides an automated serverside encryption function in addition to allowing customers to encrypt on the customer side before the data is stored For other AWS data storage options the customer must perform encryption of the data Major audit focus: Data at rest should be encrypted in the same way as the customer protects onpremise data Also many security policies consider the Internet an insecure communications medium and would require the encryption of data in transit Improper prot ection of customers’ data could create a security exposure for the customer Audit approach: Understand where 
the data resides and validate the methods used to protect the data at rest and in transit (also referred to as “data in flight”) Note: AWS Trusted Advisor can be leveraged to validate and verify permissions and access to data assets Data Encryption Checklist Checklist Item Encryption Controls Ensure there are appropriate controls in place to protect confidential customer information in trans it while using AWS services Review methods for connection to AWS Consol e management API S3 RDS and Amazon EC2 VPN for enforcement of encryption Review internal policies and procedures for key management including AWS services and Amazon EC2 instances (Example API Call 12 14) 6 Security Logging and Monitoring Definition: Audit logs record a variety of events occurring within a customer’s information systems and networks Audit logs are used to identify activity that ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 may impact the security of those systems whether in realtime or after the fact so the proper configuration and protection of the logs is important Major audit focus: Systems must be logged and monitored just as they are for onpremise systems If AWS systems are not included in the overall company security plan critical systems may be omitted from scope for monitoring efforts Audit approach: Validate that audit logging is being performed on the guest OS and critical applications installed on the customers Amazon EC2 instances and that implementation is in alignmen t with the customer’s policies and procedures especially as it relates to the storage protection and analysis of the logs Security Logging and Monitoring Checklist: Checklist Item Logging Assessment Trails and Monitoring Review the customers logging and monitoring policies and procedures and ensure their inclusion of AWS services and that they address segregation of duties se curity and access authority Verify that there is a process to monitor service configuration changes (Example API Call 15) Verify that logging mechanisms are configured to send logs to a centralized server and ensure that for Ama zon EC2 instances the proper type and format of logs are retained in a similar manner as with physical systems For customers using Amaz on CloudWatch review the customer’s process and record their use of network monitoring Specifically review VPC FlowLog events (Example API Call 16) Intrusion Detection and Response Review host based IDS on Amazon EC2 instances in a similar manner as with physical systems Review AWS provided evidence on where information on intrusion detection processes can be reviewed Review the customer’s use and configuration of Amazon CloudWatch and how logs are stored and protected ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 7 Security Incident Response Definition: Under a Shared Responsibility Model security events may by monitored by the interaction of both AWS and AWS customers AWS detects and responds to events impacting the hypervisor and the underlying infrastructure Customers manage events from the guest operating system up through the application The customer should understand incident response responsibilities and adapt existing security monitoring/alerting/audit tools and processes for their AWS resources Major audit focus: Security events should be monitored regardless of where the assets reside The auditor can assess consistency of deploying incident management controls across all environments and validate full coverage through testing Audit approach: Assess the existence and 
operational effectiveness of the incident management controls for systems in the AWS environment Security Incident Response Checklist: Checklist Item Incident Reporting Ensure the incident response plan and policy includes appropriate AWS reporting processes as well as communication procedures between the customer and AWS Ensure the customer is leveraging existing incident monitoring tools as well as AWS available tools to monitor the use of AWS services (Example API Call 17 18) Verify that the customer’s use of AWS services aligns with and can support their internally defined thresholds Verify that the Incident Response Plan undergoes an annual review and changes related to AWS are made as needed Note if the Incident Response Plan has a customer notification procedure 8 Disaster Recovery Definition: AWS provides a highly available infrastructure that allows customers to architect resilient applications and quickly respond to major incidents or disaster scenarios However customers must ensure that they ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 configure systems that require high availability or quick recovery times to take advantage of the multiple Regions and Availability Zones that AWS offers Major audit focus: An unidentified single point of failure and/or inadequate planning to address disaster recovery scenarios could result in a significant impact to the customer While AWS provides service level agreements (SLAs) at the individual instance/service level these should not be confused with a customer’s business continuity (BC) and disaster recovery (DR) objectives such as Recovery Time Objective (RTO) Recovery Point Objective (RPO) The BC/DR parameters are associated with solution design A more resilient design would often utilize multiple components in different AWS availability zones and involve data replication Audit approach: Understand the DR strategy for the customer’s environment and determine the faulttolerant architecture employed for the cus tomer’s critical assets Note: AWS Trusted Advisor can be leveraged to validate and verify some aspects of the customer’s resiliency capabilities Disaster Recovery Checklist : Checklist Item Business Continuity Planning (BCP) Ensure the customer ha s a comprehensive BCP that includes AWS services Within the Plan ensure that AWS is included in the emergency preparedness and crisis managem ent elements senior manager oversight responsibilities and the testing plan Ensure the customer has a recovery plan that includes the proper use of AWS availability zones Review the annual BCP test for AWS services Backup and Storage Controls Review the use of AWS services for off site backup and ensure it is consist ent with the customer’s policy and procedures as well as follows AWS best practices Review inventory of data backed up to AWS services as off site backup Ensure policies and procedures address scalability as it relates to AWS services Conduct a test of backup data stored in AWS services (Example API Call 19 21) ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 9 Inherited Controls Definition: Amazon has many years of experience in designing constructing and operating largescale datacenters This experience has been applied to the AWS platform and infrastructure AWS datacenters are housed in nondescript faciliti es Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance intrusion detection systems and other electronic 
means Authorized staff must pass twofactor authentication a minimum of two times to access datacenter floors All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff AWS only provides datacenter access and information to employees and contractors who have a legitimate business need for such privileges When an employee no longer has a business need for these privileges his or her access is immediately revoked even if he or she continue s to be an employee of Amazon or Amazon Web Services All physical access to datacenters by AWS employees is logged and audited routinely Major audit focus: The purpose of this audit section is to demonstrate that the customer conducted the appropriate due diligence in selecting service providers Audit approach: Understand how the customer can request and evaluate thirdparty attestations and certifications in order to gain reasonable assurance of the design and operating effectiveness of control objectives and controls Inherited Controls Checklist Checklist Item Physical Security & Environmental Controls Review the AWS provided evidence for details on where information on intrusion detection processes can be reviewed that are managed by AWS for physical security controls ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Conclusion There are many thirdparty tools that can assist you with your assessment As AWS customers have full control of their operating systems network settings and traffic routing a majority of tools used inhouse can be used to assess and audit the assets in AWS A useful tool provided by AWS is the AWS Trusted Advisor tool AWS Trusted Advisor draws upon best practices learned from AWS’ aggregated operational history of serving hundreds of thousands of AWS customers The AWS Trusted Advisor performs several fundamental checks of your AWS environment and makes recommendations when opportunities exist to save money improve system performance or close security gaps This tool may be leveraged to perform some of the audit checklist items to enhance and support your organizations auditing and assessment processes ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Appendix A: References and Further Reading 1 Amazon Web Services Risk and Compliance Whitepaper – https://d0awsstaticcom/whitepapers/compliance/AWS_Risk_and_Com pliance_Whitepaperpdf 2 AWS OCIE Cybersecurity Workbook https://d0awsstaticcom/whitepapers/compliance/AWS_SEC_Workboo kpdf 3 Using Amazon Web Services for Disaster Recovery http://d36cz9buwru1ttcloudfrontnet/AWS_Disaster_Recoverypdf 4 Identity federation sample application for an Active Directory use case http://awsamazoncom/code/1288653099190193 5 Single Signon with Windows ADFS to Amazon EC2 NET Applications http://awsamazoncom/articles/3698?_encoding=UTF8&queryArg=sear chQuery&x=20&y=25&fromSearch=1&searchPath=all&searchQuery=iden tity%20federation 6 Authenticating Users of AWS Mobile Applications with a Token Vending Machine http://awsamazoncom/articles/4611615499399490?_encoding=UTF8& queryArg=searchQuery&fromSearch=1&searchQuery=Token%20Vending %20machine 7 ClientSide Data Encryption with the AWS SDK for Java and Amazon S3 http://awsamazoncom/articles/2850096021478074 8 AWS Command Line Interface – http://docsawsamazoncom/cli/latest/userguide/clichapwelcomehtml 9 Amazon Web Services Acceptable Use Policy http://awsamazoncom/aup/ ArchivedAmazon Web Services – FFIEC Audit Guide October 2015 Appendix B: Glossary of Terms API: 
Application Programming Interface (API). In the context of AWS, customer access points are called API endpoints. They allow secure HTTP access (HTTPS), which lets you establish a secure communication session with your storage or compute instances within AWS. AWS provides SDKs and a CLI reference that allow customers to programmatically manage AWS services through the API.
Authentication: Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be.
Availability Zone: Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region.
EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Hypervisor: A hypervisor, also called a Virtual Machine Monitor (VMM), is platform virtualization software or hardware that allows multiple operating systems to run concurrently on a host computer.
IAM: AWS Identity and Access Management (IAM) enables a customer to create multiple users and manage the permissions for each of these users within their AWS account.
Object: The fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The data portion is opaque to Amazon S3. The metadata is a set of name-value pairs that describe the object, including some default metadata such as the date last modified and standard HTTP metadata such as Content-Type. The developer can also specify custom metadata at the time the object is stored.
Service: Software or computing ability provided across a network (for example, EC2, S3, VPC).

Appendix C: API Calls
The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. Read more: http://docs.aws.amazon.com/cli/latest/reference/index.html#cli-aws and http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

1. List all resources with tags:
aws ec2 describe-tags
(http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html)
2. Review VPNs:
aws ec2 describe-customer-gateways
aws ec2 describe-vpn-connections
3. Review Direct Connect:
aws directconnect describe-connections
aws directconnect describe-interconnects
aws directconnect describe-connections-on-interconnect
aws directconnect describe-virtual-interfaces
4. Review VPCs, subnets, and route tables:
aws ec2 describe-vpcs
aws ec2 describe-subnets
aws ec2 describe-route-tables
5. Review security groups and network ACLs:
aws ec2 describe-network-acls
aws ec2 describe-security-groups
6. List IAM roles, groups, and users:
aws iam list-roles
aws iam list-groups
aws iam list-users
7. List all IAM policies:
aws iam list-policies
8. List IAM users with MFA:
aws iam list-mfa-devices
9. List security groups:
aws ec2 describe-security-groups
10. List policies assigned to groups, roles, and users:
aws iam list-attached-role-policies --role-name XXXX
aws iam list-attached-group-policies --group-name XXXX
aws iam list-attached-user-policies --user-name XXXX
(where XXXX is a resource name within the customer's AWS account)
11. Review Amazon EC2 instances launched with roles:
a. Identify the Amazon EC2 role ARN:
aws iam list-roles
b. Filter Amazon EC2 instances by ARN:
aws ec2 describe-instances --filters "Name=iam-instance-profile.arn,Values=arn:aws:iam::account-id:instance-profile/role-name"
12. List KMS keys:
aws kms list-aliases
13. List the key rotation status:
aws kms get-key-rotation-status --key-id XXX
(where XXX = a key ID in the AWS account)
14. List EBS volumes encrypted with KMS keys:
aws ec2 describe-volumes
15. Confirm that the AWS Config service is enabled within a Region:
aws configservice get-status --region XXXXX
(where XXXXX = the targeted AWS Region, for example us-east-1)
16. Examine the current VPC Flow Logs status:
aws ec2 describe-flow-logs
a. View VPC Flow Log events in CloudWatch Logs, using the log group name returned by the call above:
aws logs describe-log-streams --log-group-name my-logs
aws logs get-log-events --log-group-name my-logs --log-stream-name 20150601
17. Review all CloudWatch alarms:
aws cloudwatch describe-alarms
18. Review alarms associated with a specific resource and metric:
aws cloudwatch describe-alarms-for-metric --metric-name CPUUtilization --namespace AWS/EC2 --dimensions Name=InstanceId,Value=XXXXX
(where XXXXX = the EC2 instance ID)
19. Create a snapshot/backup of an EBS volume:
aws ec2 create-snapshot --volume-id XXXXXXX
(where XXXXXXX = the ID of a volume within the AWS account)
20. Confirm that the snapshot/backup completed:
aws ec2 describe-snapshots --filters "Name=volume-id,Values=XXXXXX"
21. Create a volume from a snapshot (restoring a backup):
aws ec2 create-volume --availability-zone XXXX --snapshot-id YYYY
(where XXXX is the Availability Zone where you want the new volume created and YYYY is the snapshot ID that you want to restore from)
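The individual calls above can also be combined into simple evidence-gathering scripts. The following is a minimal sketch, not an AWS-provided tool, that flags IAM users with no registered MFA device in support of the identity and access management checklist items earlier in this guide; the loop structure and output wording are illustrative assumptions.

#!/usr/bin/env bash
# Minimal audit helper sketch: report IAM users that have no MFA device registered.
# Assumes the AWS CLI is configured with credentials permitted to call iam:ListUsers and iam:ListMFADevices.
set -euo pipefail
for user in $(aws iam list-users --query 'Users[].UserName' --output text); do
  # list-mfa-devices returns an empty list for users with no MFA device
  if [ -z "$(aws iam list-mfa-devices --user-name "$user" --query 'MFADevices' --output text)" ]; then
    echo "No MFA device registered for IAM user: $user"
  fi
done

The same pattern can be extended to other checklist items, for example by iterating over security groups or snapshots returned by the describe calls listed above.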
|
General
|
consultant
|
Best Practices
|
File_Gateway_for_Hybrid_Cloud_Storage_Architectures
|
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers File Gateway for Hybrid Cloud Storage Architectures Overview and Best Practices for the File Gateway Configuration of the AWS Storage Gateway Service March 2019 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assu rances from AWS and its affiliates suppliers or licensors AWS’s products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Introduction 1 File Gateway Architecture 1 File to Object Mapping 2 Read/Write Operations and Local Cache 4 Choosing the Right Cache Resources 6 Security and Access Controls Within a Local Area Network 6 Monitoring Cache and Traffic 7 File Gateway Bucket Inventory 7 Amazon S3 and the File Gateway 10 File Gateway Use Cases 12 Cloud Tiering 13 Hybrid Cloud Backup 13 Conclusion 15 Contributors 15 Further Reading 15 Document Revisions 15 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract Organizations are looking for ways to reduce their physical data center footprints particularly for storage arrays used as secondary file backup or on demand workloads However providing data services that bridge private data centers and the cloud comes with a unique set of challenges Traditional data center storage services rely on low latency network attached storage (NAS) and storage area network (SAN) protocols to access storage locally Cloud native applications are generally optimized for API acces s to data in scalable and durable cloud object storage such as Amazon Simple Storage Service (Amazon S3) This paper outlines the basic architecture and best practices for building hybrid cloud storage environments using the AWS Storage Gateway in a file gateway configuration to address key use cases such as cloud tiering hybrid cloud backup distribution and cloud processing of data generated by on premises applications This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 1 Introduction Organizations are looking for ways to reduce their physical data center infrastructure A great way to start is by moving secondary or tertiary workloads such as long term file retention and backup and re covery operations to the cloud In addition organ izations want to take advantage of the elasticity of cloud architectures and features to access and use their data in new on demand ways that a traditional data center 
infrastructure can’t support AWS Storage Gateway has multiple gateway types including a file gateway that provides lowlatency Network File System (NFS) and Server Message Block (SMB) access to Amazon Simple Storage Service (Amazon S3) objects from on premises applications At the same time customers can access that data from any Amazon S 3 APIenabled application Configuring AWS Storage Gateway as a file gateway enables hybrid cloud storage architectures in use cases such as archiving on demand bursting of workloads and backup to the AWS Cloud Individual files that are written to Amazo n S3 using the file gateway are stored as independent objects This provides high durability lowcost flexible storage with virtually infinite capacity Files are stored as objects in Amazon S3 in their original format without any proprietary modificatio n This means that data is readily available to data analytics and machine learning applications and services that natively integrate with Amazon S3 buckets such as Amazon EMR Amazon Athena or Amazon Trans cribe It also allows for storage management through native Amazon S3 features such as lifecycle policies analytics and crossregion replication (CRR) A file gateway communicates efficiently between private data centers and AWS Traditional NAS protocols (SMB and NFS) are trans lated to object storage API calls This makes file gateway an ideal component for organizations looking for tiered storage of file or backup data with lowlatency local access and durable storage in the cloud File Gateway Architecture A file gateway provides a simple solution for presenting one or more Amazon S3 buckets and their objects as a mountable NFS or SMB file share to one or more clients onpremises The file gateway is deployed as a virtual machine in VMware ESXi or Microsoft Hyper V environments on premises or in an Amazon Elastic Compute Cloud (Amazon EC2) instance in AWS File gateway can also be deployed in data center and remote office locations on a Stora ge Gateway hardware appliance When deployed file gateway provides a seamless connection between onpremises NFS (v30 or v41) or SMB (v1 or v2) client s—typically application s—and Amazon S3 buckets hosted in a given AWS Region The file gateway employs a local read/write cache to provide a lowlatency This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 2 access to data for file share clients in the same local area network (LAN) as the file gateway A bucket share consists of a file share hosted from a file gateway across a single Amazon S3 bucket The file gateway virtual machine appliance currently supports up to 10 bucket shares Figure 1: Basic file gateway architecture Here are the components of the fi le gateway architecture shown in Figure 1 : 1 Clients access objects as files using an NFS or SMB file share exported through an AWS Storage Gateway in the file gateway configuration 2 Expandable read/write cache for the file gateway 3 File gateway virtual appliance 4 Amazon S3 which provides persistent object storage for all files that are written using the file gateway File to Object Mapping After deploy ing activat ing and configur ing the file gateway one or more bucket shares can be presented to clients that support NFS v3 or v41 protocols or mapped to a share via SMB v1 or v2 protocols on the local LAN Each share (or mount point) on the gateway is paired to a 
single bucket and the contents of the bucket are available as files and folders in the share Writing an individual file to a share on the file gateway creates an identically named object in the associated bucket All newly created objects are written to Amazon S3 Standard Amazon S3 Standard – Infrequent Access ( S3 Standard – IA) or Amazon S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cl oud Storage Architectures Page 3 One Zone – Infrequent Access ( S3 One Zone – IA) storage classes depending on the configuration of the share The Amazon S3 key name of a newly created object is identical to the full path of the file that is written to the moun t point in AWS Storage Gateway Figure 2: Files stored over NFS on the file gateway mapping to Amazon S3 objects One difference between storing data in Amazon S3 versus a traditional file system is the way in which granular permi ssions and metadata are implemented and stored Access to files stored directly in Amazon S3 is secured by policies stored in Amazon S3 and AWS Identity and Access Management (IAM) All other attributes such as storage class and creation date are stored in a given object’s metadata When a file is accessed over NFS or SMB the file permissions folder permissions and attributes are stored in the file system To reliably persist file permissions and attributes the file gateway stores this information as part of Amazon S3 object metadata If the permissions are changed on a file over NFS or SMB the gateway modifies the metadata of the associated objects that are stored in Amazon S3 to reflect the changes Custom default UNIX permissions are defined for all existing S3 objects within a bucket when a share is created from the AWS Management Console or using the file gateway API This feature lets you create NFS or SMB enabled shares from buckets with existing content without having to manually assign permissions after you create the share The following is an example of a file that is stored in a share bucket and is listed from a Linux based client that is mounting the share bucket over NFS The example shows that the file “file1txt” has a mod ification date and standard UNIX file permissions [e2user@host]$ ls l /media/filegateway1/ total 1 rwrwr 1 ec2user ec2 user 36 Mar 15 22:49 file1txt [e2user@host]$ This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 4 The following example shows the output from the head object on Amazon S3 It shows the same file from the perspective of the object that is stored in Amazon S3 Note that the permissions and time stamp in the previous example are stored durably as metadata for the object [e2user@host]$ aws s3api head object bucket filegateway1 key file1txt { "AcceptRanges": "bytes" "ContentType": "application/octet stream" "LastModified": "Wed 15 Mar 2017 22:49:02 GMT" "ContentLength": 36 "VersionId": "93XCzHcBUHBSg2yP8yKMHzxUumhovEC" "ETag": " \"0a7fb5dbb1a e1f6a13c6b4e4dcf54977 1\"" "ServerSideEncryption": "AES256" "Metadata": { "filegroup": "500" "useragentid": "sgw 7619FB1F" "fileowner": "500" "awssgw": "57c3c3e92a7781f868cb10020b33aa6b2859d58c86819066 1bcceae87f7b96f1" "filemtime": "1489618141421" "filectime": "1489618141421" "useragent": "aws storagegateway" "filepermissions": "0664" } } 
[e2user@host]$ Read/Write Operations and Local Cache As part of a file gateway deployment dedicated local storage is allocated to provide a read/write cache for all hosted share buckets The read/write cache greatly improves response times for onpremises file (NFS/SMB) operations The local cache holds both recently wr itten and recently read content and does not proactively evict data while the cache disk has free space However when the cache is full AWS Storage Gateway evicts data based on a least recently used (LRU) algorithm Recently accessed data is available fo r reads and write operations are not impeded Read Operations (Read Through Cache) When an NFS client performs a read request the file gateway first checks the local cache for the requested data If the data is not in the cache the gateway retrieves the This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 5 data from Amazon S3 using Range GET requests to minimize data transferred over the Internet while repopulating the read cache on behalf of the client 1 The NFS /SMB client performs a read request on part of a given file 2 The file gateway first checks to see if required bytes are cached locally 3 In the event the bytes are not in the loca l cache the file gateway performs a byte range GET on the associated S3 object Figure 3: File gateway read operations Write Operations (Write Back Cache) When a file is written to the file gateway over NFS /SMB the gateway first commits the write to the local cache At this point the write success is acknowledged to the local NFS/SMB client taking full advantage of the low latency of the local area network After the write cache is populated the file is transferred to the associated Amazon S3 bucket asynchronously to increase local performance of Internet transfers When an existing file is modified the file gateway transfers only the newly written bytes to the associated Amazon S3 bucket This uses Amazon S 3 API calls to construct a new object from a previous version in combination with the newly uploaded bytes This reduces the amount of data required to be transferred when clients modify existing files within the file gateway This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 6 1 File share client performs many parallel writes to a given file 2 File gateway appliance acknowledges writes synchronously aggregates writes locally 3 File gateway appliance uses S3 multi part upload to send new writes (bytes) to S3 4 New object is constructed in S3 from a combination of new uploads and byte ranges from the previous version of an object Figure 4: File gateway write operations Choosing the Right Cache Resources When configuring a file gateway VM on a host machine you can allocate disks for the local cache Selecting a cache size that can sufficiently hold the active working set (eg a Database backup file) provide s optimal performance for file share clients Addit ionally splitting the cache across multiple disks maximize s throughput by parallelizing access to storage resulting in faster reads and writes When available for your on premises gateway we also recommend using SSD or ephemeral disks which can provide write and read (cache hits) throughputs of up to 500MB /s 
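One way to confirm that the allocated cache is sized appropriately for the workload is to query the gateway's cache-related Amazon CloudWatch metrics (discussed further in the Monitoring Cache and Traffic section that follows). The sketch below is illustrative only: the gateway ID, gateway name, and time window are placeholder values, and the same pattern can be applied to related metrics such as CachePercentDirty or CachePercentUsed.

# Sketch: retrieve the average cache hit percentage for a file gateway over a one-hour window.
# The GatewayId and GatewayName values are placeholders for illustration.
aws cloudwatch get-metric-statistics \
  --namespace AWS/StorageGateway \
  --metric-name CacheHitPercent \
  --dimensions Name=GatewayId,Value=sgw-12A3456B Name=GatewayName,Value=my-file-gateway \
  --start-time 2019-03-01T00:00:00Z --end-time 2019-03-01T01:00:00Z \
  --period 300 --statistics Average

A consistently low cache hit percentage during normal operations is one signal that additional or faster cache disks may be warranted.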
Security and Access Controls Within a Local Area Network When you creat e a mount point (share) on a deployed gateway you select a single Amazon S3 bucket to be the persistent object storage for files and associated metadata Default UNIX permissions are defined a s part of the configuration of the mount point These permissions are applied to all existing objects in the Amazon S3 bucket This process ensures that clients that access the mount point adhere to file and directory level security for existing content In addition an entire mount point and its associated Amazon S3 content can be protected on the LAN by limiting mount access to individual hosts or a range of hosts This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 7 For NFS file shares this limitation is defined by using a Classless Inter Domain Routing (CIDR) block or individual IP addresses For SMB file shares you can control access using Active Directory (AD) domains or authenticated guest access You can further limit a ccess to selected AD users and groups allowing only specified users (or users in the specified groups) to map the file share as a drive on their Microsoft Windows machines Monitoring Cache and Traffic As workloads or architectures evolve t he cache and Internet requirements that are associated with a given file gateway deployment can change over time To give visibility into resource use the file gateway provides statistical information in the form of Amazon CloudWatch metrics The metrics cover cache consumptio n cache hits/misses data transfer and read/write metrics For more information see Monitoring Your File Share File Gateway Bucket Inventory To re duce both latency and the number of Amazon S3 operations when performing list operations the file gateway stores a local bucket inventory that contains a record of all recently listed objects The bucket inventory is populated on demand as the file share clients list parts of the file share for the first time The file gateway updates inventory records only when the gateway itself modifies deletes or creates new objects on behalf of the clients The file gateway cannot detect changes to objects in an NFS or SMB file share’s bucket by a secondary gateway that is associated with the same Amazon S3 bucket or by any other Amazon S3 API call outside of the file gateway When Amazon S3 objects have to be modified outside of the file share and recognized by the file gateway (such as changes made by Amazon EMR or other AWS services ) the bucket inventory must be refreshed using either the RefreshCache API call or RefreshCache AWS Command Line Interface (CLI) command RefreshCache can be manually invoked automate d using a CloudWatch Event or triggered through the use of the NotifyWhenUploaded API call once the files have been written to the file share using a secondary gateway A CloudWatch notification named Storage Gatew ay Upload Notification Event is triggered once the files written by the secondary gateway have been uploaded to S3 The target of this event could be a Lambda function invoking RefreshCache to inform the primary gateway of this change RefreshCache reinventories the existing records in a file gateway’s bucket inventory This communicates changes of known objects to the file share clients that access a given share This paper has been archived For the latest technical content refer t o the AWS Wh i t 
epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Clou d Storage Architectures Page 8 1 Object created by secondary gateway or external source 2 RefreshCache API called on file g ateway appliance share 3 Foreign object is reflected in file gateway bucket inventory and accessible by clients Figure 5: RefreshCache API called to re inventory Amazon S3 bucket Bucket Shares with Multiple Contributors When deploying more c omplex architectures such as when more than one file gateway share is associated with a single Amazon S3 bucket or in scenarios where a single bucket is modified by one or more file gateways in conjunction with other Amazon S3 enabled app lications note that file gateway does not support object locking or file coherency across file gateways Since file gateways cannot detect other file gateways be cautious when designing and deploy ing solutions that use more than one file gateway share wi th the same Amazon S3 bucket File gateways associated with the same Amazon S3 bucket detect new changes to the content in the bucket only in the following circumstances: 1 A file gateway recognizes changes it makes to the associated Amazon S3 bucket and ca n notify other gateways and applications by invoking the NotifyWhenUploaded API after it is done writing files to the share 2 A file gateway recognize s changes made to objects by other file gateways when the affected objects are located in folders (or prefixes) that have not been queried by that particular file gateway 3 A file gateway recognizes changes in an associated Amazon S3 bucket (bucket share) m ade by other contributors after the RefreshCache API is executed We recommend that you use the read only mount option on a file gateway share when you dep loy multiple gateways that have a common Amazon S3 bucket Designing architectures with only one writer and many readers is the simplest way to avoid write conflicts If multiple writers are required the clients accessing each gateway must be This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 9 tightly cont rolled to ensure that they don’t write to the same objects in the shared Amazon S3 bucket When multiple file gateways are accessing the same objects in the same Amazon S3 bucket make sure to call the RefreshCache API on file gateway shares that have to recognize changes made by other file gateways To fu rther optimize this operation and reduce the time it takes to run you can invoke the RefreshCache API on specific folders (recursively or not) in your share 1 Client creates a new file and file gateway #1 uploads object to S3 2 Customer invokes NotifyWhenUploaded API on file share of file gateway #1 3 CloudWatch Event (generated upon completion of Step 1 ) initiate s the RefreshCache API call to initiate a re inventory on file gateway #2 4 File gateway #2 presents newly created objects to clients Figure 6: RefreshCache API makes objects created by file gateway #1 visible to file gateway #2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 10 Amazon S3 and the Fi le Gateway The file gateway uses Amazon S3 buckets to provide storage for each mount point (share) that is created on an individual gateway When 
you use Amazon S3 buckets mount points provide limitless capacity 99999999999% durability on objects stored and a consumption based pricing model Costs for data stored in Amazon S3 via AWS Storage Gateway are based on the region where the gateway is located and the storage class A given mount point writes data directly to Amazon S3 Standard Amazon S3 Standa rd – IA or Amazon S3 One Zone – IA storage depending on the initial configuration select ed when creating the mount point All of these storage classes provide equal durability However Amazon S3 Standard – IA and Amazon S3 One Zone – IA have a different pricing model and lower availability (ie 999% compared with 9999%) which makes them good solution s for less frequently accessed objects The pricing for Amazon S3 Standard – IA and Amazon S3 One Zone – IA is ideal for objects that exist for more than 30 days and are larger than 128 KB per object For details about price differences for Amazon S3 storage classes see the Amazon S3 Pricing page Using Amazon S3 Object Lifecycle Management for Cost Optimization Amazon S3 offers many storage classes Today AWS Storage Gateway file gateway supports S3 Standard S3 Standard – Infrequent Access and S3 One Zone – IA natively Amazon S3 lifecycle policies automate the management of data across storage tiers It’s also possible to expire objects based on the object’s age To transition data between storage classes lifecycle policies are applied to an entire Amazon S3 bucket which reflects a single mount point on a storage gateway Lifecycle policies can also be applied to a specific prefix that reflects a folder within a hosted mount point on a file gateway The lifecycle policy transition condition is based on the creation date or optionally on the object tag key value pair For more information about tagging see Object Tagging in the Amazon S3 Developer Guide As an example a lifecycle policy in its simplest implementation move s all objects in a given Amazon S3 bucket from Amazon S3 Standard to Amazon S3 Standard – IA and finally to Amazon S3 Glacier as the data ages This means that files created by the file gateway are stored as objects in Amazon S3 buckets and can then be automatically transitioned to more economic al storage classes as the content ages This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 11 Figure 7: Example of f ile gateway storing files as objects in Amazon S3 Standard and transitioning to Amazon S3 Standard – IA and Amazon S3 Glacier If you use file gateway to store data in S3 Standard IA or S3 One Zone IA or acce ss data from any of the infrequent storage classe s see Using Storage Classes in the AWS Storage Gateway User Guide to learn how the gateway mediates between NFS/SMB (file based) uploads to update or access the object Transitioning Objects to Amazon S3 Glacier Files migrated using lifecycle policies are immediately available for NFS file read/write operations Objects transitioned to Amazon S3 Glacier are visible when NFS files are listed on the file gateway However they are not readable unless restored to an S3 storage class using an API or the Amazon S3 console If you try to read files that are stored as objects in Amazon S3 Glacier you encounter a read I/O error on the client that tries the read operation For this reason we recommend using lifecycle to transition files to Amazon S3 Glacier objects only for 
file content that does not require immediate access from an NFS /SMB client in an AWS Storag e Gateway environment Amazon S3 Object Replication Across AWS Regions Amazon S3 crossregion replication (CRR) can be combined with a file gateway architecture to store objects in two Amazon S3 buckets across two separate AWS Regions CRR is used for a va riety of use cases such as protection against human error protection against malicious destruction or to minimize latency to clients in a remote AWS Region Adding CRR to the file gateway architecture is just one example of how native Amazon S3 tools an d features can be used in conjunction with the file gateway This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 12 Figure 8: File gateway in a private data center with CRR to duplicate objects across AWS Regions Using Amazon S3 Object Versioning You can use f ile gateway with Amazon S3 Object Versioning to store multiple versions of files as they are modified If you require access to a previous version of the object using the gateway it first must be restore d to the previous version in S3 You must also use t he RefreshCache operation for the gateway to be notified of this restore See Object Versioning Might Affect What You See in Your Fil e System in the AWS Storage Gateway User Guide to learn more about using Amazon S3 versioned buckets for your file share Using the File Gateway for Write Once Read Many (WORM) Data You can also use f ile gateway to store and access data in environments with regulatory requirements that require use of WORM storage In this case select a bucket with S3 Object Lock enabled as the storage for the file share If there are file modifications or renames through the file share clients the file gateway creates a new version of the object without affecting prior versions so the original locked version remains unchanged See also Using the file gateway with Amazon S3 Object Lock in the AWS Storage Gateway User Guide File Gateway Use Cases The following scenarios demonstrate how a file gateway can be used in both cloud tiering and backup architectures This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 13 Cloud Tiering In on premises environments where storage resources are reaching capacity migrating colder data to the file gateway can extend the life span of existing storage on premises and reduce the need to use capital expenditure s on additional storage hardware and data center resources When adding the file gateway to an existing storage environment on premises applications can take advantage of Amazon S3 storage durability consumption based pricing and virtual infinite scale while ensuring low latency access to recently accessed data over NFS or SMB Data can be tiered using either native host OS tools or third party tools that integra te with standard file protocols such as NFS or SMB Figure 9: File gateway in a private data center providing Amazon S3 Standard or Amazon S3 Standard – IA as a complement to existing storage deployments Hybrid Cloud Backup The file gateway provides a low latency NFS /SMB interface that creates Amazon S3 objects of up t o 5 TiB in size stored in a supported AWS Region This makes it an ideal hybrid target for backup 
solutions that can use NFS or SMB By using a mixture of Amazon S3 storage classes data is stored on low cost highly durable cloud storage and automaticall y tiered to progressively lower cost storage as the likelihood of restoration diminishes Figure 10 shows an example architecture that assumes backups must retained for one year After 30 days the likelihood of restoration beco mes infrequent and after 60 days it becomes extremely rare In this solution you use Amazon S3 Standard as the initial location for backups for the first 30 days The backup software or scripts write backups to the file share preferably in the form of multi megabyte or larger size files Larger files offer better cost This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 14 optimization in the end toend solution including colder storage costs and lifecycle transition costs because fewer transitions are required After anoth er 30 days the backups are transitioned to Amazon S3 Glacier Here they are held until a full year has passed since they were first created at which point they are deleted 1 Client writes backups to file gateway over NFS or SMB 2 File gateway cache siz ed greater than expected backup 3 Initial backups stored in S3 Standard 4 Backups are transitioned to S3 Standard IA after 30 days 5 Backups are transitioned to S3 Glacier after 60 days Figure 10: Example of file gateway storing file s as objects in Amazon S3 Standard and transitioning to Amazon S3 Standard IA and Amazon S3 Glacier When sizing the file gateway cache in this type of solution understand the backup process itself One approach is to size the cache to be large enough to contain a complete full backup which allows restores from that backup to come directly from the cache —much more quickly than over a wide area network (WAN) link If the backup solution uses software that consolidates backup files by reading existing back ups before writing ongoing backups factor this configuration into the sizing of cache also This is because reading from the local cache during these types of operations reduces cost and increases overall performance of ongoing backup operations For both cases specified above you can use AWS DataSync to transfer data to the cloud from an onpremises data store From there the access to the data can be retain ed using a file gateway This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 15 Conclusion The file gateway configuration of AWS Storage Gateway provides a simple way to bridge data between private data centers and Amazon S3 storage The file gateway can enable hybrid architectures for cloud migration cloud tiering and hybrid cloud backup The file gat eway’s ability to provide a translation layer between the standard file storage protocol s and Amazon S3 APIs without obfuscation makes it ideal for architectures in which data must remain in its native format and be available both on premises and in the AWS Cloud For more information about the AWS Storage Gateway service see AWS Storage Gateway Contributors The following individuals and organizations contributed to this document: • Peter Levett Solut ions Architect AWS • David Green Solutions Architect AWS • Smitha Sriram Senior Product Manager AWS • 
Chris Rogers Business Development Manager AWS Further Reading For additional information see the following: • AWS Storage Services Overview Whitepaper • AWS Whitepapers Web page • AWS Storage G ateway Documentation • AWS Documentation Web page Document Revisions Date Description March 2019 Updated for S3 One Zone IA storage class This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services File Gateway for Hybrid Cloud Storage Architectures Page 16 Date Description April 2017 Initial document creation
|
General
|
consultant
|
Best Practices
|
Financial_Services_Grid_Computing_on_AWS
|
This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlFinancial Services Grid Computing on AWS First Published January 2015 Updated August 24 2021 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlContents Overview 1 Introduction 2 Grid computing on AWS 5 Compute and networking 6 Storage and data sharing 15 Data management and transfer 22 Operations and management 23 Task scheduling and infrastructure orchestration 26 Security and compliance 30 Migration approaches patterns and anti patterns 32 Conclusion 35 Contributors 36 Further reading 36 Glossary of terms 37 Document versions 39 This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAbstract Financial services organizations rely on high performance computing (HPC) infrastructure grids to calculate risk value portfolios and provide reports to their internal control functions and external regulators The scale cost and complexity of this infrastructure is an increasing challenge Amazon Web Services (AWS) provides a number of services that enable these customers to surpass their current capabilities by delivering results quickly and at a lower cost than onpremises resources The intended audience for this paper include s grid computing managers architects and engineers within financial services o rganizations who want to improve their service It describes the key AWS services to consider some best practices and includes relevant reference architecture diagram s This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 1 Overview High performance computing (HPC) in the financial services industry is an ongoing challenge because of the pressures from everincreasing computational demand across retail commercial and investment groups combined with growing cost and capital constrai nts The traditional on premises approaches to solving these problems have evolved from centralized monolithic solutions to business aligned clusters of commodity hardware to modern multi tenant grid architectures with centralized 
schedulers that mana ge disparate compute capacity Regulators and large financial institutions increasing ly accept hyperscale cloud provider s which resulted in significant interest in how to best leverage new capabilities while ensuring good governance and cost controls C loud concepts such as capacity on demand and pay as you go pricing models offer new opportunities to teams who run HPC platforms Historically the challenge has been to manage a fixed set of on premises resources while maximizing utilization and minimiz ing queuing times In a cloud based model with capacity that is effectively unconstrained the focus shifts away from managing and throttling demand and towards optimizing supply With this model decisions become more granular and tailored to each customer and focus on how fast and at what cost with the ability to make adjustments as required by the business With this basica lly limitless capacity concepts such as queuing and prioritization become irrelevant as clients are able to submit calculation requests and have them ser viced immediately This also results in u pstream consumers increasingly expect ing and demand ing near instantaneous processing of their workloads at any scale Initial cloud migrations of HPC platforms are often seen as extensions or evolutions of onpremises grid implementations However forward looking institutions are experimenting with the everexpand ing ecosystem of capabilities enabled by AWS Some emerging themes i nclud e refreshing financial models to run on open source Linux based operating systems and exploring the performance benefits of the latest Arm Neoverse N1 central processing units ( CPUs ) through AWS Graviton2 Amazon SageMaker increasingly democratiz es the use of artificial intelligence/machine learning (AI/ML ) techniques and customers are looking to these tools to enable accelerated development of predictive risk models For data heavy calculations Amazon EMR offers a fully managed industry leading cloud big data platform based on standard tooling using directed acyclic graph This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 2 structures This topic is explored further in the blog post How to improve FRTB’s Internal Model Approach implementation using Apache Spark and Amazon EMR As H PC environments move to the cloud the applications that are associated with them start to migrate too Risk management systems which drive compute grids quickly become a bottleneck when the downstream HPC platform is unconstrained By migrating these applications with the compute grid the applications benefit from the elasticity that the cloud provides In turn data sources such as market and static data are sourced natively from within the cloud from the same providers that customers work with today through services such as AWS Data Exchange Many of the building blocks required for fully serverless risk management and report ing solutions already exist today within AWS with services like AWS Lambda for serverless compute and AWS Step Functions to coordinate them As financial institutions become increasingly familiar and comfortable with these services it’s likely that serverless patterns will become the predominant HPC architectures of the future Introduction In general traditional HPC systems are used to solve complex mat hematical problems that require 
thousands or even millions of CPU hours These system s are commonly used in academic institutions biotech and engineering firms In banking organizations HPC systems are used to quantify the risk of given trades or portfolios which enables traders to develop effective hedging strategies price trades and report positions to their internal control functions and ultimately to external regulators Insurance companies leverage HPC systems in a similar way for actuarial modeling and in support of their own regulatory requirements Unpredictable global events seasonal variation and regulatory reporting commitments contribute to a mixture of demands on HPC pla tforms This includes short latency sensitive intraday pricing tasks near real time risk measures calculated in response to changing market conditions or large overnight batch workloads and back testing to measure the efficacy of new models to historic events Combined these workloads can generate hundreds of millions of tasks per day with a significant proportion running for less than a second Because of t he regulatory landscape demand for these calculations continues to outpace the progress of Moor e’s law Regulations such as the Fundamental Review of the Trading Book (FRTB) and IFRS 17 require even more analysis with some customers estimating between 40% and 1000% increases in demand as a result In turn This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 3 financial services organizations continue to grow their grid computing platforms and increasingly wrestle with the costs associated with purchasing and managing this infrastructure The blog post How cloud increases flexibility of trading risk infrastructure for FRTB compliance explores this topic in greater detail discussing the challenges of data compute and the agility benefits achi eved by running these workloads in the cloud Risk and pricing calculations in financial services are most commonly embarrassingly parallel do not requir e communication between nodes to complete calculations and broadly benefit from horizontal scalability Because of this they are well suited to a shared nothing architectural approach in which each compute node is independent from the other For example a financial model based on the Monte Carlo method can create millions of scenarios to be divided across a large number (often hundreds or thousands) of compute nodes for calculation in parallel Each scenario reflects a different market condition based on a number of variables In general doubling the number of compute nodes allow s these tasks to be distributed more wide ly which reduces by half the overall duration of the job Access to increased compute capacity through AWS allows for additional scenarios and greater precision in the results in a given timeframe Alternatively you can use the additional capacity to complete the same calculations in less time Financial services firms typically use a thirdparty grid scheduler to coordinate the allocation of compute tasks to available capacity Grid schedulers have these features in common: • A central scheduler to coordin ate multiple clients and a large number (typically hundreds or thousands) of compute nodes The scheduler manage s the loss of any given component and reschedul es the work accordingly • Deployment tools to ensure that software binaries and relevant 
data are reliably distributed to compute nodes that are allocated a specific task • An engine to allow rules to be defined to ensure that certain workloads are prioritized over others in the even t that the total capacity of the grid is exhausted This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 4 • Brokers are typically employed to manage the direct allocation of tasks that are submitted by a client to the compute grid In some cases an allocated compute node make s a direct connection back to a cli ent to collect tasks to reduce latency Brokers are usually horizontally scalable and are well suited to the elasticity of cloud In some cases the client is another grid node that generat es further tasks Such multi tier recursive architectures are not uncommon but present further challenges for software engineers and HPC administrators who want to maximize utilization while managing risks such as deadlock when parent tasks are unable to yield to child tasks The key benefit of running HPC workloads on AWS is the ability to allocate large amounts of compute capacity on demand without the need to commit to the upfront and ongoing costs of a large hardware investment Capacity can be scaled minute by minute according to your needs at the time This avoi ds preprovision ing of capacity according to some estimate of future peak demand Because AWS infrastructure is charged by consumption of CPU hours it’s possible to complete the same workload in less time for the same price by simply scaling the capacit y The following figure shows two approaches to provisioning capacity In the first two CPUs are provisioned for ten hours In the second ten CPUs are provisioned for two hours In a CPU hour billing model the overall cost is the same but the latter produces results in one fifth of the time Two approaches to provisioning 20 CPU hours of capacity Developers of the analytics calculations us ed in HPC applications can use the latest CPUs graphics processing units ( GPUs ) and fieldprogrammable gate arrays ( FPGAs ) available through the many Amazon EC2 instance types This drives effici ency per core and differs from on premises grids that tend to be a mixture of infrastructure which reflect s historic procurement rather than current needs This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services G rid Computing on AWS Page 5 Diverse pricing models offer flexibility to these customers For example Amazon EC2 Spot Instances can reduce compute costs by up to 90% These instances are occasionally interrupted by AWS but HPC schedulers with a history of managing scavenged CPU resources can react to these events and reschedu le tasks accordingly This document includes several recommended approaches to building HPC systems in the cloud and highlight s AWS services that are used by financi al services organizations to help to address their compute networking storage and security requirements Grid computing on AWS A key driver for the migration of HPC workloads from onpremises environments to the cloud is flexibility AWS offers HPC teams the opportunity to build reliable and cost efficient solutions for their customers while retaining the ability to 
HPC teams that want to migrate an existing HPC solution to the cloud, or to build a new solution, should review the AWS Well-Architected Framework, which also includes a specific Financial Services Industry Lens with a focus on how to design, deploy, and architect financial services industry (FSI) workloads that promote resiliency, security, and operational performance in line with risk and control objectives. This framework applies to any cloud deployment and seeks to ensure that systems are architected according to best practices. Additionally, the HPC-specific lens document identifies key elements to help ensure the successful deployment and operation of HPC systems in the cloud.

The following sections include information about AWS services that are most relevant to HPC systems, particularly those that support financial services customers.

A typical HPC architecture with the key components, including the risk management system (RMS), grid controller, grid brokers, and two compute instance pools

Compute and networking

AWS offers a wide range of Amazon Elastic Compute Cloud (Amazon EC2) instance types, which enable you to select the configuration that is best suited to your needs at any given time. This is a departure from the typical bill-of-materials approach, which limits the configurations available on premises in favor of deployment simplicity. It also offers evergreening, which enables you to take advantage of the latest CPU technologies as they are released, without consideration for any prior investment. HPC customers in financial services should consider the following instance types:

• Amazon EC2 Compute optimized instances — C class instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price-per-compute ratio.
• Amazon EC2 General purpose instances —
o M class — Commonly used in HPC applications because they offer a good balance of compute, memory, and networking resources.
o Z class — Offer the highest CPU frequencies with a high memory footprint.
o T series — Provide a baseline level of CPU performance with the ability to burst to a higher level when required. The use of these instances can be attractive for some HPC workloads; however, their variable performance profile can result in inconsistent behavior, which might be undesirable.
• Amazon EC2 Memory optimized instances —
o R class — These instances offer higher memory-to-CPU ratios and so may be applied to X-Valuation Adjustment (XVA) calculations, such as Credit Value Adjustments, which typically require additional memory.
• Instances with the suffix a have AMD processors, for example R5a.
• Instances with the suffix g have Arm-based AWS Graviton2 processors, for example C6g.
• Amazon EC2 Accelerated Computing instances use hardware accelerators or co-processors to perform functions such as floating-point number calculations, graphics processing, or data pattern matching more efficiently than is possible in software running on CPUs.
o P class instances are intended for general-purpose GPU compute applications.
o F class instances offer customizable hardware acceleration with field-programmable gate arrays (FPGAs).

The latest AWS instances are based on the AWS Nitro System. The Nitro System is a collection of AWS-built hardware and software components that enable high performance, high availability, high security, and bare metal capabilities to eliminate virtualization overhead. By selecting Nitro-based instances, HPC applications can expect performance levels that are indistinguishable from a bare-metal system while retaining all of the benefits of an ephemeral virtual host.

Table 1 – Amazon EC2 instance types that are typically used for HPC workloads

  Instance type          Class   Description
  General purpose        T       Burstable general purpose, low cost
                         M       General purpose instances
  Compute optimized      C       For compute-intensive workloads
  Memory optimized       R       For memory-intensive workloads
                         X       For memory-intensive workloads
                         Z       High compute capacity and high memory
  Accelerated computing  P / F   General purpose GPU (P) or FPGA (F) capabilities

This diverse selection of instance types helps support a wide variety of workloads with optimal hardware and promotes experimentation. HPC teams can benchmark various sets of instances to optimize their scheduling strategies. Quantitative developers can try new approaches with GPUs, FPGAs, or the latest CPUs without upfront costs or protracted procurement processes. You can immediately deploy your optimal approach at scale without the traditional hardware lifecycle considerations. When you run experiments, or if a subset of production workloads requires a specific instance type, grid schedulers typically enable tasks to be directed to the appropriate hardware through compute resource groups.

x86-based Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. To ensure that each vCPU is used effectively, it's important to understand the behavior of the calculations running in the HPC environment. If all processes are single threaded, a good initial strategy is to have the scheduler assign one process per vCPU on each instance. However, if the calculations require multithreading, tuning might be required to maximize the use of vCPUs without introducing excessive CPU context switching. By default, x86-based Amazon EC2 instances have hyperthreading (HT) enabled. You can disable HT either at boot or at runtime if the analytics perform better without it, which you can establish through benchmarking. The Disabling Intel Hyper-Threading Technology on Amazon Linux blog post explains the methods you can use to configure HT on an Amazon Linux instance.
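Where benchmarking shows that the analytics perform better with one thread per core, multithreading can also be disabled at launch through the CpuOptions parameter of the EC2 RunInstances API. The boto3 sketch below is illustrative only: the AMI ID, subnet, and Region are placeholder values, and the core count shown (18 physical cores for a c5.9xlarge) should be checked against the instance type you actually use.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed Region

# Launch a compute node with one thread per physical core (multithreading disabled).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI with the HPC engine pre-installed
    InstanceType="c5.9xlarge",            # 36 vCPUs = 18 cores x 2 threads by default
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    CpuOptions={
        "CoreCount": 18,       # expose all physical cores
        "ThreadsPerCore": 1,   # disable hyperthreading for this instance
    },
)
print(response["Instances"][0]["InstanceId"])
```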
You might typically tune your infrastructure to increase processor performance consistency or to reduce latency. Some Amazon EC2 instances enable control of processor C-states (idle-state power saving) and P-states (optimization of voltage and CPU frequency during run). The default settings for C-state and P-state are tuned for maximum performance for most workloads. If an application might benefit from reduced latency in exchange for lower frequencies, or from more consistent performance without the benefit of Turbo Boost, then changes to the C-state and P-state configurations might be worth considering. For information about the instance types that support the adjustment, and how to make these changes to an Amazon Linux 2 based instance, see Processor State Control for Your EC2 Instance in the Amazon Elastic Compute Cloud User Guide for Linux Instances.

Another potential optimization is oversubscription. This approach is useful when you know processes spend time on non-CPU-intensive activities, such as waiting on data transfers or loading binaries into memory. For example, if this overhead is estimated at 10%, you might be able to schedule one additional task on the host for every 10 vCPUs to achieve higher CPU utilization and throughput.

There are many performance benefits of AWS Graviton processors. AWS Graviton processors are custom built by AWS using 64-bit Arm Neoverse cores. AWS Graviton2 processors provide up to 40% better price performance over comparable current-generation x86-based instances for a wide variety of workloads, including application servers, microservices, high performance computing, electronic design automation, gaming, open source databases, and in-memory caches. Interpreted and bytecode-compiled languages such as Python, Java, Node.js, and .NET Core on Linux may run on AWS Graviton2 without modification. Support for Arm architectures is also increasingly common in third-party numerical libraries, aiding the path to adoption.

Compiler selection is another consideration. The use of a compiler that is optimized for the target CPU architecture can yield performance improvements. For example, quantitative analysts might see value in developing analytics using the Intel C++ Compiler and running on instances with AVX-512 capable CPUs. The AVX-512 instruction set allows developers to run twice the number of floating-point operations per second (FLOPS) per clock cycle. Similarly, AMD offers the AMD Optimizing C/C++ Compiler, which optimizes for AMD EPYC architectures.

In addition to the instance types and classes shown in Table 1, there are also options for procuring instances in AWS:

• Amazon EC2 On-Demand Instances offer capacity as required, for as long as it is needed. You are only charged for the time that the instance is active. These are ideal for components that benefit from elasticity and predictable availability, such as brokers, compute instances hosting long-running tasks, or tasks that generate further generations of tasks.
• Amazon EC2 Spot Instances are particularly appropriate for HPC compute instances because they benefit from substantial savings over the equivalent On-Demand cost. Spot Instances can occasionally be ended by AWS when capacity is constrained, but grid schedulers can typically accommodate these occasional interruptions.
• Amazon EC2 Reserved Instances provide a significant discount of up to 72%, based on a one-year or three-year commitment.
Convertible Reserved Instances offer additional flexibility on the instance family, operating system, and tenancy of the reservation. Relatively static hosts, such as HPC grid controller nodes or data caching hosts, might benefit from Reserved Instances.
• Savings Plans is a flexible pricing model that also provides savings of up to 72% on your AWS compute usage, regardless of instance family, size, operating system (OS), tenancy, or AWS Region. Savings Plans offer significant discounts in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one-year or three-year period. Just like Amazon EC2 Reserved Instances, Savings Plans are ideal for long-running hosts such as HPC controller nodes.

It's important to note that regardless of the procurement model selected, the instances delivered by AWS are exactly the same.

Compute instance provisioning and management strategies

Spot Instances are not suitable for workloads that are inflexible, stateful, fault intolerant, or tightly coupled between instance nodes. They are also not recommended for workloads that are intolerant of occasional periods when the target capacity is not completely available. However, many financial services organizations make use of Spot Instances for part of their HPC workloads.

A Spot Instance interruption notice is a warning that is issued two minutes before Amazon EC2 interrupts a Spot Instance. You can configure your Spot Instances to be stopped or hibernated instead of being ended when they are interrupted. Amazon EC2 then automatically stops or hibernates your Spot Instances on interruption and automatically resumes the instances when capacity is available.

AWS enables you to minimize the impact of a Spot Instance interruption through instance rebalance recommendations and Spot Instance interruption notices. An EC2 instance rebalance recommendation is a signal that notifies you when a Spot Instance is at elevated risk of interruption. The signal gives you the opportunity to proactively manage the Spot Instance in advance of the two-minute Spot Instance interruption notice. You can decide to rebalance your workload to new or existing Spot Instances that are not at an elevated risk of interruption. AWS has made it easy for you to use this signal through the Capacity Rebalancing feature in EC2 Auto Scaling groups and Spot Fleet.

If hibernation is configured, this feature operates like closing and opening the lid on a laptop computer and saves the memory state to an Amazon Elastic Block Store (Amazon EBS) volume. However, this approach to managing interruptions should be used with caution, because the grid scheduler might not be able to track such quiesced workloads, which could result in timeouts and rescheduled tasks if the hibernated image is not reactivated quickly.
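Grid integrations typically watch the instance metadata service for the two-minute notice and drain the node when one appears. The sketch below polls the documented spot/instance-action metadata path using an IMDSv2 token; the drain_local_engine function is a hypothetical hook into your scheduler's agent, not an AWS API.

```python
import time
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254/latest"

def imds_token() -> str:
    req = urllib.request.Request(
        f"{IMDS}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

def interruption_pending() -> bool:
    req = urllib.request.Request(
        f"{IMDS}/meta-data/spot/instance-action",
        headers={"X-aws-ec2-metadata-token": imds_token()},
    )
    try:
        urllib.request.urlopen(req, timeout=2)  # 200 OK means an interruption is scheduled
        return True
    except urllib.error.HTTPError:              # 404 until a notice has been issued
        return False

def drain_local_engine() -> None:
    """Hypothetical hook: tell the local grid agent to stop accepting work and checkpoint."""

if __name__ == "__main__":
    while not interruption_pending():
        time.sleep(5)
    drain_local_engine()
```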
• Amazon EC2 Spot Fleets enable you to launch a fleet of Spot Instances that spans various EC2 instance types and Availability Zones. By defining the target capacity using an appropriate metric (for example, a slot for an HPC application), the fleet sources capacity from Spot Instances at the best possible price. HPC teams can define Spot Fleet strategies that use diverse instance types to make sure you have the best experience at the lowest cost.
• Amazon EC2 Fleet also enables you to quickly create fleets that are diversified by using EC2 On-Demand Instances, Reserved Instances, and Spot Instances. With this approach, you can optimize your HPC capacity management plan according to the changing demands of your workloads.

Both EC2 Fleet and Spot Fleet integrate with Amazon EventBridge to notify you about important fleet events, state changes, and errors. This enables you to automate actions in response to fleet state changes and to monitor the state of your fleet from a central place, without needing to continuously poll fleet APIs. Both also support the capacity-optimized allocation strategy, which automatically makes the most efficient use of available spare capacity while still taking advantage of the steep discounts offered by Spot Instances.

• Amazon EC2 Auto Scaling groups contain a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies (a provisioning sketch that combines these features follows this list).
• Amazon EC2 launch templates contain the configuration information used to launch an instance. The template can define the AMI ID (operating system image), instance type, and network settings for the compute instances. You can use launch templates with EC2 Fleet, Spot Fleet, or Amazon EC2 Auto Scaling to make it easier to implement and track configuration standards.
• Launch template versioning can be used with the EC2 Auto Scaling group Instance Refresh feature to update pools of capacity while minimizing interruptions to the workload. All you need to do is specify the percentage of healthy instances to keep in the group while the Auto Scaling group terminates and launches instances. You can also specify the warm-up time, which is the period that the Auto Scaling group waits between the instances that are refreshed via Instance Refresh.
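As an illustration of how these building blocks combine, the following boto3 sketch creates an Auto Scaling group from an assumed pre-existing launch template and diversifies capacity across several compute-optimized instance types using the capacity-optimized Spot allocation strategy. The group name, launch template name, subnets, and sizing are placeholder values for the example, not a prescribed configuration.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")  # assumed Region

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="hpc-compute-pool",          # hypothetical group name
    MinSize=0,
    MaxSize=500,
    DesiredCapacity=0,                                # scaled up when the batch starts
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # placeholder subnets in two AZs
    NewInstancesProtectedFromScaleIn=True,            # let the scheduler decide what to drain
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "hpc-compute-node",  # assumed existing launch template
                "Version": "$Latest",
            },
            # Diversify across similar x86 instance types to improve Spot availability.
            "Overrides": [
                {"InstanceType": "c5.4xlarge"},
                {"InstanceType": "c5a.4xlarge"},
                {"InstanceType": "c5d.4xlarge"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 0,
            "OnDemandPercentageAboveBaseCapacity": 0,   # all Spot for the overnight batch
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
    Tags=[{"Key": "Workload", "Value": "overnight-risk-batch", "PropagateAtLaunch": True}],
)
```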
One option to begin an HPC deployment is to use only On-Demand Instances. After you understand the performance of your workloads, you can develop and optimize a strategy to provision instances using Amazon EC2 Auto Scaling groups, Amazon EC2 Fleet, or Amazon EC2 Spot Fleet. For example, you can deploy a number of Reserved Instances or Savings Plans to host core grid services, such as schedulers, that are required to be available at all times. You can provision On-Demand Instances during the intraday period to ensure predictable performance for synchronous pricing calculations. For an overnight batch, you can use large fleets of Spot Instances to provide massive volumes of capacity at a minimum cost, and supplement them as necessary with On-Demand Instances to ensure predictable performance for the most time-sensitive workloads.

The following figure shows two approaches to provisioning. In each case, ten vCPUs of Reserved Instance capacity remain online for the stateful scheduling components. In the first case, 20 further vCPUs are provisioned using On-Demand Instances for ten hours to accommodate a batch that runs for 200 vCPU hours with a ten-hour SLA. In the second approach, the 20 vCPUs are also provisioned at the outset using On-Demand Instances to provide confidence in the batch delivery, but 70 vCPUs based on low-cost Spot Instances are also added. Because of the volume of Spot Instances, the batch completes much more quickly (in about three hours) and at a significantly reduced cost. However, if the Spot Instances were not available for any reason, the batch would still complete on time with the On-Demand Instances provisioned.

AWS instance provisioning strategies

One of the key benefits of deploying applications in the AWS Cloud is elasticity. Amazon EC2 Auto Scaling enables HPC managers to configure Amazon EC2 instance provisioning and decommissioning events based on the real-time demands of their platform. The concept of instance weighting allows Auto Scaling groups to start instances from a diverse pool of instance types to meet an overall capacity target for the workload. Though grids were previously provisioned based on predictions of peak demand (with periods of both constraint and idle capacity), Amazon EC2 Auto Scaling has a rich API that enables it to be integrated with schedulers to easily manage scaling events.

When you remove hosts from a running cluster, make sure to allow for a drain-down period. During this period, the targeted host stops taking on new work but is allowed to complete work in progress. When you select nodes for removal, avoid any long-running tasks so that the shutdown is not delayed and you don't lose progress on those calculations. If the scheduler allows a query of the total runtime of tasks in progress, grouped by instance, you can use this to identify the optimal candidates for removal: specifically, the instances with the lowest aggregate runtime of tasks in progress.

Where capacity is managed automatically, Amazon EC2 Auto Scaling groups offer scale-in protection as well as configurable termination policies to allow HPC managers to minimize disruption to tasks in flight. Scale-in protection allows an Auto Scaling group, or an individual instance, to be marked as protected and therefore ineligible for termination in a scale-in event. You also have the option to build custom termination policies using AWS Lambda to give more control over which instances are ended. These protections can be controlled by an API for integration with the scheduler to automate the drain-down process.
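A scheduler integration can drive the scale-in protection API directly. The following minimal boto3 sketch assumes a hypothetical helper that returns the instance IDs that still have tasks in flight, and an Auto Scaling group name carried over from the earlier example.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")  # assumed Region
GROUP = "hpc-compute-pool"  # hypothetical Auto Scaling group name

def instances_with_running_tasks() -> list[str]:
    """Hypothetical scheduler query: IDs of instances that still have tasks in flight."""
    return ["i-0123456789abcdef0"]

# Protect busy instances, then let the group scale in around them.
busy = instances_with_running_tasks()
if busy:
    autoscaling.set_instance_protection(
        AutoScalingGroupName=GROUP,
        InstanceIds=busy,
        ProtectedFromScaleIn=True,
    )

# Once an instance reports that it has drained, release the protection so it can be removed.
autoscaling.set_instance_protection(
    AutoScalingGroupName=GROUP,
    InstanceIds=["i-0123456789abcdef0"],   # drained instance
    ProtectedFromScaleIn=False,
)
```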
Paradoxically, adding instances to a cluster can temporarily slow the flow of tasks if those new instances need some time to reach optimal performance as binaries are loaded into memory and local caches are populated. Amazon EC2 Auto Scaling groups also support warm pools. A warm pool is a pool of pre-initialized EC2 instances that sits alongside the Auto Scaling group. Whenever your application needs to scale out, the Auto Scaling group can draw on the warm pool to meet its new desired capacity. The goal of a warm pool is to ensure that instances are ready to quickly start serving application traffic, accelerating the response to a scale-out event. This is known as a warm start.

So far, this section has addressed compute instance provisioning at the host level. Increasingly, customers are looking to serverless solutions based on either container technologies, such as Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS), or AWS Lambda. For both Amazon ECS and Amazon EKS, the AWS Fargate serverless compute engine removes the need to orchestrate infrastructure capacity to support containers. Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You pay only for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate supports both Spot pricing for Amazon ECS and Compute Savings Plans for Amazon ECS and Amazon EKS.

To illustrate how Amazon EKS might be used in a high throughput computing (HTC) environment, AWS has released the open source solution aws-htc-grid. This project shows how AWS technologies such as Lambda, Amazon DynamoDB, and Amazon Simple Queue Service (Amazon SQS) can be combined to provide much of the functionality of a traditional HPC scheduler. Note that aws-htc-grid is not a supported AWS service offering.

For customers using AWS Lambda, there are no instances to be scaled; however, there is the concept of concurrency, which is the number of instances of a function that can serve requests at a time. There are default Regional concurrency limits, which can be increased through a request in the Support Center console. Financial services firms have already built completely serverless HPC solutions based on Lambda (similar to the architecture outlined here) that support tens of millions of calculations per day.

In addition to considering alternative CPU architectures and accelerated computing options, customers are increasingly looking at their existing dependencies on commercial operating systems such as Microsoft Windows. Such dependencies are often historical, stemming from risk management systems built around spreadsheets; however, today the cost premiums can be very material, especially when compared to deeply discounted EC2 capacity under Amazon EC2 Spot. AWS offers a variety of Linux distributions, including Red Hat, SUSE, CentOS, Debian, Kali, Ubuntu, and Amazon Linux. The latter is a supported and maintained Linux image provided by AWS for use on Amazon EC2 (it can also be run on premises for development and testing). It is designed to provide a stable, secure, and high-performance run environment for applications running on Amazon EC2. It supports the latest EC2 instance type features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates to all instances running the Amazon Linux AMI, and it is provided at no additional charge to Amazon EC2 users.

Storage and data sharing

In HPC systems, there are two primary data distribution challenges. The first is the distribution of binaries. In financial services, large and complex analytical packages are common. These packages are often 1GB or more in size, and often multiple versions are in use at the same time on the same HPC platform to support different businesses or back testing of new models. In a constrained on-premises environment, you can mitigate this challenge through relatively infrequent updates to the package and a fixed set of instances. However, in a cloud-based environment, instances are short lived and the number of instances can be much larger.
As a result, multiple packages may be distributed to thousands of instances on an hourly basis as new instances are provisioned and new packages are deployed. There are a number of possible approaches to this problem.

One is to maintain a build pipeline that incorporates binary packages into the Amazon Machine Images (AMIs). This means that once the machine has started, it can process a workload immediately because the packages are already in place. The EC2 Image Builder tool simplifies the process of building, testing, and deploying AMIs. A limitation of this approach is that it doesn't accommodate the deployment of new packages to running instances, which must be ended and replaced to get new versions.

Another approach is to update running instances. There are two different methods for this type of update, which are sometimes combined:

• Pull (or lazy) deployment — In this mode, when a task reaches an instance and it depends on a package that is not in place, the engine pulls it from a central store before it runs the task. This approach minimizes the distribution of packages and saves on local storage because only the minimum set of packages is deployed. However, these benefits come at the expense of delaying tasks in an unpredictable way, such as when a new instance is introduced in the middle of a latency-sensitive pricing job. This approach may not be acceptable if large volumes of tasks have to wait for the grid nodes to pull packages from a central store, which could struggle to service very large numbers of requests for data.
• Push deployment — In this mode, you can instruct instance engines to proactively get a specific package before they receive a task that depends on it. This approach allows for rolling upgrades and ensures tasks are not delayed by a package update. One challenge with this method is the possibility that new instances (which can be added at any time) might miss a push message, which means you must keep a list of all currently live packages.

In practice, a combination of these approaches is common. Standard analytics packages are pushed because they're likely to be needed by the majority of tasks. Experimental packages or incremental delta releases are then pulled, perhaps to a smaller set of instances. It might also be necessary to purge deprecated packages, especially if you deploy experimental packages. In this case, you can use a list of live packages to enable your compute instances to purge any packages that are not in the list and thus are not current.

The following figure shows a cloud-native implementation of these approaches. It uses a centralized package store in Amazon Simple Storage Service (Amazon S3) with agents that respond to messages delivered through an Amazon Simple Notification Service (Amazon SNS) topic. After the package is in place on Amazon S3, notifications of new releases can be generated either by an operator or as a final step in an automated build pipeline. Compute instances subscribed to an SNS topic (or to multiple topics for different applications) use these messages as a trigger to retrieve packages from Amazon S3. You can also use the same mechanism to distribute delete messages to remove packages if required.

Data distribution architecture using Amazon SNS messages and S3 object storage
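A compute-instance agent for this pattern can be sketched in a few lines of boto3. The sketch assumes the SNS topic fans out to an SQS queue that the instance polls; the queue URL, bucket, local path, and the JSON message format (a package key plus an action) are assumptions of this example rather than a defined AWS schema.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")  # assumed Region
s3 = boto3.client("s3", region_name="eu-west-1")

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/111122223333/package-events"  # placeholder
BUCKET = "example-analytics-packages"                                          # placeholder

def handle(notification: dict) -> None:
    body = json.loads(notification["Message"])   # SNS payload delivered via SQS envelope
    key = body["package_key"]                    # e.g. "pricing-lib/2.4.1.tar.gz"
    if body.get("action") == "delete":
        # Purge a deprecated package from local storage (left as an exercise).
        return
    s3.download_file(BUCKET, key, f"/opt/grid/packages/{key.split('/')[-1]}")

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        handle(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```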
The second data distribution challenge in HPC is managing data related to the tasks being processed. Typically, this is bi-directional, with data flowing to the engines that support the processing and resulting data passed back to the clients. There are three common approaches for this process:

• In the first approach, communications are inbound (see the following figure), with all data passing through the grid scheduler along with task data. This is less common because it can cause a performance bottleneck as the cluster grows.

An inbound data distribution approach

• In another approach, tasks pass through the scheduler, but the data is handled out-of-band through a shared, scalable data store or an in-memory data grid (see the following figure). The task data contains a reference to the data's location, and the compute instances can retrieve it as required.

An out-of-band data distribution approach

Finally, some schedulers support a direct data transfer (DDT) approach. In this model, the scheduler grid broker allocates compute instances, which then communicate directly with the client. This architecture can work well, especially with very short-running tasks with little data. However, in a hybrid model with thousands of engines running on AWS that need to access a single on-premises client, this can present challenges to on-premises firewall rules or to the availability of ephemeral ports on the client host.

DDT (direct data transfer) data distribution approach

All of these approaches can be enhanced with caches located as close as possible to, or hosted on, the compute instances. Such caches help to minimize the distribution of data, especially if a significantly similar set is required for many calculations. Some schedulers support a form of data-aware scheduling that tries to ensure that tasks that require a specific dataset are scheduled to instances that already have that dataset. This cannot be guaranteed, but it often provides a significant performance improvement at the cost of local memory or storage on each compute instance.

Though the combination of grid schedulers and distributed cache technologies used on premises can provide solutions to these challenges, their capabilities vary and they are not typically engineered for a cloud deployment with highly elastic, ephemeral instances. You can consider the following AWS services as potential solutions to the typical HPC data management use cases.
Amazon Simple Storage Service (Amazon S3)

Amazon S3 provides virtually unlimited object storage designed for 99.999999999% durability and high availability. For binary packages, it offers both versioning and various immutability features, such as S3 Object Lock, which prevents deletion or replacement of objects and has been assessed by Cohasset Associates for use in environments that are subject to SEC 17a-4, CFTC, and FINRA regulations. Binary immutability is a common audit requirement in regulated industries, which require you to demonstrate that the binaries approved in the testing phase are identical to those used to produce reports. You can include this feature in your deployment pipeline to make sure that the analytics binaries you use in production are the same as those that you validated. This service also offers easy-to-implement encryption and granular access controls.

Some HPC architectures use checkpointing (compute instances save a snapshot of their current state to a datastore) to minimize the computational effort that could be lost if a node fails or is interrupted during processing. For this purpose, a distributed object store such as Amazon S3 might be an ideal solution. Because the data is likely to be needed only for the life of the batch, you can use S3 lifecycle rules to automatically purge these objects after a small number of days to reduce costs.
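A lifecycle rule of this kind takes only a few lines to define. The sketch below expires objects under an assumed checkpoints/ prefix after three days; the bucket name, prefix, and retention period are illustrative.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-hpc-checkpoints",            # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-batch-checkpoints",
                "Filter": {"Prefix": "checkpoints/"},   # only checkpoint objects
                "Status": "Enabled",
                "Expiration": {"Days": 3},              # purge shortly after the batch completes
            }
        ]
    },
)
```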
Amazon Elastic File System (Amazon EFS)

Amazon EFS offers shared network storage that is elastic, which means it grows and shrinks as required. Thousands of Amazon EC2 instances can mount EFS volumes at the same time, which enables shared access to common data such as analytics packages. Amazon EFS does not currently support Windows clients.

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the open standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restores, and Microsoft Active Directory integration. It offers single and Multi-Availability Zone deployment options, fully managed backups, and encryption of data at rest and in transit.

Amazon FSx for Lustre

For transient job data, the Amazon FSx for Lustre service provides a high-performance file system that offers sub-millisecond access to data and read/write speeds of up to hundreds of gigabytes per second, with millions of IOPS. Amazon FSx for Lustre can link to an S3 bucket, which makes it easy for clients to write data objects to the bucket (including clients from an on-premises system) and have those objects available to thousands of compute nodes in the cloud (see the following figure). FSx for Lustre is ideal for HPC workloads because it provides a file system that's optimized for the performance and costs of high performance workloads, with file system access across thousands of EC2 instances.

An example of an Amazon FSx for Lustre implementation

Amazon Elastic Block Store (Amazon EBS)

After a compute instance has binary or job data, it might not be possible to keep it in memory, so you might want to keep a copy on a local disk. Amazon EBS offers persistent block storage volumes for Amazon EC2 instances. Though the volumes for compute nodes can be relatively small (10GB can be sufficient to store a variety of binary package versions and some job data), there might be some benefit to the higher IOPS and throughput offered by the Amazon EBS Provisioned IOPS solid state drives (SSDs). These offer up to 64,000 IOPS per volume and up to 1,000 MB/s of throughput, which can be valuable for workloads that require frequent high-performance access to these datasets. Because these volumes incur additional cost, you should complete an analysis of whether they provide any additional value over the standard general-purpose volumes.

AWS Cloud hosted data providers

AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. The catalog includes hundreds of financial services datasets from a wide variety of providers. Once subscribed to a data product, you can use the AWS Data Exchange API to load data directly into S3.

The Bloomberg Market Data Feed (B-PIPE) is a managed service providing programmatic access to Bloomberg's complete catalog of content (all the same asset classes as the Bloomberg Terminal). Network connectivity with Bloomberg B-PIPE leverages AWS PrivateLink, exposing the services as a set of local IP addresses within your Amazon Virtual Private Cloud (Amazon VPC) subnet and eliminating DNS issues. B-PIPE services are presented via Network Load Balancers to further simplify the architecture. Additionally, Refinitiv's Elektron Data Platform provides cost-efficient access to global real-time exchange, over-the-counter (OTC), and contributed data. The data is also provided using AWS PrivateLink, allowing simple and secure connectivity from your Virtual Private Cloud (VPC).

Data management and transfer

Although HPC systems in financial services are typically loosely coupled, with limited need for east-west communication between compute instances, there are still significant demands for north-south communication bandwidth between layers in the stack. A key consideration for networking is where in the stack any separation between on-premises systems and cloud-based systems occurs. This is because communication within the AWS network is typically of higher bandwidth and lower cost than communication to external networks. As a result, any architecture that causes hundreds or thousands of compute instances to connect to an external network, particularly if they're requesting the same binaries or task data, would create a bottleneck. Ideally, the fan-out point (the point in the architecture at which large numbers of instances are introduced) is in the cloud. This means that the larger volumes of communication stay in the AWS network, with relatively few connections to on-premises systems.

AWS offers networking services that complement financial services HPC systems. A common starting point is to deploy AWS Direct Connect connections between customer data centers and an AWS Region through a third-party point of presence (PoP) provider. A Direct Connect link offers a consistent and predictable experience, with speeds of up to 100Gbps.
You can employ multiple, diverse Direct Connect links to provide highly resilient, high-bandwidth connectivity.

Though most HPC applications within financial services are loosely coupled, this isn't universal, and there are times when network bandwidth is a significant component of overall performance. The current AWS Nitro-based instances offer up to 100Gbps of network bandwidth for the largest instance types, such as the c5n.18xlarge, or up to 400Gbps in the case of the p4d.24xlarge instance. Additionally, a cluster placement group packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for the tightly coupled node-to-node communication that is typical of HPC applications.
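For the subset of workloads that do need low-latency node-to-node communication, a cluster placement group can be created and referenced at launch. The following boto3 sketch uses placeholder names, an assumed Region, and an illustrative instance type.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed Region

# Create a cluster placement group that packs instances closely within one AZ.
ec2.create_placement_group(GroupName="hpc-tightly-coupled", Strategy="cluster")

# Launch the tightly coupled nodes into that placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="c5n.18xlarge",            # network-optimized, up to 100 Gbps
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-tightly-coupled"},
)
```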
The Elastic Fabric Adapter (EFA) enhances the Elastic Network Adapter (ENA) and is specifically engineered to support tightly coupled HPC workloads that require low-latency communication between instances. An EFA is a virtual network device that can be attached to an Amazon EC2 instance. EFA is suited to workloads using the Message Passing Interface (MPI), and may be worth considering for some financial services workloads, such as weather predictions as part of an insurance industry catastrophic event model. EFA traffic that bypasses the operating system (OS bypass) is not routable, so it's limited to a single subnet. As a result, any peers in this network must be in the same subnet and Availability Zone, which could alter resiliency strategies. The OS-bypass capabilities of EFA are also not supported on Windows.

Some Amazon EC2 instance types support jumbo frames, where the network Maximum Transmission Unit (MTU, the number of bytes per packet) is increased. AWS supports MTUs of up to 9001 bytes. By using fewer packets to send the same amount of data, end-to-end network performance is improved.

Operations and management

HPC systems are traditionally highly decoupled and resilient to the failure of any given component with minimal disruption. However, HPC systems in financial services organizations tend to be both mission critical and limited by the capabilities of traditional approaches, such as physical primary and secondary data centers. In this model, HPC teams have to choose between having secondary infrastructure sitting mostly idle in case of the loss of a data center, or using all of the infrastructure on a daily basis but with the possibility of losing up to 50% of that capacity in a disaster event. Some add a third or fourth location to reduce the impact of the loss of a site, but at the cost of an increased likelihood of an outage and network inefficiencies.

When you move to the cloud, you not only open up the availability of new services but also new approaches to solving these problems. AWS operates a model with Regions and Availability Zones that are always active and offer high levels of availability. By architecting HPC systems for multiple AWS Availability Zones, financial services organizations can benefit from high levels of resiliency and utilization. In the unlikely event of the loss of an Availability Zone, additional instances can be automatically provisioned in the remaining Availability Zones to enable workloads to continue without any loss of data and with only a brief interruption in service.

A sample HPC architecture for a Multi-AZ deployment

The high-level architecture in the preceding figure shows the use of multiple Availability Zones and separate subnets for the stateful scheduler infrastructure (including schedulers, brokers, and data stores) and the compute instances. You can base your scheduler instances on long-running Reserved Instances with static IP addresses to help them communicate with on-premises infrastructure by simplifying firewall rules. Conversely, you can base your compute instances on On-Demand Instances or Spot Instances with dynamically allocated IP addresses. Security groups act as a virtual firewall, which you can configure to allow the compute instances to communicate only with scheduler instances.

With the compute instances being inherently ephemeral, and with potentially limited connectivity needs, it can be beneficial to have them sit within separate private address ranges to avoid the need for you to manage demand for, and allocate IPs from, your own pools. This can be achieved either through a secondary CIDR on the VPC or with a separate VPC for the compute infrastructure connected through VPC peering. The majority of AWS services relevant to financial services customers are accessible from within the VPC using AWS PrivateLink, which offers private connectivity to those services, to services hosted by other AWS accounts, and to supported AWS Marketplace partner solutions. Traffic between your VPC and the service does not leave the Amazon network and is not exposed to the public internet.

One of the keys to effective HPC operations is the metrics you collect and the tools to explore and manipulate them. A common question from end users is "Why is my job slow?" It's important to set up your HPC operation in a way that enables you to either answer that question or to empower users to find the answer for themselves. AWS offers tools you can use to collect metrics and logs at scale. Amazon CloudWatch is a monitoring and management service that not only collects metrics and logs related to AWS services but, through an agent, can also be a target for telemetry from HPC systems and the applications running on them. This provides a valuable central store for your data, allows diverse data sources to be presented on a common time series, and helps you to correlate events when you diagnose issues. You can also use CloudWatch as an auditable record of the calculations that were completed and the analytics binary versions that were used. You can export these logs to S3 and protect them with the Object Lock feature for long-term immutable retention. You may also want to use a third-party log analytics tool; many of the most common products have native integrations with Amazon Web Services. Additionally, Amazon Managed Service for Grafana enables you to analyze, monitor, and alarm on metrics, logs, and traces across multiple data sources, including AWS, third-party independent software vendors (ISVs), databases, and other resources.
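Publishing grid telemetry as custom CloudWatch metrics is straightforward. In the sketch below, the namespace, metric names, and dimension values are assumptions for illustration rather than anything defined by CloudWatch itself.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # assumed Region

def publish_grid_metrics(queue_depth: int, median_task_ms: float) -> None:
    cloudwatch.put_metric_data(
        Namespace="HPC/RiskGrid",                       # assumed custom namespace
        MetricData=[
            {
                "MetricName": "PendingTasks",
                "Dimensions": [{"Name": "GridName", "Value": "intraday-pricing"}],
                "Value": queue_depth,
                "Unit": "Count",
            },
            {
                "MetricName": "MedianTaskDuration",
                "Dimensions": [{"Name": "GridName", "Value": "intraday-pricing"}],
                "Value": median_task_ms,
                "Unit": "Milliseconds",
            },
        ],
    )

publish_grid_metrics(queue_depth=12842, median_task_ms=450.0)
```

Metrics published this way can be graphed alongside AWS-provided metrics, alarmed on, or surfaced in Grafana to help answer the "Why is my job slow?" question.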
statistics data For this purpose you can use Amazon Relational Database Service (Amazon RDS) which provides costefficient and resizable database capacity while automating administration tasks such as hardware provisioning patching and backups Another common challen ge with shared tenancy HPC systems is the apportioning of cost The ability to provide very granular cost metrics according to usage can drive effective business decisions within financial services The pay as you go pricing model of AWS empowers HPC manag ers and their end customers to realize the benefits from the optimization of the system or its us e AWS tools such as resource tagging and the AWS Cost Explorer can be combined t o provide rich cost data and to build reports that highlight the sources of cost within the system Tags can include details of report types cost centers or other information pertinent to the client organization There’s also an AWS Budgets tool which can be used to create reports and alerts according to consumption When you combine d etailed infrastructure costs with usage statistics you can create granular cost attribution reports Some trades are particularly demanding of HPC capacity to the extent that the business might decide to exit the trade instead of continu ing to support the cost Task scheduling and infrastructure orchestration A high performance computing system needs to achieve two goals : • Scheduling — Encompasses the lifecycle of compute tasks including: capturing and prioriti zing tasks allocating them to the appropriate compute resources and handling failures • Orchestration — Making compute capacity available to satisfy those demands It’s common for financial services organizations to use a third party grid scheduler to coordinate HPC workloads Orchestration is often a slow moving exercise in procurement and physical infrastructure provisioning Traditional schedulers are therefore highly optimized for making lowlatency scheduling decisions to maximize usage of a relative ly fixed set of resources This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 27 As customers migrate to the cloud the dynamics of the problem change s Instead of nearstatic resource orchestration capacity can be scaled to meet the demands at that instant As a result the scheduler doesn’t need to reason about which task to schedule next but rather just inform the orchestrator that additional capacity is needed Table 2 — Task scheduling and infrastructure orchestration approaches HPC hosting Task scheduling approach Infrastructure orchestration approach OnPremises Rapid task scheduling decisions to manage prioritization and maximize utilization while minimizing queue times Static a procurement and physical provisioning process run over weeks or months Cloud based Focus on managing the task lifecycle decisions around prioritization and queue times are minimized by dynamic orchestration Highly dynamic capacity on demand with ‘pay as you go’ pricing Optimized for cost and performance through selection of instance type and procurement model When you plan a migration a valid option is to migrate the on premises solution first and the n consider optimizations For example an initial ‘lift and shift’ implementation might use Amazon EC2 OnDemand Instances to provision capacity which yields some immediate benefits from elasticity 
Some of the commercial schedulers also have integrations with AWS, which enable them to add and remove nodes according to demand. When you are comfortable with running critical workloads on AWS, you can further optimize your implementation with options such as using more native services for data management, capacity provisioning, and orchestration. Ultimately, the scheduler might be in scope for replacement, at which point you can consider a few different approaches.

Though financial services workloads are often composed of very large volumes of relatively short-running calculations, there are some cases where longer-running calculations need to be scheduled. In these situations, AWS Batch could be a viable alternative or a complementary service. AWS Batch plans, schedules, and runs batch workloads while dynamically provisioning compute resources using containers. You can configure parallel computation and job dependencies to allow for workloads where the results of one job are used by another. AWS Batch is offered at no additional charge; only the AWS resources it consumes generate costs.

Customers looking to simplify their architecture might consider a queue-based architecture in which clients submit tasks to a stateful queue. This can then be serviced by an elastic group of hungry worker processes that take pending workloads, process them, and then return results. Amazon SQS can be used for this purpose. Amazon SQS is a fully managed message queuing service that is ideal for this type of decoupled architecture. As a serverless offering, it reduces the administrative burden of infrastructure management and offers seamless, elastic scaling.

A simple HPC approach with Amazon SQS

Amazon SQS queues can be serviced by groups of Amazon EC2 instances that are managed by Auto Scaling groups. You can configure the Auto Scaling groups to scale capacity up or down based on metrics such as average CPU load or the depth of the queue. Auto Scaling groups can also incorporate provisioning strategies that combine Amazon EC2 On-Demand Instances or Spot Instances to provide flexible and low-cost capacity.
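A hungry-worker process in this pattern reduces to a simple long-polling loop. The queue URLs and the run_task function below are hypothetical; in practice run_task would invoke the analytics library and write results to a data store.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")  # assumed Region
TASK_QUEUE = "https://sqs.eu-west-1.amazonaws.com/111122223333/pricing-tasks"      # placeholder
RESULT_QUEUE = "https://sqs.eu-west-1.amazonaws.com/111122223333/pricing-results"  # placeholder

def run_task(task: dict) -> dict:
    """Hypothetical stand-in for the pricing or risk calculation."""
    return {"task_id": task["task_id"], "value": 42.0}

while True:
    resp = sqs.receive_message(QueueUrl=TASK_QUEUE, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        result = run_task(json.loads(msg["Body"]))
        sqs.send_message(QueueUrl=RESULT_QUEUE, MessageBody=json.dumps(result))
        # Delete only after the result is safely published, so a failed worker
        # lets the message reappear and be retried after the visibility timeout.
        sqs.delete_message(QueueUrl=TASK_QUEUE, ReceiptHandle=msg["ReceiptHandle"])
```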
With serverless queuing provided by Amazon SQS, it's logical to think about serverless compute capacity. With AWS Lambda, you can run code without provisioning or managing any servers. This function-as-a-service product allows you to pay only for the computation time you consume. You can also configure Lambda to process workloads from SQS, scaling out horizontally to consume messages in a queue. Lambda attempts to process the items from the queue as quickly as possible and is constrained only by the maximum concurrency allowed by the account, memory, and runtime limits. In 2020, these limits were increased significantly. You can now allocate up to 10GB of memory and six vCPUs to your functions, which also have support for the AVX2 instruction set. This makes Lambda functions suitable for an even wider range of HPC applications.

A serverless event-driven approach to HPC

Taking these concepts further, the blog post Decoupled Serverless Scheduler To Run HPC Applications At Scale on EC2 describes a decoupled serverless HPC scheduler that can run on hundreds of thousands of cores using EC2 Spot Instances. The following figure shows a cloud-native serverless HPC scheduling architecture.

A cloud-native serverless scheduler architecture

When you explore these alternative cloud-native approaches, especially in comparison to established schedulers, it's important to consider all of the features required to run what can be a critical system. Metrics gathering, data management, and management tooling are only some of the typical requirements that must be addressed and should not be overlooked.

A key benefit of running HPC workloads on AWS is the flexibility of the offerings, which enables you to combine various solutions to meet very specific needs. An HPC architect can use Amazon EC2 Reserved Instances for long-running stateful hosts. You can use Amazon EC2 On-Demand Instances for long-running tasks or to secure capacity at the start of a batch. Additionally, you can provision Amazon EC2 Spot Instances to try to deliver a batch more quickly and at lower cost. Some workloads can then be directed to alternative platforms, such as GPU-enabled instances or Lambda functions. You can optimize the overall mix of these options on a regular basis to adapt to the changing needs of your business.

Security and compliance

The approach to security in HPC systems running in the cloud is often different from other applications. This is because of the ephemeral and stateless nature of the majority of the resources. Issues of patching, inventory tooling, or human access can be eliminated because of the short-lived nature of the resources.

• Patching – When you use a pre-patched AMI, the host is in a known compliant state at startup. If a relatively short limit is placed on the life of the instance, it's likely that this approach will meet all necessary patching standards. Additionally, AWS Systems Manager Patch Manager can be used to automate the process of patching managed instances if necessary.
• Inventory tooling – On-premises hosts typically interact with compliance and inventory systems. In the AWS Cloud, controls around the instance image and the delivery of binaries mean that instances remain in a known state and can be programmatically audited, so these historic controls might not be necessary. Additionally, because highly scalable and elastic resources can put excessive load on such systems, fully managed cloud-based solutions such as AWS CloudTrail might provide a more suitable alternative.
• Root access – When you enable all debugging through centralized metrics and automated reporting, you can mandate zero access to the compute nodes. Without any root access, you can avoid key rotation and access control issues.

When you consider migrating to the cloud, an important early step is to decide which internal tools and processes (if any) need to be replicated in the cloud.
quickly which is important when additional capacity is required to meet a business need Because of the stateless nature of the workloads there is of ten little need to store data for long periods particularly when the job data isn’t especially sensitive doesn’t include personally identifying information (PII) and largely consists of public market datasets Regardless encryption by default is easy to implement across a wide range of AWS services Binary analytics packages often contain proprietary code that has intellectual value financial services organizations typically encrypt these binaries while in transit and us e builtin AWS tools to ensure they’re encrypted while at rest in AWS storage If compute instances are configured for minimal or no access the risk of exfiltration while the binaries are in memory is minimized AWS has a wide range of certifications and attestations relevant to financial services and other industries For full details of AWS certifications see AWS Compliance Before you design secure systems in AWS to make sure you understand the respect ive areas of responsibility for AWS and the customer review the Shared Responsibility Model The AWS Shared Responsibility Model This version has been archived For the latest version of this document visit: https://docsawsamazoncom/whitepapers/latest/ financialservicesgridcomputing/financial servicesgridcomputinghtmlAmazon Web Services Financial Services Grid Computing on AWS Page 32 This model is complemented by a n extensive suite of tools and services to help you be secure in the cloud For more detailed information review the AWS Well Architected Framework Security Pillar One service of particular interest to HPC applications is the AWS Identity and Access Management (AWS IAM) service which provides finegrained access control across all of the AWS services included in this paper IAM also offers integration with your existing identity providers through identity federation Interactions with the AWS APIs can be tracked with AWS CloudTrail a service that enables governance and auditing across the AWS account This event history simplifies security analyse s changes to resources and troubleshooting Encryption by default is becoming increasing common within financial services and many AWS services now offer simple encryption features that integrate with AWS Key Management Service (AWS KMS) This service makes it easy for you to create and manage keys that can be used across a wide variety of AWS services For HPC applications keys managed by AWS KMS might be used to encrypt AMIs or S3 buckets that contain analytics binaries or to encrypt data stored in the Parameter Store AWS KMS uses FIPS 140 2 validated hardware security modules (HSMs) to generate and protect customer keys The keys never lea ve these devices unencrypted Customers with specific internal or external rules regarding HSMs can choose AWS CloudHSM which is a fully managed FIPS 140 2 Level 3 validated HSM cluster with dedicated singletenant access Migration approaches patterns and antipatterns Many financial services organizations already have some form of HPC environment hosted in an on premises data center If you’re migrating from such an implementation it’s important to con sider what might be the best method to complete the migration The optimal approach depend s on the desired outcome risk appetite and timescale but typically begin s with one of the 6 Rs: Rehosting Replatforming Repurchasing Refactoring /Rearchitecting and (to a lesser degree ) Retiring or Retain ing 
Migration approaches, patterns, and antipatterns

Many financial services organizations already have some form of HPC environment hosted in an on-premises data center. If you're migrating from such an implementation, it's important to consider which method is best for completing the migration. The optimal approach depends on the desired outcome, risk appetite, and timescale, but typically begins with one of the 6 Rs: Rehosting, Replatforming, Repurchasing, Refactoring/Rearchitecting, and (to a lesser degree) Retiring or Retaining (revisiting).

HPC cloud migrations typically progress through three stages. The nuances and timings of each stage depend on the individual businesses involved.

The first stage is bursting capacity. In this mode, very little changes with the existing on-premises HPC environment. However, at times of peak demand, Amazon EC2 instances can be created and added to the system to provide additional capacity. The trigger for the creation of these instances is usually one of the following:

• Scheduled – If workloads are predictable in terms of timing and scale, then a simple schedule that adds and removes a fixed number of hosts at predefined times can be effective. The schedule can be managed by an on-premises system or with Amazon EventBridge rules.

• Demand based – In this mode, a component monitors the performance of workloads and adds or removes capacity based on demand. If a task queue starts to grow, additional instances can be requested through the AWS API, and if the queue shrinks, some instances can be removed (a minimal sketch of this pattern follows this list).

• Predictive – In some cases, especially when the startup time for a new instance is long (perhaps because of very large package dependencies or complex OS builds), it might be desirable to use a simple machine learning model to analyze historic demand and determine when to bring capacity online. This approach is rare, but can work well when combined with a demand-based approach.
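The sketch below illustrates the demand-based trigger from the list above. It assumes, purely for illustration, that pending tasks sit in an Amazon SQS queue; the queue URL, sizing constants, and the launch callable (for example, the launch_ephemeral_workers sketch shown earlier) are assumptions, and a production scheduler would add cooldowns, scale-in logic, and error handling.

```python
import boto3
from typing import Callable, List

sqs = boto3.client("sqs", region_name="us-east-1")

# Hypothetical queue URL and sizing assumptions for illustration only.
TASK_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/hpc-tasks"
TASKS_PER_INSTANCE = 500     # rough throughput of one worker per polling interval
MAX_BURST_INSTANCES = 200    # cap on additional cloud capacity

def pending_tasks() -> int:
    """Return the approximate depth of the task queue."""
    attrs = sqs.get_queue_attributes(
        QueueUrl=TASK_QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])

def burst_if_needed(current_workers: int, launch: Callable[[int], List[str]]) -> int:
    """Request additional cloud workers when the backlog exceeds current capacity."""
    backlog = pending_tasks()
    required = min(-(-backlog // TASKS_PER_INSTANCE), MAX_BURST_INSTANCES)
    shortfall = required - current_workers
    if shortfall > 0:
        # For example, pass the launch_ephemeral_workers sketch shown earlier.
        launch(shortfall)
    return max(shortfall, 0)
```

Run on a short schedule (for example, an EventBridge rule invoking a small function every minute), this gives the demand-based behavior described above; a matching scale-in path would terminate idle workers as the queue drains.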
As customers build confidence in their ability to supplement existing capacity with cloud-based instances, they often decide to complete a migration. However, with existing on-premises hardware still available, customers want to extract the remaining value of that infrastructure before it can be decommissioned. In this case, it can make sense to provision a new strategic grid, with all of the same scheduler components, in the cloud and retain the existing on-premises grid. It's then left to the upstream clients to direct workloads accordingly, switching to the cloud-based grid as the on-premises capacity is gradually retired.

When the migration is complete and all HPC workloads are running in the cloud, the on-premises infrastructure can be removed. At this point you have completed a Rehosting approach. With your infrastructure in the cloud, you then have the flexibility to look at Replatforming or Refactoring your environment. The ability to build entirely new architectures in the cloud alongside existing production systems means that new approaches can be fully tested before they're put into production.

One antipattern that's occasionally proposed by customers involves platform stacking. In this approach, solutions such as virtualization and/or container platforms are placed under the HPC platform to try to create portability or parity between cloud-based systems and on-premises systems. This approach can have several disadvantages:

• Computational inefficiency – By adding more layers between the analytics binaries and the CPUs, computational efficiency is inevitably degraded as CPU cycles are consumed by the abstraction layers.

• Licensing costs – HPC environments are large and continue to grow. Though enterprise licenses can keep the upfront costs of using these technologies very low, the large number of CPU cores involved in HPC workloads can mean significant additional costs when the licenses are due for renewal.

• Management overhead – In the simplest approach, an Amazon EC2 instance can be created on demand using an Amazon Linux 2 AMI. This AMI is patched and up to date, and because it exists for just a few hours, it requires no further management. However, by building HPC stacks on top of other abstractions, those long-running layers need patching and upgrading, and when multiple layers are involved, the scope for disruption through planned maintenance or an unplanned outage increases significantly.

• Scaling challenges – Amazon EC2 instances can be available very quickly, on demand. If scaling out involves the creation of a complex stack before processes can run, this adds to the billed time of the instance before useful work can be done. In worst-case scenarios, there can be a temptation to leave large numbers of instances running so that they're available if additional workloads arise.

• Optimization challenges – HPC systems are already complex, especially when supporting huge volumes of variable workloads with different CPU and memory requirements. Knowing where CPU and memory resources are consumed is vital to identifying bottlenecks or debugging failures. If an HPC platform is based on a series of abstraction layers, this can introduce additional variables that make it difficult to see where inefficiencies exist, and as a result they might never be found.

• Security challenges – Securing a more complex stack can be challenging because there are more components to configure, monitor, and maintain to ensure the integrity of the system.

By defining portability in terms of a virtual machine image or a Docker image, you can find a good balance of portability while offsetting some of these disadvantages through the use of cloud-native virtualization with Amazon EC2 and/or container management solutions such as Amazon ECS and Amazon EKS, especially when combined with AWS Fargate. Keeping HPC systems as simple as possible provides the best performance at the lowest cost. Most HPC solutions are already platforms by design and offer portability through simple deployment patterns to standard operating systems.
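If portability is defined at the container image level, as suggested above, the unit of deployment stays simple while host management is delegated to a managed service. The sketch below registers a hypothetical analytics-engine container as an ECS task and runs it on Fargate; the cluster name, image URI, IAM role, and networking values are placeholders, not anything prescribed by this paper.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Hypothetical identifiers for illustration only.
CLUSTER = "hpc-grid"
IMAGE_URI = "123456789012.dkr.ecr.us-east-1.amazonaws.com/analytics-engine:1.0"
EXECUTION_ROLE = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
SUBNETS = ["subnet-0123456789abcdef0"]
SECURITY_GROUPS = ["sg-0123456789abcdef0"]

def register_engine_task() -> str:
    """Describe the analytics engine as a Fargate task definition."""
    resp = ecs.register_task_definition(
        family="analytics-engine",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        executionRoleArn=EXECUTION_ROLE,
        cpu="4096",       # 4 vCPU
        memory="16384",   # 16 GB
        containerDefinitions=[{
            "name": "engine",
            "image": IMAGE_URI,
            "essential": True,
            "command": ["run-task", "--package", "pricing_lib", "--version", "1.0"],
        }],
    )
    return resp["taskDefinition"]["taskDefinitionArn"]

def run_engine(task_def_arn: str, count: int = 1):
    """Launch one or more engine containers without managing any hosts."""
    return ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=task_def_arn,
        launchType="FARGATE",
        count=count,
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": SUBNETS,
            "securityGroups": SECURITY_GROUPS,
            "assignPublicIp": "DISABLED",
        }},
    )
```

The design point is that the only long-lived artifact is the container image itself; there is no additional platform layer to patch, license, or secure.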
Conclusion

AWS has a long history of helping customers from various industries, including financial services, to optimize their HPC workloads. This experience, gained over many years from customers with diverse requirements, has directly contributed to the products and services offered today and will continue to do so.

AWS regularly accommodates very large-scale requests for Amazon EC2 instances, and some of these clusters are large enough to be recognized among the world's largest supercomputers. For example, a group of researchers from Clemson University created a high performance cluster on the AWS Cloud using more than 1.1 million vCPUs on Amazon EC2 Spot Instances running in a single AWS Region. This cluster was used to study how human language is processed by computers, by analyzing over 500,000 documents.

AWS also partnered with TIBCO to demonstrate the creation of a 1.3 million vCPU grid on AWS using Amazon EC2 Spot Instances. They were able to secure 61,299 instances in total for the test, which ran sample calculations based on the Strata open-source analytics and market risk library from OpenGamma and was set up with their assistance. TIBCO now offers their DataSynapse GridServer Manager scheduler via AWS Marketplace as a pay-as-you-go offering.

The PathWise HPC solution from professional services firm Aon allows (re)insurers and pension funds to rapidly solve key insurance challenges. The platform relies on cloud compute capacity from AWS and recently moved to Amazon EC2 P3 instances powered by NVIDIA V100 Tensor Core GPUs. These GPUs enable PathWise to run immense calculations in parallel, completing in seconds or minutes analysis that can take days or weeks in traditional systems. Standard Chartered cut their grid costs by 60% by leveraging Amazon EC2 Spot Instances, and recently DBS Bank shared their architecture for a scalable serverless compute grid based on AWS technologies.

HPC platforms are crucial enablers for many different types of financial services organizations, including capital markets, insurance, banking, and payments. However, as demands on these platforms increase as a result of regulatory requirements, it's clear that the traditional approaches to provisioning HPC infrastructure are inefficient and ultimately unsustainable. Constraints on capital and capital expenditure further compound the challenge. By migrating these systems to AWS, customers benefit not only from a wide variety of compute instances and relevant services, but also from a fundamental change in the delivery of compute capacity. This new approach offers tremendous flexibility, both in the management of workloads that vary day to day and in the overall approach to cost optimization, security, availability, and operations.

HPC workloads already have much in common with stateless function-as-a-service architectural patterns. Just as financial services moved from local calculations to clusters and into grids, they are starting to explore decentralized serverless approaches. As scaling becomes transparent, bottlenecks will continue to be removed until processing becomes near real time.

If you face the scale, cost, and capacity challenges of managing a high performance computing system today, AWS has a number of services and partner relationships that can help. To learn more, contact AWS Financial Services through the AWS Financial Services – Contact Sales form.

Contributors

Contributors to this document include:
• Alex Kimber, Solutions Architect, Global Financial Services, Amazon Web Services
• Richard Nicholson, Solutions Architect, Global Financial Services, Amazon Web Services
• Carlos Manzanedo Rueda, Specialist Solutions Architect, Amazon Web Services
• Ian Meyers, Solutions Architect, Head of Technology, Amazon Web Services

Further reading

For additional information, see:
• AWS Well-Architected Framework
• AWS Well-Architected Framework – HPC Lens
• AWS Well-Architected Framework – Financial Services Industry Lens
• AWS HPC Blog

Glossary of terms

The following are the definitions for the terms that appear throughout this document.

Binary package – A set of binaries that run tasks. A typical HPC environment can support multiple packages of various versions running in parallel. The package and version required are defined by the client or risk system at the point of job submission. These packages typically contain proprietary models that are built by the firm's quantitative analysis teams (quants) and are often the subject of intellectual property concerns, as they can form competitive differentiation.

Broker – A component of a typical HPC/grid platform. The broker is typically responsible for coordinating tasks and/or client connections to compute instances. As grids and task volumes grow, the number of brokers is typically scaled out to ensure throughput can be maintained.

Client – A software system, accessed by a user, that generates job requests and presents results. In financial services this is generally some form of risk management system (RMS).

Engine – A software component responsible for invoking the calculation of a task using a given binary package. A compute instance can run multiple engines in parallel, perhaps one or more within each slot.

Grid controller – A component of a typical HPC/grid platform. The controller is responsible for tracking the state of compute instances and brokers, and for hosting API or GUI interfaces and metrics. The controller host is generally not involved in the scheduling of individual tasks.

Instance – An Amazon EC2 virtual server. Each instance has a number of available virtual CPUs (vCPUs) and an allocation of memory.

Job (or session) – The definition of a series of one or more related tasks. For example, a job might define a series of scenarios and how they are subdivided into a set of tasks.

Job data – The set of data that is required in addition to the task metadata. Typically, job data is passed to the compute instance as a reference, bypassing the scheduler itself. In investment banking applications, job data is generally a combination of static reference data (such as holiday calendars used to calculate trade expiration dates), market data (used to build the market environment), and trade data (referencing the trade or portfolio of trades that are the focus of the calculation).

Quantitative analysts / quants – The team associated with the development of mathematical models to predict the behavior of financial products.

Risk management system (RMS) – To improve oversight of risk calculations, centralize operations, and improve efficiency, financial services firms are increasingly leveraging risk management systems that sit between the users and the HPC platform.

Scheduler / grid scheduler – A software component responsible for managing the lifecycle of tasks through receipt, allocation to compute instances, collection of results and metrics, and management processes.

Slot – A unit of compute currency used to approximate homogeneity within a heterogeneous compute environment. For example, a slot might be defined as two CPU cores and 8 GB of RAM, and would be considered interchangeable regardless of whether the compute instance was able to provide two or 32 slots.
Task – A unit of work to be scheduled to a compute instance. A task can define external dependencies (such as market and reference data). In recursive workload patterns, a parent task can spawn a child job or a series of other child tasks.

Thread – An engine runs either single-threaded or multi-threaded processes. Ideally, each thread runs on a separate vCPU to minimize the overhead of CPU context switching.

User – In financial services, a user is typically a member of the front office: either a trader managing positions or a desk head who wants oversight and ensures successful internal or external reporting is completed.

Document versions

August 24, 2021 – Updates to reflect AWS service improvements, more modern and inclusive terminology, and new cloud-native architectures
September 2019 – Updates to services, diagrams, and topology
January 2016 – Updates to services and topology
January 2015 – Initial publication
Guidance for Trusted Internet Connection (TIC) Readiness on AWS
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Guidance for Trusted Internet Connection (TIC) Readiness on AWS February 201 6 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 2 of 57 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 3 of 57 Contents Abstract 4 Introduction 4 FedRAMPTIC Overlay Pilot 5 Pilot Objectives Process and Methods 7 Pilot Results 8 Customer Implementation Guidance 9 Connection Scenarios 9 AWS Capabilities and Features 13 Conclusion 17 Contributors 17 APPENDIX A: 18 Control Implementation Summary 18 APPENDIX B: 21 Implementation Guidance 21 APPENDIX C: 32 TIC Capabilities Matrix 32 Notes 57 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 4 of 57 Abstract The Trusted Internet Connection (TIC) Initiative1 is designed to reduce the number of United States Government (USG) network boundary connections including Internet points of presence (POPs) to optimize federal network services and improve cyber protection detection and resp onse capabilities In the face of an everincreasing body of laws and regulations related to information assurance USG customers wanting to move to the cloud are confronted with security policies guidelines and frameworks that assume onpremises infrastructure and that do not align with cloud design principles Today TIC capa bilities are not available “in the cloud” This document serves as a guidance for TIC readiness on the Amazon Web Services (AWS) cloud Introduction USG agencies must route connections for the increasing number of mobile users accessing cloud services via smart phones and tablets through their agency network2 In alignment with this trend toward mobile use USG employees and contractors now want the ability to access cloudbased content anytime anywhere and with any device Agencies want to leverage compliant cloud service providers (CSPs) for agile development and rapid delivery of modern scalable and costoptimized applications without compromising on either their information assurance posture or the capabilities of the cloud In its current form a TICcompliant architecture precludes direct 
access to applications running in the cloud Users are required to access their compliant CSPs through an agency TIC connection either a TIC Access Provider (TICAP) or a Managed Trusted IP Service (MTIPS) provider This architecture often results in application latency and might strain existing government infrastructure In response to these challenges the TIC program recently proposed a Draft Federal Risk and Authorization Management Program (FedRAMP) –TIC Overlay3 that provides a mapping of National Institute of Standards and Technology (NIST) 80053 security controls to the required TIC capabilities Figure 1 below shows the challenge mobile applications face with the current state of the TIC architecture; it also shows a proposed future state of the architecture contemplated by the Department of Homeland Security (DHS) TIC Program Office and General Services Administration (GSA) FedRAMP Program This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 5 of 57 Office This new approach enables direct access to applications running in a compliant CSP Through a pilot program DHS and GSA sought to understand whether the objectives of the TIC initiative could be achieved in a cloud environment Figure 1: TIC Pilot Objective FedRAMPTIC Overlay Pilot In May of 2015 GSA and DHS invited AWS to participate in a FedRAMPTIC Overlay pilot The purpose of the pilot was to determine whether the proposed TIC overlay on the FedRAMP moderate security control baseline was achievable In collaboration with GSA and DHS AWS assessed how remote agency users could use the TIC overlay to access cloudbased resource s and whether existing AWS capabilities would allow an agency to enforce TIC capabilities The scope of the pilot leveraged the existing AWS FedRAMP Moderate authorization Participants in the pilot included a USG customer the DHS TIC Program Management Office (PMO) the GSA FedRAMP PMO and AWS The alignment to FedRAMP and TIC control objectives was evaluated and administered by an accredited FedRAMP thirdparty assessment organization (3PAO) Table 1 below indicates the count of TIC capabilities included in the overlay pilot Appendix C provides the supporting data for Table 1 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 6 of 57 TIC Capabilities Group Total Description Original Capabilities 74 Total TIC v20 Reference Architecture Capabilities Excluded Capabilities 4 TIC Capabilities determined by DHS as excluded from Draft FedRAMP – TIC Overlay These capabilities are not applicable to FedRAMP Cloud Service Provider environments and are not included in the FedRAMP – TIC Overlay baseline Mapped Capabilities 70 Original Capabilities less Excluded Capabilities These define the baseline FedRAMP – TIC Overlay as defined in the Draft FedRAMP – TIC Overlay Control Mapping Deferred Capabilities 13 Mapped Capabilities determined to be specific to the agency (TIC Provider) and removed from the initial s cope of the assessment as directed by DHS TIC and GSA FedRAMP PMO Included Capabilities 57 Mapped Capabilities less Deferred Capabilities These capabilities represent the evaluation target of the pilot Table 1: FedRAMP Associated TIC Capabilities Evaluated The following items were also included in 
the assessment scope: Customer AWS Management Console Customer services Amazon Simple Storage Service (Amazon S3) Amazon Elastic Compute Cloud (Amazon EC2) Amazon Elastic Block Store (Amazon EBS) Amazon Virtual Private Cloud (Amazon VPC) AWS Identity and Access Management (IAM) Customer thirdparty tools and AWS ecosystem providers used to enforce TIC capabilities AWS supporting infrastructure Control r esponsibilities shown in Table 2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 7 of 57 Responsible Party Total Description Customer 16 TIC capabilities determined to be solely the responsibility of the AWS customer Shared 36 TIC capabilities determined to be a shared responsibility between the customer and AWS AWS 5 TIC capabilities determined to be solely the responsibility of AWS TIC Capabilities Evaluated 57 Total number of candidate capabilities evaluated as part of the pilot Table 2: Control Responsibilities Pilot Objectives Process and Methods To test the overlay AWS worked with a FedRAMPaccredited 3PAO and a USG customer to produce results for the following testing objectives: Identify whether and how agencies can use TIC overlay controls vi a mapping to the FedRAMP Moderate control baseline to provide remote agency users access to AWS while enforcing TIC compliance Determine whether the required capabilities exist within AWS to implement and enforce TIC compliance Determine the allocation of responsibility for implementing and enforcing TIC compliance An initial analysis of the TIC overlay controls by AWS revealed that over 80 percent of the TIC capability requirements map directly to one or more existing FedRAMP Moderate controls satisfied under the current AWS FedRAMP Authority to Operate (ATO) With the control mapping inhand and in collaboration with our 3PAO AWS developed a TIC security requirements traceability matrix (SRTM) that included control responsibilities The results from this exercise shown in Table 2 above demonstrated that only 16 TIC capabilities would rest solely with the customer Next our 3PAO proceeded with the following testing process and methods: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 8 of 57 Leveraged previous writeups evidence security documentation and interviews from the existing AWS FedRAMP Moderate ATO to determine the satisfaction of security controls that were either the responsibility of AWS or a shared responsibility Developed a customer test plan for the controls that were either a customer responsibility or a shared responsibility using guidance provided by AWS Certified Solutions Architects Tested the covered AWS services (IAM Amazon EC2 Amazon S3 Amazon EBS and Amazon VPC) and supporting infrastructure including features functionality and underlying components that assist with enforcing TIC capabilities Tested implementation of shared and customer responsibilities using a Customer Test Plan and a TIC Pilot SRTM Interviewed the USG customer on internal policies procedures and security tools used to enforce TIC capabilities as defined by DHS Collected evidence from the customer to complete assessment of the customer and shared responsibility controls Pilot Results After completion of the assessment phase 
of the pilot, roughly two dozen of the included TIC capabilities required additional discussion with the DHS TIC PMO. The outstanding items were reviewed sequentially, and final dispositions were recorded based on DHS TIC PMO direction. Table 3 below summarizes the results of the pilot assessment and the final disposition discussion, as synthesized by AWS.

Table 3: Synthesized FedRAMP TIC Associated Capability Dispositions
• Implemented – 43 – TIC capabilities determined to be satisfied, or able to be satisfied, on AWS
• Gap – 1 – TIC capability determined to require further evaluation on AWS by the FedRAMP PMO and DHS
• Not Assessed – 13 – TIC capabilities determined to be not applicable to a CSP or not included in the customer environment
• FedRAMP TIC Capabilities Evaluated – 57 – Total number of candidate capabilities evaluated as part of the pilot

Customer Implementation Guidance

Based on the results of the pilot and lessons learned, AWS is providing guidance on both relevant connection scenarios and the use of AWS capabilities and features that align with the FedRAMP-TIC Overlay work described above. Following the conclusion of the overlay pilot, and pending official guidance from the FedRAMP PMO and TIC PMO, AWS designed the next sections to provide USG agencies and contractors with information to assist in the development of "TIC Ready" architectures on AWS. As additional reference, Appendix A contains a Control Implementation Summary (CIS) showing TIC Capability to FedRAMP Control mappings and includes responsible-party information. Appendix B provides per-control guidance for AWS and ecosystem capabilities that enable customer compliance with required TIC capabilities. Finally, Appendix C contains a mapping of TIC Capabilities to their AWS-synthesized dispositions.

Connection Scenarios

In this section we highlight common connection scenarios that relate to TIC compliance. For each scenario we provide a brief explanation and a high-level architecture diagram.

Public Web and Mobile Applications (Not Included in Pilot)

This use case covers public, unauthenticated web and mobile applications. These applications are accessible via the Internet, typically over HTTPS, by the general public. Users access these web and mobile applications using their choice of web browser and device. They can access these applications from their home, any public WiFi network, or via their mobile devices. These applications are deployed in one or more AWS regions. Figure 2 below illustrates this connection scenario.

Figure 2: Public Web and Mobile Applications (Unauthenticated)

In this architecture, an Internet Gateway (IGW) provides Internet connectivity to two or more customer-defined public subnets across two or more Availability Zones (Multi-AZ) in the VPC. An Elastic Load Balancing (ELB) load balancer is placed in these public subnets. A web tier is configured within an Auto Scaling group, leveraging the load balancer to provide a continuously available web front end. The web tier securely communicates with back-end resources such as databases and other persistent storage. The environment is completely contained within the cloud.
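The public-facing architecture just described can be expressed in a few API calls. The sketch below creates a VPC with two public subnets in different Availability Zones and attaches an Internet Gateway; the CIDR ranges and region are illustrative assumptions, and the load balancer, Auto Scaling group, and back end would be layered on top in the same way (for example, via a CloudFormation template).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Illustrative CIDR ranges; not taken from this paper.
VPC_CIDR = "10.0.0.0/16"
PUBLIC_SUBNETS = [("10.0.0.0/24", "us-east-1a"), ("10.0.1.0/24", "us-east-1b")]

def build_public_network():
    """Create a VPC, two public subnets (Multi-AZ), and an Internet Gateway."""
    vpc_id = ec2.create_vpc(CidrBlock=VPC_CIDR)["Vpc"]["VpcId"]

    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # Route all Internet-bound traffic from the public subnets through the IGW.
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id,
                     DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw_id)

    subnet_ids = []
    for cidr, az in PUBLIC_SUBNETS:
        subnet_id = ec2.create_subnet(
            VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az
        )["Subnet"]["SubnetId"]
        ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
        subnet_ids.append(subnet_id)

    return vpc_id, subnet_ids
```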
Public Web and Mobile Applications Requiring Authentication: "All In" Deployments

This use case covers authenticated web and mobile applications used in an "all-in cloud" deployment. These applications are accessible via the Internet, typically over HTTPS, by agency users. Users access these web and mobile applications from their home, any public WiFi network, or agency networks, using either personal or agency-issued electronic devices. These applications are deployed in one or more AWS regions and leverage role-based authentication to arbitrate access to application functionality. The following are examples of public websites with authentication requirements:

• System for Award Management (SAM)
• GSA Advantage
• OMB Max Portal
• Cloud-based software as a service (SaaS) offerings (e.g., email)

Figure 3 below illustrates this connection scenario. In this architecture, an IGW provides Internet connectivity to two or more customer-defined public subnets across multiple Availability Zones in the VPC. An ELB load balancer is placed in these public subnets. A web tier is configured within an Auto Scaling group, leveraging the ELB load balancer to provide a continuously available web front end. This web tier securely communicates with other back-end resources, most notably the back-end identity store used for role-based authentication. The environment is completely contained within the cloud.

Figure 3: Public Web and Mobile Applications, Authenticated, All In

Public Web and Mobile Applications Requiring Authentication: "Hybrid" Deployments

This use case covers authenticated web and mobile application use where a portion of the environment resides within a customer datacenter. These applications are accessible via the Internet, typically over HTTPS, by agency users. Users access these web and mobile applications from their home, any public WiFi network, or agency networks, using either personal or agency-issued electronic devices. These applications are deployed in one or more Amazon Web Services (AWS) regions and one or more customer datacenters, and they leverage role-based authentication to arbitrate access to application functionality.

In the hybrid deployment scenario, a portion of the application architecture, typically the public web presence, resides in the cloud, while another portion, typically sensitive data sources, resides in an agency datacenter. This scenario is most commonly seen when an agency wishes to maintain its identity and/or data stores outside of the cloud environment. Connectivity between the in-cloud portions of the application and the controlled on-premises components is achieved using AWS Direct Connect or the VPN service in conjunction with a TICAP or MTIPS provider. In this way, data flow between the customer's in-cloud and on-premises services is seen by the TIC.
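For the hybrid connectivity described above, the AWS side of a site-to-site VPN can be provisioned with a few calls. The sketch below uses a placeholder public IP for the agency's on-premises VPN device and a placeholder VPC ID; it creates a customer gateway, a virtual private gateway, and the VPN connection over which traffic can be routed back through the agency's TICAP or MTIPS path.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values for illustration only.
ONPREM_DEVICE_IP = "203.0.113.10"   # public IP of the agency VPN device
ONPREM_BGP_ASN = 65000
VPC_ID = "vpc-0123456789abcdef0"

def build_site_to_site_vpn() -> str:
    """Create the AWS side of a site-to-site VPN back to the agency network."""
    cgw_id = ec2.create_customer_gateway(
        Type="ipsec.1",
        PublicIp=ONPREM_DEVICE_IP,
        BgpAsn=ONPREM_BGP_ASN,
    )["CustomerGateway"]["CustomerGatewayId"]

    vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId=VPC_ID)

    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw_id,
        VpnGatewayId=vgw_id,
        Options={"StaticRoutesOnly": False},  # use BGP for route exchange
    )
    return vpn["VpnConnection"]["VpnConnectionId"]
```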
Figure 4 below illustrates this connection scenario.

Figure 4: Public Web and Mobile Applications, Authenticated, Hybrid

AWS Capabilities and Features

To achieve TIC compliance on AWS, we recommend using the following AWS capabilities and features and following our published best practices to secure the resources.

AWS Identity and Access Management (IAM) is a web service that enables IT organizations to manage multiple users, groups, roles, and permissions for AWS services such as Amazon EC2, Amazon Relational Database Service (Amazon RDS), and Amazon VPC. IT can centrally manage AWS service-related resources through IAM policies, using security credentials such as access keys. These access keys can be applied to users, groups, and roles.

AWS CloudFormation is a web service that uses JSON templates within which customers can describe their IT architecture as code. These templates can then be used to launch or create the AWS resources defined within the template; this collection of resources is called a stack. CloudFormation templates allow agencies to programmatically implement controls for new and existing environments. These controls provide comprehensive rule sets that can be systematically enforced.

AWS CloudTrail provides a log of all requests and a history of AWS API calls for AWS resources. This includes calls made by using the AWS Management Console, AWS SDKs, command-line tools (CLI), and higher-level AWS services. For services that support CloudTrail, IT can identify which users and accounts called AWS, the source IP address the calls were made from, and when the calls were made.

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health, and you can use these insights to react and keep your application running smoothly.

CloudWatch Logs can be used to monitor your logs for specific phrases, values, or patterns. For example, you could set an alarm on the number of errors that occur in your system logs, or view graphs of web request latencies from your application logs. You can view the original log data to see the source of the problem if needed. Log data can be stored and accessed for as long as you need using highly durable, low-cost storage, so you don't have to worry about filling up hard drives.
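As an example of the monitoring pattern just described, the sketch below defines a metric filter that counts error lines in a hypothetical application log group and raises a CloudWatch alarm when the count spikes. The log group name, namespace, threshold, and SNS topic are illustrative assumptions, not values from this paper.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical names for illustration only.
LOG_GROUP = "/agency/web-app"
ALARM_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"

# Count log lines containing the word ERROR as a custom metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="application-errors",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "ApplicationErrors",
        "metricNamespace": "Agency/WebApp",
        "metricValue": "1",
    }],
)

# Alarm when more than 50 errors are seen in a five-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="web-app-error-spike",
    Namespace="Agency/WebApp",
    MetricName="ApplicationErrors",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[ALARM_TOPIC_ARN],
)
```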
AWS Config is a managed service that provides an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config, IT can discover existing AWS resources, export a complete inventory of AWS resources with all configuration details, and determine how a resource was configured at any point in time. This facilitates compliance auditing, security analysis, resource change tracking, and troubleshooting. You can use AWS Config Rules to create custom rules used to evaluate controls applied to AWS resources. AWS also provides a list of standard rules that you can evaluate against your AWS resources, such as checking that port 22 is not open in any production security group.

Amazon S3 is storage for the Internet. Amazon S3 is a highly scalable, durable, and available distributed object store designed for mission-critical and primary data storage. Amazon S3 stores objects redundantly on multiple devices across multiple facilities within an AWS region and is designed to protect data and allow access to it even in the case of a failure of a data center. The versioning feature in Amazon S3 allows the retention of prior versions of objects stored in Amazon S3 and also protects against accidental deletions initiated by staff or software error. Versioning can be enabled on any Amazon S3 bucket.

Amazon EC2 is a web service that provides resizable compute capacity in the cloud; it is essentially server instances used to build and host software systems. Amazon EC2 is designed to make web-scale computing easier for developers and customers to deploy virtual machines on demand. The simple web service interface allows customers to obtain and configure capacity with minimal friction and provides complete control of their computing resources. Amazon EC2 changes the economics of computing because it allows enterprises to avoid large capital expenditures by paying only for capacity that is actually used.

Amazon VPC enables the creation of a logically separate space within AWS that can house compute and storage resources, and that can be connected to a customer's existing infrastructure through a virtual private network (VPN), AWS Direct Connect, or the Internet. With Amazon VPC, it is possible to extend existing management capabilities and security services, such as DNS, LDAP, Active Directory, firewalls, and intrusion detection systems, to include private AWS resources, maintaining a consistent means of protecting information whether it resides on internal IT resources or on AWS.

Amazon Glacier is an extremely low-cost storage service that provides secure, durable, and flexible storage for data backup and archival. With Amazon Glacier, customers can reliably store their data for as little as $0.007 per gigabyte per month. Amazon Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so that they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

Amazon VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. Flow logs can help you with a number of tasks; for example, you can troubleshoot why specific traffic is not reaching an instance, which in turn can help you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.
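The flow log capability described above can be turned on per VPC with a single call. The sketch below enables flow logs for a hypothetical VPC, delivering records to a CloudWatch Logs group; the VPC ID, log group name, and IAM role ARN are placeholders for resources that would already exist in the account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder identifiers for illustration only.
VPC_ID = "vpc-0123456789abcdef0"
LOG_GROUP = "agency-vpc-flow-logs"
DELIVERY_ROLE_ARN = "arn:aws:iam::123456789012:role/flow-logs-to-cloudwatch"

# Capture accepted and rejected traffic for every network interface in the VPC.
response = ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=[VPC_ID],
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName=LOG_GROUP,
    DeliverLogsPermissionArn=DELIVERY_ROLE_ARN,
)
print(response["FlowLogIds"])
```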
This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 17 of 57 Conclusion AWS services features and our partner ecosystem deliver a suite of capabilities that assist in delivering “TIC Ready” cloud architectures Through collaboration with a USG customer the DHS TIC Program Management Office (PMO) the GSA FedRAMP PMO and our accredited FedRAMP thirdparty assessment organization (3PAO) AWS has demonstrated how customers might enforce many of the capabilities prescribed by TIC While the FedRAMP TIC Overlay is being finalized using the evidence resulting from our TIC Mobile assessment USG customers can implement the TIC capabilities as part of their virtual perimeter protection solution using functionality provided by AWS with a clear definition of the customer responsibility for implementation of the additional TIC capabilities Contributors The following individuals and organizations contributed to this document: Jennifer Gray US Public Sector Compliance Architect AWS Security Alan Halachmi Principal Solutions Architect Amazon Web Services Nandakumar Sreenivasan Senior Solutions Architect Amazon Web Services This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 18 of 57 APPENDIX A: Control Implementation Summary TIC v20 Associated FedRAMP Security Controls FedRAMP Control Mapping RESPONSIBILITY ID ID TMAU01 AC6 (1) SHARED AC6 (2) IA1 IA2 IA2 (1) IA2 (2) IA2 (3) IA2 (8) IA2 (11) IA2 (12) IA3 IA4 IA4 (4) IA5 IA5 (1) IA5 (2) IA5 (3) IA5 (6) IA5 (7) IA5 (11) IA6 IA7 IA8 TMCOM02 AC8 SHARED CA3 PL4 TMDS01 AU4 CUSTOMER TMDS02 CP2 CUSTOMER CP10 TMDS03 AU1 SHARED SI4 N/A TMDS04 AU1 SHARED TMDS05 N/A CUSTOMER TMLOG01 AU8 (1) SHARED TMLOG02 AU3 SHARED TMLOG03 AU11 SHARED TMLOG04 AU11 SHARED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 19 of 57 TIC v20 Associated FedRAMP Security Controls FedRAMP Control Mapping RESPONSIBILITY ID ID TMPC06 N/A SHARED TMTC01 CP8 AWS CP8 (1) CP8 (2) TMTC02 CM7 SHARED TMTC03 CP11 SHARED TMTC04 SC20 CUSTOMER SC21 SC22 TMTC05 IR8 SHARED TMTC06 IR1 SHARED TMTC07 CP2 SHARED TOMG01 CM8 SHARED TOMG02 CM3 SHARED CM9 TOMG04 CP2 SHARED TOMG07 CM8 SHARED TOMG08 N/A AWS TOMG09 N/A AWS TOMG10 N/A AWS TOMG11 N/A AWS TOMON02 CA2 SHARED TOMON03 AU6 (1) SHARED TOMON04 AU1 SHARED AU2 TOMON05 IR3 CUSTOMER TOREP01 CA7 SHARED TOREP02 CA7 SHARED TOREP03 CA7 SHARED TOREP04 IR6 SHARED TORES01 IR8 SHARED TORES02 SI2 SHARED TORES03 SC5 SHARED TSCF01 SC7 SHARED SC7 (8) TSCF02 SC7 SHARED SC7 (8) TSCF03 SC7 SHARED SC7 (8) TSCF04 SC7 CUSTOMER SI3 SI8 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 20 of 57 TIC v20 Associated FedRAMP Security Controls FedRAMP Control Mapping RESPONSIBILITY ID ID TSCF05 SI4 CUSTOMER TSCF06 SC8 (1) CUSTOMER TSCF07 SC8 (1) CUSTOMER TSCF08 SI4 CUSTOMER TSCF09 IA9 SHARED TSCF10 IA5 SHARED TSCF13 AU3 (1) CUSTOMER SC7 SC20 SC21 SC22 TSINS01 AU1 CUSTOMER AU6 AU6 (1) SC7 TSPF01 AC4 CUSTOMER SC7 TSPF03 SC7 SHARED TSPF04 
SC7 SHARED TSPF06 AU3 (1) SHARED TSRA01 AC17 CUSTOMER AC17 (2) IA2 (2) SC7 (7) TSRA02 AC20 CUSTOMER CA3 CA3 (3) CA3 (5) TSRA03 AC20 CUSTOMER This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 21 of 57 APPENDIX B: Implementation Guidance TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TMAU01 User Authentication SHARED Leverage IAM and its multi factor authentication capabilities TMCOM02 TIC and Customer SHARED Leverage IAM Policies to control and to restrict access to AWS resources TMDS01 Storage Capacity CUSTOMER Leverage AWS Marketplace providers for packet capture and analysis Leverage VPC Flow Logs to capture data flow metadata Leverage CloudWatch Logs with appropriate log retention for log aggregation Enable logging with AWS service s (eg S3 logs ELB logs) TMDS02 Back up Data CUSTOMER Leverage AWS CloudFormation to template the environment Leverage EC2 AMI Copy S3 versioning S3 cross region replication S3 MFA delete and S3 life cycle policies for backup Leverage EC2 autoscaling to recovery from transient hardware failures TMDS03 Data Ownership SHARED Administrative control This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 22 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TMDS04 Data Attribution & Retrieval SHARED Leverage S3 buckets with IAM policies and S3 bucket policies to segregate access to data Configure services such as CloudTrail to log to the appropriate bucket If needed leverage S3 Events to initiate data processing workflows Leverage CloudWatch Logs with IAM policies to consolidate or segregate agency data as required Implement VPC Flo w Logs on all VPCs Enable Cloud Trail logs Enable AWS Config Enable ELB logs Enable S3 logs TMDS05 DLP CUSTOMER Leverage AWS Marketplace providers for DLP technologies Leverage S3 buckets with versioning enabled and MFA delete Enable S3 cross region replication of critical or sensitive data into another AWS account in another region Leverage Glacier Vault Lock for data retention TMLOG01 NTP Server SHARED Configure approved NTP providers within the customer environment TMLOG02 Time Stamping SHARED Configure approved NTP providers within the customer environment This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 23 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TMLOG03 Session Traceability SHARED Leverage S3 buckets with appropriate lifecycle policies Configure services such as CloudTrail to log to the appropriate bucket If needed leverage S3 Events to initiate data processing workflows Leverage CloudWatch Logs to receive AWS specific and customer service logs with appropriate retention policies Configure services such as VPC Flow Logs to log to the appropriate Log S tream Leverage AWS Marketplace offerings for log aggregation and analysis TMLOG04 Log Retention SHARED Leverage S3 lifecycle policies Leverage Glacier Vault Lock TMPC06 Geographic Diversity SHARED AWS provides geographic diversity within a region Customers must leverage multiple 
Availability Zones to achieve this diversity Customers may also elect to deploy multi region applications TMTC01 Route Diversity AWS AWS provides route diversity intraregion inter region and for Internet access TMTC02 Least Functionality SHARED Leverage IAM Policies to restrict access to AWS resources Leverage Network Access Control Lists (NACLs) for course grained stateless packet filtering Leverage Security Groups (SGs) for fine grained stateful flow filtering Consider a separation of duties approach for management of NACLs and SGs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 24 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TMTC03 IPv6 SHARED Contact AWS Sales Representative regarding current IPv6 offerings TMTC04 DNS Authoritative Servers CUSTOMER Leverage customer managed DNS systems TMTC05 Response Authority SHARED Leverage AWS access and flow control capabilities including IAM Network Access Control Lists and Security Groups Leverage AWS Marketplace providers TMTC06 TIC Staffing SHARED AWS provides network and security operations continuously Leverage AWS log sources (eg S3 logs ELB logs VPC Flow Logs CloudTrail Config etc) and customer specific logs (eg OS logs application logs etc) to assess network and security operation Customer designates security points of contact within a customer account such that AWS may communicate detected anomalies This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 25 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TMTC07 Response Access SHARED AWS provides network and security operations continuously Leverage AWS log sources (eg S3 logs ELB logs VPC Flow Logs CloudTrail Config etc) and customer specific logs (eg OS logs application logs etc) to assess network and security operation Customer designates security points of contact within a customer account such that AWS may communicate detected anomalies Leverage AWS CloudFormation to template the environment Leverage EC2 AMI Copy S3 versioning S3 cross region replication S3 MFA delete and S3 life cycle policies for backup Leverage EC2 Auto Scaling to recovery from transient hardware failures TOMG01 System Inventory SHARED Leverage AWS Config Leverage resource level tags TOMG02 Change & Configuration Management SHARED AWS maintains a formalized change and configuration management system Customers are responsible for these processes within their AWS environment TOMG04 Contingency Planning SHARED Leverage AWS CloudFormation to template the environment Leverage EC2 AMI Copy S3 versioning S3 cross region replication S3 MFA delete and S3 life cycle policies for backup Leverage EC2 Auto Scaling to recovery from transient hardware failures Plan f or alternate region recovery This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 26 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TOMG07 Network Inventory SHARED Leverage AWS Config Leverage resource level tags TOMG 08 Service Level Agreement AWS AWS provides 
service level information through published artifacts including the AWS website TOMG 09 Tailored Service Level Agreement AWS AWS provides elasticity natively as a cloud service provider AWS services can expand/contract based on customer configuration and demand TOMG10 Tailored Security Policies AWS AWS allows customers to customize t heir cloud environment including security policies TOMG11 Tailored Communications AWS AWS provides services and features that enable customers to tailor communication processes AWS develops new capabilities based on customer demand TOMON02 Vulnerability Scanning SHARED Leverage pre authorized products from the AWS Marketplace and/or submit request to AWS for customer executed vulnerability scans TOMON03 Audit Access SHARED Leverage IAM to control and to restrict access to AWS resources This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 27 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TOMON04 Log Sharing SHARED Leverage S3 buckets with IAM policies and S3 bucket policies to segregate access to data Configure services such as CloudTrail to log to the appropriate bucket If needed leverage S3 Events to initiate data processing work flows Leverage CloudWatch Logs with IAM policies to consolidate or segregate agency data as required Implement VPC Flow Logs on all VPCs Enable CloudTrail logs Enable AWS Config Enable ELB logs Enable S3 logs TOMON05 Operational Exercises CUSTOMER Customer Responsibility Contact your AWS Sales Representative regarding Security Incident Response Simulation (SIRS) Game Day offering TOREP01 Customer Service Metrics SHARED AWS maintains customer service metrics Customers must provide customer service for their application Customer secures an AWS Support plan and designates an account point of contact such that AWS customer service may engage as required TOREP02 Operational Metrics SHARED AWS maintains operational metrics Customers must provide operational metrics for their application This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 28 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TOREP03 Customer Notification SHARED AWS provides network and security operations continuously Leverage AWS log sources (eg S3 logs ELB logs VPC Flow Logs CloudTrail Config etc) and customer specific logs (eg OS logs application logs etc) to assess network and security operation Customer designates security points of contact within a customer account such that AWS may communicate detected anomalies Customers provide like capabilities for users of applications they operate on AWS TOREP04 Incident Reporting SHARED AWS provides network and security operations continuously Leverage AWS log sources (eg S3 logs ELB logs VPC Flow Logs CloudTrail Config etc) and customer specific logs (eg OS logs application logs etc) to assess network and security operation Customer designates security points of contact within a customer account such that AWS may communicate detected anomalies Customers provide like capabi lities for users of applications they operate on AWS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides 
page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 29 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TORES01 Response Timeframe SHARED AWS provides network and security operations continuously Leverage AWS log sources (eg S3 logs ELB logs VPC Flow Logs CloudTrail Config etc) and customer specific logs (eg OS logs application logs etc) to assess network and security operation Customer designates security points of contact within a customer account such that AWS may communicate detected anomalies Customers provide their own incident response plan TORES02 Response Guidance SHARED AWS provides network and security operations continuously Leverage AWS log sources (eg S3 logs ELB logs VPC Flow Logs CloudTrail Config etc) and customer specific logs (eg OS logs application logs etc) to assess network and security operation Customer designates security points of contact within a customer account such that AWS may communicate detected anomalies Customers provide their own incident response plan TORES03 Denial of Service Response SHARED Leverage Anti DDoS design patterns described in AWS whitepapers Leverage Elastic Load Balanc ing Leverage Auto Scaling Leverage Network Access Controls Lists Leverage Security Groups Leverage AWS Marketplace providers for appropriate tools This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 30 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TSCF01 Application Layer Filtering SHARED Leverage AWS Marketplace providers for appropriate tools TSCF02 Web Session Filtering SHARED Leverage AWS Marketplace providers for appropriate tools TSCF03 Web Firewall SHARED Leverage AWS Marketplace providers for appropriate tools TSCF04 Mail Filtering CUSTOMER Customer responsibility TSCF05 Agency Specific Mail Filters CUSTOMER Customer responsibility TSCF06 Mail Forgery Detection CUSTOMER Customer responsibility TSCF07 Digitally Signing Mail CUSTOMER Customer responsibility TSCF08 Mail Quarantine CUSTOMER Customer responsibility TSCF09 Crypto graphically authenticated protocols SHARED AWS Direct Connect requires customer use of BGP MD5 authentication TSCF10 Reduce the use of clear text management protocols SHARED Leverage IAM aws:SecureTransport Policy Condition TSCF13 DNS Filtering CUSTOMER Customer responsibility TSINS01 NCPS CUSTOMER Customer responsibility TSPF01 Secure all TIC traffic CUSTOMER Customer responsibility TSPF03 Stateless Filtering SHARED Leverage Network Access Control Lists TSPF04 Stateful Filtering SHARED Leverage Security Groups Leverage AWS Marketplace providers TSPF06 Asymmetric Routing SHARED Implement symmetric routing in to or out from AWS TSRA01 Agency User Remote Access CUSTOMER Customer responsibility Leverage Customer Gateway VPN and Virtual Private Gateway to connect a VPC with a site tosite VPN This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 31 of 57 TIC v20 Associated FedRAMP Security Controls RESPONSIBILITY AWS Feature Mapping TSRA02 External Dedicated Access CUSTOMER Customer responsibility TSRA03 Extranet Dedicated Access CUSTOMER Customer responsibility This paper has been archived 
For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 32 of 57 APPENDIX C: Mapped FedRAMP TIC Capabilities Matrix FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION TMAU01 User Authentication TIC systems and components comply with NIST SP 800 53 identification and authentication controls for high impact systems (FIPS 199) Administrative access to TIC access point devices requires multi factor authentication (OMB M 1111) INCLUDED IMPLEMENTED TMCOM01 TIC and US CERT (TS/SCI) The TICAP has a minimum of three qualified people with TOP SECRET/SCI clearance available within 2 hours 24x7x365 with authority to report acknowledge and initiate action based on TOP SECRET/SCI level information including tear line information with U S CERT Authorized personnel with TOP SECRET/SCI clearances have 24x7x365 access to an ICD 705 accredited Sensitive Compartment Information Facility (SCIF) including EXCLUDED N/A This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 33 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION the following TOP SECRET/SCI communications channels: Secure telephone (STE/STU) an d card authorized for TOP SECRET/SCI and Secure FAX machine Typically personnel with appropriate clearances to handled classified information will include at least the Senior NOC/SOC manager Chief Information Security Officer (CISO) and Chief Inf ormation Officer (CIO) and other personnel as determined by the agency The SCIF may be shared with another agency and should be within 30 minutes of the TIC management location during normal conditions in order for authorized personnel to exchange clas sified information evaluate the recommendations initiate the response and report operational status with US CERT within two hours of the notification TMCOM02 TIC and Customer The Multi Service TICAP secures and authenticates the administrative communications (ie customer service) between the TICAP operator and each TICAP client INCLUDED IMPLEMENTED TMCOM03 TIC and US CERT (SECRET) The TICAP has a minimum of one qualified person with SECRET or higher clearance immediately available on each shift 24x7x365 with authority to report acknowledge and initiate action based on SECRET level information; including tear line information wit h US CERT Authorized personnel with SECRET EXCLUDED N/A This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 34 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION clearances or higher have 24x7x365 immediate access at the TIC management location (NOC/SOC) to the following SECRET communications channels: Secure telephone (STE/STU) and card authorized for SECRET or hig her Secure FAX machine SECRET level email account able to exchange messages with the Homeland Secure Data Network (HSDN) and Access to the US CERT SECRET website Additionally authorized personnel with TOP SECRET/SCI clearances have 24x7 x365 access within 2 hours of notification to an ICD 705 accredited 
Sensitive Compartment Information Facility (SCIF) including the following TOP SECRET/SCI communications channels: Secure telephone (STE/STU) and card authorized for TOP SECRET/SCI Secure FAX machine TOP SECRET/SCI level email account able to exchange messages with the Joint Worldwide Intelligence Communications System (JWICS) and Access to the US CERT TOP SECRET website TMDS01 Storage Capacity Each TIC access point must be able to perform real time header and content capture of all inbound and outbound traffic for administrative legal audit or other operational purposes The TICAP has storage capacity to retain at least 24 hours of data genera ted at full TIC operating capacity The TICAP INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 35 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION is able to selectively filter and store a subset of inbound and outbound traffic TMDS02 Back up Data In the event of a TICAP system failure or compromise the TICAP has the capability to restore operations to a previous clean state Backups of configurations and data are maintained offsite in accordance with the TICAP continuity of operations plan INCL UDED IMPLEMENTED TMDS03 Data Ownership The Multi Service TICAP documents in the agreement with the customer agency that the customer agency retains ownership of its data collected by the TICAP INCLUDED IMPLEMENTED TMDS04 Data Attribution & Retrieval The Multi Service TICAP identifies and can retrieve each customer agency's data for the customer agency without divulging any other agency's data INCLUDED IMPLEMENTED TMDS05 DLP The TICAP has a Data Loss Prevention program and follows a documented procedure for Data Loss Prevention INCLUDED IMPLEMENTED TMLOG01 NTP Server Each TIC access point has a Network Time Protocol (NTP) Stratum 1 system as a stable Primary Reference Tim e Server (PRTS) synchronized within 025 seconds relative to Coordinated Universal Time (UTC) The primary synchronization method is an out of band NIST/USNO national reference time source (Stratum 0) such as the Global Positioning System (GPS) or WWV radi o clock See the TIC Reference Architecture Appendix F for additional information INCLUDED IMPLEMENTED TMLOG02 Time Stamping All TIC access point event recording clocks are synchronized to within 3 seconds relative to Coordinated INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 36 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION Universal Time (UTC) All TICAP log timestamps include the date and time with at least to thesecond granularity Log timestamps that do not use Coord inated Universal Time (UTC) include a clearly marked time zone designation The intent is to facilitate incident analysis between TICAPs and TIC networks and devices TMLOG03 Session Traceability The TICAP provides online access to at least 7 days of session traceability and audit ability by capturing and storing logs / files from installed TIC equipment including but not limited to firewalls routers servers and other designated devices The TICAP maintains the logs needed to est ablish 
an audit trail of administrator user and transaction activity and sufficient to reconstruct security relevant events occurring on performed by and passing through TIC systems and components Note: This capability is intended for immediate online access in order to trace session connections and analyze security relevant events In addition TMLOG04 requires retaining logs for an additional period of time either online or offline INCLUDED IMPLEMENTED TMLOG04 Log Retention The TICAP follows a documented procedure for log retention and disposal including but not limited to administrative logs session connection logs and application transaction logs Record retention and disposal schedules are in accordance with the National Archives and Reco rds Administration existing General Records Schedules in particular Schedule 12 “Communications INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 37 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION Records” and Schedule 20 “Electronic Records;” or NARA approved agency specific schedule Note: This capability is intended for the management and operation of the TICAP itself and does not require the TICAP infer or implement retention policies based on the content of TICAP client communications The originator and recipient of communications through a TICAP remain responsible for their own retention and dis posal policies TMPC01 TIC Facility The TIC access points comply with NIST SP 800 53 physical security controls for high impact systems (FIPS 199) DEFERRED N/A TMPC02 NOC/SOC Facilities The TIC management locations such as a Network Operations Center (NOC) and a Security Operations Center (SOC) comply with NIST SP 80053 physical security controls for medium impact systems (FIPS 199) DEFERRED N/A TMPC03 SCIF Facilities The TICAP maintains access to an accredited Sensitive Compartment Information Facility (SCIF) that complies with ICD 705 “Sensitive Compartmented Information Facilities” EXCLUDED N/A TMPC04 Dedicated TIC Spaces The TIC access points and TIC management functions such as NOC/SOC are located in spaces dedicated for exclusive use or support of the US Government The space is secured by physical access controls to ensure that TIC systems and components are accessi ble only by authorized personnel Examples of dedicated spaces include but are not DEFERRED N/A This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 38 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION limited to secured racks cages rooms and buildings TMPC05 Facility Resiliency The TIC access point is equipped for uninterrupted operations for at lea st 24 hours in the event of a power outage and conforms to specific physical standards including but not limited to: Electrical systems meet or exceed the building operating and maintenance standards as specified by the GSA Public Buildings Service Standards PBS 100 TIC systems and components are connected to uninterruptable power in order to maintain mission and business essential functions including but not limited to TIC systems support systems and powered telecommunications 
facilities inc luding at the DEMARC or MPOE Uninterruptable power systems HVAC and lighting are connected to an onsite automatic standby/emergency generator capable of operating continuously (without refueling) for at least 24 hours DEFERRED N/A TMPC06 Geographic Diversity The Multi Service TICAP has geographic separation between its TIC access points with at least 10 miles separation recommended It is also recommended that single agency TICAPs have geographic separation between their TIC access points INCLUDED IMPLEMENTED TMTC01 Route Diversity The TIC access point follows the National Communications System (NCS) recommendations for Route Diversity including at least two physically separate points of entry at the TIC access point and physically INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 39 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION separate cabling paths to an external telecommunications provider or Internet provider facility TMTC02 Least Functionality TIC systems and components in the TIC access point are configured according to the principal of "least functionality" in that they provide only essential capabilities and specifically prohibit or restrict the use of non essential functions ports protoco ls and/or services INCLUDED IMPLEMENTED TMTC03 IPv6 All TIC systems and components of the TIC access point support both IPv4 and IPv6 protocols in accordance with OMB Memorandum M 0522 and Federal CIO memorandum “Transition to IPv6” The TICAP supports both IPv4 and IPv6 addresses and can transit both native IPv4 and native IPv6 traffic (ie dualstack) between external connections and agency internal networks The TICAP may also support other IPv6 transit methods such as tunneling or translation The TICAP ensures that TIC access point systems implement IPv6 capabilities (native tunneling or translation) without compromising IPv4 capabilities or security IPv6 security capabilities should achieve at least functional parity with IPv4 security capabilities INCLUDED GAP TMTC04 DNS Authoritative Servers The TIC access point supports hosted DNS services including DNSSEC for TICAP client domains The TICAP configures DNS services in accordance with but not limited to the following recommendations from NIST INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 40 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION SP 800 81 Rev 1: 1 The TICAP deploys separate authoritative name servers from caching (also known as resolving/recursive) name servers or an alternative architecture preventing cache poisoning 2 The TICAP implements DNS SEC by meeting NIST SP 800 81 Rev 1 for key generation key storage key publishing zone signing and signature verification TMTC05 Response Authority The TICAP maintains normal delegations and devolution of authority to ensure essential incident response performance to a no notice event This includes but is not limited to terminating limiting or modifying access to external connections including to the Internet based on documented criteria including when advised by US CERT INCLUDED 
IMPLEMENTED TMTC06 TIC Staffing The TIC management location such as a Network Operations Center (NOC) and/or Security Operations Center (SOC) is staffed 24x7 On scene personnel are qualified and authorized to initiate appropriate technical responses including when external access is disrupted INCLUDED IMPLEMENTED TMTC07 Response Access TICAP Operations personnel have 24x7 physical or remote access to TIC management systems which control the TIC access point devices Using this access TICAP operations personnel can terminate tro ubleshoot or repair external connections including to the Internet as required INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 41 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION TOMG01 System Inventory The TICAP develops documents and maintains a current inventory of all TIC information systems and components including relevant ownership information INCLUDED IMPLEMENTED TOMG02 Change & Configuration Management The TICAP follows a formal configura tion management and change management process to maintain a proper baseline INCLUDED IMPLEMENTED TOMG03 Change Communication The TICAP communicates all changes approved through the formal configuration management and change management processes to customers as defined in SLAs or other authoritative documents DEFERRED N/A TOMG04 Contingency Planning The TICAP maintains an Information Systems Contingency Plan (ISCP) that provides procedures for the assessment and recovery of TIC systems and components following a disruption The contingency plan should be structured and implemented in accordance with N IST SP 800 34 Rev 1 INCLUDED IMPLEMENTED TOMG05 TSP The TICAP has telecommunications service priority (TSP) configured for external connections including to the Internet to provide for priority restoration of telecommunication services DEFERRED N/A TOMG06 Maintenance Scheduling The TICAP employs a formal technical review process to schedule conduct document and communicate maintenance and repairs The TICAP maintains maintenance records for TIC systems and components The intent of this capability is to minimize DEFERRED N/A This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 42 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION downtime and op erational impact of scheduled maintenance and outages TOMG07 Network Inventory The TICAP maintains a complete map or other inventory of all customer agency networks connected to the TIC access point The TICAP validates the inventory th rough the use of network mapping devices Static translation tables and appropriate points of contact are provided to US CERT on a quarterly basis to allow in depth incident analysis INCLUDED IMPLEMENTED TOMG08 Service Level Agreement The Multi Servi ce TICAP provides each customer with a detailed Service Level Agreement INCLUDED NOT ASSESSED TOMG09 Tailored Service Level Agreement The Multi Service TICAP provides an exception request process for individual customers INCLUDED NOT ASSESSED TOMG10 Tailored Security Policies The Multi Service TICAP accommodates individual customer agencies’ 
security policies and corresponding security controls as negotiated with the customer INCLUDED NOT ASSESSED TOMG11 Tailored Communications The Multi Service TICAP accommodates tailored communications processes to meet individual customer requirements INCLUDED NOT ASSESSED TOMON01 Situational Awareness The TICAP maintains situational awareness of the TIC and its supported networks as need ed to support customer security requirements Situational awareness can be achieved by correlating data from multiple sources multiple vendors and multiple types of data by using for example Security Incident & Event Management (SIEM) tools DEFERRED N/A This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 43 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION TOMON02 Vulnerability Scanning At a minimum the TICAP annually conducts and documents a security review of the TIC access point and undertakes the necessary actions to mitigate risk to an acceptable level (FISMA FIPS 199 and FIPS 200) Vulnerability scanning of the TIC architecture is a component of the security review INCLUDED IMPLEMENTED TOMON03 Audit Access The TICAP provides access for government authorized auditing of the TIC access point including all TIC systems and components Authorized assessment teams are provided acce ss to previous audit results of TIC systems and components including but not limited to C&A and ICD documentation INCLUDED IMPLEMENTED TOMON04 Log Sharing The TICAP monitors and logs all network services where possible including but not limited to DNS DHCP system and network devices web servers Active Directory Firewalls NTP and other Information Assurance devices/tools These logs can be made avail able to US CERT on request INCLUDED IMPLEMENTED TOMON05 Operational Exercises The TIC Access Provider participates in operational exercises that assess the security posture of the TIC The lessons learned from operational exercises are incorporated into network defenses and operational procedures for both the TICAP and its customers INCLUDED IMPLEMENTED TOREP01 Customer Service Metrics The TICAP collects customer service metrics about the TIC access point and reports them to its customers INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 44 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION DHS and/or OMB as required Examples of customer service metrics include but are not limited to performance within SLA provisions issue identific ation issue resolution customer satisfaction and quality of service TOREP02 Operational Metrics The TICAP collects operational metrics about the TIC access point and reports them to its customers DHS and/or OMB as requested Examples of operational metrics include but are not limited to performance within SLA provisions network activity data (including normal and peak usage) and improvement to customer security posture INCLUDED IMPLEMENTED TOREP03 Customer Notification The Multi Service TICAP reports threats alerts and computer security related incidents and suspicious activities that affect a subscribing agency to the subscribing agency INCLUDED 
IMPLEMENTED TOREP04 Incident Reporting The TICAP report s incidents to US CERT in accordance with federal laws regulations and guidance INCLUDED IMPLEMENTED TORES01 Response Timeframe The TICAP has a documented and operational incident response plan in place that defines actions to be taken during a decla red incident In the event of a declared incident or notification from US CERT TICAP operations personnel immediately activate incident response plan(s) TICAP operations personnel report operational status to US CERT within two hours and continue to repo rt based on US CERT direction INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 45 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION TORES02 Response Guidance TIC operations personnel acknowledge implement and document tactical threat and vulnerability mitigation guidance provided by US CERT INCLUDED IMPLEMENTED TORES03 Denial of Service Response The TICAP manages filters excess capacity bandwidth or other redundancy to limit the effects of information flooding types of denial of service attacks on the organization’s internal networks and TICAP services The TICAP has a greements with external network operators to reduce the susceptibility and respond to information flooding types of denial of service attacks The Multi Service TICAP mitigates the impact on non targeted TICAP clients from a DOS attack on a particular TICA P client This may included diverting information flooding types of denial of service attacks targeting a particular TICAP client in order to maintain service to other TICAP clients INCLUDED IMPLEMENTED TSCF01 Application Layer Filtering The TIC access point uses a combination of application firewalls (stateful application protocol analysis) application proxy gateways and other available technical means to implement inbound and outbound application layer filtering The TICAP will develop and implement a risk based policy on filtering or proxying new protocols INCLUDED IMPLEMENTED TSCF02 Web Session Filtering The TIC access point filters outbound web sessions from TICAP clients based on but not limited to: web content active content destination URL pattern and IP address Web INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 46 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION filters have the capability of blocking malware fake software updates fake antiv irus offers phishing offers and botnets/keyloggers calling home TSCF03 Web Firewall The TIC access point filters inbound web sessions to web servers at the HTTP/HTTPS/SOAP/XML RPC/Web Service application layers from but not limited to cross site scripting (XSS) SQL injection flaws session tampering buffer overflows and malicious web crawlers INCLUDED IMPLEMENTED TSCF04 Mail Filtering The TIC access point performs malware scanning filters content and blocks spam sending servers as specified by NIST 800 45 "Guidelines for Electronic Mail Security" for inbound and outbound mail These TIC access point protections are in addition to ma lware scanning and content filtering performed by the 
agency's mail servers and end user’s host systems The TICAP takes agency specified actions for potentially malicious or undesirable mail including at least the following actions: block messages tag undesirable content sanitize malicious content and deliver normally Multi Service TICAPs tailor their malware and content filtering services for individual agency mail domains INCLUDED NOT ASSESSED TSCF05 Agency Specific Mail Filters The TIC access point uses an agency specified custom processing list with at least the combinations of senders recipients network IP addresses or host names The agency specified custom processing list has custom TICAP malware and content filtering INCLUDED NOT ASSESSED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 47 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION actio ns Mail allowed by an agency specified custom processing list is still scanned by the TICAP for malware or undesirable content and tagged if found Multi Service TICAPs tailor their malware and content filtering services for individual agency mail domains TSCF06 Mail Forgery Detection For email received from other agency mail domains known to have domain level sender authentication (for example Domain Keys Identified Mail or Sender Policy Framework) the TIC access point includes the results of the domain level sender forgery analysis when determining potentially suspicious or undesirable email This capability is intended to support domain level sender authentication but does not necessarily confirm a particular sender or message is trustworthy Scoring criteria for this capability will be aligned with the National Strategy for Trusted Identities in Cyberspace (NSTIC) The TICAP takes agency specific actions for email determined to be suspicious or undesirable INCLUDED NOT ASSESSED TSCF07 Digitally Signing Mail For email sent to oth er agency mail domains the TICAP ensures the messages have been digitally signed at the Domain Level (for example Domain Keys Identified Mail) in order to allow receiving agencies to verify the source and integrity of email This capability is intended to support domain level sender authentication but does not necessarily confirm a particular sender or message is trustworthy Signing procedures will be in alignment with the National Strategy INCLUDED NOT ASSESSED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 48 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION for Trusted Identities in Cyberspace and may occur at the burea u or agency sub component level instead of the TIC access point TSCF08 Mail Quarantine The TICAP quarantines mail categorized as potentially suspicious while the agency's mail domain reviews and decides what action to take The agency's mail domain can take at least the following actions: block the message deliver the message sanitize mali cious content and tag undesirable content Note: this is intended to be an additional option which agency mail operators can specify with capability TSCF04 It does not require agencies to quarantine potentially suspicious mail INCLUDED NOT ASSESSED TSCF09 Crypto 
graphically authenticated protocols The TICAP validates routing protocol information using authenticated protocols The TICAP configures Border Gateway Protocol (BGP) sessions in accordance with but not limited to the following recommendat ion from NIST SP 800 54: BGP sessions are protected with the MD5 signature option NIST and DHS are collaborating on additional BGP robustness mechanisms and plan to publish future deployment recommendations and guidance INCLUDED IMPLEMENTED TSCF10 Reduce the use of clear text management protocols The TIC access point limits and documents the use of unauthenticated clear text protocols for TIC management and will phase out such protocols or enable cryptographic authentication where technically and operationally feasible INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 49 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION TSCF11 Encrypted Traffic Inspection The TICAP has a documented procedure or plan that explains how it inspects and analyzes encrypted traffic The document includes a description of defensive measures taken to protect TICAP clients from malicious content or unauthorized data exfiltration whe n traffic is encrypted The TIC access point analyzes all encrypted traffic for suspicious patterns that might indicate malicious activity and logs at least the source destination and size of the encrypted connections for further analysis DEFERRED N/A TSCF12 User Authentication The TICAP has a documented procedure or plan that explains how it inspects and analyzes connections by particular TICAP client end users or host systems which have custom requirements for malware and content filtering Connecti on content is still scanned by the TICAP for malware or undesirable content and logged by the TICAP when found DEFERRED N/A TSCF13 DNS Filtering The TIC access point filters DNS queries and performs validation of DNS Security Extensions (DNSSEC) signed domains for TICAP clients The TICAP configures DNS resolving/recursive (also known as caching) name servers in accordance with but not limited t o the following recommendations from NIST SP 800 81 Revision 1 (Draft): 1 The TICAP deploys separate recursive name servers from authoritative name servers to prevent cache poisoning 2 The TICAP filters DNS queries for known malicious domains INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 50 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION 3 Th e TICAP logs at least the query answer and client identifier TSINS01 NCPS The TIC access point participates in the National Cyber Protection System (NCPS operationally known as Einstein) INCLUDED NOT ASSESSED TSINS02 IDS/NIDS The TIC access point passes all inbound/outbound network traffic through Network Intrusion Detection Systems (NIDS) configured with custom signatures including signatures for the application layer This includes but is not limited to critical signatures published by US CERT DEFERRED N/A TSPF01 Secure all TIC traffic All external connections are routed through a TIC access point scanned and filtered by TIC systems 
and components according to the TICAP's documented policy which includes critical sec urity policies when published by US CERT The definition of "external connection" is in accordance with the TIC Reference Architecture Appendix A (Definition of External Connection) INCLUDED IMPLEMENTED TSPF02 Default Deny By default the TIC access point blocks network protocols ports and services The TIC access point only allows necessary network protocols ports or services with a documented mission requirement and approval DEFERRED N/A TSPF03 Stateless Filtering The TIC access point implements stateless blocking of all inbound and outbound connections without being limited by connection state tables of TIC systems and components INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 51 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION Attributes inspected by stateless blocks include but are not limited to: Directio n (inbound outbound interface) Source and destination IPv4/IPv6 addresses and network masks Network protocols (TCP UDP ICMP etc) Source and destination port numbers (TCP UDP) Message codes (ICMP) TSPF04 Stateful Filtering By default the TIC access point blocks unsolicited inbound connections For authorized outbound connections the TIC access point implements stateful inspection that tracks the state of all outbound connections and blocks packets that deviate from standard protocol state transitions Protocols supported by stateful inspection devices include but are not limited to: ICMP (errors matched to original protocol header) TCP (using protocol state transitions) UDP (using timeouts) Other Inter net protocols (using timeouts) Stateless network filtering attributes INCLUDED IMPLEMENTED TSPF05 Filter by Source Address The TIC access point only permits outbound connections from previously defined TICAP clients using Egress Source Address Verification It is recommended that inbound filtering rules block traffic from packet source addresses assigned to internal networks a nd special use addresses (IPv4 RFC5735 IPv6 RFC5156) DEFERRED N/A TSPF06 Asymmetric Routing The TIC access point stateful inspection devices correctly process traffic returning through asymmetric INCLUDED IMPLEMENTED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 52 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION routes to a different TIC stateful inspection device; or documents how return traffic is always routed to the same TIC access point stateful inspection device TSPF07 FedVRE (H323) The TIC access point supports Federal Video Relay Service (FedVRS) for the Deaf (wwwgsagov/fedrelay) network connections including but not limited to devices implementing stateful packet filters Please refer to http://wwwfedvrsus/supports/technical for FedVRS technical requirements Agencies may document alternative ways to achieve reasonable accommod ation for users of FedVRS EXCLUDED N/A TSRA01 Agency User Remote Access The TIC access point supports telework/remote access for TICAP client authorized staff and users using adhoc Virtual Private Networks (VPNs) through 
external connections including the Internet This capability is not intended to include permanent VPN con nections for remote branch offices or similar locations In addition to supporting the requirements of OMB M0616 “Protection of Sensitive Agency Information" the following baseline capabilities are supported for telework/remote access at the TIC Access Point: 1 The VPN connection terminates behind NCPS and full suite of TIC capabilities which means all outbound traffic to/from the VPN users to external connections including the Internet can be inspected by NCPS 2 The VPN connection terminates in front of TICAP managed security controls including but not limited to a INCLUDED NOT ASSESSED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 53 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION firewall and IDPS to allow traffic to/from remote access users to internal networks to be inspected 3 NIST FIPS 140 2 validated cryptography is used to implement encryption on all VPN connections (see NIST SP 800 46 Rev1) 4 Split tunneling is not allowed (see NIST SP 800 46 Rev1) Any VPN connection that allows split tunneling is considered an external connection and terminates in front of NCPS 5 Multi factor authentication is used (see NIST SP 800 46 Rev1 OMB M 1111) 6 VPN concentrators and Virtual Desktop/Application Gateways use hardened appliances maintained as TICAP network security boundary devices 7 If telework/remote clients use Government Furnished Equipment (GFE) the VPN connection may use access at the IP network level and access through specific Virtual Desktops/Application Gateways 8 If telework/remote clients use non GFE the VPN connection uses only access through specific Virtual Desktops/Applicatio n Gateways TICAP clients may support additional telework/remote access connections for authorized staff and users using equivalent agency managed security controls at non TIC Access Point locations The agency level NOC/SOC is responsible for maintainin g the inventory of additional telework/remote access connections and coordinating agency managed security controls This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 54 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION Because of the difficulty verifying the configuration sanitizing temporary and permanent data storage and analyzing possible compromises of nonGovernment Furnished Equipment it is the agency’s responsibility to document in accordance with OMB M 0716 if sensitive data may be accessed remotely using non GFE and informing the TIC Access Provider of the appropriate security configuration policies to implement TSRA02 External Dedicated Access The TIC access point supports dedicated external connections to external partners (eg non TIC federal agencies externally connected networks at business partners state/local governments) with a documented mission requirement and approval This include s but not limited to permanent VPN over external connections including the Internet and dedicated private line connections to other external networks The following baseline capabilities are supported for 
external dedicated VPN and private line connect ions at the TIC Access Point: 1 The connection terminates in front of NCPS to allow traffic to/from the external connections to be inspected 2 The connection terminates in front of the full suite of TIC capabilities to allow traffic to/from external c onnections to be inspected 3 VPN connections use NIST FIPS 1402 validated cryptography over shared public networks including the INCLUDED NOT ASSESSED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 55 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION Internet 4 Connections terminated in front of NCPS may use split tunneling TSRA03 Extranet Dedicated Access The TIC access point supports dedicated extranet connections to internal partners (eg TIC federal agencies closed networks at business partners state/local governments) with a documented mission requirement and approval This includes but not limited to permanent VPN over external connections including the Internet and dedicated private line connections to other internal networks The following baseline capabilities are supported for extranet dedicated VPN and private line connections at the TIC Access Point: 1 The connection terminates behind NCPS and full suite of TIC capabilities which means all outbound traffic to/from the extranet connections to external connections including the Internet is inspected by NCPS 2 The connection terminates in front of TICAP managed security controls including but not limited to a firewall and IDPS to allow traffic to/from extranet connections to internal networks including other extranet connections to be inspected 3 VPN connections use NIST FIPS 1402 validated cryptography over shared public networks including the Internet 4 Split tunneling is not allowed Any VPN connection that allows split tunneling is considered an external connection and must terminate in front INCLUDED NOT ASSESSED This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 56 of 57 FedRAMP TIC Capabilities Version 20 PILOT ASSESSMENT ASSESSMENT STATUS ID SUMMARY CAPABILITY DEFINITION of NCPS TICAP clients may support dedicated extranet connections with internal partners using equivalent agency managed security controls at non TIC Access Point locations The agency level NOC/SOC is responsible for maintaining the inventory of extranet connections with internal p artners and coordinating agency managed security controls This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services – Guidance for TIC Readiness on AWS February 2016 Page 57 of 57 Notes 1https://wwwwhitehousegov/sites/default/files/omb/assets/omb/memoranda/ fy2008/m0805pdf 2 https://wwwfedrampgov/files/2015/04/Description FTOverlaydocx 3 https://wwwfedrampgov/draftfedrampticoverlay/
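To make the stateless and stateful filtering capabilities above (TSPF03 and TSPF04, mapped in Appendix B to network access control lists and security groups) more concrete, here is a minimal boto3 sketch. It is illustrative only and not part of the original guidance; the network ACL and security group IDs are hypothetical placeholders, and the specific rules shown are examples, not recommended baselines.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical resource IDs; substitute the ones created in your VPC.
NACL_ID = "acl-0123456789abcdef0"
SG_ID = "sg-0123456789abcdef0"

# Stateless filtering (TSPF03): a network ACL entry is evaluated on every
# packet regardless of connection state; this one blocks inbound Telnet.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",              # TCP
    RuleAction="deny",
    Egress=False,              # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 23, "To": 23},
)

# Stateful filtering (TSPF04): a security group rule; return traffic for
# allowed connections is tracked automatically and unsolicited inbound
# connections are dropped by default.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "HTTPS from agency network"}],
    }],
)
```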
|
General
|
consultant
|
Best Practices
|
Homelessness_and_Technology
|
Homelessness and Technology How Technology Can Help Communities Prevent and Combat Homelessness March 2019 This document has been archived For the latest technical content see the AWS Whitepapers & Guides page: https://awsamazoncom/whitepapers ArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents AWS’s current product offerings and practices which are subject to change without notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or lice nsors AWS’s products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied AWS’s responsibilities and liabilities to its customers are controlled by AWS agreements and this document i s not part of nor does it modify any agreement between AWS and its customers © 201 9 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 6 Best Practices for Combatting Homelessness 6 Connect Data Sources with Data Lakes 7 Data Lake Solution 9 AWS Lake Formation 9 Enable Data Analytics Using Big Data and Machine Learning Techniques 10 Data Processing and Storage 10 Make Predictions with Machine Learning and Analytics 11 Manage Identity and Vital Records 12 Leverage AWS for HIPAA Compliance 13 HMIS Data Privacy and HIPAA 13 Conclusion 14 Contributors 14 Further Reading 15 Document Revisions 15 ArchivedAbstract The disparate nature of current homeless information management systems limits a community’s ability to identify trends or emerging needs measure internal performance goals and make data driven decisions about the effective deployment of limited resources With the shift in recent years to whole person care there is increasing demand to connect these disparate systems to affect better outcomes In this document we have outlined four pillars of how AWS technology and services can act as a best practice to organizations looking to leverage the cloud for Homeless Management Information Systems (HMIS) These pillars are as follows: • Connect disparate data sources using a data lake design patte rn • Make predictions using data analytics workloads big data and machine learning • Manage identity and vital records for people experiencing or at risk for experiencing homelessness • Leverage the AWS Business Associates Addendum (BAA) and associated services for Health Insurance Portability and Accountability Act (HIPAA) Compliance and NIST based assurance frameworks ArchivedAmazon Web Services Homelessness and Technology Page 6 Introduction Preventing and combatting homelessness depends on a coordinated Continuum of Care (CoC) on the ground locally sharing information across disparate systems and collaborating with the public nonprofit philanthropic and private sector partners The systems that collect this information today (ie homelessness services electronic health records education and criminal justice information systems ) were designed independently to address particular applications and are managed by different entities with separate IT systems and governance The disparate nature of these systems limits a community’s ability to identify trends or emerging needs measure internal performance goals and make data driven decisions about the effective deployment of limited resources With the shift in recent years to whole person care there is increasing deman d to connect these disparate systems to 
affect better outcomes Redesigning these systems for interoperability is critical but it will take time In the meantime you can use the best practices in this document to connect disparate information today to d evelop a comprehensive view for each client to drive better outcomes and enable analytics that support data drive n decision making Best Practices for Combatting Homelessness The following best practices focus on addressing some of the challenges of comba tting homelessness but they are highly applicable to other socioeconomic and healthcare challenges that cross multiple systems • Connect disparate data sources using a data lake design pattern • Make predictions using d ata analytics workloads big data and machine learning • Manage identity and vital records for people experiencing or at risk or experiencing homelessness • Leverage the AWS Business Associates Addendum (BAA) and associated services for Health Insurance Portability and Accountability Act (HIPA A) Compliance and NIST based assurance frameworks ArchivedAmazon Web Services Homelessness and Technology Page 7 Connect Data Sources with Data Lakes Connecting disparate data sources to create a comprehensive view of the homeless population and their interactions across numerous service providers and government entities can come with many technical challenges Schema and structural differences in separate locations can be difficult to combine and query in a single place Also some data may be highly structured whereas other dataset s may be less structured and involv e a smaller signal to noise ratio For example data stored in a tabular CSV format from a traditional database combined with a nested JSON schema that may come from a fleet of devices (eg personal health records v ersus realtime medical equipment data) can be difficult to join and query together using a relational database alone A data lake is a centralized repository that allows you to store all of your structured and unstructured data at any scale You can store your data as is without having to firs t structure the data and run different types of analytics Dashboards visualizations big data processing real time analytics and machine learning can all help contribute to better decision making and improve client outcomes A data warehouse is a central repository of structured information that can be analyzed to make better informed decisions Data flows into a data warehouse from transactional systems relational databases and other sources typically on a regular cadence Business analysts da ta scientists and decision makers access the data through business intelligence (BI) tools SQL clients and other analytics applications Data warehouses and data lakes complement each other well by allowing separation of concerns and leveraging scalable storage and scalable analytic capability respectively ArchivedAmazon Web Services Homelessness and Technology Page 8 Figure 1: Connecting Disparate Data Sources A Homeless Management Information System (HMIS) is an information technology system used to collect client level data and data o n the provision of housing and services to homeless individuals and families and persons at risk of homelessness You can create data lakes to connect disparate HMIS data across CoC and regional boundaries With a consolidated dataset you gain a comprehen sive and unduplicated understanding of who is served with which programs and to what outcomes across a region or state This depth of understanding reveals patterns that can help care providers rapidly 
create and tune interventions to the unique needs of homeless groups (e.g., veterans, youth, elders, the chronically homeless, and so on) and provides the public, elected officials, and funders with transparency about investments versus outcomes. By centralizing data and allowing federated access to a searchable data catalog, you can address pain points around connecting disparate data systems. The data lake can accept data from many different sources. These may include, but are not limited to:

• Existing relational database and data warehouse engines (either on premises or in the cloud)
• Clickstream data from mobile or web applications
• Internet of Things (IoT) device data
• Flat file imports
• API data
• Media sources such as video and audio streams

This data should be stored durably and encrypted with industry standard open source tools both at rest and in transit, since the data may contain personally identifiable information (PII) and be subject to compliance controls. Federated access through an identity provider (e.g., Active Directory, Google, Facebook, etc.) should also be used as a means of authorization to enable different teams to access the correct level of data. Metadata concerning the data should be held within a searchable data catalog to enable fast access to structural and data classification information. This should all be accomplished in a cost effective and scalable manner, with the data held in its native format to facilitate export, further transformation, and analysis.

Data Lake Solution

The Data Lake solution automatically crawls data sources, identifies data formats, and then suggests schemas and transformations so you don't have to spend time hand coding data flows. For example, if you upload a series of JSON files to Amazon Simple Storage Service (Amazon S3), AWS Glue, a fully managed extract, transform, and load (ETL) tool, can scan these files and work out the schema and data types present within them (a minimal code sketch of this crawling step appears at the end of this section). This metadata is then stored in a catalog to be used in subsequent transforms and queries. Additionally, user defined tags are stored in Amazon DynamoDB, a key-value document database, to add business relevant context to each dataset. The solution enables you to create simple governance policies that require specific tags when datasets are registered with the data lake. You can browse available datasets or search on dataset attributes and tags to quickly find and access data relevant to your business needs.

AWS Lake Formation

The AWS Lake Formation service builds on the existing data lake solution by allowing you to set up a secure data lake within days. Once you define where your lake is located, Lake Formation collects and catalogs this data, moves the data into Amazon S3 for secure access, and finally cleans and classifies the data using machine learning algorithms. You can then access a centralized data catalog which describes available datasets and their appropriate usage. This approach has a number of benefits, from building out a data lake quickly to simplifying security management and allowing easy and secure self-service access.

Enable Data Analytics Using Big Data and Machine Learning Techniques

Communities want a better understanding of the circumstances that contribute to homelessness, prevent homelessness, and accelerate someone's path out of homelessness. These predictions are critical inputs for the development of interventions across a continuum of care and for disaster response planning. With a data lake, communities can build, train, and tune machine learning models to predict outcomes.

Data Processing and Storage

In today's connected world, a number of data sources are available to be consumed. Some examples include public APIs, sensor/device data, website analytics, and imagery, as well as traditional forms of data such as relational databases and data warehouses. Amazon Relational Database Service (Amazon RDS) allows developers to build and migrate existing databases into the cloud. AWS supports a large range of commercial and open source database engines (e.g., MySQL, PostgreSQL, Amazon Aurora, Oracle, SQL Server), allowing developers the freedom to keep their current database or migrate to an open source platform for cost savings and new features. Amazon RDS maintains high availability through the use of Multi-Availability Zone deployments to ensure that production databases stay operational in the event of a hardware failure. For customers with data warehousing needs, Amazon Redshift enables developers to query large sets of structured data within Redshift and within Amazon S3. When combined with a business intelligence tool such as Amazon QuickSight, Tableau, or Microsoft Power BI, you can create powerful data visualizations and gain insights into data that were previously out of reach on legacy IT systems. Amazon Kinesis makes it easy to collect, process, and analyze streaming data. Kinesis enables the construction of real-time data dashboards, video analytics, and stream transformations to filter and query data as it comes into the organization from an array of sources.
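As a concrete, hedged sketch of the crawling step described in the Data Lake Solution section above (illustrative only; the crawler name, catalog database, S3 path, and IAM role are hypothetical placeholders you would replace with your own), the following boto3 calls register an AWS Glue crawler over a prefix of raw JSON HMIS extracts and run it once. The tables the crawler discovers land in the Glue Data Catalog, where downstream tools such as Amazon Athena or Amazon QuickSight can query them.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical names; the role needs Glue permissions and read access to the bucket.
CRAWLER_NAME = "hmis-raw-json-crawler"
CATALOG_DATABASE = "hmis_data_lake"
RAW_DATA_PATH = "s3://example-hmis-data-lake/raw/json/"
GLUE_ROLE_ARN = "arn:aws:iam::123456789012:role/GlueDataLakeCrawlerRole"

# Create a crawler that infers schemas for the JSON files under the prefix
# and registers them as tables in the Glue Data Catalog.
glue.create_crawler(
    Name=CRAWLER_NAME,
    Role=GLUE_ROLE_ARN,
    DatabaseName=CATALOG_DATABASE,
    Targets={"S3Targets": [{"Path": RAW_DATA_PATH}]},
    Description="Catalogs raw HMIS JSON extracts for the data lake",
)

# Run it once; in practice you would also attach a schedule so new extracts
# are cataloged automatically.
glue.start_crawler(Name=CRAWLER_NAME)
```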
disaster response planning With a data lake communities can build train and tune machine learning models to predict outcomes Data Processing and Storage In today's co nnected world a number of data sources are available to be consumed Some examples include public APIs sensor/device data website analytics imagery as well as traditional forms of data such as relational databases and data warehouses Amazon Relational Database Service ( Amazon RDS) allows developers to build and migrate existing databases into the cloud AWS supports a large range of commercial and open source database engines (eg MySQL PostGres Amazon Au rora Oracle SQL Server) allowing developers freedom to keep their current database or migrate to an open source platform for cost savings and new features Amazon RDS maintains highavailab ility through the use of Multi Availability Zone deployments to ensure that production databases stay operational in the event of a hardware failure For customers with data warehousing needs Amazon Redshift enables developers to query large sets of structured data within Redshift a nd with in Amazon S3 When combined with a business intelligence tool such as Amazon Quick Sight Tableau or Microsoft Power BI you can create powerful data visualizations and gain insights into data that were previously out of reach on legacy IT systems Amazon Kinesis makes it easy to collect process and analyze streaming data Kinesis enables th e construction of real time data dashboards video analytics and stream transformations to filter and query data as it comes into the organization from an array of sources ArchivedAmazon Web Services Homelessness and Technology Page 11 Make Predictions with Machine Learning and Analytics Machine learning can help ans wer complicated questions by making predictions about future events from past data Some examples of machine learning models include image classification regression analysis personal recommendation systems and time series forecasting For a CoC these ca pabilities may seem out of reach but due to the power and scale of the cloud these capabilities are now within anyone’s reach Amazon Comprehend Medical Amazon Forecast and Amazon Personalize put powerful machine learning model creation capabilities int o the hands of developers requiring no machine learning background or servers to manage Amazon Comprehend Medical Amazon Comprehend Medical is a natural language processing service that makes it easy to use natural language processing and machine learning to extract relevant medical information from unstructured text For example you can use Comprehend Medical to identify and search for key terms in a large corpus of health records allowing case officers and medical professionals to look for recurring patterns or key phrases in patient records when providing treatment to homeless individuals Amazon Forecast Amazon Forecast uses machine learning to combine time series data with additional variables to build forecasts You can use Amazon Forecast to predict changes in a homeless population over time Forecast can also consider how other correlating external factors affect the population such as natu ral disasters or severe weather or the introduction of new programs and initiatives Amazon Personalize Amazon Personalize is a machine learning service that makes it easy for developers to create individ ualized recommendations for customers using their applications For example many times individuals at risk of or experiencing homelessness struggle to find assistance programs 
Navigating these many programs and facilities can be daunting and time consuming. By using HMIS data from other individuals in similar situations, you can build a recommendation engine that suggests relevant programs to individuals and families. These recommendations enable them to access programs that they may not be aware of or have the time to research.

Manage Identity and Vital Records

Proof of identity and eligibility are critical to matching the right people at the right time to the right interventions. Copies of vital records such as social security cards, birth certificates, proof of disability, and copies of utility bills, lease, or property title documents are often required by various programs that are designed to help those experiencing or at risk of experiencing homelessness. However, without a secure and reliable place to store and access these documents, the most vulnerable people are often left the worst off. Their lack of documentation can become a barrier to service and extend the length of crisis. In addition to the need for a secure storage location, customers need a mechanism to control and share documents with authorized parties to evaluate eligibility for various programs and/or to verify authenticity. This mechanism must track who accesses these documents, at what time, and in what manner, in a cryptographically verifiable, immutable way. Ledger or blockchain based applications can meet this requirement by storing the interaction event metadata for a document or set of documents in a verifiable ledger. This ledger creates a verifiable audit trail that can store all of the events that occur during a document's lifetime.

Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) stores objects in the cloud reliably and at scale. Using Amazon S3, you can build the substrate for a document storage and retrieval application. Amazon S3 has many pertinent security features, such as multi-factor control of deleting and modifying objects and object versioning. Amazon S3 also uses encryption at rest and in transit using industry standard encryption algorithms and a simple HTTPS based API. Amazon S3 supports signed URLs so that access to objects can be granted for a limited time. Finally, Amazon S3 offers cost savings with intelligent tiering so that documents can be automatically moved into different storage tiers depending on their usage patterns.

Amazon Quantum Ledger Database (Amazon QLDB)

Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks each and
and processing of protected health information (PHI) such as insurance and billing information diagnosis dat a lab results and so on HIPAA applies to covered entities (eg health care providers health plans and health care clearinghouses) as well as business associates (eg entities that provide services to a covered entity involving the processing stora ge and transmission of PHI) AWS offers a standardized Business Associates Addendum (BAA) for business associates Customers who execute a BAA may process store and transmit PHI using HIPAA eligible services defined in the AWS BAA such as Amazon S3 Amazon QuickSight AWS Glue and Amazon DynamoDB For a complete list of services see HIPAA Eligible Services Referenc e HMIS Data Privacy and HIPAA Each CoC is responsible for selecting an HMIS software solution that complies with the Department of Housing and Urban Development's (HUD) standards HMIS has a number of privacy and security standards that were developed to protect the confidentiality of personal information while at the same time allowing limited data disclosure in a responsible manner These standards were developed after careful review of the HIPAA standards regarding PHI The Reference Architecture for HIPAA on AWS deploys a model environment that can help organizations with workloads that fall within the scope of HIPAA The reference ArchivedAmazon Web Services Homelessness and Technology Page 14 architecture addresses certain technic al requirements in the Privacy Security and Breach Notification Rules under the HIPAA Administrative Simplification Regulations (45 CFR Parts 160 and 164) AWS has also produced a quick start reference deployment for Standardized Architecture for NIST based Assurance Frameworks on the AWS Cloud This quick start focuses on the NIST based assurance frameworks: • National Institute of Standards and Technology (NIST) SP 800 53 (Revision 4) • NIST SP 800 122 • NIST SP 800 171 • The OMB Trusted Internet Connection (TIC) Initiative – FedRAMP Overlay (pilot) • The DoD Cloud Computing Security Requirements Guide (SRG) This quick start includes AWS CloudFormation templates which can be integrated with AWS Service Catalog to automate building a standardized reference architecture that aligns with the requirements within the controls listed above It also includes a security controls matrix which maps the security controls and requirements to architecture decisions features and configuration of the baseline to enhance your organization’s ability to understand and assess th e system security configuration Conclusion AWS technology can help communities drive better outcomes for citizens using the technology and services included this paper However w e understand that homelessness is fundamentally a human problem —all of these initiatives must have strong backing by forward thinking officials and program managers to make an impact in the lives of those at risk or experiencing homelessness Contributors The following individuals and organizations contributed to this document: • Alistair McLean Sr Solutions Architect AWS • Jessie Metcalf Program Manager AWS ArchivedAmazon Web Services Homelessness and Technology Page 15 • Casey Burns Health and Human Services Leader AWS Further Reading For additional information see the following: • HMIS Data and Technical Standards • Reference Architecture for HIPAA on AWS • Reference Architecture for HIPAA on the AWS Cloud: Quick Start Reference Deployment • Standardized Architecture for NIST based A ssurance Frameworks on the AWS Cloud: 
Quick Start Reference Deployment • AWS Machine Learning Blog: Create a Question and Answer Bot with Amazon Lex and Amazon Alexa • AWS Government, Education, and Nonprofits Blog Document Revisions: March 2019 (Initial document release)
|
General
|
consultant
|
Best Practices
|
Hosting_Static_Websites_on_AWS_Prescriptive_Guidance
|
This paper has been archived For the latest technical content refer t o: https://docsawsamazoncom/whitepapers/latest/build staticwebsitesaws/buildstaticwebsitesawshtml Hosting Static Websites on AWS Pr escriptive Guidance First published May 2017 Updated May 21 2021 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 21 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Contents Abstract vi Introduction 1 Static Website 1 Dynamic Website 2 Core Architecture 2 Moving to an AWS Architecture 4 Use Amazon S3 Website Hosting to Host Without a Single Web Server 6 Scalability and Availability 7 Encrypt Data in Transit 7 Configuration Basics 8 Evolving the Architecture with Amazon CloudFront 13 Factors Contributing to Page Load Latency 13 Speeding Up Your Amazon S3 Based Website Using Amazon CloudFront 14 Using HTTPS with Amazon CloudFront 16 Amazon CloudFront Reports 17 Estimating and Tracking AWS Spend 17 Estimating AWS Spend 17 Tracking AWS Spend 18 Integration with Your Continuous Deployment Process 18 Access Logs 19 Analyzing Logs 19 Archiving and Purg ing Logs 20 Securing Administration Access to Your Website Resources 21 Managing Administrator Privileges 22 Auditing API Calls Made in Your AWS Account 23 Controlling How Long Amazon S3 Content is Cached by Amazon CloudFront 24 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Set Maximum TTL Value 24 Implement Content Versioning 25 Specify Cache Control Headers 26 Use CloudFront Invalidation Requests 1 Conclusion 1 Contributors 2 Further Reading 2 Document Revisions 2 Notes 2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract This whitepaper covers comprehensive architectural guidance for developing deploying and managing static websites on Amazon Web Services (AWS) while keeping operational simplicity and business requirements in mind We also recommend an approach that provides 1) insignificant cost of operation 2) little or no management required and 3) a highly scalable resilient and reliable website This whitepaper first reviews how static websites are hosted in traditional hosting environments Th en we explore a simpler and more cost efficient approach using Amazon Simple Storage Service (Amazon S3) Finally we show you how you can enhance the AWS architecture by encrypting data in transit and to layer o n functionality and improve quality of service by using Amazon CloudFront This paper has been archived For the 
latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 1 Introduction As enterprises become more digital operations their websites span a wide spectrum from mission critical e commerce sites to departmental apps and from business tobusiness (B2B) portals to marketing sites Factors such as business value mission criticality service level agreements (SLAs) quality of service and information security drive the choice of architecture and techn ology stack The simplest form of website architecture is the static website where users are served static content (HTML images video JavaScript style sheets and so on) Some examples include brand microsites marketing websites and intranet informa tion pages Static websites are straightforward in one sense but they can still have demanding requirements in terms of scalability availability and service level guarantees For example a marketing site for a consumer brand may need to be prepared for an unpredictable onslaught of visitors when a new product is launched Static Website A static website delivers content in the same format in which it is stored No server side code execution is required For example if a static website consists of HTML documents displaying images it delivers the HTML and images as is to the browser without altering the contents of the files Static websites can be delivered to web browsers on desktops tablets or mobile devices They usually consist of a mix of HTML documents images videos CSS style sheets and JavaScript files Static doesn’t have to mean boring —static sites can provide client side interactivity as well Using HTML5 and client side JavaScript technologies such as jQuery AngularJS React and Backbone you can deliver rich user experiences that are engaging and interactive Some examples of static sites include: • Marketing websites • Product landing pages • Microsites that display the same content to all users • Team homepages This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 2 • A website that lists available assets (eg image files video files and press releases) allows the user to download the files as is • Proofs ofconcept used in the early stages of web development to test user experience flows and gather feedback Static websites load quickly since content is delivered as is and can be cached by a content delivery network (CDN) The web server doesn’t need to perform any application logic or database queries They’re also relatively inexpensive to develop and host However maintaining large static websites can be cumbersome without the aid of automated tools and static websites can’t deliver personalized information Static websites are most suitable when the content is infrequently updated After the content evolves in complexity or needs to be frequently updated personalized or dynamically generated it's best to consider a dynamic website architecture Dynamic Website Dynamic websites can display dynamic or personalized content They usually interact with data sources and web services and require code development expertise to create and maintain For example a sports news site can displa y information based on the visitor's preferences and use server side code to display updated sport scores Other examples of dynamic sites are e commerce shopping sites news portals social 
networking sites finance sites and most other websites that di splay ever changing information Core Architecture In a traditional (non AWS) architecture web servers serve up static content Typically content is managed using a content management system (CMS) and multiple static sites are hosted on the same infrastructure The content is stored on local disks or on a file share on network accessible storage The following example shows a sample file system structure This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 3 ├─ css/ │ ├─ maincss │ └─ navigationcss ├─ images/ │ ├─ bannerjpg │ └─ logojpg ├─ indexhtml ├─ scripts/ │ ├─ script1js │ └─ script2js ├─ section1html └─ section2html A network firewall protects against unauthorized access It’s common to deploy multiple web servers behind a load balancer for high availability (HA) and scalability Since pages are static the web servers don’t need to maintain any state or session information and the load balancer doesn’t need to implement session affinity (“sticky sessions”) The following diagram shows a traditional (nonAWS) hosting environment: Figure 1: Basic architecture of a traditional hosting environment This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 4 Moving to an AWS Architecture To translate a traditional hosting environment to an AWS architecture you could use a “lift andshift” approach where you substitute AWS services instead of using the traditional environment In this approach you can substitute the following AWS services: • Amazon Elastic Compute Cloud (Amazon EC2) to run Linux or Windows based servers • Elastic Load Balancing (ELB) to load balance and distribute the web traffic • Amazon Elastic Block Store (Amazon EBS) or Amazon Elastic File System (Amazon EFS) to store static content • Amazon Virtual Private Cloud (Amazon VPC) to deploy Amazon EC2 instances Amazon VPC is your isolated and private virtual network in the AWS Cloud and gives you full control over the network topology firewall configuration and routing rules • Web servers can be spread across multiple Availability Zones for high availability even if an entire data center were to be down • AWS Auto Scaling automatically adds servers during high traffic periods and scales back when traffic decreases The following diagram shows the basic architecture of a “lift and shift” approach This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 5 Figure 2: AWS architecture for a “Lift and Shift” Using this AWS architecture you gain the security scalability cost and agility ben efits of running in AWS This architecture benefits from AWS world class infrastructure and security operations By using Auto Scaling the website is ready for traffic spikes so you are prepared for product launches and viral websites With AWS you only pay for what you use and there’s no need to over provision for peak capacity In addition you gain increased agility because AWS services are available on demand (Compare this to the traditional process in which provisioning servers storage or ne tworking can take weeks) You don’t have to manage infrastructure so this frees up 
time and resources to create business differentiating value AWS challenges traditional IT assumptions and enables new “cloud native” architectures You can architect a modern static website without needing a single web server This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 6 Use Amazon S3 Website Hosting to Host Without a Single Web Server Amazon Simple Storage Service (Amazon S3) can host static websites without a need for a web server The website is highly performant and scalable at a fraction of the cost of a traditional web server Amazon S3 is storage for the cloud providing you with secure durable highly scalable ob ject storage A simple web services interface allows you to store and retrieve any amount of data from anywhere on the web1 You start by creating an Amazon S3 bucket enabling the Amazon S3 website hosting feature and configuring access permissions for the bucket After you upload files Amazon S3 takes care of serving your content to your visitors Amazon S3 provides HTTP webserving capabilities and the content can be viewed by any browser You must also configure Amazon Route 53 a managed Domain Name System (DNS) service to point your domain to your Amazon S3 bucket Figure 3 illustrates this architecture where http://examplecom is the domain Figure 3: Amazon S3 website hosting This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 7 In this solution there are no Windows or Linux servers to manage and no need to provision machines install operating systems or f inetune web server configurations There’s also no need to manage storage infrastructure (eg SAN NAS) because Amazon S3 provides practically limitless cloud based storage Fewer moving parts means fewer troubleshooting headaches Scalability and Avail ability Amazon S3 is inherently scalable For popular websites Amazon S3 scales seamlessly to serve thousands of HTTP or HTTPS requests per second without any changes to the architecture In addition by hosting with Amazon S3 the website is inherently highly available Amazon S3 is designed for 99999999999% durability and carries a service level agreement (SLA) of 999% availability Amaz on S3 gives you access to the same highly scalable reliable fast and inexpensive infrastructure that Amazon uses to run its own global network of websites As soon as you upload files to Amazon S3 Amazon S3 automatically replicates your content across multiple data centers Even if an entire AWS data center were to be impaired your static website would still be running and available to your end users Compare this solution with traditional non AWS costs for implementing “active active” hosting for impo rtant projects Active active or deploying web servers in two distinct data centers is prohibitive in terms of server costs and engineering time As a result traditional websites are usually hosted in a single data center because most projects can’t ju stify the cost of “active active” hosting Encrypt Data in Transit We recommend you use HTTPS to serve static websites securely HTTPS is the secure version of the HTTP protocol that browsers use when communicating with websites In HTTPS the communication protocol is encrypted using Transport Layer Security (TLS) TLS protocols are cryptographic protocols designed to 
provide privacy and data integrity between two or more communicating computer applications HTTPS protects against maninthemiddle (MITM) attacks MITM attacks intercept and maliciously modify traffic Historically HTTPS was used for sites that handled financial information such as banking and ecommerce sites However HTTPS is now becoming more of the norm This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 8 rather than the exception For example the percentage of web pages loaded by Mozilla Firefox using HTTPS has increased from 49% to 75% in the past two years2 AWS Certificate Manager (ACM) is a service that lets you easily provision manage and deploy public and private Secure Sockets Layer (SSL)/TLS certificates for use with AWS services and your internal connected resources See Using HTTPS with Amazon CloudFro nt in this document for more implementation information Configuration Basics Configuration involves these steps: 1 Open the AWS Management Console 2 On the Amazon S3 console create an Amazon S3 bucket a Choose the AWS Region in which the files will be geographically stored 3 Select a Region based on its proximity to your visitors proximity to your corporate data centers and/or your regulatory or compliance requirements (eg some countries have restrictive data residency regulations) b Choose a bucket name that complies with DNS naming conventions If you plan to use your own custom domain/subdomain such as examplecom or wwwexamplecom your buc ket name must be the same as your domain/subdomain For example a website available at http://wwwexamplecom must be in a bucket named wwwexamplecom Note: Each AWS account can have a maximum of 1000 buckets 3 Toggle on the static website hosting feature for the bucket This generates an Amazon S3 website endpoint You can access your Amazon S3hosted website at the following URL: http://<bucket name>s3 website<AWSregion>amazonawscom Domain Names For small non public websites the Amazon S3 website endpoint is probably adequate You can also use internal DNS to poin t to this endpoint For a public facing website we recommend using a custom domain name instead of the provided Amazon S3 website endpoint This way users can see userfriendly URLs in their browsers If you plan to This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 9 use a custom domain name your bucket name must match the domain name For custom root domains (such as examplecom ) only Amazon Route 53 can configure a DNS record to point to the Route S3 hosted website For non root subdomains (such as wwwexa mplecom ) any DNS service (including Amazon Route 53) can create a CNAME entry to the subdomain See the Amazon Simple Storage Service Develo per Guide for more details on how to associate domain names with your website Figure 4: Configuring static website hosting using Amazon S3 console The Amazon S3 website hosting configuration screen in the Amazon S3 console presents additional options to configure Some of the key options are as follows: • You can configure a default page that users see if they visit the domain name directly (without specifying a specific page)4 You can also specify a custom 404 Page Not Found error page if the user stumbles onto a nonexistent page • You can enable logging to give you 
access to the raw web access logs (By default logging is disabled) • You can add tags to your Amazon S3 bucket These tags help when you want to analyze your AWS spend by project This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 10 Amazon S3 Object Names In Amazon S3 a bucket is a flat container of objects It doesn’t provide a hierarchical organization the way the file system on your computer does However there is a straightforward mapping between a file system’s folders/files to Amazon S3 objects The example that follows shows how folders/files are mapped to Amazon S3 objects Most thirdparty tools as well as the AWS Management Console and AWS Command Line Interface (AWS CLI) handle this m apping transparently for you For consistency we recommend that you use lowercase characters for file and folder names Uploading Content On AWS you can design your static website using your website authoring tool of choice Most web design and authori ng tools can save the static content on your local hard drive Then upload the HTML images JavaScript files CSS files and other static assets into your Amazon S3 bucket To deploy copy any new or modified files to the Amazon S3 bucket You can use the AWS API SDKs or CLI to script this step for a fully automated deployment You can upload files using the AWS Management Console You can also use AWS partner offerings such as CloudBerry S3 Bucket Explorer S3 Fox and other visual management tools The easiest way however is to use the AWS CLI The S3 sync command recursively uploads files and synchronizes your Amazon S3 bucket with your local folder5 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 11 Making Your Content Publicly Accessible For your visitors to access content at the Amazon S3 website endpoint the Amazon S3 objects must have the appropriate permissions Amazon S3 enforces a security by default policy New objects in a new bucket are private by default For example an Access Denied error appears when trying to view a newly uploaded file using your web browser To fix this configure the content as publicly accessible It’s possible to set objectlevel permissions for every individual object but that quickly becomes tedious Instead define an Amazon S3 bucket wide policy The following sample Amazon S3 bucket policy enables everyone to view all objects in a bucket: { "Version":"2012 1017" "Statement":[{ "Sid":"PublicReadGetObject" "Effect":"Allow" "Principal": "*" "Action":["s3:GetObject"] "Resource":["arn:aws:s3:::S3_BUCKET_NAME_GOES_HERE/*"] } ] } This policy defines who can view the contents of your S3 bucket See Securing Administrative Access to Your Website Resources for the AWS Identity and Access Management (IAM) policies to manage permissions for your team members Together S3 bucket policies and IAM policies g ive you fine grained control over who can manage and view your website Requesting a Certificate through ACM You can create and manage public private and imported certificates with ACM This section focuses on creating and using public certificates to be used with ACM integrated services specifically Amazon Route 53 and Amazon CloudFront This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 12 To request a certificate: 1 Add in the qualified domain names (eg examplecom ) you want to secure with a certificate 2 Select a validation method ACM can validate ownership by using DNS or by sending email to the contact addresses of the domain owner 3 Review the domain names and validation method 4 Validate If you used the DNS validation method you must create a CNAME record in the DNS configuration for each of the domains If the domain is not currently managed by Amazon Route 53 you can choose to export the DNS configurati on file and input that information in your DNS web service If the domain is managed by Amazon Route 53 you can click “Create record in Route 53” and ACM can update your DNS configuration for you After validation is complete return to the ACM console Your certificate status changes from Pending Validation to Issued Low Costs Encourage Experimentation Amazon S3 costs are storage plus bandwidth The actual costs depend upon your asset file sizes and your site’s popularity (the number of visito rs making browser requests) There’s no minimum charge and no setup costs When you use Amazon S3 you pay for what you use You’re only charged for the actual Amazon S3 storage required to store the site assets These assets include HTML files images JavaScript files CSS files videos audio files and any other downloadable files Your bandwidth charges depend upon the actual site traffic More specifically the number of bytes that are delivered to the website visitor in the HTTP responses Small websites with few visitors have minimal hosting costs Popular websites that serve up large videos and images incur higher bandwidth charges The Estimating and Tracking AWS Spend section of this document describes how you can estimate and track your costs With Amazon S3 experi menting with new ideas is easy and cheap If a website idea fails the costs are minimal For microsites publish many independent microsites at once run A/B tests and keep only the successes This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 13 Evolving the Architecture with Amazon CloudFront Amazon CloudFront content delivery web service integrates with other AWS products to give you an easy way to distribute content to users on your website with low latency high data transfer speeds and no minimum usage commitments Factors Contr ibuting to Page Load Latency To explore factors that contribute to latency we use the example of a user in Singapore visiting a web page hosted from an Amazon S3 bucket in the US West (Oregon) Region in the United States From the moment the user visits a web page to the moment it shows up in the browser several factors contribute to latency: • FACTOR (1) Time it takes for the browser (Singapore) to request the web page from Amazon S3 (US West [Oregon] Region) • FACTOR (2) Time it takes for Amazon S3 to retrieve the page contents and serve up the page • FACTOR (3) Time it takes for the page contents (US West [Oregon] Region) to be delivered across the Internet to the browser (Singapore) • FACTOR (4) Time it takes for the browser to parse and display the web page This latency is illustrated in the following figure Figure 5: Factors affecting page load latency This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: 
https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 14 AWS addresses FACTOR (2) by optimizing Amazon S3 to serve up content as quickly as possible You can improve FACTOR (4) by optimizing the actual page content (eg minifying CSS and JavaScript using efficient image and video formats) However page loading studies consistently show that most latency is due to FACTOR (1) and FACTOR (3)6 Most of the delay in accessing pages over the internet is due to the round trip delay associated with establishing TCP connections (the infamous three way TCP handshake) and the time it takes for TCP packets to be delivered across long Internet distan ces) In short serve content as close to your users as possible In our example users in the USA will experience relatively fast page load times whereas users in Singapore will experience slower page loads Ideally for the users in Singapore you would want to serve up content as close to Singapore as possible Speeding Up Your Amazon S3 Based Website Using Amazon CloudFront Amazon CloudFront is a CDN that uses a global network of edge locations for content delivery Amazon CloudFront also provides reports to help you understand how users are using your website As a CDN Amazon CloudFront can distribute content with low latency and high data transfer rates There are multiple CloudFront edge locations all around the world Therefore no matter where a visitor lives in the world there is an Amazon CloudFront edge location that is relatively close (from an Internet laten cy perspective) The Amazon CloudFront edge locations cache content from an origin server and deliver that cached content to the user When creating an Amazon CloudFront distribution specify your Amazon S3 bucket as the origin server The Amazon CloudFront distribution itself has a DNS You can refer to it using a CNAME if you have a custom domain name To point the A record of a root domain to an Amazon CloudFront distribution you can use Amazon Route 53 alias records as illustrated in the following diagram This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 15 Figure 6: Using Amazon Route 53 alias records with an Amazon CloudFront distribution Amazon CloudFront also keeps persistent connections with your origin servers so that those files can be fetched from the origin servers as quickly as possible Finally Amazon CloudFront uses additional optimizations (eg wider TCP initial congestion window) to provide higher perfor mance while delivering your content to viewers When an end user requests a web page using that domain name CloudFront determines the best edge location to serve the content If an edge location doesn’t yet have a cached copy of the requested content CloudFront pulls a copy from the Amazon S3 origin server and holds it at the edge location to fulfill future requests Subsequent users requesting the same content from that edge location experience faster page loads because that content is already cached The following diagram shows the flow in detail This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 16 Figure 7: How Amazon CloudFront caches content Using HTTPS with Amazon CloudFront You can configure Amazon CloudFront to require that viewers use HTTPS to request your objects 
so that connections are encrypted when Amazon CloudFront communicates with viewers You can also configure Amazon CloudFront to use HTTPS to get objects from your origin so that connections are encrypted when Amazon CloudFront commun icates with your origin If you want to require HTTPS for communication between Amazon CloudFront and Amazon S3 you must change the value of the Viewer Protocol Policy to Redirect HTTP to HTTPS or HTTPS Only This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 17 We recommend using Redirect HTTP to HTTPS Viewers can use both protocols HTTP GET and HEAD requests are automatically redirected to HTTPS requests Amazon CloudFront returns HTTP status code 301 (Moved Permanently) along with the new HTTPS URL The viewer then resubmits the request to Amazon CloudFront using the HTTPS URL Amazon CloudFront Reports Amazon CloudFront includes a set of reports that provide insight into and answers to the following questions: • What is the overall health of my website? • How many visitors are viewing my website? • Which browsers devices and operating systems are they using? • Which countries are they coming from? • Which websites are the top referrers to my site? • What assets are the most popular ones on my site? • How often is CloudFront caching taking place? Amazon CloudFront reports can be used alongside other online analytics tools and we encourage the use of multiple reporting tools Note that some analytics tools may require you to embed client side JavaScript in your HTML pages Ama zon CloudFront reporting does not require any changes to your web pages See the Amazon CloudFront Developer Guide for more information on reports Estimating and Tracking AWS Spend With AWS there is no upper limit to the amount of Amazon S3 storage or network bandwidth you can consume You pay as you go and only pay for actual usage Because you’re not using web servers in this architecture you have no licensing costs or concern for server scalability or utilization Estimating AWS Spend To estimate your monthly costs you can use the AWS Simple Monthly Calculator Pricing sheets for Amazon Route 53 Amazon S3 and Amazon CloudFront are This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 18 available online 7 AWS pricing is Region specific See the following links for the most recent pricing information: Amazon Route 53 Amazon CloudFront Amazon S3 Tracking AWS Spend The AWS Cost Explorer can help you track cost trends by service type It’s integrated in the AWS Billing console and runs in your browser The Monthly Cost by Service chart allows you to see a detailed breakdown by service The Daily Cost report helps you track your spending as it happens If you configured tags for your Amazon S3 bucket you can filter your reports against specific tags for cost allocation purposes See Using the Default Cost Explorer Reports Integration with Your Continuous Deployment Process Your website content should be managed using version control software (such as Git Subversion or Microsoft Team Foundation Server) to make it possible to revert to older versions of your files8 AWS offers a managed source control service called AWS CodeCommit that makes it easy to host secure and private Git repo sitories Regardless of which version 
control system your team uses, consider tying it to a continuous build/integration tool so that your website automatically updates whenever the content changes. For example, if your team is using a Git-based repository for version control, a Git post-commit hook can notify your continuous integration tool (for example, Jenkins) of any content updates. At that point, your continuous integration tool can perform the actual deployment to synchronize the content with Amazon S3 (using either the AWS CLI or the Amazon S3 API) and notify the user of the deployment status.

Figure 8: Example continuous deployment process

If you don't want to use version control, be sure to periodically download your website and back up the snapshot. The AWS CLI lets you download your entire website with a single command:

aws s3 sync s3://bucket /my_local_backup_directory

Access Logs

Access logs can help you troubleshoot or analyze traffic coming to your site. Both Amazon CloudFront and Amazon S3 give you the option of turning on access logs. There's no extra charge to enable logging, other than the storage of the actual logs. The access logs are delivered on a best-effort basis; they are usually delivered within a few hours after the events are recorded.

Analyzing Logs

Amazon S3 access logs are deposited in your Amazon S3 bucket as plain text files. Each record in the log files provides details about a single Amazon S3 access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. You can open individual log files in a text editor or use a third-party tool that can interpret the Amazon S3 access log format. CloudFront logs are deposited in your Amazon S3 bucket as GZIP-compressed text files. CloudFront logs follow the standard W3C extended log file format and can be analyzed using any log analyzer.

You can also build out a custom analytics solution using AWS Lambda and Amazon Elasticsearch Service (Amazon ES). AWS Lambda functions can be hooked to an Amazon S3 bucket to detect when new log files are available for processing. AWS Lambda function code can process the log files and send the data to an Amazon ES cluster. Users can then analyze the logs by querying Amazon ES or by using the Kibana visual dashboard. Both AWS Lambda and Amazon ES are managed services, and there are no servers to manage.

Figure 9: Using AWS Lambda to send logs from Amazon S3 to Amazon Elasticsearch Service

Archiving and Purging Logs

Amazon S3 buckets don't have a storage cap, and you're free to retain logs for as long as you want. However, an AWS best practice is to archive files into Amazon S3 Glacier. Amazon S3 Glacier is suitable for long-term storage of infrequently accessed files. Like Amazon S3, Amazon S3 Glacier is designed for 99.999999999% data durability, and you have practically unlimited storage. The difference is in retrieval time: Amazon S3 supports immediate file retrieval, whereas with Amazon S3 Glacier there is a delay between initiating a retrieval request and being able to download the files. Amazon S3 Glacier storage costs are cheaper than S3, disks, or tape drives. See the Amazon S3 Glacier page for pricing.
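The lifecycle-based archiving that the next paragraphs describe can also be scripted. The following is a minimal sketch using the AWS CLI; the bucket name, log prefix, and day counts are assumptions for illustration and should be replaced with values that match your own retention requirements.

# Illustrative only: bucket name, prefix, and day counts are placeholder assumptions.
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-logs-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-then-purge-access-logs",
      "Filter": {"Prefix": "logs/"},
      "Status": "Enabled",
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 395}
    }]
  }'

In this sketch, log objects under the logs/ prefix transition to the Glacier storage class after 30 days and are deleted after 395 days, mirroring the two-policy tiering pattern described next.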
The easiest way to archive data into Amazon S3 Glacier is to use Amazon S3 lifecycle policies The lifecycle policies can be applie d to an entire Amazon S3 bucket or to specific objects within the bucket (eg only the log files) A minute of configuration in the Amazon S3 console can reduce your storage costs significantly in the long run Here’s an example of setting up data tiering using lifecycle policies: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 21 • Lifecycle policy #1: After X days automatically move logs from Amazon S3 into Amazon S3 Glacier • Lifecycle policy #2: After Y days automatically delete logs from Amazon S3 Glacier Data tiering is illustrated in the following figure Figure 10: Data tiering using Amazon S3 lifecycle policies Securing Administration Access to Your Website Resources Under the AWS shared responsibility model the responsibility for a secure website is shared between AWS and the customer (you) AWS provides a global secure infrastructure and foundation compute storage networking and database services as well as higher level services AWS also provides a range of security services and features that you can use to secure your assets As an AWS customer you’re responsible for protecting the confidentiality integrity and availability of your data in the cloud and for meeting your specific business requirements for information protection We strongly rec ommend working closely with your Security and Governance teams to implement the recommendations in this whitepaper A benefit of using Amazon S3 and Amazon CloudFront as a serverless architecture is that the security model is also simplified You have no o perating system to harden servers to patch or software vulnerabilities to generate concern Also S3 offers security options such as server side data encrypt ion and access control lists This results in a significantly reduced surface area for potential attacks This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 22 Managing Administrator Privileges Enforcing the principle of least privilege is a critical part of a security and governance strategy In most organizations the team in charge of DNS configurations is separate from the team that manages web content You should grant users appropriate levels of permissions to access only the resources they need In AWS you can use IAM to lock down permissions You can create multiple IAM users under your AWS account each with their own login and password9 An IAM user can be a person service or application that requires access to your AWS resources (in this case Amazon S3 buckets Amazon CloudFront distributions and Amazon Route 53 hosted zones) through the AWS Management Console command line tools or APIs You can then organize them into IAM groups based on functional roles When an IAM user is placed in an IAM group it inherits the group’s permissions The finegrained policies of IAM allow you to grant administrators the minimal privileges needed to accomplish their tasks The permissions can be scoped to specific Amazon S3 buckets and Amazon Route 53 hosted zones The following is an example separation of duties : IAM configuration can be managed by: • Super_Admins Amazon Route 53 configuration can be managed by: • Super_Admins • 
Network_Admins CloudFront configuration can be managed by: This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 23 • Super_Admins • Network_Admins • Website_Admin Amazon S3 configuration can be managed by: • Super_Admins • Website_Admin Amazon S3 content can be updated by: • Super_Admins • Website_Admin • Website_Content_Manager An IAM user can belong to more than one IAM group For example if someone must manage both Amazon Route 53 and Amazon S3 that user can belong to both the Network_Admins and the Website_Admins groups The best practice is to require all IAM users to rotate their IAM passwords periodically Multi factor authentication (MFA) should be enabled for any IAM user account with administrator privileges Auditing API Calls Made in Your AWS Account You can use AWS CloudTrail to see an audit trail for API activity in your AWS account Toggle it on for all AWS Regions and the audit logs will be deposited to an Amazon S3 bucket You can use the AWS Management Console to search against API acti vity history Or you can use a third party log analyzer to analyze and visualize the CloudTrail logs You can also build a custom analyzer Start by configuring CloudTrail to send the data to Amazon CloudWatch Logs CloudWatch Logs allows you to create automated rules that notify you of suspicious API activity CloudWatch Logs also has seamless integration with Amazon ES and you can configure the data to be automatically streamed over to a managed Amazon ES cluster Once the data is in Amazon ES users can query against that data directly or visualize the analytics using a Kibana dashboard This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 24 Controlling How Long Amazon S3 Content is Cached by Amazon CloudFront It is important to control how long your Amazon S3 content is cached at the CloudFront edge locations This helps make sure that website updates appear correctly If you’re ever confused by a situation in which you’ve updated your website but you are still seeing stale content when visiting your CloudFront powered website one likely reason is that CloudFront is still serving up cached content You can control CloudFront caching behavior with a co mbination of Cache Control HTTP headers CloudFront Minimum Time toLive (TTL) specifications Maximum TTL specifications content versioning and CloudFront Invalidation Requests Using these correctly will help you manage website updates CloudFront will typically serve cached content from an edge location until the content expires After it expires the next time that content is requested by an end user CloudFront goes back to the Amazon S3 origin server to fetch the content and then cach e it CloudFront edge locations automatically expire content after Maximum TTL seconds elapse (by default this is 24 hours) However it could be sooner because CloudFront reserves the flexibility to expire content if it needs to before the Maximum TTL is reached By default the Minimum TTL is set to 0 (zero) seconds but this value is configurable Therefore CloudFront may expire content anytime between the Minimum TTL (default is 0 seconds) and Maximum TTL (default is 24 hours) For example if Minimum TTL=60s and Maximum TTL=600s then content will be cached for at least 60 seconds and at most 
600 seconds For example say you deploy updates to your marketing website with the latest and greatest product images After uploading your new images to Amazon S3 you immediately browse to your website DNS and you still see the old images! It is likely that one and possibly more CloudFront edge locations are still holding onto cached versions of the older images and serving the cached versions up to your website visitors If you’re the patient type you can wait for CloudFront to expire the content but it could take up to Maximum TTL seconds for that to happen There are several approaches to address this issue each with its pros and cons Set Maximum TTL Value Set the Maximum TTL to be a relatively low value The tradeoff is that cached content expires faster because of the low Maximum TTL value This results in more frequent This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 25 requests to your Amazon S3 bucket because the CloudFront caches need to be repopulated more often In addition the Maximum TTL setting applies across the board to all CloudFront files and for some websites you might want to control cache expiration behaviors based on file types Implement Content Ve rsioning Every time you update website content embed a version identifier in the file names It can be a timestamp a sequential number or any other way that allows you to distinguish between two versions of the same file For example instead of banner jpg call it banner_20170401_v1jpg When you update the image name the new version banner_20170612_v1jpg and update all files that need to link to the new image In the following example the banner and logo images are updated and receive new file names However because those images are referenced by the HTML files the HTML markup must also be updated to reference the new image file names Note that the HTML file names shouldn’t have version identifiers in order to provide stable URLs for end users This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 26 Content versioning has a clear benefit: it sidesteps CloudFront expiration behaviors altogether Since new file names are involved CloudFront immediately fetches the new files from Amazon S3 (and afterwards cache them) NonHTML website changes are reflected immediately Additionally you can roll back and roll forward between different versions of your website The main challenge is that content update processes must be “version aware” File names must be versioned Files with references to changed files must also be updated For example if an image is updated the following items must be updated as well: • The image file name • Content in any HTML CSS and JavaScript files referencing the older image file name • The file names of any referencing files (with the exception of HTML files) A few static site generator tools can automatically rename file names with version identifiers but most tools aren’t version aware Manually managing version identifiers can be cumbersome and error prone If your website would benefit from content versioning it may be worth investing in a few automation scripts to streamline your content update process Specify Cache Control Headers You can manage CloudFront expiration behavior by specifying Cache Control headers for your website 
content If you keep the Minimum TTL at the default 0 seconds then This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Hosting Static Websites on AWS Amazon Web Services Page 27 CloudFront honors any CacheControl: max age HTTP header that is individually set for your content If an image is configured with a Cache Control: maxage=60 header then CloudFront expires it at the 60 second mark This gives you more precise control over content expiration for individual files You can configure Amazon S3 to return a CacheControl HTTP header with the value of maxage=<seconds> when S3 serves up the content This setting is on a file byfile basis and we recommend using different values depending on the file type (HTML CSS JavaScript images etc) Since HTML files won’t have version identifiers in their file names we recommend using smaller maxage values for HTML files so that CloudFront will expire the HTML files sooner than other content You can set this by editing the Amazon S3 object metadata using the AWS Management Console Figure 11: Setting Cache Control Values in the console In practice you should automate this as part of your Amazon S3 upload process With AWS CLI you can alter your deployment scripts like the following example: aws s3 sync /path s3://yourbucket/ delete recursive \ cachecontrol max age=60 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Error! No text of specified style in document 1 Use CloudFront Invalidation Requests CloudFront invalida tion requests are a way to force CloudFront to expire content Invalidation requests aren’t immediate It takes several minutes from the time you submit one to the time that CloudFront actually expires the content For the occasional requests you can submit them using the AWS Management Console Otherwise use the AWS CLI or AWS APIs to script the invalidation In addition CloudFront lets you specify which content should be invalidated: You can choose to invalidate your entire Amazon S3 bucket indivi dual files or just those matching a wildcard pattern For example to invalidate only the images directory issue an invalidation request for: /images/* In summary the best practice is to understand and use the four approaches together If possible implement content versioning It allows you to immediately review changes and gives you the most precise control over the CloudFront and Amazon S3 experience Set the Minimum TTL to be 0 seconds and the Maximum TTL to be a relatively low value A lso use CacheControl headers for individual pieces of content If your website is infrequently updated then set a large value for Cache Control:max age=<seconds> and then issue CloudFront invalidation requests every time your site is updated If the website is updated more frequently use a relatively small value for CacheControl:max age=<seconds> and then issue CloudFront invalidation requests only if the CacheControl:max age=<seconds> settings exceeds several minutes Conclusion This whitepaper began with a look at traditional (non AWS) architectures for static websites We then showed you an AWS Cloud native architecture based on Amazon S3 Amazon CloudFront and Amazon Route 53 The AWS architecture is highly available and scalable secure and provides for a responsive user experience at very low cost By enabling and analyzing the available logs you can you understand your visitors 
and how well the website is performing Fewer moving parts means less maintenance is required In addition the architecture costs only a few dollars a month to run This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Error! No text of specified style in document 2 Contributors Contributors to this document include: •Jim Tran AWS Principal Enterprise Solutions Architect •Bhushan Nene Senior Manager AWS Solutions Architecture •Jonathan Pan Senior Product Marketing Manager AWS •Brent Nash Senior Software Development Engineer AWS Further Reading For additional information see: •AWS Whitepapers page •Amazon CloudFront Developer Guide •Amazon S3 Documentation Document Revisions Date Description May 21 2021 Updated for technical accuracy February 2019 Added usage of HTTPS May 2017 First publication Notes 1 Each S3 object can be zero bytes to 5 TB in file size and there’s no limit to the number of Amazon S3 objects you can store 2 https://letsencryptorg/stats/ 3 If your high availability requirements require that your website must remain available even in the case of a failure of an entire AWS Region explore the Amazon S3 Cross Region Replication capability to automatically replicate your S3 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Error! No text of specified style in document 3 data to another S3 bucket in a second AWS Region 4 For Microsoft IIS web servers this is equivalent to “defaulthtml”; for Apache web servers this is equivalent to “indexhtml” 5 For moving very large quantities of data into S3 see https://awsamazoncom/s3/transfer acceleration/ 6 “We find that the performance penalty incurred by a web flow due to its TCP handshake is between 10% and 30% of the latency to serve the HTTP request as we show in detail in Section 2” f rom https://raghavanuscedu/papers/tfo conext11pdf 7 Any pricing information included in this document is provided only as an estimate of usage charges for AWS services based on the prices effective at the time of this writing Monthly charges will be based on your actual use of AWS services and may vary from the estimates provided 8 If version control software is not in use at your organization one alternative approach is to look at the Amazon S3 object versioning feature for versioning your critical files Note that object versioning introduces storage costs for each version and requires you to programmatically manage the different versions For more information see http://docsawsamazoncom/AmazonS3/latest/dev/Versioninghtml 9 The AWS account is the account that you create when you first sign up for AWS Each AWS account has root permissions to all AWS resources and services The best practice is to enable multi factor authentication (MFA) for your root account and then lock away the root credentials so that no person or system uses the root credentials directly for day today ope rations Instead create IAM groups and IAM users for the day today operations
|
General
|
consultant
|
Best Practices
|
How_AWS_Pricing_Works
|
ArchivedHow AWS Pricing Works AWS Pricing Overview October 30 2020 This paper has been archived For the latest technical guidance on How AWS Pricing Works see https://docsawsamazoncom/whitepapers/latest/how awspricingworks/welcomehtmlArchivedNotices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 20 20 Amazon Web Services Inc or its affiliates All rights reserved ArchivedContents Introduction 1 Key principles 1 Understand the fundamentals of pricing 1 Start early with cost optimization 2 Maximize the power of flexibility 2 Use the right pricing model for the job 2 Get started with the AWS Free Tier 3 12 Months Free 3 Always Free 4 Trials 4 AWS Pricing/TCO Tools 4 AWS Pricing Calculator 5 Migration Evaluator 5 Pricing details for individual services 6 Amazon Elastic Compute Cloud (Amazon EC2) 6 AWS Lambda 10 Amazon Elastic Block Store (Amazon EBS) 11 Amazon Simple Storage Service (Amazon S3) 12 Amazon S3 Glacier 13 AWS Outposts 14 AWS Snow Family 16 Amazon RDS 18 Amazon DynamoDB 19 Amazon CloudFront 23 Amazon Kendra 23 Amazon Kinesis 25 ArchivedAWS IoT Events 27 AWS Cost Optimization 28 Choose the right pricing models 28 Match Capacity with Demand 28 Implement processes to identify resource waste 29 AWS Support Plan Pricing 30 Cost calculation examples 30 AWS Clou d cost calculation example 30 Hybrid cloud cost calculation example 33 Conclusion 37 Contributors 38 Further Reading 38 Document Revisions 39 ArchivedAbstract Amazon Web Services (AWS) helps you move faster reduce IT costs and attain global scale through a broad set of global compute storage database analytics application and deployment services One of the main benefits of cloud services is the ability it gives you to optimize costs to match your needs even as those needs change over time ArchivedAmazon Web Services How AWS Pricing Works Page 1 Introduction AWS has the services to help you build sophisticated applications with increased flexibility scalability and reliability Whether you're looking for compute power database storage content delivery or other functionality with AWS you pay only for the individual services you need for as long as you use them without complex licensing AWS offers you a variety of pricing models for over 160 cloud services You only pay for the services you consume and once you stop using them there are no additional costs or termination fees This whitepaper provides an overview of how AWS pricing works across some of the most widely u sed services The latest pricing information for each AWS service is available at http://awsamazoncom/pricing/ Key principles Although pricing models vary across services it’s worthwhile to review key principles and best practices that are broadly applicable Understand the fundamentals of pricing There are three fundamental drivers of cost with AWS: compute storage and outbound data transfer These characteristics vary somewhat depending 
on the AWS product and pricing model you choose In most cases there is no charge for inbound data transfer or for data transfer between other AWS services within the same Region There are some exceptions so be sure to verify data transfer rates before beginni ng Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate This charge appears on the monthly statement as AWS Data Transfer Out The more data you transfer the less you pay per GB For compute resources you pay hourly from the time you launch a resource until the time you terminate it unless you have made a reservation for which the cost is agreed upon beforehand For data storage and transfer you typically pay per GB Except as otherwise noted AWS prices are exclusive of applicable taxes and duties including VAT and sales tax For customers with a Japanese billing address use of AWS is subject to Japanese Consumption Tax For more information see Amazon Web Services Consumption Tax FAQ ArchivedAmazon Web Services How AWS Pricing Works Page 2 Start early with cost optimization The cloud allows you to trade fixed expenses (such as data centers and physical servers) for variable expenses and only pay for IT as you consume it And because of the economies of scale the variable expenses are much lower than what you would pay to do it yourself Whether you started in the cloud or you are just starting your migration journey to the cloud AWS has a set of solutions to help you manage and optimize your spend This includes services tools and resources to organize and track cost and usage data enhance control throu gh consolidated billing and access permission enable better planning through budgeting and forecasts and further lower cost with resources and pricing optimizations To learn how you can optimize and save costs today visit AWS Cost Optimization Maximize the power of flexibility AWS services are priced independently transparently and available on demand so you can choose and pay for exactly what you need You may also choose to save money through a reservation model By paying for services on an as needed basis you can redirect your focus to innovation and invention reducing procurement complexity and enabling your business to be fully elastic One of the key advantages of cloud based resources is that you don’t pay for them when they’re not running By turning off instances you don’t use you can reduce costs by 70 percent or more compared to using them 24/7 This enables you to be cost efficient and at t he same time have all the power you need when workloads are active Use the right pricing model for the job AWS offers several pricing models depending on product These include: • OnDemand Instances let you pay for compute or database capacity by the hour or second (minimum of 60 seconds) depending on which instances you run with no long term commitments or upfront payments • Savings Plans are a flexible pricing model that offer low prices on Amazon EC2 AWS Lambda and AWS Fargate usage in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a one or three year term ArchivedAmazon Web Services How AWS Pricing Works Page 3 • Spot Instances are an Amazon EC2 pricing mechanism that let you request spare computing capacity with no upfront commitment and at disc ounted hourly rate (up to 90% off the on demand price) • Reservations provide you with the ability to receive a greater discount up to 75 percent by paying for capacity ahead of time For more details see the 
Optimizing costs with reservations section Get started with the AWS Free Tier The AWS Free Tier enables you to gain free hands on experience with more than 60 products on AWS platform AWS Free Tier includes the following free offer types : • 12 Months Free – These tier offers include 12 months free usage following your initial sign up da te to AWS When your 12 month free usage term expires or if your application use exceeds the tiers you simply pay standard pay asyougo service rates • Always Free – These free tier offer s do not expire and are available to all AWS customers • Trials – These offers are short term free trial s starting from d ate you activate a particular service Once the trial period expires you simply pay standard pay as yougo service rates This section lists some of the most commonly used AWS Free Tier services Terms and conditions apply For the full list of AWS F ree Tier services see AWS Free Tier 12 Months Free • Amazon Elastic Compute Cloud (Amazon EC2) : 750 hours per month of Linux RHEL or SLES t2micro/t3micro instance usage or 750 hours per month of Windows t2micro/t3micro instance usage dependent on Region • Amazon Simple Storage Service (Amazon S3) : 5 GB of Amazon S3 standard storage 20000 Get Requests and 2000 Put Requests • Amazon Relational Database Service (Amazon RDS) : 750 hours of Amazon RDS Single AZ dbt2micro database usage for running MySQL PostgreSQL MariaDB Oracle BYOL or SQL Server (running SQL Server Express Editi on); 20 GB of general purpose SSD database storage and 20 GB of storage for database backup and DB snapshots ArchivedAmazon Web Services How AWS Pricing Works Page 4 • Amazon CloudFront: 50 GB Data Transfer Out and 2000000 HTTP and HTTPS Requests each month Always Free • Amazon DynamoDB : Up to 200 million requests per month (25 Write Capacity units and 25 Read Capacity units ); 25 GB of storage • Amazon S3 Glacier : Retrieve up to 10 GB of your Amazon S3 Glacier data per month for free (applies to standard retrievals using the Glacier API only) • AWS Lambda : 1 million free requests per month; up to 32 million seconds of compute time per month Trials • Amazon SageMaker: 250 hours per month of t2medi um notebook50 hours per month of m4xlarge for training 125 hours per month of m4xlarge for hosting for the first two months • Amazon Redshift : 750 hours per month for fr ee enough hours to continuously run one DC2Large node with 160GB of compressed SSD storage You can also build clusters with multiple nodes to test larger data sets which will consume your free hours more quickly Once your two month free trial expires or your usage exceeds 750 hours per month you can shut down your cluster to avoid any charges or keep it running at the standard OnDemand Rate The AWS Free Tier is not available in th e AWS GovCloud (US) Regions or the China (Beijing) Region at this time The Lambda Free Tier is available in the AWS GovCloud (US) Region AWS Pricing/TCO Tools To get the most out of your estimates you should have a good idea of your basic requirements For example if you're going to try Amazon Elastic Compute Cloud (Amazon EC2) it might help if you know what kind of operating system you need what your memory requirements are and how much I/O you need You should also decide whether you need storage such as if you're going to run a database and how long you intend to use the servers You don't need to make these decisions before generating an estimate thoug h You can play around with the service configuration and parameters to ArchivedAmazon Web 
Services How AWS Pricing Works Page 5 see which options fit your use case and budget be st For more information about AWS service pricing see AWS Services Pricing AWS offers couple of tools (free of cost) for you to use If the workload details and services to be used are identified AWS pricing calculator can help with calculating the total cost of ownership Migration Evaluator helps with inventorying your existin g environment identifying workload information and designing and planning your AWS migration AWS Pricing Calculator AWS Pricing Calculator is a web based service that you can use to create cost estimates to suit your AWS use case s AWS Pricing Calculat or is useful both for people who have never used AWS and for those who want to reorganize or expand their usage AWS Pricing Calculator allows you to explore AWS services based on your use cases and create a cost estimate You can model your solutions befo re building them explore the price points and calculations behind your estimate and find the available instance types and contract terms that meet your needs This enables you to make informed decisions about using AWS You can plan your AWS costs and us age or price out setting up a new set of instances and services AWS Pricing Calculator is free for use It provides an estimate of your AWS fees and charges The estimate doesn't include any taxes that might apply to the fees and charges AWS Pricing Cal culator provides pricing details for your information only AWS Pricing Calculator provides a console interface at https://calculatoraws/#/ Migration Evaluator Migration Evaluator (Formerly TSO Logic) is a complimentary service to create data driven business cases for AWS Cloud planning and migration Creating business cases on your own can be a time consuming process and does not always identify the most cost effective deployment and purchasing options Migration Evaluator quickly provides a business case to make sound AWS planning and migration decisions With Migration Evaluator your organization can build a data driven business case for AWS gets access to AW S expertise visibility into the costs associated with multiple migration strategies and insights on how reusing existing software licensing reduces costs further ArchivedAmazon Web Services How AWS Pricing Works Page 6 A business case is the first step in the AWS migration journey Beginning with on premises inventory discovery you can choose to upload exports from 3rd party tools or install a complimentary agentless collector to monitor Windows Linux and SQL Server footprints As part of a white gloved experience Migration Evaluator includes a team of program managers and solution architects to capture your migration objective and use analytics to narrow down the subset of migration patterns best suited to your business needs The results are captured in a transparent business case which aligns business and technology stakeholders to provide a prescriptive next step in your migration journey Migration Evaluator service analyzes an enterprise’s compute footprint including server configuration utilization annual costs to operate eligibility for bring yourownlicense and hundreds of other parameters It then statistically models utilization patterns matching each workload with optimized placements in the AWS Amazon Elastic Cloud Compute and Amazon Elastic Block Store Finally it outputs a business case with a comparison of the current state against multiple future state configurations showing the flexibility of AWS For more information see 
Migration Evaluator Pricing details for individual se rvices Different types of services lend themselves to different pricing models For example Amazon EC2 pricing varies by instance type wh ereas the Amazon Aurora database service includes charges for data input/output (I/ O) and storage This section provi des an overview of pricing concepts and examples for few AWS services You can always find current price information for each AWS service at AWS Pricing Amazon Elastic Compute Cloud (Amazon EC2) Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure resizable compute capacity in the cloud It is designed to make web scale cloud computing easier for developers The simple web service interf ace of Amazon EC2 allows you to obtain and configure capacity with minimal friction with complete control of your computing resources Amazon EC2 reduces the time required to obtain and boot new server instances in minutes allowing you to quickly scale ca pacity both up and down as your computing requirements change ArchivedAmazon Web Services How AWS Pricing Works Page 7 Pricing models for Amazon EC2 There are five ways to pay for Amazon EC2 instances: OnDemand Instances Savings Plans Reserved Instances and Spot Instances OnDemand Instances With OnDemand Instances you pay for compute capacity per hour or per second depending on which instances you run No long term commitments or upfront payments are required You can increase or decrease your compute capacity to meet the demands of your application and only pay the specified hourly rates for the instance you use On Demand Instances are recommende d for the following use cases : • Users who prefer the low cost and flexibility of Amazon EC2 without upfront payment or long term commitments • Applications with short term spiky or unpredictable workloads that cannot be interrupted • Applications being develo ped or tested on Amazon EC2 for the first time Savings Plans Savings Plans are a flexible pricing model that offer low prices on Amazon EC2 AWS Lambda and AWS Fargate usage in exchange for a commitmen t to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage This pricing model offers lower prices on Amazon EC2 instances usage reg ardless of instance family size OS tenancy or AWS Region and also applies to AWS Fargate and AWS Lambda usage For workloads that have predictable and consistent usage Savings Plans can provide significant savings compared to On Demand Instances it i s recommended for: • Workloads with a consistent and steady state usage • Customers who want to use different instance types and compute solutions across different locations • Customers who can make monetary commitment to use EC2 over a oneor threeyear term ArchivedAmazon Web Services How A WS Pricing Works Page 8 Spot Instances Amazon EC2 Spot Instances allow you to request spare Amazon EC2 computing capacity for up to 90 percent off the On Demand price Spot Instances are recommended for: • Applications that have f lexible start and end times • Applications that are only feasible at very low compute prices • Users with fault tolerant and/or stateless workloads Spot Instance prices are set by Amazon EC2 and adjust gradually based on long term trends in supply and demand f or Spot Instance capacity Reserved Instances Amazon EC2 Reserved Instances provide you with a significant discount (up to 75 percent) compared to On Demand Instance 
pricing In addi tion when Reserved Instances are assigned to a specific Availability Zone they provide a capacity reservation giving you additional confidence in your ability to launch instances when you need them Persecond billing Persecond billing saves money and has a minimum of 60 seconds billing It is particularly effective for resources that have periods of low and high usage such as development and testing data processing analytics batch processing and gaming applications Learn more about per second billing Estimating Amazon EC2 costs When you begin to estimate the cost of using Amazon EC2 consider the following: • Clock hours of server time: Resources incur charges when they are running — for example from the time Amazon EC2 instances are launched until they are terminated or from the time Elastic IP addresse s are allocated until the time they are de allocated ArchivedAmazon Web Services How AWS Pricing Works Page 9 • Instance type: Amazon EC2 provides a wide selection of instance types optimized to fit different use cases Instance types comprise varying combinations of CPU memory storage and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications Each instance type includes at least one instance size allowing you to scale your resources to the requirements of your target workload • Pricing model : With On Demand Instances you pay for compute capacity by the hour with no required mini mum commitments • Number of instances : You can provision multiple instances of your Amazon EC2 and Amazon EBS resources to handle peak loads • Load balancing: You can use Elastic Load Balanc ing to distribute traffic among Amazon EC2 Instances The number of hours Elastic Load Balanc ing runs and the amount of data it processes contribute to the monthly cost • Detailed monitoring : You can use Amazon CloudWatch to monitor your EC2 instances By default basic monitoring is enabled For a fixed monthly rate you can opt for detailed monitoring which includes seven preselected metrics recorded once a minute Partial months are charged on an hourly pro rata basis at a per instance hour rate • Amazon EC2 Auto Scal ing: Amazon EC2 Auto Scaling automatically adjusts the number of Amazon EC2 instances in your deployment according to the scaling policies you define This service is available at no additional charge beyond Amazon CloudWatch fees • Elastic IP addresses : You can have one Elastic IP address associated with a running instance at no charge • Licensing : To run operating systems and applications on AWS you can obtain variety of software licenses from AWS on a pay asyougo basis that are fully compliant and do no t require you to manage complex licensing terms and conditions However i f you have existing licensing agreements with software vendors you can bring your eligible licenses to the cloud to reduce total cost of ownership (TCO) AWS offers License Manager which makes it easier to manage your software licenses from vendors such as Microsoft SAP Oracle and IBM across AWS and on premises environments For more information see Amazon EC2 pricing ArchivedAmazon Web Services How AWS Pricing Works Page 10 AWS Lambda AWS Lambda lets you run code without provisioning or managing servers You pay only for the compute time you consume —there is no charge when your code is not running With Lambda you can run code for virtually any type of application or backend service —all with zer o administration Just upload your code and Lambda takes care of everything 
required to run and scale your code with high availability.

AWS Lambda pricing
With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the time it takes for your code to execute. Lambda registers a request each time it starts executing in response to an event notification or invoke call, including test invokes from the console. You are charged for the total number of requests across all your functions. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100 milliseconds. The price depends on the amount of memory you allocate to your function.

AWS Lambda participates in Compute Savings Plans, a flexible pricing model that offers low prices on Amazon EC2, AWS Fargate, and AWS Lambda usage in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1- or 3-year term. With Compute Savings Plans you can save up to 17% on AWS Lambda. Savings apply to Duration, Provisioned Concurrency, and Duration (Provisioned Concurrency).

Request pricing
• Free Tier: 1 million requests per month, 400,000 GB-seconds of compute time per month
• $0.20 per 1 million requests thereafter, or $0.0000002 per request

Duration pricing
• 400,000 GB-seconds per month free, up to 3.2 million seconds of compute time
• $0.00001667 for every GB-second used thereafter

Additional charges
You may incur additional charges if your Lambda function uses other AWS services or transfers data. For example, if your Lambda function reads and writes data to or from Amazon S3, you will be billed for the read/write requests and the data stored in Amazon S3. Data transferred into and out of your AWS Lambda functions from outside the Region the function executed in will be charged at the EC2 data transfer rates listed on Amazon EC2 On-Demand Pricing under Data Transfer.

Amazon Elastic Block Store (Amazon EBS)
Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon EC2 instances. Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. They are analogous to virtual disks in the cloud. Amazon EBS provides two volume types:
• SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.
• HDD-backed volumes are optimized for large streaming workloads where throughput (measured in MB/s) is a better performance measure than IOPS.

How Amazon EBS is priced
Amazon EBS pricing includes the following factors:
• Volumes: Volume storage for all EBS volume types is charged by the amount of GB you provision per month, until you release the storage.
• Snapshots: Snapshot storage is based on the amount of space your data consumes in Amazon S3. Because Amazon EBS does not save empty blocks, it is likely that the snapshot size will be considerably less than your volume size. Copying EBS snapshots is charged based on the volume of data transferred across Regions. For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. For each incremental snapshot, only the changed part of your Amazon EBS volume is saved. After the snapshot is copied, standard EBS snapshot charges apply for storage in the destination Region.
• EBS Fast Snapshot Restore (FSR): This is
charged in Date Services Unit Hours (DSUs) for each Availability Zone in which it is enabled DSUs are billed per minute with a 1 hour minimum The price of 1 FSR DSU hour is $075 per Availabil ity Zone (pricing based on us east1 (NVirginia)) • EBS direct APIs for Snapshots : EBS direct APIs for Snapshots provide access to directly read EBS snapshot data and identify differences between two snapshots The following charges apply for these APIs o ListChangedBlocks and ListSnapshotBlocks APIs are charged per request o GetSnapshotBlock API is charged per SnapshotAPIUnit (block size 512 KiB) • Data transfer: Consider the amount of data transferred out of your application Inbound data transfer is free a nd outbound data transfer charges are tiered If you use external or cross region data transfers additional EC2 data transfer charges will apply For more information see the Amazon EBS pricing page Amazon Simple Storage Service (Amazon S3) Amazon Simple Storage Service (Amazon S3) is object storage built to store and retrieve any amount of data from anywhere: websites mobile apps corporate applications and data from IoT sensors or devices It is designed to deliver 99999999999 percent durability and stores data for millions of applications used by market leaders in every industry As with other AWS services Amazon S3 provides the simplicity and cost effectiveness of pay asyougo pricing Estimating Amazon S3 storage costs With Amazon S3 you pay only for the storage you use with no minimum fee Prices are based on the location of your Amazon S3 bucket When you begin to estimate the cost of Amazon S3 consider the following: ArchivedAmazon Web Services How AWS Pricing Works Page 13 • Storage class: Amazon S3 offers a range of storage classes designed for different use cases These inc lude S3 Standard for general purpose storage of frequently accessed data; S3 Intelligent Tiering for data with unknown or changing access patterns; S3 Standard Infrequent Access (S3 Standard IA) and S3 One Zone Infrequent Access (S3 One Zone IA) for long lived but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long term archive and digital preservation Amazon S3 also offers capabilities to manage your data throughout its lifecycle Once an S3 Lifecycle policy is set your data will automatically transfer to a different storage class without any changes to your application • Storage: Costs vary with number and size of objects stored in your Amazon S3 buckets as well as type of storage • Requests and Data retrievals: Requests costs made against S3 buckets and objects are based on request type and quantity of requests • Data transfer: The amount of data transferred out of the Amazon S3 region Transfers between S3 buckets or from Amazon S3 to any service(s) within the same AWS Region are free • Management and replication: You pay for the storage management features (Amazon S3 inventory analytics and object tagging) that are enabled on your account’s buckets For more information see Amazon S3 pricing You can estimate your monthly bill using the AWS Pricing Calculator Amazon S3 Glacier Amazon S3 Glacier is a secure durable and extremely low cost cloud storage service for data archiving and long term backup It is designed to deliver 99999999999 percent durability with comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements Amazon S3 Glacier provides query in place functionality allowing you to 
run powerful analytics directly on your archived data at rest Amazon S3 Glacier provides low cost long term storage Starti ng at $0004 per GB per month Amazon S3 Glacier allows you to archive large amounts of data at a very low cost You pay only for what you need with no minimum ArchivedAmazon Web Services How AWS P ricing Works Page 14 commitments or upfront fees Other factors determining pricing include requests and data transf ers out of Amazon S3 Glacier (incoming transfers are free) Data access options To keep costs low yet suitable for varying retrieval needs Amazon S3 Glacier provides three options for access to archives that span a few minutes to several hours For details see the Amazon S3 Glacier FAQs Storage and bandwidth include all file overhead Rate tiers take into account your aggregate usage for Data Transfer Out to the internet across Amazon EC2 Amazon S3 Amazon Glacier Amazon RDS Amazon SimpleDB Amazon SQS Amazon SNS Amazon DynamoDB and AWS Storage Gateway Amazon S3 Glacier Select pricing Amazon S3 Glacier Select allows queries to run directly on data stored in Amazon S3 Glacier without having to retrieve the entire archive Pricing for this feature is based on the total amount of data scanned the amount of data returned by Amazon S3 Glacier Select and the number of Amazon S3 Glacier Select requests initiated For more information see the Amazon S3 Glacier pricing page Data transfer Data transfer in to Amazon S3 is free Data transfer out of Amazon S3 is priced by Region For more information on AWS Snowball pr icing see the AWS Snowball pricing page AWS Outposts AWS Outposts is a fully managed service that extends AWS infrastructure AWS services APIs and tools to any datacenter co location space or on premises facility AWS Outposts is ideal for workloads that require low latency access to on premises systems local data processing or local data storage Outposts are connected to the nearest AWS Region to provide the same management and control p lane services on premises for a truly consistent operational experience across your on premises and cloud environments Your Outposts infrastructure and AWS services are managed monitored and updated by AWS just like in the cloud ArchivedAmazon Web Services How AWS Pricing Works Page 15 Figure 1: Example AWS Outposts architecture Pricing of Outposts configurations Priced for Amazon EC2 and Amazon EBS capacity in the SKU Three year term with partial upfront all upfront and no upfront options available Price includes d elivery installation servicing and removal at the end of term AWS Services running locally on AWS Outposts will be charged on usage only Amazon EC2 capacity and Amazon EBS storage upgrades available Operating system charges are billed based on usage as an uplift to cover the license fee and no minimum fee required Same AWS Region data ingress and egress charges apply No additional data transfer charges for local network Figure 2: AWS Outposts ingress/egress charges For more information see the AWS Outposts pricing page ArchivedAmazon Web Services How AWS Pricing Works Page 16 AWS Snow Family The AWS Snow Family helps customers that need to run operations in austere non data center environments and in locations where there's lack of consistent network connectivity The Snow Family comprised of AWS Snowcone AWS Snowball and AWS Snow mobile offers a number of physical devices and capacity points most with builtin computing capabilities These services help physically transport up to exabytes of data into and out of AWS Snow 
Family devices are owned and managed by AWS and integrate with AWS security monitoring storage management and computing capabilities AWS Snowcone AWS Snowcone is the smallest member of the AWS Snow Family of edge computing and data transfer devices Snowcone is portable rugged and secure You can use Snowcone to collect process and move data to AWS either offline by shipping the device or online with AWS DataSync With AWS Snowcone you pay only for the use of the device and for data transfer out of AWS Data transferred offline into AWS with Snowcone does not incur any transfer fees For online data transfer pricing with AWS DataSync please refer to the DataSync pricing page Standard pricing applies once data is stored in the AWS Cloud For AWS Snowcone you pay a service fee per job which includes five days usage on site and for any extra days y ou have the device on site For high volume deployments contact your AWS sales team For p ricing details see AWS Snowcone Pricing AWS Snowball AWS Snowball is a data migration and edge computing d evice that comes in two device options: Compute Optimized and Storage Optimized Snowball Edge Storage Optimized devices provide 40 vCPUs of compute capacity coupled with 80 terabytes of usable block or Amazon S3 compatible object storage It is wellsuite d for local storage and large scale data transfer Snowball Edge Compute Optimized devices provide 52 vCPUs 42 terabytes of usable block or object storage and an optional GPU for use cases such as advanced machine learning and full motion video analysis in disconnected environments Customers can use these two options for data collection machine learning and processing and storage in environments with ArchivedAmazon Web Services How AWS Pricing Works Page 17 intermittent connectivity (such as manufacturing industrial and transportation) or in extremely remot e locations (such as military or maritime operations) before shipping it back to AWS These devices may also be rack mounted and clustered together to build larger temporary installations AWS Snowball has three pricing elements to consider: usage device type and term of use First understand your planned use case Is it data transfer only or will you be running compute on the device? 
You can use either device for data transfer or computing but it is more cost effective to use a Snowball Edge Storage Optimized for data transfer jobs Second choose your device either Snowball Edge Storage Optimized or Snowball Edge Compute Optimized You can also select the option to run GPU instances on Snowball Edge Compute Optimized for edge applications For on demand use you pay a service fee per data transfer job which includes 10 days of on site Snowball Edge device usage Shipping days including the day the device is received and the day it is shipped back to AWS are not counted toward the 10 days After th e 10 days you pay a low per day fee for each additional day you keep the device For 1 year or 3 year commitments please contact your sales team; you cannot make this selection in the AWS Console Data transferred into AWS does not incur any data transfe r fees and standard pricing applies for data stored in the AWS Cloud For pricing details see AWS Snowball Pricing AWS Snow mobile AWS Snowmobile moves up to 100 PB of data in a 45 foot long rugge dized shipping container and is ideal for multi petabyte or Exabyte scale digital media migrations and data center shutdowns A Snowmobile arrives at the customer site and appears as a network attached data store for more secure high speed data transfer After data is transferred to Snowmobile it is driven back to an AWS Region where the data is loaded into Amazon S3 Snowmobile pricing is based on the amount of data stored on the truck per month ArchivedAmazon Web Services How AWS Pricing Works Page 18 Snowmobile can be made available for use with AWS services in select AWS regions Please follow up with AWS Sales to discuss data transport needs for your specific region and schedule an evaluation For pricing details see AWS Snowmobile Pricing Amazon RDS Amazon RDS is a web service that makes it easy to set up operate and scale a relational database in the cloud It provides cost efficient and resizable capacity while managing time consuming database administration tasks so you can focus on your applications and busine ss Estimating Amazon RDS costs The factors that drive the costs of Amazon RDS include: • Clock hours of server time: Resources incur charges when they are running — for example from the time you launch a DB instance until you terminate it • Database characteristics: The physical capacity of the database you choose will affect how much you are charged Database characteristics vary depending on the database engine size and memory class • Database purchase type : When you use On Demand DB Insta nces you pay for compute capacity for each hour your DB Instance runs with no required minimum commitments With Reserved DB Instances you can make a low one time upfront payment for each DB Instance you wish to reserve for a 1 or 3year term • Number of database instances: With Amazon RDS you can provision multiple DB instances to handle peak loads • Provisioned storage : There is no additional charge for backup storage of up to 100 percent of your provisioned database storage for an active DB Instance After the DB Instance is terminated backup storage is billed per GB per month • Additional storage: The amount of backup storage in addition to the provisioned storage amount is billed per GB per month ArchivedAmazon Web Services How AWS Pricing Works Page 19 • Long Term Retention : Long Term Retention is priced per vCPU per month for each database instance in which it is enabled The price depends on the RDS instance type used by your database and 
may vary by region If Long Term Retention is turned off performance data older than 7 days is deleted • API Request s: The API free tier includes all calls from the Performance Insights dashboard as well as 1 million calls outside of the Performance Insights dashboard API requests outside of the Performance Insights free tier are charged at $001 per 1000 requests • Deployment type: You can deploy your DB Instance to a single Availability Zone (analogous to a standalone data center) or multiple Availability Zones (analogous to a secondary data center for enhanced availability and durability) Storage and I/O charges var y depending on the number of Availability Zones you deploy to • Data transfer: Inbound data transfer is free and outbound data transfer costs are tiered Depending on your application’s needs it’s possible to optimize your costs for Amazon RDS database i nstances by purchasing reserved Amazon RDS database instances To purchase Reserved Instances you make a low one time payment for each instance you want to reserve and in turn receive a significant discount on the hourly usage charge for that instance For more information see Amazon RDS pricing Amazon DynamoDB Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent single digit millisecond latency at any scale It is a fully managed cloud database and supports both document and key value store models Its flexible data model reliable performance and au tomatic scaling of throughput capacity make it a great fit for mobile web gam es ad tech IoT and many other applications Amazon DynamoDB pricing at a glance DynamoDB charges for reading writing and storing data in your DynamoDB tables along with an y optional features you choose to enable DynamoDB has two capacity modes and those come with specific billing options for processing reads and writes on your tables: on demand capacity mode and provisioned capacity mode ArchivedAmazon Web Services How AWS Prici ng Works Page 20 DynamoDB read requests can be eith er strongly consistent eventually consistent or transactional OnDemand Capacity Mode With on demand capacity mode you pay per request for the data reads and writes your application performs on your tables You do not need to specify how much read and write throughput you expect your application to perform as DynamoDB instantly accommodates your workloads as they ramp up or down DynamoDB charges for the core and optional features of DynamoDB Table 1: Amazon DynamoDB OnDemand Pricing Core Feature Billing unit Details Read request unit (RRU) API calls to read data from your table are billed in RRU A strongly consistent read request of up to 4 KB requires one RRU For items larger than 4 KB additional RRUs are required For items up to 4 KB An eventually consistent read request requires one half RRU A transactional read request requires two RRUs Write request unit (WRU) Each API call to write data to your table is a WRU A standard WRU can write an item up to 1KB Items larger than 1 KB require additional WRUs Transactional write requires two WRUs Example RRU : • A strongly consistent read request of an 8 KB item requires two read request units • An eventually consistent read of an 8 KB item requires one read re quest unit • A transactional read of an 8 KB item requires four read request units Example WRU : • A write request of a 1 KB item requires one WRU • A write request of a 3 KB item requires three WRUs • A transactional write request of a 3 KB item requires six WRUs ArchivedAmazon Web 
Services How AWS Pricing Works Page 21 For details on how DynamoDB charges for the core and optional features of DynamoDB see Pricing for On Demand Capacity Provisioned Capacity Mode With provisioned capacit y mode you specify the number of data reads and writes per second that you require for your application You can use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate to ensure application performance while reducing costs Table 2: Amazon DynamoDB Provisioned Capacity Mode Core Feature Billing unit Details Read Capacity unit (RCU) API calls to read data from your table is an RCU Items up to 4 KB in size one RCU can perform one strongly consistent read request per second For Items larger than 4 KB require additional RCUs For items up to 4 KB One RCU can perform two eventually consistent read requests per second Transactional read requests require two RCUs to perform one read per second Write Capacity Unit (WCU) Each API call to write data to your table is a write request For items up to 1 KB in size one WCU can perform one standard write request per second Items larger than 1 KB require additional WCUs Transactional write requests require two WCUs to perform one write per second for items up to 1 KB Data Storage DynamoDB monitors the size of tables continuously to determine storage charges DynamoDB measures the size of your billable data by adding the raw byte size of the data you upload plus a per item storage overhead of 100 bytes to account for indexing First 25 GB stored per month is free Example WCU • A standard write request of a 1 KB item would require one WCU • A standard write request of a 3 KB item would require three WCUs ArchivedAmazon Web Services How AWS Pricing Works Page 22 • A transactional write request of a 3 KB item would require six WCUs Example RCU: • A strongly consistent read of an 8 KB item would require two RCUs • An eventually consistent read of an 8 KB item would require one RCU • A transactional read of an 8 KB item would require four RCUs For details see Amazon DynamoDB pricing Data transfer There is no addit ional charge for data transferred between Amazon DynamoDB and other AWS services within the same Region Data transferred across Regions (eg between Amazon DynamoDB in the US East (Northern Virginia) Region and Amazon EC2 in the EU (Ireland) Region) wil l be charged on both sides of the transfer Global tables Global tables builds on DynamoDB’s global footprint to provide you with a fully managed multi region and multi master database tha t provides fast local read and write performance for massively scaled global applications Global tables replicates your Amazon DynamoDB tables automatically across your choice of AWS Regions DynamoDB charges for global tables usage based on the resource s used on each replica table Write requests for global tables are measured in replicated WCUs instead of standard WCUs The number of replicated WCUs consumed for replication depends on the version of global tables you are using Read requests and data st orage are billed consistently with standard tables (tables that are not global tables) If you add a table replica to create or extend a global table in new Regions DynamoDB charges for a table restore in the added regions per gigabyte of data restored C rossRegion replication and adding replicas to tables that contain data also incur charges for data transfer out For more information see Best Practices and Requirements for Managing Global Tables Learn more about pricing for 
additional DynamoDB features at the Amazon DynamoDB pricing page ArchivedAmazon Web Services How AWS Pricing Works Page 23 Amazon CloudFront Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data videos applications and APIs to your viewers with low latency and high transfer speeds Amazon CloudFront pricing Amazon CloudFront charges are based on the data transfers and requests used to deliver content to your customers There are no upfront payme nts or fixed platform fees no long term commitments no premiums for dynamic content and no requirements for professional services to get started There is no charge for data transferred from AWS services such as Amazon S3 or Elastic Load Balancing And best of all you can get started with CloudFront for free When you begin to estimate the cost of Amazon CloudFront consider the following: • Data Transfer OUT (Internet/Origin) : The amount of data transferred out of your Amazon CloudFront edge locations • HTTP/HTTPS Requests: The number and type of requests (HTTP or HTTPS) made and the geographic region in which the requests are made • Invalidation Requests : No additional charge for the first 1000 paths requested for invalidation each month Thereafter $0 005 per path requested for invalidation • Field Level Encryption Requests : Field level encryption is charged based on the number of requests that need the additional encryption; you pay $002 for every 10000 requests that CloudFront encrypts using field level encryption in addition to the standard HTTPS request fee • Dedicated IP Custom SSL: $600 per month for each custom SSL certificate associated with one or more CloudFront distributions using the Dedicated IP version of custom SSL certificate support Th is monthly fee is pro rated by the hour For more information see Amazon CloudFront pricing Amazon Kendra Amazon Kendra is a highly accurate and ea sy to use enterprise search service that’s powered by machine learning Amazon Kendra enables developers to add search ArchivedAmazon Web Services How AWS Pricing Works Page 24 capabilities to their applications so their end users can discover information stored within the vast amount of content spread across the ir company When you type a question the service uses machine learning algorithms to understand the context and return the most relevant results whether that be a precise answer or an entire document For example you can ask a question like "How much is the cash reward on the corporate credit card?” and Amazon Kendra will map to the relevant documents and return a specific answer like “2%” Amazon Kendra pricing With the Amazon Kendra service you pay only for what you use There is no minimum fee or usage requirement Once you provision Amazon Kendra by creating an index you are charged for Amazon Kendra hours from the time an index is created until it is deleted Partial index instance hours are billed in one second increments This applies to Kendr a Enterprise Edition and Kendra Developer Edition Amazon Kendra comes in two editions Kendra Enterprise Edition provides a high availability service for production workloads Kendra Developer Edition provides developers with a lower cost option to build a proof ofconcept; this edition is not recommended for production workloads You can get started for free with the Amazon Kendra Developer Edition that provides free usage of up to 750 hours for the first 30 days Connector usage does not qualify for free usage regular run time and scanning pricing will apply If you 
exceed the free tier usage limits you will be charged the Amazon Kendra Developer Edition rates for the additional resources you use See Amazon Kendra Pricing for pricing details Amazon Macie Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS Amazon Macie uses machine learning and pattern matching to cost efficiently discover sensitive data at scale Macie automatically detects a large and growing list of sensitive data types including personally identifiable information (PII) such as names addresses and cred it card numbers It also gives you constant visibility of the data security and data privacy of your data stored in Amazon S3 Macie is easy to set up with one click in the AWS Management Console or a single API call Macie provides multi account support u sing AWS Organizations so you can enable Macie across all of your accounts with a few clicks ArchivedAmazon Web Services How AWS Pricing Works Page 25 Amazon Macie pricing With Amazon Macie you are charged based on the number of Amazon S3 buckets evaluated for bucket level security and access controls and the quantity of data processed for sensitive data discovery When you enable Macie the service will gather detail on all of your S3 buckets including bucket names size object count resource tags encryption status access controls and region placement M acie will then automatically and continually evaluate all of your buckets for security and access control alerting you to any unencrypted buckets publicly accessible buckets or buckets shared with an AWS account outside of your organization You are cha rged based on the total number of buckets in your account after the 30 day free trial and charges are pro rated per day After enabling the service you are able to configure and submit buckets for sensitive data discovery This is done by selecting the bu ckets you would like scanned configuring a one time or periodic sensitive data discovery job and submitting it to Macie Macie only charges for the bytes processed in supported object types it inspects As part of Macie sensitive data discovery jobs you will also incur the standard Amazon S3 charges for GET and LIST requests See Requests and data retrievals pricing on the Amazon S3 pricing page Free tier | Sensitive data discovery For sensitive data dis covery jobs the first 1 GB processed every month in each account comes at no cost For each GB processed beyond the first 1 GB charges will occu r Please refer this link for pricing details *You ar e only charged for jobs you configure and submit to the service for sensitive data discovery Amazon Kinesis Amazon Kinesis makes it easy to collect process and analyze real time streaming data so you can get timely insights and react quickly to new info rmation Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale along with the flexibility to choose the tools that best suit the requirements of your application With Amazon Kinesis you can ingest real time data such as video audio application logs website clickstreams and IoT telemetry data for machine learning analytics and other applications Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wa it until all your data is collected before the processing can begin ArchivedAmazon Web Services How AWS Pricing W orks Page 26 Amazon Kinesis Data Streams is a scalable and durable real time 
data streaming service that can continuously capture gigabytes of data per second from hundreds of thousands of sources See Amazon Kinesis Data Streams Pricing for pricing details Amazon Kinesis Data Firehose is the easiest way to capture transform and load data streams into AWS data stores for near rea ltime analytics with existing business intelligence tools See Amazon Kinesis Data Firehose Pricing for pricing details Amazon Kinesis Data Analytics is the easiest way to process data streams in real time with SQL or Apache Flink without having to learn new programming languages or processing frameworks See Amazon Kinesis Data Ana lytics Pricing for pricing details Amazon Kinesis Video Streams Amazon Kinesis Video Streams makes it easy to securely stream media from connected devices to AWS for storage analytics machine learning (ML) playback and other processing Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming media from millions of dev ices It durably stores encrypts and indexes media in your streams and allows you to access your media through easy touse APIs Kinesis Video Streams enables you to quickly build computer vision and ML applications through integration with Amazon Rekog nition Video Amazon SageMaker and libraries for ML frameworks such as Apache MxNet TensorFlow and OpenCV For live and on demand playback Kinesis Video Streams provides fully managed capabilities for HTTP Live Streaming (HLS) and Dynamic Adaptive Stre aming over HTTP (DASH) Kinesis Video Streams also supports ultra low latency two way media streaming with WebRTC as a fully managed capability Kinesis Video Streams is ideal for building media streaming applications for camera enabled IoT devices and fo r building real time computer vision enabled ML applications that are becoming prevalent in a wide range of use cases Amazon Kinesis Video Streams pricing You pay only for the volume of data you ingest store and consume in your video streams WebRTC pricing If you use WebRTC capabilities you pay for the number of signaling channels that are active in a given month number of signaling messages sent and received and TURN streaming minutes used for relaying media A signaling channel is conside red active in ArchivedAmazon Web Services How AWS Pricing Works Page 27 a month if at any time during the month a device or an application connects to it TURN streaming minutes are metered in 1minute increments Note: You will incur standard AWS data transfer charges when you retrieve data from your video strea ms to destinations outside of AWS over the internet See Amazon Kinesis Video Streams Pricing for pricing details AWS I oT Events AWS IoT Events helps companies continuously monitor their equipment and fleets of devices for failure or changes in operation and trigger alerts to respond when events occur AWS IoT Events recognizes events across multiple sensors to identify operational issues such as equipment slowdowns and generates alerts such as notifying support teams of an issue AWS IoT Events offers a managed complex event detection service on the AWS Cloud accessible through the AWS IoT Events console a browser based GUI where you can define and manage your event detectors or direct ingest application program interfaces (APIs) code that allows two applications to communicate with each other Understanding equipment or a process based on telemetry from a single sensor is often not possible; a complex event detection service will combine multiple sources of 
telemetry to gain full insight into equipment and processes You define conditional logic and states inside AWS IoT Events to evaluate incoming telemetry data to detect events in equipment or a process When AWS IoT Events detects an event it can trigger pre defined actions in another AWS service such as sending alerts through Amazon Simple Notification Service ( Amazon SNS) AWS I oT Events pricing With AWS IoT Events you pay only for what you use with no minimum fees or mandatory service usage When you create an event detector in AWS IoT Events you apply conditional logic such as if thenelse statements to understand events such as when a motor might be stuck You are only charged for each message that is evaluated in AWS IoT Events See AWS IoT Events Pricing for pricing details The AWS Free Tier is available to you for 12 months starting on the date you create your AWS account When your free usage expires or if your application use exceeds the free usage tiers you simply pay the above rates Your usage is calculated each month across all regions and is automat ically applied to your bill Note that free usage does not accumulate from one billing period to the next ArchivedAmazon Web Services How AWS Pricing Works Page 28 AWS C ost Optimization AWS enable s you to take control of cost and continuousl y optimize you r spend while building modern scalable application s to meet you r needs AWS' s breadth of services and pricing option s offer the flexibilit y to effectivel y manage you r cost s and still keep the performance and capacit y you require AW S is dedicated to helping custome rs achieve highest saving potential During thi s period of crisis we will wo rk with you to develop a plan that meet s your financial needs Get started with the step s below that will have an immediate impact on you r bill today Choose the right pricing models Use Reserved Instances (RI) to reduce Amazon RDS Amazon Redshift Amazon ElastiCache and Amazon Elasticsearch costs For certain service s like Amazon EC2 and Amazon RDS you can invest in reserved capacity With Reserved Instances you can save up to 72% ove r equivalent ondemand capacity Reserved Instance s are available in 3 option s – All upfront (AURI) partial up f ront (PURI ) or no upfront payment s (NURI) Use the recommendation s provided in AWS Cost Explore r RI purchase recommendations which i s based on you r Amazon RDS Amazon Redshift Amazon ElastiCache and Elasticsearch usage Amazon EC2 Cos t Savings Use Amazon EC2 Spot Instances to reduce EC2 costs o r use Compute Savings Plans to reduce EC2 Fargate and Lambda cost Match Capacity with Demand Identify Amazon EC2 instances with lowutilization and reduce cos t by stopping or rightsizing Use AW S Cost Explore r Resource Optimization to get a report of EC2 instance s that are eithe r idle o r have low utilization You can reduce cost s by eithe r stopping or downsizing these instances Use AWS Instance Scheduler to automaticall y stop instances Use AWS Operation s Conductor to automaticall y resize the EC2 instances (based on the recommendation s report fro m Cost Explorer) ArchivedAmazon Web Services How AWS Pricing Works Page 29 Identify Amazon RDS Amazon Redshift instances with low utiliz ation and reduce cost by stopping (RDS) and pausing (Redshift) Use the Trusted Advisor Amazon RDS Idle DB instances check to identify DB instances which have not had any connection over the last 7 days To reduce costs stop these DB instances using the automation steps described in this blog post For Redshift use the 
Trusted Advisor Underutilized Redshift clusters check to identify clusters which hav e had no connections for the last 7 days and less than 5% cluster wide average CPU utilization for 99% of the last 7 days To reduce costs pause these clusters using the steps in this blog Analyze Amazon DynamoDB usage and reduce cost by leveraging Autoscaling or Ondemand Analyze your DynamoDB usage by monitoring 2 metrics ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits in CloudWatch To automatically scale (in and out) your DynamoDB table use the AutoScaling feature Using the steps here you can enable AutoScaling on your existing tables Alternately you can also use the on demand option This option allows you to pay perrequest for read and write requests so that you only pay for what you use making it easy to balance costs and performance Implement processes to identify resource waste Identify Amazon EBS volumes with low utilization and reduce cost by snapshotting then deleting them EBS volumes that have very low activity (less than 1 IOPS per day) over a period of 7 days indicate that they are probably not in use Identify these volumes using the Trusted Advisor Underutilized Amazon EBS Volumes Check To reduce costs first snapshot the volume (in case you need it later) then delete these volumes You can automate the creation of snapshots using t he Amazon Data Lifecycle Manager Follow the steps here to delete EBS volumes Analyze Amazon S3 usage a nd reduce cost by leveraging lower cost storage tiers Use S3 Analytics to analyze storage access patterns on the object data set for 30 days or longer It makes re commendations on where you can leverage S3 Infrequently Accessed (S3 IA) to reduce costs You can automate moving these objects into lower cost storage tier using Life Cycle Policies Alternately you can also use S3 Intelligent ArchivedAmazon Web Services How AWS Pricing Works Page 30 Tiering which automatically analyzes and moves your objects to the appropriate storage tier Review networking and reduce costs by deleti ng idle load balancers Use the Trusted Advisor Idle Load Balancers check to get a report of load balancers that have RequestCount of less than 100 over the past 7 days Then use the steps here to delete these load balancers to reduce costs Additionally use t he steps provided in this blog review your data transfer costs using Cost Explorer AWS Support Plan Pricing AWS Support provides a mix o f tools and technology people and programs designed to proactively help you optimize performance lower costs innovate faster and focused on solving some of the toughest challenges that hold you back in your cloud journey There are three types of suppo rt plans available : Developer Business and Enterprise For more details see Compare AWS Support Plans and AWS Support Plan Pricin g Cost calculation examples The following sections use the AWS Pricing Calculator to provide example cost calculations for two use cases AWS Cloud cost calculation example This example is a common use case o f a dynamic website hosted on AWS using Amazon EC2 AWS Auto Scaling and Amazon RDS The Amazon EC2 instance runs the web and application tiers and AWS Auto Scaling match es the number of instances to the traffic load Amazon RDS uses one DB instance for its primary storage and t his DB instance is deployed across multiple Availability Zones Architecture Elastic Load Balanc ing balances traffic to the Amazon EC2 Instances in an AWS Auto Scaling group which adds or subtracts Amazon EC2 Instances to match load Deploying 
Implement processes to identify resource waste

Identify Amazon EBS volumes with low utilization and reduce cost by snapshotting and then deleting them. EBS volumes with very low activity (less than 1 IOPS per day) over a period of 7 days are probably not in use. Identify these volumes using the Trusted Advisor Underutilized Amazon EBS Volumes check. To reduce costs, first snapshot the volume (in case you need it later), then delete it. You can automate the creation of snapshots using Amazon Data Lifecycle Manager. Follow the steps here to delete EBS volumes.

Analyze Amazon S3 usage and reduce cost by leveraging lower-cost storage tiers. Use S3 Analytics to analyze storage access patterns on an object data set for 30 days or longer. It makes recommendations on where you can leverage S3 Standard-Infrequent Access (S3 Standard-IA) to reduce costs. You can automate moving these objects into a lower-cost storage tier using lifecycle policies. Alternatively, you can use S3 Intelligent-Tiering, which automatically analyzes and moves your objects to the appropriate storage tier.

Review networking and reduce costs by deleting idle load balancers. Use the Trusted Advisor Idle Load Balancers check to get a report of load balancers that have a RequestCount of less than 100 over the past 7 days, then use the steps here to delete these load balancers and reduce costs. Additionally, use the steps provided in this blog to review your data transfer costs using Cost Explorer.

AWS Support Plan Pricing

AWS Support provides a mix of tools, technology, people, and programs designed to proactively help you optimize performance, lower costs, innovate faster, and solve some of the toughest challenges that hold you back in your cloud journey. There are three types of support plans available: Developer, Business, and Enterprise. For more details, see Compare AWS Support Plans and AWS Support Plan Pricing.

Cost calculation examples

The following sections use the AWS Pricing Calculator to provide example cost calculations for two use cases.

AWS Cloud cost calculation example

This example is a common use case of a dynamic website hosted on AWS using Amazon EC2, AWS Auto Scaling, and Amazon RDS. The Amazon EC2 instances run the web and application tiers, and AWS Auto Scaling matches the number of instances to the traffic load. Amazon RDS uses one DB instance for its primary storage, and this DB instance is deployed across multiple Availability Zones.

Architecture

Elastic Load Balancing balances traffic to the Amazon EC2 instances in an AWS Auto Scaling group, which adds or subtracts Amazon EC2 instances to match the load. Deploying Amazon RDS across multiple Availability Zones enhances data durability and availability: Amazon RDS provisions and maintains a standby in a different Availability Zone for automatic failover in the event of planned or unplanned outages.

The following illustration shows the example architecture for a dynamic website using Amazon EC2, AWS Auto Scaling, security groups to enforce least-privilege access to the AWS infrastructure and selected architecture components, and one Amazon RDS database instance across multiple Availability Zones (Multi-AZ deployment). All of these components are deployed into a single Region and VPC. The VPC is spread across two Availability Zones to support failover scenarios, with Route 53 Resolver to manage and route requests for one hosted zone towards the Elastic Load Balancer.

Figure 3: AWS Cloud deployment architecture

Daily usage profile

You can monitor daily usage for your application so that you can better estimate your costs. For instance, you can look at the daily pattern to figure out how your application handles traffic. For each hour, track how many hits you get on your website and how many instances are running, and then add up the total number of hits for that day.

Hourly instance pattern = (hits per hour on website) / (number of instances)

Examine the number of Amazon EC2 instances that run each hour, and then take the average. You can use the number of hits per day and the average number of instances for your calculations.

Daily profile = SUM(Hourly instance pattern) / 24
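To make the two formulas concrete, the short Python sketch below walks a hypothetical 24-hour sample through the hourly-pattern and daily-profile calculations described above. The hit counts and instance counts are invented purely for illustration and are not taken from the example workload.

```python
# Hypothetical 24-hour sample: (hits on the website, instances running) per hour.
hourly_samples = [
    (1200, 2), (900, 2), (800, 2), (700, 2), (750, 2), (900, 2),
    (1500, 2), (2400, 3), (3600, 4), (4200, 4), (4500, 4), (4800, 4),
    (5000, 4), (4700, 4), (4300, 4), (4000, 4), (3800, 4), (3500, 4),
    (3000, 3), (2600, 3), (2200, 3), (1800, 2), (1500, 2), (1300, 2),
]

# Hourly instance pattern = (hits per hour on website) / (number of instances)
hourly_pattern = [hits / instances for hits, instances in hourly_samples]

# Daily profile = SUM(hourly instance pattern) / 24
daily_profile = sum(hourly_pattern) / 24

total_daily_hits = sum(hits for hits, _ in hourly_samples)
average_instances = sum(instances for _, instances in hourly_samples) / 24

print(f"Total hits for the day:        {total_daily_hits}")
print(f"Average running instances:     {average_instances:.2f}")
print(f"Daily profile (hits/instance): {daily_profile:.1f}")
```

The total daily hits and the average instance count produced here are the two inputs the paper suggests carrying into your cost estimate.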
Amazon EC2 cost breakdown

The following table shows the characteristics of the Amazon EC2 deployment used for this dynamic site in the US East Region.

• Utilization: 100%. All infrastructure components run 24 hours per day, 7 days per week.
• Instance: t3a.xlarge, with 16 GB memory and 4 vCPUs.
• Storage: Amazon EBS SSD (gp2). 1 EBS volume per instance, with 30 GB of storage per volume.
• Data backup: Daily EBS snapshots. 1 EBS volume per instance, with 30 GB of storage per volume.
• Data transfer: Data in 1 TB/month, data out 1 TB/month, with 10% incremental change per day.
• Instance scale: 4. On average, 4 instances are running per day.
• Load balancing: 20 GB/hour. Elastic Load Balancing is used 24 hours per day, 7 days per week, and processes a total of 20 GB/hour (data in plus data out).
• Database: MySQL on a db.m5.large instance with 8 GB memory, 2 vCPUs, and 100 GB of storage, in a Multi-AZ deployment with a synchronous standby replica in a separate Availability Zone.

The total cost for one month is the sum of the cost of the running services and data transfer out, minus the AWS Free Tier discount. We calculated the total cost using the AWS Pricing Calculator.

Table 3: Cost breakdown

• Elastic Load Balancing: $87.60 monthly, $1,051.20 annually. Configuration: number of Network Load Balancers (1); processed bytes per NLB for TCP (20 GB per hour).
• Amazon EC2: $439.16 monthly, $5,269.92 annually. Configuration: operating system (Linux); quantity (4); storage for each EC2 instance (General Purpose SSD (gp2)); storage amount (30 GB); instance type (t3a.xlarge).
• Amazon Elastic IP address: $0 monthly, $0 annually. Configuration: number of EC2 instances (1); number of EIPs per instance (1).
• Amazon RDS for MySQL: $272.66 monthly, $3,271.92 annually. Configuration: quantity (1); db.m5.large; storage for each RDS instance (General Purpose SSD (gp2)); storage amount (100 GB).
• Amazon Route 53: $183.00 monthly, $2,196.00 annually. Configuration: hosted zones (1); number of elastic network interfaces (2); basic checks within AWS (0).
• Amazon Virtual Private Cloud (Amazon VPC): $92.07 monthly, $1,104.84 annually. Configuration: data transfer cost, inbound (from the internet) 1 TB per month; outbound (to the internet) 1 TB per month; intra-Region 0 TB per month.
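The per-service figures above come from the AWS Pricing Calculator. If you prefer to pull list prices programmatically, the AWS Price List (pricing) API exposes the same underlying catalog. The following Python (boto3) sketch looks up the On-Demand Linux rate for the t3a.xlarge instance used in this example; the exact filter attribute values (location name, tenancy, capacity status) are assumptions and may need adjusting for your account and Region.

```python
import json
import boto3

# The Price List API is served from a small set of Regions; us-east-1 works.
pricing = boto3.client("pricing", region_name="us-east-1")

response = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "t3a.xlarge"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
    MaxResults=10,
)

for item in response["PriceList"]:
    product = json.loads(item)  # each entry is a JSON document returned as a string
    # Walk the nested On-Demand terms to reach the hourly USD rate.
    for term in product.get("terms", {}).get("OnDemand", {}).values():
        for dimension in term.get("priceDimensions", {}).values():
            print(dimension["description"], dimension["pricePerUnit"]["USD"])
```

Multiplying the hourly rate by 730 hours and by the four instances in the example gives a rough cross-check of the EC2 line in Table 3, before EBS and data transfer are added.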
Hybrid cloud cost calculation example

This example is a hybrid cloud use case of AWS Outposts deployed on premises and connected to the AWS Cloud using AWS Direct Connect. AWS Outposts extends the existing VPC from the selected AWS Region to the customer data center. Selected AWS services that are required to run on premises (for example, Amazon EKS) are available on AWS Outposts inside the Outpost Availability Zone, deployed inside a separate subnet.

Hybrid architecture description

The following example shows an Outpost deployment with the distributed Amazon EKS service extending to on-premises environments.

Figure 4: AWS Outposts with Amazon EKS control plane and data plane architecture

Architecture
• The control plane for Amazon EKS remains in the Region, which means that the Kubernetes primary node stays in an Availability Zone deployed in the Region (not on the Outpost).
• The Amazon EKS worker nodes are deployed on the Outpost, controlled by the primary node deployed in the Availability Zone.

Traffic flow
• The EKS control plane traffic between EKS, AWS metrics, and Amazon CloudWatch transits a third-party network (AWS Direct Connect or AWS Site-to-Site VPN to the AWS Region).
• The application and data traffic is isolated from the control plane and distributed between the Outpost and the local network.
• Distribution of AMIs deployed on the Outpost is driven by the central Amazon ECR in the Region; however, all images are cached locally on the Outpost.

Load balancers
• The Application Load Balancer is supported on the Outpost and is the only local Elastic Load Balancing option available.
• The Network Load Balancer and Classic Load Balancer stay in the Region, but targets deployed on AWS Outposts are supported (including by the Application Load Balancer).
• On-premises load balancers inside the corporate data center (for example, F5 BIG-IP or NetScaler) can be deployed and routed via the Local Gateway inside the AWS Outpost.

Hybrid cloud components selection

Customers can choose from a range of pre-validated Outposts configurations (Figure 2) offering a mix of EC2 and EBS capacity designed to meet a variety of application needs. AWS can also work with a customer to create a customized configuration designed for their unique application needs. To choose the correct configuration, make sure to verify the deployment and operational parameters of the physical location selected for the AWS Outposts rack installation. The following example represents a set of parameters highlighting the facility, networking, and power requirements needed for location validation (selected parameter: example value):

• Purchase option: All Upfront
• Term: 3 years
• Max on-premises power capacity: 20 kVA
• Max weight: 2,500 lb
• Networking uplink speed: 100 Gbps
• Number of racks: 1
• Average power draw per rack: 934
• Constraint (power draw/weight): power draw
• Total Outpost vCPU: 480
• Total Outpost memory: 2,496 GiB

In addition to these minimum parameters, you should make deployment assumptions prior to any order to minimize the performance and security impact on the existing infrastructure landscape, which deeply affects the existing cost of on-premises infrastructure (selected question: example assumption):

• What is the speed of the uplink ports from your Outposts Networking Devices (ONDs)? 40 or 100 Gbps.
• How many uplinks per Outposts Networking Device (OND) will you use to connect the AWS Outpost to your network? 4 uplinks.
• How will the Outpost service link (the Outpost control plane) access AWS services? The service link will access AWS over a Direct Connect public VIF.
• Is there a firewall between the Outpost and the internet? Yes.

These assumptions, together with the selected components, lead to an architecture with a higher granularity of detail, influencing the overall cost of a hybrid cloud architecture deployment (Figure 5).

Figure 5: Hybrid cloud architecture deployment example

Hybrid cloud architecture cost breakdown

Hybrid cloud costs include multiple layers and components deployed across the AWS Cloud and the on-premises location. When you use AWS managed services on AWS Outposts, you are charged only for the services, based on usage by instance hour; this excludes the underlying EC2 instance and EBS storage charges. A breakdown of these services is shown in the next sections for a 3-year term with partial upfront, all upfront, and no upfront options (EC2 and EBS capacity). The price includes delivery, installation, servicing, and removal at the end of the term; there is no additional charge.

Outposts rack charges (customized example)

EC2 charges
• c5.24xlarge, 11 TB: $7,148.67 monthly (no upfront); $123,650.18 upfront and $3,434.73 monthly (partial upfront); or $239,761.41 upfront (all upfront).
• 1 x m5.24xlarge, 11 TB: $7,359.69 monthly (no upfront); $127,167.06 upfront and $3,532.42 monthly (partial upfront); or $246,373.14 upfront (all upfront).

EBS charges
• The 11 TB EBS tier is priced at $0.30/GB monthly.
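To compare the three payment options, it helps to normalize them to a total cost over the 36-month term. The short Python sketch below does that arithmetic for the c5.24xlarge figures quoted above (as reconstructed here), assuming a flat 36-month term and no discounting, so treat the output as illustrative rather than a quote.

```python
# c5.24xlarge Outposts rack example, 3-year (36-month) term.
TERM_MONTHS = 36

options = {
    "No Upfront":      {"upfront": 0.00,       "monthly": 7_148.67},
    "Partial Upfront": {"upfront": 123_650.18, "monthly": 3_434.73},
    "All Upfront":     {"upfront": 239_761.41, "monthly": 0.00},
}

for name, cost in options.items():
    total = cost["upfront"] + cost["monthly"] * TERM_MONTHS
    effective_monthly = total / TERM_MONTHS
    print(f"{name:16s} total: ${total:,.2f}  "
          f"(effective ${effective_monthly:,.2f}/month)")
```

As with Reserved Instances, the larger the upfront commitment, the lower the effective monthly rate over the term.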
Conclusion

Although the number and types of services offered by AWS have increased dramatically, our philosophy on pricing has not changed: you pay as you go, pay for what you use, pay less as you use more, and pay even less when you reserve capacity. All of these options empower AWS customers to choose their preferred pricing model and increase the flexibility of their cost strategy.

Projecting costs for a use case such as web application hosting can be challenging, because a solution typically uses multiple features across multiple AWS products, which in turn means there are more factors and purchase options to consider. The best way to estimate costs is to examine the fundamental characteristics of each AWS product, estimate your usage for each characteristic, and then map that usage to the prices posted on the website. You can use the AWS Pricing Calculator to estimate your monthly bill. The calculator provides a per-service cost breakdown as well as an aggregate monthly estimate. You can also use the calculator to see an estimate and breakdown of costs for common solutions. Remember, you can get started with most AWS services at no cost using the AWS Free Tier.

Contributors

Contributors to this document include:
• Vladimir Baranek, Principal Partner Solution Architect, Amazon Web Services
• Senthil Arumugam, Senior Partner Solutions Architect, Amazon Web Services
• Mihir Desai, Senior Partner Solutions Architect, Amazon Web Services

Further Reading

For additional information, see:
• AWS Pricing
• AWS Pricing Calculator
• AWS Free Tier
• AWS Cost Management
• AWS Cost and Usage Reports
• AWS Cloud Economics Center

Document Revisions
• October 2020: Updated and added service pricing details, options, calculations, and examples.
• June 2018: First publication.
|
General
|
consultant
|
Best Practices
|
How_Cities_Can_Stop_Wasting_Money_Move_Faster_and_Innovate
|
ArchivedHow Cities Can Stop Wasting Money Move Faster and Innovate Simplify and Streamline IT with AWS Cloud Computing January 2016 This paper has been archived For the latest technical content see the AWS Whitepapers & Guides page: h ttps://awsamazoncom/whitepapersArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 3 of 16 © 2016 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice Customers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document does not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 4 of 16 Contents Abstract 4 Stop Investing in Technology Infrastructure 5 Trend Toward the Cloud 6 Move Faster 7 Pick Your Project Pick One Thing 8 Manage the Scope 10 Take Advantage of New Innovations 12 Engage Your Citizens in Crowdsourcing 12 Automate Critical Functions for Citizens 14 Start Your Journey 15 Contributors 16 Abstract Local and r egional governments around the world are using the cloud to transform services improve their operations and reach new horizons for citizen services The Amazon Web Services (AWS) cloud enables data col lection analysis and decision making for smarter cities This whitepaper provides strategic considerations for local and regional governments to consider as they identify which IT systems and applications to move to the cloud Real examples that show how cities can stop wasting money move faster and innovate ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 5 of 16 Stop Investing in Technology Infrastructure Faced with pressure to innovate within fixed or shrinking budgets while meeting aggressive timelines governments are turning to Amazon Web S ervices (AWS) to provide costeffective scalable secure and flexible infrastructure necessary to make a difference The cloud provides rapid access to flexible and low cost IT resources With cloud computing local and regional governments no longer need to make large upfront investments in hardware or spend a lot of time and money on the heavy lifting of managing hardware “I wanted to move to a model where we can deliver more to our citizens and reduce the cost of delivering those services to them I wanted a product line that has the ability to scale and grow with my department AWS was an easy fit for us and the way we do business By shifting from capex to opex we can free up money and return those funds to areas that need it more—fire trucks a bridge or a sidewalk” Chris Chiancone CIO City of McKinney Instead government agencies can provision exactly the right type and size of computing resources needed to power your newest bright idea and drive operational efficiencies with your IT budget You can access as many resources as you need almost instantly and only pay for what 
you use AWS helps agencies reduce overall IT costs in multiple ways With cloud computing you do not have to invest in infrastructure before you know what AWS Cloud Computing AWS offers a broad set of global compute storage database analytics application and deployment services that help local and regional governments move faster lower IT costs and scale applications ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 6 of 16 demand will be You convert your capital expense into variable expense that fluctuates with demand and you pay only for the resources used Trend Toward the Cloud Local and regional governments are adopting cloud computing however identifying the correct projects to migrate can be overwhelming Applications that deliver increased return on investment (ROI) through reduced operational costs or deliver increased business results should be at the top of the priority list Applications are either critical or strategic —if they do not fit into either category they should be removed from the priority list Instead categorize applications that aren’t strategic or critical as legacy applications and determine if they need to be replaced or in some cases eliminated Figure 1: Focus Areas for Successful Cloud Projects When considering the AWS cloud for citizen services local and regional governments must first make sure that their IT plans align with their organizations’ business model Having a solid understanding of the core competencies of your organization will help you identify the areas that are best served through an external infrastructure such as the AWS cloud The following example shows how a city is using the AWS cloud to deliver more with less and reduc e costs City of McKinney City of McKinney Texas Turns to AWS to Deliver More Advanced Services for Less Money The City of McKinney Texas about 15 miles north of Dallas and home to 155000 people was ranked the No 1 Best Place to live in 2014 by Money Magazine The city’s IT department is going allin on AWS and uses the platform to run a wide range of services and applications such as its land management and records management systems By using AWS the city’s IT department can focus on Save on costs and provide efficiencies over current solutions Improve outcomes of existing services Capitalize on the advantages of moving to the cloud ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 7 of 16 delivering new and better services for its fastgrowing population and city employees instead of spending resources buying and maintaining IT infrastructure City of McKinney chose AWS for our ability to scale and grow with the needs of their department AWS provides an easy fit for the way they do business Without having to own the infrastructure the City of McKinney has the ability to use cloud resources to address business needs By moving from a capex to an opex model they can now return funds to critical city projects Move Faster AWS has helped over 2 000 government agencies around the world successfully identify and migrate applications to the AWS platform resulting in significant business benefits The following steps help governments identify plan and implement new citizen services that take advantage of current technology to boost efficiencies save tax dollars and deliver an excellent use r experience Business Benefits of Agile Development on AWS • Trade capital expense for variable expense ⎯ Instead of having to invest heavily in data centers and servers 
before you know how you’re going to use them you can pay only when you consume computing resources and pay only for how much you consume • Benefit from massive economies of scale ⎯ By using cloud computing you can achieve a lower variable cost than you can get on your own Because usage from hundreds of thousands of customers is aggregated in the cloud providers such as AWS can achieve higher economies of scale that translate into lower payasyougo prices • Stop guessing capacity ⎯ Eliminate guessing on your infrastructure capacity needs When you make a capacity decision prior to deploying an application you might end up either sitting on expensive idle resources or dealing with limited capacity With cloud computing these problems go away You can access as much or as little as you need and scale up and down as required with only a few minutes’ notice ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 8 of 16 • Increase speed and agility ⎯ In a cloud computing environment new IT resources are only a click away which means you reduce the time it takes to make those resources available to your developers from weeks to just minutes This results in a dramatic increase in agility for the organization since the cost and time it takes to experiment and develop is significantly lower • Stop spending money on running and maintaining data centers ⎯ Focus on projects that differentiate your business not the infrastructure Cloud computing lets you focus on your own customers rather than on the heavy lifting of racking stacking and powering your data center Pick Your Project Pick One Thing A common mistake is starting too many projects at once A good first step is to identify a critical need and focus your development efforts on that service Completing the following actions will help drive success of the new service throughout the development cycle: • Find the right resources • Get all team members on board during initial planning phases • Secure executive buyin • Clearly communicate status through regularly scheduled meetings with all stakeholders Be flexible throughout the project Periodically take a fresh look to review the progress and be open to changes that may need to be incorporated into the project plan Many organizations choose to begin their cloud experiments with either creating a test environment for a new project (since it allows rapid prototyping of multiple options) or solv ing a disaster recovery n eed given that it is not physically based in their location Below is an example of an ideal first workload to start with The City of Asheville started with a disaster recovery (DR) solution as their first workload in the cloud ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 9 of 16 City of Asheville The City of Asheville NC Uses AWS for Disaster Recovery Located in the Blue Ridge and Great Smoky mountains in North Carolina the City of Asheville attracts both tourists and businesses Recent disasters like Hurricane Sandy led the city’s IT department to search for an offsite DR solution Working with AWS partner CloudVelox the city used AWS to build an agile disaster recovery solution without the time and cost of investing in an onpremises data center The City of Asheville views the geographic diversity of AWS as the key component for a successful DR solution Now the City of Asheville is using AWS for economic development using tools to develop great sites that attract large businesses and job development Validate 
with a Proof of Concept A proof of concept (POC) demonstrates that the service under consideration is financially viable The overall objective of a POC is to find solutions to technical problems such as how systems can be integrated or throughput can be achieved with a given configuration A POC should accomplish the following: • Validate the scope of the project The project team can validate or invalidate assumptions made during the design phase to make sure that the service will meet critical requirements • Highlight areas of concern Technical teams have a clear view of potential problems during the development and test phase with the opportunity to make functional changes before the service goes live • Demonstrate a sense of momentum Projects can sometimes be slow to start By testing a small number of users acting in a “citizen role ” the POC shows both development progress and helps to establish whether the service satisfies critical requirements and delivers a good user experience King County used a POC to realize cost savings in the use case below validating the project’ s viability ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 10 of 16 King County King County Saves $1 Million in First Year by Archiving Data in AWS Cloud King County is the most populous county in Washington State with about 19 million residents The county needed a more efficient and costeffective solution to replace a tapebased backup system used to store information generated by 17 different county agencies It turned to AWS for longterm archiving and storage using Amazon Glacier and NetApp’s AltaVault solution which helps the county meet federal security standards including HIPAA and the Criminal Justice Information Services (CJIS) regulations The county is saving about $1 million in the first year by not having to replace outdated servers and projects; an annual savings of about $200000 by reducing operational costs related to data storage King County selected AWS due to the mature services and rich feature set that is highly available secure cost competitive and easy to use King County has a longterm vision to shift to a virtual data center based on cloud computing Manage the Scope Defining the scope of your cloud migration or cloud application development project is key to success Often when developing new citizen services there is a desire to address all citizen needs with a single project while insufficient resources and changing definitions (requirements scope timeframes purpose deliverables and lack of appropriate management support) add to the challenge With a flexible cloud computing environment it is possible to tightly focus on a single issue develop an application that addresses that need and then iterate upon it with updates while the application is in flight This can minimize the impact of these issues allowing realworld piloting and improvements Since processes are always linked to other processes any unplanned changes affect these other interfacing processes With just a little structure and some checkpoints most of the major changes in scope can be avoided Start with a project that will involve a limited number of users This will allow you to control and manage the service development and production process more efficiently and effectively To get started select a service and define scope using the following actions: ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 11 of 16 • Define terms related to the project • 
Involve the right people in defining the scope • Accurately define processes • Define process boundaries explicitly • Outline high level interfaces between processes • Conduct a health check on the process interfaces • Realize that certain aspects of the project still make it too large to manage By minimizing the project scope local and regional governments can reduce development and administrative costs as well as achieve time savings Release Minimally V iable P roduct and Iterate When is the right time to release a citizen service? If released too soon it may lack necessary functionality and deliver a poor user experience If it is too elegant developers may spend too much time on functionality Releasing a minimally viable service and then iterating based on feedback can be an effective design process when designing citizen services With this approach you still guide the development but an iterative process allows citizens to provide feedback to help shape the functionality before it is locked down Only the local or regional government knows the “minimum” With no upfront costs and the ability to scale the cloud allows for this to happen quickly and easily from anywhere with device independence By the time the citizens access the site IT has already made several iterations so the public sees a more mature site It’s more productive to release early This minimizes development work on functionality that citizens do not want Most people are happy to help test the service to make sure that it meets their needs Additionally this stress testing will help uncover bugs that need to be fixed before the site goes into production This will help meet the ultimate goal: an excellent user experience The City of Boston is an example of how a city released a minimally viable product and continued to iterate on the product to get the best version for the needs of their citizens ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 12 of 16 City of Boston Quickly Identifies Road Conditions that Need Immediate Attention and Repair The City of Boston with technology partner Connected Bits has created the Street Bump program to drive innovative scalable technology to tackle tough local government challenges They are using AWS to propel machine learning with an app that uses a smartphone’s sensors – including the GPS and accelerometers to capture enough (big) data to identify bumps and disturbances that motorists experience while they drive throughout the city The big data collected helps the Boston’s Public Works Department to better understand roads streets and areas that require immediate attention and long term repair They have chosen AWS to create a scalable open and robust infrastructure that allows for this information to flow to and from city staff via the Open311 API This solution was created as a large multitenant softwa reasaservice platform so other cities can also leverage the same repository creating one data store for all cities Several other cities are interested in testing the next version Take Advantage of New Innovations Engage Your Citizens in Crowdsourcing The idea of soliciting customer input is not new Crowdsourcing has become an important business approach to define solutions to problems By tapping into the collective intelligence of the public local and regional government can validate service requirements prior to a lengthy design phase Crowdsourcing can improve both the productivity and creativity of your IT staff while minimizing design 
development and testing expenses Let the citizens do the work—after all they are the ones who will be using the service Make sure it is designed to meet their requirements Two example s of using crowdsourcing to provide realtime updates to the citizens are Moovit and Transport of London ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 13 of 16 Moovit With AWS Moovit Now Proc esses 85 million Requests Each Da y Moovit headquartered in Israel is redefining the transit experience by giving people the realtime information they need to get to places on time With schedules trip planning navigation and crowdsourced reports Moovit guides transit riders to the best most efficient routes and makes it easy for locals and visitors to navigate the world's cities Since launching in 2012 Moovit's free awardwinning app for iPhone Android and Windows Phone serves nearly 10 million users and is adding more than a million new users every month The app is available across 400 cities in 35 countries including the US Canada France Spain Italy Brazil and the UK Moovit’s goal was to continue to add metros quickly and it needed a solution that would scale j ust as fast Moovit now uses AWS to host and deliver services for its public transportation tripplanning app — using Amazon CloudFront to rapidly deliver information to its users The company made the decision to use AWS because it has servers that can handle the app’s heavy request volume and different types of information and because it supports multiple databases including SQL and NoSQL and includes storage options Transport for London Transport for London Creates an Open Data Ecosystem with Amazon 4 Web Services with AWS Transport for London ( TfL) has been running its flagship tflgovuk website on AWS for over a year and serves over 3 million page views to between 600000 and 700000 visitors a day with 54% of visits coming from mobile devices TfL has been able to scale interactive services to this level (its previous site was static) by leveraging AWS services as an elastic buffer between its backoffice services and the 76% of London’s 84 million population that uses the site regularly to plan their journeys Enhanced personalization for customers is now available on this site; in parallel the department is fostering closer relationships with the thirdparty app and portal providers that contribute digital solutions of their own for London’s trave lers based on TfL’s (openly licensed) transport data TfL has chosen to ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 14 of 16 release this data under an open data license which has helped to establish an ecosystem of thirdparty developers also working on digital travelrelated projects Some 6000 developers are now engaged in digital projects using TfL’s anonymized open data spawning 360 mobile apps to date Automate Critical Functions for Citizens People are more connected to each other than ever before and the increased connectivity of devices creates new opportunities for the public sector to truly become hubs of innovation driving technology solutions to help improve citizens' lives The Internet of Things (IoT) is the everexpanding network of physical “things” that can connect to the Inte rnet and the information that they transfer without requiring human interaction “Things” in the IoT sense refer to a wide variety of devices embedded with electronics software sensors and network connectivity which enable them to collect and 
exchange data over the Internet AWS is working with local and regional governments to apply IoT capabilities and solutions to opportunities and challenges that face our customers While the possibilities for IoT are virtually endless the following diagram highlights use cases we are discussing with customers today Figure 2: Internet of Things Use Cases for Local and Regional Governments London City Airport IoT Technologies Enhance Customer Experience at London City Airport The ‘Smart Airport Experience’ project was funded by the government run Technology Strategy Board in the UK and implemented at London City Airport working with a Transportation Public Safety Health & WellBeing • Parking solutions • Connected smart intersections • Smart routing / navigation • Fleet tracking / monitoring • Crowd control / management • Officer safety • Emergency notification • Security solutions • Air / particle quality • Water control management • Trash / garbage collection • Lighting control • Water metering City Services • Infrastructure monitoring • Building automation systems ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 15 of 16 technology team led by Living PlanIT SA The goal of the project was to demonstrate how Internet of Things technologies could be used to both enhance customer experiences and improve operational efficiency at a popular business airport that already offers fast checkin to boarding times The project used the Living PlanIT Urban Operating System (UOS™) hosted in an AWS environment as the backbone for realtime data collection processing analytics marshaling and event management Start Your Journey AWS provides a number of important benefits to local and regional governments as the platform for running citizen services and infrastructure programs It provides a range of flexible cost effective scalable elastic and secure capabilities that you can use to manage citizen data in the AWS cloud Work with AWS Government & Education Experts Your dedicated Government and Education team includes solutions architects business developers and partner managers ready to help you get started solving business problems with AWS Get in touch with us to start building solutions » Support AWS customers can choose from a range of support options including our hands on support for enterprise IT environments Learn more about AWS support options » Professional Services AWS has a worldclass professional services team that can help you get more from your cloud deployment It's easy to build solutions using our toolsets but when you need help building complex solutions or migrating from an on premises environment we're there Talk to your Government & Education Experts to learn more about professional services from AWS » ArchivedAmazon Web Services – Stop Wasting Money Move Faster and Innovate January 2016 Page 16 of 16 Contributors The following individuals and organizations contributed to this document: • Frank DiGiammarino General Manager AWS State and Local Government • Carina Veksler Public Sector Solutions AWS Public Sector SalesVar
|
General
|
consultant
|
Best Practices
|
Hybrid_Cloud_DNS_Solutions_for_Amazon_VPC
|
This paper has been archived For the latest technical content refer t o the html version : https://docsawsamazoncom/whitepapers/latest/ hybridclouddnsoptionsforvpc/hybridclouddns optionsforvpchtml Hybrid Cloud DNS Options for Amazon VPC November 2019 This paper has been archived For the latest technical content refer t o the html version : https://docsawsamazoncom/whitepapers/latest/ hybridclouddnsoptionsforvpc/hybridclouddns optionsforvpchtml Notices Customers are responsible for making their own independent assessment of the information in this document This document: (a) is for informational purposes only (b) represents current AWS product offerings and practices which are subject to change withou t notice and (c) does not create any commitments or assurances from AWS and its affiliates suppliers or licensors AWS products or services are provided “as is” without warranties representations or conditions of any kind whether express or implied The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this document is not part of nor does it modify any agreement between AWS and its customers © 2019 Amazon Web Services Inc or its affiliates All rights reserved This paper has been archived For the latest technical content refer t o the html version : https://docsawsamazoncom/whitepapers/latest/ hybridclouddnsoptionsforvpc/hybridclouddns optionsforvpchtml Contents Introduction 1 Key Concepts 1 Constraints 6 Solutions 7 Route 53 Resolver Endpoints and Forwarding Rules 7 Secondary DNS in an Amazon VPC 11 Decentralized Conditional Forwarders 13 Scaling DNS Management Across Multiple Accounts and VPCs 18 Selecting the Best Solution for Your Organization 22 Additional Considerations 23 DNS Logging 23 Custom EC2 DNS Resolver 25 Microsoft Windows Instances 27 Unbound – Additional Options 28 DNS Forwarder – Forward First 28 DNS Server Resil iency 28 Conclusion 30 Contributors 30 Document Revisions 31 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Abstract The Domain Name System (DNS) is a foundational element of the internet that underpins many services offered by Amazon Web Services (AWS) Amazon Route 53 Resolver provides resolution with DNS for public domain n ames Amazon Virtual Private Cloud ( Amazon VPC) and Route 53 private hosted zones This whitepaper includes solutions and considerations for advanced DNS architectures to help customers who have workloads with unique DNS requirement s or on premises resources that require DNS resolution between on premises data centers and Amazon EC2 instances in Amazon VPCs This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 1 Introduction Many organizations have both on premises resources and resources in the cloud DNS name resolution is essential for on premises and cloud based resources For customers with hybrid workloads which include on premises and cloud based resources extra steps are ne cessary to configure DNS to work seamlessly across both environments AWS services that require name resolution could include Elastic Load Balancing load balancer (ELB) Amazon Relational Database Service (Amazon RDS) Amazon Redshift and Amazon El astic Compute Cloud (Amazon EC2) Route 53 Resolver which is available in all Amazon VPCs responds to DNS queries for public records Amazon VPC 
resources and Route 53 private hosted zones (PHZs) You can configure it to forward queries to customer man aged authoritative DNS servers hosted on premises and to respond to DNS queries that your on premises DNS servers forward to your Amazon VPC This whitepaper illustrates several different architectures that you can implement on AWS using native and custo mbuilt solutions These architectures meet the need for name resolution of on premises infrastructure from your Amazon VPC and address constraints that have only been partially addressed by previously published solutions Key Concepts Before we dive into the solutions it is important to establish a few concepts and configuration options that we’ll reference throughout this whitepaper Amazon VPC DHCP Options Set The Dynamic Host Configuration Protocol (DHCP) provides a standard for pa ssing configuration information to hosts on a TCP/IP network The options field of a DHCP message contains configuration parameters such as domain name servers domain name ntpservers and netbios node type In any Amazon VPC you can create DHCP options sets and specify up to four DNS servers Currently these options sets are created and applied per VPC which means that you can’t have a DNS server list at the Availability Zone level For more information about DHCP options sets and configuration see Overview of DHCP Option Sets in the Amazon VPC Developer Guide 1 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 2 Amazon Route 53 Resolver Route 53 Resolver also known as the Amazon DNS Server or Amazon Provided DNS provides full public DNS resolution with additional resolution for internal records for the VPC and customer defined Route 53 private DNS records 2 Route 53 Resolver maps to a DNS server running on a reserved IP address at the base of the VPC network range plus two For example the DNS Server on a 10000/16 network is located at 10002 For VPCs with multiple CIDR bloc ks the DNS server IP address is located in the primary CIDR block Elastic Network Interfaces (ENIs) Elastic network interfaces (referred to as network interfaces in the Amazon EC2 console) are virtual network interfaces that you can attach to an instance in a VPC They’re available only for instances running in a VPC A virtual network interface like any network adapter is the interface that a device uses to connect to a network Each instance in a VPC depending on the instance type can have multiple network interfaces attached to it For more information see Elastic Network Interfaces in the Amazon EC2 User Guide for Linux Instances 3 How ENIs Work for Route 53 Resol ver A Route 53 Resolver endpoint is made up of one or more ENIs which reside in your VPC Each endpoint can only forward queries in a single direction Inbound endpoints are available as forwarding targets for DNS resolvers and use an IP address from the subnet space of the VPC to which it is attached Queries forwarded to these endpoints have the DNS view of the VPC to which the endpoints are attached Meaning if there are names local to the VPC such as AWS PrivateLink endpoints EFS clusters EKS clust ers PHZs associated etc the query can resolve any of those names This is also true for any VPCs peered with the VPC which owns the endpoint Outbound endpoints serve as the path through which all queries are forwarded out of the VPC Outbound endpoint s are directly attached to the owner VPC 
and indirectly associated with other VPCs via rules. This means that if a forwarding rule is shared with a VPC that does not own the outbound endpoint, all queries that match the forwarding rule pass through to the owner VPC and are then forwarded out. It is important to realize this when forwarding queries from one VPC to another: the outbound endpoint may reside in an entirely different Availability Zone than the VPC that originally sent the query, and an Availability Zone outage in the owner VPC can potentially impact query resolution in the VPC using the forwarding rule. This can be avoided by deploying outbound endpoints in multiple Availability Zones.

Figure 1: Route 53 Resolver with Outbound Endpoint

See Getting Started with Route 53 Resolver in the Amazon Route 53 Developer Guide for more information.

Route 53 Private Hosted Zone

A Route 53 private hosted zone is a container that holds DNS records that are visible to one or more VPCs. VPCs can be associated with the private hosted zone at the time of (or after) the creation of the private hosted zone. For more information, see Working with Private Hosted Zones in the Amazon Route 53 Developer Guide.4

Connection Tracking

By default, Amazon EC2 security groups use connection tracking to track information about traffic to and from the instance.5 Security group rules are applied based on the connection state of the traffic to determine whether the traffic is allowed or denied. This allows security groups to be stateful, which means that responses to inbound traffic are allowed to flow out of the instance regardless of outbound security group rules, and vice versa.

Linux Resolver

The stub resolver in Linux is responsible for initiating and sequencing the DNS queries that ultimately lead to a full resolution. The resolver is configured via the configuration file /etc/resolv.conf and queries the DNS servers listed there in the order in which they are listed. The following is a sample resolv.conf:

options timeout:1
nameserver 10.0.0.10
nameserver 10.0.1.10

Linux DHCP Client

The DHCP client on Linux provides the option to customize the set of DNS servers that the instance uses for DNS resolution. The DNS servers provided in the AWS DHCP options set are picked up by this DHCP client, which updates resolv.conf with the list of DNS server IP addresses. In addition, you can use the supersede DHCP client option to replace the DNS servers provided by the AWS DHCP options set with a static list of DNS servers. You do this by modifying the DHCP client configuration file /etc/dhcp/dhclient.conf:

interface "eth0" {
    supersede domain-name-servers 10.0.2.10, 10.0.3.10;
}

This sample statement replaces the DNS servers 10.0.0.10 and 10.0.1.10 from the resolv.conf sample with 10.0.2.10 and 10.0.3.10. We discuss the use of this option in the Zonal Forwarders Using Supersede solution.
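Tying the two previous subsections together: the nameservers that the DHCP client writes into /etc/resolv.conf normally come from the VPC's DHCP options set. The following Python (boto3) sketch shows one way to create and associate a DHCP options set that points instances at the two resolvers used in the supersede example above. The VPC ID and domain name are placeholders, and the change only takes effect on an instance as its DHCP lease is renewed.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a DHCP options set that hands out custom DNS servers
# (10.0.2.10 and 10.0.3.10, matching the example above) and a search domain.
options = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": ["10.0.2.10", "10.0.3.10"]},
        {"Key": "domain-name", "Values": ["example.corp"]},  # placeholder domain
    ]
)
dhcp_options_id = options["DhcpOptions"]["DhcpOptionsId"]

# Associate the options set with the VPC (placeholder VPC ID).
ec2.associate_dhcp_options(
    DhcpOptionsId=dhcp_options_id,
    VpcId="vpc-0123456789abcdef0",
)
print("Associated", dhcp_options_id)
```

Using the DHCP options set keeps the configuration in one place per VPC, whereas the supersede option lets individual instances or Availability Zones override it, which is the basis of the zonal forwarder pattern referenced above.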
Conditional Forwarder – Unbound

A conditional forwarder examines the DNS queries received from instances and forwards them to different DNS servers based on rules set in its configuration, typically using the domain name of the query to select the forwarder. In a hybrid architecture, conditional forwarders play a vital role in bridging name resolution between on-premises and cloud resources. For this particular solution we use Unbound, which is a recursive and caching DNS resolver in addition to a conditional forwarder. Depending on your requirements, this option can act as an alternative or a complement to forwarding rules in Amazon Route 53 Resolver. For instructions on how to set up an Unbound DNS server, see the How to Set Up DNS Resolution Between On-Premises Networks and AWS by Using Unbound blog post on the AWS Security Blog.6 The following is a sample unbound.conf:

forward-zone:
    name: "."
    forward-addr: 10.0.0.2        # Amazon Provided DNS

forward-zone:
    name: "example.corp"
    forward-addr: 192.168.1.10    # On-premises DNS

In this sample configuration, queries for example.corp are forwarded to the on-premises DNS server, and all other queries are forwarded to Route 53 Resolver.

Constraints

In addition to the concepts established so far, it is important to be aware of some constraints that are key in shaping the rest of this whitepaper and its solutions.

Packets per Second (PPS) per Elastic Network Interface Limit

Each network interface in an Amazon VPC has a hard limit of 1024 packets that it can send to the Amazon-provided DNS server every second. Therefore, any computing resource on AWS that has a network interface attached to it and sends traffic to the Amazon DNS resolver (for example, an Amazon EC2 instance or an AWS Lambda function) falls under this hard limit. In this whitepaper, we refer to this limit as packets per second (PPS) per network interface. When you are designing a scalable solution for name resolution, you must take this limit into account, because failing to do so can result in queries to Route 53 Resolver going unanswered if the limit is reached. This limit is a key factor in the solutions proposed in this whitepaper. The limit is higher for Route 53 Resolver endpoints, which support approximately 10,000 queries per second (QPS) per elastic network interface.

Connection Tracking

The number of simultaneous stateful connections that an Amazon EC2 security group can support by default is an extremely large value, and the majority of standard TCP-based customers never encounter any issues with it. In rare cases, customers with restrictive security group policies and applications that create a large number of concurrent connections, for instance a self-managed recursive DNS server, may exhaust all simultaneous connection tracking resources. When that limit is exceeded, subsequent connections fail silently. In such cases, we recommend that you set up a security group that you can use to disable connection tracking. To do this, set up permissive rules on both inbound and outbound connections.

Linux Resolver

The default maximum number of DNS servers that you can specify in the resolv.conf configuration file of a Linux resolver is three, which means it isn't useful to specify four DNS servers in the DHCP options set, because the additional DNS server won't be used. This limit places a further upper boundary on some of the solutions discussed in this whitepaper. It is also key to note that different operating systems can handle the
assignment and failover of DNS queries differently This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 7 Solutions The solutions in this whitepaper present options and best practices to architect a DNS solution in the hybrid cloud keeping in mind criteria like ease of implementat ion management overhead cost resilience and the distribution of DNS queries directed toward the Route 53 Resolver We cover the following solutions: • Route 53 Resolver Endpoints and Forwarding Rules – This solution focuses on using Route 53 Resolver end points to forward traffic between your Amazon VPC and on premises data center over both AWS Direct Connect and Amazon VPN • Secondary DNS in an Amazon VPC – This solution focuses on using Route 53 to mirror on premises DNS zones that can then be natively re solved from within VPCs without the need for additional DNS forwarding resources • Decentralized Conditional Forwarders – This solution uses distributed conditional forwarders and provides two options for using them efficiently While we use unbound as a c onditional forwarder in some of these solutions you can use any DNS server that supports conditional forwarding with similar features • Scaling DNS Management Across Multiple Accounts and VPCs – This solution walks through options for managing DNS names as you scale your hybrid DNS solution Route 53 Resolver Endpoints and Forwarding Rules In November 2018 Route 53 launched Route 53 Resolver endpoints and forwarding rules which allow you to forward traffic between your Amazon VPC and on premises data cen ter without having to deploy additional DNS servers For more detailed information about Amazon Route 53 Resolver see the Amazon Route 53 Resolver Developer Guide You use the following features of Route 53 resolver in this solution: inbound endpoint outbound endpoint and forwarding rules to make the hybrid resolution possible between on premises and AWS This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 8 Use Case Advantages Limitations • Customers that must forward queries between an Amazon VPC and on premises data center • Customers that have one or more VPCs connected to an on premises environment via AWS Direct Connect or Amazon VPN • Low management overhead you only have to manage forwarding rules and monito r query limits via CloudWatch alarms • Uses the highly available AWS backbone • Approximately 10000 QPS limit per elastic network interface on resolver endpoints • No logging visibility on queries answered by Resolver • Query source IP address is replaced with IP address of the endpoint from which it is forwarded This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 9 Figure 2 – Route 53 Resolver Endpoints and Forwarding Rules Description Private Hosted Zones are associated with a Shared Service VPC Create forward rules in on premises DNS server for Route 53 names you want to resolve from on premises These rules use an inbound endpoint as their destination Create Route 53 Resolver rules for names you want to resolve on premises from your Amazon VPC These rules use an outbound endpoint and can be shared with 
other VPCs through Resource Access Manager This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 10 Considerations: • Though you can have multiple VPCs across many accounts only a single Availability Zone redundant set of inbound endpoints is required in your Shared Services VPC • You only need one outbound endpoint for multiple VPCs You don’t have to create an outbound endpoint in ea ch VPC Instead you share an outbound endpoint by sharing the rule(s) created for that endpoint with additional accounts using Resource Access Manager (RAM) • Endpoints cannot be used across Regions Best Practices: • Manually specify the private IP addresse s of the inbound Route 53 resolver endpoint while creating it as opposed to having the resolver choose a random IP address from the subnet This way in case there is an accidental deletion of the endpoint you can reuse those IP addresses • When you creat e the inbound or outbound endpoints we recommend that you use at least two subnets in different Availability Zones for high availability For inbound resolver make sure that you us e both endpoint IP addresse s in your on premises DNS resolver so that the load can be spread across all available IP addresse s • For environments that require a high number of queries per second you should be aware that there is a limit of 10000 queries per second per elastic network interface in an endpoint More ENIs can be added to an endpoint to scale QPS • We publish InboundQueryVolume and OutboundQueryVolume metrics via CloudWatch and recommend that you set up monitoring rules that alert you if the threshold exceeds a certain value (for example 80% of 10000 QPS) This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 11 Seconda ry DNS in an Amazon VPC Alternatively you may decide to deploy and manage additional DNS infrastructure running on EC2 instances to handle DNS requests either from VPCs or on premises where you can still benefit from using AWS Managed Services This appr oach uses Route 53 private hosted zones with AWS Lambda and Amazon CloudWatch Events to mirror on premises DNS zones This can then be natively resolved from within a VPC without conditional forwarding and without a real time dependency on on premises DNS servers For the full solution see the Powering Secondary DNS in a VPC using AWS Lambda and Amazon Route 53 Priv ate Hosted Zones on the AWS Compute blog 7 The following table outlines this solution: Table 1 – Solution Highlights – Secondary DNS in an Amazon VPC Use Case Advantages Limitations • Customers cannot use the native Route 53 Resolver forwarding features • Customers that don’t want to build or manage conditional forwarder instances • Customers that do not have in house DevOps expertise • Infrequently changing DNS environment • Low management overhead • Low operational cost • Highly resilient DNS infrastructure • Low possibility for instances to breach the PPS per network interface limit • Onpremises instances can’t query Route 53 Resolver directly for Amazon EC2 hostnames without creating a forwarding target • Works well only when on premises DNS server records must be replicated to Route 53 • Requires on premises DNS server to support full zone transfer query • Requires working with the Route 53 API 
Secondary DNS in an Amazon VPC

Alternatively, you may decide to deploy and manage additional DNS infrastructure running on EC2 instances to handle DNS requests, either from VPCs or from on premises, while still benefiting from AWS managed services. This approach uses Route 53 private hosted zones with AWS Lambda and Amazon CloudWatch Events to mirror on-premises DNS zones, which can then be natively resolved from within a VPC without conditional forwarding and without a real-time dependency on on-premises DNS servers. For the full solution, see Powering Secondary DNS in a VPC Using AWS Lambda and Amazon Route 53 Private Hosted Zones on the AWS Compute blog [7]. The following table outlines this solution:

Table 1 – Solution Highlights – Secondary DNS in an Amazon VPC

Use Case:
• Customers that cannot use the native Route 53 Resolver forwarding features
• Customers that don't want to build or manage conditional forwarder instances
• Customers that do not have in-house DevOps expertise
• Infrequently changing DNS environments

Advantages:
• Low management overhead
• Low operational cost
• Highly resilient DNS infrastructure
• Low possibility for instances to breach the PPS per network interface limit

Limitations:
• On-premises instances can't query the Route 53 Resolver directly for Amazon EC2 hostnames without creating a forwarding target
• Works well only when on-premises DNS server records must be replicated to Route 53
• Requires the on-premises DNS server to support full zone transfer queries
• Requires working within the Route 53 API limits

Figure 3 – Secondary DNS running on Route 53 private hosted zones

Description
CloudWatch Events invokes a Lambda function. The scheduled event is configured based on a JSON string that is passed to the Lambda function and sets a number of parameters, including the DNS domain, source DNS server, and Route 53 zone ID. This configuration allows you to reuse a single Lambda function for multiple zones. A new network interface is created in the VPC's subnets and attached to the Lambda function, which allows the function to access internal network resources based on the security group that you define. The Lambda function transfers the source DNS zone from the IP address specified in the JSON parameters; configure your DNS servers to allow full zone transfers, which happen over TCP and UDP port 53. The Route 53 DNS zone is retrieved using the AWS API. The two zone files are compared, and the resulting differences are returned as a set of actions to be performed using Route 53. Updates to the Route 53 zone are made using the AWS API, and then the Start of Authority (SOA) record is updated to match the source version.

There are several benefits to this approach. Aside from the initial solution setup, there is little management overhead after the environment is set up, because the solution continues working without manual intervention. There is also no client-side setup, because the DHCP options that you set configure each instance to use the Route 53 Resolver (also known as AmazonProvidedDNS) by default. This can be one of the more scalable hybrid DNS solutions in a VPC, because queries for any domain go directly to the Route 53 Resolver from each instance and then to the Amazon Route 53 infrastructure, so each instance uses its own PPS per network interface limit. There is also no correlation or impact when implementing this solution in one or more VPCs, because you choose which VPCs to associate the Route 53 hosted zone with. The possibility of failure of a DNS component is lower because of the highly available and reliable Amazon Route 53 infrastructure. Note, however, that there is a hard limit of 1,000 private hosted zone associations.

The main disadvantage of this solution is that it requires full zone transfer queries (AXFR), so it isn't appropriate for customers that run DNS servers that don't support AXFR. Also, because this solution involves working with the Route 53 APIs, you must stay within the Route 53 API limits [8]. This solution does not provide a method for resolving EC2 records from on premises directly.
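To illustrate the scheduled trigger described above, the sketch below creates a CloudWatch Events rule that invokes the zone-sync Lambda function with a JSON payload. The function name, schedule, and the parameter keys (domain, source server, zone ID) are hypothetical placeholders; the actual solution in the referenced blog post defines its own parameter names.

import json
import boto3

events = boto3.client("events")

rule_arn = events.put_rule(
    Name="secondary-dns-corp-example-com",
    ScheduleExpression="rate(15 minutes)",     # placeholder sync interval
    State="ENABLED",
)["RuleArn"]

events.put_targets(
    Rule="secondary-dns-corp-example-com",
    Targets=[{
        "Id": "zone-sync",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:SecondaryDnsSync",
        # Hypothetical parameter keys describing the zone to mirror
        "Input": json.dumps({
            "Domain": "corp.example.com",
            "MasterDns": "10.200.0.10",          # on-premises zone source
            "ZoneId": "Z0123456789EXAMPLE",      # Route 53 private hosted zone
        }),
    }],
)

# The Lambda function also needs a resource policy (lambda add_permission)
# allowing events.amazonaws.com to invoke it from this rule (rule_arn).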
Decentralized Conditional Forwarders

While the Route 53 solution lets you avoid the complexities of running a hybrid DNS architecture, you might still prefer to configure your DNS infrastructure to use conditional forwarders within your VPCs. One reason to run your own forwarders is to log DNS queries; see DNS Logging (under Additional Considerations) to determine whether this is right for you. There are two options under this solution. The first option, called highly distributed forwarders, runs forwarders on every instance in the environment, trying to mimic the scale that the Route 53 solution provides. The second option, called zonal forwarders using supersede, localizes forwarders to a specific Availability Zone and its instances. The following table highlights these two options, followed by their detailed discussion:

Table 2 – Solution highlights – Decentralized conditional forwarders

Highly Distributed Forwarders
Use Case:
• Workload generates high volumes of DNS queries
• Infrequently changing DNS environment
Advantages:
• Resilient DNS infrastructure
• Low possibility for instances to breach the PPS per network interface limit
Limitations:
• Complex setup and management
• Investment in relevant skill sets for configuration management

Zonal Forwarders Using Supersede
Use Case:
• Customers with an existing set of conditional forwarders
• Environment that doesn't generate a high volume of DNS traffic
Advantages:
• Fewer forwarders to manage
• Zonal isolation provides better overall resiliency
Limitations:
• Complex setup and management as the DNS environment grows
• The possibility of breaching the PPS per network interface limit is higher than with the highly distributed option

Highly Distributed Forwarders

This option decentralizes forwarders and runs a small, lightweight DNS forwarder on every instance in the environment. The forwarder is configured to serve the DNS needs of only the instance it is running on, which eliminates bottlenecks and the dependency on a central set of instances. Given the implementation and management complexity of this solution, we recommend that you use a mature configuration management solution. The following diagram shows how this solution functions in a single VPC:

Figure 4 – Distributed forwarders in a single VPC

Description
Each instance in the VPC runs its own conditional forwarder (Unbound). The resolv.conf has a single DNS server entry pointing to 127.0.0.1. A straightforward way to modify resolv.conf is to create a DHCP options set that has 127.0.0.1 as the domain name server value (see the sketch after this description). You may alternatively choose to overwrite any existing DHCP options settings using the supersede option in dhclient.conf. Records requested for on-premises hosted zones are forwarded to the on-premises DNS server by the forwarder running locally on the instance. Any requests that don't match the on-premises forwarding filters are forwarded to the Resolver.

Similar to the Route 53 solution, this solution allows every single instance to use its 1,024 PPS per network interface limit to the Route 53 Resolver to its full potential. The solution also scales up as additional instances are added and works the same way regardless of whether you're using a single- or multi-VPC setup. The DNS infrastructure is low latency, and the failure of a DNS component, such as an individual forwarder, does not affect the entire fleet due to the decoupled nature of the design.
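A minimal sketch of the DHCP options approach mentioned in the description above, using boto3; the VPC ID is a placeholder, and the new options set replaces whatever options set is currently associated with the VPC.

import boto3

ec2 = boto3.client("ec2")

# DHCP options set that points every instance's resolver at the local
# forwarder (127.0.0.1) instead of the VPC-provided DNS address.
options = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": ["127.0.0.1"]},
    ]
)["DhcpOptions"]

# Associating the options set applies it to the whole VPC; instances pick
# up the change when their DHCP lease renews or they are restarted.
ec2.associate_dhcp_options(
    DhcpOptionsId=options["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",
)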
This solution poses implementation and management complexities, especially as the environment grows. You can manage and modify configuration files at instance launch using Amazon EC2 user data [9]. After instance launch, you can use the Amazon EC2 Run Command [10] or AWS OpsWorks for Chef Automate [11] to deploy and maintain your configuration files. The implementation of these tools is outside the scope of this whitepaper, but it is important to know that they provide the flexibility and power to manage configuration files and their state at large scale. Greater flexibility brings with it the challenge of greater complexity; consider the additional operational costs, including the need for an in-house DevOps workforce.

Zonal Forwarders Using Supersede

If you don't want to manage and implement a forwarder on each instance in your environment, and you want conditional forwarder instances to be the centerpiece of your hybrid DNS architecture, consider this option. With this option, you configure the instances in an Availability Zone to forward queries only to conditional forwarders in the same Availability Zone of the Amazon VPC. For the reasons discussed in the Linux Resolver section, each instance can have up to three DNS servers in its resolv.conf, as shown in the following diagram.

Figure 5 – Zonal forwarders with supersede option

Description
Instances in Availability Zone A are configured using the supersede option, which uses a list of DNS forwarders that are local to that Availability Zone. To avoid burdening any specific forwarder in the Availability Zone, randomize the order of the DNS forwarders across instances in the Availability Zone. Records requested for on-premises hosted zones are forwarded directly to the on-premises DNS server by the DNS forwarder. Any requests that don't match the on-premises forwarding filters are forwarded to the Route 53 Resolver. This illustration doesn't depict the actual flow of traffic; it is presented for representation purposes only.

Similarly, other Availability Zones in the VPC can be set up to use their own set of local conditional forwarders that serve the respective Availability Zone. You determine the number of conditional forwarders serving an Availability Zone based on your needs and the importance of the environment. If one of the three instances in Availability Zone A fails, the other two instances continue serving DNS traffic. It is important to note that placement groups must be used to guarantee that the forwarders are not running on the same parent hardware, which would be a single point of failure. To ensure separate parent hardware, you can use Amazon EC2 placement groups to avoid this type of failure domain (see the sketch below). If all three DNS forwarders in Availability Zone A fail at the same time, the instances in Availability Zone A fail to resolve any DNS requests, because they are unaware of the forwarders in other Availability Zones. This prevents the impact from spreading to multiple Availability Zones and ensures that the other Availability Zones continue to function normally.

Currently, the DHCP options that you set apply to the VPC as a whole. Therefore, you must self-manage the list of DNS servers that are local to instances in each Availability Zone.
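The sketch below shows the placement-group idea from the paragraph above: a spread placement group keeps the three zonal forwarders on distinct underlying hardware. The AMI, instance type, subnet, and names are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Spread placement groups place each instance on distinct underlying
# hardware, so one hardware failure cannot take out all zonal forwarders.
ec2.create_placement_group(GroupName="az-a-dns-forwarders", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder forwarder AMI
    InstanceType="t3.small",
    MinCount=3,
    MaxCount=3,
    SubnetId="subnet-0aaaaaaaaaaaaaaaa",       # subnet in Availability Zone A
    Placement={"GroupName": "az-a-dns-forwarders"},
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Role", "Value": "zonal-dns-forwarder"}],
    }],
)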
In addition, we recommend that you don't use the same order of DNS servers in resolv.conf for all instances in the Availability Zone, because that would burden the first server in the list and push it closer to breaching the PPS per network interface limit. While each Linux instance can only have three resolvers, if you're managing the resolver list yourself you can have as many resolvers as you wish per Availability Zone; each instance should be configured with three random resolvers from the resolver list.

Scaling DNS Management Across Multiple Accounts and VPCs

In alignment with AWS best practices, many organizations build out a cloud environment with multiple accounts. Whether you're using shared VPCs, with multiple accounts hosted in a single VPC to share resources, or the more traditional model where a VPC is tied to a single account, there are architectural considerations to be made. This whitepaper focuses on the more traditional model; for more information on shared VPCs, see Working with Shared VPCs.

While having multiple accounts and VPCs helps reduce blast radius and provides granular account-level billing, it can make DNS infrastructure more complex. Route 53's ability to associate private hosted zones (PHZs) with VPCs across accounts helps reduce these complexities for both centralized and decentralized architectures. We discuss both design paradigms in this section.

Multi-Account Centralized

In this type of architecture, Route 53 private hosted zones (PHZs) are centralized in a shared services VPC. This allows for central DNS management while enabling inbound Route 53 Resolver endpoints to natively query the private hosted zones. That leaves the need for VPC-to-VPC DNS resolution unaddressed. Fortunately, PHZs can be associated with many VPCs: a simple CLI or API request can associate each PHZ with VPCs in accounts outside of the shared services VPC (see the sketch after this description). For more information about cross-account PHZ sharing, see Associating an Amazon VPC and a Private Hosted Zone That You Created with Different AWS Accounts.

Figure 6 – Multi-Account Centralized DNS with Private Hosted Zone sharing

Description
Instances within a VPC use the Route 53 Resolver (AmazonProvidedDNS). Private hosted zones are associated with a shared services VPC. Private hosted zones are also associated with other VPCs in the environment. Conditional forwarding rule(s) from the on-premises DNS servers have an inbound Route 53 Resolver endpoint as their destination. Rule(s) for on-premises domain names are created that leverage an outbound Route 53 Resolver endpoint.
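A minimal sketch of the cross-account association mentioned above, assuming two CLI profiles (zone-owner and vpc-owner, both placeholders): the account that owns the private hosted zone authorizes the association, and the account that owns the VPC completes it.

import boto3

ZONE_ID = "Z0123456789EXAMPLE"                    # PHZ in the zone-owner account
VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"}  # other account's VPC

zone_owner = boto3.Session(profile_name="zone-owner").client("route53")
vpc_owner = boto3.Session(profile_name="vpc-owner").client("route53")

# Step 1 (zone-owner account): authorize the foreign VPC to be associated.
zone_owner.create_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=VPC)

# Step 2 (VPC-owner account): perform the association.
vpc_owner.associate_vpc_with_hosted_zone(HostedZoneId=ZONE_ID, VPC=VPC)

# Step 3 (zone-owner account): clean up the authorization once associated.
zone_owner.delete_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=VPC)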
While this architecture provides centralization, you may require each VPC to have its own fully qualified domain name (FQDN) hosted within each account so that account owners can change and modify their own DNS records. The next section describes how this design paradigm is accomplished.

Multi-Account Decentralized

An organization may want to delegate DNS ownership and management to each AWS account. This has the advantages of decentralizing control and isolating the blast radius of a failure to a specific account. The ability to associate PHZs with VPCs across accounts again becomes useful in this scenario. Each VPC can have its own PHZ(s) and then associate them with multiple other VPCs across accounts and across Regions. This architecture is depicted in Figure 7. For unified resolution with the on-premises environment, this only requires that the shared services VPC be associated with each VPC hosting a PHZ.

Figure 7 – Multi-Account DNS Decentralized

Description
Instances within a VPC use the Route 53 Resolver (AmazonProvidedDNS). Private hosted zones are associated with a shared services VPC. Private hosted zones are also associated with other VPCs in the environment. Conditional forwarding rule(s) from the on-premises DNS servers have an inbound Route 53 Resolver endpoint as their destination. Rule(s) for on-premises domain names are created that leverage an outbound Route 53 Resolver endpoint.
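In either multi-account pattern, the Resolver rules created on the outbound endpoint in the shared services account can be shared with the other accounts through AWS Resource Access Manager, as noted earlier. A minimal sketch, with a placeholder rule ID and account number:

import boto3

resolver = boto3.client("route53resolver")
ram = boto3.client("ram")

# Look up the ARN of an existing forwarding rule (placeholder rule ID).
rule_arn = resolver.get_resolver_rule(
    ResolverRuleId="rslvr-rr-0123456789abcdef0"
)["ResolverRule"]["Arn"]

# Share the rule with another account; the receiving account can then
# associate the rule with its own VPCs without running its own endpoint.
ram.create_resource_share(
    name="onprem-forwarding-rules",
    resourceArns=[rule_arn],
    principals=["222233334444"],        # placeholder account ID
    allowExternalPrincipals=False,
)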
endpoints to answer queries between VPCs breaks the previously mentioned Availability Zone isolation Meaning that instead of each VPC resolving queries within its local Availability Zone you have now made several VPC s dependent on the availability of a single VPC Regarding limits each endpoint elastic network interface has a limit of 10000 QPS but keep in mind that if you want to use an endpoint to centralize DNS management you are forwarding more query volume to a central VPC as opposed to distributing the query load between multiple VPCs This anti pattern is generally not recommended Selecting the Best Solution for Your Organization There are various advantages and trade offs with each of these solutions Choo sing the right solution for your organization depends on the specific requirements of each workload You might choose to run different solutions in different VPCs to meet the needs of your specific workloads The following table summarizes the criteria tha t you can use to evaluate what will work best for your organization These include the complexity of the implementation the management overhead the availability of the solution probability of hitting the PPS per network interface limit and the cost of the solution Table 4 – Solutions selection criteria Route 53 Resolver Secondary DNS in a VPC Highly Distributed Forwarders Zonal Forwarders Implementation complexity Low Medium High High Management overhead Low Low High Medium DNS Infrastructure resiliency High High High Medium PPS limit breach Low Low Low Medium Cost* Low Low High Medium * Cost is a combination of the infrastructure and operational expense This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 23 Additional Considerations DNS Logging DNS logging refers to logging specific DNS query from individual host Typically these logs are stored for security forensics and compliance GuardDuty provides machine learning based forensics and anomaly detection on recursive queries originating from local VPC resources If raw historical logging is not required GuardDuty may satisfy your requirements without any additional heavy lifting Route 53 provides query logs for public hosted zones If customers require logging for Private Hosted Zones and queries that originate fr om resources within a VPC they have several options while still following the Well Architected Framework and DNS best practices Centralized query logging distributed (on instance) query logging and a hybrid approach to log a percentage of queries base d on user defined domain whitelisting are three of the most popular and scalable methods for query logging available today Centralized Query Logging Query logging is accomplished in a centralized fashion when all queries are forwarded to a resolver that is not the Route 53 Resolver (Amazon Provided DNS) This resolver can be local to the VPC such as several instances running unbound or an on premises resource over DX VPN or the Internet Gateway The latter adds additional latency and dependencies outsi de of the VPC and is typically not recommended for that reason As with any centralized or distributed system it comes with pros and cons Centralization of query logs allows for easy aggregation and a single plane of glass to view and parse DNS client q ueries With centralization additional attention needs be directed at the scale of the instances acting as resolvers and number of 
Centralized Query Logging

Query logging is accomplished in a centralized fashion when all queries are forwarded to a resolver that is not the Route 53 Resolver (AmazonProvidedDNS). This resolver can be local to the VPC, such as several instances running Unbound, or an on-premises resource reached over Direct Connect, VPN, or the internet gateway; the latter adds latency and dependencies outside of the VPC and is typically not recommended for that reason. As with any centralized or distributed system, this comes with pros and cons. Centralizing query logs allows for easy aggregation and a single pane of glass to view and parse DNS client queries. With centralization, additional attention must be directed at the scale of the instances acting as resolvers and the number of queries directed at any single instance. These instances become single points of failure and can become a bottleneck due to DNS packets-per-second limits: each EC2 instance is limited to 1,024 packets per second for DNS queries toward the Route 53 Resolver (AmazonProvidedDNS). If requests sent to the customer-managed, instance-based DNS resolvers are not distributed effectively, and those resolvers do not implement caching, at high volume the DNS instances may exceed the 1,024 packets per second per-instance limit to the Route 53 Resolver within the VPC.

Distributed Query Logging

Another approach is logging DNS queries in a distributed fashion, on instance. This is accomplished by running Unbound, or another logging-capable resolver or forwarder, on each instance that requires logging. With this model, each instance runs a local resolver to capture all DNS queries locally. These logs can then be aggregated upstream to a centralized Amazon S3 bucket for historical collection and centralized parsing. Depending on the aggregation process, this may delay centralized parsing and forensics, but it removes any single point of failure and reduces the overall blast radius of any given upstream instance-based resolver failure. If on-demand instance parsing is required, the delivery window can be shortened. Depending on your operational model, you may or may not allow on-box forensics or external access, so the log delivery schedule should be considered.

With the launch of VPC Traffic Mirroring at re:Inforce 2019, an alternative off-instance distributed logging mechanism can be achieved for supported instance types; at this time, all AWS Nitro-based instances support VPC Traffic Mirroring. By enabling traffic mirroring for TCP and UDP traffic on port 53 on individual instance ENIs, you can capture DNS requests in PCAP format. Traffic Mirroring for DNS logs shares similar availability and scalability constructs with the other distributed methods, but it increases simplicity and flexibility because it does not require the application or Amazon Machine Image (AMI) to incorporate any additional DNS logic. A Traffic Mirroring session can be attached to and detached from instance ENIs as needed. Traffic Mirroring is priced per elastic network interface on which it is enabled, and the customer is responsible for configuring and managing the traffic mirror target. For more information, see Traffic Mirroring Concepts.
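A sketch of the port 53 mirroring setup described above, using boto3 with placeholder ENI and target IDs; it assumes a mirror target (for example, a Network Load Balancer or a collector instance ENI) has already been created.

import boto3

ec2 = boto3.client("ec2")

# Filter that accepts only DNS traffic (UDP port 53) leaving the instance.
filter_id = ec2.create_traffic_mirror_filter(
    Description="Mirror outbound DNS queries only"
)["TrafficMirrorFilter"]["TrafficMirrorFilterId"]

ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="egress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=17,                                   # UDP; add a rule with 6 for TCP
    DestinationPortRange={"FromPort": 53, "ToPort": 53},
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Mirror the source instance's ENI to an existing mirror target.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0aaaaaaaaaaaaaaaa",     # instance to capture
    TrafficMirrorTargetId="tmt-0123456789abcdef0",  # pre-created collector target
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)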
content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 25 forward ing rules by zone In this approach all local VPC resources resolve to Route 53 Resolver (Amazon Provided DNS) as normal but when a query is made to an untrusted zone that matches an Amazon Route 53 Resolver conditional forwarding rule it will then be for warded to a specified instance or on premises based resolver such as the centralized DNS resolver mentioned above This approach does not require any modifications on the instance and removes any single points of failure for all trusted zones Custom EC2 D NS Resolver You can choose to host your own custom DNS resolver on Amazon EC2 that leverages public DNS Servers to perform recursive public DNS resolution instead of using Route 53 Resolver This is a good choice because of the nature of the application an d the ability to have more control and flexibility over the DNS environment You could also do this if the PPS per network interface limit is a hindrance in your ability to scale and none of the solutions discussed thus far suit your needs This whitepape r does not describe the details of architecting such a solution but we wanted to point out some caveats that will help you plan better in such a scenario The following diagram illustrates an approach to a hybrid VPC DNS setup where you have your own DNS resolver on Amazon EC2 This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DNS Options for Amazon VPC 26 Figure 8 – Amazon EC2 DNS instances with segregated resolver and forwarder Description DNS queries for internal EC2 names and Route 53 private hosted zones are forwarded to Route 53 Resolver DNS queries bound for on premises servers are conditionally forwarded to onpremises DNS servers DNS queries for public domains are conditional ly forwarded to the custom DNS resolver in the public subnet The resolver then recursively resolves public domains using the latest root hints available from the Internet Assigned Number Authority (IANA) For security reasons we recommend that the Cond itional forwarder instance that requires connectivity to on premises sits separately in a private subnet of the VPC As This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Guides page: https://awsamazoncom/whitepapers Amazon Web Services Hybrid Cloud DN S Options for Amazon VPC 27 the custom DNS resolver must be able to query public DNS Servers it runs in its own public subnet of the VPC Ideally you would have security group rules on the EC2 instance running the custom DNS resolver but if this custom DNS resolver has high rates of querying out to the internet then there is a possibility that you will hit connection tracking limits as discussed in the connection tracking section Therefore to avoid running into such a scenario connection tracking by itself must be avoided and it is possible to do so by opening up all ports TCP and UDP to the whole world at the security group level both inbound and outbound As this is granting permissive rules to instance level security group you will have to handle the security of the instance at a different layer At the least it is recommended to control the traffic entering into the entire public subnet by using Network Access Control Lists (NACL) which thereby restricts access to the instance or you could 
Custom EC2 DNS Resolver

You can choose to host your own custom DNS resolver on Amazon EC2 that uses public DNS servers to perform recursive public DNS resolution instead of using the Route 53 Resolver. This can be a good choice because of the nature of the application and the need for more control and flexibility over the DNS environment. You could also do this if the PPS per network interface limit hinders your ability to scale and none of the solutions discussed so far suit your needs. This whitepaper does not describe the details of architecting such a solution, but the following caveats will help you plan better in this scenario. The following diagram illustrates an approach to a hybrid VPC DNS setup where you have your own DNS resolver on Amazon EC2.

Figure 8 – Amazon EC2 DNS instances with segregated resolver and forwarder

Description
DNS queries for internal EC2 names and Route 53 private hosted zones are forwarded to the Route 53 Resolver. DNS queries bound for on-premises servers are conditionally forwarded to on-premises DNS servers. DNS queries for public domains are conditionally forwarded to the custom DNS resolver in the public subnet; the resolver then recursively resolves public domains using the latest root hints available from the Internet Assigned Numbers Authority (IANA).

For security reasons, we recommend that the conditional forwarder instance that requires connectivity to on premises sits separately, in a private subnet of the VPC. Because the custom DNS resolver must be able to query public DNS servers, it runs in its own public subnet of the VPC. Ideally, you would have security group rules on the EC2 instance running the custom DNS resolver, but if this resolver sends a high rate of queries to the internet, you may hit connection tracking limits, as discussed in the connection tracking section. To avoid that scenario, connection tracking itself must be avoided, which is possible by opening all TCP and UDP ports at the security group level, both inbound and outbound. Because this grants permissive rules at the instance-level security group, you have to handle the security of the instance at a different layer. At a minimum, we recommend controlling the traffic entering the public subnet by using network access control lists (NACLs) [12], which restrict access to the instance, or you could use application-level control mechanisms, such as the access control provided by a DNS resolver like Unbound.

Custom DNS resolvers might develop a reputation upstream on the internet. If the instance is assigned a dynamic public IP address that belonged to another customer and previously earned a bad reputation, requests upstream could be throttled or even blocked. To avoid this, consider assigning Elastic IP addresses to these resolver instances; this gives the IP addresses that talk to the upstream servers the opportunity to build a good reputation over time that you own and maintain. Scaling concerns can be mitigated by placing a DNS server fleet behind a Network Load Balancer (NLB) configured with both TCP and UDP listeners on port 53.

Microsoft Windows Instances

Typically, Microsoft Windows instances are joined to Active Directory Domain Services (AD DS). In scenarios where you use the Amazon VPC DHCP options set, unlike the Linux resolver, you can set the full set of four DNS servers. You can also set the DNS servers independently from the DHCP-supplied IP addresses, similar to the supersede option discussed earlier. This can be accomplished using Active Directory Group Policy or via configuration management tools such as Amazon EC2 Run Command [13] or AWS OpsWorks for Chef Automate [14], mentioned earlier. In addition, the Windows DNS client caches recently resolved queries, which reduces the overall demand on the primary DNS server.

The Windows DNS client service is designed to prompt a dynamic update from the DNS server if a change is made to its IP address information. When prompted, the DNS server updates the host record IP address for that computer (according to RFC 2136). Microsoft DNS provides support for dynamic updates, and this is enabled by default in any Active Directory-integrated DNS zone. When you use a lightweight forwarder like Unbound for Windows instances, note that it does not support these RFC 2136 dynamic updates. If you need dynamic updates, use the Microsoft DNS server as the primary for these instances.

Unbound – Additional Options

Unbound caches results for subsequent queries until the time to live (TTL) expires, after which it forwards the request again. By enabling the prefetch option in Unbound, you can ensure that frequently used records are prefetched before they expire, to keep the cache up to date. Also, if the on-premises DNS server is not available when the cache expires, Unbound returns SERVFAIL. To protect against this situation, you can enable the serve-expired option to serve old responses from the cache, with a TTL of zero in the response, without waiting for the actual resolution to finish; after the resolution is completed, the response is cached for subsequent use.
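A minimal sketch of an Unbound configuration fragment that enables the two options discussed above, written out by a small Python helper; the file path, zone name, and forwarder IPs are placeholders.

from pathlib import Path

# Fragment enabling prefetch and serve-expired, plus a conditional
# forward-zone for an on-premises domain (placeholder name and IPs).
UNBOUND_FRAGMENT = """\
server:
    # Refresh popular records shortly before their TTL expires
    prefetch: yes
    # If the upstream forwarder is unreachable, answer from the expired
    # cache entry with TTL 0 instead of returning SERVFAIL
    serve-expired: yes

forward-zone:
    name: "corp.example.com."
    forward-addr: 10.200.0.10
    forward-addr: 10.200.0.11
    # Unbound's forward-first is off by default; keep it off so private
    # names are never retried against the public DNS hierarchy
    forward-first: no
"""

Path("/etc/unbound/unbound.conf.d/hybrid.conf").write_text(UNBOUND_FRAGMENT)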
DNS Forwarder – Forward First

Some DNS servers (notably BIND) include a forward first option, enabled by default, which causes the server to query the forwarder first and, if there is no response, to recursively retry the internet DNS servers. For private DNS domains in this scenario, the internet DNS servers return an authoritative NXDOMAIN (a nonexistent internet or intranet domain name), or they return the public address if you're using split-horizon DNS for public zones, which is used to provide different answers for private versus public IP addresses. Therefore, it is critical to specify the forward only option, which ensures that retries are made against the forwarders and that you never see the response from public name servers. The Unbound DNS server has the forward first option disabled by default.

DNS Server Resiliency

The solutions in this whitepaper are intended to provide high availability in the event that there is an issue with your primary DNS server. However, several factors can prevent or delay this failover from occurring, including, but not limited to, the timeout value in resolv.conf, configuration issues with the superseded DNS, or incorrect DHCP options set settings. In some cases, these factors could impact the availability of applications that depend on name resolution. There are a few simple approaches that help ensure the resilience of your forwarders when there is an issue with the underlying hardware or instance software. While these approaches don't eliminate the need for well-architected design, they can help you increase the overall resiliency of your solution.

EC2 Instance Recovery

In the case of an underlying hardware failure of a DNS forwarder instance, you can use EC2 instance recovery to start the instance on a new host. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata. To do this, you can create a CloudWatch alarm that monitors an EC2 instance and automatically recovers the instance if it becomes impaired. You can use the CloudWatch alarm to monitor issues like loss of network connectivity, loss of system power, software issues on the physical host, or hardware issues on the physical host that affect network reachability. For more information about instance recovery, see Recover Your Instance in the Amazon EC2 User Guide for Linux Instances [15]. For step-by-step instructions on using CloudWatch alarms to recover an instance, see Create Alarms That Stop, Terminate, Reboot, or Recover an Instance in the Amazon EC2 User Guide for Linux Instances [16].

Secondary IP Address

In an Amazon VPC, instances can be assigned secondary IP addresses, which are transferrable. If an instance fails, the secondary IP address can be transferred to a standby instance, which avoids the need for every instance to reconfigure its resolver IP addresses. This approach redirects traffic to the healthy instance so that it can respond to DNS queries, and it is appropriate for scenarios where EC2 instance recovery might not provide fast enough recovery or might not be appropriate (for example, an operating system fault or software issue). For more information about working with multiple IP addresses, see Multiple IP Addresses in the Amazon EC2 User Guide for Linux Instances [17].
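A minimal sketch of that failover step, assuming the fleet advertises a well-known secondary resolver IP (placeholder values): reassigning the address to the standby forwarder's network interface moves DNS traffic without touching any client configuration.

import boto3

ec2 = boto3.client("ec2")

# Move the well-known resolver IP from the failed forwarder to the standby.
# AllowReassignment lets the address be taken over even though it is still
# assigned to the failed instance's network interface.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0bbbbbbbbbbbbbbbb",   # standby forwarder's ENI
    PrivateIpAddresses=["10.0.0.53"],             # shared resolver address
    AllowReassignment=True,
)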
Conclusion

For organizations with on-premises resources, operating in a hybrid architecture is a necessary part of the cloud adoption process, so architecture patterns that streamline this transition are essential for success. We discussed concepts as well as constraints to help you obtain a better understanding of the fundamental building blocks of the solutions provided here, as well as the limitations that shape the most optimal solution for your workload. The solutions provided included how to use Route 53 Resolver endpoints with conditional forwarding rules, how to set up secondary DNS in an Amazon VPC with AWS Lambda and Route 53 private hosted zones, and solutions leveraging decentralized forwarders using the Unbound DNS server. We also provided guidance on how to select the appropriate solution for your intended workload. Finally, we examined some additional considerations to help you tailor your solution for different workload requirements, faster failover, and better DNS server resiliency. By using the architectures provided, you can achieve ideal private DNS interoperability between your on-premises environments and your Amazon VPC.

Contributors

Contributors to this document include:
• Anthony Galleno, Senior Technical Account Manager
• Gavin McCullagh, Principal Systems Development Engineer
• Gokul Bellala Kuppuraj, Technical Account Manager
• Harsha Warrdhan Sharma, Technical Account Manager
• James Devine, Senior Specialist Solutions Architect
• Justin Davies, Principal Network Specialist
• Maritza Mills, Senior Product Manager, Technical
• Sohamn Chaterjee, Cloud Infrastructure Architect

Document Revisions

Date – Description
November 2019 – Minor edits
September 2019 – Fourth publication
June 2018 – Third publication
November 2017 – Second publication
October 2017 – First publication

Notes

1. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html#DHCPOptionSets
2. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html#AmazonDNS
3. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
4. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html
5. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#security-group-connection-tracking
6. https://aws.amazon.com/blogs/security/how-to-set-up-dns-resolution-between-on-premises-networks-and-aws-by-using-unbound/
7. https://aws.amazon.com/blogs/compute/powering-secondary-dns-in-a-vpc-using-aws-lambda-and-amazon-route-53-private-hosted-zones/
8. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests
9. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
10. https://aws.amazon.com/ec2/run-command/
11. https://aws.amazon.com/opsworks/chefautomate/
12. http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
13. https://aws.amazon.com/ec2/run-command/
14. https://aws.amazon.com/opsworks/chefautomate/
15. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
16. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingAlarmActions.html
17. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html