A. Increase the concurrency limit for Lambda functions and configure notification alerts to be sent by Amazon CloudWatch when the ConcurrentExecutions metric approaches the limit.
B. Configure notification alerts for the limit of transactions per second on the API Gateway endpoint and create a Lambda function that will increase this limit, as needed.
C. Shard users to Amazon Cognito user pools in multiple AWS Regions to reduce user authentication latency.
D. Use DynamoDB strongly consistent reads to ensure the latest data is always returned to the client application.
Solution
Explanation
Intermittent HTTP 502 (bad gateway) errors can be returned by API Gateway when the backend Lambda function exceeds its concurrency limit. B is incorrect because, in that case, API Gateway would return a 429 (too many requests) error. C is incorrect because the error occurs when calling the API Gateway endpoint, not during the authentication process. D is incorrect because stale data would not cause a bad gateway error.
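As a rough sketch of option A, the alarm could be created with boto3 as shown below; the alarm name, SNS topic ARN, and threshold are placeholders, and the 80% threshold assumes an account concurrency limit of 1,000.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when account-level concurrent executions approach the (assumed) 1,000 limit.
# The topic ARN and threshold below are illustrative placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-near-limit",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",   # account-level metric when no dimensions are set
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=800,                       # 80% of an assumed 1,000 concurrent execution limit
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```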
References
1. https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/
2. https://aws.amazon.com/premiumsupport/knowledge-center/lambda-troubleshoot-throttling/
3. https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
4. https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
A. Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
B. Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.
C. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.
D. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security.
E. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security.
F. Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.
Solution
Explanation
This is a straightforward migration to the cloud with a standard set of security, performance, and reliability requirements. To meet them, an Auto Scaling group should be created across multiple Availability Zones for the web layer, placed behind an Application Load Balancer that distributes incoming connections. For the database layer, an Aurora MySQL DB cluster with an Aurora Replica in another AZ provides Multi-AZ failover, keeping the database layer highly available while reducing maintenance. To improve performance for global users, host static content in Amazon S3 and use the Amazon CloudFront CDN to cache the content at edge locations around the world. Adding AWS WAF provides additional security.
A. Deploy a hot standby of the application tiers to another Region.
B. Use AWS DMS to replicate the Aurora DB to an RDS database in another Region.
C. Create daily snapshots of the EC2 instances and replicate to another Region.
D. Create a cross-Region Aurora MySQL Replica of the database.
Solution
Explanation
The recovery time objective (RTO) defines how quickly a service must be restored, and the recovery point objective (RPO) defines how much data it is acceptable to lose. For example, an RTO of 30 minutes means the service must be running again within half an hour, and an RPO of 5 minutes means no more than 5 minutes’ worth of data can be lost. To achieve these requirements in this scenario, a hot standby of the EC2 instances is required. With a hot standby, a minimum number of application/web servers are kept running and can be scaled out as required. For the data tier, an Amazon Aurora cross-Region MySQL Replica is the best way to ensure that less than 5 minutes of data is lost. You can promote an Aurora MySQL Replica to a standalone DB cluster, which would be done in the event of a disaster affecting the source DB cluster.

“Use AWS DMS to replicate the Aurora DB to an RDS database in another Region” is incorrect. There is no need to use AWS Database Migration Service (DMS) or to replicate data to an RDS database; Aurora can provide the required functionality natively.

“Create daily snapshots of the EC2 instances and replicate to another Region” is incorrect. Snapshots could be used to create an AMI and launch EC2 instances in the second Region. However, depending on the specifics of the application, this could take longer than 30 minutes, and daily snapshots would not meet the 5-minute RPO.

Taking snapshots of the Aurora database every 5 minutes is also unnecessary. Aurora backs up your cluster volume automatically and retains restore data for the length of the backup retention period; snapshots are used to retain data for longer than the retention period and cost extra.
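For illustration only, the failover step described above (promoting the cross-Region Aurora MySQL Replica to a standalone cluster during a disaster) might look like the following boto3 sketch; the Region and cluster identifier are assumptions.

```python
import boto3

# Run against the secondary (DR) Region where the cross-Region replica lives.
rds = boto3.client("rds", region_name="us-west-2")

# Promote the read replica cluster to a standalone, writable DB cluster.
rds.promote_read_replica_db_cluster(
    DBClusterIdentifier="myapp-aurora-replica"   # hypothetical replica cluster name
)

# Check the cluster status before repointing the application at the new writer endpoint.
status = rds.describe_db_clusters(
    DBClusterIdentifier="myapp-aurora-replica"
)["DBClusters"][0]["Status"]
print(f"Cluster status after promotion request: {status}")
```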
A. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes.
B. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
C. Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.
D. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
E. Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location.
F. Run the web and application tiers in both Regions in an active/passive configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
Solution
Explanation
“Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources” is incorrect. Spot Instances may not be available if the configured maximum price is exceeded, which could result in instances not being available when needed.

“Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location” is incorrect. An Aurora database is a relational database, not a NoSQL database. Also, an Aurora global database does not allow writes in multiple Regions, only reads.

“Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes” is incorrect. A weighted routing policy would need to be updated manually to change the weightings, and the TTL here is too high because clients will cache the result for 30 minutes.
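A minimal sketch of the failover routing policy with a 30-second TTL (option B), assuming the two regional endpoints are CNAME targets and that a health check for the primary Region already exists; the hosted zone ID, record name, health check ID, and DNS values are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Primary and secondary failover records for the same name, each with a 30-second TTL.
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",            # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 30,
                    "HealthCheckId": "hc-primary-placeholder",   # existing health check (placeholder ID)
                    "ResourceRecords": [{"Value": "alb-primary.us-east-1.elb.amazonaws.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 30,
                    "ResourceRecords": [{"Value": "alb-dr.us-west-2.elb.amazonaws.com"}],
                },
            },
        ]
    },
)
```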
A. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions. Setup an AWS Direct Connect connection.
B. Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Replace the web and application tiers with AWS Lambda functions, create an Amazon SQS queue.
C. Configure an Aurora global database for storage-based cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources and create Amazon CloudFront distributions.
D. Use Amazon Route 53 with a latency-based routing policy. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions.
E. Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Configure cross-Region replication for the S3 buckets.
Solution
Explanation
Website performance can be optimized for global users through a combination of Amazon CloudFront delivering static assets and EC2 instances launched in Auto Scaling groups in multiple Regions. To direct traffic to the correct instances, latency-based routing policies can be created in Route 53, which route traffic to the closest (lowest-latency) AWS Region. The database layer can be configured as an Aurora global database. This configuration replicates data with no impact on database performance, enables fast local reads with low latency in each Region, and provides disaster recovery from Region-wide outages. The solution is optimized for read performance; the application will need to be updated to write to the primary Aurora database and send reads to the local endpoints.

“Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Replace the web and application tiers with AWS Lambda functions, create an Amazon SQS queue” is incorrect. There is no need to migrate database engines or to replace the EC2 instances with Lambda functions. With Auto Scaling, the EC2 instances will provide adequate performance, and we don’t know how long the processes run or whether they can be migrated to serverless functions. There is also no mention of lowering costs and no requirement for decoupling tiers.

“Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions. Setup an AWS Direct Connect connection” is incorrect. An AWS Direct Connect (DX) connection does not make sense here; DX connections are used for optimizing network performance between on-premises data centers and AWS.

“Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Configure cross-Region replication for the S3 buckets” is incorrect. S3 cannot be used as a replacement for the web servers because they serve dynamic websites, and S3 can only host a static website.
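As an illustrative sketch of the Aurora global database piece (all identifiers below are hypothetical), an existing regional Aurora MySQL cluster can be made the primary of a global database and a read-only secondary cluster added in another Region:

```python
import boto3

# Create the global database from an existing primary cluster in us-east-1.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="myapp-global",   # hypothetical global cluster name
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:myapp-primary",
)

# Add a secondary (read-only) cluster in eu-west-1 for low-latency local reads.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="myapp-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="myapp-global",
)
rds_secondary.create_db_instance(
    DBInstanceIdentifier="myapp-secondary-instance-1",
    DBClusterIdentifier="myapp-secondary",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)
```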
A. Use Amazon EBS cross-Region replication to create an AMI for each application, run the AMI on Amazon EC2.
B. Refactor the applications to Docker containers and deploy them to an Amazon ECS cluster behind an Application Load Balancer.
C. Use AWS SMS to create an AMI for each virtual machine, run the AMI on Amazon EC2.
D. Deploy each application to a single-instance AWS Elastic Beanstalk environment without a load balancer.
Solution
Explanation
The simplest option is to upload the application code to Elastic Beanstalk. This results in a managed environment that runs on Amazon EC2 instances. Elastic Beanstalk is best suited for running web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. A load balancer should not be used because there is only a single instance of each application, so a load balancer would offer few advantages and would increase the cost.

“Refactor the applications to Docker containers and deploy them to an Amazon ECS cluster behind an Application Load Balancer” is incorrect. This requires refactoring the applications, which entails operational overhead. Also, with over 100 single-container applications behind a single ALB, requests would be distributed randomly rather than directed to the correct application. Complex path-based routing and target group configurations might resolve this, but it becomes very complex with little advantage; it would be better to use Route 53 to direct traffic to the correct containers.

“Use AWS SMS to create an AMI for each virtual machine, run the AMI on Amazon EC2” is incorrect. This would work, but an operationally simpler approach is to take the application code and deploy it to Elastic Beanstalk.
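A minimal sketch of creating one such single-instance environment without a load balancer; the application name, environment name, version label, and solution stack name are placeholders (the stack name must match a currently supported platform).

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Single-instance environment: no load balancer is created, keeping cost low.
eb.create_environment(
    ApplicationName="legacy-app-042",         # hypothetical application
    EnvironmentName="legacy-app-042-prod",
    VersionLabel="v1",                        # previously uploaded application version
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",  # example platform
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "SingleInstance",        # omit the load balancer
        },
    ],
)
```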
A. Use Amazon CloudFront with Amazon S3 to host the web application. Use Amazon API Gateway to build the application APIs with AWS Lambda for the custom authorizer. Authorize data access by performing user lookup in AWS Managed Microsoft AD.
B. Use Amazon CloudFront with Amazon FSx to host the web application. Use AWS AppSync to build the application APIs. Use IAM groups for RBAC. Authorize data access by leveraging IAM groups in AWS AppSync resolvers.
C. Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers.
D. Use Amazon CloudFront with Amazon EC2 to host the web application. Use Amazon API Gateway to build the application APIs. Use AWS Lambda for custom authentication and authorization. Authorize data access by leveraging IAM roles.
Solution
Explanation
CloudFront with S3 provides a low-latency solution for global users to access the web application. AWS AppSync can provide a GraphQL API that queries multiple databases, microservices, and APIs (allowing the retrieval of data from multiple data sources). Amazon Cognito groups can be used to create collections of users to manage their permissions or to represent different types of users, and you can assign an AWS Identity and Access Management (IAM) role to a group to define the permissions for its members. AWS AppSync GraphQL resolvers connect the fields in a type’s schema to a data source; resolvers are the mechanism by which requests are fulfilled. Cognito groups can be evaluated in resolvers to provide authorization based on identity.
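A brief sketch of the Cognito group setup that the resolvers would then evaluate; the user pool ID, group name, username, and role ARN are placeholders. Group membership appears in the user's ID token as the cognito:groups claim, which an AppSync resolver can check when authorizing data access.

```python
import boto3

cognito = boto3.client("cognito-idp")

# Create a group used for role-based access control decisions in AppSync resolvers.
cognito.create_group(
    GroupName="Admins",
    UserPoolId="us-east-1_EXAMPLE",            # hypothetical user pool
    Description="Users allowed to access administrative data",
    RoleArn="arn:aws:iam::123456789012:role/AdminsGroupRole",  # optional IAM role for the group
)

# Add an existing user to the group; the group shows up in the cognito:groups token claim.
cognito.admin_add_user_to_group(
    UserPoolId="us-east-1_EXAMPLE",
    Username="alice",
    GroupName="Admins",
)
```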
A. Create an AWS WAF web ACL with a rule to allow access from the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
B. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.
C. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the GET method.
D. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method.
Solution
Explanation
An API Gateway usage plan applies a throttling limit and a request quota to clients, and the API key associated with the usage plan ensures that only clients who include the API key in their requests are granted access. This solution requires that the IP addresses of the client companies are whitelisted in the AWS WAF web ACL and that the API key is distributed to those clients to use in their requests to the API.

“Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method” is incorrect. AWS WAF rate-based rules count requests from each client IP over a trailing, continuously updated 5-minute period, so they cannot enforce a per-day limit. Also, an OAI is a special CloudFront user that is used with Amazon S3 buckets to prevent direct access using S3 URLs. It is usually used along with other protections such as signed URLs and signed cookies, and it is not possible to use an OAI with API Gateway APIs.
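A rough boto3 sketch of the usage-plan portion of this answer; the API ID, stage, limits, and key name are placeholders chosen for illustration.

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan with a throttling limit and a daily request quota for the partner companies.
plan = apigw.create_usage_plan(
    name="partner-reporting-plan",
    throttle={"rateLimit": 10.0, "burstLimit": 20},        # requests per second / burst
    quota={"limit": 1000, "period": "DAY"},                # illustrative daily request quota
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # hypothetical API and stage
)

# API key distributed to the partner companies and attached to the usage plan.
key = apigw.create_api_key(name="partner-key", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```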
References
1. https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies.html
2. https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-based.html
3. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
4. https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-aws-waf.html
A. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message to an Amazon SNS topic to alert the cloud team.
B. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80%, use AWS Budgets to send an alert to the cloud team.
C. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the ListServiceQuotas API. If any service utilization is above 80%, publish a message to an Amazon SNS topic to alert the cloud team.
D. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above 80%, publish a message to an Amazon SNS topic to alert the cloud team.
Solution
Explanation
Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. AWS Budgets is used for tracking and alerting on spend in your AWS account; it is not a messaging service for alerting. The GetServiceQuota API simply returns the current quota; it does not return data showing how much of that quota has been utilized. The ListServiceQuotas API lists the service quotas applied to a specified service; it also does not return utilization data.
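A hedged sketch of the Lambda function behind option A. It assumes a Business or Enterprise support plan (required for the Trusted Advisor Support API); the SNS topic ARN and the metadata column positions for the Service Limits check are assumptions that should be verified against the actual check output.

```python
import boto3

# The AWS Support API (Trusted Advisor) is only available in us-east-1.
support = boto3.client("support", region_name="us-east-1")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:service-limit-alerts"  # placeholder


def lambda_handler(event, context):
    # Find the "Service Limits" Trusted Advisor check.
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    limits_check = next(c for c in checks if c["name"] == "Service Limits")

    result = support.describe_trusted_advisor_check_result(checkId=limits_check["id"])
    breached = []
    for resource in result["result"].get("flaggedResources", []):
        meta = resource.get("metadata", [])
        # Assumed metadata layout: [region, service, limit name, limit amount, current usage, status]
        try:
            limit, usage = float(meta[3]), float(meta[4])
        except (IndexError, TypeError, ValueError):
            continue
        if limit > 0 and usage / limit >= 0.8:
            breached.append(f"{meta[1]} {meta[2]} in {meta[0]}: {usage:.0f}/{limit:.0f}")

    if breached:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Service limits above 80% utilization",
            Message="\n".join(breached),
        )
    return {"breached": len(breached)}
```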
A. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If any credentials are found, disable them and notify the user.
B. Run a cron job on an Amazon EC2 instance to check the CodeCommit repositories for unsecured credentials. If any unsecured credentials are found, generate new credentials and store them in AWS KMS.
C. Use AWS Trusted Advisor to check for unsecured AWS credentials. If any unsecured credentials are found, use AWS Secrets Manager to rotate the credentials.
D. Create an Amazon Macie job that scans AWS CodeCommit repositories for credentials. If any credentials are found an AWS Lambda function should be triggered that disables the credentials.
Solution
Explanation
You can configure a CodeCommit repository so that code pushes or other events trigger actions, such as sending a notification from Amazon Simple Notification Service (Amazon SNS) or invoking a function in AWS Lambda. You can create up to 10 triggers for each CodeCommit repository. In this case you can trigger AWS Lambda to scan the code for access keys. Secrets Manager cannot be used for rotating secret access keys. EC2 is not a very cost-effective option for running this task and KMS is used for storing encryption keys, not secret access keys. Macie scans Amazon S3 buckets but you cannot see the S3 bucket assigned to CodeCommit as it’s an AWS managed service bucket.
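A small sketch of wiring the CodeCommit trigger to the scanning Lambda function; the repository name and function ARN are placeholders, and the Lambda function's resource policy must also allow codecommit.amazonaws.com to invoke it.

```python
import boto3

codecommit = boto3.client("codecommit")

# Invoke the credential-scanning Lambda function on every push to the repository.
codecommit.put_repository_triggers(
    repositoryName="internal-app-repo",   # hypothetical repository
    triggers=[
        {
            "name": "scan-for-credentials",
            "destinationArn": "arn:aws:lambda:us-east-1:123456789012:function:scan-credentials",
            "branches": [],                # empty list means all branches
            "events": ["all"],
        }
    ],
)
```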
A. Use Amazon EC2 Auto Scaling with a standard AMI. Use a user data script to download the static data from an Amazon S3 bucket. Update the OS patches with AWS Systems Manager.
B. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount a shared Amazon EBS volume with the static data to the EC2 instances at launch time.
C. Use Amazon EC2 Auto Scaling with an AMI that includes the static data. Update the OS patches with AWS Systems Manager.
D. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount an Amazon EFS file system with the static data to the EC2 instances at launch time.
Solution
Explanation
You can mount a shared EBS volume, but with certain constraints, including that the instances must be in the same AZ. Downloading the data from S3 at launch takes time and also means each instance keeps a copy in its EBS volume, increasing data storage costs. “Use Amazon EC2 Auto Scaling with an AMI that includes the static data. Update the OS patches with AWS Systems Manager” is incorrect. This also increases data storage costs because the data is included in the EBS volumes of every instance. Using an AMI that already includes the latest OS patches and mounting an Amazon EFS file system with the static data at launch time gives every instance, in any Availability Zone, access to a single shared copy of the data while minimizing storage costs and startup time.
A. Build an API using Docker containers running on AWS Fargate in multiple Regions behind Application Load Balancers. Use an Amazon Route 53 latency-based routing policy. Use Amazon Cognito to provide user management authentication functions.
B. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an Amazon CloudFront distribution with the S3 bucket as the origin. Use Amazon Cognito to provide user management authentication functions.
C. Build an API using Docker containers running on Amazon ECS behind an Amazon CloudFront distribution. Use AWS Secrets Manager to provide user management authentication functions.
D. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an AWS WAF Web ACL and attach it for DDoS attack mitigation. Use Amazon Cognito to provide user management authentication functions.
Solution
Explanation
“Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an AWS WAF Web ACL and attach it for DDoS attack mitigation. Use Amazon Cognito to provide user management authentication functions” is incorrect. A WAF web ACL will not provide DDoS attack mitigation; use AWS Shield instead, or use CloudFront with WAF. CloudFront also offers basic DDoS protection, with AWS Shield Standard included for free when used with CloudFront. Amazon Cognito is ideal for implementing user authentication for this type of application, and you can integrate with social IdPs.
A. Configure the S3 bucket policy to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.
B. Configure the EMR cluster to use an AWS KMS managed CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.
C. Configure the S3 bucket policy to permit access to the Amazon EMR cluster only.
D. Configure the EMR cluster to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and a NAT gateway to access AWS KMS.
E. Configure the EMR cluster to use an AWS CloudHSM appliance for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3.
Solution
Explanation
This solution includes using a gateway VPC endpoint for S3 and a bucket policy that restricts access to the gateway endpoint. This is the best way to ensure that traffic to the secure S3 bucket avoids the internet and is locked down to the correct source. Encrypting the data at rest with an AWS KMS managed CMK, together with an interface VPC endpoint for AWS KMS, keeps the key-management traffic within the VPC as well.

“Configure the S3 bucket policy to permit access to the Amazon EMR cluster only” is incorrect. The bucket policy should restrict access to the VPC endpoint only, not the EMR cluster (there is no bucket policy condition for EMR clusters).
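A sketch of the bucket policy restriction; the bucket name and VPC endpoint ID are placeholders, and the policy denies any access that does not arrive through the gateway endpoint.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "secure-emr-data-bucket"        # hypothetical bucket
VPCE_ID = "vpce-0123456789abcdef0"       # hypothetical gateway endpoint ID

# Note: a blanket deny like this also blocks console access and any principal
# that does not reach S3 through the named endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessExceptFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": VPCE_ID}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```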
A. An inbound rule in WebAppSG allowing port 80 from source ALB-SG.
B. An inbound rule in ALB-SG allowing port 80 from WebAppSG.
C. An outbound rule in WebAppSG allowing ports 1024 to 65535 to destination ALB-SG.
D. An inbound rule in ALB-SG allowing port 80 from source 0.0.0.0/0.
E. An outbound rule in ALB-SG allowing ports 1024-65535 to destination 0.0.0.0/0.
Solution
Explanation
ALB-SG: an inbound rule to allow port 80 from 0.0.0.0/0, and an outbound rule to allow port 80 to WebAppSG (and the health check port, if different). WebAppSG: an inbound rule to allow port 80 from the security group ID of ALB-SG. Outbound rules are not necessary because the response traffic to the ALB is allowed by default (rules may be required for security updates, etc.).
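For illustration, the two required rules could be created as follows; the security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ALB_SG = "sg-0aaa1111bbbb2222c"      # placeholder ALB security group
WEBAPP_SG = "sg-0ddd3333eeee4444f"   # placeholder web app security group

# ALB-SG: allow HTTP from anywhere on the internet.
ec2.authorize_security_group_ingress(
    GroupId=ALB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# WebAppSG: allow HTTP only from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId=WEBAPP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    }],
)
```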
A. Change to a failover routing policy in Amazon Route 53 and configure active-active failover. Write a custom health check that verifies successful access to the Application Load Balancers in each Region.
B. Write a custom health check that queries the AWS Service Dashboard API to verify the Amazon RDS service is healthy in each Region.
C. Set the value of Evaluate Target Health to Yes on the latency alias resources for both us-east-2 and us-west-1.
D. Set the value of Evaluate Target Health to Yes on the failover alias resources for both us-east-2 and us-west-1.
E. Write a custom health check that verifies successful access to the database endpoints in each Region. Add the health check within the latency-based routing policy in Amazon Route 53.
Solution
Explanation
The problem is that the database layer has failed (simulated), but the health checks (if there are any) are not checking access to the database. This can be resolved by creating a custom health check that connects to the RDS database endpoints in each Region and verifies that they are accessible. If a health check fails, Route 53 will no longer send traffic to the deployment in that Region.

“Change to a failover routing policy in Amazon Route 53 and configure active-active failover. Write a custom health check that verifies successful access to the Application Load Balancers in each Region” is incorrect. Checking that the ALBs are accessible does not ensure that the database layer is accessible, which is the problem in this scenario.

“Set the value of Evaluate Target Health to Yes on the failover alias resources for both us-east-2 and us-west-1” is incorrect. There is no need to change to failover routing, and this configuration would lose some of the performance benefits of latency-based routing.

“Write a custom health check that queries the AWS Service Dashboard API to verify the Amazon RDS service is healthy in each Region” is incorrect. This could return a result showing that the RDS service is available even though the specific endpoint is not.
A. Launch the Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.
B. Launch the Amazon EC2 instances in a private subnet. Configure HTTP_PROXY application parameters to send outbound connections to an EC2 proxy server in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.
C. Launch the Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address to the internet gateway that can be whitelisted on the external API service.
D. Launch the Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the external API service.
Solution
Explanation
The simplest solution is to use a NAT gateway in a public subnet with an Elastic IP address assigned. This address can be whitelisted in the external API service because all connections will appear to come from the NAT gateway’s Elastic IP address.

“Launch the Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address to the internet gateway that can be whitelisted on the external API service” is incorrect. You cannot assign an Elastic IP address to an internet gateway.

“Launch the Amazon EC2 instances in a private subnet. Configure HTTP_PROXY application parameters to send outbound connections to an EC2 proxy server in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service” is incorrect. Even if set up correctly, a NAT gateway is a simpler solution than an Amazon EC2 proxy instance and requires no modifications to the company’s application.

“Launch the Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service” is incorrect. This solution addresses a problem in the previous answer by ensuring that VPC connections are not routed via the proxy, but it still requires application modification and introduces unnecessary complexity.
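An illustrative sketch of the NAT gateway setup; the subnet, route table, and other identifiers are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP; this is the address the external API provider whitelists.
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT gateway in a public subnet using the Elastic IP.
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234567890abc",      # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[natgw["NatGateway"]["NatGatewayId"]]
)

# Route internet-bound traffic from the private subnets through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0priv123456789abc",     # placeholder private subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw["NatGateway"]["NatGatewayId"],
)
```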