Question 7
A retail company runs a serverless mobile app built on Amazon API Gateway, AWS Lambda, Amazon Cognito, and Amazon DynamoDB. During heavy holiday traffic spikes, the company receives complaints of intermittent system failures. Developers find that the API Gateway endpoint is returning 502 Bad Gateway errors to seemingly valid requests. Which method should be used to address this issue?

A. Increase the concurrency limit for Lambda functions and configure notification alerts to be sent by Amazon CloudWatch when the ConcurrentExecutions metric approaches the limit.

B. Configure notification alerts for the limit of transactions per second on the API Gateway endpoint and create a Lambda function that will increase this limit, as needed.

C. Shard users to Amazon Cognito user pools in multiple AWS Regions to reduce user authentication latency.

D. Use DynamoDB strongly consistent reads to ensure the latest data is always returned to the client application.

Solution

Correct: A

Explanation

Intermittent 502 Bad Gateway errors can be returned by API Gateway when the backing Lambda function exceeds its concurrency limit. B is incorrect because, in that case, API Gateway would return a 429 Too Many Requests error. C is incorrect because the error occurs when calling the API Gateway endpoint, not during the authentication process. D is incorrect because stale data would not cause a bad gateway error.
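
For reference, a minimal boto3 sketch of the alarm described in option A, assuming a hypothetical SNS topic and a threshold of 80% of an assumed 1,000 concurrent execution limit:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-approaching-limit",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",   # account-level metric, no dimensions needed
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=800,                       # ~80% of an assumed 1,000 concurrency limit
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```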

Question 16
An eCommerce company runs a successful website with a growing base of customers. The website is becoming popular internationally and demand is increasing quickly. The website is currently hosted in an on-premises data center with web servers and a MySQL database. The company plans to migrate the workloads to AWS. A Solutions Architect has been asked to create a solution that improves security, improves reliability, improves availability, reduces latency, and reduces maintenance. Which combination of steps should the Solutions Architect take to meet these requirements? (Select THREE.)

A. Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.

B. Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.

C. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.

D. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security.

E. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security.

F. Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.

Solution

Correct: B, D, F

Explanation

This is a straightforward migration to the cloud with a standard set of security, performance, and reliability requirements. To meet these requirements, an Auto Scaling group should be created across multiple AZs for the web layer, placed behind an ALB that distributes incoming connections. For the database layer, an Aurora MySQL DB cluster with an Aurora Replica in another AZ provides Multi-AZ failover. This ensures the DB layer is highly available and reduces maintenance. To improve performance for global users, static content can be hosted in Amazon S3 and cached by the Amazon CloudFront CDN in Edge Locations around the world. Adding AWS WAF provides additional security.
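
A minimal boto3 sketch of the database layer in option B, creating an Aurora MySQL cluster with a writer and a reader in separate Availability Zones (identifiers, credentials, and instance class are placeholders):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the Aurora MySQL cluster (placeholder credentials and identifiers).
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="ChangeMe12345",
    DatabaseName="shop",
)

# A writer in one AZ and a reader in a second AZ provide Multi-AZ failover.
for name, az in [("ecommerce-aurora-1", "us-east-1a"), ("ecommerce-aurora-2", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="ecommerce-aurora",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
        AvailabilityZone=az,
    )
```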

Question 18
An application consists of three tiers within a single Region. A Solutions Architect is designing a disaster recovery strategy that includes an RTO of 30 minutes and an RPO of 5 minutes for the data tier. Application tiers use Amazon EC2 instances and are stateless. The data tier consists of a 30 TB Amazon Aurora database. Which combination of steps satisfies the RTO and RPO requirements while optimizing costs? (Select TWO.)

A. Deploy a hot standby of the application tiers to another Region.

B. Use AWS DMS to replicate the Aurora DB to an RDS database in another Region.

C. Create daily snapshots of the EC2 instances and replicate to another Region.

D. Create a cross-Region Aurora MySQL Replica of the database.

Solution

Correct: A, D

Explanation

The recovery time objective (RTO) defines how quickly a service must be restored and the recovery point objective (RPO) defines how much data it is acceptable to lose. For example, an RTO of 30 minutes means the service must be running again within half an hour, and an RPO of 5 minutes means no more than 5 minutes' worth of data can be lost. To achieve these requirements in this scenario, a hot standby of the application tiers is required. With a hot standby, a minimum number of application/web servers is kept running and can be scaled out as required. For the data tier, an Amazon Aurora cross-Region MySQL Replica is the best way to ensure that less than 5 minutes of data is lost. You can promote an Aurora MySQL Replica to a standalone DB cluster, which would be done in the event of a disaster affecting the source DB cluster. "Use AWS DMS to replicate the Aurora DB to an RDS database in another Region" is incorrect. There is no need to use AWS Database Migration Service (DMS) or to replicate data to an RDS database; Aurora can provide the required functionality natively. "Create daily snapshots of the EC2 instances and replicate to another Region" is incorrect. Snapshots could be used to create an AMI and launch EC2 instances in the second Region; however, depending on the specifics of the application, this could take longer than 30 minutes. Creating frequent snapshots of the Aurora database is also unnecessary: Aurora backs up your cluster volume automatically and retains restore data for the length of the backup retention period, and snapshots are only needed to retain data for longer than the retention period at extra cost.
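
A minimal boto3 sketch of option D, creating a cross-Region Aurora MySQL replica cluster in a DR Region (ARNs and identifiers are placeholders; an encrypted source cluster would also need a KmsKeyId):

```python
import boto3

# Create the replica cluster in the DR Region, pointing at the source cluster ARN.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.create_db_cluster(
    DBClusterIdentifier="app-aurora-dr",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:app-aurora",
)

# Add a reader instance so the replica cluster can serve traffic and be promoted
# (promote_read_replica_db_cluster) during a disaster.
rds_dr.create_db_instance(
    DBInstanceIdentifier="app-aurora-dr-1",
    DBClusterIdentifier="app-aurora-dr",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```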

Question 27
A Solutions Architect needs to design the architecture for an application that requires high availability within and across AWS Regions. The design must support failover to the second Region within 1 minute and must minimize the impact on the user experience. The application will include three tiers, the web tier, application tier and NoSQL data tier. Which combination of steps will meet these requirements? (Select THREE.)

A. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes.

B. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.

C. Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.

D. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.

E. Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location.

F. Run the web and application tiers in both Regions in an active/passive configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.

Solution

Correct: B, C, F

Explanation

Failover routing in Route 53 with a low TTL (30 seconds) allows clients to be redirected to the disaster recovery Region quickly, and a DynamoDB global table allows the NoSQL data tier to accept reads and writes in both Regions. Running the web and application tiers in both Regions in an active/passive configuration, with zonal Reserved Instances for the baseline servers and On-Demand Instances for additional capacity, meets the availability requirement while controlling cost. "Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources" is incorrect. Spot Instances may not be available if the maximum price configured is exceeded, which could result in instances not being available when needed. "Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location" is incorrect. An Aurora database is a relational DB, not a NoSQL DB, and an Aurora global database only allows writes in the primary Region. "Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes" is incorrect. A weighted routing policy would need to be manually updated to change the weightings, and the TTL is too high because clients would cache the result for 30 minutes.
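
A minimal boto3 sketch of option C, adding a replica Region to an existing DynamoDB table (the table name and Regions are placeholders; the table is assumed to be on the current global tables version):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Adding a replica turns the table into a global table that accepts
# reads and writes in both Regions.
dynamodb.update_table(
    TableName="app-data",
    ReplicaUpdates=[
        {"Create": {"RegionName": "us-west-2"}},
    ],
)
```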

Question 31
An online retailer is updating its catalogue of products. The retailer has a dynamic website which uses EC2 instances for web and application servers. The web tier is behind an Application Load Balancer and the application tier stores data in an Amazon Aurora MySQL database. There is additionally a lot of static content and most website traffic is read-only. The company is expecting a large spike in traffic to the website when the new catalogue is launched and optimal performance is a high priority. Which combination of steps should a Solutions Architect take to reduce system response times for a global audience? (Select TWO.)

A. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions. Set up an AWS Direct Connect connection.

B. Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Replace the web and application tiers with AWS Lambda functions, create an Amazon SQS queue.

C. Configure an Aurora global database for storage-based cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources and create Amazon CloudFront distributions.

D. Use Amazon Route 53 with a latency-based routing policy. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions.

E. Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Configure cross-Region replication for the S3 buckets.

Solution

Correct: C, D

Explanation

The website performance can be optimized for global users through a combination of Amazon CloudFront to deliver static assets and EC2 instances launched in Auto Scaling groups in multiple Regions. To direct traffic to the closest (lowest-latency) Region, latency-based routing policies can be created in Route 53. The database layer can be configured as an Aurora global database. This configuration replicates data with minimal impact on database performance, enables fast local reads with low latency in each Region, and provides disaster recovery from Region-wide outages. This solution is optimized for read performance; the application will need to write to the primary Aurora cluster and send reads to the local endpoints. "Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Replace the web and application tiers with AWS Lambda functions, create an Amazon SQS queue" is incorrect. There is no need to migrate database engines or to replace the EC2 instances with Lambda functions. With Auto Scaling, the EC2 instances will provide adequate performance, and we do not know how long processes run for or whether they can be migrated to serverless functions; there is also no requirement to lower costs or decouple tiers. "Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions. Set up an AWS Direct Connect connection" is incorrect. An AWS Direct Connect (DX) connection does not make sense here; DX connections are used for optimizing network performance between on-premises data centers and AWS. "Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Configure cross-Region replication for the S3 buckets" is incorrect. S3 cannot replace the web servers because the website is dynamic and S3 can only host static websites.
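
A minimal boto3 sketch of the Aurora global database portion of option C (identifiers are placeholders; the source cluster is assumed to run an Aurora MySQL version that supports global databases):

```python
import boto3

# Promote an existing Aurora cluster to be the primary of a global database.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="catalogue-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:catalogue-aurora",
)

# Attach a secondary cluster in another Region for fast local reads.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="catalogue-aurora-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="catalogue-global",
)
rds_secondary.create_db_instance(
    DBInstanceIdentifier="catalogue-aurora-eu-1",
    DBClusterIdentifier="catalogue-aurora-eu",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```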

Question 32
A company is closing an on-premises data center and needs to move some business applications to AWS. There are over 100 applications that run on virtual machines in the data center. The applications are simple PHP, Java, Ruby, and Node.js web applications that are not under active development and are not heavily utilized. A Solutions Architect must determine the best approach to migrate these applications to AWS with the LOWEST operational overhead. Which method best fits these requirements?

A. Use Amazon EBS cross-Region replication to create an AMI for each application, run the AMI on Amazon EC2.

B. Refactor the applications to Docker containers and deploy them to an Amazon ECS cluster behind an Application Load Balancer.

C. Use AWS SMS to create an AMI for each virtual machine, run the AMI on Amazon EC2.

D. Deploy each application to a single-instance AWS Elastic Beanstalk environment without a load balancer.

Solution

Correct: D

Explanation

The simplest option is to upload the application code to Elastic Beanstalk. This results in a managed environment that runs on Amazon EC2 instances. Elastic Beanstalk is best suited for running web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. A load balancer should not be used: there is only a single instance of each application, so a load balancer would offer little advantage and would increase cost. "Refactor the applications to Docker containers and deploy them to an Amazon ECS cluster behind an Application Load Balancer" is incorrect. This requires refactoring the applications, which entails operational overhead. Also, with over 100 single-container applications behind a single ALB, requests would be randomly distributed rather than directed to the correct application. Complex path-based routing and target group configurations might resolve this, but it becomes very complex for very little advantage; it is better to use Route 53 to direct traffic to the correct containers. "Use AWS SMS to create an AMI for each virtual machine, run the AMI on Amazon EC2" is incorrect. This would work, but an operationally simpler approach is to take the application code and deploy it to Elastic Beanstalk.
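
A minimal boto3 sketch of option D for one application, creating a single-instance Elastic Beanstalk environment with no load balancer (the application name and solution stack are placeholders; list valid stacks with list_available_solution_stacks):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_application(ApplicationName="legacy-php-app-01")

eb.create_environment(
    ApplicationName="legacy-php-app-01",
    EnvironmentName="legacy-php-app-01-prod",
    SolutionStackName="64bit Amazon Linux 2 v3.5.8 running PHP 8.1",  # placeholder stack
    OptionSettings=[
        {
            # SingleInstance environments run one EC2 instance with no load balancer.
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "SingleInstance",
        },
    ],
)
```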

Question 35
A company wants to host a web application on AWS. The application will be used by users around the world. A Solutions Architect has been given the following design requirements: Allow the retrieval of data from multiple data sources. Minimize the cost of API calls. Reduce latency for user access. Provide user authentication and authorization and implement role-based access control. Implement a fully serverless solution. How can the Solutions Architect meet these requirements?

A. Use Amazon CloudFront with Amazon S3 to host the web application. Use Amazon API Gateway to build the application APIs with AWS Lambda for the custom authorizer. Authorize data access by performing user lookup in AWS Managed Microsoft AD.

B. Use Amazon CloudFront with Amazon FSx to host the web application. Use AWS AppSync to build the application APIs. Use IAM groups for RBAC. Authorize data access by leveraging IAM groups in AWS AppSync resolvers.

C. Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers.

D. Use Amazon CloudFront with Amazon EC2 to host the web application. Use Amazon API Gateway to build the application APIs. Use AWS Lambda for custom authentication and authorization. Authorize data access by leveraging IAM roles.

Solution

Correct: C

Explanation

CloudFront with S3 provides a low-latency solution for global users to access the web application. AWS AppSync can be used to provide a GraphQL API that can be used to query multiple databases, microservices, and APIs (allow the retrieval of data from multiple data sources). Amazon Cognito Groups can be used to create collections of users to manage their permissions or to represent different types of users. You can assign an AWS Identity and Access Management (IAM) role to a group to define the permissions for members of a group. AWS AppSync GraphQL resolvers connect the fields in a type’s schema to a data source. Resolvers are the mechanism by which requests are fulfilled. Cognito groups can be used with resolvers to provide authorization based on identity.
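
A minimal boto3 sketch of creating an AppSync API that authenticates against a Cognito user pool, as in option C (the user pool ID is a placeholder):

```python
import boto3

appsync = boto3.client("appsync")

# GraphQL API authenticated by a Cognito user pool; the group membership in the
# caller's token can then be evaluated in resolvers or schema directives for RBAC.
appsync.create_graphql_api(
    name="web-app-api",
    authenticationType="AMAZON_COGNITO_USER_POOLS",
    userPoolConfig={
        "userPoolId": "us-east-1_EXAMPLE",   # placeholder user pool
        "awsRegion": "us-east-1",
        "defaultAction": "ALLOW",
    },
)
```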

Question 38
A Solutions Architect has deployed a REST API using an Amazon API Gateway Regional endpoint. The API will be consumed by a growing number of US-based companies. Each company will use the API twice each day to get the latest data. Following the deployment of the API the operations team noticed thousands of requests coming from hundreds of IP addresses around the world. The traffic is believed to be originating from a botnet. The Solutions Architect must secure the API while minimizing cost. Which approach should the company take to secure its API?

A. Create an AWS WAF web ACL with a rule to allow access from the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.

B. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.

C. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the GET method.

D. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method.

Solution

Correct: A

Explanation

The AWS WAF web ACL allows access only from the IP addresses used by the known companies, while the usage plan enforces a request limit and the associated API key ensures that only clients including the key in their requests are granted access. This solution requires that the companies' IP addresses are whitelisted and that the API key is distributed to them to use in their requests to the API. "Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method" is incorrect. AWS WAF rate-based rules count requests over a trailing, continuously updated 5-minute period, so they cannot block clients based on requests per day. Additionally, an OAI is a special CloudFront user used with Amazon S3 buckets to prevent direct access using S3 URLs (usually alongside other protections such as signed URLs and signed cookies); it is not possible to use an OAI with API Gateway APIs.
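
A minimal boto3 sketch of the usage plan and API key portion of option A (the API ID, stage, and limits are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

# Usage plan with throttling and a daily quota for the known companies.
plan = apigw.create_usage_plan(
    name="partner-plan",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],   # placeholder API and stage
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 1000, "period": "DAY"},
)

# API key distributed to the companies, tied to the usage plan.
key = apigw.create_api_key(name="partner-key", enabled=True)

apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```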

Question 41
A company uses an AWS account with resources deployed in multiple Regions globally. Operations teams deploy and manage resources within each Region. Some Region-specific service quotas have been reached causing an inability for the local operations teams to deploy resources. A centralized cloud team is responsible for monitoring and updating service quotas. The cloud team needs to create an automated and operationally efficient solution to proactively monitor service quotas. Monitoring should occur every 15 minutes and send alerts when a team exceeds 80% utilization. Which solution will meet these requirements?

A. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message to an Amazon SNS topic to alert the cloud team.

B. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80%, use AWS Budgets to send an alert to the cloud team.

C. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the ListServiceQuotas API. If any service utilization is above 80%, publish a message to an Amazon SNS topic to alert the cloud team.

D. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above 80%, publish a message to an Amazon SNS topic to alert the cloud team.

Solution

Correct: A

Explanation

Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps; its Service Limits check reports current utilization against service limits. AWS Budgets is used for tracking and alerting on spend in your AWS account; it is not a messaging service for alerting. The GetServiceQuota API simply returns the current quota; it does not return data showing how much of that quota has been utilized. The ListServiceQuotas API lists the quotas applied to a specific service and likewise does not return utilization data.
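
A minimal boto3 sketch of the Lambda function in option A, reading the Trusted Advisor Service Limits check and alerting via SNS (the topic ARN is a placeholder, the flagged-resource metadata layout is an assumption, and the Support API requires a Business or Enterprise support plan):

```python
import boto3

support = boto3.client("support", region_name="us-east-1")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:quota-alerts"  # placeholder topic


def handler(event, context):
    # Look up the Service Limits check by name, then fetch its latest result.
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    limits_check = next(c for c in checks if c["name"] == "Service Limits")

    result = support.describe_trusted_advisor_check_result(
        checkId=limits_check["id"], language="en"
    )["result"]

    for resource in result.get("flaggedResources", []):
        # Metadata layout assumed: [region, service, limit name, limit, usage, status]
        region, service, limit_name, limit, usage, status = resource["metadata"][:6]
        if limit and usage and float(usage) / float(limit) >= 0.8:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Service quota above 80% utilization",
                Message=f"{service} '{limit_name}' in {region}: {usage}/{limit}",
            )
```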

Question 42
A security team has discovered that developers have been storing IAM secret access keys in AWS CodeCommit repositories. The security team requires that measures are put in place to automatically find and remediate all instances of this vulnerability on an ongoing basis. Which solution meets these requirements?

A. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If any credentials are found, disable them and notify the user.

B. Run a cron job on an Amazon EC2 instance to check the CodeCommit repositories for unsecured credentials. If any unsecured credentials are found, generate new credentials and store them in AWS KMS.

C. Use AWS Trusted Advisor to check for unsecured AWS credentials. If any unsecured credentials are found, use AWS Secrets Manager to rotate the credentials.

D. Create an Amazon Macie job that scans AWS CodeCommit repositories for credentials. If any credentials are found an AWS Lambda function should be triggered that disables the credentials.

Solution

Correct: A

Explanation

You can configure a CodeCommit repository so that code pushes or other events trigger actions, such as sending a notification from Amazon Simple Notification Service (Amazon SNS) or invoking a function in AWS Lambda. You can create up to 10 triggers for each CodeCommit repository. In this case you can trigger AWS Lambda to scan the pushed code for access keys. Secrets Manager cannot be used to rotate IAM secret access keys. EC2 is not a cost-effective option for running this task, and KMS is used for storing encryption keys, not secret access keys. Macie scans Amazon S3 buckets, and the storage behind CodeCommit is an AWS-managed bucket that is not visible to you, so Macie cannot scan the repositories.
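
A minimal boto3 sketch of the Lambda handler in option A; the trigger event shape and the repository-name lookup from the event ARN are assumptions to verify against your trigger configuration:

```python
import re
import boto3

codecommit = boto3.client("codecommit")
ACCESS_KEY_PATTERN = re.compile(r"(?<![A-Z0-9])AKIA[0-9A-Z]{16}(?![A-Z0-9])")


def handler(event, context):
    # Assumed CodeCommit trigger event shape: Records[0] carries the repo ARN
    # and the pushed reference/commit.
    record = event["Records"][0]
    repo = record["eventSourceARN"].split(":")[5]
    commit_id = record["codecommit"]["references"][0]["commit"]

    # Scan the repository content at the pushed commit for access key IDs.
    diffs = codecommit.get_differences(
        repositoryName=repo, afterCommitSpecifier=commit_id
    )["differences"]

    findings = []
    for diff in diffs:
        after = diff.get("afterBlob")
        if not after:
            continue
        blob = codecommit.get_blob(repositoryName=repo, blobId=after["blobId"])
        text = blob["content"].decode("utf-8", errors="ignore")
        findings += [(after["path"], m) for m in ACCESS_KEY_PATTERN.findall(text)]

    # Next steps (not shown): disable the matched keys via IAM UpdateAccessKey
    # and notify the committer.
    return {"findings": findings}
```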

Question 48
A company needs to deploy an application into an AWS Region across multiple Availability Zones and has several requirements for the deployment. The application requires access to 100 GB of static data before the application starts and must be able to scale up and down quickly. Startup time must be minimized as much as possible. The Operations team must be able to install critical OS patches within 48 hours of release. The solution should also be cost-effective. Which deployment strategy meets these requirements?

A. Use Amazon EC2 Auto Scaling with a standard AMI. Use a user data script to download the static data from an Amazon S3 bucket. Update the OS patches with AWS Systems Manager.

B. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount a shared Amazon EBS volume with the static data to the EC2 instances at launch time.

C. Use Amazon EC2 Auto Scaling with an AMI that includes the static data. Update the OS patches with AWS Systems Manager.

D. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount an Amazon EFS file system with the static data to the EC2 instances at launch time.

Solution

Correct: D

Explanation

Mounting an Amazon EFS file system at launch gives every instance, in any Availability Zone, immediate shared access to the 100 GB of static data without copying it, which minimizes startup time and storage cost, and an AMI that includes the latest OS patches can be rebuilt and rolled out within the 48-hour patching window. "Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount a shared Amazon EBS volume with the static data to the EC2 instances at launch time" is incorrect. A shared EBS volume can be used but only with certain constraints, including that the instances must be in the same AZ. "Use Amazon EC2 Auto Scaling with a standard AMI. Use a user data script to download the static data from an Amazon S3 bucket. Update the OS patches with AWS Systems Manager" is incorrect. Downloading the data from S3 takes time and leaves each instance with its own copy in its EBS volume, increasing data storage costs. "Use Amazon EC2 Auto Scaling with an AMI that includes the static data. Update the OS patches with AWS Systems Manager" is incorrect. This also increases data storage costs because the data is included in the EBS volumes of every instance.
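
A minimal boto3 sketch of option D, baking the EFS mount into a launch template's user data (the AMI, file system ID, and instance type are placeholders; the AMI is assumed to include amazon-efs-utils):

```python
import base64
import boto3

ec2 = boto3.client("ec2")

# User data that mounts the shared EFS file system holding the static data at boot.
USER_DATA = """#!/bin/bash
mkdir -p /mnt/static-data
mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/static-data
"""

ec2.create_launch_template(
    LaunchTemplateName="app-with-static-data",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # AMI built with the latest OS patches
        "InstanceType": "m5.large",
        "UserData": base64.b64encode(USER_DATA.encode()).decode(),
    },
)
```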

Question 51
A company is planning to launch a new web application on AWS using a fully serverless design. The website will be used by global customers and should be highly responsive with minimal latency. The design should be highly available and include baseline DDoS protection against spikes in traffic. Users will log in to the web application using social IdPs such as Google and Amazon. How can the design requirements be met?

A. Build an API using Docker containers running on AWS Fargate in multiple Regions behind Application Load Balancers. Use an Amazon Route 53 latency-based routing policy. Use Amazon Cognito to provide user management authentication functions.

B. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an Amazon CloudFront distribution with the S3 bucket as the origin. Use Amazon Cognito to provide user management authentication functions.

C. Build an API using Docker containers running on Amazon ECS behind an Amazon CloudFront distribution. Use AWS Secrets Manager to provide user management authentication functions.

D. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an AWS WAF Web ACL and attach it for DDoS attack mitigation. Use Amazon Cognito to provide user management authentication functions.

Solution

Correct: B

Explanation

CloudFront with an S3 origin provides a highly available, low-latency way to serve the static web resources to global users, and CloudFront also provides baseline DDoS protection because AWS Shield Standard is included at no extra cost. API Gateway and Lambda keep the API fully serverless, and Amazon Cognito is ideal for implementing user authentication for this type of application, including federation with social IdPs. "Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an AWS WAF Web ACL and attach it for DDoS attack mitigation. Use Amazon Cognito to provide user management authentication functions" is incorrect. A WAF web ACL on its own will not provide DDoS attack mitigation; use AWS Shield, or CloudFront with WAF, instead. This option also omits CloudFront, so global users would not benefit from reduced latency.
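
A minimal boto3 sketch of the Cognito social IdP piece of option B, attaching Google to an existing user pool (the user pool ID and OAuth client credentials are placeholders):

```python
import boto3

cognito = boto3.client("cognito-idp")

# Attach Google as a social identity provider to an existing user pool.
cognito.create_identity_provider(
    UserPoolId="us-east-1_EXAMPLE",                     # placeholder user pool
    ProviderName="Google",
    ProviderType="Google",
    ProviderDetails={
        "client_id": "google-oauth-client-id",          # placeholder
        "client_secret": "google-oauth-client-secret",  # placeholder
        "authorize_scopes": "openid email profile",
    },
    AttributeMapping={"email": "email"},
)
```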

Question 52
A company is creating a secure data analytics solution. Data will be uploaded into an Amazon S3 bucket. The data will then be analyzed by applications running on an Amazon EMR cluster that is launched into a VPC in a private subnet. The environment must be fully isolated from the internet at all times. Data must be encrypted at rest using keys that are controlled and provided by the company. Which combination of actions should a Solutions Architect take to meet these requirements? (Select TWO.)

A. Configure the S3 bucket policy to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.

B. Configure the EMR cluster to use an AWS KMS managed CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.

C. Configure the S3 bucket policy to permit access to the Amazon EMR cluster only.

D. Configure the EMR cluster to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and a NAT gateway to access AWS KMS.

E. Configure the EMR cluster to use an AWS CloudHSM appliance for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3.

Solution

Correct: A, E

Explanation

A gateway VPC endpoint for Amazon S3, combined with a bucket policy that uses the aws:sourceVpce condition to restrict access to that endpoint, keeps traffic to the secure S3 bucket off the internet and locks it down to the correct source. Using an AWS CloudHSM appliance for at-rest encryption keeps the key material under the company's exclusive control, meeting the requirement that keys are controlled and provided by the company. "Configure the S3 bucket policy to permit access to the Amazon EMR cluster only" is incorrect. The bucket policy should restrict access to the VPC endpoint, not the EMR cluster; there is no bucket policy condition for EMR clusters. The option that uses a NAT gateway to access AWS KMS is incorrect because a NAT gateway provides a path to the internet, violating the requirement for full isolation, and the option that uses an AWS KMS managed CMK does not meet the requirement for keys controlled and provided by the company.
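
A minimal boto3 sketch of the bucket policy in option A, denying access unless requests arrive through the gateway VPC endpoint (bucket name and endpoint ID are placeholders; note that a blanket deny like this also blocks access from outside the VPC, including the console):

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessExceptViaVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::analytics-data-bucket",
                "arn:aws:s3:::analytics-data-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="analytics-data-bucket", Policy=json.dumps(policy))
```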

Question 56
A company has launched a web application on Amazon EC2 instances. The instances have been launched in a private subnet. An Application Load Balancer (ALB) is configured in front of the instances. The instances are assigned to a security group named WebAppSG and the ALB is assigned to a security group named ALB-SG. The security team requires that the security group rules are locked down according to best practice. What rules should be configured in the security groups? (Select TWO.)

A. An inbound rule in WebAppSG allowing port 80 from source ALB-SG.

B. An inbound rule in ALB-SG allowing port 80 from WebAppSG.

C. An outbound rule in WebAppSG allowing ports 1024 to 65535 to destination ALB-SG.

D. An inbound rule in ALB-SG allowing port 80 from source 0.0.0.0/0.

E. An outbound rule in ALB-SG allowing ports 1024-65535 to destination 0.0.0.0/0.

Solution

Correct: A, D

Explanation

ALB-SG: an inbound rule allowing port 80 from 0.0.0.0/0, and an outbound rule allowing port 80 to WebAppSG (and the health check port, if different). WebAppSG: an inbound rule allowing port 80 from the security group ID of ALB-SG. Outbound rules from WebAppSG are not required for responses to the ALB because security groups are stateful and return traffic is allowed by default (rules may still be needed for tasks such as downloading security updates).
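
A minimal boto3 sketch of the two correct rules (security group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# ALB-SG: allow HTTP from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-alb0123456789",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# WebAppSG: allow HTTP only from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-webapp0123456",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-alb0123456789"}],
    }],
)
```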

Question 58
A Solutions Architect is designing a highly available infrastructure for a popular mobile application that offers games and videos for mobile phone users. The application runs on Amazon EC2 instances behind an Application Load Balancer. The database layer consists of an Amazon RDS MySQL Multi-AZ instance. The entire application stack is deployed across us-east-2 and us-west-1. Amazon Route 53 is configured to route traffic to the two deployments using a latency-based routing policy. A testing team blocked access to the Amazon RDS DB instance in us-east-2 to verify that users who are typically directed to that deployment would be directed to us-west-1. This did not occur; users close to us-east-2 were still directed there and the application failed. Which changes to the infrastructure should a Solutions Architect make to resolve this issue? (Select TWO.)

A. Change to a failover routing policy in Amazon Route 53 and configure active-active failover. Write a custom health check that verifies successful access to the Application Load Balancers in each Region.

B. Write a custom health check that queries the AWS Service Dashboard API to verify the Amazon RDS service is healthy in each Region.

C. Set the value of Evaluate Target Health to Yes on the latency alias resources for both us-east-2 and us-west-1.

D. Set the value of Evaluate Target Health to Yes on the failover alias resources for both us-east-2 and us-west-1.

E. Write a custom health check that verifies successful access to the database endpoints in each Region. Add the health check within the latency-based routing policy in Amazon Route 53.

Solution

Correct: C, E

Explanation

The problem is that the database layer has failed (simulated) but the health checks (if there are any) are not checking access to the database. This can be resolved by creating a custom health check that connects to the RDS database endpoints in each Region and verifies that they are accessible. If a health check fails, Route 53 will no longer send traffic to the deployment in that Region. Setting Evaluate Target Health to Yes on the latency alias records additionally ensures Route 53 considers the health of the target resources when responding to queries. "Change to a failover routing policy in Amazon Route 53 and configure active-active failover. Write a custom health check that verifies successful access to the Application Load Balancers in each Region" is incorrect. Checking that the ALBs are accessible does not ensure that the database layer is accessible, which is the problem in this scenario. "Set the value of Evaluate Target Health to Yes on the failover alias resources for both us-east-2 and us-west-1" is incorrect. There is no need to change to failover routing, and doing so would lose some of the performance benefits of latency-based routing. "Write a custom health check that queries the AWS Service Dashboard API to verify the Amazon RDS service is healthy in each Region" is incorrect. This could show that the RDS service is available in the Region even though the specific database endpoint is not.
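
A minimal boto3 sketch of the latency alias records with Evaluate Target Health enabled and custom health checks attached, as in options C and E (hosted zone ID, ALB zone IDs and DNS names, and health check IDs are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# (region, ALB hosted zone ID, ALB DNS name, custom health check ID) -- placeholders.
records = [
    ("us-east-2", "Z3AADJGX6KTTL2", "app-alb-use2.us-east-2.elb.amazonaws.com", "hc-us-east-2-id"),
    ("us-west-1", "Z368ELLRRE2KJ0", "app-alb-usw1.us-west-1.elb.amazonaws.com", "hc-us-west-1-id"),
]

changes = []
for region, alb_zone_id, alb_dns, health_check_id in records:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": region,           # required for latency routing
            "Region": region,
            "HealthCheckId": health_check_id,  # custom check that verifies the DB endpoint
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": changes},
)
```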

Question 59
A company runs an application on Amazon EC2 instances in an Amazon VPC and must access an external security analytics service that runs on an HTTPS REST API. The provider of the external API service can only grant access to a single source public IP address per customer. Which configuration can be used to enable access to the API service using a single IP address without making modifications to the company application?

A. Launch the Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.

B. Launch the Amazon EC2 instances in a private subnet. Configure HTTP_PROXY application parameters to send outbound connections to an EC2 proxy server in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.

C. Launch the Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address to the internet gateway that can be whitelisted on the external API service.

D. Launch the Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the external API service.

Solution

Correct: D

Explanation

The simplest solution is to use a NAT gateway in a public subnet with an Elastic IP address assigned. This address can be whitelisted in the external API service because all connections will appear to come from the NAT gateway's Elastic IP address. "Launch the Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address to the internet gateway that can be whitelisted on the external API service" is incorrect. You cannot assign an Elastic IP address to an internet gateway. "Launch the Amazon EC2 instances in a private subnet. Configure HTTP_PROXY application parameters to send outbound connections to an EC2 proxy server in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service" is incorrect. Even if set up correctly, a NAT gateway is a simpler solution than an EC2 proxy instance and does not require any modifications to the company application. "Launch the Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service" is incorrect. This solution addresses a problem in the previous option by ensuring that in-VPC connections are not routed via the proxy, but it still requires application modification and introduces unnecessary complexity.
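
A minimal boto3 sketch of option D (subnet and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a static public IP and create the NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")

natgw = ec2.create_nat_gateway(
    SubnetId="subnet-public0123456",
    AllocationId=eip["AllocationId"],
)["NatGateway"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw["NatGatewayId"]])

# Route the private subnet's internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private0123456",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw["NatGatewayId"],
)

# The address in eip["PublicIp"] is the single source IP the external provider whitelists.
```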
