Question 1
A company has multiple AWS accounts. The company has integrated its on-premises Active Directory with AWS SSO to grant Active Directory users least-privilege permissions to manage infrastructure across all the accounts. A solutions architect must integrate a third-party monitoring solution that requires read-only access across all AWS accounts. The monitoring solution will run in its own AWS account. How can the monitoring solution be given the required permissions?

A. Create a user in an AWS SSO directory and assign a read-only permission set. Assign all AWS accounts to be monitored to the new user. Provide the third-party monitoring solution with the user name and password.

B. Create an IAM role in the organization master account. Allow the AWS account of the third-party monitoring solution to assume the role.

C. Invite the AWS account of the third-party monitoring solution to join the organization. Enable all features.

D. Create an AWS CloudFormation template that defines a new IAM role for the third-party monitoring solution with the account of the third party listed in the trust policy. Create the IAM role across all linked AWS accounts by using a stack set.

Solution

Correct: D

Explanation

AWS CloudFormation StackSets can deploy the IAM role across multiple accounts with a single operation. A is incorrect because credentials supplied by AWS SSO are temporary, so the application would lose permissions and have to log in again. B would grant access to the master account only. C is incorrect because accounts belonging to an organization do not receive permissions in the other accounts.
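
As a rough sketch of option D, assuming a hypothetical role name MonitoringReadOnlyRole and placeholder account IDs, the CloudFormation template defines the cross-account role with the third party in its trust policy, and a stack set pushes it into every linked account:

```python
import boto3

# Hypothetical template: an IAM role the third party's account can assume,
# limited to the AWS-managed ReadOnlyAccess policy.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MonitoringReadOnlyRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: MonitoringReadOnlyRole
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: arn:aws:iam::111122223333:root   # third-party monitoring account (placeholder)
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/ReadOnlyAccess
"""

cfn = boto3.client("cloudformation")

# Create the stack set once, then create stack instances in every linked account.
cfn.create_stack_set(
    StackSetName="monitoring-read-only-role",
    TemplateBody=TEMPLATE,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cfn.create_stack_instances(
    StackSetName="monitoring-read-only-role",
    Accounts=["222233334444", "333344445555"],  # placeholder member account IDs
    Regions=["us-east-1"],                      # IAM is global, so one region is enough
)
```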

Question 2
A company is launching a new web service on an Amazon ECS cluster. Company policy requires that the security group on the cluster instances block all inbound traffic but HTTPS (port 443). The cluster consists of 100 Amazon EC2 instances. Security engineers are responsible for managing and updating the cluster instances. The security engineering team is small, so any management efforts must be minimized. How can the service be designed to meet these operational requirements?

A. Change the SSH port to 2222 on the cluster instances with a user data script. Log in to each instance using SSH over port 2222.

B. Change the SSH port to 2222 on the cluster instances with a user data script. Use AWS Trusted Advisor to remotely manage the cluster instances over port 2222.

C. Launch the cluster instances with no SSH key pairs. Use the AWS Systems Manager Run Command to remotely manage the cluster instances.

D. Launch the cluster instances with no SSH key pairs. Use AWS Trusted Advisor to remotely manage the cluster instances.

Solution

Correct: C

Explanation

The Systems Manager Run Command requires no inbound ports to be open; it operates entirely over outbound HTTPS (outbound traffic is allowed by default in security groups). A and B are ruled out because the requirements clearly state that the only inbound port to be open is 443. D is ruled out because Trusted Advisor does not perform management functions.
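
For example, assuming the cluster instances run the SSM Agent and have an instance profile that permits Systems Manager (and using a placeholder instance ID), a command can be pushed to them without any inbound port being open:

```python
import boto3

ssm = boto3.client("ssm")

# Run a shell command on a cluster instance over the SSM channel;
# the agent polls Systems Manager over outbound HTTPS, so no inbound port is needed.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],         # placeholder instance ID
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["yum update -y"]},
)
print(response["Command"]["CommandId"])
```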

Question 3
A company operates an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. After an order is successfully processed, the application immediately posts order data to an external third-party affiliate tracking system that pays sales commissions for order referrals. During a highly successful marketing promotion, the number of EC2 instances increased from 2 to 20. The application continued to work correctly, but the increased request rate overwhelmed the third-party affiliate and resulted in failed requests. Which combination of architectural changes could ensure that the entire process functions correctly under load? (Select TWO.)

A. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to invoke the Lambda function asynchronously.

B. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to place the order data in an Amazon SQS queue. Trigger the Lambda function from the queue.

C. Increase the timeout of the new AWS Lambda function.

D. Adjust the concurrency limit of the new AWS Lambda function.

E. Increase the memory of the new AWS Lambda function.

Solution

Correct: B, D

Explanation

B, D – Putting the messages in a queue (B) decouples the main application from calls to the affiliate. That not only protects the main application from the reduced capacity of the affiliate, it also allows failed requests to return to the queue automatically. Limiting the number of concurrent executions (D) prevents overwhelming the affiliate application. A is incorrect because, while asynchronously invoking the Lambda function reduces load on the EC2 instances, it does not lower the number of requests sent to the affiliate application. C is incorrect because, while a longer timeout allows the Lambda function to wait longer for the external call to return, it does not reduce the load on the affiliate application (which would still be overwhelmed). E is incorrect because adjusting the memory has no effect on the interaction between the Lambda function and the affiliate application.
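
A minimal sketch of B and D, assuming a hypothetical queue named affiliate-orders and a Lambda function named post-to-affiliate: the queue triggers the function, and a reserved concurrency cap throttles how many copies call the affiliate at once.

```python
import boto3

lam = boto3.client("lambda")

# Cap the number of simultaneous executions so the affiliate is not overwhelmed.
lam.put_function_concurrency(
    FunctionName="post-to-affiliate",            # hypothetical function name
    ReservedConcurrentExecutions=5,              # assumed limit the affiliate can handle
)

# Have the function consume order data from the SQS queue.
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:affiliate-orders",  # placeholder ARN
    FunctionName="post-to-affiliate",
    BatchSize=1,                                 # one order per invocation keeps retries simple
)
```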

Question 4
A company has two AWS accounts: one for production workloads and one for development workloads. A development team and an operations team create and manage these workloads. The company needs a security strategy that meets the following requirements: Developers need to create and delete development application infrastructure. Operators need to create and delete both development and production application infrastructure. Developers should have no access to production infrastructure. All users should have a single set of AWS credentials. What strategy meets these requirements?

A. In the development account: Create a development IAM group with the ability to create and delete application infrastructure. Create an IAM user for each operator and developer and assign them to the development group. In the production account: Create an operations IAM group with the ability to create and delete application infrastructure. Create an IAM user for each operator and assign them to the operations group.

B. In the development account: Create a development IAM group with the ability to create and delete application infrastructure. Create an IAM user for each developer and assign them to the development group. Create an IAM user for each operator and assign them to the development group and the operations group in the production account. In the production account: Create an operations IAM group with the ability to create and delete application infrastructure.

C. In the development account: Create a shared IAM role with the ability to create and delete application infrastructure in the production account. Create a development IAM group with the ability to create and delete application infrastructure. Create an operations IAM group with the ability to assume the shared role. Create an IAM user for each developer and assign them to the development group. Create an IAM user for each operator and assign them to the development group and the operations group.

D. In the development account: Create a development IAM group with the ability to create and delete application infrastructure. Create an operations IAM group with the ability to assume the shared role in the production account. Create an IAM user for each developer and assign them to the development group. Create an IAM user for each operator and assign them to the development group and the operations group. In the production account: Create a shared IAM role with the ability to create and delete application infrastructure. Add the development account to the trust policy for the shared role.

Solution

Correct: D

Explanation

This is the only option that works and meets the requirements. It follows the standard guidelines for granting cross-account access between two accounts that you control. A requires two sets of credentials for operators, which breaks the requirements. B will not work, as an IAM user cannot be added to an IAM group in a different account. C will not work, as a role cannot grant access to resources in another account; the shared role must be in the account with the resources it manages.
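
A sketch of the key pieces of D, with placeholder account IDs and a hypothetical role name SharedInfrastructureRole: the shared role lives in the production account and trusts the development account, and an operator uses their single set of credentials to assume it.

```python
import json
import boto3

# In the production account: create the shared role and trust the development account.
iam = boto3.client("iam")
iam.create_role(
    RoleName="SharedInfrastructureRole",         # hypothetical role name
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # development account (placeholder)
            "Action": "sts:AssumeRole",
        }],
    }),
)

# In the development account: an operator assumes the role with their existing credentials.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/SharedInfrastructureRole",  # production account (placeholder)
    RoleSessionName="operator-session",
)["Credentials"]
```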

Question 5
A solutions architect needs to reduce costs for a big data application. The application environment consists of hundreds of devices that send events to Amazon Kinesis Data Streams. The device ID is used as the partition key, so each device gets a separate shard. Each device sends between 50 KB and 450 KB of data per second. The shards are polled by an AWS Lambda function that processes the data and stores the result on Amazon S3. Every hour, an AWS Lambda function runs an Amazon Athena query against the result data that identifies any outliers and places them in an Amazon SQS queue. An Amazon EC2 Auto Scaling group of two EC2 instances monitors the queue and runs a short (approximately 30-second) process to address the outliers. The devices submit an average of 10 outlying values every hour. Which combination of changes to the application would MOST reduce costs? (Select TWO.)

A. Change the Auto Scaling group launch configuration to use smaller instance types in the same instance family.

B. Replace the Auto Scaling group with an AWS Lambda function triggered by messages arriving in the Amazon SQS queue.

C. Reconfigure the devices and data stream to set a ratio of 10 devices to 1 data stream shard.

D. Reconfigure the devices and data stream to set a ratio of 2 devices to 1 data stream shard.

E. Change the desired capacity of the Auto Scaling group to a single EC2 instance.

Solution

Correct: B, D

Explanation

The average amount of compute used each hour is about 300 seconds (10 events x 30 seconds). While A and E would both reduce costs, they both involve paying for one or more EC2 instances sitting unused for 3,300 or more seconds per hour. B involves paying only for the small amount of compute time required to process the outlying values. Both C and D reduce the shard-hour costs of the Kinesis data stream, because shards are billed at an hourly rate: a shard is the base throughput unit of a Kinesis data stream, providing 1 MB/sec of data input, 2 MB/sec of data output, and up to 1,000 records per second, and you specify the number of shards based on your throughput requirements. However, C will not work. With each device sending up to 450 KB per second, 10 devices could send 4.5 MB per second to a single shard, exceeding its 1 MB/sec input capacity, whereas 2 devices send at most 900 KB per second and fit within one shard.
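
The shard arithmetic behind C versus D as a quick check, using the 1 MB/sec ingest limit per shard and the worst-case 450 KB/sec per device from the question:

```python
SHARD_INGEST_LIMIT_KB = 1000   # 1 MB/sec input capacity per shard

for devices_per_shard in (2, 10):
    worst_case_kb = devices_per_shard * 450      # each device can send up to 450 KB/sec
    fits = worst_case_kb <= SHARD_INGEST_LIMIT_KB
    print(f"{devices_per_shard} devices/shard -> {worst_case_kb} KB/s, fits: {fits}")

# 2 devices/shard  ->  900 KB/s, fits: True   (option D works)
# 10 devices/shard -> 4500 KB/s, fits: False  (option C overloads the shard)
```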

Question 6
A team is building an HTML form hosted in a public Amazon S3 bucket. The form uses JavaScript to post data to an Amazon API Gateway endpoint. The endpoint is integrated with AWS Lambda functions. The team has tested each method in the API Gateway console and received valid responses. Which combination of steps must be completed for the form to successfully post to the API Gateway and receive a valid response? (Select TWO.)

A. Configure the S3 bucket to allow cross-origin resource sharing (CORS).

B. Host the form on Amazon EC2 rather than Amazon S3.

C. Request a limit increase for API Gateway.

D. Enable cross-origin resource sharing (CORS) in API Gateway.

E. Configure the S3 bucket for web hosting.

Solution

Correct: D, E

Explanation

CORS must be enabled to keep the browser from generating an error due to the same-origin policy, which requires that dynamic content come from the same domain as the static content. Since API Gateway uses a domain of the form [restapi-id].execute-api.[region].amazonaws.com and the S3 bucket uses [bucket-name].s3-website-[region].amazonaws.com, a CORS header must be sent with the API Gateway response for the browser to relax the restriction. E is required for the HTML form to be served using a website endpoint. A is incorrect because the CORS header must be returned with the dynamic response from the API endpoint; configuring CORS on the S3 bucket does not help. B is incorrect because there is no advantage to serving a static webpage from a web server running on EC2 versus an S3 bucket. C is incorrect because API Gateway has a default per-Region limit of 10,000 requests per second; if required for production, this limit can be increased, but it is not the issue here.
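
Assuming a Lambda proxy integration (an assumption not stated in the question), the CORS header on actual responses typically comes from the function itself, while API Gateway's Enable CORS action sets up the matching OPTIONS preflight response. The allowed origin below is a placeholder for the bucket's website endpoint.

```python
import json

def lambda_handler(event, context):
    # Process the posted form data here...
    result = {"status": "ok"}

    return {
        "statusCode": 200,
        "headers": {
            # Placeholder origin: the S3 static website endpoint serving the form.
            "Access-Control-Allow-Origin": "http://my-form-bucket.s3-website-us-east-1.amazonaws.com",
            "Content-Type": "application/json",
        },
        "body": json.dumps(result),
    }
```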

Question 7
A retail company runs a serverless mobile app built on Amazon API Gateway, AWS Lambda, Amazon Cognito, and Amazon DynamoDB. During heavy holiday traffic spikes, the company receives complaints of intermittent system failures. Developers find that the API Gateway endpoint is returning 502 Bad Gateway errors to seemingly valid requests. Which method should address this issue?

A. Increase the concurrency limit for Lambda functions and configure notification alerts to be sent by Amazon CloudWatch when the ConcurrentExecutions metric approaches the limit.

B. Configure notification alerts for the limit of transactions per second on the API Gateway endpoint and create a Lambda function that will increase this limit, as needed.

C. Shard users to Amazon Cognito user pools in multiple AWS Regions to reduce user authentication latency.

D. Use DynamoDB strongly consistent reads to ensure the latest data is always returned to the client application.

Solution

Correct: A

Explanation

API Gateway will intermittently return 502 errors if the backend Lambda function exceeds its concurrency limits. B is incorrect because, in that case, API Gateway would return a 429 error for too many requests. C is incorrect because the error occurs when calling the API Gateway endpoint, not during the authentication process. D is incorrect because stale data would not cause a bad gateway error.
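
A sketch of the alerting half of option A, assuming notifications go to an existing SNS topic (placeholder ARN) and the account's concurrency limit is the default 1,000:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when account-wide concurrent Lambda executions approach the limit (default 1,000).
cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-near-limit",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=900,                               # assumed 90% of a 1,000 limit
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```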

Question 8
A developer is attempting to access an Amazon S3 bucket in a member account in AWS Organizations. The developer is logged in to the account with user credentials and has received an access denied error with no bucket listed. The developer should have read-only access to all buckets in the account. A Solutions Architect has reviewed the permissions and found that the developer IAM user has been granted read-only access to all S3 buckets in the account. Which additional steps should the Solutions Architect take to troubleshoot the issue? Select TWO.

A. Check the ACLs for all S3 buckets.

B. Check the SCPs set at the organizational units (OUs).

C. Check if an appropriate IAM role is attached to the IAM user.

D. Check for the permissions boundaries set for the IAM user.

E. Check the bucket policies for all S3 buckets.

Solution

Correct: B, D

Explanation

A service control policy (SCP) may have been implemented that limits the S3 API actions that are available. An SCP applies to all users in the account, regardless of the permissions assigned to their user accounts. Another potential cause is that a permissions boundary set for the user limits the S3 API actions available to that user. A permissions boundary is an advanced feature that uses a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity; an entity with a permissions boundary can perform only the actions that are allowed by both its identity-based policies and its permissions boundary. A (check the ACLs for all S3 buckets) is incorrect. With a bucket ACL the grantee is an AWS account or one of the predefined groups, and an ACL can grant read or write access only to an individual bucket and its objects; it cannot grant permission to list the buckets in the account. Since the user was unable to list any buckets at all, an ACL is unlikely to be the cause. E (check the bucket policies for all S3 buckets) is incorrect. The user has not been granted access to any buckets and the error does not name a specific bucket, so it is more likely that the user has not been granted the API action to list the buckets.
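
Two quick checks corresponding to D and B, assuming the caller has IAM read access and (for the Organizations call) runs in the management account; the user name and OU ID below are placeholders:

```python
import boto3

# D: inspect whether a permissions boundary is attached to the developer's IAM user.
iam = boto3.client("iam")
user = iam.get_user(UserName="developer1")["User"]           # placeholder user name
print(user.get("PermissionsBoundary", "No permissions boundary set"))

# B: list the SCPs attached to the user's OU (run from the management account).
org = boto3.client("organizations")
scps = org.list_policies_for_target(
    TargetId="ou-abcd-11111111",                              # placeholder OU ID
    Filter="SERVICE_CONTROL_POLICY",
)
for policy in scps["Policies"]:
    print(policy["Name"])
```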

Question 9
A company recently noticed an increase in costs associated with Amazon EC2 instances and Amazon RDS databases. The company needs to be able to track the costs. The company uses AWS Organizations for all of its accounts. AWS CloudFormation is used for deploying infrastructure and all resources are tagged. The management team has requested that cost center numbers and project ID numbers be added to all future EC2 instances and RDS databases. What is the MOST efficient strategy a Solutions Architect should follow to meet these requirements?

A. Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.

B. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.

C. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.

D. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.

Solution

Correct: D

Explanation

D is the most efficient strategy because SCPs apply to entire AWS accounts and are easy to enforce for all users, while Tag Editor handles the existing resources and cost allocation tags make the costs trackable. C is incorrect because permissions boundaries apply to individual IAM users and roles rather than whole accounts, so they are harder to enforce consistently than SCPs. B is incorrect because it provides no mechanism to enforce the application of tags on future resources. A is incorrect because AWS Config can be used for compliance, but a better solution is to enforce tags at creation time; a Lambda-based tagging solution would also be complex, since it would have to work out which tags to add to which resources.
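
An illustrative SCP for option D, created and attached from the management account. This follows the commonly used deny-unless-tagged condition pattern, so treat the action list, policy name, and OU ID as assumptions rather than a drop-in policy.

```python
import json
import boto3

# Deny creating EC2 instances and RDS databases unless the required tag is supplied
# at creation time; one Deny statement per required tag.
TAG_ENFORCEMENT_SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": f"Require{tag}",
            "Effect": "Deny",
            "Action": ["ec2:RunInstances", "rds:CreateDBInstance"],
            "Resource": "*",
            "Condition": {"Null": {f"aws:RequestTag/{tag}": "true"}},
        }
        for tag in ("CostCenter", "ProjectID")
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="require-cost-tags",                    # hypothetical policy name
    Description="Deny untagged EC2/RDS creation",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(TAG_ENFORCEMENT_SCP),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",                 # placeholder OU ID
)
```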

Question 10
A company provides a service that allows users to upload high-resolution product images using an app on their phones for a price matching service. The service currently uses Amazon S3 in the us-west-1 Region. The company has expanded to Europe and users in European countries are experiencing significant delays when uploading images. Which combination of changes can a Solutions Architect make to improve the upload times for the images? Select TWO.

A. Redeploy the application to use Amazon S3 multipart upload.

B. Configure the S3 bucket to use S3 Transfer Acceleration.

C. Create an Amazon CloudFront distribution with the S3 bucket as an origin.

D. Configure the client application to use byte-range fetches.

E. Modify the Amazon S3 bucket to use Intelligent Tiering.

Solution

Correct: A, B

Explanation

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between a client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations; as data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. Transfer Acceleration is a good solution when customers upload to a centralized bucket from all over the world, when gigabytes to terabytes of data are transferred regularly across continents, or when the available bandwidth over the Internet cannot be fully utilized when uploading to Amazon S3. Multipart upload transfers parts of the file in parallel, which can speed up performance, and it handles the failure of individual parts gracefully by allowing those parts to be retransmitted; this should be built into the application code. Transfer Acceleration in combination with multipart upload offers significant speed improvements when uploading data. C (create an Amazon CloudFront distribution with the S3 bucket as an origin) is incorrect because CloudFront can improve download performance, but Transfer Acceleration should be used to improve upload transfer times. D (configure the client application to use byte-range fetches) is incorrect because byte-range fetches are a technique for reading, not writing, data, fetching only the parts of an object that are required.
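
The two changes in code form, assuming a placeholder bucket name: enable Transfer Acceleration on the bucket, then upload through the accelerate endpoint with multipart settings.

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

BUCKET = "product-images-bucket"                 # placeholder bucket name

# One-time change: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client-side: use the accelerate endpoint and split large images into parallel parts.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
multipart = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,         # start multipart uploads at 8 MB
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=4,                           # parts uploaded in parallel
)
s3.upload_file("photo.jpg", BUCKET, "uploads/photo.jpg", Config=multipart)
```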
