Question 3
A company operates an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. After an order is successfully processed, the application immediately posts order data to an external third-party affiliate tracking system that pays sales commissions for order referrals. During a highly successful marketing promotion, the number of EC2 instances increased from 2 to 20. The application continued to work correctly, but the increased request rate overwhelmed the third-party affiliate and resulted in failed requests. Which combination of architectural changes could ensure that the entire process functions correctly under load? (Select TWO.)

A. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to invoke the Lambda function asynchronously.

B. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to place the order data in an Amazon SQS queue. Trigger the Lambda function from the queue.

C. Increase the timeout of the new AWS Lambda function.

D. Adjust the concurrency limit of the new AWS Lambda function.

E. Increase the memory of the new AWS Lambda function.

Solution

Correct: B, D

Explanation

B, D – Placing the order data in a queue (B) decouples the main application from the calls to the affiliate. This not only protects the main application from the affiliate's reduced capacity, it also allows failed requests to return to the queue automatically for retry. Limiting the number of concurrent executions (D) prevents the Lambda function from overwhelming the affiliate application. A is incorrect because, while invoking the Lambda function asynchronously reduces load on the EC2 instances, it does not lower the number of requests sent to the affiliate application. C is incorrect because, while a longer timeout allows the Lambda function to wait longer for the external call to return, it does not reduce the load on the affiliate application, which would still be overwhelmed. E is incorrect because adjusting the memory has no effect on the interaction between the Lambda function and the affiliate application.
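As a minimal sketch of the B + D pattern, the snippet below shows the application enqueueing order data to SQS and an SQS-triggered Lambda handler forwarding it to the affiliate. The queue URL, endpoint, and environment variable names are placeholders, not values from the question; the affiliate call is illustrative only.

```python
import json
import os

import boto3
import urllib3

sqs = boto3.client("sqs")
http = urllib3.PoolManager()

# Hypothetical configuration; supply your own queue URL and affiliate endpoint.
ORDER_QUEUE_URL = os.environ["ORDER_QUEUE_URL"]
AFFILIATE_ENDPOINT = os.environ["AFFILIATE_ENDPOINT"]


def publish_order(order: dict) -> None:
    """Called by the ecommerce application after an order is processed.

    The order goes onto the queue instead of straight to the affiliate,
    so a slow or failing affiliate cannot affect order processing.
    """
    sqs.send_message(QueueUrl=ORDER_QUEUE_URL, MessageBody=json.dumps(order))


def lambda_handler(event, context):
    """SQS-triggered Lambda function that forwards orders to the affiliate.

    Reserved concurrency on this function (option D) caps how many copies
    run at once, which caps the request rate seen by the affiliate. Raising
    an exception returns the failed messages to the queue for retry.
    """
    for record in event["Records"]:
        order = json.loads(record["body"])
        response = http.request(
            "POST",
            AFFILIATE_ENDPOINT,
            body=json.dumps(order).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        if response.status >= 400:
            raise RuntimeError(f"Affiliate rejected order: HTTP {response.status}")
```

The concurrency cap itself (option D) is configured on the function rather than in the handler code, for example with `aws lambda put-function-concurrency --function-name <name> --reserved-concurrent-executions 10`.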

Question 5
A solutions architect needs to reduce costs for a big data application. The application environment consists of hundreds of devices that send events to Amazon Kinesis Data Streams. The device ID is used as the partition key, so each device gets a separate shard. Each device sends between 50 KB and 450 KB of data per second. The shards are polled by an AWS Lambda function that processes the data and stores the result on Amazon S3. Every hour, an AWS Lambda function runs an Amazon Athena query against the result data that identifies any outliers and places them in an Amazon SQS queue. An Amazon EC2 Auto Scaling group of two EC2 instances monitors the queue and runs a short (approximately 30-second) process to address the outliers. The devices submit an average of 10 outlying values every hour. Which combination of changes to the application would MOST reduce costs? (Select TWO.)

A. Change the Auto Scaling group launch configuration to use smaller instance types in the same instance family.

B. Replace the Auto Scaling group with an AWS Lambda function triggered by messages arriving in the Amazon SQS queue.

C. Reconfigure the devices and data stream to set a ratio of 10 devices to 1 data stream shard.

D. Reconfigure the devices and data stream to set a ratio of 2 devices to 1 data stream shard.

E. Change the desired capacity of the Auto Scaling group to a single EC2 instance.

Solution

Correct: B, D

Explanation

The average amount of compute used each hour is about 300 seconds (10 events x 30 seconds). While A and E would both reduce costs, both still pay for one or more EC2 instances sitting idle for 3,300 or more seconds per hour. B pays only for the small amount of compute time required to process the outlying values. Both C and D reduce the shard-hour costs of the Kinesis data stream, but C will not work: each device can send up to 450 KB of data per second, so 10 devices per shard could require up to 4.5 MB/sec, which exceeds a single shard's input capacity, whereas 2 devices per shard need at most 900 KB/sec, which fits. A shard is the base throughput unit of an Amazon Kinesis data stream: one shard provides 1 MB/sec of data input, 2 MB/sec of data output, and supports up to 1,000 records per second. You specify the number of shards in your stream based on your throughput requirements and are charged for each shard at an hourly rate.
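The shard-sizing argument can be verified with a quick back-of-the-envelope calculation. The sketch below is illustrative arithmetic, not AWS code; the 450 KB/sec per-device maximum comes from the question and the 1 MB/sec figure is the standard per-shard input limit quoted above.

```python
# Check which device-to-shard ratios stay within one shard's input capacity.
SHARD_INPUT_LIMIT_KB = 1024   # 1 MB/sec input per Kinesis shard
MAX_DEVICE_RATE_KB = 450      # worst-case send rate per device (from the question)

for devices_per_shard in (2, 10):
    worst_case_kb = devices_per_shard * MAX_DEVICE_RATE_KB
    verdict = "within" if worst_case_kb <= SHARD_INPUT_LIMIT_KB else "exceeds"
    print(f"{devices_per_shard} devices/shard -> up to {worst_case_kb} KB/sec "
          f"({verdict} the 1 MB/sec shard limit)")
```

Running this shows 2 devices per shard peaking at 900 KB/sec (option D, acceptable) while 10 devices per shard could peak at 4,500 KB/sec (option C, over the limit).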
