A. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to invoke the Lambda function asynchronously.
B. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to place the order data in an Amazon SQS queue. Trigger the Lambda function from the queue.
C. Increase the timeout of the new AWS Lambda function.
D. Adjust the concurrency limit of the new AWS Lambda function.
E. Increase the memory of the new AWS Lambda function.
Solution
Explanation
B, D – Placing the order data in a queue (B) decouples the main application from the calls to the affiliate. This not only protects the main application from the affiliate's reduced capacity, it also allows failed requests to be returned to the queue automatically and retried. Limiting the number of concurrent executions (D) prevents the Lambda function from overwhelming the affiliate application. A is incorrect because, while invoking the Lambda function asynchronously reduces load on the EC2 instances, it does not lower the number of requests sent to the affiliate application. C is incorrect because, while a longer timeout lets the Lambda function wait longer for the external call to return, it does not reduce the load on the affiliate application, which will still be overwhelmed. E is incorrect because adjusting the memory has no effect on the interaction between the Lambda function and the affiliate application.
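To make the recommended combination concrete, the sketch below shows one way B and D could be wired up with boto3. The function name, queue ARN, batch size, and concurrency limit are illustrative placeholders, not values taken from the question.

```python
# Sketch only: configuring an SQS-triggered Lambda function (B) with a
# reserved concurrency cap (D). Names and numbers are hypothetical.
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "affiliate-order-forwarder"  # hypothetical Lambda function
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:affiliate-orders"  # hypothetical queue

# (B) Trigger the Lambda function from the SQS queue: the main application
# only places order data on the queue and is decoupled from the affiliate.
lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName=FUNCTION_NAME,
    BatchSize=10,   # up to 10 messages per invocation
    Enabled=True,
)

# (D) Cap concurrency so only a few function instances call the affiliate at
# once; excess messages simply wait in the queue until capacity frees up.
lambda_client.put_function_concurrency(
    FunctionName=FUNCTION_NAME,
    ReservedConcurrentExecutions=5,  # example limit, tuned to the affiliate's capacity
)
```

With this setup, a failed invocation returns its messages to the queue after the visibility timeout, so retries happen without any extra code in the main application.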
A. Change the Auto Scaling group launch configuration to use smaller instance types in the same instance family.
B. Replace the Auto Scaling group with an AWS Lambda function triggered by messages arriving in the Amazon SQS queue.
C. Reconfigure the devices and data stream to set a ratio of 10 devices to 1 data stream shard.
D. Reconfigure the devices and data stream to set a ratio of 2 devices to 1 data stream shard.
E. Change the desired capacity of the Auto Scaling group to a single EC2 instance.
Solution
Explanation
The average amount of compute used each hour is about 300 seconds (10 events x 30 seconds each). While A and E would both reduce costs, they both involve paying for one or more EC2 instances that sit unused for 3,300 or more seconds per hour. B involves paying only for the small amount of compute time required to process the outlying values. Both C and D reduce the shard-hour costs of the Kinesis data stream, but C will not work because ten devices per shard would exceed a single shard's throughput capacity. Kinesis is charged by shard hour: a shard is the base throughput unit of an Amazon Kinesis data stream, providing a capacity of 1 MB/sec of data input, 2 MB/sec of data output, and up to 1,000 records per second. You specify the number of shards in your stream based on your throughput requirements and are charged for each shard at an hourly rate.
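For context, a minimal Python sketch of the arithmetic follows. The per-hour compute figures and per-shard limits come from the explanation above; the per-device data rate is an assumption (the original question stem is not shown here), chosen only to illustrate why the 10:1 ratio in C can exceed a shard's input limit while the 2:1 ratio in D can fit.

```python
# Worked arithmetic behind the explanation. The per-device data rate is NOT
# given in the excerpt above; 500 KB/sec per device is assumed purely for
# illustration.

SECONDS_PER_HOUR = 3600

# Compute time actually needed per hour (figures from the explanation).
events_per_hour = 10
seconds_per_event = 30
compute_seconds = events_per_hour * seconds_per_event      # 300 s of real work
idle_seconds = SECONDS_PER_HOUR - compute_seconds          # 3,300 s an always-on EC2 instance sits idle
print(f"Busy: {compute_seconds} s/hour; idle on an always-on instance: {idle_seconds} s/hour")

# Per-shard Kinesis limits (figures from the explanation).
SHARD_INPUT_MB_PER_SEC = 1.0
SHARD_RECORDS_PER_SEC = 1000

assumed_device_kb_per_sec = 500   # assumption, not stated in the excerpt

for devices_per_shard in (10, 2):
    ingest_mb = devices_per_shard * assumed_device_kb_per_sec / 1024
    verdict = "within" if ingest_mb <= SHARD_INPUT_MB_PER_SEC else "exceeds"
    print(f"{devices_per_shard} devices/shard -> {ingest_mb:.2f} MB/s input, "
          f"{verdict} the 1 MB/s shard input limit")
```

Under that assumed rate, ten devices per shard would need roughly 4.9 MB/s of input against a 1 MB/s limit, while two devices per shard stay just under it, which is the shape of the argument for D over C.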