A. Create a user in an AWS SSO directory and assign a read-only permissions set. Assign all AWS accounts to be monitored to the new user. Provide the third-party monitoring solution with the user name and password.
B. Create an IAM role in the organization master account. Allow the AWS account of the third-party monitoring solution to assume the role.
C. Invite the AWS account of the third-party monitoring solution to join the organization. Enable all features.
D. Create an AWS CloudFormation template that defines a new IAM role for the third-party monitoring solution with the account of the third party listed in the trust policy. Create the IAM role across all linked AWS accounts by using a stack set.
Solution
Explanation
AWS CloudFormation StackSets can deploy the IAM role across multiple accounts with a single operation. A is incorrect because credentials supplied by AWS SSO are temporary, so the application would lose permissions and have to log in again. B would grant access to the master account only. C is incorrect because accounts belonging to an organization do not receive permissions in the other accounts.
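For illustration only, a minimal boto3 sketch of what D describes; the account IDs, role name, and Region are invented placeholders, and the self-managed StackSets administration/execution roles are assumed to exist:

```python
# Sketch of option D: deploy a cross-account IAM role for a third-party
# monitor via StackSets. All identifiers below are hypothetical.
import json
import boto3

MONITOR_ACCOUNT_ID = "111111111111"  # placeholder third-party account ID

# Template defining the IAM role; the third party's account appears as the
# principal in the role's trust policy.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MonitoringRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "RoleName": "ThirdPartyMonitoringRole",
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {
                            "AWS": f"arn:aws:iam::{MONITOR_ACCOUNT_ID}:root"
                        },
                        "Action": "sts:AssumeRole",
                    }],
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/ReadOnlyAccess"
                ],
            },
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack_set(
    StackSetName="third-party-monitoring-role",
    TemplateBody=json.dumps(template),
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
# A single operation rolls the role out to every linked account.
cfn.create_stack_instances(
    StackSetName="third-party-monitoring-role",
    Accounts=["222222222222", "333333333333"],  # placeholder linked accounts
    Regions=["us-east-1"],
)
```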
A. Change the SSH port to 2222 on the cluster instances with a user data script. Log in to each instance using SSH over port 2222.
B. Change the SSH port to 2222 on the cluster instances with a user data script. Use AWS Trusted Advisor to remotely manage the cluster instances over port 2222.
C. Launch the cluster instances with no SSH key pairs. Use the AWS Systems Manager Run Command to remotely manage the cluster instances.
D. Launch the cluster instances with no SSH key pairs. Use AWS Trusted Advisor to remotely manage the cluster instances.
Solution
Explanation
The Systems Manager Run Command requires no inbound ports to be open; it operates entirely over outbound HTTPS, which security groups allow by default. A and B are ruled out because the requirements clearly state that the only inbound port to be open is 443. D is ruled out because Trusted Advisor does not perform management functions.
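As a hedged illustration (the instance ID is a placeholder, and the instances are assumed to run the SSM agent with an instance profile that includes AmazonSSMManagedInstanceCore), remote management without SSH might look like this:

```python
# Sketch: manage cluster instances with Run Command instead of SSH.
import boto3

ssm = boto3.client("ssm")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

# AWS-RunShellScript is a built-in SSM document. No inbound port is opened;
# the SSM agent polls Systems Manager over outbound HTTPS (443).
response = ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["uptime", "df -h"]},
)
command_id = response["Command"]["CommandId"]

# Fetch the command's result (in practice, retry briefly until the
# invocation is registered for the instance).
result = ssm.get_command_invocation(CommandId=command_id, InstanceId=INSTANCE_ID)
print(result["Status"])
print(result.get("StandardOutputContent", ""))
```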
A. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to invoke the Lambda function asynchronously.
B. Move the code that calls the affiliate to a new AWS Lambda function. Modify the application to place the order data in an Amazon SQS queue. Trigger the Lambda function from the queue.
C. Increase the timeout of the new AWS Lambda function.
D. Adjust the concurrency limit of the new AWS Lambda function.
E. Increase the memory of the new AWS Lambda function.
Solution
Explanation
B, D – Putting the messages in a queue (B) will decouple the main application from calls to the affiliate. That will not only protect the main application from the affiliate's reduced capacity, it will also allow failed requests to return automatically to the queue. Limiting the number of concurrent executions (D) will prevent overwhelming the affiliate application. A is incorrect because, while asynchronously invoking the Lambda function will reduce load on the EC2 instances, it will not lower the number of requests to the affiliate application. C is incorrect because, while it will allow the Lambda function to wait longer for the external call to return, it does not reduce the load on the affiliate application (which will still be overwhelmed). E is incorrect because adjusting the memory will have no effect on the interaction between the Lambda function and the affiliate application.
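A minimal boto3 sketch of B and D together; the queue and function names are invented, and the call-affiliate function is assumed to already exist:

```python
# Sketch: decouple affiliate calls behind SQS and cap Lambda concurrency.
import boto3

lambda_client = boto3.client("lambda")
sqs = boto3.client("sqs")

queue_url = sqs.create_queue(QueueName="affiliate-orders")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# (B) The queue triggers the Lambda function that calls the affiliate.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="call-affiliate",
    BatchSize=10,
)

# (D) Reserved concurrency caps parallel executions so the affiliate is not
# overwhelmed; unprocessed messages simply wait in the queue until capacity
# frees up, and failed messages return to the queue for retry.
lambda_client.put_function_concurrency(
    FunctionName="call-affiliate",
    ReservedConcurrentExecutions=5,
)
```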
A. In the development account: Create a development IAM group with the ability to create and delete application infrastructure. Create an IAM user for each operator and developer and assign them to the development group. In the production account: Create an operations IAM group with the ability to create and delete application infrastructure. Create an IAM user for each operator and assign them to the operations group.
B. In the development account: Create a development IAM group with the ability to create and delete application infrastructure. Create an IAM user for each developer and assign them to the development group. Create an IAM user for each operator and assign them to the development group and the operations group in the production account. In the production account: Create an operations IAM group with the ability to create and delete application infrastructure.
C. In the development account: Create a shared IAM role with the ability to create and delete application infrastructure in the production account. Create a development IAM group with the ability to create and delete application infrastructure. Create an operations IAM group with the ability to assume the shared role. Create an IAM user for each developer and assign them to the development group. Create an IAM user for each operator and assign them to the development group and the operations group.
D. In the development account: Create a development IAM group with the ability to create and delete application infrastructure. Create an operations IAM group with the ability to assume the shared role in the production account. Create an IAM user for each developer and assign them to the development group. Create an IAM user for each operator and assign them to the development group and the operations group. In the production account: Create a shared IAM role with the ability to create and delete application infrastructure. Add the development account to the trust policy for the shared role.
Solution
Explanation
D is the only response that will work and meets the requirements. It follows the standard guidelines for granting cross-account access between two accounts that you control. A requires two sets of credentials for operators, which breaks the requirements. B will not work, as an IAM user cannot be added to an IAM group in a different account. C will not work, as a role cannot grant access to resources in another account; the shared role must be in the account that owns the resources it manages.
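A minimal sketch of the shared role described in D, created with production-account credentials; the development account ID and role name are placeholders:

```python
# Sketch: create the shared role in the production account with the
# development account in its trust policy. Identifiers are hypothetical.
import json
import boto3

DEV_ACCOUNT_ID = "111111111111"  # placeholder development account ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Principals in the development account that are allowed to call
        # sts:AssumeRole (i.e. the operations group) can assume this role.
        "Principal": {"AWS": f"arn:aws:iam::{DEV_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam = boto3.client("iam")  # run with production-account credentials
iam.create_role(
    RoleName="SharedInfrastructureRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```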
A. Change the Auto Scaling group launch configuration to use smaller instance types in the same instance family.
B. Replace the Auto Scaling group with an AWS Lambda function triggered by messages arriving in the Amazon SQS queue.
C. Reconfigure the devices and data stream to set a ratio of 10 devices to 1 data stream shard.
D. Reconfigure the devices and data stream to set a ratio of 2 devices to 1 data stream shard.
E. Change the desired capacity of the Auto Scaling group to a single EC2 instance.
Solution
Explanation
The average amount of compute used each hour is about 300 seconds (10 events x 30 seconds). While A and E would both reduce costs, they both involve paying for one or more EC2 instances sitting unused for 3,300 or more seconds per hour. B involves paying only for the small amount of compute time required to process the outlying values. Both C and D reduce the shard-hour costs of the Kinesis data stream, but C will not work. Kinesis shards are billed by the shard hour: a shard is the base throughput unit of an Amazon Kinesis data stream, providing a capacity of 1 MB/sec data input and 2 MB/sec data output, and supporting up to 1,000 records per second. You specify the number of shards in your stream based on your throughput requirements, and you are charged for each shard at an hourly rate.
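The cost comparison reduces to simple arithmetic; this snippet just restates the numbers from the explanation above:

```python
# Worked numbers from the explanation (values from the scenario).
events_per_hour = 10
seconds_per_event = 30

compute_needed = events_per_hour * seconds_per_event  # 300 s of real work
idle_on_ec2 = 3600 - compute_needed                   # 3300 s paid but unused

print(f"Useful compute per hour: {compute_needed} s")
print(f"Idle EC2 seconds per hour: {idle_on_ec2} s")
# With Lambda (B) you pay for roughly the 300 s of execution; with EC2
# (A or E) you pay for the full 3,600 s whether or not useful work is done.
```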
A. Configure the S3 bucket to allow cross-origin resource sharing (CORS).
B. Host the form on Amazon EC2 rather than Amazon S3.
C. Request a limit increase for API Gateway.
D. Enable cross-origin resource sharing (CORS) in API Gateway.
E. Configure the S3 bucket for web hosting.
Solution
Explanation
CORS must be enabled to keep the browser from generating an error due to the same-origin policy, which requires that dynamic content come from the same domain as the static content. Since API Gateway uses a domain of the form [restapi-id].execute-api.[region].amazonaws.com and the S3 bucket uses [bucket-name].s3-website-[region].amazonaws.com, a CORS header must be sent with the API Gateway response for the browser to relax the restriction. E is required for the HTML form to be served using a website endpoint. A is incorrect because the CORS header must be returned by the dynamic response from the API endpoint; configuring CORS for the S3 bucket does not help. B is incorrect because there is no advantage to serving a static webpage from a web server running on EC2 versus an S3 bucket. C is incorrect because API Gateway has a default per-Region limit of 10,000 requests per second; if required for production, this limit can be increased.
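As an illustration, if the API uses a Lambda proxy integration (an assumption about this scenario), the CORS headers must be returned by the function itself; the origin below is a placeholder:

```python
# Sketch: a Lambda proxy handler returning the CORS header so the browser
# accepts the API response from the S3-hosted form. Origin is hypothetical.
import json

ALLOWED_ORIGIN = "http://example-bucket.s3-website-us-east-1.amazonaws.com"

def handler(event, context):
    """Handle the form POST and return a response the browser will accept."""
    return {
        "statusCode": 200,
        "headers": {
            # Relaxes the same-origin restriction for the S3-hosted form.
            "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
            "Access-Control-Allow-Methods": "POST, OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type",
        },
        "body": json.dumps({"message": "form received"}),
    }
```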
A. Increase the concurrency limit for Lambda functions and configure notification alerts to be sent by Amazon CloudWatch when the ConcurrentExecutions metric approaches the limit.
B. Configure notification alerts for the limit of transactions per second on the API Gateway endpoint and create a Lambda function that will increase this limit, as needed.
C. Shard users to Amazon Cognito user pools in multiple AWS Regions to reduce user authentication latency.
D. Use DynamoDB strongly consistent reads to ensure the latest data is always returned to the client application.
Solution
Explanation
The 502 (bad gateway) errors will be returned intermittently by API Gateway if the Lambda function exceeds concurrency limits. B is incorrect because, in this case, API Gateway would return a 429 error for too many requests. C is incorrect because the error occurs when calling the API Gateway endpoint, not during the authentication process. D is incorrect because stale data would not cause a bad gateway error.
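A sketch of the CloudWatch alarm from A; the threshold, SNS topic ARN, and the assumed account-level limit of 1,000 concurrent executions are placeholder values:

```python
# Sketch: alarm when account-level Lambda concurrency nears the limit.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-approaching-limit",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=900,  # e.g. alert at 90% of an assumed 1,000 limit
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:ops-alerts"],
)
```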
References
1. https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/
2. https://aws.amazon.com/premiumsupport/knowledge-center/lambda-troubleshoot-throttling/
3. https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
4. https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
A. Check the ACLs for all S3 buckets.
B. Check the SCPs set at the organizational units (OUs).
C. Check if an appropriate IAM role is attached to the IAM user.
D. Check for the permissions boundaries set for the IAM user.
E. Check the bucket policies for all S3 buckets.
Solution
Explanation
A service control policy (SCP) may have been implemented that limits the API actions that are available for Amazon S3. This applies to all users in the account, regardless of the permissions assigned to their user accounts. Another potential cause of the issue is that the permissions boundary for the user limits the S3 API actions available to the user. A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity's permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundary.

A (check the ACLs for all S3 buckets) is incorrect. With a bucket ACL the grantee is an AWS account or one of the predefined groups, and grants apply to a specific bucket and its objects rather than to the ability to list the account's buckets. The user has been unable to list any buckets in this case, so an ACL is unlikely to be the cause. E (check the bucket policies for all S3 buckets) is incorrect. The user has not been granted access to any buckets, and the error does not indicate access denied for any specific bucket; it is more likely that the user has not been granted the API action to list the buckets.
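For illustration, an SCP of roughly this shape would produce the symptom described; the policy content and OU ID are invented, and the calls must run from the organization's management account:

```python
# Sketch: an SCP that blocks bucket listing for every identity in the
# targeted accounts, regardless of their IAM policies. IDs are hypothetical.
import json
import boto3

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyListingBuckets",
        "Effect": "Deny",
        # Denies the console/CLI bucket listing even when an identity-based
        # policy allows it, since SCPs cap what the account can grant.
        "Action": "s3:ListAllMyBuckets",
        "Resource": "*",
    }],
}

org = boto3.client("organizations")  # run from the management account
policy = org.create_policy(
    Name="deny-s3-bucket-listing",
    Description="Example SCP matching the symptom in the question",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exam-12345678",  # placeholder OU ID
)
```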
A. Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.
B. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.
C. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.
D. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
Solution
Explanation
D is correct: Tag Editor tags the existing resources, cost allocation tags define the cost center and project ID, and SCPs prevent the creation of resources that do not have those tags specified. C is incorrect because permissions boundaries apply to individual IAM users, whereas SCPs apply to entire AWS accounts and are easier to enforce for all users. B is incorrect because it provides no mechanism to enforce the application of tags. A is incorrect because, while AWS Config can be used for compliance, a better solution is to enforce tags at creation time; using Lambda to tag the resources would also be complex in terms of identifying which tags to add to which resources.
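A sketch of what the SCP in D might look like; the tag keys and the choice of ec2:RunInstances are illustrative assumptions (RDS and other services would need analogous statements):

```python
# Sketch: an SCP denying instance launches that omit the required
# cost-allocation tags. Tag keys and actions are hypothetical.
import json

require_tags_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyMissingCostCenterTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            # "Null": "true" matches when the tag key was NOT supplied in
            # the request, so untagged launches are denied.
            "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
        },
        {
            "Sid": "DenyMissingProjectIdTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/ProjectId": "true"}},
        },
    ],
}

print(json.dumps(require_tags_scp, indent=2))
```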
A. Redeploy the application to use Amazon S3 multipart upload.
B. Configure the S3 bucket to use S3 Transfer Acceleration.
C. Create an Amazon CloudFront distribution with the S3 bucket as an origin.
D. Configure the client application to use byte-range fetches.
E. Modify the Amazon S3 bucket to use Intelligent Tiering.
Solution
Explanation
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between a client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations: as the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path.

Transfer Acceleration is a good solution for the following use cases:
- You have customers that upload to a centralized bucket from all over the world.
- You transfer gigabytes to terabytes of data on a regular basis across continents.
- You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon S3.

Multipart upload transfers parts of the file in parallel and can speed up performance. This should definitely be built into the application code. Multipart upload also handles the failure of any parts gracefully, allowing those parts to be retransmitted. Transfer Acceleration in combination with multipart upload will offer significant speed improvements when uploading data.

Create an Amazon CloudFront distribution with the S3 bucket as an origin is incorrect. CloudFront can offer performance improvements for downloading data, but to improve upload transfer times, Transfer Acceleration should be used. Configure the client application to use byte-range fetches is incorrect. This is a technique used when reading, not writing, data to fetch only the parts of a file that are required.
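A hedged boto3 sketch combining both correct options; the bucket and file names are placeholders:

```python
# Sketch: enable Transfer Acceleration on the bucket and upload through the
# accelerate endpoint with multipart upload. Names are hypothetical.
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

BUCKET = "example-upload-bucket"  # placeholder bucket name

# One-time bucket configuration (B).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that routes requests through the accelerated edge endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Multipart upload (A): files above the threshold are split into parts that
# upload in parallel, and failed parts are retried individually.
transfer_config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # 8 MB
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=10,
)
s3.upload_file(
    "large-file.bin", BUCKET, "uploads/large-file.bin",
    Config=transfer_config,
)
```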