Question 8
A developer is attempting to access an Amazon S3 bucket in a member account in AWS Organizations. The developer is logged in to the account with user credentials and has received an access denied error with no bucket listed. The developer should have read-only access to all buckets in the account. A Solutions Architect has reviewed the permissions and found that the developer IAM user has been granted read-only access to all S3 buckets in the account. Which additional steps should the Solutions Architect take to troubleshoot the issue? Select TWO.

A. Check the ACLs for all S3 buckets.

B. Check the SCPs set at the organizational units (OUs).

C. Check if an appropriate IAM role is attached to the IAM user.

D. Check for the permissions boundaries set for the IAM user.

E. Check the bucket policies for all S3 buckets.

Solution

Correct: B, D

Explanation

A service control policy (SCP) may have been implemented that limits the API actions that are available for Amazon S3. An SCP applies to all users in the account regardless of the permissions assigned to their user accounts. Another potential cause of the issue is that a permissions boundary for the user limits the S3 API actions available to the user. A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundary. “Check the ACLs for all S3 buckets” is incorrect. With a bucket ACL the grantee is an AWS account or one of the predefined groups, and ACLs grant permissions on individual buckets and objects; they cannot grant the ability to list all buckets in the account. The developer has been unable to list any buckets at all, so an ACL is unlikely to be the cause. “Check the bucket policies for all S3 buckets” is incorrect. The error does not mention access being denied to any specific bucket, so it is more likely that the user has not been granted the API action to list the buckets.
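
As a quick check, both potential causes can be inspected with the AWS SDK. The sketch below uses boto3; the account ID and user name are placeholders, and in practice the Organizations call needs management-account credentials while the IAM call runs in the member account.

import boto3

org = boto3.client("organizations")   # management-account credentials
iam = boto3.client("iam")             # member-account credentials

# 1. List the SCPs attached directly to the member account (repeat for its OU and
#    the root to see every inherited policy).
scps = org.list_policies_for_target(
    TargetId="111122223333",              # placeholder member account ID
    Filter="SERVICE_CONTROL_POLICY",
)["Policies"]
for summary in scps:
    print(org.describe_policy(PolicyId=summary["Id"])["Policy"]["Content"])

# 2. Check whether a permissions boundary is attached to the developer's IAM user.
user = iam.get_user(UserName="developer")["User"]   # placeholder user name
print(user.get("PermissionsBoundary", "No permissions boundary set"))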

Question 9
A company recently noticed an increase in costs associated with Amazon EC2 instances and Amazon RDS databases. The company needs to be able to track the costs. The company uses AWS Organizations for all of their accounts. AWS CloudFormation is used for deploying infrastructure and all resources are tagged. The management team has requested that cost center numbers and project ID numbers are added to all future EC2 instances and RDS databases. What is the MOST efficient strategy a Solutions Architect should follow to meet these requirements?

A. Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.

B. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.

C. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.

D. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.

Solution

Correct: D

Explanation

The correct approach is to tag existing resources with Tag Editor, create cost allocation tags for the cost center and project ID, and use SCPs with conditions that prevent the creation of EC2 instances and RDS databases that do not have the required tags. “Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified” is incorrect. Permissions boundaries apply to IAM users and roles, whereas SCPs apply to entire AWS accounts and are easier to enforce for all users. “Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate” is incorrect. There is no mechanism here to enforce the application of tags. “Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role” is incorrect. AWS Config can be used for compliance, but a better solution is to enforce tags at creation time, and using Lambda to tag the resources would be complex in terms of identifying which tags to add to which resources.
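
To illustrate the enforcement half of the correct answer, the sketch below creates an SCP that denies launching EC2 instances or creating RDS databases when the CostCenter tag is missing from the request. The policy name and OU ID are placeholders, and a matching pair of statements would be added for the ProjectID tag.

import json
import boto3

require_tags_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUntaggedEC2",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
        },
        {
            "Sid": "DenyUntaggedRDS",
            "Effect": "Deny",
            "Action": "rds:CreateDBInstance",
            "Resource": "*",
            "Condition": {"Null": {"aws:RequestTag/CostCenter": "true"}},
        },
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="RequireCostTags",
    Description="Deny EC2/RDS creation without cost allocation tags",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(require_tags_scp),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-exampleid")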

Question 17
A company includes several business units that each use a separate AWS account and a parent company AWS account. The company requires a single AWS bill across all AWS accounts with costs broken out for each business unit. The company also requires that services and features be restricted in the business unit accounts and this must be governed centrally. Which combination of steps should a Solutions Architect take to meet these requirements? (Select TWO.)

A. Enable consolidated billing in the parent account billing console and link the business unit AWS accounts.

B. Use permissions boundaries applied to each business unit AWS account to define the maximum permissions available for services and features.

C. Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit AWS account to join the organization.

D. Use AWS Organizations to create a separate organization for each AWS account with all features enabled. Then, create trust relationships between the AWS organizations.

E. Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.

Solution

Correct: C, E

Explanation

To enable the required features you simply need to set up a single AWS organization in the parent account with all features enabled. The existing business unit AWS accounts can then be invited to join the organization. This setup automatically enables consolidated billing, which ensures a single AWS bill is received in the parent account with costs broken out by each AWS account. Service Control Policies (SCPs) can then be used to restrict the maximum available permissions to the services and features that the parent company wishes to allow in the member accounts. Once applied, all users in the member accounts are affected. “Use permissions boundaries applied to each business unit AWS account to define the maximum permissions available for services and features” is incorrect. Permissions boundaries are applied to IAM entities (users or roles), not to AWS accounts.
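
A rough boto3 sketch of these two steps is shown below; the member account ID and the list of approved services are placeholders, not values from the question.

import json
import boto3

org = boto3.client("organizations")

# Create the organization with all features enabled, then invite a business unit account.
org.create_organization(FeatureSet="ALL")
org.invite_account_to_organization(Target={"Id": "111122223333", "Type": "ACCOUNT"})

# Allow-list SCP permitting only approved services, attached to the member account.
# For the allow list to take effect, the default FullAWSAccess policy must also be
# detached from that target.
allow_list = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*", "cloudwatch:*"],   # placeholder approved services
        "Resource": "*",
    }],
}
scp = org.create_policy(
    Name="ApprovedServicesOnly",
    Description="Allow only approved services in business unit accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(allow_list),
)
org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"], TargetId="111122223333")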

Question 19
A global enterprise company is in the process of creating an infrastructure services platform for its users. The company has the following requirements: a) Centrally manage the creation of infrastructure services using a central AWS account. b) Distribute infrastructure services to multiple accounts in AWS Organizations. c) Follow the principle of least privilege to limit end users’ permissions for launching and managing applications. Which combination of actions using AWS services will meet these requirements? (Select TWO.)

A. Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy.

B. Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.

C. Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.

D. Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.

E. Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.

Solution

Correct: B, D

Explanation

“Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints” is incorrect. Users do not need full access; read-only access is sufficient because the launch constraint provides the permissions needed to launch products using an assigned role rather than the users’ own credentials. “Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3” is incorrect. When launching services using CloudFormation, the principal used (user or role) must have permissions for the AWS services being launched through the template, and this solution does not provide those permissions. “Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy” is incorrect. This uses a central account but does not offer a mechanism to distribute the templates to accounts in AWS Organizations, and managing access by adding users to bucket policies would be very hard to maintain.
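
The sketch below shows how a launch constraint might be attached with boto3 so that the launch role, rather than the end user, provides the provisioning permissions; the portfolio ID, product ID, and role ARN are placeholders.

import json
import boto3

sc = boto3.client("servicecatalog")
sc.create_constraint(
    PortfolioId="port-exampleid",
    ProductId="prod-exampleid",
    Type="LAUNCH",
    # Service Catalog assumes this role to provision the product's resources,
    # so end users do not need permissions for the underlying services.
    Parameters=json.dumps({"RoleArn": "arn:aws:iam::111122223333:role/SCLaunchRole"}),
    Description="Launch products with a dedicated least-privilege role",
)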

Question 20
A company requires that only the master account in AWS Organizations is able to purchase Amazon EC2 Reserved Instances. Current and future member accounts should be blocked from purchasing Reserved Instances. Which solution will meet these requirements?

A. Create an Amazon CloudWatch Events rule that triggers a Lambda function to terminate any Reserved Instances launched by member accounts.

B. Create an OU for the master account and each member account. Move the accounts into their respective OUs. Apply an SCP to the master account’s OU with the Allow effect for the ec2:PurchaseReservedInstancesOffering action.

C. Move all current member accounts to a new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.

D. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.

Solution

Correct: D

Explanation

Attaching a Deny SCP to the root of the organization ensures that all current and future member accounts inherit the policy, while the management (master) account itself is not restricted by SCPs and can still purchase Reserved Instances. “Create an OU for the master account and each member account. Move the accounts into their respective OUs. Apply an SCP to the master account’s OU with the Allow effect for the ec2:PurchaseReservedInstancesOffering action” is incorrect. This is a complex setup and does not deny the relevant API action in the member accounts. “Create an Amazon CloudWatch Events rule that triggers a Lambda function to terminate any Reserved Instances launched by member accounts” is incorrect. This approach is reactive rather than preventative, and Reserved Instances are a billing commitment rather than running resources that a Lambda function can terminate. “Move all current member accounts to a new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU” is incorrect. This will work for existing accounts, but if new accounts are added and not placed in the same OU they will not inherit the policy.
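
A minimal boto3 sketch of the correct option is shown below; the policy name is a placeholder and the root ID is looked up at run time.

import json
import boto3

org = boto3.client("organizations")

deny_ri_purchases = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:PurchaseReservedInstancesOffering",
        "Resource": "*",
    }],
}
scp = org.create_policy(
    Name="DenyRIPurchases",
    Description="Block Reserved Instance purchases in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_ri_purchases),
)

# Attaching the SCP to the organization root means every current and future member
# account inherits it; the management account is unaffected by SCPs.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"], TargetId=root_id)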

Question 24
A company uses Amazon RedShift for analytics. Several teams deploy and manage their own RedShift clusters and management has requested that the costs for these clusters are better managed. The management team has set budgets and once the budgetary thresholds have been reached a notification should be sent to a distribution list for managers. Teams should be able to view their RedShift cluster expenses to date. A Solutions Architect needs to create a solution that ensures the policy is centrally enforced in a multi-account environment. Which combination of steps should the Solutions Architect take to meet these requirements? (Select TWO.)

A. Install the unified CloudWatch Agent on the RedShift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached.

B. Create an AWS Service Catalog portfolio for each team. Add each team's Amazon RedShift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a product.

C. Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.

D. Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.

E. Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold.

Solution

Correct: B, D

Explanation

You can use AWS Budgets to track your service costs and usage within AWS Service Catalog, and you can associate budgets with AWS Service Catalog products and portfolios. AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed, or are forecasted to exceed, your budgeted amount. If a budget is associated with a product, you can view information about the budget on the Products and Product details pages; if a budget is associated with a portfolio, you can view it on the Portfolios and Portfolio details pages. These detail pages have a section with information about the associated budget, where you can see the budgeted amount, current spend, and forecasted spend, as well as view budget details and edit the budget. “Install the unified CloudWatch Agent on the RedShift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached” is incorrect. This agent is used on EC2 instances for sending additional metric data and logs to CloudWatch; it is not used for budgeting. “Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold” is incorrect. CloudTrail tracks API calls; it cannot be used for tracking billing data. “Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold” is incorrect. Billing data is collected automatically; you cannot create a custom metric for billing, although you can create an alarm on the existing billing metrics.
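
As a rough sketch, an existing budget (for example one created through the AWS::Budgets::Budget CloudFormation resource) can be associated with a team's portfolio so the team sees spend to date; the budget name and portfolio ID below are placeholders.

import boto3

sc = boto3.client("servicecatalog")
sc.associate_budget_with_resource(
    BudgetName="team-a-redshift-budget",   # placeholder budget name
    ResourceId="port-exampleid",           # placeholder portfolio (or product) ID
)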

Question 29
A company uses multiple AWS accounts. There are separate accounts for development, staging, and production environments. Some new requirements have been issued to control costs and improve the overall governance of the AWS accounts. The company must be able to calculate costs associated with each project and each environment. Commonly deployed IT services must be centrally managed and business units should be restricted to deploying pre-approved IT services only. Which combination of actions should a Solutions Architect take to meet these requirements? (Select TWO.)

A. Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded.

B. Apply environment, cost center, and application name tags to all resources that accept tags.

C. Configure custom budgets and define thresholds using AWS Cost Explorer.

D. Use AWS Savings Plans to configure budget thresholds and send alerts to management.

E. Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.

Solution

Correct: B, E

Explanation

Applying environment, cost center, and application name tags to all resources allows costs to be calculated for each project and environment, and AWS Service Catalog portfolios allow commonly deployed IT services to be centrally managed while restricting business units to pre-approved products. “Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded” is incorrect. There is no requirement for billing alarms in the scenario. “Use AWS Savings Plans to configure budget thresholds and send alerts to management” is incorrect, as Savings Plans is a pricing model, not a service, and cannot be used for sending alerts. “Configure custom budgets and define thresholds using AWS Cost Explorer” is incorrect. Cost Explorer is used for viewing cost-related information, not for creating budgets.
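
A rough boto3 sketch of publishing a pre-approved CloudFormation template as a Service Catalog product in a business unit's portfolio is shown below; the names and the template URL are placeholders.

import boto3

sc = boto3.client("servicecatalog")

portfolio = sc.create_portfolio(
    DisplayName="BusinessUnitA-ApprovedServices",
    ProviderName="Central IT",
)["PortfolioDetail"]

product = sc.create_product(
    Name="StandardWebApp",
    Owner="Central IT",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {"LoadTemplateFromURL": "https://s3.amazonaws.com/example-bucket/webapp.yaml"},
    },
)["ProductViewDetail"]["ProductViewSummary"]

# Business unit users can only launch what has been added to their portfolio.
sc.associate_product_with_portfolio(ProductId=product["ProductId"], PortfolioId=portfolio["Id"])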

Question 33
A company runs a single application in an AWS account. The application uses an Auto Scaling Group of Amazon EC2 instances with a combination of Reserved Instances (RIs) and On-Demand instances. To maintain cost-effectiveness the RIs should cover 70% of the workload. The solution should include the ability to alert the DevOps team if coverage drops below the 70% threshold. Which set of steps should a Solutions Architect take to create the report and alert the DevOps team?

A. Use AWS Cost Explorer to configure a report for RI utilization and set the utilization target to 70%. Configure an alert that notifies the DevOps team.

B. Use AWS Budgets to create a budget for RI coverage and set the threshold to 70%. Configure an alert that notifies the DevOps team.

C. Use AWS Cost Explorer to create a budget for RI coverage and set the threshold to 70%. Configure an alert that notifies the DevOps team.

D. Use the AWS Billing and Cost Management console to create a reservation budget for RI utilization, set the utilization to 70%. Configure an alert that notifies the DevOps team.

Solution

Correct: B

Explanation

AWS Budgets gives you the ability to set custom budgets that alert you when you exceed (or are forecasted to exceed) your budget thresholds. ... From there, you can dive deeper using the same filters that you see in AWS Cost Explorer to set budgets based on more specific use cases. AWS Budgets supports reservation budgets, so a Reserved Instance coverage budget can be created with a 70% target and an alert configured to notify the DevOps team when coverage drops below that threshold.
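
A rough sketch of such a reservation budget using the AWS Budgets CreateBudget API is shown below; the account ID, email address, and exact Budget and Notification field values are placeholders that may need adjusting.

import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "ri-coverage-70",
        "BudgetType": "RI_COVERAGE",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "70", "Unit": "PERCENTAGE"},
        "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
    },
    NotificationsWithSubscribers=[{
        # Alert when actual coverage drops below the 70% target.
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "LESS_THAN",
            "Threshold": 70,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "devops-team@example.com"}],
    }],
)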

Question 55
The security department of a large company with several AWS accounts wishes to centralize the management of identities and AWS permissions. The design should also synchronize authentication credentials with the company’s existing on-premises identity management provider (IdP). Which solution will meet the security department requirements?

A. Create a SAML-based identity management provider in a central account and map IAM roles that provide the necessary permissions for users. Map users in the on-premises IdP groups to IAM roles. Use cross-account access to the other AWS accounts.

B. Deploy the required IAM users, groups, roles, and policies in every AWS account. Create an AWS Organization and federate the on-premises identity management provider and the AWS accounts.

C. Create an AWS Organization with a management account that defines the SCPs for member accounts. Create a SAML-based identity management provider in each account and map users in the on-premises IdP groups to IAM roles.

D. Create a SAML-based identity management provider in a central account and map IAM roles that provide the necessary permissions for users. Create a centralized AWS Lambda function that replicates the identities in the on-premises IdP groups to the AWS accounts.

Solution

Correct: A

Explanation

A SAML-based IdP can be created that integrates with AWS IAM. In this configuration you map IAM roles that are assumed by authenticated identities. These IAM roles must have the correct permissions for users. The users can then assume roles in the other AWS accounts in order to perform actions in those accounts. This solution centralizes the management of identities, federation, and permissions and allows the users to access each account as needed.
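
A rough boto3 sketch of the central-account side is shown below; the metadata file and role name are placeholders agreed with the on-premises IdP.

import json
import boto3

iam = boto3.client("iam")

# Register the on-premises IdP as a SAML provider using its exported metadata document.
with open("idp-metadata.xml") as f:
    provider = iam.create_saml_provider(Name="CorporateIdP", SAMLMetadataDocument=f.read())

# Role that federated users assume; IdP groups are mapped to roles like this one.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider["SAMLProviderArn"]},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
    }],
}
iam.create_role(RoleName="FederatedDevelopers", AssumeRolePolicyDocument=json.dumps(trust_policy))
# Permissions policies are then attached to the role, and matching roles in the other
# AWS accounts are reached through cross-account access.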

Question 60
A company is creating an account structure on AWS. There will be separate accounts for the production and testing environments. The Solutions Architect wishes to implement centralized control of security identities and permissions to access the environments. Which solution is most appropriate for these requirements?

A. Create an AWS Organization that includes the production and testing accounts. Create IAM user accounts in the production and testing accounts and implement service control policies (SCPs) to centrally control permissions.

B. Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.

C. Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the identity account and delegate access to the production and testing accounts.

D. Create all user accounts in the production account. Create roles for access in the production account and testing accounts. Grant cross-account access from the production account to the testing account.

Solution

Correct: B

Explanation

The AWS best practice for this situation is to use an identity account to store all user and service accounts. You then create roles in the accounts you want to access and delegate permissions so that the identity account users (or the whole account) can assume those roles. This provides centralized control of security identities. For example, a user in the identity account can assume a role in the production account (the resource account) and access an S3 bucket in that account.
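
A minimal boto3 sketch of that cross-account access is shown below; the role ARN is a placeholder for a role in the production account that trusts the identity account.

import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/ProductionAccess",   # placeholder role ARN
    RoleSessionName="dev-user-session",
)["Credentials"]

# Temporary credentials scoped to whatever the role permits in the production account.
prod_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in prod_s3.list_buckets()["Buckets"]])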

Question 64
A company has created a management account and added several member accounts in an AWS Organization. The security team wishes to restrict access to a specific set of AWS services in the existing member accounts. How can this requirement be implemented MOST efficiently?

A. Create an IAM role in each member account and attach a policy to the role that denies access to the specific set of services. Create user accounts in the management account and instruct users to assume the IAM role in each member account to gain access to services.

B. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group and add all IAM users to the group.

C. Create a service control policy (SCP) that denies access to the specific set of services and apply the policy to the root of the organization.

D. Add the member accounts to a single organizational unit (OU). Create a service control policy (SCP) that denies access to the specific set of services and attach it to the OU.

Solution

Correct: D

Explanation

“Create a service control policy (SCP) that denies access to the specific set of services and apply the policy to the root of the organization” is incorrect. This would apply the SCP to the existing member accounts and also to any new accounts that are added later, which may not be desired. The best solution is to use an SCP to control access to the AWS services and apply that SCP to an OU that contains the member accounts. The SCP limits the maximum available permissions for the entire account, so users in those accounts will not be able to access the restricted services even if their IAM permissions would otherwise allow it.
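
A rough boto3 sketch of the OU-based approach is shown below; the account IDs, OU name, and the restricted services are placeholders.

import json
import boto3

org = boto3.client("organizations")
root_id = org.list_roots()["Roots"][0]["Id"]

# Group the existing member accounts under one OU.
ou = org.create_organizational_unit(ParentId=root_id, Name="RestrictedMembers")
ou_id = ou["OrganizationalUnit"]["Id"]
for account_id in ["111122223333", "444455556666"]:       # placeholder account IDs
    org.move_account(AccountId=account_id, SourceParentId=root_id, DestinationParentId=ou_id)

# Deny the specific set of services for every account in that OU.
deny_services = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["lightsail:*", "gamelift:*"],   # placeholder restricted services
        "Resource": "*",
    }],
}
scp = org.create_policy(
    Name="DenyRestrictedServices",
    Description="Deny the specific set of services in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_services),
)
org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"], TargetId=ou_id)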

Question 70
A global enterprise company is in the process of creating an infrastructure services platform for its users. The company has the following requirements: a) Centrally manage the creation of infrastructure services using a central AWS account. b) Distribute infrastructure services to multiple accounts in AWS Organizations. c) Follow the principle of least privilege to limit end users’ permissions for launching and managing applications. Which combination of actions using AWS services will meet these requirements? (Select TWO.)

A. Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy.

B. Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.

C. Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.

D. Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.

E. Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers, add all users to the group. Apply launch constraints.

Solution

Correct: B, C

Explanation

There are three core requirements for this solution. The first two requirements are satisfied by adding each CloudFormation template to a product in AWS Service Catalog in a central AWS account and then sharing the portfolio with AWS Organizations. In this model, the central AWS account hosts the organizationally approved infrastructure services and shares them to other AWS accounts in the company. AWS Service Catalog administrators can reference an existing organization in AWS Organizations when sharing a portfolio, and they can share the portfolio with any trusted organizational unit (OU) in the organization tree structure. The third requirement is satisfied by using a permissions policy with read-only access to AWS Service Catalog combined with a launch constraint that uses a dedicated IAM role to ensure least-privilege access. Without a launch constraint, end users must launch and manage products using their own IAM credentials. To do so, they must have permissions for AWS CloudFormation, the AWS services used by the products, and AWS Service Catalog. By using a launch role, you can instead limit the end users’ permissions to the minimum that they require for that product.
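
The portfolio-sharing step could look roughly like the boto3 sketch below, run from the central (management) account; the portfolio ID and OU ID are placeholders.

import boto3

sc = boto3.client("servicecatalog")
sc.enable_aws_organizations_access()   # one-time: allow sharing portfolios through Organizations
sc.create_portfolio_share(
    PortfolioId="port-exampleid",
    OrganizationNode={"Type": "ORGANIZATIONAL_UNIT", "Value": "ou-exampleid"},
)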

Question 72
A company includes several business units that each use a separate AWS account and a parent company AWS account. The company requires a single AWS bill across all AWS accounts with costs broken out for each business unit. The company also requires that services and features be restricted in the business unit accounts and this must be governed centrally. Which combination of steps should a Solutions Architect take to meet these requirements? (Select TWO.)

A. Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.

B. Use AWS Organizations to create a separate organization for each AWS account with all features enabled. Then, create trust relationships between the AWS organizations.

C. Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit AWS account to join the organization.

D. Enable consolidated billing in the parent account billing console and link the business unit AWS accounts.

E. Use permissions boundaries applied to each business unit AWS account to define the maximum permissions available for services and features.

Solution

Correct: A, C

Explanation

To enable the required features you simply need to set up a single AWS organization in the parent account with all features enabled. The existing business unit AWS accounts can then be invited to join the organization. This setup automatically enables consolidated billing, which ensures a single AWS bill is received in the parent account with costs broken out by each AWS account. Service Control Policies (SCPs) can then be used to restrict the maximum available permissions to the services and features that the parent company wishes to allow in the member accounts. Once applied, all users in the member accounts are affected.

Question 73
A company requires that only the master account in AWS Organizations is able to purchase Amazon EC2 Reserved Instances. Current and future member accounts should be blocked from purchasing Reserved Instances. Which solution will meet these requirements?

A. Create an OU for the master account and each member account. Move the accounts into their respective OUs. Apply an SCP to the master account’s OU with the Allow effect for the ec2:PurchaseReservedInstancesOffering action.

B. Move all current member accounts to a new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.

C. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.

D. Create an Amazon CloudWatch Events rule that triggers a Lambda function to terminate any Reserved Instances launched by member accounts.

Solution

Correct: C

Explanation

The only solution that works for both existing and future member accounts is to apply a Deny policy to the root of the organization. When you attach a policy to the organization root, all OUs and accounts in the organization inherit that policy, which ensures that any new accounts that are added will inherit the policy automatically. SCPs affect only member accounts in the organization. They have no effect on users or roles in the management account (also known as the master account), so users in the management account are still able to purchase Reserved Instances. Note the following behavior in relation to policy inheritance: you can attach policies to organization entities (the organization root, an organizational unit (OU), or an account). When you attach a policy to the organization root, all OUs and accounts in the organization inherit that policy. When you attach a policy to a specific OU, accounts that are directly under that OU or any child OU inherit the policy. When you attach a policy to a specific account, it affects only that account.

Question 74
A company is using AWS CloudFormation templates for infrastructure provisioning. The templates are hosted in the company private GitHub repository. The company has experienced several issues with updates to the templates that have caused errors when executing the updates and creating the environment. A Solutions Architect must resolve these issues and implement automated testing of the CloudFormation template updates. How can the Solutions Architect accomplish these requirements?

A. Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeBuild to create and execute a change set from the templates in GitHub. Configure CodeBuild to test the deployment with testing scripts.

B. Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeDeploy to create and execute a change set. Configure CodeDeploy to test the environment using testing scripts run by AWS CodeBuild.

C. Use AWS CodePipeline to create a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.

D. Use AWS CodePipeline to create and execute a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeDeploy. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.

Solution

Correct: C

Explanation

You can apply continuous delivery practices to your AWS CloudFormation stacks using AWS CodePipeline. AWS CodePipeline is a continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. When using CloudFormation change sets you first create the change set, which allows you to preview how proposed changes to a stack might impact your running resources before anything is modified. Then, once you are happy with the changes, the change set can be executed, which updates the stack. You can use AWS CodeBuild to both build and test code. CodeBuild can be configured with custom scripts to run tests, and the result of the tests can determine subsequent actions such as proceeding to deployment.
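
The two change set actions that CodePipeline performs correspond to the CloudFormation API calls sketched below with boto3; the stack name and template URL are placeholders.

import boto3

cfn = boto3.client("cloudformation")

# Create the change set: nothing in the running stack is modified yet.
cfn.create_change_set(
    StackName="app-stack",
    ChangeSetName="app-stack-update",
    TemplateURL="https://s3.amazonaws.com/example-bucket/template.yaml",
    ChangeSetType="UPDATE",
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="app-stack", ChangeSetName="app-stack-update")

# Review the proposed changes (this is where the CodeBuild tests fit in the pipeline).
for change in cfn.describe_change_set(
        StackName="app-stack", ChangeSetName="app-stack-update")["Changes"]:
    print(change["ResourceChange"]["Action"], change["ResourceChange"]["LogicalResourceId"])

# Only after testing succeeds is the change set executed to update the stack.
cfn.execute_change_set(StackName="app-stack", ChangeSetName="app-stack-update")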

Question 75
A company currently manages a fleet of Amazon EC2 instances running Windows and Linux in public and private subnets. The operations team currently connects over the Internet to manage the instances as there is no connection to the corporate network. Security groups have been updated to allow the RDP and SSH protocols from any source IPv4 address. There have been reports of malicious attempts to access the resources and the company wishes to implement the most secure solution for managing the instances. Which strategy should a Solutions Architect recommend?

A. Configure an IPSec Virtual Private Network (VPN) connecting the corporate network to the Amazon VPC. Update security groups to allow connections over SSH and RDP from the corporate network only.

B. Deploy a server on the corporate network that can be used for managing EC2 instances. Update the security groups to allow connections over SSH and RDP from the on-premises management server only.

C. Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users with permission to manage the instances.

D. Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.

Solution

Correct: C

Explanation

The most secure option presented is to use AWS Systems Manager Session Manager. Session Manager is a fully managed AWS Systems Manager capability that lets you manage EC2 instances, on-premises instances, and virtual machines (VMs) through an interactive one-click browser-based shell or through the AWS Command Line Interface (AWS CLI). Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also makes it easy to comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with simple one-click cross-platform access to your managed instances.
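
Access can then be restricted to the right users with an identity-based policy such as the hedged sketch below, which follows the tag-based pattern documented for Session Manager; the policy name, tag key, and tag value are placeholders.

import json
import boto3

iam = boto3.client("iam")

session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow sessions only to instances tagged for this team.
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringEquals": {"ssm:resourceTag/Team": "operations"}},
        },
        {
            # Allow users to resume and terminate only their own sessions.
            "Effect": "Allow",
            "Action": ["ssm:TerminateSession", "ssm:ResumeSession"],
            "Resource": "arn:aws:ssm:*:*:session/${aws:username}-*",
        },
    ],
}
iam.create_policy(PolicyName="OpsSessionManagerAccess", PolicyDocument=json.dumps(session_policy))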

Question 76
A developer is attempting to access an Amazon S3 bucket in a member account in AWS Organizations. The developer is logged in to the account with user credentials and has received an access denied error with no bucket listed. The developer should have read-only access to all buckets in the account. A Solutions Architect has reviewed the permissions and found that the developer IAM user has been granted read-only access to all S3 buckets in the account. Which additional steps should the Solutions Architect take to troubleshoot the issue? (Select TWO.)

A. Check the bucket policies for all S3 buckets.

B. Check the SCPs set at the organizational units (OUs).

C. Check for the permissions boundaries set for the IAM user.

D. Check if an appropriate IAM role is attached to the IAM user.

E. Check the ACLs for all S3 buckets.

Solution

Correct: B, C

Explanation

A service control policy (SCP) may have been implemented that limits the API actions that are available for Amazon S3. This will apply to all users in the account regardless of the permissions assigned to their user accounts. Another potential cause of the issue is that the permissions boundary for the user limits the S3 API actions available to the user. A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundary.

Question 77
A company recently noticed an increase in costs associated with Amazon EC2 instances and Amazon RDS databases. The company needs to be able to track the costs. The company uses AWS Organizations for all of their accounts. AWS CloudFormation is used for deploying infrastructure and all resources are tagged. The management team has requested that cost center numbers and project ID numbers are added to all future EC2 instances and RDS databases. What is the MOST efficient strategy a Solutions Architect should follow to meet these requirements?

A. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.

B. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.

C. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.

D. Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.

Solution

Correct: A

Explanation

You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses them to organize your resource costs on your cost allocation report, making it easier for you to categorize and track your AWS costs. By adding tags to all new resources, the management team will be better able to track costs and allocate them to specific cost centers and projects. Service Control Policies (SCPs) can be used to limit the maximum available permissions in an account in AWS Organizations, and conditional statements can be added to SCPs. In this case an SCP can be created with a condition that only allows resources to be created if the required cost center and project ID tags are specified.
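
The Tag Editor half of the answer can also be done programmatically; the sketch below uses the Resource Groups Tagging API with placeholder ARNs and tag values, after which the tags still need to be activated as cost allocation tags in the Billing console.

import boto3

tagging = boto3.client("resourcegroupstaggingapi")
tagging.tag_resources(
    ResourceARNList=[
        "arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0",
        "arn:aws:rds:us-east-1:111122223333:db:example-db",
    ],
    Tags={"CostCenter": "1234", "ProjectID": "ABC-001"},
)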

Question 78
A university is running computational algorithms that require large amounts of compute power. The algorithms are being run using a high-performance compute cluster on Amazon EC2 Spot instances. Each time an instance launches a DNS record must be created in an Amazon Route 53 private hosted zone. When the instance is terminated the DNS record must be deleted. The current configuration uses an Amazon CloudWatch Events rule that triggers an AWS Lambda function to create the DNS record. When scaling the solution to thousands of instances the university has experienced “HTTP 400 error (Bad request)” errors in the Lambda logs. The response header also includes a status code element with a value of “Throttling” and a status message element with a value of “Rate exceeded”. Which combination of steps should the Solutions Architect take to resolve these issues? (Select THREE.)

A. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.

B. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.

C. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.

D. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. Modify the function to make a single API call to Amazon Route 53 with all records read from the Kinesis data stream.

Solution

Correct: C, D, F

Explanation

The errors in the Lambda logs indicate that throttling is occurring. Throttling is intended to protect your resources and downstream applications. Though Lambda automatically scales to accommodate incoming traffic, functions can still be throttled for various reasons. In this case it is most likely that the throttling is not occurring in Lambda itself but in the API calls made to Amazon Route 53. In Route 53 you are limited (by default) to five requests per second per AWS account. If you submit more than five requests per second, Amazon Route 53 returns an HTTP 400 error (Bad request). The response header also includes a Code element with a value of Throttling and a Message element with a value of Rate exceeded. The resolution here is to place the data for the DNS records into an SQS queue where it can buffer. AWS Lambda can then poll the queue and process the messages in batches, reducing the number of Route 53 API calls and the likelihood of further throttling errors.
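
The Lambda side of that buffering approach could look roughly like the sketch below; the hosted zone ID environment variable and the message format (hostname and private IP per message) are assumptions.

import json
import os
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = os.environ["HOSTED_ZONE_ID"]   # assumed environment variable

def handler(event, context):
    # Each SQS message is assumed to carry the instance details captured by the
    # CloudWatch Events rule.
    changes = []
    for record in event["Records"]:
        body = json.loads(record["body"])
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": body["hostname"],
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": body["private_ip"]}],
            },
        })

    if changes:
        # One Route 53 API call covers the whole batch, staying under the rate limit.
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": changes},
        )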

Question 79
A Solutions Architect is helping to standardize a company’s method of deploying applications to AWS using AWS CodePipeline and AWS CloudFormation. A group of developers create applications using JavaScript and TypeScript and they are concerned about needing to learn new domain-specific languages. They are also reluctant to lose access to features of the existing languages such as looping. How can the Solutions Architect address the developers’ concerns and quickly bring the applications up to deployment standards?

A. Use a third party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes. Orchestrate the CodeBuild job using CodePipeline and use CloudFormation for deployment.

B. Create CloudFormation templates and reuse parts of the JavaScript and TypeScript code as instance user data. Use the AWS Cloud Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using these templates.

C. Define the AWS resources using JavaScript or TypeScript. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developers’ code and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.

D. Use AWS SAM and specify a serverless transform. Add the JavaScript and TypeScript code as metadata to the template file. Use AWS CodeBuild to build the code and output a CloudFormation template.

Solution

Correct: C

Explanation

The AWS CDK is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. You can use the AWS CDK to define your cloud resources in a familiar programming language. The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.NET. Developers can use one of the supported programming languages to define reusable cloud components known as constructs, which are composed together into stacks and apps.
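
A minimal AWS CDK v2 sketch in Python is shown below (the same pattern applies in TypeScript or JavaScript for these developers); the stack and bucket names are placeholders, and running "cdk synth" against the app produces the CloudFormation template.

import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct

class AppStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Ordinary language features such as loops still work, unlike a raw template.
        for env_name in ["dev", "test", "prod"]:
            s3.Bucket(self, f"DataBucket-{env_name}", versioned=True)

app = cdk.App()
AppStack(app, "AppStack")
app.synth()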
