Question 12
An agricultural company is rolling out thousands of devices that will send environmental data to a data platform. The platform will process and analyze the data and provide information back to researchers. The devices will send 8 KB of data every second and the solution must support near real-time analytics, provide durability for the data, and deliver results to a data warehouse. Which strategy should a solutions architect use to meet these requirements?

A. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster.

B. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR.

C. Use Amazon Kinesis Data Streams to collect the inbound data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.

D. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance.

Solution

Correct: C

Explanation

The solution must support near real-time analytics. Amazon Kinesis Data Streams can collect the inbound data, with Kinesis client applications processing and analyzing it using Amazon EMR. The solution must also deliver results to a data warehouse, and Amazon Redshift is ideal for this purpose.

"Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance" is incorrect. Firehose does not use Kinesis clients; it loads data directly into a destination. Amazon RDS is also not a data warehouse.

"Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster" is incorrect. Amazon S3 is not suited to near real-time ingestion of streaming data at this scale; Amazon Kinesis is a better fit for this use case. Analyzing data from Amazon SQS with Kinesis also does not make sense.

"Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR" is incorrect. API Gateway is not an appropriate ingestion service for continuous streaming data from thousands of devices; Kinesis is purpose-built for this workload.
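To illustrate the ingestion side of the correct option, here is a minimal Python (boto3) sketch of a device producer putting readings onto a Kinesis data stream. The stream name, region, and record fields are assumptions for the example, not part of the question.

```python
# Minimal sketch: a device producer sending ~8 KB readings to Kinesis Data Streams.
# The stream name "environmental-data" and the record fields are assumed for illustration.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_reading(device_id: str, reading: dict) -> None:
    """Put one sensor reading onto the stream, partitioned by device ID."""
    kinesis.put_record(
        StreamName="environmental-data",      # assumed stream name
        Data=json.dumps({"device_id": device_id, **reading}).encode("utf-8"),
        PartitionKey=device_id,               # spreads devices across shards
    )

if __name__ == "__main__":
    send_reading("sensor-001", {"temperature_c": 21.4, "humidity_pct": 63})
```

Kinesis client applications (or EMR) would then read from the stream, and the analyzed results would be loaded into the Redshift cluster.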

Question 13
An eCommerce company runs a successful website with a growing base of customers. The website is becoming popular internationally and demand is increasing quickly. The website is currently hosted in an on-premises data center with web servers and a MySQL database. The company plans to migrate the workloads to AWS. A Solutions Architect has been asked to create a solution that improves security, reliability, and availability, reduces latency, and reduces maintenance. Which combination of steps should the Solutions Architect take to meet these requirements? (Select THREE.)

A. Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.

B. Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.

C. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.

D. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security.

E. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security.

F. Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.

Solution

Correct: B, D, F

Explanation

This is a straightforward migration to the cloud with a standard set of security, performance, and reliability requirements. To meet them, an Auto Scaling group should be created across multiple AZs for the web layer and placed behind an ALB to distribute incoming connections. For the database layer, an Aurora MySQL DB cluster with an Aurora Replica in another AZ provides Multi-AZ failover, which keeps the database layer highly available and, because it is a managed service, reduces maintenance. To improve performance for global users, host static content in Amazon S3 and use the Amazon CloudFront CDN to cache it in edge locations around the world. Adding AWS WAF improves website security.
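To show what the web-tier step could look like in practice, the following is a minimal boto3 sketch that creates an Auto Scaling group spanning two Availability Zones and attaches it to an ALB target group. The launch template name, subnet IDs, and target group ARN are placeholder assumptions.

```python
# Minimal sketch: an Auto Scaling group across two AZs, registered with an ALB target group.
# All names, subnet IDs, and ARNs below are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    # One subnet per Availability Zone gives the web layer high availability.
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
    # Registering with the ALB target group lets the load balancer distribute traffic.
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
)
```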

Question 23
A Solutions Architect needs to design the architecture for an application that requires high availability within and across AWS Regions. The design must support failover to the second Region within 1 minute and must minimize the impact on the user experience. The application will include three tiers, the web tier, application tier and NoSQL data tier. Which combination of steps will meet these requirements? (Select THREE.)

A. Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location.

B. Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.

C. Run the web and application tiers in both Regions in an active/passive configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.

D. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.

E. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.

F. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes.

Solution

Correct: B, C, D

Explanation

The requirements can be achieved by using Amazon DynamoDB with a global table. DynamoDB is a NoSQL database, so it fits the requirement, and a global table allows reads and writes to occur in both Regions. For the web and application tiers, Auto Scaling groups should be configured; due to the 1-minute RTO these must run in an active/passive configuration. The most cost-effective pricing model that still guarantees resources are available when needed is a combination of zonal Reserved Instances and On-Demand Instances. To fail over between the Regions, a Route 53 failover routing policy can be configured with a TTL of 30 seconds on the record. Clients must then resolve against Route 53 every 30 seconds to get the latest record, so in a failover scenario they are redirected to the secondary site within the 1-minute target if the primary site is unhealthy.

"Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources" is incorrect. Spot Instances may not be available if the maximum price configured is exceeded, which could leave the tiers without capacity when it is needed.

"Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location" is incorrect. Aurora is a relational database, not a NoSQL database. Also, an Aurora global database allows writes only in the primary Region; the secondary Region is read-only.

"Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes" is incorrect. The weightings would need to be updated manually to redirect traffic, and a 30-minute TTL is far too high because clients would cache the result for 30 minutes.
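The failover routing piece could be configured roughly as in the boto3 sketch below, which upserts primary and secondary failover records with a 30-second TTL. The hosted zone ID, domain name, ALB DNS names, and health check ID are placeholder assumptions.

```python
# Minimal sketch: Route 53 failover routing with a 30-second TTL.
# Hosted zone, record name, ALB endpoints, and health check ID are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "CNAME",
            "SetIdentifier": "primary-region",
            "Failover": "PRIMARY",
            "TTL": 30,  # clients re-resolve every 30 seconds
            "ResourceRecords": [{"Value": "primary-alb.us-east-1.elb.amazonaws.com"}],
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "CNAME",
            "SetIdentifier": "secondary-region",
            "Failover": "SECONDARY",
            "TTL": 30,
            "ResourceRecords": [{"Value": "dr-alb.us-west-2.elb.amazonaws.com"}],
        }},
    ]},
)
```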

Question 34
A company captures financial transactions in Amazon DynamoDB tables. The security team is concerned about identifying fraudulent behavior and has requested that all changes to items stored in DynamoDB tables must be logged within 30 minutes. How can a Solutions Architect meet this requirement?

A. Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected.

B. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.

C. Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.

D. Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected.

Solution

Correct: B

Explanation

DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table. DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. In this case the Lambda function can process the data and place it in a Kinesis Data Stream where Data Analytics can analyze the data and send an SNS notification if any fraudulent behavior is detected.
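A minimal sketch of the Lambda piece is shown below, assuming a Kinesis data stream named transaction-changes already exists: the handler receives DynamoDB Streams records and forwards each item-level change to Kinesis for downstream analysis.

```python
# Minimal sketch: Lambda handler triggered by DynamoDB Streams that forwards changes
# to a Kinesis data stream. The stream name "transaction-changes" is assumed.
import json
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    """Forward each DynamoDB Streams record to Kinesis for analysis."""
    records = event.get("Records", [])
    for record in records:
        change = {
            "event_name": record["eventName"],            # INSERT, MODIFY, or REMOVE
            "keys": record["dynamodb"].get("Keys", {}),
            "new_image": record["dynamodb"].get("NewImage", {}),
        }
        kinesis.put_record(
            StreamName="transaction-changes",
            Data=json.dumps(change).encode("utf-8"),
            PartitionKey=json.dumps(change["keys"]),
        )
    return {"processed": len(records)}
```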

Question 44
An eCommerce website consists of a two-tier architecture. Amazon EC2 instances in an Auto Scaling group are used for the web server layer behind an Application Load Balancer (ALB). The web servers run a PHP application on Apache Tomcat. The database layer runs on an Aurora MySQL database instance. Recently, a large sales event caused some errors to occur for customers when placing orders on the website. The operations team collected logs from the web servers and reviewed Aurora DB cluster performance metrics. Several web servers were terminated by the ASG before the logs could be collected and the Aurora metrics were not sufficient for query performance analysis. Which combination of steps should a Solutions Architect take to improve application performance visibility during peak traffic events? (Select THREE.)

A. Configure AWS CloudTrail to collect API activity from Amazon EC2 and Aurora and analyze with Amazon Athena.

B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for PHP.

C. Configure the Aurora MySQL DB cluster to generate slow query logs by setting parameters in the parameter group.

D. Configure the Aurora MySQL DB cluster to generate error logs by setting parameters in the parameter group.

E. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.

F. Configure an Amazon EventBridge rule that triggers Lambda upon Aurora error events and saves logs to Amazon S3 for analysis with Amazon Athena.

Solution

Correct: B, C, E

Explanation

AWS CloudTrail is an auditing service, whereas AWS X-Ray provides distributed tracing and performance analysis. For Amazon Aurora you can monitor the MySQL error log, slow query log, and general log. The error log is generated by default; the slow query and general logs are generated by setting parameters in a DB parameter group. This data helps troubleshoot query performance issues in the Aurora database layer.

AWS X-Ray collects data about requests that your application serves and provides tools to view, filter, and gain insights into that data to identify issues and opportunities for optimization. You can use the AWS SDK to develop programs that use the X-Ray API. Because the application runs on PHP, the X-Ray SDK for PHP should be used to trace incoming HTTP requests and SQL queries.

Some instances were terminated by the ASG before their logs could be collected. To prevent this from recurring, the CloudWatch Logs agent can be installed on the instances; it streams the Apache logs to CloudWatch Logs at a configurable interval, so the logs are preserved even if an instance is terminated.

"Configure AWS CloudTrail to collect API activity from Amazon EC2 and Aurora and analyze with Amazon Athena" is incorrect. API activity records who performed which API action and when; this audit data is not as useful for troubleshooting performance as metrics, tracing, and logs.

"Configure an Amazon EventBridge rule that triggers Lambda upon Aurora error events and saves logs to Amazon S3 for analysis with Amazon Athena" is incorrect. EventBridge is not the mechanism for capturing Aurora database logs for performance analysis.
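Enabling the Aurora MySQL slow query log could look roughly like the boto3 sketch below; the parameter group name and threshold values are assumptions for illustration.

```python
# Minimal sketch: enable the slow query log on an Aurora MySQL cluster by modifying
# its DB cluster parameter group. The group name and values are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="ecommerce-aurora-params",
    Parameters=[
        {"ParameterName": "slow_query_log", "ParameterValue": "1",
         "ApplyMethod": "immediate"},
        {"ParameterName": "long_query_time", "ParameterValue": "2",  # seconds
         "ApplyMethod": "immediate"},
        {"ParameterName": "log_output", "ParameterValue": "FILE",
         "ApplyMethod": "immediate"},
    ],
)
```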

Question 47
A manufacturing company collects data from IoT devices in JSON format. The data is collected, transformed, and stored in a data warehouse for analysis using an analytics tool that uses ODBC. The performance of the current solution suffers under high loads due to insufficient compute capacity and incoming data is often lost. The application will be migrated to AWS. The solution must support the current analytics tool, resolve the compute constraints, and be cost-effective. Which solution meets these requirements?

A. Re-architect the application. Load the data into Amazon S3. Use AWS Lambda to transform the data. Create an Amazon DynamoDB global table across two Regions to store the data and use Amazon Elasticsearch to query the data.

B. Replatform the application. Use Amazon API Gateway for data ingestion. Use AWS Lambda to transform the JSON data. Create an Amazon Aurora PostgreSQL DB cluster with an Aurora Replica in another Availability Zone. Use Amazon QuickSight to generate reports and visualize data.

C. Re-architect the application. Load the data into Amazon S3. Use AWS Glue to transform the data. Store the table schema in an AWS Glue Data Catalog. Use Amazon Athena to query the data.

D. Re-architect the application. Load the data into Amazon S3. Use Amazon Kinesis Data Analytics to transform the data. Create an external schema in an AWS Glue Data Catalog. Use Amazon Redshift Spectrum to query the data.

Solution

Correct: C

Explanation

There are multiple options that could work here, but we must select the most cost-effective and resilient solution that also resolves the compute constraints. Using a fully serverless architecture meets all of these goals. The JSON data can be initially stored in Amazon S3, and AWS Glue can be pointed at the source bucket to run transformations on the data and store the results in another S3 bucket. The Glue Data Catalog is used to store metadata about the data in the S3 data lake. Athena can then be used to query the transformed data in the S3 bucket, and the Amazon Athena ODBC driver can be used to connect to Athena from the existing analytics tool.
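Once the transformed data and its schema are registered in the Glue Data Catalog, a query against it could look like the boto3 sketch below. The database, table, and results-bucket names are assumptions; the existing analytics tool would connect through the Athena ODBC driver rather than the API.

```python
# Minimal sketch: run an Athena query against a table registered in the Glue Data Catalog.
# Database, table, and output bucket names are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString=(
        "SELECT device_id, AVG(temperature_c) AS avg_temp "
        "FROM iot_readings GROUP BY device_id"
    ),
    QueryExecutionContext={"Database": "iot_data_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```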

Question 49
A development team created a service that uses an AWS Lambda function to store information in an Amazon RDS Database. The database credentials are stored in clear text in the Lambda function code. A Solutions Architect is advising the development team on how to better secure the service. Which of the following should the Solutions Architect recommend? (Select TWO.)

A. Store the Amazon RDS database credentials in AWS KMS using imported key material.

B. Configure Lambda to use the stored database credentials in AWS KMS and enable automatic key rotation.

C. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.

D. Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation.

E. Create encrypted database credentials in AWS Secrets Manager for the Amazon RDS database.

Solution

Correct: D, E

Explanation

AWS Secrets Manager can be used to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, and can rotate credentials for these databases natively.

"Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials" is incorrect. The credentials can be rotated by Secrets Manager, so a separate Lambda function is not required.

"Configure Lambda to use the stored database credentials in AWS KMS and enable automatic key rotation" is incorrect. KMS is used for storing encryption keys, not credentials.

"Store the Amazon RDS database credentials in AWS KMS using imported key material" is incorrect. You cannot import database credentials into KMS; you can only import key material for a customer managed key (CMK).
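A minimal sketch of how the Lambda function could retrieve the credentials from Secrets Manager instead of hard-coding them is shown below; the secret name and its JSON structure are assumptions.

```python
# Minimal sketch: Lambda retrieves RDS credentials from Secrets Manager at runtime.
# The secret name "prod/rds/mysql" and its JSON keys are placeholders.
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id: str = "prod/rds/mysql") -> dict:
    """Return the credentials stored in the secret as a dictionary."""
    response = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

def handler(event, context):
    creds = get_db_credentials()
    # Use creds["username"] and creds["password"] to open the database connection here.
    return {"status": "retrieved credentials for " + creds["username"]}
```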
