A. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster.
B. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR.
C. Use Amazon Kinesis Data Streams to collect the inbound data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.
D. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance.
Solution
Explanation
The solution must support near real-time analytics. For this, Amazon Kinesis Data Streams can be used, with Kinesis clients processing and analyzing the data using Amazon EMR. The solution must also deliver results to a data warehouse, and Amazon Redshift is ideal for this purpose.
“Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance” is incorrect. Firehose does not use Kinesis clients; it loads data directly to a destination.
“Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster” is incorrect. Amazon S3 should not be used for near real-time ingestion of streaming data at this scale; Amazon Kinesis is a better fit for this use case. Analyzing data from Amazon SQS with Kinesis does not make sense either.
“Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR” is incorrect. API Gateway should not be used for streaming data and cannot directly put data into an SQS queue.
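As a rough sketch of the ingestion side described above, a producer could write readings into a Kinesis data stream with boto3. The stream name and record fields are assumptions for illustration, not details from the question.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def send_sensor_reading(reading: dict) -> None:
    """Publish one sensor reading to the ingestion stream."""
    kinesis.put_record(
        StreamName="sensor-ingest-stream",          # assumed stream name
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=reading["device_id"],          # spreads records across shards
    )

# Example usage with a hypothetical reading
send_sensor_reading({"device_id": "sensor-001", "temperature": 21.7})
```

Kinesis clients (for example, consumers running on Amazon EMR) would then read from the same stream to perform the near real-time analysis before loading results into Amazon Redshift.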
A. Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
B. Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.
C. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.
D. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security.
E. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security.
F. Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.
Solution
Explanation
This is a simple migration to the cloud with a standard set of security, performance, and reliability requirements. To meet these requirements, an ASG should be created across multiple AZs for the web layer, placed behind an ALB to distribute incoming connections. For the database layer, an Aurora MySQL DB cluster with an Aurora Replica in another AZ provides Multi-AZ failover. This ensures the DB layer is highly available and reduces maintenance. Another way to improve performance for global users is to host static content in Amazon S3 and use the Amazon CloudFront CDN to cache the content in Edge Locations around the world. AWS WAF adds additional security.
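A minimal sketch of the database layer described above, using boto3; the cluster identifier, credentials, instance class, and Availability Zones are placeholders, not values from the question. It creates an Aurora MySQL cluster with a writer instance and an Aurora Replica in a second AZ.

```python
import boto3

rds = boto3.client("rds")

# Create the Aurora MySQL cluster (identifiers and credentials are illustrative).
rds.create_db_cluster(
    DBClusterIdentifier="web-app-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",
)

# A writer in one AZ plus an Aurora Replica in a second AZ provides
# the Multi-AZ failover described in the explanation.
for name, az in [("web-app-writer", "us-east-1a"), ("web-app-replica", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="web-app-cluster",
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",
        AvailabilityZone=az,
    )
```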
A. Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location.
B. Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.
C. Run the web and application tiers in both Regions in an active/passive configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
D. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
E. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
F. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes.
Solution
Explanation
The requirements can be achieved by using an Amazon DynamoDB table with a global table. DynamoDB is a NoSQL database, so it fits the requirements, and a global table allows both reads and writes to occur in both Regions. For the web and application tiers, Auto Scaling groups should be configured. Due to the 1-minute RTO, these must be configured in an active/passive state. The most cost-effective pricing model that still ensures resources are available when needed is a combination of zonal Reserved Instances and On-Demand Instances.
To fail over between the Regions, a Route 53 failover routing policy can be configured with a TTL of 30 seconds on the record. This means clients must resolve against Route 53 every 30 seconds to get the latest record, so in a failover scenario clients are redirected to the secondary site if the primary site is unhealthy.
“Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources” is incorrect. Spot Instances may not be available if the configured maximum price is exceeded, which could result in instances not being available when needed.
“Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location” is incorrect. Aurora is a relational database, not a NoSQL database. Also, an Aurora global database does not allow writes in multiple Regions, only reads.
“Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes” is incorrect. A weighted routing policy would need to be updated to change the weightings, and the TTL here is too high, as clients will cache the result for 30 minutes.
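As an illustration of the Route 53 failover configuration described above, the primary and secondary records with a 30-second TTL could be created with boto3 along the following lines. The hosted zone ID, record name, endpoints, and health check ID are placeholders.

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"  # placeholder hosted zone

def failover_change(identifier, role, target, health_check_id=None):
    """Build an UPSERT change for a failover record (PRIMARY or SECONDARY)."""
    record = {
        "Name": "app.example.com",
        "Type": "CNAME",
        "TTL": 30,                      # clients re-resolve every 30 seconds
        "SetIdentifier": identifier,
        "Failover": role,
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        failover_change("primary", "PRIMARY",
                        "alb-primary.us-east-1.example.com",
                        health_check_id="abcd1234-example"),
        failover_change("secondary", "SECONDARY",
                        "alb-dr.us-west-2.example.com"),
    ]},
)
```

When the primary health check fails, Route 53 starts answering with the secondary record, and the short TTL keeps client-side caching within the recovery objective.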
A. Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected.
B. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.
C. Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.
D. Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected.
Solution
Explanation
DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table. DynamoDB is integrated with AWS Lambda so that you can create triggers, which are pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. In this case the Lambda function can process the data and place it in a Kinesis Data Stream, where Kinesis Data Analytics can analyze the data and send an SNS notification if any anomalous behavior is detected.
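A minimal sketch of the Lambda trigger described above; the Kinesis stream name and the TransactionId partition key are assumptions for illustration. The handler reads the DynamoDB Streams records from the event and forwards the new item images to Kinesis Data Streams.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "transaction-analytics-stream"   # assumed Kinesis stream name

def lambda_handler(event, context):
    """Triggered by DynamoDB Streams; forwards item-level changes to Kinesis."""
    records = []
    for record in event["Records"]:
        # NewImage is present for INSERT and MODIFY events.
        new_image = record["dynamodb"].get("NewImage", {})
        records.append({
            "Data": json.dumps(new_image).encode("utf-8"),
            # 'TransactionId' is a hypothetical key attribute on the table.
            "PartitionKey": record["dynamodb"]["Keys"]["TransactionId"]["S"],
        })
    if records:
        kinesis.put_records(StreamName=STREAM_NAME, Records=records)
    return {"forwarded": len(records)}
```

Kinesis Data Analytics can then run anomaly detection over the stream and publish findings to an SNS topic.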
A. Configure AWS CloudTrail to collect API activity from Amazon EC2 and Aurora and analyze with Amazon Athena.
B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for PHP.
C. Configure the Aurora MySQL DB cluster to generate slow query logs by setting parameters in the parameter group.
D. Configure the Aurora MySQL DB cluster to generate error logs by setting parameters in the parameter group.
E. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
F. Configure an Amazon EventBridge rule that triggers Lambda upon Aurora error events and saves logs to Amazon S3 for analysis with Amazon Athena.
Solution
Explanation
AWS CloudTrail and AWS X-Ray are primarily log management and performance monitoring tools, respectively.
For Amazon Aurora you can monitor the MySQL error log, slow query log, and general log. The MySQL error log is generated by default; you can generate the slow query and general logs by setting parameters in your DB parameter group. This data will help troubleshoot any issues with the Aurora database layer.
AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. You can use the AWS SDK to develop programs that use the X-Ray API. In this case the application runs on PHP, so the X-Ray SDK for PHP should be used.
Some instances were terminated by the ASG before the logs could be collected. To prevent this issue from recurring, the CloudWatch Logs agent can be installed on the instances. The agent streams logs directly to CloudWatch Logs at regular (configurable) intervals, so the logs will not be lost next time.
“Configure AWS CloudTrail to collect API activity from Amazon EC2 and Aurora and analyze with Amazon Athena” is incorrect. API activity tells us who performed which API action and when. In this case we are attempting to troubleshoot performance issues, so auditing data is not as useful as performance metrics, tracing, and logs.
“Configure an Amazon EventBridge rule that triggers Lambda upon Aurora error events and saves logs to Amazon S3 for analysis with Amazon Athena” is incorrect. EventBridge is not used for triggering on error events occurring in Aurora.
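As an illustration of enabling the slow query log through the parameter group, the change could be applied with boto3 roughly as follows. The parameter group name and the long_query_time threshold are assumptions, not values from the question.

```python
import boto3

rds = boto3.client("rds")

# "aurora-mysql-custom-params" is a placeholder for the custom DB parameter
# group attached to the Aurora MySQL instances.
rds.modify_db_parameter_group(
    DBParameterGroupName="aurora-mysql-custom-params",
    Parameters=[
        {
            "ParameterName": "slow_query_log",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",   # dynamic parameter, no reboot required
        },
        {
            "ParameterName": "long_query_time",
            "ParameterValue": "5",        # log queries slower than 5 seconds
            "ApplyMethod": "immediate",
        },
    ],
)
```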
A. Re-architect the application. Load the data into Amazon S3. Use AWS Lambda to transform the data. Create an Amazon DynamoDB global table across two Regions to store the data and use Amazon Elasticsearch to query the data.
B. Replatform the application. Use Amazon API Gateway for data ingestion. Use AWS Lambda to transform the JSON data. Create an Amazon Aurora PostgreSQL DB cluster with an Aurora Replica in another Availability Zone. Use Amazon QuickSight to generate reports and visualize data.
C. Re-architect the application. Load the data into Amazon S3. Use AWS Glue to transform the data. Store the table schema in an AWS Glue Data Catalog. Use Amazon Athena to query the data.
D. Re-architect the application. Load the data into Amazon S3. Use Amazon Kinesis Data Analytics to transform the data. Create an external schema in an AWS Glue Data Catalog. Use Amazon Redshift Spectrum to query the data.
Solution
Explanation
There are multiple options that work here, but we must select the most cost-effective and resilient solution that also resolves the performance constraints. Using a fully serverless architecture meets all of these goals. The JSON data can be initially stored in Amazon S3, and then AWS Glue can be directed at the source bucket to run transformations on the data and store the results in another S3 bucket. The Glue Data Catalog is used to store metadata about the data in the S3 data lake. Athena can be used to query the transformed data in the S3 bucket, and the Amazon Athena ODBC driver can be used to connect to Athena from the existing analytics tool.
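A small sketch of querying the transformed data with Amazon Athena against the Glue Data Catalog; the database, table, and results bucket names are assumptions for illustration.

```python
import boto3

athena = boto3.client("athena")

# Database, table, and output bucket are placeholders.
response = athena.start_query_execution(
    QueryString=(
        "SELECT device_id, AVG(reading) AS avg_reading "
        "FROM sensor_data GROUP BY device_id"
    ),
    QueryExecutionContext={"Database": "glue_catalog_db"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution for completion
```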
A. Store the Amazon RDS database credentials in AWS KMS using imported key material.
B. Configure Lambda to use the stored database credentials in AWS KMS and enable automatic key rotation.
C. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.
D. Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation.
E. Create encrypted database credentials in AWS Secrets Manager for the Amazon RDS database.
Solution
Explanation
AWS Secrets Manager can be used to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, and can rotate credentials for these databases natively.
“Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials” is incorrect. The credentials can be rotated in Secrets Manager; a new Lambda function is not required.
“Configure Lambda to use the stored database credentials in AWS KMS and enable automatic key rotation” is incorrect. KMS is used for storing encryption keys, not credentials.
“Store the Amazon RDS database credentials in AWS KMS using imported key material” is incorrect. You cannot import database credentials into KMS; you can only import key material for a CMK.
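As a rough sketch of how the Lambda function could retrieve the rotated credentials at runtime using boto3 and Secrets Manager; the secret name and JSON keys are assumptions, not details from the question.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")
SECRET_ID = "prod/app/rds-credentials"   # assumed secret name

def get_db_credentials():
    """Fetch the current RDS credentials; rotation is handled by Secrets Manager."""
    response = secrets.get_secret_value(SecretId=SECRET_ID)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]

def lambda_handler(event, context):
    username, password = get_db_credentials()
    # ... open the database connection with the retrieved credentials ...
    return {"statusCode": 200}
```

Because the function always reads the latest secret version, credentials rotated automatically by Secrets Manager are picked up without redeploying the Lambda function.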