Questions and Answers

Question YKWBx8WhsNM21bSTJXWq

Question

A company uses Amazon S3 to store data and Amazon QuickSight to create visualizations.

The company has an S3 bucket in an AWS account named Hub-Account. The S3 bucket is encrypted by an AWS Key Management Service (AWS KMS) key. The company’s QuickSight instance is in a separate account named BI-Account.

The company updates the S3 bucket policy to grant access to the QuickSight service role. The company wants to enable cross-account access to allow QuickSight to interact with the S3 bucket.

Which combination of steps will meet this requirement? (Choose two.)

Choices

  • A: Use the existing AWS KMS key to encrypt connections from QuickSight to the S3 bucket.
  • B: Add the S3 bucket as a resource that the QuickSight service role can access.
  • C: Use AWS Resource Access Manager (AWS RAM) to share the S3 bucket with the BI-Account account.
  • D: Add an IAM policy to the QuickSight service role to give QuickSight access to the KMS key that encrypts the S3 bucket.
  • E: Add the KMS key as a resource that the QuickSight service role can access.
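For illustration, the cross-account setup in this scenario comes down to the QuickSight service role being able to reach both the bucket and the key that encrypts it. The sketch below builds such an inline policy; the ARNs are placeholders, not values from the question.

```python
import json

# Hypothetical ARNs for illustration (not given in the question text).
BUCKET_ARN = "arn:aws:s3:::hub-account-data-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

def quicksight_role_policy(bucket_arn: str, kms_key_arn: str) -> dict:
    """Inline IAM policy for the QuickSight service role in BI-Account.

    Grants read access to the S3 bucket and decrypt access to the KMS
    key that encrypts it -- the two resources the role must reach.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadHubAccountBucket",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [bucket_arn, f"{bucket_arn}/*"],
            },
            {
                "Sid": "DecryptWithHubAccountKey",
                "Effect": "Allow",
                "Action": ["kms:Decrypt", "kms:DescribeKey"],
                "Resource": [kms_key_arn],
            },
        ],
    }

policy = quicksight_role_policy(BUCKET_ARN, KMS_KEY_ARN)
print(json.dumps(policy, indent=2))
```

The bucket policy update mentioned in the question handles the resource side; a policy like this handles the identity side, and the KMS key policy in Hub-Account must also allow the role to use the key.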

Question w2KDm0X9XXzewsZ4Sfg1

Question

A car sales company maintains data about cars that are listed for sale in an area. The company receives data about new car listings from vendors who upload the data daily as compressed files into Amazon S3. The compressed files are up to 5 KB in size. The company wants to see the most up-to-date listings as soon as the data is uploaded to Amazon S3.

A data engineer must automate and orchestrate the data processing workflow of the listings to feed a dashboard. The data engineer must also provide the ability to perform one-time queries and analytical reporting. The query solution must be scalable.

Which solution will meet these requirements MOST cost-effectively?

Choices

  • A: Use an Amazon EMR cluster to process incoming data. Use AWS Step Functions to orchestrate workflows. Use Apache Hive for one-time queries and analytical reporting. Use Amazon OpenSearch Service to bulk ingest the data into compute optimized instances. Use OpenSearch Dashboards in OpenSearch Service for the dashboard.
  • B: Use a provisioned Amazon EMR cluster to process incoming data. Use AWS Step Functions to orchestrate workflows. Use Amazon Athena for one-time queries and analytical reporting. Use Amazon QuickSight for the dashboard.
  • C: Use AWS Glue to process incoming data. Use AWS Step Functions to orchestrate workflows. Use Amazon Redshift Spectrum for one-time queries and analytical reporting. Use OpenSearch Dashboards in Amazon OpenSearch Service for the dashboard.
  • D: Use AWS Glue to process incoming data. Use AWS Lambda and S3 Event Notifications to orchestrate workflows. Use Amazon Athena for one-time queries and analytical reporting. Use Amazon QuickSight for the dashboard.
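As a hedged sketch of the event-driven pattern in choice D: an S3 Event Notification invokes a Lambda function, which starts an AWS Glue job for each newly uploaded listings file. The Glue job name is a hypothetical placeholder, and the Glue client is passed in as a parameter so the handler can be exercised without AWS credentials (in real use it would be `boto3.client("glue")`).

```python
GLUE_JOB_NAME = "process-car-listings"  # hypothetical job name

def handle_s3_event(event: dict, glue_client) -> list:
    """Start one Glue job run per uploaded S3 object in the event.

    `event` follows the S3 Event Notification shape that Lambda
    receives; each record names the bucket and object key.
    """
    started = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # start_job_run launches the Glue job, passing the object
        # location through as job arguments.
        resp = glue_client.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )
        started.append({"bucket": bucket, "key": key, "run": resp["JobRunId"]})
    return started
```

Because the files are tiny (up to 5 KB) and arrive continuously, this per-object trigger keeps the dashboard current without a provisioned cluster sitting idle.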

Question TlCaQjzRe83al2pwlzQo

Question

A company has AWS resources in multiple AWS Regions. The company has an Amazon EFS file system in each Region where the company operates. The company’s data science team operates within only a single Region. The data that the data science team works with must remain within the team’s Region.

A data engineer needs to create a single dataset by processing files that are in each of the company’s Regional EFS file systems. The data engineer wants to use an AWS Step Functions state machine to orchestrate AWS Lambda functions to process the data.

Which solution will meet these requirements with the LEAST effort?

Choices

  • A: Peer the VPCs that host the EFS file systems in each Region with the VPC that is in the data science team’s Region. Enable EFS file locking. Configure the Lambda functions in the data science team’s Region to mount each of the Region-specific file systems. Use the Lambda functions to process the data.
  • B: Configure each of the Regional EFS file systems to replicate data to the data science team’s Region. In the data science team’s Region, configure the Lambda functions to mount the replica file systems. Use the Lambda functions to process the data.
  • C: Deploy the Lambda functions to each Region. Mount the Regional EFS file systems to the Lambda functions. Use the Lambda functions to process the data. Store the output in an Amazon S3 bucket in the data science team’s Region.
  • D: Use AWS DataSync to transfer files from each of the Regional EFS file systems to the file system that is in the data science team’s Region. Configure the Lambda functions in the data science team’s Region to mount the file system that is in the same Region. Use the Lambda functions to process the data.
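A minimal Amazon States Language sketch of the orchestration this question describes: once the files are in the data science team's Region, a Map state fans the processing Lambda out over batches of files. The function ARN and the `$.batches` input path are assumptions for illustration, not part of the question.

```python
import json

# Hypothetical Lambda ARN; the real function would mount the
# in-Region EFS file system and process one batch of files.
PROCESS_FN_ARN = (
    "arn:aws:lambda:eu-west-1:111122223333:function:process-efs-files"
)

def build_state_machine(process_fn_arn: str) -> dict:
    """ASL definition: fan out one Lambda invocation per file batch."""
    return {
        "Comment": "Process files copied into the local EFS file system",
        "StartAt": "ProcessBatches",
        "States": {
            "ProcessBatches": {
                "Type": "Map",
                "ItemsPath": "$.batches",  # assumed input shape
                "Iterator": {
                    "StartAt": "ProcessOneBatch",
                    "States": {
                        "ProcessOneBatch": {
                            "Type": "Task",
                            "Resource": process_fn_arn,
                            "End": True,
                        }
                    },
                },
                "End": True,
            }
        },
    }

definition = build_state_machine(PROCESS_FN_ARN)
print(json.dumps(definition, indent=2))
```

The data-residency constraint is what shapes the choices here: whatever moves or exposes the files, the Lambda functions that read the data must run only in the team's Region.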

Question wGXMuDpdC0zdpLuZ1irI

Question

A company hosts its applications on Amazon EC2 instances. The company must use SSL/TLS connections that encrypt data in transit to communicate securely with AWS infrastructure that is managed by a customer.

A data engineer needs to implement a solution to simplify the generation, distribution, and rotation of digital certificates. The solution must automatically renew and deploy SSL/TLS certificates.

Which solution will meet these requirements with the LEAST operational overhead?

Choices

  • A: Store self-managed certificates on the EC2 instances.
  • B: Use AWS Certificate Manager (ACM).
  • C: Implement custom automation scripts in AWS Secrets Manager.
  • D: Use Amazon Elastic Container Service (Amazon ECS) Service Connect.
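For context on choice B: AWS Certificate Manager issues certificates and, with DNS validation, renews them automatically, which is what removes the manual rotation work. A minimal sketch of the parameters for the real `acm.request_certificate` API call follows; the domain name is illustrative.

```python
def acm_request_params(domain: str) -> dict:
    """Parameters for boto3's acm.request_certificate.

    DNS validation lets ACM validate and renew the certificate
    automatically, with no manual rotation steps.
    """
    return {
        "DomainName": domain,
        "ValidationMethod": "DNS",
        # Hypothetical extra name covering the www subdomain.
        "SubjectAlternativeNames": [f"www.{domain}"],
    }

params = acm_request_params("example.com")
print(params)
```

In use, these parameters would be passed as `boto3.client("acm").request_certificate(**params)`, after which ACM handles issuance and renewal for services integrated with it.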

Question rA03G01qEHG3yJgVhShr

Question

A company uses an Amazon Redshift cluster that runs on RA3 nodes. The company wants to scale read and write capacity to meet demand. A data engineer needs to identify a solution that will turn on concurrency scaling.

Which solution will meet this requirement?

Choices

  • A: Turn on concurrency scaling in workload management (WLM) for Redshift Serverless workgroups.
  • B: Turn on concurrency scaling at the workload management (WLM) queue level in the Redshift cluster.
  • C: Turn on concurrency scaling in the settings during the creation of any new Redshift cluster.
  • D: Turn on concurrency scaling for the daily usage quota for the Redshift cluster.
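On a provisioned Redshift cluster, concurrency scaling is switched on per workload management (WLM) queue through the `wlm_json_configuration` cluster parameter. A sketch of such a configuration value follows; the queue and user-group names are illustrative.

```python
import json

def wlm_configuration() -> list:
    """Sketch of a wlm_json_configuration value with concurrency
    scaling enabled on one queue and off on another. Queue and
    user-group names are hypothetical.
    """
    return [
        {
            "name": "dashboard_queue",      # illustrative queue
            "user_group": ["bi_users"],
            "query_concurrency": 5,
            "concurrency_scaling": "auto",  # turns on concurrency scaling
        },
        {
            "name": "default_queue",
            "query_concurrency": 5,
            "concurrency_scaling": "off",
        },
    ]

queues = wlm_configuration()
print(json.dumps(queues, indent=2))
```

Setting `"concurrency_scaling": "auto"` on a queue is the per-queue switch: eligible queries routed to that queue can run on concurrency-scaling clusters when the main cluster is busy.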