Questions and Answers
Question YKWBx8WhsNM21bSTJXWq
Question
A company uses Amazon S3 to store data and Amazon QuickSight to create visualizations.
The company has an S3 bucket in an AWS account named Hub-Account. The S3 bucket is encrypted by an AWS Key Management Service (AWS KMS) key. The company’s QuickSight instance is in a separate account named BI-Account.
The company updates the S3 bucket policy to grant access to the QuickSight service role. The company wants to enable cross-account access to allow QuickSight to interact with the S3 bucket.
Which combination of steps will meet this requirement? (Choose two.)
Choices
- A: Use the existing AWS KMS key to encrypt connections from QuickSight to the S3 bucket.
- B: Add the S3 bucket as a resource that the QuickSight service role can access.
- C: Use AWS Resource Access Manager (AWS RAM) to share the S3 bucket with the BI-Account account.
- D: Add an IAM policy to the QuickSight service role to give QuickSight access to the KMS key that encrypts the S3 bucket.
- E: Add the KMS key as a resource that the QuickSight service role can access.
answer?
Answer: E | Answer_ET: E | Community answer: E (60%), B (30%), other (10%)
Discussion
Comment 1362759 by Ell89
- Upvotes: 1
Selected Answer: E B & E. The issue isn't with sharing the bucket, since the bucket policy already grants access to the service role; it's an encryption issue.
Comment 1355063 by fnuuu
- Upvotes: 1
Selected Answer: B BD: B - to ensure QuickSight has permissions to access the S3 bucket; D - to ensure QuickSight has the KMS permission to decrypt data in S3.
Comment 1344683 by YUICH
- Upvotes: 1
Selected Answer: B BD Conclusion: To enable cross-account access for both (1) the Amazon S3 bucket and (2) the KMS key used to encrypt that bucket, the QuickSight service role must be granted the appropriate permissions. Among the provided options, the following two steps are essential:
B. Add the S3 bucket as a resource the QuickSight service role can access (→ Allows cross-account access to the S3 bucket)
D. Add an IAM policy to the QuickSight service role that grants access to the KMS key (→ Allows decryption of data encrypted by the KMS key)
Comment 1341566 by stevejake
- Upvotes: 1
Selected Answer: D S3 bucket policy is already updated from the question. Hence KMS key policy and IAM policy need to be altered to allow QuickSight service account to access KMS key.
Comment 1337458 by YUICH
- Upvotes: 1
Selected Answer: B Given that the question states “Update the S3 bucket policy to allow access for the QuickSight service role” and, from the perspective of “enabling cross-account access so that QuickSight can interact with the S3 bucket,” is asking what additional steps are needed, we can conclude that:
(B) “Add the S3 bucket as a resource accessible by the QuickSight service role” (E) “Add the KMS key as a resource accessible by the QuickSight service role”
together most succinctly represent the final actions required.
Comment 1316915 by devan007
- Upvotes: 4
Selected Answer: E D & E S3 bucket policy is already updated from the question. Hence KMS key policy and IAM policy need to be altered to allow QuickSight service account to access KMS key.
Comment 1313970 by michele_scar
- Upvotes: 1
Selected Answer: E B for bucket access E for KMS key policy
Comment 1309326 by Eleftheriia
- Upvotes: 2
It is BD
Comment 1307130 by kupo777
- Upvotes: 3
Correct Answer: DE
Comment 1305441 by truongnguyen86
- Upvotes: 3
Answer BE: Step to enable cross-account access:
- update S3 bucket policy in Hub-account (B)
- Update the KMS key Policy in Hub-Account(E)
- Config QuickSight to access S3
Comment 1305410 by pikuantne
- Upvotes: 3
Answer: BD
Comment 1304879 by 2022MMTT
- Upvotes: 4
Answer : DE
Comment 1303486 by Parandhaman_Margan
- Upvotes: 1
Answer:BE
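To make the two grants the discussion keeps circling (bucket access for the service role, plus decrypt rights on the KMS key) concrete, here is a minimal sketch of the policy documents involved. All account IDs, region, key ID, and the role name are hypothetical placeholders, not values from the question.

```python
import json

# Hypothetical identifiers -- not taken from the question text.
HUB_ACCOUNT_ID = "111111111111"
BI_ACCOUNT_ID = "222222222222"
KMS_KEY_ARN = f"arn:aws:kms:us-east-1:{HUB_ACCOUNT_ID}:key/example-key-id"
QS_ROLE_ARN = f"arn:aws:iam::{BI_ACCOUNT_ID}:role/service-role/aws-quicksight-service-role-v0"

# Statement added to the KMS key policy in Hub-Account so the QuickSight
# service role can decrypt objects (the grant options D/E argue about).
kms_key_policy_statement = {
    "Sid": "AllowQuickSightDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": QS_ROLE_ARN},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",  # inside a key policy, "*" refers to this key only
}

# Matching identity-side IAM policy attached to the QuickSight service role
# in BI-Account (cross-account KMS access needs both sides to allow it).
qs_role_iam_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": KMS_KEY_ARN,
        }
    ],
}

print(json.dumps(qs_role_iam_policy, indent=2))
```

The key point the top-voted answers make: the S3 bucket policy is already in place, so the remaining work is on the KMS side, and cross-account KMS access requires an allow in both the key policy and the caller's IAM policy.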
Question w2KDm0X9XXzewsZ4Sfg1
Question
A car sales company maintains data about cars that are listed for sale in an area. The company receives data about new car listings from vendors who upload the data daily as compressed files into Amazon S3. The compressed files are up to 5 KB in size. The company wants to see the most up-to-date listings as soon as the data is uploaded to Amazon S3.
A data engineer must automate and orchestrate the data processing workflow of the listings to feed a dashboard. The data engineer must also provide the ability to perform one-time queries and analytical reporting. The query solution must be scalable.
Which solution will meet these requirements MOST cost-effectively?
Choices
- A: Use an Amazon EMR cluster to process incoming data. Use AWS Step Functions to orchestrate workflows. Use Apache Hive for one-time queries and analytical reporting. Use Amazon OpenSearch Service to bulk ingest the data into compute optimized instances. Use OpenSearch Dashboards in OpenSearch Service for the dashboard.
- B: Use a provisioned Amazon EMR cluster to process incoming data. Use AWS Step Functions to orchestrate workflows. Use Amazon Athena for one-time queries and analytical reporting. Use Amazon QuickSight for the dashboard.
- C: Use AWS Glue to process incoming data. Use AWS Step Functions to orchestrate workflows. Use Amazon Redshift Spectrum for one-time queries and analytical reporting. Use OpenSearch Dashboards in Amazon OpenSearch Service for the dashboard.
- D: Use AWS Glue to process incoming data. Use AWS Lambda and S3 Event Notifications to orchestrate workflows. Use Amazon Athena for one-time queries and analytical reporting. Use Amazon QuickSight for the dashboard.
answer?
Answer: D | Answer_ET: D | Community answer: D (100%)
Discussion
Comment 1332024 by axantroff
- Upvotes: 3
Selected Answer: D I don’t particularly like the formulation where AWS Lambda and S3 Event Notifications are described as orchestrating a workflow. However, Athena is a much better fit here than Amazon Redshift, so option D is a reasonable choice.
Comment 1330786 by HagarTheHorrible
- Upvotes: 2
Selected Answer: D It seems like C could be the answer, but setting up a Redshift cluster takes much longer than Athena to achieve the same result. So D.
Comment 1317334 by emupsx1
- Upvotes: 1
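The event-driven half of option D can be sketched as a small Lambda handler: an S3 event notification invokes the function, which extracts the newly uploaded object's location and would then kick off the Glue processing job. The function name, bucket, key, and job name below are illustrative, not from the question.

```python
def handler(event, context=None):
    """Return (bucket, key) pairs for each newly uploaded listing file."""
    uploads = []
    for record in event.get("Records", []):
        # Standard S3 event notification record shape.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        uploads.append((bucket, key))
        # In the real workflow, a call such as
        # boto3.client("glue").start_job_run(JobName="process-listings", ...)
        # would start processing here, feeding the QuickSight dashboard.
    return uploads

# Trimmed sample of an S3 "ObjectCreated" notification payload.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "car-listings"},
                "object": {"key": "vendor1/2024-05-01.gz"}}}
    ]
}
print(handler(sample_event))
```

This is why D is "orchestration" in a loose sense: the S3 notification triggers each run as soon as a vendor file lands, which satisfies the "most up-to-date listings" requirement without a scheduler.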
Question TlCaQjzRe83al2pwlzQo
Question
A company has AWS resources in multiple AWS Regions. The company has an Amazon EFS file system in each Region where the company operates. The company’s data science team operates within only a single Region. The data that the data science team works with must remain within the team’s Region.
A data engineer needs to create a single dataset by processing files that are in each of the company’s Regional EFS file systems. The data engineer wants to use an AWS Step Functions state machine to orchestrate AWS Lambda functions to process the data.
Which solution will meet these requirements with the LEAST effort?
Choices
- A: Peer the VPCs that host the EFS file systems in each Region with the VPC that is in the data science team’s Region. Enable EFS file locking. Configure the Lambda functions in the data science team’s Region to mount each of the Region specific file systems. Use the Lambda functions to process the data.
- B: Configure each of the Regional EFS file systems to replicate data to the data science team’s Region. In the data science team’s Region, configure the Lambda functions to mount the replica file systems. Use the Lambda functions to process the data.
- C: Deploy the Lambda functions to each Region. Mount the Regional EFS file systems to the Lambda functions. Use the Lambda functions to process the data. Store the output in an Amazon S3 bucket in the data science team’s Region.
- D: Use AWS DataSync to transfer files from each of the Regional EFS files systems to the file system that is in the data science team’s Region. Configure the Lambda functions in the data science team’s Region to mount the file system that is in the same Region. Use the Lambda functions to process the data.
answer?
Answer: D | Answer_ET: D | Community answer: D (67%), A (17%), C (17%)
Discussion
Comment 1330787 by HagarTheHorrible
- Upvotes: 1
Selected Answer: C DataSync is for large-scale migration; Lambdas would do just fine here… C
Comment 1327352 by Vidhi212
- Upvotes: 3
Selected Answer: D Using AWS DataSync in Option D achieves the desired data consolidation efficiently while keeping the workflow simple and cost-effective. It aligns with the data locality requirement and reduces engineering effort.
Comment 1327349 by 7a1d491
- Upvotes: 1
Selected Answer: D Peering the VPCs introduces complexity; D is a much better solution.
Comment 1320895 by emupsx1
- Upvotes: 1
Selected Answer: A maybe A?
Question wGXMuDpdC0zdpLuZ1irI
Question
A company hosts its applications on Amazon EC2 instances. The company must use SSL/TLS connections that encrypt data in transit to communicate securely with AWS infrastructure that is managed by a customer.
A data engineer needs to implement a solution to simplify the generation, distribution, and rotation of digital certificates. The solution must automatically renew and deploy SSL/TLS certificates.
Which solution will meet these requirements with the LEAST operational overhead?
Choices
- A: Store self-managed certificates on the EC2 instances.
- B: Use AWS Certificate Manager (ACM).
- C: Implement custom automation scripts in AWS Secrets Manager.
- D: Use Amazon Elastic Container Service (Amazon ECS) Service Connect.
answer?
Answer: B | Answer_ET: B | Community answer: B (100%)
Discussion
Comment 1341193 by MerryLew
- Upvotes: 1
Selected Answer: B ACM takes care of creating, storing, and renewing SSL/TLS certificates and keys
Comment 1317345 by emupsx1
- Upvotes: 2
Selected Answer: B https://aws.amazon.com/tw/certificate-manager/
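As a rough illustration of why B has the least operational overhead: requesting an ACM certificate with DNS validation is a single call, after which ACM handles renewal automatically. The domain names below are placeholders.

```python
# Parameters one might pass to boto3.client("acm").request_certificate(**params).
# Domain names are illustrative, not from the question.
request_certificate_params = {
    "DomainName": "app.example.com",
    # DNS validation lets ACM renew the certificate automatically as long as
    # the validation CNAME record stays in place.
    "ValidationMethod": "DNS",
    "SubjectAlternativeNames": ["www.app.example.com"],
}
print(request_certificate_params["ValidationMethod"])
```

Contrast this with option A or C, where generation, distribution, and rotation would all be custom work the team has to build and operate.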
Question rA03G01qEHG3yJgVhShr
Question
A company uses an Amazon Redshift cluster that runs on RA3 nodes. The company wants to scale read and write capacity to meet demand. A data engineer needs to identify a solution that will turn on concurrency scaling.
Which solution will meet this requirement?
Choices
- A: Turn on concurrency scaling in workload management (WLM) for Redshift Serverless workgroups.
- B: Turn on concurrency scaling at the workload management (WLM) queue level in the Redshift cluster.
- C: Turn on concurrency scaling in the settings during the creation of any new Redshift cluster.
- D: Turn on concurrency scaling for the daily usage quota for the Redshift cluster.
answer?
Answer: B | Answer_ET: B | Community answer: B (100%)
Discussion
Comment 1137938 by TonyStark0122
- Upvotes: 9
B. Turn on concurrency scaling at the workload management (WLM) queue level in the Redshift cluster.
Explanation: Concurrency scaling in Amazon Redshift allows the cluster to automatically add and remove compute resources in response to workload demands. Enabling concurrency scaling at the workload management (WLM) queue level allows you to specify which queues can benefit from concurrency scaling based on the query workload.
Comment 1356174 by saransh_001
- Upvotes: 4
Selected Answer: B Concurrency Scaling in Amazon Redshift is a feature that automatically adds temporary clusters to handle spikes in query traffic, providing additional read and write capacity. This feature is enabled through Workload Management (WLM) at the queue level in Redshift. Each queue can be configured to use concurrency scaling for queries that exceed the capacity of the main cluster. Why option A is incorrect: it applies to Redshift Serverless workgroups rather than clusters on RA3 nodes. Serverless handles scaling differently and doesn’t require manual concurrency scaling settings as RA3 clusters do.
Comment 1308348 by lsj900605
- Upvotes: 1
B. “You can manage which queries are sent to the concurrency-scaling cluster by configuring WLM queues. You’re charged for concurrency-scaling clusters only for the time they’re actively running queries.” https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html
Comment 1270610 by San_Juan
- Upvotes: 2
Selected answer: B B. According to the documentation, concurrency scaling is set up in the workload management (WLM) queue configuration (see comment below).
A. Discarded, because Redshift Serverless scales automatically (it doesn’t need concurrency scaling enabled).
Comment 1207141 by d8945a1
- Upvotes: 1
Selected Answer: B B. Turn on concurrency scaling at the workload management (WLM) queue level in the Redshift cluster.
Comment 1202962 by khchan123
- Upvotes: 1
Selected Answer: B Answer is B. B. Turn on concurrency scaling at the workload management (WLM) queue level in the Redshift cluster.
Comment 1127219 by milofficial
- Upvotes: 4
Selected Answer: B https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling-queues.html
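The queue-level setting the answers describe lives in the cluster's `wlm_json_configuration` parameter. A minimal sketch of a WLM configuration with concurrency scaling turned on for one queue follows; the query group name and concurrency values are illustrative.

```python
import json

# Sketch of a "wlm_json_configuration" parameter value: concurrency scaling
# is enabled per queue via the "concurrency_scaling" property.
wlm_configuration = [
    {
        "query_group": ["dashboard"],      # illustrative queue routing rule
        "query_concurrency": 5,
        "concurrency_scaling": "auto",     # route eligible queued queries to scaling clusters
    },
    {
        "query_concurrency": 5,
        "concurrency_scaling": "off",      # default queue, no scaling
    },
]
print(json.dumps(wlm_configuration))
```

This matches option B: the setting is applied per WLM queue on an existing cluster, not at cluster creation (C) or via usage quotas (D).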