Questions and Answers
Question l31aqioTkRWQAivtyPGS
Question
A company wants to migrate data from an Amazon RDS for PostgreSQL DB instance in the eu-east-1 Region of an AWS account named Account_A. The company will migrate the data to an Amazon Redshift cluster in the eu-west-1 Region of an AWS account named Account_B.
Which solution will give AWS Database Migration Service (AWS DMS) the ability to replicate data between two data stores?
Choices
- A: Set up an AWS DMS replication instance in Account_B in eu-west-1.
- B: Set up an AWS DMS replication instance in Account_B in eu-east-1.
- C: Set up an AWS DMS replication instance in a new AWS account in eu-west-1.
- D: Set up an AWS DMS replication instance in Account_A in eu-east-1.
answer?
Answer: A Answer_ET: A Community answer A (77%), B (15%), Other (8%) Discussion
Comment 1250100 by andrologin
- Upvotes: 5
Selected Answer: A Redshift needs to be in the same Region as the replication instance; see the docs: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html#CHAP_Target.Redshift.Prerequisites
Comment 1270420 by samadal
- Upvotes: 1
Selected Answer: D When you use AWS DMS to migrate data between different AWS Regions or accounts, you must remember the following:
The replication instance must be created in the same Region as the source database. The target endpoint must be created in the Region where the target data store is located. You must set up the required IAM roles and permissions to enable DMS to access the source and target resources.
Comment 1243393 by lool
- Upvotes: 4
Selected Answer: A Redshift has to be in the same Region as the DMS replication instance.
Comment 1241057 by bakarys
- Upvotes: 1
Selected Answer: A To enable AWS Database Migration Service (AWS DMS) to replicate data between two data stores in different AWS Regions, you should choose option A. Here’s why:
Option A: Set up an AWS DMS replication instance in Account_B in eu-west-1. This approach allows you to configure replication between the Amazon RDS for PostgreSQL DB instance in eu-east-1 and the Amazon Redshift cluster in eu-west-1. By using AWS DMS, you can efficiently migrate data across Regions while minimizing downtime and ensuring data consistency.
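For illustration only, here is a minimal boto3 sketch of option A, with hypothetical identifiers, endpoint addresses, and credentials: the replication instance and the target endpoint live in Account_B in eu-west-1 (the same Region as the Redshift cluster), while the source endpoint simply points at the RDS for PostgreSQL instance in Account_A.

```python
import boto3

# Sketch: run in Account_B, in eu-west-1 (same Region as the Redshift target).
dms = boto3.client("dms", region_name="eu-west-1")

# The replication instance lives next to the Redshift cluster.
instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="accountb-euwest1-repl",
    ReplicationInstanceClass="dms.t3.medium",
)

# Source endpoint reaches across Regions/accounts to the RDS for PostgreSQL instance
# (via its endpoint address, e.g. over VPC peering or a public endpoint).
source = dms.create_endpoint(
    EndpointIdentifier="accounta-postgres-source",
    EndpointType="source",
    EngineName="postgres",
    ServerName="accounta-db.xxxxxxxx.eu-east-1.rds.amazonaws.com",  # hypothetical
    Port=5432,
    DatabaseName="appdb",
    Username="dms_user",
    Password="***",
)

# Target endpoint is the Redshift cluster in the same account and Region
# as the replication instance.
target = dms.create_endpoint(
    EndpointIdentifier="accountb-redshift-target",
    EndpointType="target",
    EngineName="redshift",
    ServerName="accountb-cluster.xxxxxxxx.eu-west-1.redshift.amazonaws.com",  # hypothetical
    Port=5439,
    DatabaseName="dw",
    Username="dms_user",
    Password="***",
)

# One task ties source, target, and replication instance together.
dms.create_replication_task(
    ReplicationTaskIdentifier="postgres-to-redshift",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstance"]["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
                  '"object-locator":{"schema-name":"%","table-name":"%"},"rule-action":"include"}]}',
)
```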
Comment 1240868 by sdas1
- Upvotes: 1
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html
Comment 1240867 by sdas1
- Upvotes: 1
The correct solution to replicate data between the Amazon RDS for PostgreSQL DB instance in Account_A (eu-east-1) and the Amazon Redshift cluster in Account_B (eu-west-1) using AWS Database Migration Service (AWS DMS) is:
A. Set up an AWS DMS replication instance in Account_B in eu-west-1.
Comment 1240842 by HunkyBunky
- Upvotes: 2
Selected Answer: B B - because AWS DMS must be in the same Region as the Amazon Redshift cluster
Comment 1239095 by Bmaster
- Upvotes: 1
My Choice is D
Question 7uOuU4R8Vo5senzokcIJ
Question
A company uses Amazon S3 as a data lake. The company sets up a data warehouse by using a multi-node Amazon Redshift cluster. The company organizes the data files in the data lake based on the data source of each data file.
The company loads all the data files into one table in the Redshift cluster by using a separate COPY command for each data file location. This approach takes a long time to load all the data files into the table. The company must increase the speed of the data ingestion. The company does not want to increase the cost of the process.
Which solution will meet these requirements?
Choices
- A: Use a provisioned Amazon EMR cluster to copy all the data files into one folder. Use a COPY command to load the data into Amazon Redshift.
- B: Load all the data files in parallel into Amazon Aurora. Run an AWS Glue job to load the data into Amazon Redshift.
- C: Use an AWS Glue job to copy all the data files into one folder. Use a COPY command to load the data into Amazon Redshift.
- D: Create a manifest file that contains the data file locations. Use a COPY command to load the data into Amazon Redshift.
answer?
Answer: D Answer_ET: D Community answer D (100%) Discussion
Comment 1250810 by andrologin
- Upvotes: 1
Selected Answer: D D is the right answer based on the docs on this page: https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-single-copy-command.html
Comment 1240844 by HunkyBunky
- Upvotes: 4
Selected Answer: D Only D makes sense
https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-single-copy-command.html
Comment 1239108 by Bmaster
- Upvotes: 1
D is good
https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-single-copy-command.html https://docs.aws.amazon.com/redshift/latest/dg/loading-data-files-using-manifest.html
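As a hedged sketch of option D (bucket, cluster, table, and IAM role names are invented), the idea is to upload one manifest that lists every data file location and then run a single COPY with the MANIFEST option, letting Redshift load all the files in parallel:

```python
import json
import boto3

# One manifest lists every data file location, so a single COPY loads them all in parallel.
manifest = {
    "entries": [
        {"url": "s3://example-data-lake/source_a/part-0001.csv", "mandatory": True},
        {"url": "s3://example-data-lake/source_b/part-0001.csv", "mandatory": True},
    ]
}

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-data-lake",
    Key="manifests/all_sources.manifest",
    Body=json.dumps(manifest),
)

# Issue a single COPY that reads the manifest instead of one COPY per location.
redshift_data = boto3.client("redshift-data")
redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=(
        "COPY sales FROM 's3://example-data-lake/manifests/all_sources.manifest' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "MANIFEST CSV;"
    ),
)
```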
Question 0sdQYGEysOQtxIPpwl2p
Question
A company plans to use Amazon Kinesis Data Firehose to store data in Amazon S3. The source data consists of 2 MB .csv files. The company must convert the .csv files to JSON format. The company must store the files in Apache Parquet format.
Which solution will meet these requirements with the LEAST development effort?
Choices
- A: Use Kinesis Data Firehose to convert the .csv files to JSON. Use an AWS Lambda function to store the files in Parquet format.
- B: Use Kinesis Data Firehose to convert the .csv files to JSON and to store the files in Parquet format.
- C: Use Kinesis Data Firehose to invoke an AWS Lambda function that transforms the .csv files to JSON and stores the files in Parquet format.
- D: Use Kinesis Data Firehose to invoke an AWS Lambda function that transforms the .csv files to JSON. Use Kinesis Data Firehose to store the files in Parquet format.
answer?
Answer: D Answer_ET: D Community answer D (54%), B (40%), Other (6%) Discussion
Comment 1249430 by qwertyuio
- Upvotes: 8
Selected Answer: D https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html
Comment 1252331 by mzansikiller
- Upvotes: 6
Answer D https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html
Amazon Data Firehose can convert the format of your input data from JSON to Apache Parquet or Apache ORC before storing the data in Amazon S3. Parquet and ORC are columnar data formats that save space and enable faster queries compared to row-oriented formats like JSON. If you want to convert an input format other than JSON, such as comma-separated values (CSV) or structured text, you can use AWS Lambda to transform it to JSON first. For more information, see Transform data in Amazon Data Firehose.
Comment 1410487 by JimOGrady
- Upvotes: 1
Selected Answer: B simplest and most efficient - Firehose to convert to JSON and store in Parquet - no need for Lambda function
Comment 1399220 by saurwt
- Upvotes: 1
Selected Answer: D Amazon Kinesis Data Firehose does not natively support CSV to JSON conversion. However, it does support JSON to Parquet conversion.
Given that, the best approach with the least development effort is:
D. Use Kinesis Data Firehose to invoke an AWS Lambda function that transforms the .csv files to JSON. Use Kinesis Data Firehose to store the files in Parquet format.
Comment 1387416 by Ramdi1
- Upvotes: 1
Selected Answer: D Kinesis Data Firehose natively supports data format conversion to Parquet, reducing development effort. AWS Lambda is needed only for the CSV to JSON conversion, as Firehose does not support direct CSV to JSON transformation. Firehose then automatically converts JSON to Parquet and stores it in S3, minimizing custom code.
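To make that division of labor concrete, here is a minimal sketch of the transformation Lambda that Firehose would invoke; the column names are assumptions, and the handler follows the Firehose data-transformation record contract (recordId, result, base64-encoded data):

```python
import base64
import csv
import io
import json

# Hypothetical CSV schema for the incoming 2 MB files.
COLUMNS = ["order_id", "customer_id", "amount"]

def lambda_handler(event, context):
    """Convert each base64-encoded CSV record to newline-delimited JSON so that
    Firehose's built-in format conversion can turn the JSON into Parquet."""
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        rows = csv.DictReader(io.StringIO(payload), fieldnames=COLUMNS)
        # One JSON object per CSV row.
        json_payload = "".join(json.dumps(row) + "\n" for row in rows)
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json_payload.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```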
Comment 1346595 by Salam9
- Upvotes: 1
Selected Answer: B https://aws.amazon.com/ar/about-aws/whats-new/2016/12/amazon-kinesis-firehose-can-now-prepare-and-transform-streaming-data-before-loading-it-to-data-stores/
Comment 1338135 by kailu
- Upvotes: 1
Selected Answer: C Lambda handles both the CSV-to-JSON and JSON-to-Parquet transformations before Firehose stores the data in Amazon S3
Comment 1331276 by zoneout
- Upvotes: 1
Selected Answer: D If you want to convert an input format other than JSON, such as comma-separated values (CSV) or structured text, you can use AWS Lambda to transform it to JSON first, and then Amazon Data Firehose can convert the format of your input data from JSON to Apache Parquet or Apache ORC.
Comment 1330152 by kailu
- Upvotes: 1
Selected Answer: C I would go with C. D is close but Kinesis Data Firehose does not really store files in Parquet format.
Comment 1312016 by michele_scar
- Upvotes: 1
Selected Answer: D https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html
You first need JSON (produced by Lambda) so that Firehose can store the data in Parquet.
Comment 1301609 by rsmf
- Upvotes: 2
Selected Answer: D Firehose can’t convert CSV to JSON.
So, that’s D
Comment 1285242 by PashoQ
- Upvotes: 2
Selected Answer: D If you want to convert an input format other than JSON, such as comma-separated values (CSV) or structured text, you can use AWS Lambda to transform it to JSON first.
Comment 1268230 by mzansikiller
- Upvotes: 3
Selected Answer: D Amazon Data Firehose can convert the format of your input data from JSON to Apache Parquet or Apache ORC before storing the data in Amazon S3. Parquet and ORC are columnar data formats that save space and enable faster queries compared to row-oriented formats like JSON. If you want to convert an input format other than JSON, such as comma-separated values (CSV) or structured text, you can use AWS Lambda to transform it to JSON first. For more information, see Transform source data in Amazon Data Firehose. Answer D
Comment 1263612 by Shanmahi
- Upvotes: 2
Selected Answer: B Kinesis Data Firehose: It has built-in support for data transformation and format conversion. It can directly convert incoming data from .csv to JSON format and then further convert the data to Apache Parquet format before storing it in Amazon S3.
Minimal Development Effort: This option requires the least development effort because Kinesis Data Firehose handles both the transformation (from .csv to JSON) and the format conversion (to Parquet) natively. No additional AWS Lambda functions or custom code are needed.
Comment 1259922 by MinTheRanger
- Upvotes: 4
Selected Answer: B B. Why? Amazon Data Firehose can convert the format of your input data from JSON to Apache Parquet or Apache ORC before storing the data in Amazon S3. https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html With that LEAST development effort, why do we need to use Lambda additionally? :D
Comment 1254188 by valuedate
- Upvotes: 3
Option D - Need to convert the input data from .csv to JSON first. Firehose can’t do that without the help of a Lambda function in this case. Afterwards, Firehose can convert to Parquet and deliver it to S3.
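The Firehose side of option D can be sketched as a delivery stream whose ProcessingConfiguration points at the CSV-to-JSON Lambda and whose DataFormatConversionConfiguration converts the resulting JSON to Parquet; all ARNs, names, and the Glue table below are hypothetical placeholders:

```python
import boto3

firehose = boto3.client("firehose")
firehose.create_delivery_stream(
    DeliveryStreamName="csv-to-parquet-stream",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseDeliveryRole",
        "BucketARN": "arn:aws:s3:::example-destination-bucket",
        # Lambda processor handles CSV -> JSON.
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [{
                "Type": "Lambda",
                "Parameters": [{
                    "ParameterName": "LambdaArn",
                    "ParameterValue": "arn:aws:lambda:eu-west-1:123456789012:function:csv-to-json",
                }],
            }],
        },
        # Built-in format conversion handles JSON -> Parquet.
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            # Format conversion needs a schema from the AWS Glue Data Catalog.
            "SchemaConfiguration": {
                "RoleARN": "arn:aws:iam::123456789012:role/FirehoseDeliveryRole",
                "DatabaseName": "example_db",
                "TableName": "orders",
                "Region": "eu-west-1",
            },
        },
    },
)
```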
Comment 1240848 by HunkyBunky
- Upvotes: 2
Selected Answer: B B - least development effort
Comment 1239459 by Alagong
- Upvotes: 4
Selected Answer: B By using the built-in transformation and format conversion features of Kinesis Data Firehose, you achieve the desired result with minimal custom development, thereby meeting the requirements efficiently and cost-effectively.
Comment 1239110 by Bmaster
- Upvotes: 1
D is good
https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html
Question JyFfEYxpglpOUutU8ppS
Question
A company is using an AWS Transfer Family server to migrate data from an on-premises environment to AWS. Company policy mandates the use of TLS 1.2 or above to encrypt the data in transit.
Which solution will meet these requirements?
Choices
- A: Generate new SSH keys for the Transfer Family server. Make the old keys and the new keys available for use.
- B: Update the security group rules for the on-premises network to allow only connections that use TLS 1.2 or above.
- C: Update the security policy of the Transfer Family server to specify a minimum protocol version of TLS 1.2
- D: Install an SSL certificate on the Transfer Family server to encrypt data transfers by using TLS 1.2.
answer?
Answer: C Answer_ET: C Community answer C (100%) Discussion
Comment 1399179 by Palee
- Upvotes: 1
Selected Answer: C C is correct
Comment 1244465 by Ja13
- Upvotes: 1
Selected Answer: C
Comment 1240850 by HunkyBunky
- Upvotes: 2
Selected Answer: C Only C is good
Comment 1239111 by Bmaster
- Upvotes: 4
C is correct
https://docs.aws.amazon.com/transfer/latest/userguide/security-policies.html
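A minimal sketch of option C with boto3 (the server ID is a placeholder, and the policy name is only an example; pick a policy from the security-policies page above that enforces a TLS 1.2 minimum):

```python
import boto3

transfer = boto3.client("transfer")

# Attach a security policy that only allows TLS 1.2+ ciphers/protocols.
transfer.update_server(
    ServerId="s-1234567890abcdef0",              # placeholder server ID
    SecurityPolicyName="TransferSecurityPolicy-2020-06",  # example policy name
)
```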
Question emo1vlLPmP006nGuf541
Question
A company wants to migrate an application and an on-premises Apache Kafka server to AWS. The application processes incremental updates that an on-premises Oracle database sends to the Kafka server. The company wants to use the replatform migration strategy instead of the refactor strategy.
Which solution will meet these requirements with the LEAST management overhead?
Choices
- A: Amazon Kinesis Data Streams
- B: Amazon Managed Streaming for Apache Kafka (Amazon MSK) provisioned cluster
- C: Amazon Kinesis Data Firehose
- D: Amazon Managed Streaming for Apache Kafka (Amazon MSK) Serverless
answer?
Answer: D Answer_ET: D Community answer D (100%) Discussion
Comment 1241156 by HunkyBunky
- Upvotes: 4
Selected Answer: D D - because this is a lift-and-shift migration, and serverless gives the LEAST management overhead
Comment 1239115 by Bmaster
- Upvotes: 2
D is good
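A minimal sketch of option D with boto3 (subnet and security group IDs are placeholders): MSK Serverless keeps the Kafka API, so the replatformed application can keep its existing Kafka clients while AWS manages capacity, brokers, and patching.

```python
import boto3

kafka = boto3.client("kafka")

# Serverless MSK cluster: no broker sizing, scaling, or patching to manage.
kafka.create_cluster_v2(
    ClusterName="replatformed-kafka",
    Serverless={
        "VpcConfigs": [{
            "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
            "SecurityGroupIds": ["sg-cccc3333"],                   # placeholder
        }],
        "ClientAuthentication": {"Sasl": {"Iam": {"Enabled": True}}},
    },
)
```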