Question 1 ( Topic 1 )
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a "could not connect to server: Connection times out" error message to Amazon CloudWatch Logs.
What is the cause of this error?
A. The user name and password the application is using are incorrect.
B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
D. The user name and password are correct, but the user is not authorized to use the DB instance.
Answer : C
Reference:
https://forums.aws.amazon.com/thread.js ... dID=129700
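For context on answer C, the missing piece is typically an inbound rule on the DB instance's security group that references the application servers' security group. A minimal boto3 sketch (the group IDs are placeholders, not values from the question):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow MySQL traffic (TCP 3306) into the DB security group, but only from
# members of the application servers' security group (IDs are hypothetical).
ec2.authorize_security_group_ingress(
    GroupId="sg-0db1111111111111",   # security group attached to the RDS instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app222222222222"}],  # app tier SG
    }],
)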
Question 2 ( Topic 1 )
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add
RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)
A. Set DeletionProtection to True
B. Set MultiAZ to True
C. Set TerminationProtection to True
D. Set DeleteAutomatedBackups to False
E. Set DeletionPolicy to Delete
F. Set DeletionPolicy to Retain
Answer : ACF
Reference:
https://docs.aws.amazon.com/AWSCloudFor ... olicy.html
https://aws.amazon.com/premiumsupport/k ... l-updates/
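To illustrate options A and F, a CloudFormation resource can carry both a DeletionPolicy attribute and the RDS DeletionProtection property. A minimal sketch that only validates such a fragment with boto3 (resource name and properties are illustrative, and required settings such as master credentials are trimmed):

import boto3

# Hypothetical template fragment: DeletionPolicy (option F) keeps the DB instance
# if the stack is deleted; DeletionProtection (option A) blocks direct deletion.
TEMPLATE = """
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.medium
      AllocatedStorage: 100
      DeletionProtection: true
"""

cfn = boto3.client("cloudformation")
cfn.validate_template(TemplateBody=TEMPLATE)  # syntax check only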
Question 3 ( Topic 1 )
A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon
RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.
What is the MOST likely cause of the 5-minute connection outage?
A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint
B. The client-side application is caching the DNS data and its TTL is set too high
C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections
D. There were no active Aurora Replicas in the Aurora DB cluster
Answer : C
Question 4 ( Topic 1 )
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company's data center. The company's Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts' inability to connect?
A. Restart the DB cluster to apply the SSL change.
B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
C. Add explicit mappings between the Data Analysts' IP addresses and the instance in the security group assigned to the DB cluster.
D. Modify the Data Analystsג€™ local client firewall to allow network traffic to AWS.
Answer : D
Question 5 ( Topic 1 )
A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.
What can the Database Specialist do to reduce the overall cost?
A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
Answer : A
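Several of the options revolve around an expiration-time attribute. For reference, native DynamoDB time to live (option C) is enabled per table with a single call; a boto3 sketch with placeholder table and attribute names:

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on a hypothetical table; items whose 'expires_at' epoch timestamp
# has passed are deleted by DynamoDB automatically.
dynamodb.update_time_to_live(
    TableName="transactions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Writers would then set expires_at to now + 2 days on every item they put.
two_days_from_now = int(time.time()) + 2 * 24 * 60 * 60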
Question 6 ( Topic 1 )
A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.
The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.
Which solution will meet these requirements with minimal effort?
A. Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.
D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
Answer : C
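For context on answer C, RDS event subscriptions push categorized events (creation, deletion, backup, failure, and so on) to an SNS topic that downstream systems can subscribe to. A boto3 sketch with a hypothetical topic ARN:

import boto3

rds = boto3.client("rds")

# Subscribe an SNS topic to lifecycle-related event categories for all DB instances.
rds.create_event_subscription(
    SubscriptionName="db-lifecycle-tracking",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-ops-events",  # hypothetical
    SourceType="db-instance",
    EventCategories=["creation", "deletion", "backup", "failure"],
    Enabled=True,
)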
Question 7 ( Topic 1 )
A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.
Which approach should the Database Specialist take to securely manage the database credentials?
A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.
Answer : B
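For reference on option C, an application running under an instance-profile role can pull rotated credentials from AWS Secrets Manager at startup; a boto3 sketch with a hypothetical secret name:

import json
import boto3

secrets = boto3.client("secretsmanager")

# Retrieve the database secret (name is hypothetical); Secrets Manager handles
# rotation, so the application never stores a long-lived password.
response = secrets.get_secret_value(SecretId="prod/ecommerce/postgres")
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]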
Question 8 ( Topic 1 )
A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.
Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)
A. Enable in-transit and at-rest encryption on the ElastiCache cluster.
B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
E. Ensure the security group for the ElastiCache clients authorizes inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster's security group.
F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.
Answer : ABE
Reference:
https://aws.amazon.com/getting-started/ ... asticache/
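For context on option A (and the related AUTH token in option F), encryption settings and the auth token are specified when the Redis replication group is created. A boto3 sketch with placeholder identifiers:

import boto3

elasticache = boto3.client("elasticache")

# Create a cluster-mode-enabled Redis replication group with in-transit and
# at-rest encryption; the AUTH token must then accompany every client command.
elasticache.create_replication_group(
    ReplicationGroupId="shared-data-cache",                 # hypothetical
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    CacheParameterGroupName="default.redis6.x.cluster.on",  # cluster mode enabled
    NumNodeGroups=2,
    ReplicasPerNodeGroup=1,
    TransitEncryptionEnabled=True,
    AtRestEncryptionEnabled=True,
    AuthToken="use-a-long-random-token-here",               # placeholder
)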
Question 9 ( Topic 1 )
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.
What is the FASTEST way to accomplish this?
A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.
Answer : C
Question 10 ( Topic 1 )
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle
DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?
A. In the same Region and VPC of the source DB instance
B. In the same Region and VPC as the target DB instance
C. In the same VPC and Availability Zone as the target DB instance
D. In the same VPC and Availability Zone as the source DB instance
Answer : D
Question 11 ( Topic 1 )
The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality.
This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect
WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?
A. Quickly rewind the DB cluster to a point in time before the release using Backtrack.
B. Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
C. Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
D. Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
Answer : D
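For context, Backtrack rewinds an Aurora MySQL cluster in place (option A) and can also be run against a clone, as option D describes; the underlying API call is the same. A boto3 sketch with a hypothetical cluster identifier and an illustrative timestamp:

from datetime import datetime, timezone
import boto3

rds = boto3.client("rds")

# Rewind the cluster to just before the faulty release.
rds.backtrack_db_cluster(
    DBClusterIdentifier="orders-cluster",                       # hypothetical
    BacktrackTo=datetime(2021, 8, 2, 9, 0, tzinfo=timezone.utc),
    UseEarliestTimeOnPointInTimeUnavailable=True,
)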
Question 12 ( Topic 1 )
A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)
A. Review the stack drift before modifying the template
B. Create and review a change set before applying it
C. Export the database resources as stack outputs
D. Define the database resources in a nested stack
E. Set a stack policy for the database resources
Answer : AD
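For context on option A, stack drift detection reports whether deployed resources, including the RDS resources, still match the template. A boto3 sketch with a hypothetical stack name:

import boto3

cfn = boto3.client("cloudformation")

# Start drift detection for the stack, then poll its status until it completes.
detection_id = cfn.detect_stack_drift(StackName="webapp-prod")["StackDriftDetectionId"]
status = cfn.describe_stack_drift_detection_status(StackDriftDetectionId=detection_id)
print(status["DetectionStatus"])

# Once detection completes, list per-resource drift, including the RDS resources.
drifts = cfn.describe_stack_resource_drifts(StackName="webapp-prod")
for drift in drifts["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])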
Question 13 ( Topic 1 )
A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)
A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
C. Edit and enable Aurora DB cluster cache management in parameter groups.
D. Set TCP keepalive parameters to a high value.
E. Set JDBC connection string timeout variables to a low value.
F. Set Java DNS caching timeouts to a high value.
Answer : ABC
Question 14 ( Topic 1 )
A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS
KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?
A. Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
B. Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
C. Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
D. Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
Answer : A
Reference:
https://docs.aws.amazon.com/redshift/la ... shots.html
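For a KMS-encrypted cluster, cross-Region snapshot copy relies on a snapshot copy grant created in the destination Region, which is then referenced when copy is enabled on the source cluster. A boto3 sketch with placeholder names and Regions:

import boto3

# The grant lets Amazon Redshift use a KMS key in the destination Region.
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",                       # hypothetical
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/example",   # placeholder
)

# Enable cross-Region snapshot copy on the source cluster.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",                       # hypothetical
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)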
Question 15 ( Topic 1 )
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?
A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.
C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.
Answer : D
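For context on answer D, Aurora Auto Scaling is configured through Application Auto Scaling against the cluster's read replica count. A boto3 sketch with a hypothetical cluster identifier:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the Aurora cluster's replica count as a scalable target (1 to 5 readers).
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-cluster",          # hypothetical cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Track average reader CPU at 60%, adding or removing replicas as load changes.
autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)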
Question 16 ( Topic 1 )
A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies.
The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.
Which step will provide additional security?
A. Set up NACLs that allow the entire EC2 subnet to access the DB instance
B. Disable the master user account
C. Set up a security group that blocks SSH to the DB instance
D. Set up RDS to use SSL for data in transit
Answer : D
Reference:
https://aws.amazon.com/blogs/database/a ... mazon-rds/
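For context on answer D, clients enforce TLS by presenting the RDS certificate bundle when they connect. A minimal sketch using PyMySQL (endpoint, credentials, and certificate path are placeholders):

import pymysql

# Connect to the RDS for MySQL endpoint over TLS using the downloaded RDS CA bundle.
connection = pymysql.connect(
    host="mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="app_user",
    password="example-password",
    database="finance",
    ssl={"ca": "/opt/certs/global-bundle.pem"},            # assumed CA bundle path
)

with connection.cursor() as cursor:
    cursor.execute("SHOW STATUS LIKE 'Ssl_cipher'")  # non-empty value confirms TLS
    print(cursor.fetchone())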
Question 17 ( Topic 1 )
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?
A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.
Answer : C
Question 18 ( Topic 1 )
A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A
Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?
A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
B. Create an AWS CloudFormation template and deploy the template to all the Regions.
C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.
Answer : B
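For reference on option C, a CloudFormation stack set is created once and then instantiated in each target Region, so later template updates propagate to every instance. A boto3 sketch with hypothetical names (assumes the stack set administration and execution roles already exist):

import boto3

cfn = boto3.client("cloudformation")

# Create the stack set from the DynamoDB table template (file name is a placeholder).
with open("dynamodb-table.yaml") as f:
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="game-scores-tables",           # hypothetical
    TemplateBody=template_body,
)

# Deploy identical stacks into every required Region of this account.
cfn.create_stack_instances(
    StackSetName="game-scores-tables",
    Accounts=["123456789012"],                   # placeholder account ID
    Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
)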
Question 19 ( Topic 1 )
A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.
How can the Database Specialists accomplish this?
A. Enable the option to push all database logs to Amazon CloudWatch for advanced analysis
B. Create appropriate Amazon CloudWatch dashboards to contain specific periods of time
C. Enable Amazon RDS Performance Insights and review the appropriate dashboard
D. Enable Enhanced Monitoring with the appropriate settings
Answer : C
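For context on answer C, Performance Insights is switched on per DB instance and then surfaces database load broken down by wait events. A boto3 sketch with a hypothetical instance identifier:

import boto3

rds = boto3.client("rds")

# Turn on Performance Insights with the default 7-day retention.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql",          # hypothetical
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,
    ApplyImmediately=True,
)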
Question 20 ( Topic 1 )
A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.
What should the company do to achieve this in the shortest amount of time?
A. Use a blue-green deployment with a complete application-level failover test
B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
C. Use RDS fault injection queries to simulate the primary node failure
D. Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
Answer : C
Reference:
https://wellarchitectedlabs.com/Reliabi ... Guide.html
Question 21 ( Topic 1 )
A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.
What should a Database Specialist do to meet these requirements with minimal effort?
A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
Answer : A
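For context on the 90-day retention in answer A, the consolidated log objects in Amazon S3 can be expired with a lifecycle rule. A boto3 sketch with a hypothetical bucket and prefix:

import boto3

s3 = boto3.client("s3")

# Expire consolidated RDS log objects 90 days after they are written.
s3.put_bucket_lifecycle_configuration(
    Bucket="central-db-logs",                     # hypothetical
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-rds-logs-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "rds/"},
            "Expiration": {"Days": 90},
        }]
    },
)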
Question 22 ( Topic 1 )
A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.
In the event of a primary failure, what will occur?
A. Aurora will promote an Aurora Replica that is of the same size as the primary instance
B. Aurora will promote an arbitrary Aurora Replica
C. Aurora will promote the largest-sized Aurora Replica
D. Aurora will not promote an Aurora Replica
Answer : A
Reference:
https://docs.aws.amazon.com/AmazonRDS/l ... ora-ug.pdf
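For reference, promotion tiers can also be assigned explicitly per replica so that failover prefers a specific instance (lower tiers are promoted first). A boto3 sketch with a hypothetical replica identifier:

import boto3

rds = boto3.client("rds")

# Make this replica the preferred failover target by giving it the highest priority.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-replica-large",   # hypothetical
    PromotionTier=0,
    ApplyImmediately=True,
)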
Question 23 ( Topic 1 )
A company is running its line-of-business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.
Which migration method should a Database Specialist use?
A. Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
B. Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
C. Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
D. Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.
Answer : A
Reference:
https://d1.awsstatic.com/whitepapers/RD ... Aurora.pdf
(10)
Question 24 ( Topic 1 )
The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.
Which approach will meet these requirements?
A. Use pg_audit to generate audit logs and send the logs to the Security team.
B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
Answer : B
Reference:
https://docs.aws.amazon.com/AmazonRDS/l ... ora-ug.pdf
(525)
Question 25 ( Topic 1 )
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their
MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.
Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?
A. Deploy multiple read replicas and have the team members make changes to separate replica instances
B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
D. Enable the Amazon RDS for MySQL Backtrack feature
Answer : A
Question 26 ( Topic 1 )
A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.
- Only certain on-premises corporate network IPs should connect to the DB instance.
- Connectivity is allowed from the corporate network only.
Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)
A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
C. Move the DB instance to a private subnet using AWS DMS.
D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
E. Disable the publicly accessible setting.
F. Connect to the DB instance using private IPs and a VPN.
Answer : DEF
Question 27 ( Topic 1 )
A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an
Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.
What should the Database Specialist do to meet these requirements?
A. Restore a snapshot from the production cluster into test clusters
B. Create logical dumps of the production cluster and restore them into new test clusters
C. Use database cloning to create clones of the production cluster
D. Add an additional read replica to the production cluster and use that node for testing
Answer : D
Question 28 ( Topic 1 )
A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on
Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions.
This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.
Which set of actions will meet these requirements?
A. Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
B. Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
C. Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
D. Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located on the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
Answer : A
Question 29 ( Topic 1 )
A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.
Which approach will MOST effectively meet these requirements?
A. Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.
B. Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
C. Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
D. Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
Answer : D
Reference:
https://docs.aws.amazon.com/dms/latest/ ... ating.html
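For context on answer D, validation is turned on in the replication task settings so AWS DMS compares source and target rows after they are migrated. A boto3 sketch with placeholder ARNs (table mapping rules trimmed):

import json
import boto3

dms = boto3.client("dms")

task_settings = {"ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}}

# Create the migration task with data validation enabled (ARNs are placeholders).
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-pg",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({"rules": []}),      # real selection rules omitted
    ReplicationTaskSettings=json.dumps(task_settings),
)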
Question 30 ( Topic 1 )
A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL.
The Database Specialist wrote and ran a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.
How should the Database Specialist edit the script to fix this issue?
A. Stop the source instances before stopping their read replicas
B. Delete each read replica before stopping its corresponding source instance
C. Stop the read replicas before stopping their source instances
D. Use the AWS CLI to stop each read replica and source instance at the same time
Answer : D
Question 31 ( Topic 1 )
A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user's browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the
United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.
Which database solution meets these requirements?
A. Amazon DocumentDB
B. Amazon RDS Multi-AZ deployment
C. Amazon DynamoDB global table
D. Amazon Aurora Global Database
Answer : C
Reference:
https://docs.aws.amazon.com/amazondynam ... ables.html
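For context on answer C, a global table is assembled from identically keyed tables in each participating Region. A boto3 sketch with a hypothetical table name, assuming the regional tables already exist with streams enabled:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Link per-Region replicas of the 'browsing_events' table into one global table.
# Each regional table must already exist with the same key schema and
# NEW_AND_OLD_IMAGES streams enabled.
dynamodb.create_global_table(
    GlobalTableName="browsing_events",             # hypothetical
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
        {"RegionName": "ap-east-1"},
        {"RegionName": "ap-south-1"},
    ],
)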
Question 32 ( Topic 1 )
A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.
How should the Database Specialist apply the parameter group change for the DB instance?
A. Select the option to apply the change immediately
B. Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
C. Apply the change manually by rebooting the DB instance during the approved maintenance window
D. Reboot the secondary Multi-AZ DB instance
Answer : D
Reference:
https://docs.aws.amazon.com/AmazonRDS/l ... oting.html
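For reference, static parameters can only be staged with the pending-reboot apply method; the new value takes effect the next time the DB instance is rebooted. A boto3 sketch with hypothetical names:

import boto3

rds = boto3.client("rds")

# Stage the static parameter change; it is applied only after a reboot.
rds.modify_db_parameter_group(
    DBParameterGroupName="sqlserver-prod-params",   # hypothetical
    Parameters=[{
        "ParameterName": "user connections",        # illustrative SQL Server parameter
        "ParameterValue": "500",
        "ApplyMethod": "pending-reboot",
    }],
)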
Question 33 ( Topic 1 )
A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores
GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.
Which solution meets these requirements in the MOST efficient way?
A. Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
B. Use Amazon DynamoDB as the database and use DynamoDB Accelerator
C. Use Amazon Aurora MySQL as the database and use Aurora's buffer cache
D. Use Amazon DynamoDB as the database and use Amazon API Gateway
Answer : D
Reference:
https://aws.amazon.com/solutions/case-studies/lyft/
Question 34 ( Topic 1 )
A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary (writer) instance and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability
Zones are healthy and responding normally.
What should the company do to eliminate this application performance issue?
A. Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.
B. Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.
C. Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.
D. Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.
Answer : D
Question 35 ( Topic 1 )
A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in
CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A
Database Specialist needs to determine what caused the CPU spike.
Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)
A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
D. Use Amazon QuickSight to view the SQL statement being run.
E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.
Answer : BE
Question 36 ( Topic 1 )
A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?
A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
B. Use reader endpoints for both the read-only workload applications.
C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
D. Use custom endpoints for the two read-only applications.
Answer : B
Reference:
https://rimzy.net/category/amazon-rds/page/4/
Question 37 ( Topic 1 )
An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:
- Update scores in real time whenever a player is playing the game.
- Retrieve a player's score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?
A. Create a global secondary index with game_id as the partition key
B. Create a global secondary index with user_id as the partition key
C. Create a composite primary key with game_id as the partition key and user_id as the sort key
D. Create a composite primary key with user_id as the partition key and game_id as the sort key
Answer : B
Reference:
https://aws.amazon.com/blogs/database/a ... -patterns/
Question 38 ( Topic 1 )
A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.
How should the Database Specialist satisfy this new requirement?
A. Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.
B. Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.
C. Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.
D. Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
Answer : A
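For context on answer A, the unencrypted snapshot is copied with a KMS key and the encrypted copy is then restored to a new instance. A boto3 sketch with placeholder identifiers:

import boto3

rds = boto3.client("rds")

# Copy the unencrypted snapshot, encrypting the copy with a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mysql-prod-snapshot",
    TargetDBSnapshotIdentifier="mysql-prod-snapshot-encrypted",
    KmsKeyId="alias/rds-at-rest",                  # hypothetical key alias
)

# Restore a new, encrypted DB instance from the encrypted snapshot copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mysql-prod-encrypted",
    DBSnapshotIdentifier="mysql-prod-snapshot-encrypted",
)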
Question 39 ( Topic 1 )
A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management
Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.
What is the most likely reason for this?
A. The source DB instance has to be converted to Single-AZ first to create a read replica from it.
B. Enhanced Monitoring is not enabled on the source DB instance.
C. The minor MySQL version in the source DB instance does not support read replicas.
D. Automated backups are not enabled on the source DB instance.
Answer : D
Reference:
https://aws.amazon.com/rds/features/read-replicas/
Question 40 ( Topic 1 )
A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully.
The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.
How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?
A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.
C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
Answer : D
Reference:
https://docs.aws.amazon.com/systems-man ... tions.html