SAP-C01 Exam Questions - Online Test

Pass4sure offers a free demo for the SAP-C01 exam. "AWS Certified Solutions Architect - Professional", also known as the SAP-C01 exam, is an Amazon Web Services certification. This set of posts, Passing the Amazon Web Services SAP-C01 Exam, will help you answer those questions. The SAP-C01 Questions & Answers cover all the knowledge points of the real exam and have been reviewed by experts.
Online SAP-C01 free questions and answers of the new version:
NEW QUESTION 1
A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently, the company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the ordering system:
Lambda failures while processing orders lead to queue backlogs.
The same orders have been processed multiple times.
A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features:
Retain problematic orders for analysis.
Send a notification if errors go beyond a threshold value.
How should the Solutions Architect meet these requirements?
- A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an Amazon CloudWatch alarm on Lambda errors for notification.
- B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a CloudWatch alarm on Lambda errors for notification.
- C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create CloudWatch events on Lambda errors for notification.
- D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an Amazon CloudWatch metric on Lambda errors for notification.
Answer: D
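For reference, a minimal boto3 sketch of the SQS and CloudWatch pieces these options describe: a longer visibility timeout, a redrive policy to a dead-letter queue that retains problematic orders, and an alarm on Lambda errors. The queue names, function name, SNS topic ARN, and thresholds are hypothetical.

```python
import json

import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")

# Hypothetical queue names for illustration only.
orders_queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Raise the visibility timeout and send messages that fail three times to the
# dead-letter queue so problematic orders are retained for analysis.
sqs.set_queue_attributes(
    QueueUrl=orders_queue_url,
    Attributes={
        "VisibilityTimeout": "300",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": 3}
        ),
    },
)

# Alarm on Lambda errors above a threshold, notifying an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="order-processor-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "order-processor"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```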
NEW QUESTION 2
A Solutions Architect is working with a company that is extremely sensitive to its IT costs and wishes to implement controls that will result in a predictable AWS spend each month.
Which combination of steps can help the company control and monitor its monthly AWS usage to achieve a cost that is as close as possible to the target amount? (Choose three.)
- A. Implement an IAM policy that requires users to specify a ‘workload’ tag for cost allocation when launching Amazon EC2 instances.
- B. Contact AWS Support and ask that they apply limits to the account so that users are not able to launch more than a certain number of instance types.
- C. Purchase all upfront Reserved Instances that cover 100% of the account’s expected Amazon EC2 usage.
- D. Place conditions in the users’ IAM policies that limit the number of instances they are able to launch.
- E. Define ‘workload’ as a cost allocation tag in the AWS Billing and Cost Management console.
- F. Set up AWS Budgets to alert and notify when a given workload is expected to exceed a defined cost.
Answer: AEF
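As a reference for options E and F, a hedged boto3 sketch that activates a 'workload' cost allocation tag and creates a monthly budget that notifies when the forecasted spend for that workload exceeds the limit. The account ID, email address, tag value, and limit are placeholders, and the Cost Explorer call assumes the UpdateCostAllocationTagsStatus API is available to the account.

```python
import boto3

ce = boto3.client("ce")
budgets = boto3.client("budgets")

# Activate the 'workload' tag for cost allocation (option E).
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "workload", "Status": "Active"}]
)

# Budget that alerts when the forecasted cost of one tagged workload exceeds
# the target amount (option F). Values are illustrative.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "workload-analytics-monthly",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "CostFilters": {"TagKeyValue": ["user:workload$analytics"]},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```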
NEW QUESTION 3
A large company experienced a drastic increase in its monthly AWS spend after Developers accidentally launched Amazon EC2 instances in unexpected regions. The company has established practices around least privilege for Developers and controls access to on-premises resources using Active Directory groups. The company now wants to control costs by restricting the level of access that Developers have to the AWS Management Console without impacting their productivity. The company would also like to allow Developers to launch Amazon EC2 instances in only one region, without limiting access to other services in any region.
How can this company achieve these new security requirements while minimizing the administrative burden on the Operations team?
- A. Set up SAML-based authentication tied to an IAM role that has an AdministrativeAccess managed policy attached to it. Attach a customer managed policy that denies access to Amazon EC2 in each region except for the one required.
- B. Create an IAM user for each Developer and add them to the developer IAM group that has the PowerUserAccess managed policy attached to it. Attach a customer managed policy that allows the Developers access to Amazon EC2 only in the required region.
- C. Set up SAML-based authentication tied to an IAM role that has a PowerUserAccess managed policy and a customer managed policy that deny all the Developers access to any AWS services except AWS Service Catalog. Within AWS Service Catalog, create a product containing only the EC2 resources in the approved region.
- D. Set up SAML-based authentication tied to an IAM role that has the PowerUserAccess managed policy attached to it. Attach a customer managed policy that denies access to Amazon EC2 in each region except for the one required.
Answer: D
Explanation:
The tricks here are: SAML for AD federation and authentication, and PowerUserAccess vs. AdministrativeAccess. PowerUser has fewer privileges, which is what is required for Developers; Administrator has more rights. The description AWS gives for PowerUserAccess is "Provides full access to AWS services and resources, but does not allow management of Users and groups."
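A sketch of the kind of customer managed policy option D and the explanation refer to, expressed as a boto3 call. The policy name and the approved region are placeholders.

```python
import json

import boto3

# Deny all EC2 actions unless the request targets the approved region.
deny_ec2_outside_region = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "eu-west-1"}
            },
        }
    ],
}

boto3.client("iam").create_policy(
    PolicyName="DenyEC2OutsideApprovedRegion",
    PolicyDocument=json.dumps(deny_ec2_outside_region),
)
```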
NEW QUESTION 4
The Solutions Architect manages a serverless application that consists of multiple API gateways, AWS Lambda functions, Amazon S3 buckets, and Amazon DynamoDB tables. Customers say that a few application components are slow while loading dynamic images, and some are timing out with the “504 Gateway Timeout” error. While troubleshooting the scenario, the Solutions Architect confirms that the DynamoDB monitoring metrics are at acceptable levels.
Which of the following steps would be optimal for debugging these application issues? (Choose two.)
- A. Parse HTTP logs in Amazon API Gateway for HTTP errors to determine the root cause of the errors.
- B. Parse Amazon CloudWatch Logs to determine processing times for requested images at specified intervals.
- C. Parse VPC Flow Logs to determine if there is packet loss between the Lambda function and S3.
- D. Parse AWS X-Ray traces and analyze HTTP methods to determine the root cause of the HTTP errors.
- E. Parse S3 access logs to determine if objects being accessed are from specific IP addresses to narrow the scope to geographic latency issues.
Answer: BD
Explanation:
Firstly “A 504 Gateway Timeout Error means your web server didn't receive a timely response from another server upstream when it attempted to load one of your web pages. Put simply, your web servers aren't communicating with each other fast enough”. This specific issue is addressed in the AWS article “Tracing, Logging and Monitoring an API Gateway API”. https://docs.amazonaws.cn/en_us/apigateway/latest/developerguide/monitoring_overview.html
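A hedged sketch of turning on AWS X-Ray tracing for the API Gateway stage and the downstream Lambda function so the traces referenced in option D can be collected and analyzed. The API ID, stage name, and function name are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")
lambda_client = boto3.client("lambda")

# Enable X-Ray tracing on the REST API stage.
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/tracingEnabled", "value": "true"}
    ],
)

# Trace the downstream Lambda function as well, so X-Ray can show where time
# is spent before the 504 is returned to the client.
lambda_client.update_function_configuration(
    FunctionName="image-resizer",
    TracingConfig={"Mode": "Active"},
)
```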
NEW QUESTION 6
An organization has recently grown through acquisitions. Two of the purchased companies use the same IP CIDR range. There is a new short-term requirement to allow AnyCompany A (VPC-A) to communicate with a server that has the IP address 10.0.0.77 in AnyCompany B (VPC-B). AnyCompany A must also communicate with all resources in AnyCompany C (VPC-C). The Network team has created the VPC peer links, but it is having issues with communications between VPC-A and VPC-B. After an investigation, the team believes that the routing tables in the VPCs are incorrect.

What configuration will allow AnyCompany A to communicate with AnyCompany C in addition to the database in AnyCompany B?
- A. On VPC-A, create a static route for the VPC-B CIDR range (10.0.0.0/24) across VPC peer pcx-AB. Create a static route of 10.0.0.0/16 across VPC peer pcx-AC. On VPC-B, create a static route for VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.
- B. On VPC-A, enable dynamic route propagation on pcx-AB and pcx-AC. On VPC-B, enable dynamic route propagation and use security groups to allow only the IP address 10.0.0.77/32 on VPC peer pcx-AB. On VPC-C, enable dynamic route propagation with VPC-A on peer pcx-AC.
- C. On VPC-A, create network access control lists that block the IP address 10.0.0.77/32 on VPC peer pcx-AC. On VPC-A, create a static route for VPC-B CIDR (10.0.0.0/24) on pcx-AB and a static route for VPC-C CIDR (10.0.0.0/24) on pcx-AC. On VPC-B, create a static route for VPC-A CIDR (172.16.0.0/24) across peer pcx-AB. On VPC-C, create a static route for VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.
- D. On VPC-A, create a static route for the VPC-B CIDR (10.0.0.77/32) database across VPC peer pcx-AB. Create a static route for the VPC-C CIDR on VPC peer pcx-AC. On VPC-B, create a static route for VPC-A CIDR (172.16.0.0/24) on peer pcx-AB. On VPC-C, create a static route for VPC-A CIDR (172.16.0.0/24) across peer pcx-AC.
Answer: D
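A sketch of the static routes these options describe, using boto3. The route table IDs and peering connection IDs (written pcx-AB and pcx-AC as in the question) are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# On VPC-A, route only the single host in VPC-B (/32) over pcx-AB and the
# full VPC-C CIDR over pcx-AC; the /32 avoids the overlap between VPC-B
# and VPC-C, which share the same CIDR range.
ec2.create_route(
    RouteTableId="rtb-aaaa1111",            # VPC-A route table
    DestinationCidrBlock="10.0.0.77/32",
    VpcPeeringConnectionId="pcx-ab",
)
ec2.create_route(
    RouteTableId="rtb-aaaa1111",
    DestinationCidrBlock="10.0.0.0/24",     # VPC-C CIDR in this scenario
    VpcPeeringConnectionId="pcx-ac",
)

# Return routes for VPC-A (172.16.0.0/24) on the VPC-B and VPC-C route tables.
ec2.create_route(
    RouteTableId="rtb-bbbb2222",
    DestinationCidrBlock="172.16.0.0/24",
    VpcPeeringConnectionId="pcx-ab",
)
ec2.create_route(
    RouteTableId="rtb-cccc3333",
    DestinationCidrBlock="172.16.0.0/24",
    VpcPeeringConnectionId="pcx-ac",
)
```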
NEW QUESTION 7
A company is running multiple applications on Amazon EC2. Each application is deployed and managed by multiple business units. All applications are deployed on a single AWS account but on different virtual private clouds (VPCs). The company uses a separate VPC in the same account for test and development purposes.
Production applications suffered multiple outages when users accidentally terminated and modified resources that belonged to another business unit. A Solutions Architect has been asked to improve the availability of the company applications while allowing the Developers access to the resources they need.
Which option meets the requirements with the LEAST disruption?
- A. Create an AWS account for each business unit. Move each business unit’s instances to its own account and set up a federation to allow users to access their business unit’s account.
- B. Set up a federation to allow users to use their corporate credentials, and lock the users down to their own VPC. Use a network ACL to block each VPC from accessing other VPCs.
- C. Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business units only.
- D. Set up role-based access for each user and provide limited permissions based on individual roles and the services for which each user is responsible.
Answer: C
Explanation:
Principal – Control what the person making the request (the principal) is allowed to do based on the tags that are attached to that person's IAM user or role. To do this, use the aws:PrincipalTag/key-name condition key to specify what tags must be attached to the IAM user or role before the request is allowed. https://docs.aws.amazon.com/IAM/latest/UserGuide/access_iam-tags.html
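A sketch of a tag-based policy in the spirit of the explanation: the caller's aws:PrincipalTag must match the instance's business-unit tag before TerminateInstances is allowed. The tag key name is illustrative.

```python
import json

# Allow terminating only those instances whose "business-unit" tag matches the
# same tag on the calling IAM user or role.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/business-unit": "${aws:PrincipalTag/business-unit}"
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```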
NEW QUESTION 8
A company needs to run a software package that has a license that must be run on the same physical host for the duration of its use. The software package is only going to be used for 90 days. The company requires patching and restarting of all instances every 30 days.
How can these requirements be met using AWS?
- A. Run a dedicated instance with auto-placement disabled.
- B. Run the instance on a dedicated host with Host Affinity set to Host.
- C. Run an On-Demand instance with a Reserved Instance to ensure consistent placement.
- D. Run the instance on a licensed host with termination set for 90 days.
Answer: B
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-dedicated-hosts-work.html
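A minimal boto3 sketch of the Dedicated Host pattern in the answer: allocate a host with auto-placement off, then launch the instance with host affinity so a stop/start during the monthly patch cycle returns it to the same physical host. The instance type, Availability Zone, and AMI are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate a Dedicated Host that only accepts explicitly targeted launches.
host_id = ec2.allocate_hosts(
    InstanceType="c5.xlarge",
    AvailabilityZone="us-east-1a",
    Quantity=1,
    AutoPlacement="off",
)["HostIds"][0]

# Launch with Affinity=host so the instance restarts on the same host.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id, "Affinity": "host"},
)
```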
NEW QUESTION 9
A company is adding a new approved external vendor that only supports IPv6 connectivity. The company’s backend systems sit in the private subnet of an Amazon VPC. The company uses a NAT gateway to allow these systems to communicate with external vendors over IPv4. Company policy requires systems that communicate with external vendors use a security group that limits access to only approved external vendors. The virtual private cloud (VPC) uses the default network ACL.
The Systems Operator successfully assigns IPv6 addresses to each of the backend systems. The Systems Operator also updates the outbound security group to include the IPv6 CIDR of the external vendor (destination). The systems within the VPC are able to ping one another successfully over IPv6. However, these systems are unable to communicate with the external vendor.
What changes are required to enable communication with the external vendor?
- A. Create an IPv6 NAT instance. Add a route for destination 0.0.0.0/0 pointing to the NAT instance.
- B. Enable IPv6 on the NAT gateway. Add a route for destination ::/0 pointing to the NAT gateway.
- C. Enable IPv6 on the internet gateway. Add a route for destination 0.0.0.0/0 pointing to the IGW.
- D. Create an egress-only internet gateway. Add a route for destination ::/0 pointing to the gateway.
Answer: D
Explanation:
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
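A minimal boto3 sketch of option D; the VPC and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an egress-only internet gateway for the VPC (outbound-only IPv6).
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Point the private subnet's IPv6 default route (::/0) at the gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)
```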
NEW QUESTION 10
A Solutions Architect is designing a highly available and reliable solution for a cluster of Amazon EC2 instances.
The Solutions Architect must ensure that any EC2 instance within the cluster recovers automatically after a system failure. The solution must ensure that the recovered instance maintains the same IP address.
How can these requirements be met?
- A. Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.
- B. Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.
- C. Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances command upon failure.
- D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.
Answer: B
Explanation:
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
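For reference, the CloudWatch alarm with an EC2 recover action that option D describes (the mechanism covered by the referenced instance-recovery documentation), sketched with boto3. Recovery keeps the instance ID and IP addresses; the alarm name, instance ID, and region are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Recover the instance onto healthy hardware when the system status check
# fails, preserving its instance ID, private IP, and Elastic IP.
cloudwatch.put_metric_alarm(
    AlarmName="recover-cluster-node-1",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```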
NEW QUESTION 11
A company runs a video processing platform. Files are uploaded by users who connect to a web server, which stores them on an Amazon EFS share. This web server is running on a single Amazon EC2 instance. A different group of instances, running in an Auto Scaling group, scans the EFS share directory structure for new files to process and generates new videos (thumbnails, different resolution, compression, etc.) according to the instructions file, which is uploaded along with the video files. A different application running on a group of instances managed by an Auto Scaling group processes the video files and then deletes them from the EFS share. The results are stored in an S3 bucket. Links to the processed video files are emailed to the customer.
The company has recently discovered that as they add more instances to the Auto Scaling Group, many files are processed twice, so image processing speed is not improved. The maximum size of these video files is 2GB.
What should the Solutions Architect do to improve reliability and reduce the redundant processing of video files?
- A. Modify the web application to upload the video files directly to Amazon S3. Use Amazon CloudWatch Events to trigger an AWS Lambda function every time a file is uploaded, and have this Lambda function put a message into an Amazon SQS queue. Modify the video processing application to read from the SQS queue for new files and use the queue depth metric to scale instances in the video processing Auto Scaling group.
- B. Set up a cron job on the web server instance to synchronize the contents of the EFS share into Amazon S3. Trigger an AWS Lambda function every time a file is uploaded to process the video file and store the results in Amazon S3. Using Amazon CloudWatch Events, trigger an Amazon SES job to send an email to the customer containing the link to the processed file.
- C. Rewrite the web application to run directly from Amazon S3 and use Amazon API Gateway to upload the video files to an S3 bucket. Use an S3 trigger to run an AWS Lambda function each time a file is uploaded to process and store new video files in a different bucket. Using CloudWatch Events, trigger an SES job to send an email to the customer containing the link to the processed file.
- D. Rewrite the web application to run from Amazon S3 and upload the video files to an S3 bucket. Each time a new file is uploaded, trigger an AWS Lambda function to put a message in an SQS queue containing the link and the instructions. Modify the video processing application to read from the SQS queue and the S3 bucket. Use the queue depth metric to adjust the size of the Auto Scaling group for video processing instances.
Answer: A
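A sketch of the small Lambda function option A describes: on each upload event, it puts one SQS message per new object. It assumes an S3 event notification payload and a QUEUE_URL environment variable, both of which are hypothetical.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # hypothetical environment variable


def handler(event, context):
    """Forward each newly uploaded video object as one SQS message."""
    records = event.get("Records", [])
    for record in records:
        message = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))
    return {"queued": len(records)}
```

The video processing Auto Scaling group can then scale on the ApproximateNumberOfMessagesVisible metric of that queue, so each file is taken off the queue exactly once instead of being discovered repeatedly on the shared file system.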
NEW QUESTION 12
A company is running a high-user-volume media-sharing application on premises. It currently hosts about 400 TB of data with millions of video files. The company is migrating this application to AWS to improve reliability and reduce costs.
The Solutions Architecture team plans to store the videos in an Amazon S3 bucket and use Amazon CloudFront to distribute videos to users. The company needs to migrate this application to AWS within 10 days with the least amount of downtime possible. The company currently has 1 Gbps connectivity to the internet with 30 percent free capacity.
Which of the following solutions would enable the company to migrate the workload to AWS and meet all of the requirements?
- A. Use multipart upload in an Amazon S3 client to parallel-upload the data to the Amazon S3 bucket over the internet. Use the throttling feature to ensure that the Amazon S3 client does not use more than 30 percent of available internet capacity.
- B. Request an AWS Snowmobile with 1 PB capacity to be delivered to the data center. Load the data into the Snowmobile and send it back to have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while migration was in flight.
- C. Use an Amazon S3 client to transfer data from the data center to the Amazon S3 bucket over the internet. Use the throttling feature to ensure the Amazon S3 client does not use more than 30 percent of available internet capacity.
- D. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send them back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while migration was in flight.
Answer: D
Explanation:
https://www.edureka.co/blog/aws-snowball-and-snowmobile-tutorial/
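A quick back-of-the-envelope check, written in Python, of why an online transfer cannot meet the 10-day window, which is what makes the Snowball approach necessary:

```python
# 30 percent of a 1 Gbps link moving 400 TB of video files.
data_tb = 400
link_gbps = 1.0
usable_fraction = 0.30

usable_bits_per_sec = link_gbps * 1e9 * usable_fraction   # 3.0e8 bit/s
data_bits = data_tb * 1e12 * 8                             # 3.2e15 bits
days = data_bits / usable_bits_per_sec / 86400

print(f"~{days:.0f} days")  # roughly 123 days, far beyond the 10-day window
```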
NEW QUESTION 13
A company runs a dynamic mission-critical web application that has an SLA of 99.99%. Global application users access the application 24/7. The application is currently hosted on premises and routinely fails to meet its SLA, especially when millions of users access the application concurrently. Remote users complain of latency.
How should this application be redesigned to be scalable and allow for automatic failover at the lowest cost?
- A. Use Amazon Route 53 failover routing with geolocation-based routing. Host the website on automatically scaled Amazon EC2 instances behind an Application Load Balancer with an additional Application Load Balancer and EC2 instances for the application layer in each region. Use a Multi-AZ deployment with MySQL as the data layer.
- B. Use Amazon Route 53 round robin routing to distribute the load evenly to several regions with health checks. Host the website on automatically scaled Amazon ECS with AWS Fargate technology containers behind a Network Load Balancer, with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora replicas for the data layer.
- C. Use Amazon Route 53 latency-based routing to route to the nearest region with health checks. Host the website in Amazon S3 in each region and use Amazon API Gateway with AWS Lambda for the application layer. Use Amazon DynamoDB global tables as the data layer with Amazon DynamoDB Accelerator (DAX) for caching.
- D. Use Amazon Route 53 geolocation-based routing. Host the website on automatically scaled AWS Fargate containers behind a Network Load Balancer with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora Multi-Master for Aurora MySQL as the data layer.
Answer: C
Explanation:
https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-co
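For reference, a hedged sketch of the Route 53 latency-based routing piece of answer C; the hosted zone ID, health check ID, and record values are placeholders, and a real deployment would more likely use alias records pointing at regional endpoints.

```python
import boto3

route53 = boto3.client("route53")

# One latency-based record per region, each tied to its own health check.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            }
        ]
    },
)
```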
NEW QUESTION 14
A Solutions Architect is designing the storage layer for a recently purchased application. The application will be running on Amazon EC2 instances and has the following layers and requirements:
Data layer: A POSIX file system shared across many systems.
Service layer: Static file content that requires block storage with more than 100k IOPS.
Which combination of AWS services will meet these needs? (Choose two.)
- A. Data layer – Amazon S3
- B. Data layer – Amazon EC2 Ephemeral Storage
- C. Data layer – Amazon EFS
- D. Service layer – Amazon EBS volumes with Provisioned IOPS
- E. Service layer – Amazon EC2 Ephemeral Storage
Answer: CE
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html
NEW QUESTION 15
A Solutions Architect is redesigning an image-viewing and messaging platform to be delivered as SaaS. Currently, there is a farm of virtual desktop infrastructure (VDI) that runs a desktop image-viewing application and a desktop messaging application. Both applications use a shared database to manage user accounts and sharing. Users log in from a web portal that launches the applications and streams the view of the application on the user’s machine. The Development Operations team wants to move away from using VDI and wants to rewrite the application.
What is the MOST cost-effective architecture that offers both security and ease of management?
- A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.
- B. Run a website from Amazon EC2 Linux servers, storing the images in Amazon S3, and use Amazon Cognito for user accounts and sharing. Create AWS CloudFormation templates to launch the application by using EC2 user data to install and configure the application.
- C. Run a website as an AWS Elastic Beanstalk application, storing the images in Amazon S3, and using an Amazon RDS database for user accounts and sharing. Create AWS CloudFormation templates to launch the application and perform blue/green deployments.
- D. Run a website from an Amazon S3 bucket that authorizes Amazon AppStream to stream applications for a combined image viewer and messenger that stores images in Amazon S3. Have the website use an Amazon RDS database for user accounts and sharing.
Answer: D
Explanation:
https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-images.html
NEW QUESTION 16
A company's main intranet page has experienced degraded response times as its user base has increased, although there are no reports of users seeing error pages. The application uses Amazon DynamoDB in read-only mode.
Amazon DynamoDB latency metrics for successful requests have been in a steady state, even during times when users have reported degradation. The Development team has correlated the issue to ProvisionedThroughputExceeded exceptions in the application logs when doing Scan and read operations. The team also identified an access pattern of steady spikes of read activity on a distributed set of individual data items.
The Chief Technology Officer wants to improve the user experience.
Which solutions will meet these requirements with the LEAST amount of changes to the application? (Select TWO.)
- A. Change the data model of the DynamoDB tables to ensure that all Scan and read operations meet DynamoDB best practices of uniform data access, reaching the full request throughput provisioned for the DynamoDB tables.
- B. Enable DynamoDB auto scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization given the peak usage and how quickly the traffic changes.
- C. Provision Amazon ElastiCache for Redis with cluster mode enabled. The cluster should be provisioned with enough shards to spread the application load and provision at least one read replica node for each shard.
- D. Implement the DynamoDB Accelerator (DAX) client and provision a DAX cluster with the appropriate node types to sustain the application load. Tune the item and query cache configuration for an optimal user experience.
- E. Remove error retries and exponential backoffs in the application code to handle throttling errors.
Answer: AE
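As a reference for option B, a hedged boto3 sketch of DynamoDB auto scaling on a table's read capacity with a target-tracking policy; the table name, capacity limits, and target value are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target with upper and
# lower limits, then track 70 percent read capacity utilization.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/intranet-content",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=2000,
)
autoscaling.put_scaling_policy(
    PolicyName="intranet-content-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/intranet-content",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```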
NEW QUESTION 17
A Solutions Architect must migrate an existing on-premises web application with 70 TB of static files supporting a public open-data initiative. The architect wants to upgrade to the latest version of the host operating system as part of the migration effort.
Which is the FASTEST and MOST cost-effective way to perform the migration?
- A. Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to Amazon S3.
- B. Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static data to Amazon S3.
- C. Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3.
- D. Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance.
Answer: C
NEW QUESTION 18
A company has an application that runs a web service on Amazon EC2 instances and stores .jpg images in Amazon S3. The web traffic has a predictable baseline, but often demand spikes unpredictably for short periods of time. The application is loosely coupled and stateless. The .jpg images stored in Amazon S3 are accessed frequently for the first 15 to 20 days, they are seldom accessed thereafter but always need to be immediately available. The CIO has asked to find ways to reduce costs.
Which of the following options will reduce costs? (Choose two.)
- A. Purchase Reserved instances for baseline capacity requirements and use On-Demand instances for the demand spikes.
- B. Configure a lifecycle policy to move the .jpg images on Amazon S3 to S3 IA after 30 days.
- C. Use On-Demand instances for baseline capacity requirements and use Spot Fleet instances for the demand spikes.
- D. Configure a lifecycle policy to move the .jpg images on Amazon S3 to Amazon Glacier after 30 days.
- E. Create a script that checks the load on all web servers and terminates unnecessary On-Demand instances.
Answer: AB
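A minimal boto3 sketch of the lifecycle rule option B describes; the bucket name and prefix are placeholders. S3 Standard-IA keeps objects immediately available, whereas Glacier (option D) would not.

```python
import boto3

s3 = boto3.client("s3")

# Transition images to S3 Standard-IA after 30 days of frequent access.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-images",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "jpg-to-standard-ia",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```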
NEW QUESTION 19
A company has several teams, and each team has its own Amazon RDS database, totaling 100 TB across all teams. The company is building a data query platform for Business Intelligence Analysts to generate a weekly business report. The new system must run ad-hoc SQL queries.
What is the MOST cost-effective solution?
- A. Create a new Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon Redshift to run the query.
- B. Create an Amazon EMR cluster with enough core nodes. Run an Apache Spark job to copy data from the RDS databases to a Hadoop Distributed File System (HDFS). Use a local Apache Hive metastore to maintain the table definition. Use Spark SQL to run the query.
- C. Use an AWS Glue ETL job to copy all the RDS databases to a single Amazon Aurora PostgreSQL database. Run SQL queries on the Aurora PostgreSQL database.
- D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.
Answer: C
NEW QUESTION 20
A large company has many business units. Each business unit has multiple AWS accounts for different purposes. The CIO of the company sees that each business unit has data that would be useful to share with other parts of the company. In total, there are about 10 PB of data that needs to be shared with users in 1,000 AWS accounts. The data is proprietary, so some of it should only be available to users with specific job types. Some of the data is used for throughput-intensive workloads, such as simulations. The number of accounts changes frequently because of new initiatives, acquisitions, and divestitures.
A Solutions Architect has been asked to design a system that will allow for sharing data for use in AWS with all of the employees in the company.
Which approach will allow for secure data sharing in a scalable way?
- A. Store the data in a single Amazon S3 bucket. Create an IAM role for every combination of job type and business unit that allows appropriate read/write access based on object prefixes in the S3 bucket. The roles should have trust policies that allow the business unit’s AWS accounts to assume their roles. Use IAM in each business unit’s AWS account to prevent them from assuming roles for a different job type. Users get credentials to access the data by using AssumeRole from their business unit’s AWS account. Users can then use those credentials with an S3 client.
- B. Store the data in a single Amazon S3 bucket. Write a bucket policy that uses conditions to grant read and write access where appropriate, based on each user’s business unit and job type. Determine the business unit with the AWS account accessing the bucket and the job type with a prefix in the IAM user’s name. Users can access data by using IAM credentials from their business unit’s AWS account with an S3 client.
- C. Store the data in a series of Amazon S3 buckets. Create an application running in Amazon EC2 that is integrated with the company’s identity provider (IdP) that authenticates users and allows them to download or upload data through the application. The application uses the business unit and job type information in the IdP to control what users can upload and download through the application. The users can access the data through the application’s API.
- D. Store the data in a series of Amazon S3 buckets. Create an AWS STS token vending machine that is integrated with the company’s identity provider (IdP). When a user logs in, have the token vending machine attach an IAM policy that assumes the role that limits the user’s access and/or upload only the data the user is authorized to access. Users can get credentials by authenticating to the token vending machine’s website or API and then use those credentials with an S3 client.
Answer: B
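For reference, a hedged sketch of how a user would obtain temporary credentials with AssumeRole and then use them with an S3 client, as the role-based options describe; the role ARN, bucket name, and prefix are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Assume a role scoped to one business unit and job type.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/analytics-engineering-read",
    RoleSessionName="data-access",
)["Credentials"]

# Use the temporary credentials with a plain S3 client.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
response = s3.list_objects_v2(Bucket="shared-data", Prefix="engineering/")
for obj in response.get("Contents", []):
    print(obj["Key"])
```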
NEW QUESTION 21
A company is running a large application on-premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache Cassandra for the database. The company wants to migrate the application to AWS to improve service reliability. The IT team also wants to reduce the time it spends on capacity management and maintenance of this infrastructure. The Development team is willing and available to make code changes to support the migration.
Which design is the LEAST complex to manage after the migration?
- A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode.
- B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ configuration.
- C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB.
- D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon DynamoDB.
Answer: B
NEW QUESTION 22
A company is using an Amazon CloudFront distribution to distribute both static and dynamic content from a web application running behind an Application Load Balancer. The web application requires user authorization and session tracking for dynamic content. The CloudFront distribution has a single cache behavior configured to forward the Authorization, Host, and User-Agent HTTP whitelist headers and a session cookie to the origin. All other cache behavior settings are set to their default value.
A valid ACM certificate is applied to the CloudFront distribution with a matching CNAME in the distribution settings. The ACM certificate is also applied to the HTTPS listener for the Application Load Balancer. The CloudFront origin protocol policy is set to HTTPS only. Analysis of the cache statistics report shows that the miss rate for this distribution is very high.
What can the Solutions Architect do to improve the cache hit rate for this distribution without causing the SSL/TLS handshake between CloudFront and the Application Load Balancer to fail?
- A. Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP headers from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.
- B. Remove the User-Agent and Authorization HTTP headers from the whitelist headers section of the cache behavior. Then update the cache behavior to use presigned cookies for authorization.
- C. Remove the Host HTTP header from the whitelist headers section and remove the session cookie from the whitelist cookies section for the default cache behavior. Enable automatic object compression and use Lambda@Edge viewer request events for user authorization.
- D. Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.
Answer: D
NEW QUESTION 23
An organization has a write-intensive mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application has scaled well, however, costs have increased exponentially because of higher than anticipated Lambda costs. The application’s use is unpredictable, but there has been a steady 20% increase in utilization every month.
While monitoring the current Lambda functions, the Solutions Architect notices that the execution time averages 4.5 minutes. Most of the wait time is the result of a high-latency network call to a 3-TB MySQL database server that is on premises. A VPN is used to connect to the VPC, so the Lambda functions have been configured with a five-minute timeout.
How can the Solutions Architect reduce the cost of the current architecture?
- A. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Enable local caching in the mobile application to reduce the Lambda function invocation calls. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
- B. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Cache the API Gateway results to Amazon CloudFront. Use Amazon EC2 Reserved Instances instead of Lambda. Enable Auto Scaling on EC2, and use Spot Instances during peak times. Enable DynamoDB Auto Scaling to manage target utilization.
- C. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable DynamoDB Accelerator for frequently accessed records, and enable the DynamoDB Auto Scaling feature.
- D. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway to reduce the number of Lambda function invocations. Continue to monitor the AWS Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable Auto Scaling in DynamoDB.
Answer: D
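A hedged sketch of enabling the API Gateway stage cache that answer D relies on, using boto3 update_stage patch operations; the API ID, stage name, cache size, and TTL are placeholders, and the method-settings patch paths are assumed here.

```python
import boto3

apigw = boto3.client("apigateway")

# Turn on the stage cache and enable caching for all methods so repeated
# requests are served without invoking the backend Lambda functions.
apigw.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)
```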