AWS Certified AI Practitioner Exam
Version: Demo
[Total Questions: 10]
Web: www.certsout.com
Email: support@certsout.com
Amazon Web Services
AIF-C01
IMPORTANT NOTICE
Feedback
We have developed a quality product and state-of-the-art service to protect our customers' interests. If you have any suggestions, please feel free to contact us at feedback@certsout.com
Support
If you have any questions about our product, please provide the following items:
exam code
screenshot of the question
login id/email
then contact us at support@certsout.com and our technical experts will provide support within 24 hours.
Copyright
The product of each order has its own encryption code, so you should use it independently. Any unauthorized changes will incur legal penalties. We reserve the right of final interpretation of this statement.
Category Breakdown

Category                        Number of Questions
AWS AI/ML Services and Tools    2
Data Engineering for AI         1
AI and ML Concepts              6
Generative AI and LLMs          1
TOTAL                           10
Question #:1 - [AWS AI/ML Services and Tools]
A publishing company built a Retrieval Augmented Generation (RAG) based solution to give its users the 
ability to interact with published content. New content is published daily. The company wants to provide a 
near real-time experience to users.
Which steps in the RAG pipeline should the company implement by using offline batch processing to meet 
these requirements? (Select TWO.)
A. Generation of content embeddings
B. Generation of embeddings for user queries
C. Creation of the search index
D. Retrieval of relevant content
E. Response generation for the user
Answer: A, C
Explanation
Comprehensive and Detailed Explanation From Exact Extract:
In a RAG (Retrieval Augmented Generation) architecture, some steps can be optimized using offline batch processing, particularly operations that do not require real-time updates:
A. Generation of content embeddings: When new content is published, it can be processed in batches to generate embeddings (vector representations) offline. These embeddings are then used at query time for similarity search. As new documents come in daily, batch processing is ideal for generating embeddings for all new content together.
“Content/document embeddings are typically generated offline, as this operation can be computationally 
expensive and does not need to happen in real-time.”
(Reference: AWS GenAI RAG Blog, Amazon Bedrock RAG Pattern)
C. Creation of the search index: After generating the content embeddings, these are indexed in a vector database or search service. This indexing is also typically performed in batch as part of the offline pipeline.
“Building or updating the vector index is often performed as a batch operation, reflecting the latest state of the 
content repository.”
(Reference: AWS RAG Pattern Whitepaper)
B, D, and E are real-time steps. Embeddings for user queries (B), retrieval of relevant content (D), and 
response generation (E) must be processed in real-time to provide an interactive experience.
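To make the split concrete, here is a minimal Python sketch of the two halves under stated assumptions: the Titan embedding model ID, the toy corpus, and the in-memory NumPy "index" are illustrative placeholders, not a production design.

```python
# Offline batch half of RAG (steps A and C): embed new documents and rebuild
# the index on a schedule (e.g., nightly). Online half (B, D, E) runs per query.
import json

import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    """Return a dense embedding vector for the text (assumed Titan model)."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",  # assumption: model choice
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"],
                    dtype=np.float32)

# --- Offline batch job: embed the day's new content and build the index. ---
documents = ["Article one ...", "Article two ..."]  # placeholder corpus
index = np.stack([embed(doc) for doc in documents])
index /= np.linalg.norm(index, axis=1, keepdims=True)  # normalize for cosine

# --- Online path: embed the user query and retrieve the closest document. ---
query = embed("What did the latest article say about pricing?")
query /= np.linalg.norm(query)
print(documents[int(np.argmax(index @ query))])
```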
References:
Retrieval Augmented Generation (RAG) on AWS
Amazon Bedrock RAG Documentation
Question #:2 - [Data Engineering for AI]
A company is implementing intelligent agents to provide conversational search experiences for its customers. 
The company needs a database service that will support storage and queries of embeddings from a generative 
AI model as vectors in the database.
Which AWS service will meet these requirements?
A. Amazon Athena
B. Amazon Aurora PostgreSQL
C. Amazon Redshift
D. Amazon EMR
Answer: B
Explanation
The requirement is to identify an AWS database service that supports the storage and querying of embeddings 
(from a generative AI model) as vectors. Embeddings are typically high-dimensional numerical 
representations of data (e.g., text, images) used in AI applications like conversational search. The database 
must support vector storage and efficient vector similarity searches. Let’s evaluate each option:
A. Amazon Athena: Amazon Athena is a serverless query service for analyzing data in Amazon S3 
using SQL. It is designed for ad-hoc querying of structured data but does not natively support vector 
storage or vector similarity searches, making it unsuitable for this use case.
B. Amazon Aurora PostgreSQL: Amazon Aurora PostgreSQL is a fully managed relational database compatible with PostgreSQL. With the pgvector extension (available in PostgreSQL and supported by Aurora PostgreSQL), it can store and query vector embeddings efficiently. The pgvector extension enables vector similarity searches (e.g., using cosine similarity or Euclidean distance), which is critical for conversational search applications using embeddings from generative AI models.
C. Amazon Redshift: Amazon Redshift is a data warehousing service optimized for analytical queries 
on large datasets. While it supports machine learning features and can store numerical data, it does not 
have native support for vector embeddings or vector similarity searches as of May 17, 2025, making it 
less suitable for this use case.
D. Amazon EMR: Amazon EMR is a managed big data platform for processing large-scale data using 
frameworks like Apache Hadoop and Spark. It is not a database service and is not designed for storing 
or querying vector embeddings in the context of a conversational search application.
Exact Extract Reference: According to the AWS documentation, “Amazon Aurora PostgreSQL-Compatible Edition supports the pgvector extension, which enables efficient storage and similarity searches for vector embeddings. This makes it suitable for AI/ML workloads such as natural language processing and recommendation systems that rely on vector data.” (Source: AWS Aurora Documentation - Using pgvector with Aurora PostgreSQL, https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/PostgreSQLpgvector.html). Additionally, the pgvector extension supports operations like nearest-neighbor searches, which are essential for querying embeddings in a conversational search system.
Amazon Aurora PostgreSQL with the pgvector extension directly meets the requirement for storing and querying embeddings as vectors, making B the correct answer.
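As a hedged illustration of this pattern, the Python sketch below uses psycopg2 against an Aurora PostgreSQL endpoint: it enables pgvector, stores an embedding, and runs a nearest-neighbor query. The connection details and the toy 3-dimensional vectors are placeholders (real embedding models emit hundreds or thousands of dimensions).

```python
# Sketch: vector storage and similarity search with pgvector on Aurora
# PostgreSQL. Host, credentials, and the vector(3) dimension are placeholders.
import psycopg2

conn = psycopg2.connect(host="my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
                        dbname="app", user="admin", password="example")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""CREATE TABLE IF NOT EXISTS documents (
                   id bigserial PRIMARY KEY,
                   content text,
                   embedding vector(3)  -- toy dimension for the sketch
               );""")

def to_literal(vec):
    """Format a Python list as a pgvector input literal, e.g. '[1,2,3]'."""
    return "[" + ",".join(str(x) for x in vec) + "]"

cur.execute("INSERT INTO documents (content, embedding) VALUES (%s, %s)",
            ("sample passage", to_literal([0.12, -0.03, 0.88])))

# <=> is cosine distance; <-> is Euclidean (L2) distance.
cur.execute("SELECT content FROM documents ORDER BY embedding <=> %s LIMIT 5",
            (to_literal([0.10, 0.00, 0.90]),))
print(cur.fetchall())
conn.commit()
```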
References:
AWS Aurora Documentation: Using pgvector with Aurora PostgreSQL (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/PostgreSQLpgvector.html)
AWS AI Practitioner Study Guide (focus on data engineering for AI, including vector databases)
AWS Blog on Vector Search with Aurora (https://aws.amazon.com/blogs/database/using-vector-search-with-amazon-aurora-postgresql/)
Question #:3 - [AI and ML Concepts]
A company wants to develop a large language model (LLM) application by using Amazon Bedrock and 
customer data that is uploaded to Amazon S3. The company's security policy states that each team can access 
data for only the team's own customers.
Which solution will meet these requirements?
A. Create an Amazon Bedrock custom service role for each team that has access to only the team's customer data.
B. Create a custom service role that has Amazon S3 access. Ask teams to specify the customer name on each Amazon Bedrock request.
C. Redact personal data in Amazon S3. Update the S3 bucket policy to allow team access to customer data.
D. Create one Amazon Bedrock role that has full Amazon S3 access. Create IAM roles for each team that have access to only each team's customer folders.
Answer: A
Explanation
To comply with the company's security policy, which restricts each team to accessing data for only its own customers, creating an Amazon Bedrock custom service role for each team is the correct solution.
Custom Service Role Per Team:
A custom service role for each team ensures that the access control is granular, allowing only specific teams 
to access their own customer data in Amazon S3.
This setup aligns with the principle of least privilege, ensuring teams can only interact with data they are 
authorized to access.
Why Option A is Correct:
Access Control: Allows precise access permissions for each team's data.
Security Compliance: Directly meets the company's security policy requirements by ensuring data segregation.
Why Other Options are Incorrect:
B. Custom service role with customer name specification: This approach is impractical as it relies on manual 
input, which is prone to errors and does not inherently enforce data access controls.
C. Redacting personal data and updating S3 bucket policy: Redaction does not solve the requirement for team-
specific access, and updating bucket policies is less granular than creating roles.
D. One Bedrock role with full S3 access and IAM roles for teams: This setup does not meet the least privilege 
principle, as having a single role with full access is contrary to the company's security policy.
Thus, A is the correct answer to meet the company's security requirements.
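For illustration only (the exam tests the concept, not the commands), a per-team role could be provisioned along these lines with boto3; the role name, bucket, and per-team prefix convention are assumptions.

```python
# Sketch: create a Bedrock service role for one team, scoped to that team's
# S3 prefix. Names and the prefix layout are hypothetical.
import json

import boto3

iam = boto3.client("iam")
team = "team-alpha"

trust_policy = {  # lets Amazon Bedrock assume the role
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "bedrock.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName=f"BedrockRole-{team}",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

s3_policy = {  # least privilege: only this team's customer-data prefix
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["s3:GetObject", "s3:ListBucket"],
                   "Resource": ["arn:aws:s3:::customer-data-bucket",
                                f"arn:aws:s3:::customer-data-bucket/{team}/*"]}],
}
iam.put_role_policy(RoleName=f"BedrockRole-{team}",
                    PolicyName="TeamScopedS3Access",
                    PolicyDocument=json.dumps(s3_policy))
```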
Question #:4 - [AI and ML Concepts]
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The 
company wants to know how much information can fit into one prompt.
Which consideration will inform the company's decision?
A. Temperature
B. Context window
C. Batch size
D. Model size
Answer: B
Explanation
The context window determines how much information can fit into a single prompt when using a large 
language model (LLM) like those on Amazon Bedrock.
Context Window:
The context window is the maximum amount of text (measured in tokens) that a language model can process 
in a single pass.
For LLM applications, the size of the context window limits how much input data, such as text for sentiment 
analysis, can be fed into the model at once.
Why Option B is Correct:
Determines Prompt Size: The context window size directly informs how much information (e.g., words or 
sentences) can fit in one prompt.
Model Capacity: The larger the context window, the more information the model can consider for generating 
outputs.
Why Other Options are Incorrect:
A. Temperature: Controls randomness in model outputs but does not affect the prompt size.
C. Batch size: Refers to the number of training samples processed in one iteration, not the amount of 
information in a prompt.
D. Model size: Refers to the number of parameters in the model, not the input size for a single prompt.
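As a rough illustration of why the context window drives this decision, the sketch below estimates whether a batched sentiment prompt fits. The 4-characters-per-token heuristic and the 8,000-token window are assumptions; real limits depend on the specific model and tokenizer.

```python
# Sketch: budget-check a prompt against an assumed context window.
def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic for English text

CONTEXT_WINDOW = 8_000    # assumed model limit, in tokens
MAX_OUTPUT_TOKENS = 500   # reserve room for the model's completion

reviews = ["Great product!", "Shipping was slow."] * 1000  # placeholder data
prompt = "Classify the sentiment of each review:\n" + "\n".join(reviews)

if estimate_tokens(prompt) > CONTEXT_WINDOW - MAX_OUTPUT_TOKENS:
    print("Prompt exceeds the context budget: split reviews across requests.")
```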
Question #:5 - [AI and ML Concepts]
A company wants to build an ML model by using Amazon SageMaker. The company needs to share and 
manage variables for model development across multiple teams.
Which SageMaker feature meets these requirements?
A. Amazon SageMaker Feature Store
B. Amazon SageMaker Data Wrangler
C. Amazon SageMaker Clarify
D. Amazon SageMaker Model Cards
Answer: A
Explanation
Amazon SageMaker Feature Store is the correct solution for sharing and managing variables (features) across 
multiple teams during model development.
Amazon SageMaker Feature Store:
A fully managed repository for storing, sharing, and managing machine learning features across different 
teams and models.
It enables collaboration and reuse of features, ensuring consistent data usage and reducing redundancy.
Why Option A is Correct:
Centralized Feature Management: Provides a central repository for managing features, making it easier to 
share them across teams.
Collaboration and Reusability: Improves efficiency by allowing teams to reuse existing features instead of 
creating them from scratch.
Why Other Options are Incorrect:
B. SageMaker Data Wrangler: Helps with data preparation and analysis but does not provide a centralized 
feature store.
C. SageMaker Clarify: Used for bias detection and explainability, not for managing variables across teams.
D. SageMaker Model Cards: Provide model documentation, not feature management.
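A minimal boto3 sketch of this workflow follows: one team defines a shared feature group, and any team can then write (and later read) records. The group name, feature definitions, S3 URI, and role ARN are placeholders.

```python
# Sketch: define a shared SageMaker feature group and write one record.
import time

import boto3

sm = boto3.client("sagemaker")
sm.create_feature_group(
    FeatureGroupName="customer-churn-features",        # placeholder name
    RecordIdentifierFeatureName="customer_id",
    EventTimeFeatureName="event_time",
    FeatureDefinitions=[
        {"FeatureName": "customer_id", "FeatureType": "String"},
        {"FeatureName": "event_time", "FeatureType": "String"},
        {"FeatureName": "monthly_spend", "FeatureType": "Fractional"},
    ],
    OnlineStoreConfig={"EnableOnlineStore": True},
    OfflineStoreConfig={"S3StorageConfig": {"S3Uri": "s3://my-bucket/features"}},
    RoleArn="arn:aws:iam::123456789012:role/FeatureStoreRole",  # placeholder
)

runtime = boto3.client("sagemaker-featurestore-runtime")
runtime.put_record(
    FeatureGroupName="customer-churn-features",
    Record=[
        {"FeatureName": "customer_id", "ValueAsString": "C-1001"},
        {"FeatureName": "event_time", "ValueAsString": str(time.time())},
        {"FeatureName": "monthly_spend", "ValueAsString": "42.50"},
    ],
)
```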
Question #:6 - [AI and ML Concepts]
A company uses a foundation model (FM) from Amazon Bedrock for an AI search tool. The company wants 
to fine-tune the model to be more accurate by using the company's data.
Which strategy will successfully fine-tune the model?
A. Provide labeled data with the prompt field and the completion field.
B. Prepare the training dataset by creating a .txt file that contains multiple lines in .csv format.
C. Purchase Provisioned Throughput for Amazon Bedrock.
D. Train the model on journals and textbooks.
Answer: A
Explanation
Providing labeled data with both a prompt field and a completion field is the correct strategy for fine-tuning a 
foundation model (FM) on Amazon Bedrock.
Fine-Tuning Strategy:
To fine-tune a model, labeled data that pairs input prompts with the correct outputs (completions) is necessary.
This allows the model to learn the desired behavior or response style based on the provided examples.
Why Option A is Correct:
Proper Training Format: The prompt-completion pairs provide the necessary format for training the model to 
produce accurate outputs.
Customization: Ensures that the model is fine-tuned to the specific requirements of the company’s data and 
desired outputs.
Why Other Options are Incorrect:
B. Prepare a .txt file in .csv format: This does not align with the required format; Bedrock fine-tuning expects labeled prompt-and-completion records (JSON Lines), not a generic .txt/.csv file.
C. Purchase Provisioned Throughput: In Amazon Bedrock, Provisioned Throughput reserves model inference capacity (it is needed to serve a customized model), but purchasing it is not itself a fine-tuning strategy.
D. Train on journals and textbooks: Lacks the specific format and labeling required for fine-tuning.
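To show what option A looks like in practice, the sketch below writes labeled examples as JSON Lines records with prompt and completion fields, the shape Bedrock model-customization jobs expect for text models; the example records and file path are placeholders.

```python
# Sketch: prepare prompt/completion training data as JSON Lines.
import json

examples = [  # invented records for illustration
    {"prompt": "Summarize: quarterly revenue rose 8 percent ...",
     "completion": "Revenue grew 8 percent quarter over quarter."},
    {"prompt": "Summarize: support tickets doubled after launch ...",
     "completion": "Ticket volume doubled post-launch."},
]

with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
# Upload train.jsonl to S3 and reference it when creating the
# Bedrock model-customization (fine-tuning) job.
```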
Question #:7 - [AI and ML Concepts]
A company is developing an ML model to predict customer churn.
Which evaluation metric will assess the model's performance on a binary classification task such as predicting churn?
A. F1 score
B. Mean squared error (MSE)
C. R-squared
D. Time used to train the model
Answer: A
Explanation
The company is developing an ML model to predict customer churn, a binary classification task (churn or no 
churn). The F1 score is an evaluation metric that balances precision and recall, making it suitable for 
assessing the performance of binary classification models, especially when dealing with imbalanced datasets, 
which is common in churn prediction.
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"The F1 score is a metric for evaluating binary classification models, combining precision and recall into a 
single value. It is particularly useful for tasks like churn prediction, where class imbalance may exist, ensuring 
the model performs well on both positive and negative classes."
(Source: Amazon SageMaker Developer Guide, Model Evaluation Metrics)
Detailed Explanation:
Option A: F1 score. This is the correct answer. The F1 score is ideal for binary classification tasks like churn prediction, as it measures the model's ability to correctly identify both churners and non-churners.
Option B: Mean squared error (MSE). MSE is used for regression tasks to measure the average squared difference between predicted and actual values, not for binary classification.
Option C: R-squared. R-squared is a metric for regression models, indicating how well the model explains the variability of the target variable. It is not applicable to classification tasks.
Option D: Time used to train the model. Training time is not an evaluation metric for model performance; it measures the duration of training, not the model's accuracy or effectiveness.
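A small worked example with scikit-learn shows how the F1 score combines precision and recall on an invented churn dataset:

```python
# Sketch: F1 on toy churn labels (1 = churned). Data is invented.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]

# TP=3, FP=1, FN=1 -> precision = recall = 3/4, so F1 = 0.75.
print(precision_score(y_true, y_pred),  # 0.75
      recall_score(y_true, y_pred),     # 0.75
      f1_score(y_true, y_pred))         # 0.75
```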
References:
Amazon SageMaker Developer Guide: Model Evaluation Metrics (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS AI Practitioner Learning Path: Module on Model Performance and Evaluation
AWS Documentation: Metrics for Classification (https://aws.amazon.com/machine-learning/)
Question #:8 - [Generative AI and LLMs]
A financial company is using ML to help with some of the company's tasks.
Which option is a use of generative AI models?
A. Summarizing customer complaints
B. Classifying customers based on product usage
C. Segmenting customers based on type of investments
D. Forecasting revenue for certain products
Answer: A
Explanation
Generative AI models (such as large language models) are designed to generate new content, such as text, summaries, images, and more. Summarizing text, like customer complaints, is a classic application of generative AI.
A is correct:
"Text summarization is a core generative AI use case, as it involves generating new, concise content from a 
larger body of text."
(Reference: AWS Generative AI Use Cases)
B and C are standard ML classification/segmentation tasks.
D is a regression/prediction task, not generative.
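As a hedged sketch of option A in code, the snippet below invokes a Bedrock foundation model to summarize a complaint; the Claude model ID and the Messages request shape are assumptions that vary across models and versions.

```python
# Sketch: one-sentence complaint summary via Amazon Bedrock.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
complaint = ("I was charged twice for my April statement and support "
             "has not responded in five days.")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 100,
        "messages": [{"role": "user",
                      "content": "Summarize this complaint in one sentence:\n"
                                 + complaint}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```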
Question #:9 - [AI and ML Concepts]
A company wants to deploy a conversational chatbot to answer customer questions. The chatbot is based on a 
fine-tuned Amazon SageMaker JumpStart model. The application must comply with multiple regulatory 
frameworks.
Which capabilities can the company show compliance for? (Select TWO.)
A. Auto scaling inference endpoints
B. Threat detection
C. Data protection
D. Cost optimization
E. Loosely coupled microservices
Answer: B, C
Explanation
To comply with multiple regulatory frameworks, the company must ensure data protection and threat 
detection. Data protection involves safeguarding sensitive customer information, while threat detection 
identifies and mitigates security threats to the application.
Option B (Correct): "Threat detection": Detecting and mitigating threats is essential to maintaining the security posture required for regulatory compliance.
Option C (Correct): "Data protection": Data protection is critical for compliance with privacy and security regulations.
Option A: "Auto scaling inference endpoints" is incorrect because auto scaling does not directly relate to regulatory compliance.
Option D: "Cost optimization" is incorrect because it is focused on managing expenses, not compliance.
Option E: "Loosely coupled microservices" is incorrect because this architectural approach does not directly 
address compliance requirements.
AWS AI Practitioner References:
AWS Compliance Capabilities: AWS offers services and tools, such as data protection and threat detection, to 
help companies meet regulatory requirements for security and privacy.
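The exam point is conceptual, but for illustration the two capabilities map to concrete controls that could be enabled roughly as follows (the bucket name is a placeholder, and a real compliance program involves far more than two API calls):

```python
# Sketch: enable threat detection (GuardDuty) and default data-at-rest
# encryption on an S3 bucket (data protection).
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="chatbot-customer-data",  # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault":
                   {"SSEAlgorithm": "aws:kms"}}]},
)
print(f"GuardDuty detector {detector_id} enabled; default encryption set.")
```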
Question #:10 - [AWS AI/ML Services and Tools]
A company wants to create an application to summarize meetings by using meeting audio recordings.
Select and order the correct steps from the following list to create the application. Each step should be 
selected one time or not at all. (Select and order THREE.)
• Convert meeting audio recordings to meeting text files by using Amazon Polly.
• Convert meeting audio recordings to meeting text files by using Amazon Transcribe.
• Store meeting audio recordings in an Amazon S3 bucket.
• Store meeting audio recordings in an Amazon Elastic Block Store (Amazon EBS) volume.
• Summarize meeting text files by using Amazon Bedrock.
• Summarize meeting text files by using Amazon Lex.
Answer: Store the recordings in Amazon S3, transcribe them with Amazon Transcribe, then summarize with Amazon Bedrock (ordered steps below).
Explanation
Step 1: Store meeting audio recordings in an Amazon S3 bucket.
Step 2: Convert meeting audio recordings to meeting text files by using Amazon Transcribe.
Step 3: Summarize meeting text files by using Amazon Bedrock.
The company wants to create an application to summarize meeting audio recordings, which requires a 
sequence of steps involving storage, speech-to-text conversion, and text summarization. Amazon S3 is the 
recommended storage service for audio files, Amazon Transcribe converts audio to text, and Amazon 
Bedrock provides generative AI capabilities for summarization. These three steps, in this order, create an 
efficient workflow for the application.
Exact Extract from AWS AI Documents:
From the Amazon Transcribe Developer Guide:
"Amazon Transcribe uses deep learning to convert audio files into text, supporting applications such as 
meeting transcription. Audio files can be stored in Amazon S3, and Transcribe can process them directly from 
an S3 bucket."
From the AWS Bedrock User Guide:
"Amazon Bedrock provides foundation models that can perform text summarization, enabling developers to 
build applications that generate concise summaries from text data, such as meeting transcripts."
(Source: Amazon Transcribe Developer Guide, Introduction to Amazon Transcribe; AWS Bedrock User 
Guide, Text Generation and Summarization)
Detailed Explanation:
Step 1: Store meeting audio recordings in an Amazon S3 bucket. Amazon S3 is the standard storage service for audio files in AWS workflows, especially for integration with services like Amazon Transcribe. Storing the recordings in S3 allows Transcribe to access and process them efficiently. This is the first logical step.
Step 2: Convert meeting audio recordings to meeting text files by using Amazon Transcribe. Amazon Transcribe is designed for automatic speech recognition (ASR), converting audio files (stored in S3) into text. This step is necessary to transform the meeting recordings into a format that can be summarized.
Step 3: Summarize meeting text files by using Amazon Bedrock. Amazon Bedrock provides foundation models capable of generative AI tasks like text summarization. Once the audio is converted to text, Bedrock can summarize the meeting transcripts, completing the application's requirements.
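Under stated assumptions (placeholder bucket, job name, and model ID), the three steps could be wired together with boto3 roughly as follows; downloading the transcript JSON from the job's TranscriptFileUri is elided.

```python
# Sketch: S3 -> Transcribe -> Bedrock meeting-summarization pipeline.
import json
import time

import boto3

# Step 1: store the recording in S3 (upload shown for completeness).
boto3.client("s3").upload_file("standup.mp3", "meeting-recordings", "standup.mp3")

# Step 2: convert audio to text with Amazon Transcribe.
transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="standup-job",
    Media={"MediaFileUri": "s3://meeting-recordings/standup.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)
while True:
    status = transcribe.get_transcription_job(
        TranscriptionJobName="standup-job"
    )["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

# Step 3: summarize the transcript with a Bedrock foundation model.
transcript_text = "..."  # fetch from the job's TranscriptFileUri (elided)
bedrock = boto3.client("bedrock-runtime")
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{"role": "user",
                      "content": "Summarize this meeting:\n" + transcript_text}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```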
Unused Options Analysis:
Convert meeting audio recordings to meeting text files by using Amazon Polly. Amazon Polly is a text-to-speech service, not a service for converting audio to text. This option is incorrect and not used.
Store meeting audio recordings in an Amazon Elastic Block Store (Amazon EBS) volume. Amazon EBS is block storage, typically attached to compute instances, not for storing files for processing by services like Transcribe. S3 is the better choice, so this option is not used.
Summarize meeting text files by using Amazon Lex. Amazon Lex is for building conversational interfaces (chatbots), not for text summarization. Bedrock is the appropriate service for summarization, so this option is not used.
Hotspot Selection Analysis:
The task requires selecting and ordering three steps from the list, with each step used exactly once or not at 
all. The selected steps—storing in S3, converting with Transcribe, and summarizing with Bedrock—form a 
complete and logical workflow for the application.
References:
Amazon Transcribe Developer Guide: Introduction to Amazon Transcribe (https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html)
AWS Bedrock User Guide: Text Generation and Summarization (https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
AWS AI Practitioner Learning Path: Module on Speech-to-Text and Generative AI
Amazon S3 User Guide: Storing Data for Processing (https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)
About certsout.com
certsout.com was founded in 2007. We provide the latest, high-quality IT / Business certification training exam questions, study guides, and practice tests.
We help you pass any IT / Business certification exam with a 100% pass guarantee or a full refund, including Cisco, CompTIA, Citrix, EMC, HP, Oracle, VMware, Juniper, Check Point, LPI, Nortel, EXIN, and more.
View the list of all certification exams: https://www.certsout.com/vendors.html
We prepare state-of-the art practice tests for certification exams. You can reach us at any of the email addresses 
listed below.
Sales: sales@certsout.com
Feedback: feedback@certsout.com
Support: support@certsout.com
For any problems with IT certification or our products, you can write to us and we will get back to you within 24 hours.