Text Material Preview
Itfreedumps provides the latest online questions for all IT certifications,
such as IBM, Microsoft, CompTIA, Huawei, and so on.
Hot exams are available below.
AZ-204 Developing Solutions for Microsoft Azure
820-605 Cisco Customer Success Manager
MS-203 Microsoft 365 Messaging
HPE2-T37 Using HPE OneView
300-415 Implementing Cisco SD-WAN Solutions (ENSDWI)
DP-203 Data Engineering on Microsoft Azure
500-220 Engineering Cisco Meraki Solutions v1.0
NACE-CIP1-001 Coating Inspector Level 1
NACE-CIP2-001 Coating Inspector Level 2
200-301 Implementing and Administering Cisco Solutions
Below are some sample ARA-C01 exam questions.
1. With default settings, how long can a query run on Snowflake before it is cancelled?
A. Snowflake will cancel the query if it runs more than 48 hours
B. Snowflake will cancel the query if it runs more than 24 hours
C. Snowflake will cancel the query if the warehouse runs out of memory
D. Snowflake will cancel the query if the warehouse runs out of memory and hard disk storage
Answer: A
Explanation:
STATEMENT_TIMEOUT_IN_SECONDS
This parameter controls how long a SQL statement can run before the system cancels it. The default value is 172,800 seconds (48 hours).
This is both a session and object type parameter. As a session type, it can be applied to the account,
a user or a session. As an object type, it can be applied to warehouses. If set at both levels, the
lowest value is used.
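As a sketch, the parameter can be set at either level (the warehouse name and timeout value below are hypothetical):

```sql
-- Cap statements at 1 hour (3600 s) for the current session (hypothetical value).
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;

-- Cap statements running on a specific warehouse (MY_WH is a hypothetical name).
ALTER WAREHOUSE MY_WH SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;
-- If both are set, Snowflake enforces the lower of the two values.
```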
2. What is the recommended data file size for Snowpipe continuous loading?
A. 1 GB Compressed
B. If a file takes more than a minute to load, split it into smaller files
C. Same as of Bulk Loading (10 MB - 100 MB compressed)
D. Same as of Bulk Loading (10 MB - 100 MB uncompressed)
Answer: B,C
Explanation:
Snowpipe is designed to load new data, typically within a minute after a file notification is sent. Follow the bulk-loading best practice for file sizes (10 MB - 100 MB compressed), but split a file into smaller files if it takes more than a minute to load.
3. RECORD_METADATA. This contains metadata about the message, for example, the topic from
which the message was read.
4. By default, the system cancels a long-running query after a specific duration. Select the correct duration.
A. 2 Days
B. 12 Hours
C. never
D. 7 days
Answer: A
Explanation:
This is handled by the STATEMENT_TIMEOUT_IN_SECONDS parameter. The default is 172,800 seconds (i.e., 2 days).
5.If you are defining a multi-column clustering key of a table, the order in which the columns are
specified in the CLUSTER BY clause is important. As a general rule, Snowflake recommends:
A. Order doesn't matter
B. Ordering the columns from lowest cardinality to highest cardinality
C. Ordering the columns from highest cardinality to lowest cardinality
Answer: B
Explanation:
As a general rule, Snowflake recommends ordering the columns from lowest cardinality to highest
cardinality. Putting a higher cardinality column before a lower cardinality column will generally reduce
the effectiveness of clustering on the latter column.
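This recommendation can be sketched as follows (table and column names are hypothetical):

```sql
-- REGION has few distinct values (low cardinality) and CUSTOMER_ID has many
-- (high cardinality), so REGION is listed first in the clustering key.
CREATE TABLE sales (region STRING, customer_id NUMBER, amount NUMBER)
  CLUSTER BY (region, customer_id);
```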
6.Which of the following Snowflake Editions encrypt all data transmitted over the network within a
Virtual Private Cloud (VPC)?
A. Standard
B. Enterprise
C. Business Critical
Answer: C
Explanation:
The Business Critical edition has many additional features, such as Tri-Secret Secure using a customer-managed key, and support for AWS and Azure Private Link.
7. Can a database overlap across two Snowflake accounts?
A. Yes
B. No
Answer: B
Explanation:
A database cannot overlap across two Snowflake accounts.
8.If you find a data-related tool that is not listed as part of the Snowflake ecosystem, what industry
standard options could you check for as a way to easily connect to Snowflake? (Select 2)
A. Check to see if the tool can connect to other solutions via JDBC
B. Check to see if the tool can connect to other solutions via ODBC
C. Check to see if there is a petition in the community to create a driver
D. Check to see if you can develop a driver and put it on GitHub
Answer: A,B
Explanation:
ODBC (Open Database Connectivity) and JDBC (Java Database Connectivity) are the industry standard options for connecting to Snowflake easily.
9. You ran the following query:
SELECT * FROM inventory WHERE BIBNUMBER = 2805127;
The query profile is shown below. If you would like to tune the query further, what is the best thing to do?
A. Execute the below query to enable auto clustering
B. alter table inventory cluster by (BIBNUMBER);
C. Create an index on column BIBNUMBER
D. Divide the table into multiple smaller tables
Answer: B
Explanation:
This is another question that tests your working experience. What do you see here? The query filters on a single, highly selective value, and Snowflake has no traditional indexes; defining a clustering key on BIBNUMBER lets Snowflake prune micro-partitions so that far less data is scanned.
10.An Architect needs to allow a user to create a database from an inbound share.
To meet this requirement, the user’s role must have which privileges? (Choose two.)
A. IMPORT SHARE;
B. IMPORT PRIVILEGES;
C. CREATE DATABASE;
D. CREATE SHARE;
E. IMPORT DATABASE;
Answer: B,C
11.USERADMIN and Security administrators (i.e. users with the SECURITYADMIN role) or higher
can create roles.
A. TRUE
B. FALSE
Answer: A
Explanation:
ACCOUNTADMIN
(aka Account Administrator)
Role that encapsulates the SYSADMIN and SECURITYADMIN system-defined roles. It is the top-
level role in the system and should be granted only to a limited/controlled number of users in your
account.
SECURITYADMIN
(aka Security Administrator)
Role that can manage any object grant globally, as well as create, monitor, and manage users and
roles.
More specifically, this role:
Is granted the MANAGE GRANTS security privilege to be able to modify any grant, including revoking
it.
Inherits the privileges of the USERADMIN role via the system role hierarchy (e.g. USERADMIN role is
granted to SECURITYADMIN).
USERADMIN
(aka User and Role Administrator)
Role that is dedicated to user and role management only. More specifically, this role:
Is granted the CREATE USER and CREATE ROLE security privileges.
Can create and manage users and roles in the account (assuming that ownership of those roles or
users
has not been transferred to another role).
SYSADMIN
(aka System Administrator)
Role that has privileges to create warehouses and databases (and other objects) in an account.
If, as recommended, you create a role hierarchy that ultimately assigns all custom roles to the
SYSADMIN role, this role also has the ability to grant privileges on warehouses, databases, and other
objects to other roles.
PUBLIC
Pseudo-role that is automatically granted to every user and every role in your account. The PUBLIC
role can own securable objects, just like any other role; however, the objects owned by the role are,
by definition, available to every other user and role in your account.
This role is typically used in cases where explicit access control is not needed and all users are
viewed as equal with regard to their access rights.
12.Which are the correct statements about STREAMS?
A. it can not be used with TASKS
B. STREAMS are used to schedule SQL execution
C. It is used for Change Data Capture (CDC)
D. Streams is used to identify and act on changed table records
Answer: C,D
Explanation:
Tasks are used to schedule SQL execution. A stream records data manipulation language (DML) changes made to a table, including information about inserts, updates, and deletes. A stream can be combined with tasks to design valuable solutions.
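A minimal sketch of the stream-plus-task pattern (all object names are hypothetical):

```sql
-- A stream captures DML changes on a source table.
CREATE STREAM orders_stream ON TABLE orders;

-- A task periodically consumes the stream, but only when it has data.
CREATE TASK process_orders_task
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO orders_history SELECT * FROM orders_stream;
```

Selecting from the stream inside the task advances the stream's offset, so each change is processed once.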
13. Which privilege grants the ability to set a Column-level Security masking policy on a table or view column?
A. APPLY MASKING POLICY
B. APPLY MASK
C. ALTER COLUMN MASK
Answer: A
Explanation:
APPLY MASKING POLICY
Global
Grants ability to set a Column-level Security masking policy on a table or view column.
https://docs.snowflake.com/en/user-guide/security-access-control-privileges.html#all-privileges-alphabetical
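A sketch of how this privilege is used in practice (role, table, and policy names are hypothetical):

```sql
-- Grant the global privilege to a custom role.
GRANT APPLY MASKING POLICY ON ACCOUNT TO ROLE masking_admin;

-- That role can then attach a masking policy to a column
-- (ssn_mask is a hypothetical, previously created policy).
ALTER TABLE employees MODIFY COLUMN ssn SET MASKING POLICY ssn_mask;
```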
14. You are running a large join on Snowflake. You ran it on a medium warehouse and it took almost an hour. You then tried the join on a large warehouse, but performance still did not improve.
What is the most likely cause?
A. There may be skew in your data, where one value of the join column occurs significantly more often than the rest of the values in the column
B. Your warehouses do not have enough memory
C. Since you have configured a warehouse with a low auto-suspend value, your warehouse is suspending frequently
Answer: A
Explanation:
In the Snowflake Advanced Architect exam, around 40% of the questions are based on work experience, and this is one such question; you need a very good hold on the concepts of Snowflake. So, what may be happening here? When the data is heavily skewed on the join key, one parallel worker processes far more rows than the others, so adding more compute does not improve the overall run time.
15.When unloading data into multiple files, use the MAX_FILE_SIZE copy option to specify the
maximum size of each file created. (TRUE / FALSE)
A. FALSE
B. TRUE
Answer: B
Explanation:
True. When unloading data into multiple files, use the MAX_FILE_SIZE copy option to specify the maximum size of each file created. If you want to unload into just one file, you can set SINGLE = TRUE.
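Hypothetical unload examples illustrating both options (stage, path, and table names are assumptions):

```sql
-- Split the unloaded output into files of at most ~50 MB each.
COPY INTO @my_stage/unload/ FROM my_table MAX_FILE_SIZE = 52428800;

-- Force the entire result into a single named file.
COPY INTO @my_stage/unload/data.csv FROM my_table SINGLE = TRUE;
```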
16.The following DDL command was used to create a task based on a stream:
Assuming MY_WH is set to auto_suspend = 60 and is used exclusively for this task, which statement is true?
A. The warehouse MY_WH will be made active every five minutes to check the stream.
B. The warehouse MY_WH will only be active when there are results in the stream.
C. The warehouse MY_WH will never suspend.
D. The warehouse MY_WH will automatically resize to accommodate the size of the stream.
Answer: A
17. How will you drop a clustering key?
A. ALTER TABLE <name> DROP CLUSTERING KEY
B. ALTER TABLE <name> DELETE CLUSTERING KEY
C. ALTER TABLE <name> REMOVE CLUSTERING KEY
Answer: A
Explanation:
https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html#dropping-the-clustering-keys-for-a-table
18.Which cache type runs on a 24 hour "clock"?
A. Results Cache
B. Warehouse Cache
C. Metadata Cache
Answer: A
Explanation:
Each time the persisted result for a query is reused, Snowflake resets the 24-hour retention period for
the result, up to a maximum of 31 days from the date and time that the query was first executed. After
31 days, the result is purged and the next time the query is submitted, a new result is generated and
persisted.
19. The table is frequently queried on columns other than the primary cluster key.
https://docs.snowflake.com/en/user-guide/search-optimization-service.html#queries-that-benefit-from-search-optimization
20.Schema owner can grant object privileges in a regular schema
A. TRUE
B. FALSE
Answer: B
Explanation:
Another important topic to remember. Please also go through the difference between a managed schema and a regular schema.
https://docs.snowflake.com/en/user-guide/security-access-control-configure.html#creating-managed-access-schemas
21.An Architect needs to grant a group of ORDER_ADMIN users the ability to clean old data in an
ORDERS table (deleting all records older than 5 years), without granting any privileges on the table.
The group’s manager (ORDER_MANAGER) has full DELETE privileges on the table.
How can the ORDER_ADMIN role be enabled to perform this data cleanup, without needing the
DELETE privilege held by the ORDER_MANAGER role?
A. Create a stored procedure that runs with caller’s rights, including the appropriate "> 5 years"
business logic, and grant USAGE on this procedure to ORDER_ADMIN. The ORDER_MANAGER
role owns the procedure.
B. Create a stored procedure that can be run using both caller’s and owner’s rights (allowing the
user to specify which rights are used during execution), and grant USAGE on this procedure to
ORDER_ADMIN. The ORDER_MANAGER role owns the procedure.
C. Create a stored procedure that runs with owner’s rights, including the appropriate "> 5 years"
business logic, and grant USAGE on this procedure to ORDER_ADMIN. The ORDER_MANAGER
role owns the procedure.
D. This scenario would actually not be possible in Snowflake, because any user performing a DELETE on a table requires the DELETE privilege to be granted to the role they are using.
Answer: D
22.How do you refresh a materialized view?
A. ALTER VIEW <MV_NAME> REFRESH
B. REFRESH MATERIALIZED VIEW <MV_NAME>
C. Materialized views are automatically refreshed by Snowflake and do not require manual intervention
Answer: C
Explanation:
Materialized views are automatically and transparently maintained by Snowflake. A background
service updates the materialized view after changes are made to the base table. This is more efficient
and less error-prone than manually maintaining the equivalent of a materialized view at the
application level.
https://docs.snowflake.com/en/user-guide/views-materialized.html#when-to-use-materialized-views
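A minimal sketch (table and view names are hypothetical): the view is created once, and Snowflake's background service keeps it in sync with the base table automatically.

```sql
CREATE MATERIALIZED VIEW daily_totals AS
  SELECT order_date, SUM(amount) AS total
  FROM orders
  GROUP BY order_date;
-- No REFRESH statement is needed; maintenance is transparent.
```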
23.What are purposes for creating a storage integration? (Choose three.)
A. Control access to Snowflake data using a master encryption key that is maintained in the cloud
provider’s key management service.
B. Store a generated identity and access management (IAM) entity for an external cloud provider
regardless of the cloud provider that hosts the Snowflake account.
C. Support multiple external stages using one single Snowflake object.
D. Avoid supplying credentials when creating a stage or when loading or unloading data.
E. Create private VPC endpoints that allow direct, secure connectivity between VPCs without
traversing the public internet.
F. Manage credentials from multiple cloud providers in one single Snowflake object.
Answer: B,C,D
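A minimal sketch of these purposes (the ARN, bucket paths, and object names are hypothetical): one integration stores the cloud IAM identity, and stages reference it without embedding credentials.

```sql
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket/path1/', 's3://my-bucket/path2/');

-- Multiple stages can reuse the one integration; no credentials supplied here.
CREATE STAGE my_stage
  URL = 's3://my-bucket/path1/'
  STORAGE_INTEGRATION = s3_int;
```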
24.Which command below will only copy the table structure from the existing table to the new table?
A. CREATE TABLE … AS SELECT
B. CREATE TABLE … LIKE
C. CREATE TABLE … CLONE
Answer: B
Explanation:
CREATE TABLE … LIKE
Creates a new table with the same column definitions as an existing table, but without copying data
from the existing table. Column names, types, defaults, and constraints are copied to the new table:
CREATE [ OR REPLACE ] TABLE <table_name> LIKE <source_table> [ CLUSTER BY ( <expr> [ ,
<expr> , ... ] ) ]
[ COPY GRANTS ]
[ ... ]
https://docs.snowflake.com/en/sql-reference/sql/create-table.html#create-table
25. Data loading transformation, as part of copying data to a table from a stage, supports selecting data from user stages and named stages (internal and external) only
A. TRUE
B. FALSE
Answer: A
Explanation:
The SELECT statement used for transformations does not support all functions. For a complete list of
the supported functions and more details about data loading transformations, including examples, see
the usage notes in Transforming Data During a Load.
Also, data loading transformation only supports selecting data from user stages and named stages
(internal or external).
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#transformation-parameters
26. Which command will help you get the list of pipes for which you have access privileges?
A. LIST PIPES
B. SHOW PIPES()
C. LIST PIPE
D. SHOW PIPE
E. SELECT PIPE()
Answer: B
Explanation:
SHOW PIPES lists the pipes for which you have access privileges.
27. To increase performance, materialized views can be created on external tables without any additional cost
A. TRUE
B. FALSE
Answer: B
Explanation:
Materialized views are designed to improve query performance for workloads composed of common,
repeated query patterns. However, materializing intermediate results incurs additional costs. As such,
before creating any materialized views, you should consider whether the costs are offset by the
savings from re-using these results frequently enough.
28.What is the best use of SCALING OUT?
A. Better Performance
B. Better Concurrency
Answer: B
Explanation:
SCALING OUT is meant for handling high concurrent queries.
29. You have a warehouse with auto-suspend set at 5 seconds, and you ran the query below:
SELECT * FROM INVENTORY;
The query profile is shown below; note that 'Percentage scanned from cache' is 0%.
You ran the query again before 5 seconds had elapsed, and the query profile now shows 'Percentage scanned from cache' at 75%.
You ran the query again after 5 seconds. The query profile looks as below. Look at the 'Percentage
scanned from cache', it is zero again.
Why is this happening?
A. The second run of the query used data cache to retrieve part of the result since it ran before the
warehouse was suspended
B. The second run of the query used query result cache
C. The third run of the query used query result cache
Answer: A
Explanation:
This is a very important concept to understand, and there may be many different questions on it. Let's understand what is going on here.
Virtual warehouses are an abstraction over the compute instances of the cloud provider (in the case of AWS, EC2 instances). Each virtual warehouse is a cluster of these compute instances, and each instance has local SSD storage attached. When you ran the query the first time, the results were retrieved from remote storage (the cloud provider's object store, S3 in the case of AWS), and part of the data was also cached on the local SSD storage of the compute instances. So when you ran the query a second time, part of the results was retrieved from that SSD cache, also known as the data cache.
Why didn't the third run retrieve from the data cache? The third run happened after 5 seconds, and the warehouse had an auto-suspend setting of 5 seconds. Since there was no activity for 5 seconds, the warehouse suspended itself, and a suspended warehouse loses its data cache.
30.Which of the below objects cannot be replicated?
A. Resource Monitors
B. Warehouses
C. Users
D. Databases
E. Shares
F. Roles
Answer: A,B,C,E,F
Explanation:
As of today (28-Nov-2020), only database replication is supported for an account. Other objects in the account cannot be replicated.
31.An Architect has a VPN_ACCESS_LOGS table in the SECURITY_LOGS schema containing
timestamps of the connection and disconnection, username of the user, and summary statistics.
What should the Architect do to enable the Snowflake search optimization service on this table?
A. Assume role with OWNERSHIP on future tables and ADD SEARCH OPTIMIZATION on the
SECURITY_LOGS schema.
B. Assume role with ALL PRIVILEGES including ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.
C. Assume role with OWNERSHIP on VPN_ACCESS_LOGS and ADD SEARCH OPTIMIZATION in
the SECURITY_LOGS schema.
D. Assume role with ALL PRIVILEGES on VPN_ACCESS_LOGS and ADD SEARCH
OPTIMIZATION in the SECURITY_LOGS schema.
Answer: C
32. The Kafka connector creates one pipe for each partition in a Kafka topic.
A. TRUE
B. FALSE
Answer: A
Explanation:
The connector creates one pipe for each partition in a Kafka topic. The format of the pipe name is:
SNOWFLAKE_KAFKA_CONNECTOR_<connector_name>_PIPE_<table_name>_<partition_number>
https://docs.snowflake.com/en/user-guide/kafka-connector-manage.html#dropping-pipes
33.Which cache type gets purged regularly?
A. Metadata Cache
B. Results Cache
C. Warehouse Cache
Answer: C
Explanation:
Whenever a warehouse suspends, its cache gets purged.
34.How is the change of local time due to daylight savings time handled in Snowflake tasks? (Choose
two.)
A. A task scheduled in a UTC-based schedule will have no issues with the time changes.
B. Task schedules can be designed to follow specified or local time zones to accommodate the time
changes.
C. A task will move to a suspended state during the daylight savings time change.
D. A frequent task execution schedule like minutes may not cause a problem, but will affect the task
history.
E. A task schedule will follow only the specified time and will fail to handle lost or duplicated hours.
Answer: B,D
35.Who pays for the compute usage of the Reader account?
A. Consumer
B. Provider
Answer: B
Explanation:
The provider pays for the compute of the reader account, because the consumer who uses the reader account does not own the account (and is not a Snowflake customer for that account).
36. A transient table can be cloned.
37. Tasks
38.An hour ago, you ran a complex query. You then ran several simple queries from the same
worksheet. You want to export the results from the complex query but they are no longer loaded in
the Results pane of the worksheet.
What is the least costly way to download the results?
A. Type the command SELECT RESULTS(-3) into the Worksheet and click "Run"
B. Click on History -> Locate the Query -> Click the QueryID -> Use the "Export Result" button
C. Click on History -> Locate the Query -> Click "Download Results" in column 3
D. Type the command Select RESULTS(<Account>,<User>,<QueryID>) in the Worksheet and click
"Run"
Answer: B
Explanation:
The History page displays queries executed in the last 14 days, starting with the most recent ones.
You can use the End Time filter to display queries based on a specified date; however, if you specify
a date earlier than the last 14 days, no results are returned. You can export results only for queries for
which you can view the results (i.e. queries you’ve executed). If you didn’t execute a query or the
query result is no longer available, the Export Result button is not displayed for the query. The web
interface only supports exporting results up to 100 MB in size. If a query result exceeds this limit, you
are prompted whether to proceed with the export. The export prompts may differ depending on your
browser. For example, in Safari, you are prompted only for an export format (CSV or TSV). After the
export completes, you are prompted to download the exported result to a new window, in which you
can use the Save Page As… browser option to save the result to a file.
39.Select the right container hierarchy.
A. ACCOUNT > USER > DATABASE > SCHEMA > TABLE
B. ACCOUNT > WAREHOUSE, ROLES, USER, DATABASE on same level > SCHEMA > TABLE
C. ACCOUNT > ROLE > DATABASE > SCHEMA > TABLE
D. ACCOUNT >WAREHOUSE > DATABASE > SCHEMA > TABLE
E. None of these
Answer: B
Explanation:
The top-most container is the customer ACCOUNT, within which reside USER, ROLE,
WAREHOUSE, and DATABASE objects. All other securable objects (such as TABLE, FUNCTION,
FILE FORMAT, STAGE, SEQUENCE, etc.) are contained within a SCHEMA object within a
DATABASE.
40.What is a valid object hierarchy when building a Snowflake environment?
A. Account --> Database --> Schema --> Warehouse
B. Organization --> Account --> Database --> Schema --> Stage
C. Account --> Schema > Table --> Stage
D. Organization --> Account --> Stage --> Table --> View
Answer: B
41.What is Clustering Depth?
A. It is the total number of micro-partitions that comprise the table
B. It can be used to determine whether a large table would benefit from explicitly defining a clustering
key
C. The depth of the overlapping micro-partitions
D. The bigger the average depth, the better clustered the table
Answer: B,C
Explanation:
The clustering depth for a populated table measures the average depth (1 or greater) of the overlapping micro-partitions for specified columns in a table. The smaller the average depth, the better clustered the table is with regard to the specified columns. Clustering depth can be used for a variety of purposes, including:
- Monitoring the clustering "health" of a large table, particularly over time as DML is performed on the table.
- Determining whether a large table would benefit from explicitly defining a clustering key.
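Clustering depth can be checked with system functions, for example (table and column names are hypothetical):

```sql
-- Average overlap depth for the given columns; values close to 1
-- indicate a well-clustered table.
SELECT SYSTEM$CLUSTERING_DEPTH('sales', '(region, customer_id)');

-- Broader clustering statistics for the table.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales');
```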
42.The Data Engineering team at a large manufacturing company needs to engineer data coming
from many sources to support a wide variety of use cases and data consumer requirements which
include:
1) Finance and Vendor Management team members who require reporting and visualization
2) Data Science team members who require access to raw data for ML model development
3) Sales team members who require engineered and protected data for data monetization
What Snowflake data modeling approaches will meet these requirements? (Choose two.)
A. Consolidate data in the company’s data lake and use EXTERNAL TABLES.
B. Create a raw database for landing and persisting raw data entering the data pipelines.
C. Create a set of profile-specific databases that aligns data with usage patterns.
D. Create a single star schema in a single database to support all consumers’ requirements.
E. Create a Data Vault as the sole data pipeline endpoint and have all consumers directly access the
Vault.
Answer: B,C
43. You need the EXECUTE TASK privilege to run a task.
44.You have created a table as below
CREATE TABLE TEST_01 (NAME STRING(10));
What data type will Snowflake assign to the column NAME?
A. LONGCHAR
B. STRING
C. VARCHAR
Answer: C
Explanation:
Try it yourself
Execute the below commands
CREATE TABLE TEST_01 (NAME STRING(10));
DESCRIBE TABLE TEST_01;
45.Please select the correct hierarchy from below
A.
B.
C.
D.
Answer: C
Explanation:
Always remember this: the account is the top-most container. USER, ROLE, DATABASE, and WAREHOUSE objects are contained within the account. A schema is within a database. All other objects (tables, functions, views, stored procedures, file formats, stages, sequences) are within a schema.
46.Removing files from a stage after you are done loading the files improves performance when
subsequently loading data
A. TRUE
B. FALSE
Answer: A
Explanation:
Managing Unloaded Data Files
Staged files can be deleted from a Snowflake stage using the REMOVE command to remove the files
in the stage after you are finished with them.
Removing files improves performance when loading data, because it reduces the number of files that
the COPY INTO <table> command must scan to verify whether existing files in a stage were loaded
already.
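A hypothetical sketch of the cleanup step (stage, path, and table names are assumptions):

```sql
COPY INTO my_table FROM @my_stage/daily/;
-- Remove the loaded files so future COPY commands scan fewer staged files.
REMOVE @my_stage/daily/;
```

Alternatively, the PURGE = TRUE copy option deletes files automatically after a successful load.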
47.At what frequency does Snowflake rotate the object keys?
A. 16 Days
B. 60 Days
C. 1 Year
D. 30 Days
Answer: D
Explanation:
Keys are automatically rotated every 30 days.
48.A query you initiated is taking too long. You've gone into the History area to check whether this
query (which usually runs every hour) is supposed to take a long time. Check all true statements.
A. Information in the History area can be filtered to show a single Session
B. Information in the History area can help you visually compare query times
C. Information in the History area can be filtered to show a single Warehouse
D. If you decide to end the query, you must return to the worksheet to Abort the query
E. Information in the History area can be filtered to show a single User
Answer: A,B,C,E
Explanation:
Snowflake provides many filter criteria, such as Status, User, Warehouse, Duration, End Time, Session ID, SQL Text, Query ID, Statement Type, and Query Tag.
49. Which of the approaches below results in performance improvement through linear scaling of the data ingestion workload?
A. Split large files into recommended range of 10 MB to 100 MB
B. Organize data by granular path
C. Resize virtual warehouse
D. All of the above
Answer: D
Explanation:
When loading your staged data, narrow the path to the most granular level that includes your data for
improved data load performance.
Use any of the following options to further confine the list of files to load:
If the file names match except for a suffix or extension, include the matching part of the file names in
the path, e.g.:
copy into t1 from @%t1/united_states/california/los_angeles/2016/06/01/11/mydata;
Add the FILES or PATTERN options (see Options for Selecting Staged Data Files), e.g.:
copy into t1 from @%t1/united_states/california/los_angeles/2016/06/01/11/ files=('mydata1.csv',
'mydata1.csv');
copy into t1 from @%t1/united_states/california/los_angeles/2016/06/01/11/
pattern='.*mydata[^[0-9]{1,3}$$].csv';
https://docs.snowflake.com/en/user-guide/data-load-considerations-stage.html#organizing-data-by-path
Now, also understand why splitting large files helps: each node in a virtual warehouse has 8 cores. If you split your files, loading can be parallelized, as each file is handled by a separate core.
50.Which of the below are securable objects?
A. USER
B. ROLE
C. PRIVILEGE
D. TABLE
E. DATABASE
Answer: A,B,D,E
Explanation:
A securable object is an entity to which access can be granted. Unless allowed by a grant, access will be denied.
Every securable object resides within a logical container in a hierarchy of containers. The top-most
container is the customer account, within which reside USER, ROLE, WAREHOUSE, and
DATABASE objects. All other securable objects (such as TABLE, FUNCTION, FILE FORMAT,
STAGE, SEQUENCE, etc.) are contained within a SCHEMA object within a DATABASE.
https://docs.snowflake.com/en/user-guide/security-access-control-overview.html#securable-objects
51.FORCE option is used to load all files, ignoring load metadata if it exists. (TRUE / FALSE)
A. FALSE
B. TRUE
Answer: B
Explanation:
You can set the FORCE option to load all files, ignoring load metadata if it exists. Note that this option
reloads files, potentially duplicating data in a table.
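A hypothetical example (stage and table names are assumptions):

```sql
-- Reload files even if load metadata says they were already loaded.
-- Note: this can duplicate rows in the target table.
COPY INTO my_table FROM @my_stage/data/ FORCE = TRUE;
```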
52.Snowflake can query the data from External Tables. (TRUE / FALSE)
A. FALSE
B. TRUE
Answer: B
Explanation:
External tables enable querying existing data stored in external cloud storage for analysis without first
loading it into Snowflake. The source of truth for the data remains in the external cloud storage. This
solution is especially beneficial to accounts that have a large amount of data stored in external cloud
storage and only want to query a portion of the data
53. Your business team runs a set of identical queries every day after the batch ETL run is complete. From the following actions, which is the best one to recommend?
A. After the ETL run, execute the identical queries so that they remain in the result cache
B. After the ETL run, resize the warehouse to a larger warehouse
C. After the ETL run, copy the tables to another schema for the business users to query
Answer: A
Explanation:
Please note the keyword here, which is IDENTICAL queries. When a query is run the first time and the same query is run a second time, the second run picks up the results from the query result cache and does not cost you any compute. The query result cache is valid for 24 hours, and it gets extended for another 24 hours every time you access it (even if you access it at 23 hours 59 seconds).
54.VALIDATION_MODE does not support COPY statements that transform data during a load
A. TRUE
B. FALSE
Answer: A
Explanation:
VALIDATION_MODE does not support COPY statements that transform data during a load. If the
parameter is specified, the COPY statement returns an error.
Use the VALIDATE table function to view all errors encountered during a previous load. Note that this
function also does not support COPY statements that transform data during a load.
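Hypothetical examples of both mechanisms (stage and table names are assumptions):

```sql
-- Dry-run a load and return the errors it would produce, without loading.
COPY INTO my_table FROM @my_stage/data/ VALIDATION_MODE = 'RETURN_ERRORS';

-- Inspect errors from the most recent COPY into this table.
SELECT * FROM TABLE(VALIDATE(my_table, JOB_ID => '_last'));
```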
55. How do you specify the file format when you load a file into a Snowflake table?
[ FILE_FORMAT = ( { FORMAT_NAME = '[<namespace>.]<file_format_name>' | TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } [ formatTypeOptions ] } ) ]
56. External tables.
57. New or modified data in tables in a share are not immediately available to all consumers who have created a database from the share
A. FALSE
B. TRUE
Answer: A
Explanation:
New or modified data in tables in a share are immediately available to all consumers who have
created a database from a share. You must grant usage on new objects created in a database in a
share in order for them to be available to consumers.
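A hypothetical sketch of granting a newly created object to a share (all names are assumptions):

```sql
-- New objects must be granted to the share before consumers can see them.
GRANT USAGE ON SCHEMA my_db.new_schema TO SHARE my_share;
GRANT SELECT ON TABLE my_db.new_schema.new_table TO SHARE my_share;
```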
58. While loading data into a table from a stage, which are valid values for the ON_ERROR copy option?
A. CONTINUE
B. SKIP_FILE
C. SKIP_FILE_<NUM>
D. SKIP_FILE_<NUM>%
E. ABORT_STATEMENT
F. ERROR_STATEMENT
Answer: A,B,C,D,E
Explanation:
The valid values for the ON_ERROR copy option are CONTINUE, SKIP_FILE, SKIP_FILE_<num>,
SKIP_FILE_<num>%, and ABORT_STATEMENT. ERROR_STATEMENT is not a valid value.
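As an illustration, a copy option is set on the COPY statement itself (stage and table names are hypothetical):

```sql
-- Skip any file that produces more than 10 errors; other files still load
COPY INTO my_table
FROM @my_stage
ON_ERROR = 'SKIP_FILE_10';
```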
59.Reclustering in Snowflake is automatic. (TRUE / FALSE)
A. FALSE
B. TRUE
Answer: B
Explanation:
Reclustering in Snowflake is automatic; no maintenance is needed.
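Automatic Clustering only takes effect once a clustering key is defined on the table; a minimal sketch (table and column names are hypothetical):

```sql
-- Define a clustering key; Snowflake then reclusters automatically
ALTER TABLE sales CLUSTER BY (sale_date, region);

-- Automatic Clustering can be paused or resumed per table if needed
ALTER TABLE sales SUSPEND RECLUSTER;
ALTER TABLE sales RESUME RECLUSTER;
```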
60.Remote service in external function can be an AWS Lambda function
A. TRUE
B. FALSE
Answer: A
Explanation:
remote service
A remote service is stored and executed outside Snowflake and returns a value. For example,
remote services can be implemented as:
An AWS Lambda function.
An HTTPS server (e.g. Node.js) running on an EC2 instance.
To be called by the Snowflake external function feature, the remote service must:
Expose an HTTPS endpoint.
Accept JSON inputs and return JSON outputs.
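Wiring a Lambda-backed remote service into Snowflake might be sketched as follows; the API integration name, role ARN, and endpoint URL are all hypothetical:

```sql
-- API integration pointing at an AWS API Gateway proxy for the Lambda
CREATE OR REPLACE API INTEGRATION my_api_integration
  API_PROVIDER = aws_api_gateway
  API_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
  API_ALLOWED_PREFIXES = ('https://abc123.execute-api.us-east-1.amazonaws.com/prod/')
  ENABLED = TRUE;

-- External function whose remote service is the Lambda behind the gateway
CREATE OR REPLACE EXTERNAL FUNCTION my_remote_echo(input VARCHAR)
  RETURNS VARIANT
  API_INTEGRATION = my_api_integration
  AS 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/echo';
```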
61.A large manufacturing company runs a dozen individual Snowflake accounts across its business
divisions. The company wants to increase the level of data sharing to support supply chain
optimizations and increase its purchasing leverage with multiple vendors.
The company’s Snowflake Architects need to design a solution that would allow the business
divisions to decide what to share, while minimizing the level of effort spent on configuration and
management. Most of the company’s divisions use Snowflake accounts in the same cloud
deployment, with a few exceptions for European-based divisions.
According to Snowflake recommended best practice, how should these requirements be met?
A. Migrate the European accounts in the global region and manage shares in a connected graph
architecture. Deploy a Data Exchange.
B. Deploy a Private Data Exchange in combination with data shares for the European accounts.
C. Deploy to the Snowflake Marketplace making sure that invoker_share() is used in all secure views.
D. Deploy a Private Data Exchange and use replication to allow European data shares in the
Exchange.
Answer: D
62.Materialized views based on external tables can improve query performance
A. TRUE
B. FALSE
Answer: A
Explanation:
Querying data stored external to the database is likely to be slower than querying native database
tables; however, materialized views based on external tables can improve query performance.
https://docs.snowflake.com/en/user-guide/tables-external-intro.html
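A sketch of the pattern described above (the stage, external table, and view names are hypothetical; external tables expose file contents through the VALUE variant column):

```sql
-- External table over files in cloud storage
CREATE OR REPLACE EXTERNAL TABLE ext_events
  WITH LOCATION = @my_stage/events/
  FILE_FORMAT = (TYPE = PARQUET)
  AUTO_REFRESH = TRUE;

-- Materialized view to speed up common aggregations over the external data
CREATE OR REPLACE MATERIALIZED VIEW mv_event_counts AS
SELECT value:event_type::STRING AS event_type, COUNT(*) AS cnt
FROM ext_events
GROUP BY value:event_type::STRING;
```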
63.Choose the different ways that you have to optimize query performance
A. Clustering a table
B. Creating one or more materialized views
C. Search Optimization
D. Vacuuming
Answer: A,B,C
Explanation:
Please note that search optimization is still in preview mode, but it will be a powerful feature for
optimizing your queries; if you are in a Snowflake implementation project, you should evaluate it.
The current disadvantage is that when you apply search optimization, it creates an index for all the
applicable columns in the table; you cannot restrict indexing to specific columns, so you have no
way to reduce the index size by selecting only certain columns. The good news is that, from my
experience, only about 10% of the overall data size is used for the index.
The search optimization service is one of several ways to optimize query performance; related
techniques include clustering a table and creating one or more materialized views.
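For reference, each of the three correct techniques is enabled with a short DDL statement (table, column, and view names are hypothetical):

```sql
-- A. Cluster the table on commonly filtered columns
ALTER TABLE orders CLUSTER BY (order_date);

-- B. Materialize an expensive aggregation
CREATE MATERIALIZED VIEW mv_daily_totals AS
SELECT order_date, SUM(amount) AS total
FROM orders
GROUP BY order_date;

-- C. Add search optimization for selective point lookups
ALTER TABLE orders ADD SEARCH OPTIMIZATION;
```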
64.What is the best practice to follow when calling the SNOWPIPE REST API loadHistoryScan
A. Reading the last 10 minutes of history every 8 minutes
B. Read the last 24 hours of history every minute
C. Read the last 7 days of history every hour
Answer: A
Explanation:
This endpoint is rate limited to avoid excessive calls. To help avoid exceeding the rate limit (error
code 429), Snowflake recommends relying more heavily on insertReport than loadHistoryScan. When
calling loadHistoryScan, specify the narrowest time range that includes a set of data loads. For
example, reading the last 10 minutes of history every 8 minutes works well, whereas trying to read
the last 24 hours of history every minute will result in 429 errors indicating the rate limit has been
reached. The rate limits are designed to allow each history record to be read a handful of times.
Get ARA-C01 exam dumps full version.
https://www.itfreedumps.com/exam/real-snowflake-ara-c01-dumps/