SnowPro Advanced Architect

Itfreedumps provides the latest online questions for all IT certifications. Some SnowPro Advanced Architect exam questions are shared below.
1.Scaling up is intended for handling concurrency issues due to more users or more queries
A. TRUE
B. FALSE
Answer: B
Explanation:
Scaling up is intended to improve performance for complex queries, not to handle concurrency.
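For contrast, a minimal sketch (the warehouse name my_wh is a placeholder): scaling up resizes the warehouse, while scaling out adds clusters.
-- Scale up: a bigger warehouse for complex queries
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
-- Scale out: more clusters for more concurrent queries (multi-cluster warehouse)
ALTER WAREHOUSE my_wh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 3;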
2.Which vendors support Snowflake natively for federated authentication and SSO?
A. Microsoft ADFS
B. Onelogin
C. Microsoft Azure Active Directory
D. Google G Suite
E. Okta
Answer: A,E
Explanation:
Okta and Microsoft ADFS provide native Snowflake support for federated authentication and SSO.
The others are not natively supported, but Snowflake supports them as SAML 2.0-compliant identity providers.
3.Which objects can be shared?
A. Secure UDF
B. Secure View
C. Table
D. Standard View
Answer: A,B,C
Explanation:
Data sharing is meant for secure access, so a standard view is not allowed to be shared.
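An illustrative sketch (object names are placeholders) of building a share from grants:
CREATE SHARE customer_share;
GRANT USAGE ON DATABASE my_db TO SHARE customer_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE customer_share;
GRANT SELECT ON TABLE my_db.public.orders TO SHARE customer_share;
ALTER SHARE customer_share ADD ACCOUNTS = consumer_account;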
4.User-Defined Function (UDF) supports
A. SQL
B. Go
C. JavaScript
D. Java
E. Python
Answer: A,C
Explanation:
UDFs support JavaScript and SQL only.
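A minimal JavaScript UDF sketch (function and argument names are made up; note that JavaScript UDF arguments are referenced in uppercase inside the body):
CREATE OR REPLACE FUNCTION add_one(x FLOAT)
RETURNS FLOAT
LANGUAGE JAVASCRIPT
AS 'return X + 1;';
SELECT add_one(41); -- returns 42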
5. COPY INTO Statement
6.Scaling up is intended for handling concurrency issues due to more users or more queries
A. FALSE
B. TRUE
Answer: A
Explanation:
Scaling up is intended to improve performance for complex queries, not to handle concurrency.
7.Which of the following Snowflake Editions encrypt all data transmitted over the network within a
Virtual Private Cloud (VPC)?
A. Standard
B. Enterprise
C. Business Critical
Answer: C
Explanation:
Business Critical edition has many additional features, such as Tri-Secret Secure using a customer-managed key, and AWS and Azure Private Link support.
8.Semi-structured data can be accessed: (Select 3)
A. In files in an on-prem file server
B. In files on an AWS EC2 server
C. In files in an internal stage
D. In files in an external stage
E. In a permanent table using the variant data type
Answer: C,D,E
Explanation:
Snowflake can't access files on an AWS EC2 server or an on-prem file server. Snowflake can query external tables (files in an external stage), files in an internal stage, and permanent tables.
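A rough sketch of both access paths (stage, file, and format names are placeholders):
-- Query a staged JSON file directly:
SELECT $1 FROM @my_int_stage/events.json (FILE_FORMAT => 'my_json_format');
-- Or load it into a permanent table with a VARIANT column:
CREATE TABLE raw_events (v VARIANT);
COPY INTO raw_events FROM @my_int_stage/events.json FILE_FORMAT = (TYPE = 'JSON');
SELECT v:event_type::STRING FROM raw_events;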
9.What are the use cases for cross-cloud & cross-region replication?
A. Data Portability for Account Migrations
B. Business Continuity & Disaster Recovery
C. Secure Data Sharing across regions/clouds
D. All of these
Answer: D
Explanation:
All three of these are among the best use cases for cross-cloud and cross-region replication.
10.A CAST command (symbol) will force a value to be output as a certain datatype.
Which of the following code samples will result in the "employeename" being output using the
VARCHAR datatype?
A. SELECT employeename AS VARCHAR
B. SELECT employeename||VARCHAR
C. SELECT VARCHAR(employeename)
D. SELECT employeename::VARCHAR
Answer: D
Explanation:
:: is used for Casting in Snowflake.
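For example (table and column names assumed from the question):
SELECT employeename::VARCHAR FROM employees;
-- equivalent long form:
SELECT CAST(employeename AS VARCHAR) FROM employees;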
11.Running a query against INFORMATION_SCHEMA.QUERY_HISTORY can help in finding the size of the warehouse used for a query. (TRUE/FALSE)
A. FALSE
B. TRUE
Answer: B
Explanation:
The WAREHOUSE_SIZE column of INFORMATION_SCHEMA.QUERY_HISTORY tells you the size of the warehouse at the time the statement executed.
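A sketch of such a query (QUERY_HISTORY here is the INFORMATION_SCHEMA table function):
SELECT query_text, warehouse_name, warehouse_size
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
ORDER BY start_time DESC
LIMIT 10;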
12.How can you impose row-level access control while sharing a database that has customer/account-specific data? One customer should not be granted select on another customer's records in the same table.
A. Create two shares for two respective customers
B. Add single account when Altering Share to add the account name
C. Create two tables in Provider account. One for each customer
D. Impose row-level access control using a CURRENT_ACCOUNT() mapping in a secure view, and share the secure view.
Answer: D
Explanation:
Account mapping using CURRENT_ACCOUNT() function helps in imposing row level access control.
You can create Secure view as : CREATE SECURE VIEW shared_records AS SELECT * FROM
vendor_records vr JOIN acct_map am ON vr.id = am.id AND am.acct_name =
CURRENT_ACCOUNT(); Here, acct_map table can have two columns - id and acct_name.
13.Materialized views don't incur any costs. (TRUE / FALSE)
A. TRUE
B. FALSE
Answer: B
Explanation:
Materialized views are designed to improve query performance for workloads composed of common,
repeated query patterns. However, materializing intermediate results incurs additional costs. As such,
before creating any materialized views, you should consider whether the costs are offset by the
savings from re-using these results frequently enough.
14.New or modified data in tables in a share is not immediately available to all consumers who have created a database from the share
A. FALSE
B. TRUE
Answer: A
Explanation:
New or modified data in tables in a share is immediately available to all consumers who have created a database from the share. You must grant usage on new objects created in a database in a share in order for them to be available to consumers.
15.A secure view can be used to hide the view definition, but its performance can degrade. (TRUE/FALSE)
A. TRUE
B. FALSE
Answer: A
Explanation:
Secure views should not be used for views that are defined for query convenience, such as views
created for simplifying querying data for which users do not need to understand the underlying data
representation. This is because the Snowflake query optimizer, when evaluating secure views,
bypasses certain optimizations used for regular views. This might result in some impact on query
performance for secure views.
16.By default, the system cancels long-running queries after a specific duration. Select the correct duration.
A. never
B. 12 Hours
C. 2 Days
D. 7 days
Answer: C
Explanation:
This is handled by the STATEMENT_TIMEOUT_IN_SECONDS parameter. The default is 172800 seconds (i.e., 2 days).
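For example, to override the default at the warehouse level (the warehouse name is a placeholder):
ALTER WAREHOUSE my_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 3600; -- cancel after 1 hour
SHOW PARAMETERS LIKE 'STATEMENT_TIMEOUT_IN_SECONDS' IN WAREHOUSE my_wh;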
17.The best method to assist pruning on a large table is to:
A. Create a CLUSTERED INDEX on the table
B. Create a DENSE INDEX on the table
C. Define a HASH TABLE for the table
D. Define a CLUSTER KEY for the table
E. Define a PARTITIONING KEY on the table
Answer: D
Explanation:
A clustering key is a subset of columns in a table (or expressions on a table) that are explicitly
designated to co-locate the data in the table in the same micro-partitions. This is useful for very large
tables where the ordering was not ideal (at the time the data was inserted/loaded) or extensive DML
has caused the table’s natural clustering to degrade.
18.Reclustering in Snowflake is automatic. (TRUE / FALSE)
A. TRUE
B. FALSE
Answer: A
Explanation:
Reclustering in Snowflake is automatic; no maintenance is needed.
19.Before the Snowflake Kafka connector loads data into a table, the table must be created in Snowflake; the Kafka connector can't create a table. (TRUE/FALSE)
A. FALSE
B. TRUE
Answer: A
Explanation:
Kafka topics can be mapped to existing Snowflake tables in the Kafka configuration. If the topics are
not mapped, then the Kafka connector creates a new table for each topic using the topic name.
20.Mike has set up a process to load a specific set of files using both bulk loading and Snowpipe. This is a best practice to avoid any missed loads by either bulk loading or Snowpipe. (TRUE/FALSE)
A. TRUE
B. FALSE
Answer: B
Explanation:
This is not a best practice; it may create reloading issues. To avoid reloading files (and duplicating data), Snowflake recommends loading data from a specific set of files using either bulk data loading or Snowpipe, but not both.
21.Zero Copy Cloning allows users to have multiple copies of your data without the additional cost of
storage usually associated with replicating data.
Which other statements about the Cloning features in Snowflake are True?
A. Cloning is an efficient and cost effective approach for code migration for Agile Release
Management
B. Any new record in the parent table becomes available in the cloned table
C. The clone is a pointer to the original table data
D. Clone is a “point in time version” of the table data as of the time the clone was made
Answer: A,C,D
Explanation:
A new record doesn't become available in the cloned table because a clone is a "point in time version": only the data that was available at the time of cloning is present in the cloned table.
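A minimal sketch (table names are placeholders); a clone can also be taken at a point in time using Time Travel:
CREATE TABLE orders_dev CLONE orders;
-- point-in-time clone, 24 hours back:
CREATE TABLE orders_yesterday CLONE orders AT (OFFSET => -86400);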
22.You have a dashboard that connects to Snowflake via JDBC. The dashboard is refreshed
hundreds of times per day. The data is very stable, only changing once or twice per day. The query
run by the dashboard connector user never changes.
How will Snowflake manage changing and non-changing data? Mark all true statements.
A. Snowflake will spin up a warehouse each time the dashboard is refreshed
B. Snowflake will re-use data from the Results Cache as long as it is still the most up-to-date data
available
C. Snowflake will show the most up-to-date data each time the dashboard is refreshed
D. Snowflake will compile results cache data from all user results so no warehouse is needed
E. Snowflake will spin up a warehouse only if the underlying data has changed
Answer: B,C,E
Explanation:
As long as the data has not changed and the query is the same, Snowflake reuses the data from the results cache. Note that each time the persisted result for a query is reused, Snowflake resets the 24-hour retention period for the result, up to a maximum of 31 days from the date and time that the query was first executed. After 31 days, the result is purged, and the next time the query is submitted a new result is generated and persisted.
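To verify this behavior yourself, result reuse can be disabled for a session:
ALTER SESSION SET USE_CACHED_RESULT = FALSE;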
23.You have a dashboard that connects to Snowflake via JDBC. The dashboard is refreshed
hundreds of times per day. The data is very stable, only changing once or twice per day. The query
run by the dashboard connector user never changes.
How will Snowflake manage changing and non-changing data? Mark all true statements.
A. Snowflake will compile results cache data from all user results so no warehouse is needed
B. Snowflake will show the most up-to-date data each time the dashboard is refreshed
C. Snowflake will re-use data from the Results Cache as long as it is still the most up-to-date data
available
D. Snowflake will spin up a warehouse only if the underlying data has changed
E. Snowflake will spin up a warehouse each time the dashboard is refreshed
Answer: B,C,D
Explanation:
As long as the data has not changed and the query is the same, Snowflake reuses the data from the results cache. Note that each time the persisted result for a query is reused, Snowflake resets the 24-hour retention period for the result, up to a maximum of 31 days from the date and time that the query was first executed. After 31 days, the result is purged, and the next time the query is submitted a new result is generated and persisted.
24.What is the maximum number of columns (or expressions) recommended for a cluster key?
A. 3 to 4
B. 12 to 16
C. 7 to 8
D. The higher the number of columns (or expressions) in the key, the better the performance
Answer: A
Explanation:
A single clustering key can contain one or more columns or expressions. For most tables, Snowflake
recommends a maximum of 3 or 4 columns (or expressions) per key. Adding more than 3-4 columns
tends to increase costs more than benefits.
25.Materialized views don't incur any costs. (TRUE / FALSE)
A. TRUE
B. FALSE
Answer: B
Explanation:
Materialized views are designed to improve query performance for workloads composed of common,
repeated query patterns. However, materializing intermediate results incurs additional costs. As such,
before creating any materialized views, you should consider whether the costs are offset by the
savings from re-using these results frequently enough.
26.What is the most effective way to test if clustering a table helped performance?
A. Use SYSTEM$CLUSTERING_INFORMATION. Check the average_depth
B. Run a sample query before clustering and after to compare the results
C. Use the SYSTEM$CLUSTERING_DEPTH and check the depth of each column
D. Use SYSTEM$CLUSTERING_INFORMATION. Check the total_constant_partition_count
E. Use SYSTEM$CLUSTERING_INFORMATION. Check the average_overlaps
Answer: B
Explanation:
Also, Snowflake strongly recommends that you test a representative set of queries on the table to
establish some performance baselines.
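For reference, the clustering metrics mentioned in the options can be inspected like this (table and column names are placeholders):
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date, region)');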
27.Defining a clustering key directly on top of VARIANT columns is not supported. (TRUE / FALSE)
A. TRUE
B. FALSE
Answer: A
Explanation:
Defining a clustering key directly on top of VARIANT columns is not supported; however, you can
specify a VARIANT column in a clustering key if you provide an expression consisting of the path and
the target type.
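A sketch of the supported form (table, column, and path names are placeholders):
-- Not supported: CLUSTER BY (v)
-- Supported, using a path plus a cast to a target type:
CREATE TABLE t (v VARIANT) CLUSTER BY (v:city::STRING);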
28.In the History page, a query's Bytes Scanned statistics show Assigned Partitions: 110, Scanned Partitions: 58, and Original Partitions: 110.
Why did the optimizer show fewer partitions scanned than assigned?
A. The optimizer estimated only 58 partitions would need to be scanned, but during the execution of the query the optimizer realized it would have to read all 110 micro-partitions
B. During the execution of the query, new data was added to the table and the optimizer had to add
those micro partitions into the scan.
C. One of the tables in the query was an external table and didn't have micro-partitions. The metadata for the table was out of date and there were really only 58 partitions total
D. The query was using an Xlarge warehouse and could scan the partitions in parallel
E. The static optimization determined the number of possible micro partitions would be 110 but the
dynamic optimization was able to prune some of the partitions from a joined table
Answer: E
Explanation:
Snowflake organizes table data into well-clustered micro-partitions and targets only those micro-partitions that fall within the range of the query criteria. If a table is well clustered, Snowflake scans only a few of the micro-partitions.
29.What are the types of Caches? (Select 2)
A. Storage Cache
B. History Cache
C. Results Cache
D. Metadata Cache
Answer: C,D
Explanation:
There is also the warehouse cache, which gets purged on suspension.
30.Which type of view has an extra layer of protection to hide the SQL code from unauthorized
viewing?
A. Standard
B. Materialized
C. Secure
D. Permanent
Answer: C
Explanation:
Some of the internal optimizations for views require access to the underlying data in the base tables
for the view. This access might allow data that is hidden from users of the view to be exposed through
user code, such as user-defined functions, or other programmatic methods. Secure views do not
utilize these optimizations, ensuring that users have no access to the underlying data.
31.The VARIANT data type imposes a 10 MB to 100 MB (compressed) size limit on individual rows. (TRUE / FALSE)
A. TRUE
B. FALSE
Answer: B
Explanation:
The VARIANT data type imposes a 16 MB (compressed) size limit on individual rows.
32.What are the use cases for cross-cloud & cross-region replication?
A. All of these
B. Secure Data Sharing across regions/clouds
C. Data Portability for Account Migrations
D. Business Continuity & Disaster Recovery
Answer: A
Explanation:
All three of these are among the best use cases for cross-cloud and cross-region replication.
33.Snowflake provides standard and powerful features that ensure the highest levels of security for
your account and users if used properly.
Which are the true statements about Snowflake Security?
A. Tri-secret requires that customers manage their own keys
B. Snowflake supports user-based access control
C. Federated authentication in Snowflake is compliant with SAML 2.0
Answer: A,C
Explanation:
Along with Tri-Secret and Federated authentication, Snowflake supports ROLE-based access control.
34.Mike wants to create a multi-cluster warehouse and wants to make sure that whenever new
queries are queued, additional clusters should start immediately.
How should he configure the Warehouse?
A. Snowflake takes care of this automatically so, Mike does not have to worry about it
B. Set the SCALING POLICY as ECONOMY
C. Set the SCALING POLICY as STANDARD
D. Configure as SCALE-MAX so that the warehouse is always using maximum number of specified
clusters
Answer: C
Explanation:
If a multi-cluster warehouse is configured with the SCALING_POLICY set to STANDARD, it starts additional clusters immediately when either a query is queued or the system detects that there's one more query than the currently running clusters can execute.
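A sketch of such a configuration (warehouse name and sizes are placeholders):
CREATE WAREHOUSE my_mc_wh WITH
WAREHOUSE_SIZE = 'MEDIUM'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 4
SCALING_POLICY = 'STANDARD';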
35.Please choose the correct Table Type from the given options.
A. PERMANENT
B. TRANSIENT
C. EXTERNAL
D. TEMPORARY
E. INTERNAL
Answer: A,B,C,D
Explanation:
There are four types of tables - Permanent, Temporary, Transient and External.
36.What are the best strategies to select clustering keys?
A. Cluster columns that are most actively used in selective filters.
B. Avoid columns which are used in join predicates
C. Columns with timestamps in nanoseconds
D. Consider columns frequently used in join predicates
Answer: A,D
Explanation:
Snowflake recommends prioritizing keys in the order below: cluster on columns that are most actively used in selective filters. For many fact tables involved in date-based queries (for example, "WHERE invoice_date > x AND invoice_date <= y"), choosing the date column is a good idea. For event tables, event type might be a good choice if there are a large number of different event types. (If your table has only a small number of different event types, consider cardinality before choosing an event column as a clustering key.) If there is room for additional cluster keys, consider columns frequently used in join predicates, for example "FROM table1 JOIN table2 ON table2.column_A = table1.column_B". Columns with timestamps in nanoseconds result in very high cardinality, which makes them poor candidates for a clustering key.
37.What is the data size limit while loading data into a VARIANT column?
A. 10 MB - 100 MB compressed
B. 10 MB - 100 MB uncompressed
C. 1 GB Compressed
D. 16 MB (uncompressed)
E. 16 MB (compressed)
Answer: E
Explanation:
The VARIANT data type imposes a 16 MB (compressed) size limit on individual rows.
38.Which three objects did we explicitly refer to using the COPY INTO command in the lesson on
using external stages?
A. FILE FORMAT
B. DATABASE
C. STAGE
D. VIEW
E. SCHEMA
F. TABLE
Answer: A,C,F
Explanation:
TABLE, FILE FORMAT, and STAGE are the three main objects explicitly referenced in a COPY INTO statement.
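A minimal sketch showing all three objects (names are placeholders):
COPY INTO my_table -- TABLE
FROM @my_ext_stage/data/ -- STAGE
FILE_FORMAT = (FORMAT_NAME = 'my_csv_format'); -- FILE FORMAT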
39.Stages which do not support File Formats are:
A. Internal User Stage
B. Internal Table Stage
C. Internal named Stage
D. External Named Stage
Answer: A,B
Explanation:
Table stages and user stages are created automatically whenever a table is created or a new user is added to the system, respectively. They don't support setting up a file format.
40.SQL Clause which helps defining the clustering key:
A. CLUSTERING BY
B. CLUSTERING ON
C. CLUSTER ON
D. CLUSTER BY
Answer: D
Explanation:
Example: CREATE OR REPLACE TABLE t1 (c1 DATE, c2 STRING, c3 NUMBER) CLUSTER BY (c1, c2);
41.How should you choose the right size of warehouse to achieve the best results for query processing?
A. Execute varieties of queries on same warehouse to achieve the best result
B. Execute relatively homogenous queries on the same warehouse
Answer: B
Explanation:
To achieve the best results, try to execute relatively homogeneous queries (size, complexity, data
sets, etc.) on the same warehouse; executing queries of widely-varying size and/or complexity on the
same warehouse makes it more difficult to analyze warehouse load, which can make it more difficult
to select the best size to match the size, composition, and number of queries in your workload.
42.Snowflake supports transforming data while loading it into a table using the COPY command.
What options do you have?
A. Column reordering
B. Column omission
C. Casts
D. String Truncation
E. Join
Answer: A,B,C,D
Explanation:
Snowflake supports transforming data while loading it into a table using the COPY command. Options
include:
- Column reordering
- Column omission
- Casts
- Truncating text strings that exceed the target column length
There is no requirement for your data files to have the same number and ordering of columns as your
target table. The COPY INTO transformations do not support FLATTEN, JOIN, GROUP BY.
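A sketch combining these options (names are placeholders): columns are reordered and omitted via the SELECT list, $1 is cast, and string truncation is done manually here with SUBSTR:
COPY INTO target_table (id, name)
FROM (SELECT t.$2::NUMBER, SUBSTR(t.$1, 1, 50) FROM @my_stage/data/ t)
FILE_FORMAT = (TYPE = 'CSV');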
43.Jessica created table T1 and inserted 2 records with ids 1 and 2. Then she dropped the table T1.
She again created table T1 and inserted records with ids 3 and 4. She again dropped the table T1.
She again created table T1 and inserted records with ids 5 and 6. She again dropped the table T1.
Now, she wants to get the table t1 back from the time-travel at the time when the table has records
with ids 3 and 4.
What commands should she run?
A. UNDROP TABLE T1; UNDROP TABLE T1;
B. UNDROP TABLE T1; DROP TABLE T1; UNDROP TABLE T1;
C. ALTER TABLE T1 RENAME TO T2; UNDROP TABLE T1;
D. UNDROP TABLE T1;
E. UNDROP TABLE T1; ALTER TABLE T1 RENAME TO T2; UNDROP TABLE T1;
Answer: E
Explanation:
The concept to learn here: if you create and drop the same table multiple times, UNDROP restores the most recent version first. So, in this case, the first UNDROP command brings back the T1 with ids 5 and 6. Jessica will then have to rename that T1 so that she can run another UNDROP TABLE T1 statement. When she runs UNDROP the second time, she will get the version of T1 with ids 3 and 4.
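The command sequence from option E, annotated:
UNDROP TABLE T1; -- restores the latest version (ids 5 and 6)
ALTER TABLE T1 RENAME TO T2; -- frees the name T1
UNDROP TABLE T1; -- restores the previous version (ids 3 and 4)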
44.A multi-cluster virtual warehouse is Maximized when
A. Minimum number of clusters and Maximum number of clusters are the same and must be specified with a value of more than 1.
B. Minimum and Maximum number of clusters are specified differently.
C. MIN_CLUSTER_COUNT = 3 MAX_CLUSTER_COUNT = 3
D. MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 1
Answer: A,C
Explanation:
Maximized mode is enabled by specifying the same value for both maximum and minimum clusters
(note that the specified value must be larger than 1). In this mode, when the warehouse is started,
Snowflake starts all the clusters so that maximum resources are available while the warehouse is
running.
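For example, to run an existing warehouse in Maximized mode (the warehouse name is a placeholder):
ALTER WAREHOUSE my_wh SET MIN_CLUSTER_COUNT = 3 MAX_CLUSTER_COUNT = 3;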
45.Stored Procedure supports
A. Java
B. Python
C. Go
D. JavaScript
E. SQL
Answer: D
Explanation:
As of now, Snowflake stored procedures only support JavaScript.
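A minimal JavaScript stored procedure sketch (procedure name and argument are made up; note that arguments are referenced in uppercase inside the body):
CREATE OR REPLACE PROCEDURE say_hello(name STRING)
RETURNS STRING
LANGUAGE JAVASCRIPT
AS
$$
return 'Hello, ' + NAME;
$$;
CALL say_hello('Mike');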
46.At what frequency does Snowflake rotate the object keys?
A. 16 Days
B. 60 Days
C. 1 Year
D. 30 Days
Answer: D
Explanation:
Keys automatically get rotated every 30 days.
47.Snowflake provides native support for semi-structured data. Select the true option about Snowflake's native support for semi-structured data.
A. Flexible-Schema data types for loading semi-structured data without transformation
B. Database Optimization for fast and efficient SQL querying.
C. All of these
D. Automatic conversion of data to an optimized internal storage format
Answer: C
Explanation:
Snowflake provides native support for semi-structured data, including: Flexible-schema data types for
loading semi-structured data without transformation. Automatic conversion of data to optimized
internal storage format. Database optimization for fast and efficient SQL querying.
48.Can you have a database overlap across two Snowflake accounts?
A. No
B. Yes
Answer: A
Explanation:
A database cannot overlap across two Snowflake accounts.
49.As an ACCOUNTADMIN, how can you find the credit usage of a warehouse?
A. Run SQL query on ACCOUNT_USAGE table under Snowflake Database
B. Using Web interface > Account > Usage
C. Run SQL query on METERING_HISTORY view under ACCOUNT_USAGE Schema
D. Run SQL query on WAREHOUSE_METERING_HISTORY view under ACCOUNT_USAGE
Schema
Answer: B,C,D
Explanation:
Using the Web interface > Account > Usage section, or using SQL.
ACCOUNT_USAGE:
- Query METERING_HISTORY to view hourly usage for an account.
- Query METERING_DAILY_HISTORY to view daily usage for an account.
- Query WAREHOUSE_METERING_HISTORY to view usage for a warehouse.
- Query QUERY_HISTORY to view usage for a job.
INFORMATION_SCHEMA:
- Query the QUERY_HISTORY table function.
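A sketch of the warehouse-level query (the 30-day window is an arbitrary choice):
SELECT warehouse_name, SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;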
50.Select the objects which can be replicated from one region to another region.
A. Transient Table
B. Views
C. Temporary Table
D. Sequences
E. External Tables
F. Permanent Table
Answer: A,B,D,F
Explanation:
Temporary tables cannot be replicated as they last only for a session. As of now, creating or refreshing a secondary database is blocked if an external table exists in the primary database.
51.When choosing a geographic deployment region, what factors might an enrollee consider?
A. End-user perceptions of glamorous or trendy geographic locations
B. Number of availability zones within a region
C. Additional fees charged for regions with geo-political unrest
D. Proximity to the point of service
Answer: B,D
Explanation:
It is better to choose the nearest region, with a higher number of availability zones, to avoid any lag or latency.
52.Mike wants to create a multi-cluster warehouse and wants to make sure that whenever new
queries are queued, additional clusters should start immediately.
How should he configure the Warehouse?
A. Set the SCALING POLICY as ECONOMY
B. Configure as SCALE-MAX so that the warehouse is always using maximum number of specified
clusters
C. Set the SCALING POLICY as STANDARD
D. Snowflake takes care of this automatically so, Mike does not have to worry about it
Answer: C
Explanation:
If a multi-cluster warehouse is configured with the SCALING_POLICY set to STANDARD, it starts additional clusters immediately when either a query is queued or the system detects that there's one more query than the currently running clusters can execute.
53.Which parameter helps in loading files whose metadata has expired?
A. set LAST_MODIFIED_DATE to within 64 days
B. Set LAST_MODIFIED_DATE to more than 64 days
C. set LOAD_EXPIRED_FILES to TRUE
D. set LOAD_UNCERTAIN_FILES to TRUE
Answer: D
Explanation:
To load files whose metadata has expired, set the LOAD_UNCERTAIN_FILES copy option to true.
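For example (table, stage, and format are placeholders):
COPY INTO my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'CSV')
LOAD_UNCERTAIN_FILES = TRUE;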
54.Refreshing a secondary database is blocked for:
A. Materialized views
B. An external table exists in the primary database
C. Databases created from shares
D. Transient tables exist in the primary database
Answer: B,C
Explanation:
Currently following limitations exist with Replication: Refreshing a secondary database is blocked if an
external table exists in the primary database. Databases created from shares cannot be replicated.
55.Snowflake supports many methods of authentication.
Which are the supported authentication methods in ALL Snowflake Editions?
A. SSO
B. MFA (Multi-factor authentication)
C. OAuth
D. Only MFA is supported by all the Snowflake editions
E. Only MFA and SSO are supported by all the Snowflake editions
Answer: A,B,C
Explanation:
MFA, OAuth, SSO - all these methods are supported by all the Snowflake editions.
Get the SnowPro Advanced Architect exam dumps full version:
https://www.itfreedumps.com/exam/real-snowflake-snowpro-advanced-architect-dumps/