An Architect is designing a Snowflake architecture to support fast reporting by Data Analysts. To optimize costs, the virtual warehouse is configured to auto-suspend after 2 minutes of idle time. Queries run in the morning immediately after the data refresh are fast, but queries run later in the day are slow.
Why is this occurring?
A. The warehouse is not large enough.
B. The warehouse was not configured as a multi-cluster warehouse.
C. The warehouse was not created with USE_CACHE = TRUE.
D. When the warehouse was suspended, the cache was dropped.
Answer: D
Snowflake virtual warehouses maintain a local result and data cache only while the warehouse is running. When a warehouse is suspended—whether manually or via auto-suspend—the local cache is cleared. As a result, subsequent queries cannot benefit from cached data and must re-scan data from remote storage, leading to slower execution (Answer D).
Snowflake does maintain a global result cache at the cloud services layer, but it is only used when the exact same query text is re-executed and the underlying data has not changed. In many analytical workloads, queries vary slightly, preventing reuse of the result cache.
Warehouse size and multi-cluster configuration impact concurrency and throughput, not cache persistence. There is no USE_CACHE parameter in Snowflake. This question tests an architect’s understanding of Snowflake caching behavior and the tradeoff between aggressive auto-suspend for cost control and cache reuse for performance.
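For illustration, a minimal sketch of the cost/performance tradeoff (the warehouse name and the 10-minute value are hypothetical):

    -- Keep the warehouse, and therefore its local cache, warm longer between
    -- analyst queries, at the cost of some additional idle credits.
    ALTER WAREHOUSE analyst_wh SET AUTO_SUSPEND = 600;  -- seconds (10 minutes)
    ALTER WAREHOUSE analyst_wh SET AUTO_RESUME = TRUE;  -- resume on the next query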
=========
QUESTION NO: 32 [Security and Access Management]
A company has two databases, DB1 and DB2.
Role R1 has SELECT on DB1.
Role R2 has SELECT on DB2.
Users should normally access only one database, but a small group must access both databases in the same query with minimal operational overhead.
What is the best approach?
A. Set DEFAULT_SECONDARY_ROLE to R2.
B. Grant R2 to users and use USE_SECONDARY_ROLES for SELECT.
C. Grant R2 to R1 to use privilege inheritance.
D. Grant R2 to users and require USE SECONDARY ROLES.
Answer: B
Snowflake supports secondary roles to allow users to activate additional privileges without changing their primary role. Granting R2 to the users and enabling USE_SECONDARY_ROLES for SELECT allows those users to access both DB1 and DB2 in a single query, while keeping their default role unchanged (Answer B).
This approach minimizes operational overhead because it avoids role restructuring or privilege inheritance changes. It also maintains least privilege by ensuring that users only activate additional access when needed. Setting a default secondary role applies automatically and may unintentionally broaden access. Granting R2 to R1 affects all users with R1, which violates the requirement to limit access to a small group.
This pattern is a common SnowPro Architect design for cross-database access control.
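A minimal sketch of the pattern (role, user, and object names are hypothetical):

    -- Grant the additional role to the small group of users.
    GRANT ROLE R2 TO USER analyst_user;

    -- In the user's session, activate secondary roles, then query both
    -- databases in a single statement.
    USE SECONDARY ROLES ALL;
    SELECT o.order_id, c.segment
    FROM DB1.PUBLIC.ORDERS o
    JOIN DB2.PUBLIC.CUSTOMERS c ON o.customer_id = c.customer_id;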
=========
QUESTION NO: 33 [Performance Optimization and Monitoring]
How can an Architect enable optimal clustering to enhance performance for different access paths on a given table?
A. Create multiple clustering keys for a table.
B. Create multiple materialized views with different cluster keys.
C. Create super projections that automatically create clustering.
D. Create a clustering key containing all access path columns.
Answer: B
Snowflake allows only one clustering key per table, which limits its effectiveness when multiple access paths exist. Creating a composite clustering key that includes many columns often leads to poor clustering depth and limited pruning.
Materialized views provide an effective alternative. Each materialized view can be clustered independently, allowing architects to tailor physical data organization to specific query patterns (Answer B). Queries targeting different access paths can then leverage the appropriate materialized view, achieving better pruning and performance.
Super projections are not a Snowflake feature. Creating multiple clustering keys on a single table is not supported. This question reinforces SnowPro Architect knowledge of advanced performance design techniques using materialized views.
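For example, a hedged sketch (table, view, and column names are hypothetical) showing how each materialized view carries its own clustering key:

    -- Base table clustered for the most common access path.
    ALTER TABLE sales CLUSTER BY (sale_date);

    -- Materialized view clustered for a different access path.
    CREATE MATERIALIZED VIEW sales_by_region
      CLUSTER BY (region)
    AS
      SELECT region, sale_date, amount
      FROM sales;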
=========
QUESTION NO: 34 [Cost Control and Resource Management]
An Architect configures the following timeouts and creates a task that uses an X-Small warehouse. The task's INSERT statement will take approximately 40 hours to complete.
How long will the INSERT execute?
A. 1 minute
B. 5 minutes
C. 1 hour
D. 40 hours
Answer: A
Tasks in Snowflake are governed by the USER_TASK_TIMEOUT_MS parameter, which specifies the maximum execution time for a single task run. In this scenario, USER_TASK_TIMEOUT_MS = 60000, which equals 1 minute. This timeout applies regardless of account-, session-, or warehouse-level statement timeout settings.
Even though the account, session, and warehouse statement timeouts are higher, the task-specific timeout takes precedence for task execution. As a result, the INSERT statement will be terminated after 1 minute (Answer A).
This is a key SnowPro Architect concept: tasks have their own execution limits that override other timeout parameters. Architects must ensure that task timeouts are configured appropriately for long-running operations or redesign workloads to fit within task constraints.
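A hedged sketch of the relevant task parameter (task, warehouse, and table names and the schedule are hypothetical):

    -- USER_TASK_TIMEOUT_MS caps a single task run; 60000 ms = 1 minute,
    -- so the ~40-hour INSERT is cancelled after 1 minute.
    CREATE TASK nightly_load
      WAREHOUSE = xs_wh
      SCHEDULE = 'USING CRON 0 2 * * * UTC'
      USER_TASK_TIMEOUT_MS = 60000
    AS
      INSERT INTO target_table SELECT * FROM staging_table;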
=========
QUESTION NO: 35 [Snowflake Ecosystem and Integrations]
Several in-house applications need to connect to Snowflake without browser access or redirect capabilities.
What is the Snowflake best practice for authentication?
A. Use Snowflake OAuth.
B. Use usernames and passwords.
C. Use external OAuth.
D. Use key pair authentication with a service user.
Answer: D
For non-interactive, service-to-service authentication scenarios, Snowflake recommends key pair authentication using a service user (Answer D). This method avoids hardcoding passwords, supports automated rotation of credentials, and aligns with security best practices.
OAuth-based methods typically require browser redirects or user interaction, which are not available in this scenario. Username/password authentication introduces security risks and operational overhead.
Key pair authentication enables strong, certificate-based security and is widely used in SnowPro Architect designs for applications, ETL tools, and automated workloads.
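A hedged sketch of the setup (the user name is hypothetical and the key value is a placeholder):

    -- Dedicated service user; the application authenticates by signing a JWT
    -- with the private key that matches the registered public key.
    CREATE USER svc_app DEFAULT_ROLE = app_role;
    ALTER USER svc_app SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';

    -- Rotation: register a second key, switch the application, then remove the old key.
    ALTER USER svc_app SET RSA_PUBLIC_KEY_2 = 'MIIBIjANBgkqh...';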
=========
What considerations apply when using database cloning for data lifecycle management in a development environment? (Select TWO).
A. Any pipes in the source are not cloned.
B. Any pipes in the source referring to internal stages are not cloned.
C. Any pipes in the source referring to external stages are not cloned.
D. The clone inherits all granted privileges of all child objects, including the database.
E. The clone inherits all granted privileges of all child objects, excluding the database.
Answer: B, E
Zero-copy cloning in Snowflake creates a metadata-only snapshot of databases, schemas, or tables, enabling rapid environment refreshes without duplicating data. However, not all dependent objects behave identically during cloning.
Pipes that reference internal (Snowflake) stages are not cloned, because internal named stages themselves are not included in a clone (Answer B). Pipes that reference external stages are cloned, although they are paused by default in the clone. This is documented Snowflake behavior and an important consideration when cloning production environments into development.
Regarding privileges, a cloned database inherits privileges granted on child objects (such as schemas and tables), but it does not inherit privileges granted directly on the source database itself (Answer E). This design prevents unintended access escalation and ensures that security must be explicitly re-granted at the database level.
Understanding these nuances is essential for SnowPro Architect candidates when designing environment promotion strategies and managing data lifecycle processes using cloning.
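A brief illustration (hedged; database and role names are hypothetical):

    -- Zero-copy clone of production into a development database (metadata-only).
    CREATE DATABASE dev_db CLONE prod_db;

    -- Pipes that referenced internal stages in prod_db are absent from dev_db,
    -- and privileges granted on prod_db itself must be re-granted on dev_db.
    SHOW PIPES IN DATABASE dev_db;
    GRANT USAGE ON DATABASE dev_db TO ROLE dev_role;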
=========
QUESTION NO: 40 [Performance Optimization and Monitoring]
Two SQL queries perform the same task, but one consistently outperforms the other in reports, even though the SQL logic appears equivalent.
What step ensures the comparison is accurate?
A. Eliminate compilation time metrics.
B. Ensure both queries use the same clustering keys.
C. Disable result cache usage for the faster query.
D. Suspend and resume the virtual warehouse between queries.
Answer: C
Snowflake’s result cache can cause misleading performance comparisons. If one query benefits from cached results while another does not, execution time metrics will not reflect true performance differences. Disabling result cache usage for the faster query ensures both queries execute fully and fairly (Answer C).
Suspending and resuming a warehouse clears the local data cache but does not necessarily control result cache usage. Compilation time is generally minimal compared to execution time and does not explain consistent performance differences. Clustering keys are unrelated if both queries access the same data.
This question reinforces a core SnowPro Architect principle: accurate performance testing requires controlling for caching effects. Architects must understand Snowflake’s caching layers to correctly interpret query performance metrics.
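A hedged sketch of a fair comparison session:

    -- Disable reuse of persisted query results so both queries execute fully.
    ALTER SESSION SET USE_CACHED_RESULT = FALSE;
    -- Run each query on the same warm warehouse and compare execution times
    -- from the query profile or QUERY_HISTORY.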
=========
An Architect wants to stream website logs in near real time to Snowflake using the Snowflake Connector for Kafka.
What characteristics should the Architect consider regarding the different ingestion methods? (Select TWO).
A. Snowpipe Streaming is the default ingestion method.
B. Snowpipe Streaming supports schema detection.
C. Snowpipe has lower latency than Snowpipe Streaming.
D. Snowpipe Streaming automatically flushes data every one second.
E. Snowflake can handle jumps or resetting offsets by default.
Answer: D, E
When using the Snowflake Connector for Kafka, architects must understand the behavior differences between Snowpipe (file-based) and Snowpipe Streaming. Snowpipe Streaming is optimized for low-latency ingestion and works by continuously sending records directly into Snowflake-managed channels rather than staging files. One important characteristic is that Snowpipe Streaming automatically flushes buffered records at short, fixed intervals (approximately every second), ensuring near real-time data availability (Answer D).
Another key consideration is offset handling. The Snowflake Connector for Kafka is designed to tolerate Kafka offset jumps or resets, such as those caused by topic reprocessing or consumer group changes. Snowflake can safely ingest records without corrupting state, relying on Kafka semantics and connector metadata to maintain consistency (Answer E).
Snowpipe Streaming is not always the default ingestion method; configuration determines whether file-based Snowpipe or Streaming is used. Schema detection is not supported in Snowpipe Streaming. Traditional Snowpipe does not offer lower latency than Snowpipe Streaming. For the SnowPro Architect exam, understanding ingestion latency, buffering behavior, and fault tolerance is essential when designing streaming architectures.
=========
QUESTION NO: 57 [Snowflake Data Engineering]
An Architect wants to create an externally managed Iceberg table in Snowflake.
What parameters are required? (Select THREE).
A. External volume
B. Storage integration
C. External stage
D. Data file path
E. Catalog integration
F. Metadata file path
Answer: A, E, F
Externally managed Iceberg tables in Snowflake rely on external systems for metadata and storage management. An external volume is required to define and manage access to the underlying cloud storage where the Iceberg data files reside (Answer A). A catalog integration is required so Snowflake can interact with the external Iceberg catalog (such as AWS Glue or other supported catalogs) that manages table metadata (Answer E).
Additionally, Snowflake must know the location of the Iceberg metadata files (the Iceberg metadata JSON), which is provided via the metadata file path parameter (Answer F). This allows Snowflake to read schema and snapshot information maintained externally.
An external stage is not required for Iceberg tables, as Snowflake accesses the data directly through the external volume. A storage integration is used for stages, not for Iceberg tables. The data file path is derived from metadata and does not need to be specified explicitly. This question tests SnowPro Architect understanding of modern open table formats and Snowflake’s Iceberg integration model.
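A hedged sketch of the DDL, assuming a catalog integration over Iceberg metadata files in object storage (table, volume, integration, and path names are placeholders):

    CREATE ICEBERG TABLE web_events
      EXTERNAL_VOLUME = 'iceberg_ext_vol'       -- access to the cloud storage location
      CATALOG = 'obj_store_catalog_int'         -- catalog integration for the external catalog
      METADATA_FILE_PATH = 'events/metadata/v3.metadata.json';  -- current Iceberg metadata file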
=========
QUESTION NO: 58 [Security and Access Management]
A company stores customer data in Snowflake and must protect Personally Identifiable Information (PII) to meet strict regulatory requirements.
What should an Architect do?
A. Use row-level security to mask PII data.
B. Use tag-based masking policies for columns containing PII.
C. Create secure views for PII data and grant access as needed.
D. Separate PII into different tables and grant access as needed.
Answer: B
Tag-based masking policies provide a scalable and centralized way to protect PII across many tables and schemas (Answer B). By tagging columns that contain PII and associating masking policies with those tags, Snowflake automatically enforces masking rules wherever the tagged columns appear. This approach reduces administrative overhead and ensures consistent enforcement as schemas evolve.
Row access policies control row visibility, not column masking. Secure views and table separation can protect data but introduce significant maintenance complexity and do not scale well across large environments. Snowflake best practices—and the SnowPro Architect exam—emphasize tag-based governance for sensitive data.
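A minimal sketch (database, schema, tag, policy, role, and column names are hypothetical):

    -- One tag plus one masking policy enforce PII protection centrally.
    CREATE TAG governance.tags.pii;

    CREATE MASKING POLICY governance.policies.mask_pii AS (val STRING)
      RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val ELSE '***MASKED***' END;

    -- Attach the policy to the tag, then tag any PII column; masking follows the tag.
    ALTER TAG governance.tags.pii SET MASKING POLICY governance.policies.mask_pii;
    ALTER TABLE crm.public.customers MODIFY COLUMN email SET TAG governance.tags.pii = 'email';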
=========
QUESTION NO: 59 [Security and Access Management]
An Architect created a data share and wants to verify that only specific records in secure views are visible to consumers.
What is the recommended validation method?
A. Create reader accounts and log in as consumers.
B. Create a row access policy and assign it to the share.
C. Set the SIMULATED_DATA_SHARING_CONSUMER session parameter.
D. Alter the share to impersonate a consumer account.
Answer: C
Snowflake provides the SIMULATED_DATA_SHARING_CONSUMER session parameter to allow providers to test how shared data appears to specific consumer accounts without logging in as those consumers (Answer C). This feature enables secure, efficient validation of row-level and column-level filtering logic implemented through secure views.
Creating reader accounts is unnecessary and operationally heavy. Row access policies are part of access control design, not validation. Altering a share does not provide impersonation capabilities. This question tests SnowPro Architect familiarity with governance validation tools in Secure Data Sharing scenarios.
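A hedged sketch (the consumer account and view names are placeholders):

    -- Preview how the secure view resolves for a specific consumer account
    -- without logging in as that consumer.
    ALTER SESSION SET SIMULATED_DATA_SHARING_CONSUMER = 'consumer_acct';
    SELECT * FROM shared_db.public.customer_secure_v;

    -- Reset when finished.
    ALTER SESSION UNSET SIMULATED_DATA_SHARING_CONSUMER;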
=========
QUESTION NO: 60 [Architecting Snowflake Solutions]
Which requirements indicate that a multi-account Snowflake strategy should be used? (Select TWO).
A. A requirement to use different Snowflake editions.
B. A requirement for easy object promotion using zero-copy cloning.
C. A requirement to use Snowflake in a single cloud or region.
D. A requirement to minimize complexity of changing database names across environments.
E. A requirement to use RBAC to govern DevOps processes across environments.
Answer: A, B
A multi-account Snowflake strategy is appropriate when environments have fundamentally different requirements. Using different Snowflake editions (for example, Business Critical for production and Enterprise for non-production) requires separate accounts because edition is an account-level property (Answer A).
Zero-copy cloning is frequently used for fast environment refresh and object promotion, but cloning only works within a single account. To promote data between environments cleanly, many organizations use separate accounts combined with replication or sharing strategies, making multi-account design relevant when environment isolation and promotion workflows are required (Answer B).
Single-region usage, minimizing database name changes, and RBAC governance can all be handled within a single account. This question reinforces SnowPro Architect principles around environment isolation, governance, and account-level design decisions.
=========
A Snowflake Architect is working with Data Modelers and Table Designers to draft an ELT framework for data loading using Snowpipe. The Table Designers will add a timestamp column that inserts the current timestamp as the default value as records are loaded into a table. The intent is to capture the time when each record is loaded into the table; however, when tested, the timestamps are earlier than the LOAD_TIME column values returned by the COPY_HISTORY function or the COPY_HISTORY view (Account Usage).
Why is this occurring?
A. The timestamps are different because there are parameter setup mismatches. The parameters need to be realigned.
B. The Snowflake timezone parameter is different from the cloud provider's parameters, causing the mismatch.
C. The Table Designer team has not used the localtimestamp or systimestamp functions in the Snowflake COPY statement.
D. The CURRENT_TIMESTAMP is evaluated when the load operation is compiled in cloud services rather than when the record is inserted into the table.
Answer: D
The correct answer is D because the CURRENT_TIMESTAMP function is evaluated at the start of statement execution, when the load operation is compiled in the cloud services layer, not at the time each record is inserted. Therefore, if the load operation takes some time to complete, the default timestamp value may be earlier than the actual load time.
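A hedged sketch of the pattern being tested (table and column names are hypothetical):

    -- The default expression is evaluated when the COPY/Snowpipe statement starts
    -- executing, not when each individual row lands in the table.
    CREATE TABLE web_logs (
      raw            VARIANT,
      record_loaded  TIMESTAMP_LTZ DEFAULT CURRENT_TIMESTAMP()
    );

    -- Compare the default values against the load history reported by Snowflake.
    SELECT *
    FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
      TABLE_NAME => 'WEB_LOGS',
      START_TIME => DATEADD('hour', -24, CURRENT_TIMESTAMP())));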
Option A is incorrect because parameter setup mismatches do not affect the timestamp values. These parameters control the behavior and performance of the load operation, such as the file format, error handling, and the purge option.
Option B is incorrect because the Snowflake timezone parameter and the cloud provider’s parameters are independent of each other. The Snowflake timezone parameter determines the session timezone for displaying and converting timestamp values, while the cloud provider’s parameters determine the physical location and configuration of the storage and compute resources.
Option C is incorrect because the localtimestamp and systimestamp functions are not relevant for the Snowpipe load operation. The localtimestamp function returns the current timestamp in the session timezone, while the systimestamp function returns the current timestamp in the system timezone. Neither of them reflects the actual load time of the records.
References:
Snowflake Documentation: Loading Data Using Snowpipe: This document explains how to use Snowpipe to continuously load data from external sources into Snowflake tables. It also describes the syntax and usage of the COPY INTO command, which supports various options and parameters to control the loading behavior.
Snowflake Documentation: Date and Time Data Types and Functions: This document explains the different data types and functions for working with date and time values in Snowflake. It also describes how to set and change the session timezone and the system timezone.
Snowflake Documentation: Querying Metadata: This document explains how to query the metadata of the objects and operations in Snowflake using various functions, views, and tables. It also describes how to access the copy history information using the COPY_HISTORY function or the COPY_HISTORY view.