Snowflake Intelligence Is Generally Available: Analytics to Action

On November 4, 2025, Snowflake made Intelligence and Cortex Agents generally available, moving analytics from answers to actions. Here is what changes, why the semantic layer is the new battleground, and a 90-day plan to ship value.

By Talos
Artificial Intelligence

The day business intelligence learned to act

On November 4, 2025, Snowflake flipped the switch on two related launches that move analytics from a place you look to a place you do. Snowflake Intelligence is generally available, giving every authorized user a conversational way to explore governed data and trigger next steps. At the same time, Snowflake made its agent framework production ready. Together with AISQL, the company’s suite of artificial intelligence operators that live inside the SQL engine, Intelligence turns the data cloud into an action layer.

This is not just about friendly chat over data. It is about putting planning, tool use, and policy inside the same perimeter as your tables, files, features, and metrics. The result is a tighter loop from question to outcome. Ask, reason, fetch, decide, and act now happen where the data already lives, under the same lineage, security, and audit controls.

What changes when agents are data native

Most enterprise questions still die in the handoff. An executive asks a question, an analyst writes a query, a visualization lands in a slide, and then someone opens a separate workflow tool to trigger follow up. Each hop adds latency, cost, and risk. A data-native agent collapses the hops.

Inside Snowflake, an agent can plan a multi step approach, call specialized tools, and reflect on intermediate outputs before producing a final answer. Crucially, the tools are not detached services running in another cloud. They are first-class citizens in the data platform. For structured data the agent uses Cortex Analyst to select and aggregate facts that have been modeled and governed. For unstructured content the agent uses Cortex Search to ground its reasoning in documents, images, and audio you already steward. AISQL operators run within the SQL engine, so classification, extraction, translation, and similarity scoring are expressed right beside joins and windows.

Think of it like moving from a mail-in service to an onsite workshop. The machines, the safety rules, the raw materials, and the inspectors now share one floor. Work flows faster because there is no shipping. Compliance is easier because nothing leaves the building.

Planning, tool use, and AISQL, explained simply

Three concepts make the new loop work.

  1. Planning. When you ask, “Which accounts are likely to churn next quarter, and what should the field do?”, the agent does not jump straight to a number. It builds a plan. Step one, define churn and the horizon. Step two, find candidate features from usage logs, tickets, and renewal dates. Step three, pull unstructured clues from support call transcripts and success manager notes. Step four, combine signals, quantify risk, and prepare suggested actions. Planning is visible in logs, so you can audit steps and add guardrails when needed.

  2. Tool use. The agent routes subtasks to the right tool. It uses governed semantic views for facts and metrics, so “active user” means exactly what finance and product already agreed on. It calls Cortex Search to retrieve passages from contracts or post-mortems. If it encounters file data, AISQL operators handle it directly in a query. Tool permissions inherit from your data platform roles, which reduces the chance of permission drift that occurs when agents fetch from unmanaged copies.

  3. AISQL inside the engine. AISQL expands SQL with operators that understand text, images, audio, and embeddings right in a SELECT. You can classify a document, extract fields from a contract, generate embeddings, and measure similarity without shipping content to another system. Because these operators run next to your data and participate in query planning, you can join an invoice table to contract clauses or cluster support emails by theme with the same performance and governance posture as your regular analytics.

In practice, this lets a revenue team write a dynamic table that labels every opportunity with risk reasons inferred from free-text notes, and then have an agent watch those labels to open tasks for the right account owner. No glue code, no silent data copies.
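A hedged sketch of what that labeling query might look like in AISQL. All table and column names here are hypothetical, and the exact output shape of AI_CLASSIFY should be checked against current Snowflake documentation:

```sql
-- Illustrative only: table and column names are hypothetical.
-- AI_CLASSIFY runs inside the SQL engine, right beside an ordinary join.
SELECT
    o.opportunity_id,
    o.owner,
    AI_CLASSIFY(
        o.notes,
        ['pricing pressure', 'champion departed', 'competitor evaluation', 'no risk signal']
    ) AS risk_labels   -- returns an object; verify the shape in current docs
FROM opportunities AS o
JOIN accounts AS a
  ON a.account_id = o.account_id
WHERE a.segment = 'enterprise';
```

The point is the posture, not the particular function: the classification participates in query planning and inherits the same governance as the join around it.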

Why the semantic layer becomes the battleground

When agents learn to act, the bottleneck shifts from dashboards to definitions. The agent can only stay consistent if its world is consistent. That is the semantic layer’s job.

Inside Snowflake, semantic views and models describe business concepts, from customer to pipeline stage to active device, with governed metrics and dimensions. Intelligence agents attach to these definitions. This makes the semantic layer the new high ground because it is where truth, access, and explainability intersect. If a company’s sales headcount metric is wrong in the semantic layer, every plan, forecast, and action that flows from an agent will reflect that mistake. If the layer is right, the agent propagates the right answer into conversations, documents, and tickets. For a deeper view on resilience and controls, see how organizations build the agent trust stack.

Expect a competitive push here. Vendors that already own pieces of the semantic stack will race to integrate their definitions into agent planning. Data transformation teams will harden metric stores. Business intelligence vendors will reinforce modeling features. Great semantic architecture will become a decisive advantage, not a nice-to-have.

A 90-day build plan you can start now

Here is a concrete, time-boxed plan any enterprise can execute to turn Intelligence into business outcomes while staying within governance.

Weeks 1 to 2: pick one high-value decision loop

  • Choose a loop where insight reliably drives action. Examples: churn prevention in customer success, real time product quality triage in operations, fraudulent refund detection in finance.
  • Scope a single decision owner and a small audience. Success here is momentum, not broad coverage.
  • Draft the target action. For churn, it might be: open a task in the account team’s queue with risk reason, top three signals, and a suggested playbook.

Weeks 3 to 4: formalize definitions and access

  • Model the domain as semantic views and models. Name the facts and metrics the way the business already speaks. If there is conflict, settle it now.
  • Tag sensitive fields and set row and column level policies. Grant access to the agent’s role only where the owner team has access. Avoid giving the agent broader reach than the humans it assists.
  • Create a narrow intelligence agent attached to those semantic models and the minimum required search collections. Prefer precise scope to broad scope at the start.
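The access half of this step can be expressed with standard Snowflake policy syntax. A minimal sketch, with hypothetical role and object names, of masking a sensitive column for everyone except the owning team and the agent role it sponsors:

```sql
-- Hypothetical names; the policy pattern is standard Snowflake syntax.
CREATE OR REPLACE MASKING POLICY pii_email_mask AS (val STRING)
RETURNS STRING ->
    CASE
        WHEN CURRENT_ROLE() IN ('CS_TEAM', 'CHURN_AGENT') THEN val
        ELSE '***MASKED***'
    END;

ALTER TABLE accounts MODIFY COLUMN email
    SET MASKING POLICY pii_email_mask;

-- The agent's role gets no more reach than the humans it assists.
GRANT SELECT ON churn_facts TO ROLE CHURN_AGENT;
```

Because the policy lives on the column, every tool the agent calls sees the same masked view; there is no separate permission model to drift.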

Weeks 5 to 6: turn unstructured signals into columns

  • Use AISQL operators to extract structure from tickets, call notes, or documents into dynamic tables. Keep prompts simple and evaluate outputs weekly. If the agent will cite text in its reasoning, store those citations.
  • Join these AI-derived columns with your metrics in the semantic layer. Name them clearly, for example, support_risk_reason and support_risk_confidence.
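One way the two bullets above could come together, sketched with hypothetical names. SNOWFLAKE.CORTEX.EXTRACT_ANSWER returns candidate answers with scores, which gives you both the reason column and a confidence column; verify its output shape against current Cortex documentation before relying on it:

```sql
-- Illustrative only: table, column, and warehouse names are hypothetical.
CREATE OR REPLACE DYNAMIC TABLE support_risk_signals
    TARGET_LAG = '1 hour'
    WAREHOUSE = transform_wh
AS
SELECT
    ticket_id,
    account_id,
    ans[0]:answer::STRING AS support_risk_reason,
    ans[0]:score::FLOAT   AS support_risk_confidence
FROM (
    SELECT
        ticket_id,
        account_id,
        SNOWFLAKE.CORTEX.EXTRACT_ANSWER(
            body,
            'What problem is putting this account at risk?'
        ) AS ans
    FROM support_tickets
);
```

The dynamic table refreshes on its own lag, so the AI-derived columns stay current without any external pipeline.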

Weeks 7 to 8: wire the action path

  • Decide how actions show up where work already happens. For many enterprises that is Microsoft Teams and Microsoft Copilot. Build a thin service that receives the agent’s structured output, posts a message to a channel with the summary and citations, and opens a task in the team’s tool of record with fields mapped to your playbook.
  • Keep the integration simple and observable. Capture every action in a ledger table with who, what, when, and why fields filled. This becomes your feedback loop.
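The ledger itself can be a plain table. A minimal sketch, with a hypothetical schema, of the who, what, when, and why fields:

```sql
-- Hypothetical schema: one row per action the agent takes.
CREATE TABLE IF NOT EXISTS agent_action_ledger (
    action_id     STRING       DEFAULT UUID_STRING(),
    taken_at      TIMESTAMP_TZ DEFAULT CURRENT_TIMESTAMP(),  -- when
    actor         STRING,   -- who: agent name and the role it ran under
    target        STRING,   -- what: task, message, or record created
    account_id    STRING,
    action_reason STRING,   -- why: risk reason and top signals
    citations     VARIANT   -- semantic objects and passages the agent cited
);
```

Writing every action here first, before the message or task goes out, is what turns the integration from a black box into a feedback loop.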

Weeks 9 to 10: test guardrails and review loops

  • Run a red team week. Ask the agent hard questions, ambiguous questions, and out of scope questions. Confirm it defers or asks for clarification instead of guessing.
  • Add alignment tests. For instance, if the agent recommends a revenue play for a government account, block the action and log the event for review.
  • Give the decision owner an approval switch for the first month. Track how often they approve versus edit, and refine prompts, semantic definitions, and extraction templates accordingly.

Weeks 11 to 12: measure, publish, and scale one level

  • Measure the loop with end-to-end metrics. For churn, this might be the change in intervention lead time and the change in retained monthly recurring revenue for the cohort that receives the play.
  • Publish lessons and move to the adjacent loop. Repeat the same pattern rather than starting a new one from scratch. This builds a library of agent patterns that your organization can reuse.

The governance risks, and how to manage them

Two risks will dominate boardroom discussions in the next quarter: governance drift and platform lock-in. Both are real, both are manageable with clear techniques.

Governance drift happens when an agent learns to answer with data it should not use, or when it combines sources in ways that violate policy. Drift often comes from tool creep outside the platform and hidden copies of sensitive fields. You can counter it by keeping the agent’s tools inside the data cloud and by attaching it strictly to semantic views that already enforce policy. Require the agent to cite which semantic objects it used for every answer, and record those citations to an audit table. Add unit tests for policies, for example, a test that proves no personally identifiable information ever leaves a defined scope. Treat policies as executable artifacts, not as documents.
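A policy unit test can be a query whose pass condition is zero rows. A sketch under stated assumptions: agent_citation_audit and approved_semantic_objects are hypothetical tables, the first holding the citations recorded above and the second the allowlist for this agent's scope:

```sql
-- Hypothetical tables. The test passes when this returns zero rows:
-- every object the agent cited must appear in the approved scope.
SELECT a.answer_id, a.cited_object
FROM agent_citation_audit AS a
LEFT JOIN approved_semantic_objects AS s
  ON s.object_name = a.cited_object
WHERE s.object_name IS NULL;
```

Schedule it like any other data quality check and alert on a nonzero count; that is what it means to treat a policy as an executable artifact.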

Platform lock-in creeps in when you pour business rules directly into proprietary prompts and operators. You will always adopt some platform specificity, but you can make smart choices. Keep business logic in portable models and transformations. Codify metrics in the semantic layer rather than in prompts. When you must use a proprietary operator, wrap it in a view so you can swap implementations later. Keep orchestration and action connectors in a thin layer you control, rather than inside an opaque black box. For context on portability and market shifts, see the dynamics shaping the multi-cloud agent era.
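The view-wrapping move is simple enough to show directly. A sketch with illustrative names:

```sql
-- Consumers depend on the view, not the operator. If you later swap
-- AI_CLASSIFY for another implementation, only the view body changes.
CREATE OR REPLACE VIEW opportunity_risk AS
SELECT
    opportunity_id,
    AI_CLASSIFY(
        notes,
        ['pricing pressure', 'champion departed', 'competitor evaluation']
    ) AS risk_labels
FROM opportunities;
```

Downstream models, agents, and dashboards all select from opportunity_risk, so the proprietary surface area stays confined to one definition you control.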

Competitive pressure, decoded

Databricks. Expect a faster path from prompts to production on the lakehouse, anchored in Unity Catalog, Mosaic AI agent tooling, and the company’s investment in retrieval, vector search, and fine-tuning. The pitch will emphasize open formats, notebook centric development, and broad model choice. The gap they will try to close is the convenience of running agents directly inside your governed data perimeter.

Microsoft Fabric. Look for Fabric to merge Power BI’s semantic model with Copilot and Teams workflows into a coherent agent fabric. The company already sits inside your meetings, chat, identity, and document graph. The move will be to keep reasoning near those user interactions while improving performance and governance over data that sits across multiple clouds. The key watch item is how easily Fabric agents can stream and act on facts that remain in other platforms without making extra copies.

Salesforce. Einstein Copilot and Data Cloud give Salesforce a direct line to the customer record and the engagement layer. The likely play is to extend packaged sales, service, and marketing agents that act on Data Cloud segments and real time events, with clear governance and approvals. The question for buyers will be how those agents interoperate with data products that live outside Salesforce.

Enterprises will not choose one platform for everything. Instead, they will pick the platform that sits closest to a given decision loop. For data-heavy, cross-system loops, running the agent in the data cloud where the facts and definitions already live will often win on simplicity and control.

What this means for the semantic layer market

The battleground now rewards whoever can make truth easy to reuse. Metric stores, headless business intelligence, and modeling tools will pitch themselves as the one language of the business. They will also argue, correctly, that agents without a strong semantic backbone become freelancing interns. Your best move is to centralize semantic definitions where governance and performance are strongest, make those definitions addressable by agents, and keep the number of source-of-truth tables small and high quality.

Vendors will try to bundle their semantic layer with their agent. Resist unnecessary duplication. Keep a single system of record for definitions, and insist that any agent plug into it rather than shadow it.

A note on standards and interfaces

Agents that act need safe doors in and out. Snowflake has introduced standards-minded interfaces for tool discovery and authentication so agents can call approved tools and log every decision without extra infrastructure. You should treat those interfaces as a security and audit feature, not only as a convenience. For how cross-platform connectors are converging, see the emerging USB-C of agents standard.

Why this is a shift in economics

Running planning, tool use, and AISQL inside the platform changes cost structure. You spend less on data movement, less on per-request egress, and less on glue. You spend more on tightening definitions and on measuring outcomes. The spend shifts from integration to iteration. That is the right trade for teams that want to improve a loop every week rather than every quarter.

What to do on Monday

  • Pick a loop. If you cannot name a decision, you cannot automate a decision.
  • Pick a definition. Your agent is only as good as your semantics.
  • Pick a place. Run the first agent where the facts and approvals already live.
  • Pick a metric. Measure the loop end to end, not just the model.

With these four picks, you can assemble a narrow agent that improves one real outcome in under 90 days.

How we got here and what is next

The last decade gave us abundant storage, fast compute, and healthy skepticism about copying data. The next year will give us accurate agents that plan, cite, and act inside the perimeter. The path is not automatic, but it is clear. If you build strong semantics, keep actions observable, and design for portability where it matters, you will turn questions into outcomes faster than your competitors.

The simplest summary is this. Intelligence running on top of semantic truth, with agents that plan and tools that live in your data cloud, turns analysis into action. As of November 4, that is not a demo. Cortex Agents reached general availability, and the work has shifted from proving the idea to compounding the gains. The companies that move now will write the playbooks everyone else will follow.

Other articles you might like

The Agent Trust Stack Arrives: The New Moat for AI Agents

Enterprise AI is pivoting from bigger models to verifiable runtime behavior. Use this vendor and standards map plus a 13 week build plan to harden agents without slowing delivery.

MCP Is Becoming the USB-C of Agents Across IDEs and OS

With Windows 11 and major IDEs shipping native MCP support, the Model Context Protocol is tipping toward default. Here is how to ship IDE-native multi-agent orchestration, governed enterprise connectors, and security patterns now.

Voice Agents Hit Prime Time: Contact Centers Cross the Chasm

Real-time voice AI has hit production in contact centers as sub-300 ms pipelines, SIP telephony, and robust barge-in make calls feel human. This guide shows what changed, which KPIs to track, and how to deploy safely in 90 days.

OpenAI’s $38B AWS Deal and the Multi‑Cloud Agent Era

OpenAI and AWS signed a seven-year, $38 billion capacity deal on November 3, 2025 that resets where agent workloads run. Here is what it means for cost, latency, sovereignty, and how to build a portable multi-cloud runtime for 2026.

Gemini for Home goes live and the living room leads

Google’s Gemini for Home early access arrives as Amazon rolls out paid Alexa+ and Apple delays its Siri revamp. The living room is becoming the first mass-market proving ground for autonomous AI agents that blend voice, vision, and device control.

Agents Go Retail: Manus 1.5, $75M Scrutiny, and a 2026 Reset

Manus 1.5’s October 2025 release and a $75 million round led by Benchmark, now facing U.S. review, mark the moment autonomous web‑acting agents enter mainstream retail. As capital, policy, and reliability engineering collide, commerce and consumer trust are set for a 2026 reset.

TIME’s AI Agent and the race for publisher-native assistants

TIME just turned its archive and newsroom into a conversational product. Here is why publisher native agents will redefine distribution, what to build first, and how to wire rights, safety, and retrieval for trust and growth.

DeepL Agent Goes GA, Multilingual Automation Hits the Enterprise

DeepL launched its enterprise agent with broad language coverage and European governance, positioning it as a task‑taking coworker that operates across your existing apps. Here is how this changes CRM, ERP, and the future of automation.

Browser Wars 2.0: Atlas makes the browser the agent runtime

OpenAI’s October 21, 2025 launch of ChatGPT Atlas and its Agent Mode preview marks a platform shift. The browser is becoming where agents work, reshaping search, SEO, ads, and developer roadmaps.