Corey security and privacy

  • Last update on May 4th, 2026

Corey uses GenAI models hosted on Microsoft Azure AI Foundry infrastructure.

In some geographic areas, such as Canada, APAC, or other regions where the required models or deployment options are not yet available locally, CoreView may enable Corey by using an Azure model available in another supported region, according to the service configuration and contractual terms agreed with the customer.

How it works

When a user interacts with Corey:

  1. the user submits a request through the CoreView platform;
  2. Corey forwards the AI request to the Azure infrastructure configured for the service;
  3. the model generates the response;
  4. the response is returned to the CoreView platform and shown to the user.

It is important to clarify that:

  • the customer tenant is not moved;
  • the customer’s CoreView environment remains in its original region;
  • only the AI request is processed outside the region, meaning the prompt and the context required to generate the response;
  • prompts and responses are not stored by the AI provider because Corey uses Azure OpenAI deployments configured with Zero Data Retention (ZDR).

Global Standard deployment

When CoreView uses a Global Standard deployment in Azure AI Foundry, the inference request is not tied to a single region. Azure uses its global infrastructure to dynamically route each request to the data center with the most appropriate availability.
This means that, even if the deployment is configured within the customer’s environment, processing of an individual request may take place in a different Azure region. Microsoft distinguishes Global Standard deployments, which can process inference data in any Azure region, from regional deployment options, which process data in the deployment’s specific region.

Understanding Zero Data Retention (ZDR) and Data Zone Standard (DZS)

ZDR and DZS refer to different aspects of data handling:

  • Zero Data Retention (ZDR) means prompts and responses are not stored by the AI provider and are used only for the time required to process the request;
  • Data Zone Standard (DZS) refers to the geographic area where inference processing takes place.

Corey data handling and processing

Corey uses Azure OpenAI deployments configured with Zero Data Retention (ZDR) in all scenarios.

This means that:

  • prompts and responses are not stored by the AI provider;
  • data is used only to generate the requested response;
  • information remains available only for the time required to complete processing by the AI provider;
  • data is not used to train or improve the model;
  • data is protected through encryption in transit and through the native security controls of the Azure platform.

At the moment, while ZDR is available and configured for all Corey AI deployments, DZS is available only in certain Azure regions, currently the EU and the US. For customers who want to use Corey in regions where Azure does not yet provide DZS, CoreView may use Azure deployments that process inference in another supported region, according to the service configuration and contractual terms agreed with the customer.

This reduces the risk associated with processing AI requests outside the customer’s preferred processing zone because prompts and responses are not retained by the AI provider.

Security and data protection

In this model, Corey continues to apply CoreView’s security controls, including:

  • tenant isolation;
  • user authentication and authorization;
  • access controls already enforced within the CoreView platform;
  • encryption in transit via TLS when communicating with Azure services.

This means Corey can operate only within the context of the authenticated user and according to the permissions already defined in the CoreView platform.

External references

FAQs

How did you train Corey?

Corey is not trained on customer data.

It uses Azure OpenAI base models provided through Microsoft Azure AI Foundry. At runtime, Corey uses CoreView-developed prompts, orchestration logic, and platform skills to answer user requests.

To generate responses, Corey may use tenant data available in the CoreView platform, but only at runtime, within the authenticated user’s session, and according to the permissions and policies applied to that user. This data is used to answer the request, not to train or fine-tune the model.

 
 

What keeps Corey from releasing my data or replies to the internet?

Corey runs within a controlled Azure infrastructure using Azure AI Foundry. Data is processed only within the execution context of your authenticated session and organization.

CoreView uses Azure OpenAI Zero Data Retention (ZDR) for all Corey AI deployments. This means prompts and responses are not stored by the AI provider and are used only for the time required to process the request.

A separate concept is Data Zone Standard (DZS), which refers to the geographic area where inference processing takes place. At the moment, DZS is available only in certain Azure regions, currently the EU and the US. For customers who use Corey in regions where Azure does not yet provide DZS, CoreView may use Azure deployments that process inference in another supported region, according to the service configuration and contractual terms agreed with the customer.

Within Azure AI Foundry documentation, this capability is referred to as Global Standard deployment. With this deployment model, the model can still be configured within a customer's Azure Data Zone while Azure routes inference requests across its global infrastructure to the most appropriate region for processing, while remaining under Microsoft’s enterprise security and compliance controls.

Microsoft documentation reference:
Microsoft Learn – Deployment types for Microsoft Foundry Models

 
 

How do I know another customer can’t prompt Corey to get my data?

CoreView enforces strict tenant and session-level isolation. AI interactions occur within the context of your organization and authenticated user session, so requests and responses are always scoped to your environment.

Corey uses the same authentication and security layer used across the CoreView platform, which means the platform’s built-in security and access controls apply to AI interactions as well. Because of this architecture, tenant segregation and access controls prevent another customer or tenant from accessing your environment or its data through prompts.

 
 

How long do you keep my queries?

CoreView uses Azure OpenAI Zero Data Retention (ZDR), which means prompts and responses are not stored by the AI provider and are used only for the time required to process the request.

On the CoreView side, interactions and queries may be logged and stored within the CoreView platform for security, auditing, governance, and service functionality purposes, including making conversations available in the product. This allows CoreView tenant administrators to review AI interactions if needed.

 
 

Can CoreView staff see my queries and replies?

At this time, limited access may be available to authorized CoreView personnel for system monitoring and troubleshooting purposes. This access is restricted to CoreView only and does not extend to AI providers such as Azure AI Foundry or Azure OpenAI.

Access is strictly controlled and used only to support system reliability, service improvement, and issue resolution during the beta period.

 
 

What AI security standards are you following?

CoreView follows enterprise-grade security practices and compliance standards aligned with the broader CoreView platform security program. The infrastructure runs on Microsoft Azure and follows Microsoft’s security model, while CoreView maintains industry-recognized security certifications and practices.

More information is available here:
https://www.coreview.com/security

 
 

How do you prevent Corey from hallucinating?

CoreView applies rigorous validation processes and controls to maintain response quality and align results with the authenticated user context, available platform data, and the CoreView permission model.

 
 

How are you preventing prompt injection?

We apply multiple layers of protection:

  1. Azure AI Content Safety protections built into the Azure OpenAI platform.
    https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety
  2. Custom LLM guardrails implemented within Corey’s prompts and orchestration layer.
  3. Azure Prompt Shield and jailbreak detection capabilities are used as part of the protection model.
    https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/jailbreak-detection
 
 

How are you preventing data extraction?

Data access through Corey is limited to the authenticated user’s organizational context and the CoreView permission model. The AI layer does not bypass platform permissions, which means Corey can access and return only information that the user is already authorized to access within CoreView.

Additionally, we use tool-level permission controls to allow tenant administrators to define which Corey tools specific users can access, providing more granular governance over the actions users can perform.

 
 

Are prompts and replies encrypted?

Yes. Communications with Azure services are encrypted in transit using TLS, and Azure infrastructure provides encryption and security controls. Zero Data Retention and Azure’s service architecture mean prompts are processed without persistent storage by the AI provider.

 
 

How can I roll back something Corey did if it is wrong?

Corey can execute certain management actions within the CoreView platform, but these actions are never executed silently; they always require explicit user confirmation.

In many cases, supported actions have corresponding reverse actions within the platform. If such an action exists, it can be used to revert the change, again requiring user confirmation.