
Zero Trust Security for Salesforce: Implementation Guide

3 Min Read

Customer records, financial data, and regulated information live inside Salesforce, making the platform a prime target for credential theft, insider misuse, and risky integrations. Traditional "castle and moat" controls rely on a single perimeter, but in cloud ecosystems where users connect from any network and APIs exchange data continuously, that perimeter no longer exists. Attackers who obtain one valid credential can move laterally and exfiltrate data in minutes.

Zero Trust replaces implicit trust with continuous verification. The model requires every user, device, and integration to authenticate, authorize, and validate risk at each request. 

Implementing Zero Trust in Salesforce reduces incident scope, simplifies audit preparation, secures remote workforces, and satisfies GDPR, HIPAA, and PCI requirements. Organizations gain faster response times, clearer compliance evidence, and smaller attack surfaces without slowing development.

The following checklist translates each Zero Trust pillar into specific Salesforce tasks, strengthening identity controls, limiting privileges, segmenting sensitive data, securing integrations, and monitoring activity in real time.

1. Verify Identity and Control Access

Most security breaches begin with compromised credentials, making identity verification the first line of defense. Zero Trust begins by asking one fundamental question at every access attempt: who or what is trying to reach Salesforce data? Answer that question by combining mandatory multi-factor authentication with context-aware controls that block suspicious logins before they ever reach sensitive records.

Salesforce no longer accepts passwords alone as sufficient proof of identity. The platform's evolving threat landscape and the rise of credential-stuffing attacks have made single-factor authentication unacceptably risky.

Multi-Factor Authentication

Since February 2022, Salesforce support agreements require MFA for every user who signs in through the UI. Skipping this control exposes organizations to account-takeover risk and potential service disruption. Organizations that delay implementation face not only security gaps but also the possibility that Salesforce support will decline assistance during critical incidents.

Meet this requirement through three approaches:

  • Enable Salesforce-native MFA directly in Setup and ask each user to register a second factor
  • Integrate Single Sign-On with an identity provider that already enforces MFA
  • Mix both approaches for hybrid environments

Supported factors include the Salesforce Authenticator mobile app, time-based one-time password (TOTP) apps, physical security keys, and device biometrics. Users should register at least two factors to avoid lockout if one device fails or gets lost.

Conditional Access Policies

MFA answers "who," but Zero Trust also asks "under what conditions" before granting access. A valid username, password, and second factor prove identity, but they do not prove the context is safe. An employee's credentials could be replayed from an attacker's infrastructure halfway around the world, or a compromised device could request access from an untrusted network.

Strengthen those conditions with configuration that denies risky contexts automatically:

  • Login IP Ranges restrict access to approved corporate networks or require step-up authentication when users connect from unexpected locations
  • Device activation requires new browsers and mobile devices to complete out-of-band verification before sessions start

These controls prevent attackers who steal credentials from immediately accessing the platform from their own hardware.
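The IP-range check behind this control is conceptually simple. A minimal sketch, with hypothetical corporate CIDR ranges standing in for the values an admin would enter under Login IP Ranges in Setup:

```python
import ipaddress

# Hypothetical approved corporate networks; real values live in profile
# Login IP Ranges or network access settings in Setup.
APPROVED_RANGES = [ipaddress.ip_network(cidr)
                   for cidr in ("203.0.113.0/24", "198.51.100.0/24")]

def login_allowed(source_ip: str) -> bool:
    """Allow the login only when it originates from an approved network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in APPROVED_RANGES)
```

A request from outside every approved range is denied (or, in a step-up configuration, routed to additional authentication) before any session is established.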

Session Controls

Even authenticated, authorized users pose risk when their sessions remain active longer than necessary. An idle laptop in a coffee shop or a forgotten browser tab on a shared workstation creates an opening for unauthorized access. Session controls reduce this exposure by automatically terminating inactive connections and requiring higher authentication assurance for sensitive operations.

Lower idle-timeout values so inactive sessions expire quickly: consider 30 minutes for standard users and 15 minutes for privileged accounts. Raise session security levels to require high-assurance authentication for operations like viewing encrypted fields or modifying security settings. These controls operate at the org level and apply consistently across all users, eliminating the risk of per-user misconfigurations.

My Domain

Deploy My Domain to give the organization a dedicated subdomain like company.my.salesforce.com instead of generic login URLs. This configuration provides multiple security benefits: it limits session hijacking by ensuring authentication cookies are bound to a specific domain, enables custom login policies such as redirecting all logins through an identity provider, and provides the foundation for secure external communities. My Domain also prevents phishing attacks that rely on users entering credentials into generic Salesforce login pages that could be spoofed.

With identities verified and sessions controlled, the next layer determines what those authenticated users can actually access.

2. Enforce Least Privilege

Authentication proves who users are, but authorization determines what they can do. An authenticated user with excessive permissions creates nearly as much risk as an unauthenticated attacker: credential compromise, insider threats, and accidental data exposure all stem from accounts that hold more access than their job functions require. The principle of least privilege addresses this by restricting every user to only the minimum permissions necessary to complete their work.

Building Blocks of Least Privilege

Salesforce provides four components that work together to create granular access controls. Each component serves a distinct purpose in building a complete authorization model:

  • Profiles establish baseline object, field, and system permissions for every user. Start every new user with the "Minimum Access User" profile, then layer extra rights only when justified by documented business need. This approach, known as "deny by default," prevents privilege creep and forces explicit decisions about every permission grant.
  • Permission Sets add incremental capabilities without bloating profiles or creating dozens of profile variants. A sales manager might receive an additional permission set for report creation, while peers remain limited to viewing data. This separation allows modification of reporting permissions for all managers by editing a single permission set rather than hunting through multiple profiles.
  • Role Hierarchy controls record visibility through automatic sharing. Design it to mirror business reporting lines only when transparency is essential, not as a default. Many organizations create deep role hierarchies that unintentionally grant managers access to records far beyond their actual responsibilities.
  • Permission Set Groups bundle related permission sets so administrators can grant consistent, auditable access with a single assignment. When a new sales operations analyst joins, assign the "Sales Operations" permission set group that includes reporting, data export, and dashboard editing.

The architecture scales cleanly as job functions evolve and new capabilities emerge. When you start with minimal permissions and add only what's needed, adjusting access becomes straightforward: grant a new permission set when responsibilities expand, revoke it when they change.
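The additive model described above reduces to a simple set union: a user's effective rights are the profile baseline plus every assigned permission set. A sketch, with made-up permission names:

```python
def effective_permissions(profile_perms: set[str],
                          permission_sets: dict[str, set[str]],
                          assigned: set[str]) -> set[str]:
    """A user's effective rights: profile baseline plus assigned permission sets."""
    perms = set(profile_perms)          # "deny by default" baseline
    for name in assigned:
        perms |= permission_sets[name]  # each assignment only ever adds rights
    return perms
```

Because permission sets only add capabilities, revoking one assignment cleanly returns the user to the documented baseline, which is exactly why starting minimal matters.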

Most organizations take the opposite approach and create problems for themselves. They start by assigning "Standard User" or role-specific profiles that include broad permissions by default. Users become accustomed to accessing everything. Later, when security teams attempt to remove unnecessary rights, users complain about losing access they've come to expect—even if they never needed it in the first place. Removing permissions after the fact becomes a political battle rather than a technical task.

Continuous Access Review

Access requirements drift over time as employees change roles, projects end, and new features launch. Run Salesforce Optimizer each quarter to identify users with administrative permissions, profiles with "View All" or "Modify All" rights, and permission sets that have not been reviewed in over a year. Compare Optimizer results to deployment diff reports that show recent permission changes, and present deviations to management for approval. This process exposes stale service accounts, unused permission sets, and hidden privilege creep that accumulates between formal reviews.

Regular entitlement reviews should answer three critical questions:

  • Does this user still require this access for their current role?
  • Are there accounts that have not logged in for 90+ days that should be deactivated?
  • Are there permission sets assigned to users who no longer perform those functions?

These reviews create accountability and prevent the gradual accumulation of unnecessary privileges.

Just-in-Time Elevation

For tasks that truly demand elevated rights, such as data loads, sandbox refreshes, or one-time configuration changes, use just-in-time (JIT) elevation rather than granting permanent admin access. An approval flow can grant a time-boxed permission set, revoke it automatically after two hours, and log every action taken during that window. This pattern delivers the flexibility users need while keeping auditors satisfied and reducing standing privileges that attract attackers.

Narrow permissions reduce risk, but even authorized users should not see everything or reach everywhere. The next step creates internal boundaries around sensitive systems and data.

3. Segment and Encrypt Data

Properly configured permissions limit what users can do, but segmentation limits where they can go. A compromised account with narrow object permissions still poses a significant risk if that account can traverse the entire environment, accessing development sandboxes, testing integrations, and moving between business units without restriction. Effective segmentation builds internal perimeters inside Salesforce, stopping attackers from pivoting even when they gain a foothold.

Traditional network segmentation uses firewalls and VLANs to isolate systems, but cloud platforms require different approaches. Salesforce segmentation operates through environment boundaries, data classification, and cryptographic controls that keep sensitive information protected even when accessed by authorized users.

Environment Separation

Separate production, development, and integration sandboxes into distinct environments so code defects, data corruption, or configuration errors stay contained. A developer testing a new validation rule should not risk disrupting active sales processes, and sandbox refreshes that overwrite data should not touch production records. Most organizations understand this separation conceptually but fail to enforce it through access controls: developers often hold production admin rights "for emergencies," creating a bypass that attackers exploit.

When external communities enter the picture, serve them through Experience Cloud sites that maintain their own authentication, authorization, and session management. External users should never authenticate against the core Salesforce org or share session tokens with internal employees. This separation prevents compromised partner accounts from accessing internal systems and limits the blast radius when community features contain vulnerabilities.

Data Segmentation

Segment the data itself through multiple layers that work together to restrict visibility. Each layer addresses a different aspect of data access control:

  • Object-level CRUD (Create, Read, Update, Delete) settings remove whole data sets from view. If users do not need to see opportunity records, do not just hide the tab; remove object-level read permission so those records never appear in searches, reports, or API queries.
  • Field-level security hides sensitive attributes like social security numbers or salary data even when users can access the parent record.
  • Record-level sharing rules tighten access based on business ownership, ensuring sales reps see only their own accounts rather than the entire customer database.

This layered approach creates defense in depth. An attacker who compromises a service rep's account gains access to cases but not opportunities. Even within cases, they cannot see credit card numbers or other fields restricted by field-level security. Each layer reduces the potential damage from any single account compromise.
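The three layers evaluate independently, and any one of them can deny. A minimal sketch of that decision order, using a simplified user/record model rather than real Salesforce metadata:

```python
def can_read_field(user: dict, obj: str, record: dict, field_name: str) -> bool:
    """Evaluate the three access layers in order; any layer can deny."""
    if obj not in user["object_read"]:                        # object-level CRUD
        return False
    if field_name in user["hidden_fields"].get(obj, set()):   # field-level security
        return False
    return (record["owner"] == user["id"]                     # record-level sharing
            or user["id"] in record.get("shared_with", set()))
```

Running the service-rep scenario through this check shows the layering: the rep reads case subjects, but neither restricted case fields nor any opportunity data.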

Shield Platform Encryption

Shield Platform Encryption secures sensitive fields at rest while preserving search and validation capabilities. Unlike field masking solutions that replace real data with dummy values, Shield encryption stores encrypted data in Salesforce databases and decrypts it only when authorized users request access. The encryption keys remain under organizational control through Salesforce's Key Management System, meeting compliance requirements for data protection.

This approach delivers several advantages over external encryption:

  • Encrypted data remains searchable: queries can find customers by encrypted social security number without decrypting the entire field
  • Validation rules and workflow automation continue to function because Salesforce decrypts data temporarily during processing
  • Organizations maintain granular control over which users and integrations can access decrypted values through permission sets and field-level security

Organizations in healthcare, financial services, and government frequently use Shield encryption to satisfy HIPAA, PCI-DSS, and FedRAMP requirements without rebuilding their entire data model.

With proper segmentation protecting internal users and data, external integrations require the same zero-trust treatment.

4. Secure Integrations

Human users operate through verified identities and narrow permissions, but integrations often bypass those controls entirely. Service accounts frequently hold "API Enabled" plus "View All Data" and "Modify All Data," creating a single compromise point that grants complete database access. External APIs connect through overly broad OAuth scopes, and certificates remain unchanged for years despite security teams requiring quarterly rotation for all other credentials.

Each API connection, service account, and middleware layer introduces risk if not properly constrained. The same Zero Trust principles that govern human access must extend to every machine-to-machine connection, with integrations treated as untrusted by default until they prove context, scope, and purpose for each request.

Named Credentials and OAuth Scopes

Connect each application through Named Credentials that encapsulate endpoint URLs, authentication methods, and permission scopes in a single, centrally managed configuration. This approach prevents individual developers from hardcoding credentials into Apex classes or storing API keys in custom settings where they are difficult to rotate and impossible to audit effectively.

Use the least-privileged OAuth scope for each integration rather than granting "Full Access" by default. A marketing automation platform that needs to create leads and update campaigns does not require the ability to modify user permissions or delete all records. Review Connected Apps quarterly to identify unused credentials: many organizations discover that abandoned integration projects still hold active API access months after teams stopped using them. Revoke unused credentials immediately rather than waiting for annual security audits.
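A quarterly scope review can be reduced to a set difference between what each Connected App holds and what its integration is documented to use. The app names and scope strings below are hypothetical examples, not real Connected App values:

```python
def scope_review(apps: dict) -> dict:
    """Flag every app whose granted OAuth scopes exceed its documented needs."""
    return {name: cfg["granted"] - cfg["required"]
            for name, cfg in apps.items()
            if cfg["granted"] - cfg["required"]}
```

Apps that surface in the result either get their scopes narrowed or, if the integration is abandoned, have their credentials revoked outright.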

Integration User Profiles

Create dedicated integration user profiles that restrict IP ranges to approved middleware servers and deny login through the Salesforce UI. Never reuse human user profiles for API connections: service accounts should be immediately identifiable through naming conventions like api.integration.marketing@company.com and should appear in usage reports as distinct from human users. This separation allows security teams to apply different policies: human users might tolerate IP restrictions with VPN exceptions, while integration users should fail hard when requests arrive from unexpected sources.

Rotate certificates and tokens on a fixed schedule: quarterly at minimum, and monthly for high-security environments. Many organizations implement certificate rotation for web servers and internal applications, but treat Salesforce API credentials as permanent, creating long-lived secrets that become attractive targets. Automated rotation through infrastructure-as-code prevents the operational overhead from becoming a barrier.
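An automated rotation job needs only one piece of logic: is this credential past its window? A sketch of that check, with the 90-day quarterly default from above:

```python
from datetime import datetime, timedelta

def rotation_overdue(last_rotated: datetime, now: datetime,
                     interval_days: int = 90) -> bool:
    """True when a credential has outlived its rotation window (90 days = quarterly)."""
    return now - last_rotated > timedelta(days=interval_days)
```

Run against an inventory of certificates and tokens, the check surfaces every credential due for replacement; high-security environments simply pass `interval_days=30`.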

API Gateway Controls

When using external API gateways, configure filters to block unapproved endpoints and throttle request bursts that signal misuse. An integration designed to sync leads once per hour should not suddenly request 10,000 API calls in five minutes: that pattern indicates compromised credentials being used for data exfiltration. Set rate limits that align with expected integration patterns and alert when thresholds are exceeded rather than silently allowing overages.
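The burst pattern described above is typically caught with a sliding-window counter. A minimal sketch of the detection logic a gateway might apply, independent of any particular gateway product:

```python
from collections import deque

class BurstDetector:
    """Flag an integration whose request rate exceeds its expected pattern."""
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, now: float) -> bool:
        """Record one request; return True when the burst threshold is exceeded."""
        self.timestamps.append(now)
        # Drop requests that have aged out of the sliding window
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_requests
```

Thresholds are set from each integration's normal cadence (an hourly lead sync might allow a few hundred calls per hour), so a sudden 10,000-call burst trips the alarm immediately.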

API gateways also provide centralized logging that correlates Salesforce requests with network events, helping security teams identify lateral movement. An attacker who compromises a middleware server and attempts to access Salesforce will generate API traffic alongside filesystem changes and network reconnaissance, patterns that are difficult to detect by monitoring Salesforce alone but become obvious when correlated with other security signals.

Strong controls at every layer reduce risk, but continuous monitoring ensures those controls remain effective as configurations drift and attackers probe for weaknesses.

5. Monitor and Respond

Static security controls degrade over time as configurations drift, users accumulate permissions, and attackers adapt their techniques. An identity policy that blocks suspicious logins today becomes ineffective tomorrow when attackers shift to credential phishing that provides valid session tokens. Continuous monitoring provides the visibility needed to detect suspicious activity immediately, investigate incidents with complete forensic context, and prevent data loss before it spreads across the environment.

Salesforce generates extensive telemetry about user behavior, system changes, and data access, but most organizations lack the tooling or processes to analyze that data in real time. Event logs accumulate in storage but never get reviewed until after a breach is discovered through other means. Effective monitoring requires three components: native Salesforce telemetry that captures detailed events, automated policies that respond to threats without human intervention, and SIEM integration that correlates Salesforce activity with broader security signals.

Native Salesforce Telemetry

Real-Time Event Monitoring streams fifteen event types, including API calls, report exports, login attempts, and permission changes, within seconds of occurrence rather than the hourly batches provided by standard event logs. This low latency enables immediate response to suspicious activity. Combine Real-Time Event Monitoring with Transaction Security Policies to automatically block or quarantine risky actions before they complete. A policy might terminate sessions that download more than 10,000 records in a single report export, or require step-up authentication when users access encrypted fields from new devices.
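Real Transaction Security Policies are built with Salesforce's condition builder or Apex; the sketch below only illustrates the shape of the decision logic for the two examples above, with hypothetical event fields and action names:

```python
def evaluate(event: dict, max_export_rows: int = 10_000) -> str:
    """Illustrative policy: block oversized exports, step up risky field access."""
    if event.get("type") == "ReportExport" and event.get("rows", 0) > max_export_rows:
        return "END_SESSION"       # terminate before the export completes
    if event.get("type") == "EncryptedFieldAccess" and event.get("new_device"):
        return "REQUIRE_STEP_UP"   # demand high-assurance re-authentication
    return "ALLOW"
```

The point is that the policy executes inline with the event, so the risky action is blocked or challenged before it finishes rather than flagged in a report hours later.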

Setup Audit Trail records every configuration change with timestamp, user ID, and before/after values. This audit trail becomes essential during incident response when security teams need to reconstruct how an attacker moved from initial access to privilege escalation. Field Audit Trail (part of Shield) extends this capability to data changes, tracking modifications to sensitive fields across their entire history. Security teams can determine not just who changed a credit card number today, but who accessed it over the past ten years and whether any of those accesses occurred outside normal business patterns.

For multi-org environments, Security Center aggregates events from all connected orgs, identifies anomalies through baseline comparison, and benchmarks security settings against Salesforce's recommended configurations. Organizations with dozens of production orgs, common in enterprises that grew through acquisition, use Security Center to detect configuration drift when regional teams disable MFA or grant overly broad permissions without central approval.

SIEM Integration

Native logs become more powerful when exported to a security information and event management platform that correlates Salesforce events with network alerts, endpoint telemetry, and threat intelligence. A Salesforce login from an unusual location becomes a critical alert when the SIEM correlates it with a phishing email received by that user three hours earlier. An API request that queries 50,000 account records becomes obvious data exfiltration when the SIEM shows simultaneous outbound network connections to a file-sharing service.

Export Salesforce events to the SIEM using platform-native connectors that stream data continuously rather than batch exports that introduce delays. Configure automated response playbooks that disable OAuth tokens immediately when suspicious API patterns appear, create incident tickets that route to the security operations team, and trigger additional monitoring on related accounts that might be compromised through the same attack vector.

Alert Prioritization

Not every security event warrants immediate response: the challenge lies in separating true threats from normal business activity. Prioritize alerts for behaviors that signal immediate risk and have high confidence of malicious intent:

  • Impossible-travel logins appearing minutes apart on different continents indicate credential theft, especially when the second location matches known attacker infrastructure
  • Sudden spikes in data export or API throughput from accounts that normally generate low activity suggest compromised service accounts being used for data exfiltration
  • Privilege escalation attempts, such as users granting themselves "Modify All Data" or adding colleagues to administrative permission sets, represent active attacks in progress
  • Administrative actions outside business hours or change-control windows may indicate legitimate emergency work, but they require immediate validation to rule out attacker activity

These high-confidence alerts should trigger automated containment followed by human investigation, while lower-confidence signals get routed to security analysts for review during normal business hours.
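The impossible-travel check in the first bullet reduces to geometry: compute the distance between the two login locations and ask whether any traveler could cover it in the elapsed time. A sketch, with a 900 km/h airliner ceiling as an assumed threshold:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def impossible_travel(login_a: dict, login_b: dict, max_kmh: float = 900.0) -> bool:
    """Flag two logins whose implied travel speed no traveler could achieve."""
    dist = haversine_km(login_a["lat"], login_a["lon"],
                        login_b["lat"], login_b["lon"])
    hours = abs(login_b["ts"] - login_a["ts"]) / 3600.0
    if hours == 0:
        return dist > 0
    return dist / hours > max_kmh
```

Two logins from New York and London ten minutes apart imply a speed of tens of thousands of km/h and trip the alert; the same pair eight hours apart is consistent with an ordinary flight and does not.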

Combining native telemetry, automated policies, and SIEM analytics creates a closed loop of detection, investigation, and response. This monitoring layer completes the Zero Trust foundation, but manual implementation introduces gaps during deployments and configuration changes. The final step automates enforcement so security controls remain consistent across every release.

Automate Policy Enforcement

Zero Trust principles fail the moment they require human discipline to enforce. Security teams configure MFA, audit permissions quarterly, and monitor production religiously, yet developers bypass those controls by granting themselves admin rights in sandboxes, hardcoding credentials during testing, and deploying permission sets that violate policy. An attacker who compromises a developer account inherits their sandbox privileges, pivots through inadequately segmented environments, and deploys backdoors that manual reviews miss because security teams only inspect production after deployment completes.

Organizations face a strategic choice: treat security as a gate that slows releases, or treat it as automation that accelerates them. When security requires manual approval at every step, engineering teams face impossible tradeoffs. Wait days for sign-off and miss delivery windows, or bypass governance during "emergencies" that quietly become standard practice. Both paths fail. The organizations that win automate security enforcement so controls execute instantly—faster than humans can route around them.

Flosum applies Zero Trust enforcement directly inside the development pipeline. Approval workflows prevent single-user deployments, static code analysis blocks permission sets that violate policy, and AI-driven conflict detection surfaces manual overrides that hide privilege escalation. Because Flosum DevOps is 100% native to Salesforce, metadata never crosses trust boundaries during version control or deployment. The result: security becomes a deployment accelerant rather than a barrier, compliance evidence is generated automatically, and compromised accounts cannot modify production settings without triggering immediate containment.

Request a demo to see how Flosum brings Zero Trust to every change promoted.
