Resources /
Blog

How to Assess and Strengthen Your Salesforce Security Posture

6 Min Read

Salesforce sits at the center of your revenue engine, housing customer data, workflows, integrations, and the day-to-day processes your teams rely on. Yet most organizations still treat Salesforce security as an afterthought, trusting default settings, relying on perimeter tools, or assuming internal configurations are “good enough.” The result is a growing gap between the increasing complexity of Salesforce environments and teams’ preparedness to secure them.

This guide helps you close that gap. A strong Salesforce security posture comes from proactively assessing internal configurations, data access pathways, and user behaviors—not from relying on default settings or external perimeter tools. By adopting an inside-out, data-centric approach, you can surface the hidden risks that actually threaten your business and strengthen the safeguards that matter most. In this article, you’ll learn how to evaluate your current posture, identify where your org is most exposed, and take actionable steps to harden Salesforce before issues turn into incidents.

Foundation Assessment

Foundation Assessment quantifies current risk exposure across three dimensions: platform configuration compliance, data classification, and identity governance. Completing this phase provides the baseline metrics needed to prioritize all subsequent remediation work.

Run the Baseline Security Health Check

The Salesforce Security Health Check compares over fifty configuration settings against Salesforce-recommended baseline values and produces a score from 0 to 100. Scores of 90 and above indicate excellent performance; scores of 54 or below flag serious gaps. Color-coded indicators mark high-risk, medium-risk, and low-risk items, and the "Fix Risks" button applies recommended settings in bulk.

Salesforce began requiring Multi-Factor Authentication (MFA) in 2022 and completed automatic enforcement for production users in 2024, making missing MFA flags the most common score detractor. After addressing MFA gaps, focus on:

  • Strengthening password complexity and expiration thresholds
  • Shortening session idle time-outs to limit exposure from unattended workstations
  • Replacing expiring certificates used for SSO and integrations

Note that custom domains require the same MFA rigor as primary login pages because excluding them creates a backdoor that attackers actively exploit. 
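
Salesforce does not publish the Health Check's exact weighting formula, but the idea of a risk-weighted compliance score can be sketched as follows; the weights and the sample settings below are purely illustrative:

```python
# Simplified sketch of a Health Check-style score. Salesforce's actual
# weighting is proprietary; these weights are illustrative only.
RISK_WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def health_score(settings):
    """settings: list of (risk_level, compliant) tuples for each checked item."""
    total = sum(RISK_WEIGHTS[risk] for risk, _ in settings)
    earned = sum(RISK_WEIGHTS[risk] for risk, ok in settings if ok)
    return round(100 * earned / total) if total else 100

baseline = [
    ("high", True),    # MFA enforced
    ("high", False),   # password complexity below recommendation
    ("medium", True),  # session timeout within limits
    ("low", True),     # clickjack protection enabled
]
print(health_score(baseline))  # one high-risk gap drags the score well below 90
```

The key property this models is that a single non-compliant high-risk setting costs far more than several low-risk ones, which is why MFA and password-policy gaps dominate real scores.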

Classify and Map Data

Data classification determines which controls to implement because regulatory requirements impose different protections based on information sensitivity. Create an inventory of standard and custom objects, then classify each by sensitivity level.

  • Public: Information intended for broad distribution (marketing content, knowledge articles)
  • Internal: Business data requiring basic access controls (opportunity records, case notes)
  • Confidential: Sensitive information subject to regulatory protection (financial records, health information)
  • Restricted: Highly sensitive data requiring maximum security (authentication credentials, encryption keys)

Map each classification tier to specific regulatory requirements. Different regulations impose distinct obligations on how classified data must be handled throughout its lifecycle. Understanding these mappings ensures that technical controls align with legal mandates from the start.

  • HIPAA-covered entities must protect electronic Protected Health Information (PHI) across all environments, including sandboxes and backups
  • SOX-compliant organizations must track all changes to objects containing financial calculation logic and maintain seven-year audit trails
  • GDPR obligations extend to any EU resident data, regardless of where the organization resides

Document current protection methods for each classification tier, including field-level encryption status, sharing rule restrictions, and backup retention periods. Gaps between regulatory requirements and actual controls become the remediation priorities for Control Implementation.
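
One way to keep these mappings actionable is to encode them in a small lookup that flags control gaps automatically. The tiers and retention values below mirror the examples in this guide; the field names and inventory structure are hypothetical:

```python
# Illustrative mapping from classification tier to minimum controls.
# Retention figures follow the SOX example discussed in this guide.
CONTROLS = {
    "Public":       {"encryption": False, "audit_retention_days": 90},
    "Internal":     {"encryption": False, "audit_retention_days": 90},
    "Confidential": {"encryption": True,  "audit_retention_days": 7 * 365},  # SOX
    "Restricted":   {"encryption": True,  "audit_retention_days": None},     # legal hold
}

def gaps(field_inventory):
    """Return fields whose current protection falls short of their tier's minimum.

    field_inventory: list of dicts with name, tier, and encrypted keys."""
    return [
        f["name"] for f in field_inventory
        if CONTROLS[f["tier"]]["encryption"] and not f["encrypted"]
    ]

inventory = [
    {"name": "Account.SSN__c", "tier": "Confidential", "encrypted": False},
    {"name": "Case.Notes__c", "tier": "Internal", "encrypted": False},
]
print(gaps(inventory))  # only the Confidential field is a remediation priority
```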

Audit Identity and Access Governance

Begin by auditing every Profile and Permission Set using the principle of least privilege, removing any rights users have accumulated that they no longer need. Pay equal attention to guest sites, community users, and integration accounts, which often receive blanket permissions granted during initial setup but never reviewed afterward.

Conduct a Permission Sweep

Focus the permission sweep on these high-risk capabilities:

  • API Enabled: Allows programmatic access that bypasses UI security controls
  • Modify All Data: Grants full read and write access to all objects regardless of sharing rules
  • View All Data: Provides full read access to all objects including those restricted by organization-wide defaults
  • Author Apex: Permits writing code that executes in system context without sharing enforcement

Document business justification for every user holding these permissions. Set a 90-day review date to reassess whether justification remains valid. Establish a removal timeline if users cannot provide sufficient justification—typically 30 days for non-production organizations and 60 days for production.
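
A sketch of the sweep logic, assuming permission assignments have already been exported from the org; the usernames and permission labels here are illustrative:

```python
from datetime import date, timedelta

HIGH_RISK = {"ApiEnabled", "ModifyAllData", "ViewAllData", "AuthorApex"}

def sweep(assignments, today):
    """Flag users holding high-risk permissions and stamp a 90-day review date.

    assignments: {username: set of permission names}."""
    review_by = today + timedelta(days=90)
    return {
        user: {"risky": sorted(perms & HIGH_RISK), "review_by": review_by}
        for user, perms in assignments.items()
        if perms & HIGH_RISK
    }

flagged = sweep(
    {"alice": {"ApiEnabled", "RunReports"}, "bob": {"RunReports"}},
    today=date(2025, 1, 1),
)
print(flagged)  # alice is flagged for review; bob holds no high-risk rights
```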

Align Security Settings with Data Classification

Review object-level, field-level, and record-level security settings to confirm confidential and restricted fields remain hidden from users without documented business need. Cross-reference these settings against the data classification map:

  • Confidential data: enforce field-level security so only users with documented need see the fields
  • Restricted data: grant access through separate permission sets rather than profile-based assignments
  • Public and Internal data: keep organization-wide defaults appropriately scoped

Strengthen Authentication and Session Controls

Access control extends beyond permissions to include where and when users can authenticate. Define trusted IP ranges based on office locations and approved VPN endpoints, then restrict login hours to match business operations in each region. This geographic and temporal restriction prevents credential misuse outside normal business patterns.

Multi-factor authentication prevents unauthorized access even when credentials are compromised. Verify MFA functions end-to-end with your identity provider, including backup authentication methods for account recovery scenarios.

Proactive monitoring detects suspicious activity before it escalates into a breach. Configure Security Center to flag anomalous logins and privilege escalations:

  • Impossible travel patterns indicating credential compromise
  • After-hours administrative changes outside approved maintenance windows
  • Repeated failed login attempts from unfamiliar locations
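
Impossible-travel detection reduces to a speed check between consecutive logins. A minimal sketch, assuming login coordinates are available from geolocated log data and using commercial-flight speed as the plausibility ceiling:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """Flag two logins whose implied travel speed exceeds max_kmh
    (roughly commercial-flight speed). Each login: (lat, lon, epoch_seconds)."""
    dist = haversine_km(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2]) / 3600 or 1e-9
    return dist / hours > max_kmh

# A San Francisco login followed one hour later by a London login
print(impossible_travel((37.77, -122.42, 0), (51.51, -0.13, 3600)))
```

Real monitoring tools refine this with VPN allowlists and known travel patterns, but the underlying signal is exactly this velocity calculation.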

Control Implementation

The Foundation Assessment establishes your security baseline across platform configuration, data classification, and identity governance. Control Implementation builds on these findings by translating them into enforced protections.

This phase hardens security posture through configurations that resist unauthorized changes. Properly implemented controls prevent configuration drift, deployment errors, and policy violations that degrade security between formal reviews.

Deploy Configuration Governance

Configuration governance prevents security drift by enforcing peer review, automated testing, and audit trails before metadata reaches production. Document the complete release path from developer sandboxes to production with three essential elements:

  1. Version control structure that distinguishes feature work from hotfixes through branch naming rules
  2. Peer approval requirements that include documented rollback plans for every deployment
  3. Immutable audit trails that record who approved changes, when deployments occurred, and what metadata moved between environments

Where this release path executes matters as much as how it's documented. Salesforce-native pipelines eliminate external metadata storage and reduce compliance risk by operating entirely within the platform's security boundaries, avoiding the data residency challenges and synchronization failures common in Git-based workflows.
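
Branch naming rules are easy to enforce mechanically. A sketch of a validator for a hypothetical convention that separates feature work from ticket-keyed hotfixes; adapt the patterns to your team's actual rules:

```python
import re

# Hypothetical branch-naming convention; the patterns are assumptions,
# not a Flosum or Salesforce standard.
BRANCH_RULES = {
    "feature": re.compile(r"^feature/[a-z0-9-]+$"),
    "hotfix": re.compile(r"^hotfix/[A-Z]+-\d+$"),  # ticket-keyed, e.g. SEC-1042
}

def classify_branch(name):
    """Return the branch category, or None if the name violates the convention."""
    for kind, pattern in BRANCH_RULES.items():
        if pattern.match(name):
            return kind
    return None

print(classify_branch("feature/encrypt-ssn-field"))  # feature
print(classify_branch("hotfix/SEC-1042"))            # hotfix
print(classify_branch("my-random-branch"))           # None -> reject the push
```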

Automate Security Validation

Run the Code Scanner Portal against every commit before allowing deployment. Gate all releases until test coverage meets minimum thresholds (typically 75 percent for Apex), naming conventions follow documented standards, and secure-coding checks pass without critical violations.

Require Mandatory Code Review

Human reviewers catch issues that automated scanners miss. Mandatory code review applies to all Apex classes and requires verification of security vulnerabilities that could compromise data protection:

  • SOQL injection vulnerabilities where user input concatenates into queries
  • Insufficient governor limit handling that causes runtime failures under load
  • Missing null checks that expose detailed error messages to end users
  • Queries that fail to enforce sharing rules where record-level security applies
  • Sensitive operations performed without explicit permission checks
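
Automated scanners hunt the first pattern with rules like the naive regex sketch below. Production tools (for example, PMD's Apex security rules) use full AST analysis rather than regexes, so treat this only as an illustration of what reviewers and scanners look for:

```python
import re

# Naive rule: flag dynamic SOQL built by string concatenation inside
# Database.query(...). Real scanners analyze the syntax tree instead.
DYNAMIC_SOQL = re.compile(r"Database\.query\s*\([^)]*\+")

def flags_injection(apex_source):
    return bool(DYNAMIC_SOQL.search(apex_source))

unsafe = "Database.query('SELECT Id FROM Account WHERE Name = ' + userInput)"
safe = "Database.query('SELECT Id FROM Account WHERE Name = :name')"
print(flags_injection(unsafe), flags_injection(safe))  # concatenation vs. bind variable
```

The safe variant uses a bind variable (`:name`), which is the standard Apex defense against SOQL injection.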

Govern Declarative Changes

Extend governance discipline to Flow and Process Builder modifications by treating them with the same approval requirements as Apex deployments. Create approval requirements for validation rule changes, workflow rule modifications, and sharing rule adjustments. Log all declarative changes through setup audit trail monitoring and alert administrators when changes occur outside approved maintenance windows.

Implement Data Protection Controls

Data protection requires layered controls that work together to safeguard information throughout its lifecycle. Encryption protects data at rest, field audit trails preserve change history, and backup enables recovery when prevention and detection fail. Together, these three controls form the foundation of a defensible data protection strategy.

Configure Shield Platform Encryption

Shield Platform Encryption secures sensitive data by encrypting it at the field level within Salesforce. Unlike transport-layer encryption that only protects data in transit, Shield encrypts data at rest in the database. When authorized users access encrypted fields, Salesforce automatically decrypts the values. Unauthorized users see only meaningless character strings, even when they have object-level permissions.

Shield Platform Encryption requires continuous attention because organizations evolve. As teams add new fields and objects, coverage gaps emerge. Monthly policy reviews ensure encryption keeps pace with organizational changes by systematically evaluating new fields that match Confidential or Restricted classifications.

Focus each monthly review on objects and fields that carry specific regulatory obligations, such as PHI fields under HIPAA, objects driving financial calculations under SOX, and EU resident personal data under GDPR.

Automate encryption policy validation by configuring Security Center to alert when new fields on sensitive objects lack encryption or when developers modify encrypted fields in ways that could expose data. These alerts shift encryption management from reactive firefighting to proactive governance.

Maintain Field Audit Trail

Field Audit Trail provides the forensic evidence needed to investigate compliance violations, insider threats, and accidental data corruption. Configure Field Audit Trail for all objects containing regulated data, then set retention periods according to data classification:

  • Public and Internal data: 90 days
  • Confidential data (SOX): Seven years for financial records
  • Confidential data (HIPAA): Six years for clinical documentation
  • Restricted data: Indefinite retention for records subject to legal hold

These retention periods align with regulatory requirements while balancing storage costs against forensic value. Automated archival processes move older audit records to long-term storage without manual intervention, ensuring compliance obligations don't inflate operational costs unnecessarily.

Establish Backup and Recovery

Recovery objectives determine backup architecture. Two critical objectives define what backup solutions must deliver:

  • Recovery Point Objective (RPO) represents the maximum acceptable data loss window. Most production organizations require 24-hour RPO, meaning daily backups that capture all changes since the previous backup. Tighter RPOs necessitate more frequent backup cycles, which increase storage consumption and processing overhead.
  • Recovery Time Objective (RTO) measures how quickly service must be restored after an incident. Business-critical systems commonly require 4-hour RTO, necessitating hot standby capabilities rather than cold backup restoration. The gap between RPO and RTO defines the operational window where recovery processes must execute.

Backup solutions must meet specific technical criteria to deliver on these objectives:

  • Backup frequency: Daily incremental backups plus weekly full backups
  • Storage location: Off-platform encrypted repositories in geographically appropriate regions
  • Restore granularity: Supports recovering a single field value, a complete record, and an entire object

Meeting these criteria ensures backup processes provide genuine protection rather than compliance theater. Regular validation through restore drills confirms backup solutions deliver on their technical promises when actual recovery situations demand them. Without periodic testing, backup processes become unverified assumptions that fail precisely when organizations need them most.
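
Whether a backup history actually satisfies the RPO can be checked mechanically. A sketch that scans completed backup timestamps for gaps wider than the RPO window; the timestamps are illustrative:

```python
from datetime import datetime, timedelta

def rpo_violations(backup_timestamps, rpo_hours=24, now=None):
    """Return (gap_start, gap_end) pairs where consecutive backups exceed the RPO.

    backup_timestamps: chronologically sorted datetimes of completed backups."""
    now = now or datetime.now()
    points = backup_timestamps + [now]
    rpo = timedelta(hours=rpo_hours)
    return [
        (points[i], points[i + 1])
        for i in range(len(points) - 1)
        if points[i + 1] - points[i] > rpo
    ]

backups = [datetime(2025, 1, d) for d in (1, 2, 4, 5)]  # the Jan 3 backup was missed
print(rpo_violations(backups, rpo_hours=24, now=datetime(2025, 1, 5, 12)))
```

Running a check like this daily turns a silent backup failure into an alert the same day, rather than a discovery during an actual recovery.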

Operational Monitoring

Operational monitoring converts static security controls into active defense systems that detect threats as they occur and coordinate response before significant damage occurs. Without monitoring, security controls operate blind—access rules enforce permissions but don't alert when those permissions are misused, encryption protects data but doesn't flag suspicious decryption patterns, and backup creates recovery points but doesn't validate they work when needed. Monitoring bridges this gap by continuously analyzing user behavior, system events, and data access patterns to identify anomalies that signal potential threats.

This phase reduces average dwell time—the window between initial compromise and detection—from industry-standard weeks or months to hours or minutes. Faster detection limits attacker opportunities to escalate privileges, exfiltrate data, or establish persistent access.

Activate Continuous Monitoring

Continuous monitoring detects security incidents by analyzing real-time activity across the Salesforce environment. Event Monitoring provides the foundation by capturing detailed logs of API calls, login attempts, permission changes, and data access. These logs provide the raw forensic data needed to reconstruct attack timelines, identify affected records, and determine appropriate containment actions.

Configure Event Monitoring to stream logs to the organization's SIEM platform rather than relying on manual log reviews. Automated streaming enables correlation with events from other enterprise systems, creating visibility across attack chains that span multiple platforms. Set alerts for these high-risk activities:

  • Login attempts from unknown IP addresses or impossible travel scenarios
  • API calls querying more than 10,000 records in a single session
  • Permission set assignments outside approved change windows
  • Encryption policy modifications
  • Mass record deletions or transfers

Detection thresholds require continuous tuning to balance sensitivity against alert fatigue. Configure thresholds conservatively at first based on historical activity baselines, then tune sensitivity weekly as the system learns normal operational patterns. A threshold set too low generates false positives that train security teams to ignore alerts. A threshold set too high misses genuine threats until they've already caused damage.
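
The tuning guidance above can be expressed as a simple rule: derive the threshold from historical baselines, but never drop below a fixed floor such as the 10,000-record limit. A sketch with illustrative numbers:

```python
from statistics import mean, stdev

def tuned_threshold(baseline_counts, sigmas=3, floor=10_000):
    """Alert threshold from historical activity: mean + N standard deviations,
    never below a fixed floor. The sigma count and floor are illustrative."""
    return max(floor, mean(baseline_counts) + sigmas * stdev(baseline_counts))

def should_alert(records_queried, threshold):
    return records_queried > threshold

history = [1200, 900, 1500, 1100, 1300]  # records queried per session, per day
t = tuned_threshold(history)
print(t, should_alert(25_000, t), should_alert(4_000, t))
```

As the baseline grows noisier, the statistical term rises above the floor and the threshold loosens automatically, which is the weekly tuning loop described above expressed as code.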

Login Forensics supplements Event Monitoring by analyzing authentication patterns and flagging compromised credentials before attackers can pivot from initial access to sensitive data exfiltration. Transaction Security policies take monitoring a step further by moving from detection to prevention. These policies define real-time actions when suspicious activity occurs—blocking transactions entirely, requiring step-up multi-factor authentication, or triggering immediate security team notifications.

Validate Backup and Recovery Procedures

Backup validation confirms recovery capabilities work as expected under realistic operational conditions. Regular testing ensures teams understand recovery procedures, organizational requirements remain aligned with backup configurations, and technical execution matches documented processes. Testing transforms backup from theoretical insurance into practical operational capability.

Quarterly recovery drills provide comprehensive validation. Execute drills during business hours with full team participation to simulate the coordination demands of actual incidents. Each drill should test different recovery scenarios:

  • Restore a complete sandbox from production backup, verifying all metadata components, data records, and file attachments return intact
  • Test granular recovery by restoring a single deleted record from last week's backup
  • Verify field-level restoration by recovering a single field value from a specific datetime three months ago
  • Time each recovery operation from detection through validation to confirm actual performance meets RTO targets

Document drill results to track recovery performance over time. Measuring restore times, identifying procedural gaps, and capturing lessons learned creates continuous improvement in incident response capability. Teams that execute drills quarterly develop muscle memory for recovery procedures that teams without practice lack when emergencies occur.

Validate that backup processes capture all content types based on how the organization uses Salesforce. Different organizations store files in different locations depending on their workflows and integrations:

  • Salesforce Files that store current documents
  • Chatter file attachments from internal collaboration
  • Email attachments stored in EmailMessage records
  • External storage like AWS S3 or Azure Blob

Confirm backup data resides in approved regions that satisfy data residency mandates. GDPR requires EU resident data remain in the EU unless organizations document appropriate cross-border safeguards. FedRAMP High requires government data stay within US-based, specifically authorized cloud regions. Organizations must verify backup storage locations align with the same compliance requirements that govern production data.

Establish Incident Response Workflows

Incident response workflows structure the human coordination that turns monitoring alerts into effective containment actions. Monitoring systems detect threats, but humans decide how to respond. Without predefined workflows, incident response devolves into ad-hoc decision-making under pressure, which consistently produces slower and less effective outcomes than structured processes.

Define incident classification tiers based on business impact and required response urgency:

  • Priority 1: Active data breaches requiring immediate containment within 15 minutes (Modify All Data misuse, bulk API exports of regulated records)
  • Priority 2: Suspicious activity requiring investigation within business hours (unusual login patterns, failed privilege escalation attempts)
  • Priority 3: Policy violations without immediate security risk (unapproved sandbox changes, expired certificates)
  • Priority 4: Informational alerts from routine monitoring (nightly job failures, performance threshold breaches)

Each tier carries different response expectations and resource commitments, ensuring genuine emergencies receive immediate attention while routine issues follow normal business processes. Without tiering, everything becomes urgent or nothing becomes urgent as teams lose the ability to distinguish signal from noise.
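
Encoding the tiers as data keeps response deadlines unambiguous during an incident. In the sketch below, the Priority 1 and 2 windows come from the tiers above, while the Priority 3 window is an illustrative assumption:

```python
from datetime import datetime, timedelta

# Priority 1 and 2 deadlines follow the tiers above; the Priority 3 window
# is an assumed value, and Priority 4 alerts carry no deadline.
RESPONSE_SLA = {
    1: timedelta(minutes=15),
    2: timedelta(hours=8),   # "within business hours"
    3: timedelta(days=5),
    4: None,
}

def respond_by(priority, detected_at):
    """Return the containment deadline for an alert, or None for informational tiers."""
    sla = RESPONSE_SLA[priority]
    return detected_at + sla if sla else None

alert = datetime(2025, 3, 1, 2, 30)  # 02:30 bulk export of regulated records
print(respond_by(1, alert))  # containment due fifteen minutes later
```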

Define response ownership for each classification tier. Clear assignment prevents the diffusion of responsibility where everyone assumes someone else will respond:

  • Primary contact: First responder with 24/7 availability for Priority 1 events
  • Secondary contact: Backup responder if primary is unavailable
  • Tertiary contact: Final escalation point for off-hours incidents
  • Escalation path: Clear chain from security analyst through CISO to legal counsel

These contact hierarchies transform general incident response principles into specific accountability that responders can execute without interpretation. During actual incidents, responders need immediate clarity about who does what, not philosophical frameworks about incident response theory.

Create detailed playbooks for common scenarios. Playbooks eliminate the need for real-time decision making during high-stress incidents by predefining exact steps for the most common attack patterns.

  • Compromised credential playbooks: Immediate password reset procedures, commands to terminate all active sessions, steps to audit recent permission changes and data access
  • Bulk data export playbooks: Interview questions for account owners, log analysis procedures to identify all exported records, criteria for account suspension versus increased monitoring
  • Unauthorized permission change playbooks: Immediate rollback of grants, complete audit trail analysis, policy enforcement reviews

Test procedures through quarterly tabletop exercises that simulate realistic attack scenarios. Measure time from initial alert generation through containment action completion, targeting under 15 minutes for Priority 1 events and under 4 hours for Priority 2 events. Exercises reveal gaps in playbooks, contact lists, and technical procedures while the cost of failure remains zero.

Track Progress with Key Metrics

Security posture improvement requires objective measurement. Track three complementary metrics that together provide a complete view of security posture evolution:

Security Health Check Score

Target a score of 90 or above, indicating Excellent performance across password policies, session management, certificate validity, and MFA enforcement. Scores below 80 suggest preventive controls have degraded and require immediate attention. Review this metric monthly after each release cycle.

Percentage of Users with High-Risk Permissions

Target under 5 percent of all users holding any combination of API Enabled, Modify All Data, View All Data, or Author Apex permissions. Percentages above 10 percent indicate permission creep, where convenience overrides security discipline. Calculate this metric after each permission set modification and report trends quarterly to governance councils.

Mean Time to Remediate Findings

Target under 30 days from finding creation in Security Center through validation of deployed remediation. Complex findings requiring architecture changes may exceed this threshold, while configuration fixes should resolve in under 7 days. Track this metric weekly through Security Center dashboards to identify bottlenecks.
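
Both permission exposure and remediation speed reduce to simple arithmetic once the underlying data is exported from the org. A sketch with illustrative figures:

```python
from datetime import date

def high_risk_pct(total_users, high_risk_users):
    """Percentage of users holding any high-risk permission; target under 5%."""
    return 100 * high_risk_users / total_users

def mean_days_to_remediate(findings):
    """Mean days from finding creation to validated remediation; target under 30.

    findings: list of (created, resolved) date pairs for closed findings."""
    days = [(resolved - created).days for created, resolved in findings]
    return sum(days) / len(days)

print(high_risk_pct(400, 12))  # within the 5 percent target
closed = [
    (date(2025, 1, 1), date(2025, 1, 8)),    # quick configuration fix
    (date(2025, 1, 5), date(2025, 2, 10)),   # slower architecture change
]
print(mean_days_to_remediate(closed))  # within the 30-day target
```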

Establish Governance Cycles

Quarterly reassessment cycles prevent gradual degradation by forcing regular evaluation against these metrics. Convene a cross-functional governance council including security team members who implement controls, compliance officers who interpret regulatory requirements, and business owners who balance security with operational needs.

Strengthen Security Posture with Flosum

Salesforce ships three releases each year that introduce features, modify security defaults, and occasionally expose vulnerabilities. Continuous posture assessment provides the only reliable defense against this constant evolution.

Flosum supports comprehensive security assessment through Salesforce-native DevSecOps pipelines that maintain security controls within Salesforce's boundaries. The platform applies automated code scans at every commit, enforces policy-based deployment gates, and creates traceable approval workflows that satisfy audit requirements.

Unlike Git-based tools that export metadata to external repositories, Flosum's native architecture keeps data and metadata inside Salesforce, maintaining data residency guarantees while automating the configuration governance and change tracking essential to operational monitoring.

Request a demo with Flosum to see how native security automation reduces risk exposure while maintaining development velocity.
