73% of Salesforce Data Loss Comes From Inside Your Org

TL;DR

Enterprise Strategy Group research finds that 73% of SaaS data loss stems from internal incidents (accidental deletion, failed integrations, bad deployments, insider mistakes), not external attacks. Salesforce operates on a shared responsibility model: they protect the platform, you protect your data. This reframes Salesforce backup strategy. Most enterprises plan for ransomware but get hit by the admin who clicked the wrong filter. The right architecture optimizes for fast, granular, point-in-time recovery, not just full-org snapshots.

Key takeaways:

  • ESG attributes 73% of SaaS data loss to internal causes. Ransomware and external attacks are the minority case.
  • Salesforce’s native protections (Recycle Bin: 15 days; Data Recovery Service: 6–8 weeks, $10,000+) were not designed for enterprise-scale internal-threat scenarios.
  • Salesforce now offers paid native backup products (Salesforce Backup, plus the acquired Backup & Recover from Own Company), but each has gaps in metadata and Big Objects coverage.
  • Recovery speed and granularity matter more than backup completeness. Industry research pegs IT downtime at roughly $5,600 per minute, upwards of $300,000 per hour.
  • Compliance frameworks (SOX, GDPR, HIPAA, FINRA) treat backup-and-recovery capability as a mandatory control, not an optional safeguard.

Why backup planning is usually pointed at the wrong threat

Most enterprise data protection planning focuses on the dramatic threat: ransomware, exfiltration, malicious insiders, headline-making breaches. That focus is reasonable but incomplete. Look at the actual incidents that take Salesforce orgs offline, and a different pattern emerges. The primary cause of Salesforce data loss is not an attacker on the outside. It is a teammate on the inside, with valid credentials, doing something unintentional.

This post unpacks what the research shows for SaaS environments specifically, why Salesforce’s native protections were designed for a different threat model, and how to build a backup strategy that actually matches the risks your org faces.

Where the 73% comes from

In its research on data protection cloud strategies, Enterprise Strategy Group surveyed enterprises across SaaS environments and found that 73% of data loss incidents traced to internal causes. The breakdown is illuminating:

  • Accidental deletion by authorized users accounts for roughly 20% of incidents on its own.
  • Internal malicious deletion contributes another 6%.
  • The remainder of the 73% comes from failed integrations, configuration errors that cascade into data corruption, sandbox-to-production deployment mistakes, and other human-driven causes.

External and malicious actors account for the remaining 27%. The popular narrative around ransomware and external attacks is real, but it describes the minority of actual incidents. The threats inside the building, in other words, are statistically much bigger than the threats outside it.

What “internal” actually looks like in Salesforce

Internal data loss in a Salesforce org is rarely dramatic. It is more often quiet, incremental, and discovered days or weeks after the fact. The most common patterns:

  • An admin runs a bulk update through Data Loader. A column maps incorrectly. 30,000 contact records get the wrong email format, and outbound campaigns start bouncing the next morning.
  • An integration process between Salesforce and an external system pushes a bad payload during an unattended overnight run. Records are not deleted; they are silently corrupted.
  • A deployment from sandbox overwrites a recently updated permission set that was not in source control. Users start losing access to records they could see yesterday.
  • A workflow rule gets edited to fix a different bug. The edit accidentally removes the field update that was populating a key reporting attribute. Reports run fine; they just have wrong numbers.
  • An admin meant to delete a test contact record but clicked the wrong filter. 800 production records went into the Recycle Bin. Then someone emptied it.

Notice what these scenarios have in common: every one is the result of authorized action by trusted users. Salesforce executes the request as instructed. Salesforce does not detect that the request was a mistake.
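Guardrails can catch the bulk-update class of mistake before it executes. The sketch below is a hypothetical pre-flight check, not a Salesforce or Data Loader feature: it scans the CSV about to be loaded and aborts when an implausible share of rows fails validation, which is the signature of a mis-mapped column rather than ordinary dirty data. The column names, regex, and threshold are all illustrative assumptions.

```python
import csv
import io
import re

# Hypothetical pre-flight check for a Data Loader-style CSV update.
# Column names, regex, and threshold are illustrative assumptions.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def preflight_email_column(csv_text: str, column: str = "Email",
                           max_bad_ratio: float = 0.01) -> list[str]:
    """Return the Ids of rows whose `column` value looks malformed.

    Aborts (raises) when more than `max_bad_ratio` of rows fail:
    a few bad values is ordinary dirty data; a large share is the
    signature of a mis-mapped column.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    bad = [r["Id"] for r in rows if not EMAIL_RE.match(r.get(column, ""))]
    if rows and len(bad) / len(rows) > max_bad_ratio:
        raise ValueError(
            f"{len(bad)}/{len(rows)} rows fail validation in '{column}'; "
            "likely a column-mapping error, aborting the bulk update"
        )
    return bad

clean = "Id,Email\n003A,alice@example.com\n"
print(preflight_email_column(clean))  # [] -- safe to load
```

The threshold is the design decision that matters: a handful of bad emails is normal data entry, while 50% bad in one column almost always means the mapping, not the data, is wrong.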

Why Salesforce’s native protections were not designed for this

Salesforce operates on a shared responsibility model. Per Salesforce’s own trust documentation, Salesforce is responsible for the security and uptime of the platform. The customer is responsible for the data within it. This is the standard SaaS posture, not a Salesforce-specific position.

The native options Salesforce provides reflect that division of labor. The free options handle simple, recent cases:

  • Recycle Bin: Items remain for 15 days, then are permanently deleted. Useful for “I deleted that an hour ago” recovery; insufficient for “we noticed the integration was bad three weeks ago.”
  • Weekly Export Service: Schedules a full data export as CSV. Does not include metadata. Does not preserve relationships cleanly. Restore is manual via Data Loader, which means a multi-day project on the way back in.
  • Data Recovery Service: Salesforce’s last-resort paid service. The process takes 6 to 8 weeks, costs a minimum of $10,000 per request, and carries no guarantee of complete recovery. Salesforce retired the service in July 2020 over customer-experience concerns, reinstated it in March 2021 as a customer-requested emergency option, and explicitly states that customers should not rely on it as a primary backup.

Salesforce has also been investing in paid native products. Salesforce Backup (formerly Backup and Restore) is a managed package that supports automated daily backups but does not cover metadata or Big Objects. In late 2024, Salesforce announced its acquisition of Own Company (formerly OwnBackup) for $1.9 billion, which added the Backup & Recover product to its portfolio. Backup & Recover supports both data and metadata recovery, although metadata coverage is limited to a subset of types. Both products are real improvements over the free native options. Neither, by itself, is the full picture for an enterprise running multiple production orgs or operating under formal compliance regimes.

What an internal-threat backup strategy looks like

Once you have internalized that most data loss comes from inside the org, a few priorities reorder.

Recovery speed beats backup completeness

A backup that recovers in days is operationally useless during an incident. Industry research puts the cost of IT downtime at roughly $5,600 per minute, which works out to well over $300,000 per hour for large organizations. The difference between a 30-minute restore and a 30-hour restore is the difference between an inconvenience and a board-level conversation.
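The arithmetic behind that comparison is worth making explicit. A minimal sketch, assuming the commonly cited figure of about $5,600 per minute of IT downtime (an industry average, not a Salesforce-specific number):

```python
# Rough downtime-cost comparison. The per-minute figure is the
# commonly cited industry average, not a measured value for any org.
COST_PER_MINUTE = 5_600

def downtime_cost(minutes: float) -> int:
    """Estimated cost of an outage lasting `minutes`."""
    return round(minutes * COST_PER_MINUTE)

fast_restore = downtime_cost(30)       # 30-minute granular restore
slow_restore = downtime_cost(30 * 60)  # 30-hour full rebuild

print(f"30-minute restore: ${fast_restore:,}")  # $168,000
print(f"30-hour restore:   ${slow_restore:,}")  # $10,080,000
```

Even at a fraction of the average figure, the gap between the two restore paths is two orders of magnitude.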

Granular restore beats full-org restore

When the issue is “30,000 contact records got bad email values overnight,” restoring the entire org from yesterday’s snapshot is overkill. It would also wipe out every legitimate change made in the same window. The capability you actually need is the ability to restore a specific object, a specific field, or even a specific record set at a specific point in time. Most internal-threat scenarios resolve cleanly with surgical restores, not org-wide rollbacks.
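A surgical, field-level restore can be sketched as a diff between a point-in-time snapshot and the current org state. This is a simplified illustration (flat dicts keyed by record Id, one field at a time), not any vendor's implementation; real tooling would page through the API and respect field-level security.

```python
# Simplified sketch of a field-level, record-scoped restore.
# Records are flat dicts keyed by Salesforce Id (illustrative only).

def build_field_restore(snapshot: dict[str, dict],
                        current: dict[str, dict],
                        field: str,
                        affected_ids: set[str]) -> dict[str, dict]:
    """Return per-record patches that revert `field` to its snapshot
    value for `affected_ids`, leaving every other field untouched."""
    patches = {}
    for rec_id in affected_ids:
        old = snapshot.get(rec_id)
        cur = current.get(rec_id)
        if old is None or cur is None:
            continue  # created/deleted since snapshot; handle separately
        if cur.get(field) != old.get(field):
            patches[rec_id] = {field: old[field]}
    return patches

snapshot = {"003A": {"Email": "alice@example.com", "Phone": "555-0100"}}
current  = {"003A": {"Email": "alice@@corrupt",    "Phone": "555-0199"}}
print(build_field_restore(snapshot, current, "Email", {"003A"}))
# {'003A': {'Email': 'alice@example.com'}}
```

Note that the Phone change survives: only the corrupted field reverts, which is exactly the property a full-org rollback destroys.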

Metadata and data must restore together

A common failure mode in third-party backups is restoring data into an org whose metadata has changed: the field the data referenced no longer exists, the validation rules have moved, the lookup relationships do not match. Backups that capture only data, or only metadata, leave you with a partial recovery problem.

Backup retention must outlive detection windows

Internal data loss is often detected days or weeks after it occurs. The relevant question is “how far back can we go?” not “do we have last night’s backup?” Compliance frameworks such as SOX, HIPAA, GDPR, and FINRA typically require multi-year retention for regulated data anyway, so the retention window is usually set by compliance, not by RTO/RPO.
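The "how far back can we go" question can be made concrete as a policy check: the effective retention requirement is the longer of the worst-case detection lag and the longest applicable compliance mandate. A hypothetical sketch; the retention periods below are illustrative placeholders, not legal guidance.

```python
from datetime import timedelta

# Illustrative retention mandates only; verify against your own
# regulatory obligations before setting policy.
COMPLIANCE_RETENTION = {
    "SOX":   timedelta(days=7 * 365),
    "HIPAA": timedelta(days=6 * 365),
}

def required_retention(detection_window: timedelta,
                       frameworks: list[str]) -> timedelta:
    """Retention must cover the worst-case detection lag AND the
    longest applicable compliance mandate."""
    mandates = [COMPLIANCE_RETENTION[f] for f in frameworks]
    return max([detection_window, *mandates])

need = required_retention(timedelta(days=45), ["SOX", "HIPAA"])
print(need.days)  # 2555 -- compliance, not RTO/RPO, sets the window
```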

Audit-ready logging belongs in the backup product

Regulators do not just want proof that the data exists. They want proof of who changed what, when, and proof that the recovery process is documented, tested, and reliable. Tamper-proof restore histories belong in the same product as the backups they describe, not in a separate logging tool that has to be reconciled at audit time.

Comparing backup approaches at a glance

Three main categories cover most enterprise Salesforce backup decisions: native Salesforce options (free and paid), generic third-party SaaS backup tools, and integrated platforms purpose-built for Salesforce that combine backup with the rest of the data lifecycle.

| Capability | Salesforce native (free + paid) | Generic third-party SaaS backup | Integrated, purpose-built for Salesforce (e.g., Flosum Backup & Archive) |
| --- | --- | --- | --- |
| Backup frequency | Weekly Export only; Salesforce Backup managed package supports daily | Configurable, often daily | Configurable; near-real-time available |
| Metadata coverage | None in free options; Salesforce Backup excludes metadata and Big Objects; Backup & Recover covers limited metadata types | Varies by vendor | Yes, versioned alongside data |
| Granular restore | Recycle Bin only (15 days) | Usually supported | Yes: object, field, or record level |
| Point-in-time recovery | 15 days via Recycle Bin; multi-week request via Data Recovery Service | Vendor-dependent | Yes, with full snapshot history |
| DevOps integration | None | Limited | Yes: backup is part of release pipeline |
| Compliance retention | Manual configuration | Yes, vendor-managed | Policy-based for SOX, HIPAA, GDPR, FINRA |
| Recovery time | Days to weeks | Hours typically | Minutes for granular; near-instant for archived |

Reflects publicly available product information as of mid-2026. Vendor capabilities evolve quickly, so verify current state with each provider before final selection.

Frequently asked questions

How often should we back up our Salesforce data?
At a minimum, daily. For mission-critical objects and high-volume transactional data, more frequently. Many enterprises run backups every few hours during business operations and snapshot before any high-risk deployment. Salesforce’s native Weekly Export Service is rarely sufficient on its own.
Doesn’t Salesforce back up our data for us?
Salesforce backs up its platform for its own disaster recovery purposes. Customer data is governed by the shared responsibility model: Salesforce keeps the platform running, customers protect the data inside it. Salesforce’s Data Recovery Service exists as a last-resort paid option, but Salesforce explicitly tells customers not to rely on it as a primary backup.
What about the Recycle Bin?
Items remain in the Recycle Bin for 15 days, after which they are permanently deleted. The Recycle Bin handles “I just deleted that” scenarios. It does not help with the more common pattern: data loss discovered days or weeks after the fact, often via a downstream report or an unhappy customer.
Do we need to back up metadata, not just data?
Yes. Restoring data into an org whose metadata has changed is a partial recovery at best. The field your data references may no longer exist; validation rules may have moved; relationships may have changed. A complete backup strategy versions both metadata and data and restores them together.
How do we recover from a failed deployment?
This is one of the most common internal data loss scenarios. The recovery requires three things: a clean point-in-time snapshot from before the deployment, granular restore so you can selectively roll back the affected components without unwinding unrelated changes, and a clear audit trail to verify the rollback worked. Backup tooling integrated with your DevOps pipeline solves this directly.
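The three-step recovery above can be sketched as a selective rollback over metadata components. Everything here is illustrative (component dicts, helper names, timestamps), not a real deployment API: pick the last snapshot taken before the deployment, revert only the affected components, and record an audit trail as you go.

```python
from datetime import datetime

# Illustrative sketch of a selective post-deployment rollback.
# `snapshots` maps snapshot timestamps to {component_name: definition}.

def rollback_components(snapshots: dict[datetime, dict],
                        deploy_time: datetime,
                        current: dict,
                        affected: set[str]) -> tuple[dict, list[str]]:
    """Restore only `affected` components to their last pre-deployment
    state; return the patched org state and an audit trail."""
    pre = max(t for t in snapshots if t < deploy_time)  # last clean snapshot
    baseline = snapshots[pre]
    patched, audit = dict(current), []
    for name in sorted(affected):
        if name in baseline:
            patched[name] = baseline[name]
            audit.append(f"restored {name} from snapshot {pre.isoformat()}")
        else:
            patched.pop(name, None)  # component introduced by the deploy
            audit.append(f"removed {name} (absent before deployment)")
    return patched, audit

snaps = {datetime(2025, 1, 1): {"PermSetA": "v1"}}
state, trail = rollback_components(
    snaps, datetime(2025, 1, 2),
    {"PermSetA": "v2-broken", "FlowB": "unrelated-change"}, {"PermSetA"})
print(state)  # {'PermSetA': 'v1', 'FlowB': 'unrelated-change'}
```

The unrelated FlowB change rides through untouched, which is the point: rollback scoped to the deployment, not the whole org.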

The strategic shift, in one sentence

Most enterprise data protection budgets are sized for the threat model the news cycle highlights. Most enterprise data loss happens for the much quieter reason that someone with valid credentials made a mistake. Closing the gap between those two facts is the single most useful thing a Salesforce backup strategy can do.

If your current strategy is built around weekly exports, Recycle Bin recovery, and the assumption that Salesforce will save you, the 73% number is the prompt to revisit. Flosum Backup & Archive is designed for this internal-threat reality: native Salesforce architecture, point-in-time restore down to the field level, integrated with the DevOps pipeline, and compliance retention aligned to SOX, HIPAA, GDPR, and FINRA out of the box.

Request a 20-minute demo of Flosum Backup & Archive to see how it would fit your environment.
