Company “revenue” in enrichment. Where it comes from, how accurate it is, and how to use it

Eugene Levi, CEO, Co-Founder

Most “revenue” in enrichment tools isn’t reported; it’s estimated. Here’s how to understand the sources, judge accuracy, and use it wisely in outreach.


Where enrichment platforms get “revenue” data

Short answer: most enrichment “revenue” for private companies is modeled, not reported. It’s good enough for coarse targeting (size buckets/ranges), but risky as a hard filter or for 1:1 personalization unless you verify the source and freshness. For public companies and some countries with mandatory filings, revenue can be authoritative.

There are three broad sources behind the “Annual Revenue” (or “Estimated Revenue”) you see in tools:

Official filings (authoritative when available)

  • Public companies file audited financials (10‑K/10‑Q) with the SEC’s EDGAR; these are the gold standard. (Source: SEC)
  • Many non‑US jurisdictions require private companies to file accounts with national registries (e.g., UK Companies House). Note: the UK’s plan to require small/micro firms to file P&L (turnover) by April 2027 was paused/shelved as of July 2025, so coverage still varies. (Source: Financial Times)
  • Commercial databases such as Moody’s Orbis standardize those registry filings and other official sources globally; coverage and freshness vary by country and firm size. Independent studies find Orbis strong for large/multinational firms but less complete for smaller ones. (Sources: Moody's, OECD)

Credit & business information bureaus (often “modeled”)

  • Dun & Bradstreet (D&B) provides “Modelled” values when actuals are missing, using trade credit data, public records, demographics, and financials. D&B also runs explicit Global Sales & Employee models to predict sales/revenue. These are estimates: useful, but not filings. (Source: Dun & Bradstreet)

Sales intelligence / web‑data providers (typically modeled or crowdsourced)

  • Clearbit exposes Estimated Annual Revenue, derived from factors like company size, location, category, and age (back‑tested against known revenues). It’s an estimate by design. (Source: clearbit.com)
  • Owler blends community/crowd input with AI/ML; revenue estimates reference public financials, funding, and other signals. (Crunchbase historically surfaced Owler revenue ranges via a data partnership, later discontinued.) (Sources: Owler, UpLead, Crunchbase)
  • Other aggregators (S&P Capital IQ, FactSet, PitchBook, Craft, etc.) combine filings where available with third‑party/private sources elsewhere; private‑company coverage is substantial but not universal, and methods vary. (Source: S&P Global Marketplace)

Why accuracy varies so much

Public vs. private. In the US, private companies don’t have to publicly disclose financial statements, so vendors must model or rely on voluntary disclosures. Expect larger error bars. (Source: Booth School of Business)

Country rules. Mandatory filing regimes (common in Europe) yield more reliable numbers. The UK example shows policy flux: whether P&L/turnover becomes public for the smallest firms is still in motion. (Source: Financial Times)

What metric you’re looking at. SaaS tools might show ARR (forward‑looking, non‑GAAP) while enrichment shows “revenue” (GAAP). Those differ materially; confusing them leads to mis‑targeting. (Source: RightRev)

Corporate structure. Revenues booked at a holding company or in another geography, consolidated vs. unconsolidated entities, and transfer pricing can make site‑level or subsidiary revenue misleading. (Data platforms like Orbis attempt standardization, but gaps remain.) (Source: OECD)

Model inputs & proxies. Many vendors combine revenue‑per‑employee (RPE) bands by industry with headcount (often from LinkedIn), web traffic, pricing proxies, and funding milestones. These can be useful but are sensitive to bad inputs (e.g., mis‑stated headcount, atypical RPE). (Sources: Grata, SaaS Capital)
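The RPE baseline can be sketched in a few lines. The industry bands below are illustrative assumptions, not published benchmarks:

```python
# Sketch of a headcount × revenue-per-employee (RPE) estimate.
# The RPE values below are illustrative assumptions, not vendor figures.
RPE_BY_INDUSTRY = {          # rough $ revenue per employee per year
    "saas": 200_000,
    "agency": 120_000,
    "manufacturing": 350_000,
}

def estimate_revenue(headcount: int, industry: str, default_rpe: int = 150_000) -> int:
    """Crude baseline: headcount times an industry RPE band."""
    rpe = RPE_BY_INDUSTRY.get(industry, default_rpe)
    return headcount * rpe

# A 50-person SaaS company lands around $10M by this baseline.
print(estimate_revenue(50, "saas"))  # 10000000
```

Note how sensitive this is to inputs: a 20% headcount error propagates directly into a 20% revenue error, before you even question the RPE band.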

When to trust it and when not to

Not all revenue data is created equal. Some numbers can be trusted as fact, while others are only rough estimates. Use official filings and registries with confidence, but treat modelled or estimated figures, especially for private US companies and SaaS ARR, with caution.

Trust more when:

  • The figure cites a public filing (EDGAR, registry doc) or the provider links to a source URL for the statement.
  • The company is large, public‑debt issuer, or operates in a jurisdiction with mandatory private filings.

Be cautious when:

  • The label says “Estimated” / “Modelled”, especially for US private firms.
  • You see revenue ranges (e.g., “$10–50M”) from sales‑intel providers; these are bands derived from signals like headcount & funding, not audited results.
  • You’re targeting SaaS and the number might actually be ARR, not GAAP revenue.

Should you use revenue to build outreach lists?

Revenue can guide targeting, but only in broad strokes. Use it as a banded signal alongside headcount and other firmographics, not as a strict filter. Treat estimates as a low-weight input for scoring, and rely on verified sources for decisions that really matter.

Use it for coarse segmentation, not precision

  • Treat revenue as a bucket (e.g., “<$5M”, “$5–25M”, “$25–100M”, “$100M+”) rather than a single figure.
  • Combine with headcount, industry, geo, funding stage, and technographics to reach similar selectivity with less risk of false negatives.

Avoid as a hard gate when:

  • The territory is US‑heavy private companies. You’ll exclude good accounts due to estimation error. (Source: Booth School of Business)
  • Your ideal customer profile (ICP) is high‑variance RPE (agencies, capital‑intensive sectors, marketplaces), where headcount and revenue decouple.

Do use revenue in lead scoring

  • Make estimated revenue a soft, low‑weight feature; give higher weights to verified signals (filings, credit/trade, contractual spend you’ve observed).
  • Penalize stale estimates and reward linked sources. (See scoring pattern below.)

Practical methods vendors use to estimate revenue

  • Filings lookup and standardization. First try to fetch official numbers (EDGAR, registries), then normalize to a single currency and fiscal period. (Sources: SEC, Moody's)
  • Modelled values from credit bureaus (e.g., D&B) if filings don’t exist. (Source: solutions.dnb.com)
  • Signal‑based models using:
    • Headcount × industry RPE (common baseline). Benchmarks vary widely by industry and stage. (Source: SaaS Capital)
    • Funding stage/round size as a proxy for scale (esp. venture‑backed). (Source: Grata)
    • Web traffic & pricing proxies (volumes × price points, where public). (Source: sourcescrub.com)
    • Crowdsourced inputs (Owler‑style signals) + human QA. (Source: Owler)
    • Provider‑specific ML features (industry, geo, age, etc.). (Clearbit example.) (Source: clearbit.com)

Implementation playbook for your list‑building

To make revenue useful in list-building, normalize all figures into a common format, assign confidence scores, and bucket into size bands. Combine this with other signals in Tabula flows so sales teams see not just a number, but how reliable it is and how to act on it.

Normalize the concept of revenue

Keep separate fields for reported revenue and estimated revenue, plus notes on type (GAAP, ARR, modelled), period end, currency, and source link. Always convert amounts to the same currency (like USD) and record the fiscal year or quarter they come from.

Consensus + confidence scoring

Compute a RevenueConfidenceScore (0–100) from additive rules, e.g.:

  • +60 points. When the number comes directly from an official filing (GAAP revenue or Turnover) and is less than ~18 months old.
  • +30 points. When it’s a modelled value from Dun & Bradstreet (D&B) and is less than ~12 months old.
  • +15 points. When it’s an estimated revenue from a sales-intel provider (like Clearbit, Owler, Apollo).
  • +10 points. When two different sources agree on the same revenue band (e.g., both say $5–25M).
  • −15 points. When the number is ARR instead of GAAP revenue.
  • −25 points. When the number is too old (more than 24 months).
  • −20 points. When there’s a weak company match (domain doesn’t clearly match the legal entity).
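The point scheme above can be sketched directly; the thresholds follow the list, while the source labels and clamping to 0–100 are assumptions:

```python
def revenue_confidence_score(
    source: str,             # "filing", "dnb_modelled", or "sales_intel" (assumed labels)
    age_months: int,
    is_arr: bool = False,
    bands_agree: bool = False,
    weak_match: bool = False,
) -> int:
    """0–100 score reflecting how much to trust a revenue figure."""
    score = 0
    if source == "filing" and age_months < 18:
        score += 60
    elif source == "dnb_modelled" and age_months < 12:
        score += 30
    elif source == "sales_intel":
        score += 15
    if bands_agree:
        score += 10
    if is_arr:
        score -= 15
    if age_months > 24:
        score -= 25
    if weak_match:
        score -= 20
    return max(0, min(100, score))

# Fresh filing plus cross-source band agreement scores high.
print(revenue_confidence_score("filing", age_months=10, bands_agree=True))  # 70
```

A stale sales-intel figure labeled ARR bottoms out at zero, which is exactly the behavior you want before it reaches routing.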

Bucketization for go‑to‑market

Map any figure into sane size bands and use bands in filters and routing. E.g.,

  • Micro: <$1M
  • Small: $1–5M
  • Lower‑Mid: $5–25M
  • Upper‑Mid: $25–100M
  • Large: $100M–$1B
  • Enterprise: $1B+

Apply hysteresis (don’t bounce accounts between bands unless the change is >1 band or confirmed by filings) to stabilize routing.
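Band mapping plus that hysteresis rule can be sketched as follows; the band edges come from the list above, while the update rule is one illustrative implementation:

```python
import bisect

# Upper edges (exclusive) of each band, in USD; anything above is Enterprise.
BAND_EDGES = [1e6, 5e6, 25e6, 100e6, 1e9]
BAND_NAMES = ["Micro", "Small", "Lower-Mid", "Upper-Mid", "Large", "Enterprise"]

def to_band(revenue_usd: float) -> int:
    """Index of the size band a revenue figure falls into."""
    return bisect.bisect_right(BAND_EDGES, revenue_usd)

def update_band(current: int, revenue_usd: float, confirmed_by_filing: bool = False) -> int:
    """Hysteresis: only move if the jump exceeds one band or a filing confirms it."""
    proposed = to_band(revenue_usd)
    if confirmed_by_filing or abs(proposed - current) > 1:
        return proposed
    return current

print(BAND_NAMES[to_band(30e6)])                       # Upper-Mid
print(update_band(3, 20e6))                            # 3 (one-band move, ignored)
print(update_band(3, 20e6, confirmed_by_filing=True))  # 2
```

The effect is that noisy year-to-year estimates keep accounts in their current routing, while filings or genuinely large moves still update them.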

Guardrails for outreach personalization

Don’t insert exact $X revenue into copy unless you have an official source; use range language (“mid‑eight figures,” “$25–100M band”) or headcount‑based copy instead. For SaaS ICPs, favor ARR only when clearly labeled; otherwise, treat ARR as not comparable to GAAP revenue.
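That guardrail can be enforced with a small copy helper; the phrasing and thresholds below are assumptions, not house style:

```python
def revenue_phrase(amount_usd: float, source: str) -> str:
    """Exact figures only when backed by a filing; otherwise range language."""
    if source == "filing":
        return f"${amount_usd/1e6:.0f}M revenue"
    if 25e6 <= amount_usd < 100e6:
        return "the $25–100M band"
    if 10e6 <= amount_usd < 25e6:
        return "mid-eight figures"
    return "your revenue range"

print(revenue_phrase(42e6, "sales_intel"))  # the $25–100M band
print(revenue_phrase(42e6, "filing"))       # $42M revenue
```

Wiring this into templates means an SDR can never accidentally quote a modelled figure as fact.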

What we recommend teams actually do

  • Default filter on headcount, not revenue, for cold lists; add revenue bands as a secondary filter.
  • Elevate confidence: surface the source & period next to the number inside your UI so SDRs/marketers know when it’s safe to rely on it. (Users trust numbers with provenance.)
  • Auto‑verify when it matters: for high‑intent accounts (e.g., demo requests), auto‑hit EDGAR/registry/check your filings provider before routing.
  • Score, don’t gate: use modeled revenue as a score input, not a hard pass/fail.
  • Handle geography explicitly: where registries are strong (many EU markets), be stricter; where they’re weak (US private), be looser.

Further reading

  • Filings & registries: SEC EDGAR (public companies); UK Companies House policy shift (2025). (Sources: SEC, Financial Times)
  • Global coverage & bias: Orbis (Moody’s) product overview; OECD paper on Orbis representativeness. (Sources: Moody's, OECD)
  • Modelled data: D&B “Modelled Value” and global sales/employee models. (Sources: solutions.dnb.com, Dun & Bradstreet)
  • Sales‑intel methods: Clearbit Estimated Annual Revenue; Owler community/ML; Crunchbase’s historic Owler integration. (Sources: clearbit.com, Owler, Crunchbase)
  • Estimation techniques: Grata on RPE/funding/alt‑signals; SaaS Capital RPE benchmarks; ARR vs GAAP differences. (Sources: Grata, SaaS Capital, RightRev)