SAM.gov Exclusions API: What Nobody Tells You

Getting a usable, up-to-date exclusions database from SAM.gov looks simple on paper. In practice there are several non-obvious blockers - rate limits that make naïve pagination impossible, a multi-step async download flow, and real duplicate records in SAM's own data. This guide covers all of it.

TL;DR: You cannot paginate through all 163K+ exclusions daily using the JSON API - the math doesn't work. The only practical approach is the bulk CSV download endpoint, which is async, takes 20–30 minutes to generate, and still contains duplicates you have to deduplicate yourself. Or just use GovCon API's exclusions endpoint, which is already done.
Need verified contacts at contracting offices?

Get contracting officer and program manager contact data alongside exclusions screening. Try GovCon Contacts →

What Are SAM.gov Exclusions?

The SAM.gov exclusions list (formerly EPLS - Excluded Parties List System) is the authoritative federal database of entities that are debarred, suspended, or otherwise excluded from receiving federal contracts, grants, or other assistance.

Before awarding any federal contract, agencies are required by regulation (FAR 9.405) to check this list. Many prime contractors run the same check on their subcontractors. The check is mandatory, not optional.

What's in the database

Key fields per record

Why Pagination-Based Daily Sync Is Infeasible

The SAM.gov JSON API at /entity-information/v4/exclusions supports pagination. The obvious approach is: loop through all pages, collect records, diff against your DB, update daily. Here's why that breaks down.

The math

163,000 records ÷ 1,000 per page = 163 API calls.

- Basic SAM.gov key (no role, just an account): 10 requests/day → impossible.
- Entity registration key (1,000/day): 163 calls looks fine on paper, but those 1,000 calls are shared across ALL SAM.gov endpoints. If you're also pulling opportunities (119K+ records), you'll burn your daily budget long before finishing the exclusions pagination.
- Even with a dedicated exclusions-only key at 1,000/day: 163 pages × ~2 sec/call (SAM.gov latency) = ~5–6 minutes minimum. Daily runs are technically feasible - until SAM.gov rate-limits you mid-run and you have to restart, leaving partial or inconsistent state.
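The budget arithmetic above can be sanity-checked in a few lines (numbers taken straight from the text; the key names are just labels):

```python
TOTAL_RECORDS = 163_000
PAGE_SIZE = 1_000          # max page size on the JSON API
BASIC_KEY_DAILY = 10       # account with no role
ENTITY_KEY_DAILY = 1_000   # shared across ALL SAM.gov endpoints

calls_needed = -(-TOTAL_RECORDS // PAGE_SIZE)  # ceiling division -> 163

print(calls_needed)                    # 163
print(calls_needed > BASIC_KEY_DAILY)  # True: a basic key can never finish
print(ENTITY_KEY_DAILY - calls_needed) # 837 calls left for everything else
```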

The deeper problem: you can't get a clean snapshot via pagination. Records can change between your first and last page request. If an exclusion is added or removed mid-run, your snapshot is inconsistent. There's no cursor or consistency guarantee in the paginated API.

The SAM.gov solution: bulk CSV

SAM.gov provides a dedicated bulk export endpoint precisely because pagination isn't viable for full-dataset sync:

Request bulk export:

GET https://api.sam.gov/entity-information/v4/exclusions?api_key=YOUR_KEY&format=CSV

Response (immediate, HTTP 200):

Extract File will be available for download with url: https://api.sam.gov/entity-information/v4/download-exclusions?api_key=REPLACE_WITH_API_KEY&token=XYZ in some time. If you have requested for an email notification, you will receive it once the file is ready.

The response is not the file - it's a token. SAM.gov queues the export job and generates the file asynchronously. You then poll the download URL until the file is ready, which takes 20–30 minutes.

The Bulk Download Flow Step by Step

Step 1: Request the export

Hit the CSV endpoint with a valid SAM.gov API key. Any registered key works. The response body contains the token URL. Parse the token from the URL parameter.
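Extracting the token is a one-regex job. A minimal sketch, assuming the response body follows the message format shown above (the `parse_token` name is ours):

```python
import re

def parse_token(body: str) -> str:
    """Pull the token parameter out of the URL embedded in SAM.gov's
    bulk-export acknowledgement message."""
    m = re.search(r"[?&]token=([^&\s]+)", body)
    if not m:
        raise ValueError("no token found in response body")
    return m.group(1)

# Using the message shape from the response above:
body = ("Extract File will be available for download with url: "
        "https://api.sam.gov/entity-information/v4/download-exclusions"
        "?api_key=REPLACE_WITH_API_KEY&token=XYZ in some time.")
print(parse_token(body))  # -> XYZ
```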

Step 2: Wait

SAM.gov generates the file server-side. This is not a live export - it's a batch job. Expect 20–30 minutes. Polling before the file is ready returns a 404 or an error JSON body; don't treat that as a permanent failure.

Step 3: Poll the download URL

GET https://api.sam.gov/entity-information/v4/download-exclusions?api_key=YOUR_KEY&token=XYZ

When the file is ready this returns a redirect to the actual download. The file is gzipped CSV, typically 10–15 MB compressed.
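A polling loop following the steps above might look like this. This is a sketch, not SAM.gov's reference client: the readiness heuristic (a 200 whose body isn't an error-JSON payload) and the retry cadence are assumptions drawn from the behavior described in Step 2:

```python
import time
import urllib.error
import urllib.parse
import urllib.request

DOWNLOAD_URL = "https://api.sam.gov/entity-information/v4/download-exclusions"

def is_ready(status: int, content_type: str) -> bool:
    """Ready = HTTP 200 with a non-JSON body; error JSON means 'still generating'."""
    return status == 200 and "json" not in content_type.lower()

def poll_for_file(api_key: str, token: str, attempts: int = 30, interval: int = 90) -> bytes:
    url = DOWNLOAD_URL + "?" + urllib.parse.urlencode({"api_key": api_key, "token": token})
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=120) as resp:
                if is_ready(resp.status, resp.headers.get("Content-Type", "")):
                    return resp.read()  # the gzipped CSV bytes
        except urllib.error.HTTPError:
            pass  # 404 while the file is still generating -> keep polling
        time.sleep(interval)
    raise TimeoutError("export not ready after polling window")
```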

Step 4: Deduplicate before inserting

This is the part most guides skip: SAM's own export contains duplicate rows. In the April 2026 export, 4,021 of 167,335 rows were exact duplicates - same entity, same dates, same everything. If you blindly insert into a unique-constrained table, those rows fail silently or cause errors depending on your DB setup.

Use a content hash as primary key and ON CONFLICT DO NOTHING. You'll load 163K clean unique records.
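One way to build that content hash - hash the whole row so byte-identical duplicates collapse to the same primary key (`payload_hash` is our name, not a SAM.gov field):

```python
import hashlib

def payload_hash(row: dict) -> str:
    """Stable hash of the full row content; exact duplicate rows
    produce the same key and get dropped by ON CONFLICT DO NOTHING."""
    canonical = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = {"entity_name": "ACME", "uei_sam": "JWMCZ9VZ5JN3"}
b = dict(a)  # an exact duplicate, as found in the SAM export
print(payload_hash(a) == payload_hash(b))  # True
```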

Step 5: TRUNCATE then reload (don't diff)

Don't try to diff exclusions against your existing table. Exclusions have no reliable updated-at timestamp you can use to detect changes. The correct approach is full reload: TRUNCATE the table, then batch insert the entire CSV in one transaction. With 163K rows and batch sizes of 5,000 this takes about 35–40 minutes end-to-end (dominated by the SAM.gov generation wait).
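The full-reload-in-one-transaction pattern can be sketched with sqlite3 standing in for Postgres (DELETE instead of TRUNCATE, INSERT OR IGNORE instead of ON CONFLICT DO NOTHING; the schema here is a two-column toy):

```python
import sqlite3

def full_reload(conn, rows):
    """Atomically replace the table contents: one transaction, so readers
    never observe a half-loaded snapshot."""
    with conn:  # commits on success, rolls back on error
        conn.execute("DELETE FROM exclusions")
        conn.executemany(
            "INSERT OR IGNORE INTO exclusions (payload_hash, entity_name) VALUES (?, ?)",
            rows,
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exclusions (payload_hash TEXT PRIMARY KEY, entity_name TEXT)")
# The duplicate ('h1', 'ACME') row collapses on insert:
full_reload(conn, [("h1", "ACME"), ("h1", "ACME"), ("h2", "Widget Co")])
print(conn.execute("SELECT COUNT(*) FROM exclusions").fetchone()[0])  # -> 2
```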

Data Quality Issues You'll Hit

1. Duplicates in the source data

As noted: SAM's export contains exact duplicate rows. Not near-duplicates - exact byte-for-byte copies. About 2.4% of rows in the April 2026 export. No explanation from SAM.gov. Just deduplicate.

2. Sparse UEI coverage

Not every exclusion has a uei_sam. Individuals typically don't. Older firm records may predate UEI assignment. If you're checking by UEI only, you'll miss some records. Also check by cage_code and name for comprehensive coverage.

3. Name matching is hard

Individual exclusions are stored with first/last name split. Firm exclusions use entity_name. Searching by name requires checking both. Name variations, aliases, and transliterated foreign names are common. A name-only check has both false positives and false negatives - always confirm with UEI or CAGE when available.
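A crude normalization step helps with the suffix and punctuation variation; this is an illustrative sketch (the suffix list is far from complete), and any hit still needs UEI/CAGE confirmation:

```python
import re

def normalize_name(name: str) -> str:
    """Uppercase, strip punctuation, drop common entity suffixes.
    Expect false positives; confirm matches with UEI or CAGE."""
    n = name.upper()
    n = re.sub(r"[.,'&-]", "", n)                     # "L.L.C." -> "LLC"
    n = re.sub(r"\b(INC|LLC|CORP|CO|LTD)\b", "", n)   # illustrative suffix list
    return re.sub(r"\s+", " ", n).strip()

print(normalize_name("Acme Holdings, L.L.C."))  # -> ACME HOLDINGS
```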

4. The termination_date trap

An exclusion with record_status = 'Active' but a past termination_date is effectively expired. SAM.gov does update record_status but there can be lag. Always filter on both: record_status = 'Active' AND (termination_date IS NULL OR termination_date > CURRENT_DATE).
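The trap is easy to demonstrate against a local copy (sqlite3 here as a stand-in; column names follow the fields used in this article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exclusions (entity_name TEXT, record_status TEXT, termination_date TEXT)")
conn.executemany("INSERT INTO exclusions VALUES (?, ?, ?)", [
    ("STILL EXCLUDED", "Active",   None),          # indefinite exclusion
    ("FUTURE END",     "Active",   "2099-01-01"),  # active, ends later
    ("LAPSED",         "Active",   "2001-01-01"),  # the trap: Active but expired
    ("CLOSED",         "Inactive", None),
])
rows = conn.execute("""
    SELECT entity_name FROM exclusions
    WHERE record_status = 'Active'
      AND (termination_date IS NULL OR termination_date > date('now'))
""").fetchall()
print([r[0] for r in rows])  # -> ['STILL EXCLUDED', 'FUTURE END']
```

Filtering on `record_status` alone would have wrongly flagged LAPSED as an active exclusion.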

5. FASCSA records are a new category

In 2023, FASCSA (Foreign Adversary Communications Security Act) orders were added as a distinct exclusion category. These flag entities involved in communications equipment from adversary nations (Huawei, ZTE, etc.). The is_fascsa_order boolean identifies them. They have different legal implications from standard debarment - worth surfacing separately in any compliance UI.

How Often to Sync

- ~50 new exclusions added per week (estimate)
- 30 min SAM.gov file generation time
- 163K+ total records (April 2026)

Weekly sync is sufficient for most use cases. Exclusions are not added in real time - agencies submit them through an administrative process that takes days to weeks. The risk of acting on a one-week-old snapshot is low.

If you're doing compliance checks immediately before contract award and your exposure is high, consider a same-day check via the SAM.gov JSON API (single record lookup by UEI) rather than relying solely on your local copy.
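A same-day single-record lookup might be built like this. Note the assumption: `ueiSAM` as the filter parameter matches the pattern of SAM.gov's entity APIs, but verify the exact parameter name against the current SAM.gov docs before relying on it:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://api.sam.gov/entity-information/v4/exclusions"

def lookup_url(api_key: str, uei: str) -> str:
    # 'ueiSAM' is an assumed parameter name -- confirm with SAM.gov docs.
    return BASE + "?" + urllib.parse.urlencode({"api_key": api_key, "ueiSAM": uei})

def same_day_check(api_key: str, uei: str) -> dict:
    """Hit the live JSON API instead of the local weekly snapshot."""
    with urllib.request.urlopen(lookup_url(api_key, uei), timeout=30) as resp:
        return json.load(resp)

print(lookup_url("YOUR_KEY", "JWMCZ9VZ5JN3"))
```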

Use the GovCon API Exclusions Endpoint Instead

If you don't want to build and maintain this pipeline, GovCon API exposes the same dataset via a simple REST API. The data is updated weekly (and can be refreshed on demand - email us if you need a same-day sync).

Search by name

GET https://govconapi.com/api/v1/exclusions/search?entity_name=acme&active_only=true
Authorization: Bearer YOUR_API_KEY

Lookup by UEI

GET https://govconapi.com/api/v1/exclusions/JWMCZ9VZ5JN3
Authorization: Bearer YOUR_API_KEY

Filter by excluding agency, state, or type

GET https://govconapi.com/api/v1/exclusions/search?excluding_agency=DOD&classification_type=Firm&active_only=true
Authorization: Bearer YOUR_API_KEY

What's available

Full API documentation →  |  Pricing →

Running Your Own Sync (Reference Implementation)

If you want to run your own pipeline, the logic is straightforward:

High-level pseudocode

# 1. Request bulk export
resp = GET /entity-information/v4/exclusions?api_key=KEY&format=CSV
token = parse_token_from(resp.body)

# 2. Wait for file to generate (20-30 min)
sleep(30 * 60)

# 3. Poll until ready
for attempt in range(20):
    resp = GET /entity-information/v4/download-exclusions?api_key=KEY&token=TOKEN
    if resp.status == 200 and resp.content_type == 'application/octet-stream':
        break
    sleep(90)  # retry every 90s

# 4. Download and decompress (gzipped CSV)
save_to('exclusions_full.csv.gz')

# 5. Load into DB with deduplication
TRUNCATE exclusions;
for batch in read_csv_batches('exclusions_full.csv.gz', batch_size=5000):
    INSERT INTO exclusions (...) VALUES (...) ON CONFLICT (payload_hash) DO NOTHING;
COMMIT;

Schema notes

Compliance Checklist

For those building compliance tooling, here's what a thorough check looks like:

  1. Check by UEI (uei_sam) - most reliable, when available
  2. Check by CAGE code - catches records without UEI
  3. Check by name (fuzzy) - catches individuals and name variations; expect false positives you'll need to resolve manually
  4. Filter to active records only - record_status = 'Active' AND termination check
  5. Check at time of award, not just at time of proposal - exclusion status can change between proposal and award
  6. Document your check - date/time of check, dataset version, query params; some agencies ask for this in audit
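Items 1-4 and 6 of the checklist can be sketched as a single screening helper against a local copy (sqlite3 stand-in; the `screen` function and audit format are ours, column names follow this article's fields):

```python
import sqlite3
from datetime import datetime, timezone

def screen(conn, uei=None, cage=None, name=None):
    """Match on any available identifier, keep only effectively-active
    records, and return an audit stamp alongside the hits."""
    clauses, params = [], {}
    if uei:
        clauses.append("uei_sam = :uei"); params["uei"] = uei
    if cage:
        clauses.append("cage_code = :cage"); params["cage"] = cage
    if name:
        clauses.append("entity_name LIKE :nm"); params["nm"] = f"%{name.upper()}%"
    if not clauses:
        raise ValueError("need at least one of uei, cage, name")
    sql = ("SELECT entity_name FROM exclusions WHERE (" + " OR ".join(clauses) + ") "
           "AND record_status = 'Active' "
           "AND (termination_date IS NULL OR termination_date > date('now'))")
    hits = [r[0] for r in conn.execute(sql, params)]
    audit = {"checked_at": datetime.now(timezone.utc).isoformat(),  # item 6: document the check
             "params": dict(params), "hits": hits}
    return hits, audit

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE exclusions (
    uei_sam TEXT, cage_code TEXT, entity_name TEXT,
    record_status TEXT, termination_date TEXT)""")
# A firm excluded under a CAGE code with no UEI -- a UEI-only check would miss it.
conn.execute("INSERT INTO exclusions VALUES (NULL, '1ABC2', 'ACME WIDGETS', 'Active', NULL)")
hits, audit = screen(conn, uei="JWMCZ9VZ5JN3", cage="1ABC2")
print(hits)  # -> ['ACME WIDGETS']
```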

Related Guides


Last Updated: April 2026 | Questions? [email protected]