Source on EconIndx: Federal Reserve Economic Data (FRED) — free, no enterprise tier, 800,000+ time series.
Access & Pricing
FRED is completely free. Get an API key at fredaccount.stlouisfed.org — email registration, instant activation. No contracts, no approval. The API key raises your rate limit from 30 to 120 requests/minute. No enterprise tier exists; everyone gets the same access.
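One habit worth adopting from the start: keep the key out of source code. A minimal sketch, assuming you export a `FRED_API_KEY` environment variable (the variable name is a convention, not something FRED requires):

```python
import os

# Read the API key from the environment instead of hard-coding it,
# so it never lands in version control.
FRED_KEY = os.environ.get("FRED_API_KEY", "")

if not FRED_KEY:
    print("Set FRED_API_KEY before making requests")
```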
Your First Data Pull
Install nothing special — requests and pandas are enough. Register your key and pull your first series in under two minutes:
```python
import requests
import pandas as pd

FRED_KEY = "your_api_key_here"

def fetch_series(series_id: str, start: str = "2000-01-01") -> pd.DataFrame:
    r = requests.get(
        "https://api.stlouisfed.org/fred/series/observations",
        params={
            "series_id": series_id,
            "api_key": FRED_KEY,
            "file_type": "json",
            "observation_start": start,
            "sort_order": "asc",
            "limit": 10000,
        },
    )
    r.raise_for_status()  # fail loudly on bad key or bad series ID
    obs = r.json()["observations"]
    df = pd.DataFrame(obs)[["date", "value"]]
    df["value"] = pd.to_numeric(df["value"], errors="coerce")  # "." = missing
    df["series_id"] = series_id
    return df

# Three essential series to start
gdp = fetch_series("GDP")         # Quarterly US GDP, billions USD
cpi = fetch_series("CPIAUCSL")    # Monthly CPI, all urban consumers
unrate = fetch_series("UNRATE")   # Monthly unemployment rate, %

print(f"GDP rows: {len(gdp)}, from {gdp.date.min()} to {gdp.date.max()}")
print(f"CPI rows: {len(cpi)}, from {cpi.date.min()} to {cpi.date.max()}")
```
First Pull: What to Expect
When you pull these three series you should see:
| Series | Frequency | Expected rows (from 2000) | History available |
|---|---|---|---|
| GDP | Quarterly | ~100 | Back to 1947 Q1 |
| CPIAUCSL | Monthly | ~300 | Back to 1947 |
| UNRATE | Monthly | ~300 | Back to 1948 |
| DGS10 (10yr Treasury) | Daily | ~6,500 | Back to 1962 |
A full backfill of all three (no date filter) completes in under 5 seconds at the default rate limit. Missing values come back as "." — always coerce to numeric and handle nulls.
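The "." convention is worth seeing once in isolation. A minimal sketch with made-up values:

```python
import pandas as pd

# FRED returns missing observations as the literal string "." — coercing
# with errors="coerce" turns them into NaN so they don't poison aggregates.
raw = pd.Series(["1234.5", ".", "1250.1"])
values = pd.to_numeric(raw, errors="coerce")

print(values.isna().sum())  # 1 missing value
print(values.mean())        # NaN is skipped, not treated as zero
```

If you skip the coercion, the column stays as strings and any arithmetic on it fails.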
Key Datasets to Start With
Macro indicators:
- GDP — nominal GDP, quarterly, billions USD
- GDPC1 — real GDP (inflation-adjusted)
- CPIAUCSL — Consumer Price Index, seasonally adjusted
- CPILFESL — Core CPI (excludes food & energy)
- FEDFUNDS — effective federal funds rate, monthly average (DFF is the daily version)
Labor market:
- UNRATE — unemployment rate, monthly SA
- PAYEMS — total nonfarm payrolls, monthly
- ICSA — initial jobless claims, weekly
Financial:
- DGS10 — 10-year Treasury yield, daily
- DEXUSEU — USD/EUR exchange rate, daily
- M2SL — M2 money supply, monthly
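A common first computation on these series is year-over-year inflation from CPIAUCSL. A sketch with two illustrative index values (the numbers here are made up, not real CPI prints):

```python
import pandas as pd

# Two CPI index levels, one year apart — values are illustrative only.
cpi = pd.Series(
    [296.8, 307.0],
    index=pd.to_datetime(["2023-01-01", "2024-01-01"]),
)

# YoY % change: (current / year-ago - 1) * 100
yoy = (cpi.iloc[-1] / cpi.iloc[0] - 1) * 100
print(f"YoY inflation: {yoy:.2f}%")
```

On a full monthly series, `cpi.pct_change(12) * 100` gives the same figure for every month at once.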
Data Tolerance & Validation
What’s normal:
- Revisions are routine. GDP gets revised twice in the two months following the initial release, and payrolls get revised monthly for the prior two months. Always re-pull the last 3 months on each load.
- Missing values (".") are expected, especially at the start and end of a series. For UNRATE, expect no nulls after 1948. For newer experimental series, expect sparse early data.
- Seasonal adjustment: every SA series has an NSA counterpart (e.g., UNRATENSA). Store which version you're using in your metadata.
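The "re-pull the last 3 months" advice implies an upsert rather than a blind append. A sketch of that pattern, assuming the column layout from fetch_series above (`upsert_recent` and the sample values are hypothetical):

```python
import pandas as pd

def upsert_recent(stored: pd.DataFrame, fresh: pd.DataFrame) -> pd.DataFrame:
    """Drop stored rows inside the re-pulled window, then append the fresh rows."""
    cutoff = pd.to_datetime(fresh["date"]).min()
    keep = stored[pd.to_datetime(stored["date"]) < cutoff]
    return pd.concat([keep, fresh], ignore_index=True)

# Six months already stored; the trailing three months re-pulled with revisions.
stored = pd.DataFrame({
    "date": ["2023-10-01", "2023-11-01", "2023-12-01",
             "2024-01-01", "2024-02-01", "2024-03-01"],
    "value": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})
fresh = pd.DataFrame({
    "date": ["2024-01-01", "2024-02-01", "2024-03-01"],
    "value": [4.1, 5.1, 6.2],  # revised figures
})

merged = upsert_recent(stored, fresh)
```

Keying the cutoff off the fresh pull's earliest date (rather than a fixed offset) guarantees the window you drop is exactly the window you replace.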
Validation checks to build:
```python
def validate_fred_pull(df: pd.DataFrame, series_id: str) -> dict:
    null_rate = df["value"].isna().mean()
    latest_date = pd.to_datetime(df["date"]).max()
    days_stale = (pd.Timestamp.today() - latest_date).days
    return {
        "series_id": series_id,
        "row_count": len(df),
        "null_rate": round(null_rate, 4),
        "latest_date": str(latest_date.date()),
        "days_stale": days_stale,
        "stale_alert": days_stale > 60,  # monthly series; alert if >60 days old
    }
```
Alert thresholds to set:
- UNRATE / PAYEMS / CPIAUCSL: alert if the latest date is more than 45 days ago (monthly releases should update within 2 weeks of month end)
- GDP: alert if the latest date is more than 120 days ago (quarterly)
- DGS10: alert if the latest date is more than 5 business days ago (daily series); FEDFUNDS is a monthly average, so use the 45-day threshold
- Null rate above 5% for a core series: investigate
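Those thresholds fit naturally in a small per-series registry. A sketch — `STALE_DAYS` and `is_stale` are hypothetical names, and the 60-day fallback is an assumption for series not listed:

```python
import pandas as pd

# Per-series staleness thresholds, in calendar days.
STALE_DAYS = {
    "UNRATE": 45, "PAYEMS": 45, "CPIAUCSL": 45,
    "GDP": 120,
    "DGS10": 7,   # ~5 business days
}

def is_stale(series_id: str, latest_date: str, today: str) -> bool:
    age = (pd.Timestamp(today) - pd.Timestamp(latest_date)).days
    return age > STALE_DAYS.get(series_id, 60)  # conservative default

print(is_stale("GDP", "2024-01-01", "2024-03-01"))    # 60 days < 120 -> False
print(is_stale("DGS10", "2024-01-01", "2024-03-01"))  # 60 days > 7   -> True
```

Passing `today` explicitly (instead of calling `pd.Timestamp.today()` inside) keeps the check deterministic and testable.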
Schema Stability
FRED’s schema is extremely stable. Series IDs don’t change. The response envelope (observations, date, value) hasn’t changed in over a decade. The only breaking change pattern to watch: discontinued series return discontinued=true in the series metadata endpoint — worth checking when you add a new series to your registry.
For vintage/point-in-time data, use ALFRED: the same API endpoints accept realtime_start and realtime_end parameters, which return the data as it was known on a given date.
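Concretely, a vintage pull differs from the earlier call only in its parameter dict. A sketch (the date is illustrative; realtime_start/realtime_end are the parameter names mentioned above):

```python
# Same observations endpoint as before, pinned to a point in time:
# data as it was known on 2020-01-15, before later revisions.
vintage_params = {
    "series_id": "GDP",
    "api_key": "your_api_key_here",
    "file_type": "json",
    "realtime_start": "2020-01-15",
    "realtime_end": "2020-01-15",
}
# requests.get("https://api.stlouisfed.org/fred/series/observations",
#              params=vintage_params)
```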
Batch Loading Multiple Series
```python
import time

def fetch_batch(series_ids: list[str], start_year: str) -> pd.DataFrame:
    """FRED's observations endpoint returns one series per request — batch manually."""
    all_data = []
    for sid in series_ids:
        df = fetch_series(sid, start=f"{start_year}-01-01")
        all_data.append(df)
        time.sleep(0.5)  # stay comfortably under the 120 requests/minute limit
    return pd.concat(all_data, ignore_index=True)

series_to_load = ["GDP", "GDPC1", "CPIAUCSL", "UNRATE", "PAYEMS", "FEDFUNDS", "DGS10"]
master_df = fetch_batch(series_to_load, "2000")

print(f"Total rows loaded: {len(master_df)}")
# Expected: roughly 8,000 rows — the daily DGS10 series dominates the count
```
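The batch loader stacks everything in long format (one row per series per date). For analysis you usually want one column per series, which is a single pivot. A sketch on a synthetic frame with the same columns as master_df:

```python
import pandas as pd

# Synthetic long-format data matching the fetch_batch output columns.
long_df = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-01", "2024-02-01", "2024-02-01"],
    "series_id": ["UNRATE", "DGS10", "UNRATE", "DGS10"],
    "value": [3.7, 4.0, 3.9, 4.2],
})

# One row per date, one column per series; dates with no observation
# for a series (e.g. quarterly GDP between releases) become NaN.
wide = long_df.pivot(index="date", columns="series_id", values="value")
print(wide)
```

Mixed frequencies mean the wide frame is sparse by design — forward-fill only if your analysis explicitly calls for it.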
Next Steps
- Browse the full FRED source page on EconIndx for rate limits, ALFRED vintage details, and tool compatibility
- Explore FRED Categories API to discover series by topic
- For Python users: the fredapi library wraps all of this cleanly — pip install fredapi