Hacker News serves 10+ million pageviews per day. And they give away ALL their data through a free Firebase API.
No API key. No rate limits. No authentication. Just raw JSON.
The API
Base URL: https://hacker-news.firebaseio.com/v0/
That’s it. No signup. No OAuth. No headers needed.
Get the Top 500 Stories Right Now
```shell
curl https://hacker-news.firebaseio.com/v0/topstories.json | python3 -m json.tool | head -20
```
Returns an array of up to 500 item IDs, sorted by rank.
Get Any Story’s Details
```shell
curl https://hacker-news.firebaseio.com/v0/item/41967900.json
```

```json
{
  "by": "dang",
  "descendants": 245,
  "id": 41967900,
  "kids": [41968234, 41968567, ...],
  "score": 834,
  "time": 1711234567,
  "title": "Show HN: Something cool",
  "type": "story",
  "url": "https://example.com"
}
```
You get: author, score, comment count, timestamp, URL, title. Everything.
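The `time` field is a Unix timestamp in seconds. A tiny helper turns it into a readable UTC datetime (`hn_time` is just an illustrative name, not part of the API):

```python
from datetime import datetime, timezone

def hn_time(ts):
    """Convert an HN 'time' field (Unix seconds) to an aware UTC datetime."""
    return datetime.fromtimestamp(ts, tz=timezone.utc)

print(hn_time(1711234567))  # the example story above: 2024-03-23 22:56:07 UTC
```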
Useful Endpoints
| Endpoint | What it returns |
|---|---|
| `/topstories.json` | Top 500 stories (by rank) |
| `/newstories.json` | Newest 500 stories |
| `/beststories.json` | Best 500 stories (by score) |
| `/askstories.json` | Ask HN posts |
| `/showstories.json` | Show HN posts |
| `/jobstories.json` | Job posts |
| `/item/{id}.json` | Any item (story, comment, poll) |
| `/user/{username}.json` | User profile |
| `/maxitem.json` | Current max item ID |
| `/updates.json` | Recently changed items |
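Combining `/maxitem.json` with `/item/{id}.json` is enough to grab whatever was posted last. A sketch, assuming the `requests` package (`latest_item` is an illustrative helper; the optional `get_json` parameter just lets you swap in a cached or mock fetcher):

```python
import requests

BASE = "https://hacker-news.firebaseio.com/v0"

def latest_item(get_json=None):
    """Read the current max item ID, then fetch that item."""
    if get_json is None:
        get_json = lambda url: requests.get(url, timeout=10).json()
    max_id = get_json(f"{BASE}/maxitem.json")
    return get_json(f"{BASE}/item/{max_id}.json")
```

Note the newest item is usually a comment, not a story — check its `type` field.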
Build a “Top HN Stories” Dashboard in 15 Lines
```python
import requests

top_ids = requests.get("https://hacker-news.firebaseio.com/v0/topstories.json").json()[:10]
for story_id in top_ids:
    story = requests.get(f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json").json()
    score = story.get("score", 0)
    title = story.get("title", "")
    url = story.get("url", "")
    comments = story.get("descendants", 0)
    print(f"{score:>4}pts | {comments:>3} comments | {title}")
    print(f"       {url}\n")
```
Run it. You’ll get something like:
```
 834pts | 245 comments | Show HN: Something cool
       https://example.com
 612pts | 189 comments | Why X is better than Y
       https://blog.example.com/post
```
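One caveat: the loop above makes its requests one at a time, so ten stories means ten sequential round trips. A thread pool fetches them in parallel. A sketch, assuming `requests` (`fetch_items` is an illustrative helper; the `fetch` parameter is injectable so you can substitute a cached or mock fetcher):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

BASE = "https://hacker-news.firebaseio.com/v0"

def fetch_item(sid):
    """Fetch a single HN item by ID."""
    return requests.get(f"{BASE}/item/{sid}.json", timeout=10).json()

def fetch_items(ids, fetch=fetch_item, workers=10):
    """Fetch many items concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, ids))
```

`pool.map` preserves order, so the output lines up with the ranked ID list from `/topstories.json`.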
Monitor Any Topic on HN
```python
import requests, time

def monitor_hn(keywords, interval=300):
    seen = set()
    while True:
        stories = requests.get("https://hacker-news.firebaseio.com/v0/newstories.json").json()[:50]
        for sid in stories:
            if sid in seen:
                continue
            seen.add(sid)
            story = requests.get(f"https://hacker-news.firebaseio.com/v0/item/{sid}.json").json()
            if not story:  # deleted items come back as null
                continue
            title = (story.get("title") or "").lower()
            if any(kw in title for kw in keywords):
                print(f"🔥 {story['title']}")
                print(f"   https://news.ycombinator.com/item?id={sid}")
        time.sleep(interval)

monitor_hn(["python", "rust", "ai", "llm"])
```
Free monitoring. No Algolia account needed.
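That said, for one-off keyword lookups (rather than continuous polling) the Algolia-hosted HN search API works too. A sketch (`search_hn` and the injectable `get_json` are illustrative names, not part of either API):

```python
import requests

def search_hn(query, get_json=None):
    """Search HN via the Algolia index; returns the list of hit dicts."""
    url = "https://hn.algolia.com/api/v1/search"
    if get_json is None:
        get_json = lambda u, p: requests.get(u, params=p, timeout=10).json()
    return get_json(url, {"query": query}).get("hits", [])
```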
Get a User’s Entire History
```shell
curl https://hacker-news.firebaseio.com/v0/user/pg.json
```

```json
{
  "about": "Bug Fixer.",
  "created": 1160418111,
  "id": "pg",
  "karma": 157236,
  "submitted": [39849234, 39849233, ...]
}
```
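The `submitted` array appears to be newest-first (note the descending IDs), so the first few entries are a user's latest activity. A sketch for pulling them, assuming `requests` (`recent_submissions` and the injectable `get_json` are illustrative):

```python
import requests

BASE = "https://hacker-news.firebaseio.com/v0"

def recent_submissions(username, n=5, get_json=None):
    """Fetch a user's n most recent items (stories and comments)."""
    if get_json is None:
        get_json = lambda url: requests.get(url, timeout=10).json()
    user = get_json(f"{BASE}/user/{username}.json")
    return [get_json(f"{BASE}/item/{sid}.json")
            for sid in user.get("submitted", [])[:n]]
```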
Paul Graham’s karma: 157K. His submitted array has every story and comment he’s ever posted.
Pro Tips

- **Algolia alternative:** HN also has a search API at `hn.algolia.com/api/v1/search?query=YOUR_QUERY` — also free, also no auth
- **Real-time updates:** The Firebase API supports Server-Sent Events — request any endpoint with the header `Accept: text/event-stream`
- **Bulk data:** If you need ALL of HN (40M+ items), use the BigQuery public dataset instead of hammering the API
- **Comment threads:** Each item's `kids` array contains child comment IDs — recurse to build full threads
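Recursing over `kids` can be sketched like this, assuming `requests` (`fetch_thread` is an illustrative helper; deleted items come back as `null`, hence the guard):

```python
import requests

BASE = "https://hacker-news.firebaseio.com/v0"

def fetch_thread(item_id, get_json=None, depth=0):
    """Recursively fetch an item and all its child comments via kids[]."""
    if get_json is None:
        get_json = lambda url: requests.get(url, timeout=10).json()
    item = get_json(f"{BASE}/item/{item_id}.json")
    if not item:  # deleted/dead items return null
        return None
    item["depth"] = depth
    item["children"] = [fetch_thread(kid, get_json, depth + 1)
                        for kid in item.get("kids", [])]
    return item
```

Big threads mean one request per comment — cache aggressively, or fall back to the Algolia `/items/{id}` endpoint, which returns a whole thread in one call.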
Why This Matters
HN is where developers discover tools. If your product lands on the front page, you get 10,000-50,000 visits in 24 hours.
With this API you can:
- Monitor when competitors are mentioned
- Track trending topics in your niche
- Build alerts for job posts matching your skills
- Analyze what content performs best
- Find early-stage startups to partner with
All free. All real-time. All without signing up for anything.
I track developer trends across HN, GitHub, and Reddit. Follow for weekly breakdowns of what’s trending in tech.
Need automated data collection from any website? Email me at spinov001@gmail.com — I’ve built 77 production web scrapers.
