Large-payload REST endpoint
What this challenge teaches
Teaches: Use a streaming JSON parser when payloads are large; don't load the entire body into memory.
Expected output: stream the ?json=1 response and iterate over all 2000 items without buffering the whole body.
Submit your scraper's JSON output to /challenges/api/rest/large-payload/grade
(the grader endpoint is part of a later phase; the URL is reserved for now).
# Stream a large JSON response with ijson instead of r.json(),
# which would parse the entire body in memory at once.
import requests, ijson

url = "https://practice.scrapingcentral.com/challenges/api/rest/large-payload?json=1"
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    r.raw.decode_content = True  # let urllib3 un-gzip the raw stream for ijson
    for item in ijson.items(r.raw, "items.item"):
        print(item["id"], item["name"])
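If ijson is not installed, the same incremental idea can be sketched with the standard library: json.JSONDecoder.raw_decode parses one value at a time from a growing buffer, so an array can be consumed element by element as chunks arrive. This is an illustrative sketch, not part of the challenge: the chunk list stands in for a network response, and the helper name iter_array_items is made up for the example.

```python
import json

def iter_array_items(chunks):
    """Yield items from a JSON body shaped like {"items": [...]},
    decoding incrementally instead of parsing the whole document."""
    decoder = json.JSONDecoder()
    buf = ""
    started = False
    for chunk in chunks:
        buf += chunk
        if not started:
            # Skip past the {"items": [ prefix once it has arrived.
            idx = buf.find("[")
            if idx == -1:
                continue
            buf = buf[idx + 1:]
            started = True
        while True:
            buf = buf.lstrip(", \n\t\r")
            if not buf or buf[0] == "]":
                break
            try:
                item, end = decoder.raw_decode(buf)
            except ValueError:
                break  # item incomplete; wait for more data
            yield item
            buf = buf[end:]

# Simulate a chunked network response of 5 items.
body = json.dumps({"items": [{"id": i, "name": f"item-{i}"} for i in range(5)]})
chunks = [body[i:i + 16] for i in range(0, len(body), 16)]
for item in iter_array_items(chunks):
    print(item["id"], item["name"])
```

The buffer only ever holds partial, not-yet-decoded text, which is the property the challenge is testing for; in real code ijson handles nesting and escaping far more robustly than this sketch.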