Large file streaming download
What this challenge teaches
Teaches: stream the response instead of loading the whole body (~1 MB) into memory.
Expected output: download /challenges/static/files/large?raw=1 in chunks; the final file must be larger than 800 kB.
Submit your scraper's JSON output to /challenges/static/files/large/grade
(the grader endpoint is part of a later phase; the URL is reserved now).
The file is about 1 MB of plain text: small enough to keep CI cheap, large enough to demonstrate that loading the whole body into memory is wasteful. Use streaming.
# Python (requests), stream:
import requests

# url points at /challenges/static/files/large?raw=1; sink is a file opened in "wb" mode
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    for chunk in r.iter_content(chunk_size=8192):
        sink.write(chunk)
# Python (httpx), async stream:
import httpx

async with httpx.AsyncClient() as c:
    async with c.stream("GET", url) as r:
        async for chunk in r.aiter_bytes():
            sink.write(chunk)
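Both snippets rely on the same chunked-copy pattern: read a fixed-size chunk, write it, repeat, so peak memory stays at one chunk rather than the full ~1 MB body. A minimal, dependency-free sketch of that pattern, with the response body simulated in memory (the payload size and chunk size here are assumptions matching the challenge figures):

```python
import io

CHUNK = 8192  # same chunk size as the snippets above

# Simulated ~1 MB response body; a real run would read from the HTTP response.
source = io.BytesIO(b"x" * 1_000_000)
sink = io.BytesIO()

while True:
    chunk = source.read(CHUNK)
    if not chunk:
        break
    sink.write(chunk)  # peak memory stays at one chunk, not the whole body

size = sink.getbuffer().nbytes  # → 1_000_000, above the 800 kB grading threshold
```

The same loop works unchanged when `source.read` is replaced by a streaming HTTP read, which is exactly what `iter_content` and `aiter_bytes` provide.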