
I'd love to play around with this. Do you have an estimate for the RAM and CPU requirements for the Toronto dataset? I skimmed the project site and didn't see this discussed.


The robustness experiments are highly parallelized and keep a copy of the (modified) network per thread. If I remember correctly, the NYC test took around a day on 16 cores and used up to 64GB of RAM.

You can find the query time evaluation in the performance recap on the results page I linked. For NYC it's around 2.2s for Dijkstra (the baseline) and 27ms for the TP-based search.
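For context on what the Dijkstra baseline is doing, here is a minimal sketch of an earliest-arrival Dijkstra over timetable connections. The data and function names are hypothetical; the project's actual graph model and implementation aren't shown here.

```python
import heapq

# Hypothetical timetable: each edge is
# (departure_stop, departure_time, arrival_stop, arrival_time).
connections = [
    ("A", 0, "B", 10),
    ("B", 12, "C", 20),
    ("B", 11, "D", 15),
    ("D", 16, "C", 25),
]

def earliest_arrival(connections, source, target, start_time=0):
    # Build adjacency: stop -> outgoing timed edges.
    adj = {}
    for dep, t_dep, arr, t_arr in connections:
        adj.setdefault(dep, []).append((t_dep, arr, t_arr))
    best = {source: start_time}
    heap = [(start_time, source)]
    while heap:
        t, stop = heapq.heappop(heap)
        if stop == target:
            return t
        if t > best.get(stop, float("inf")):
            continue  # stale heap entry
        for t_dep, arr, t_arr in adj.get(stop, []):
            # Only catch departures at or after our arrival time.
            if t_dep >= t and t_arr < best.get(arr, float("inf")):
                best[arr] = t_arr
                heapq.heappush(heap, (t_arr, arr))
    return None

print(earliest_arrival(connections, "A", "C"))  # 20
```

A transfer-pattern (TP) search avoids most of this per-query exploration by precomputing which stop sequences can ever be optimal, which is where the ~80x speedup comes from.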

For single-threaded pre-computation and shortest-path queries, I would expect you to need around 8GB for NYC, less for Toronto, and under 2GB for the Honolulu feeds (which were my local test set).

Sorry I can't be more specific; some of these numbers may be inaccurate.

You can find some GTFS feeds here: http://www.gtfs-data-exchange.com
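GTFS feeds are just zip archives of CSV files, so you can inspect one with only the Python standard library. A minimal sketch (the path "feed.zip" is a placeholder for whatever feed you download):

```python
import csv
import io
import zipfile

def read_stops(gtfs_zip):
    # Open the feed archive and parse stops.txt as CSV.
    # utf-8-sig handles the BOM that some feeds include.
    with zipfile.ZipFile(gtfs_zip) as z:
        with z.open("stops.txt") as f:
            reader = csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig"))
            return [row["stop_id"] for row in reader]

# Usage: read_stops("feed.zip")
```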

