The demo looks really cool. Do any of the major navigation apps utilize these algorithms?
Thanks for this!
Did you measure memory usage for, e.g., San Francisco or Berlin?
Why did you choose this non-commercial license? Especially since you say "I don't plan on actively maintaining it".
I didn't realize GitHub had added a topic tags feature to repos. I hope it ends up being useful!
Can you look into adding Numba support to accelerate the computation?
I'm very interested in this. Thanks for sharing.
Having coded up a semisolid implementation of time-dependent contraction hierarchies (which might not even get used, because of the very same data accessibility issues you outline), it seems at first glance quite straightforward to adapt CH preprocessing to this model. To contract a vertex, you just check whether each potential shortcut has a nonzero probability of being necessary, i.e., of beating every witness path. I suppose you could cut preprocessing time down by replacing "nonzero probability" with "probability greater than epsilon" to get a sort of approximate hierarchy.
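To make that concrete, here's a rough sketch of what the contraction step could look like. Everything below is my own assumption for illustration: I represent each probabilistic edge weight as a list of sampled scenario weights, run a witness search per scenario, and keep the shortcut only if the fraction of scenarios where it beats every witness exceeds epsilon. None of this is from the project itself.

```python
# Hypothetical sketch: contract a vertex under probabilistic edge weights.
# Edge weights are lists of per-scenario samples (my own modeling choice).
import heapq

def dijkstra(adj, src, dst, scenario, skip):
    """Shortest src->dst distance in one weight scenario, avoiding `skip`."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, samples in adj.get(u, {}).items():
            if v == skip:
                continue  # witness search must avoid the contracted vertex
            nd = d + samples[scenario]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def contract(adj, v, n_scenarios, eps=0.05):
    """Return the shortcuts (u, w, samples) needed when v is removed."""
    shortcuts = []
    preds = [u for u in adj if v in adj[u]]
    succs = list(adj.get(v, {}))
    for u in preds:
        for w in succs:
            if u == w:
                continue
            via = [adj[u][v][s] + adj[v][w][s] for s in range(n_scenarios)]
            # Fraction of scenarios in which no witness path beats u->v->w.
            needed = sum(
                via[s] < dijkstra(adj, u, w, s, skip=v)
                for s in range(n_scenarios)
            ) / n_scenarios
            if needed > eps:
                shortcuts.append((u, w, via))
    return shortcuts
```

With an exact hierarchy you'd set eps to zero; raising it trades query correctness in rare scenarios for fewer shortcuts and faster preprocessing.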
You refer to the edge weights as an "hmm", which I guess stands for hidden Markov model. It doesn't seem like they use that term in the paper, although if you made the data time-dependent, would the edge weights then be a genuine Markov process? Would something like the Viterbi algorithm become a useful subroutine of the overall algorithm in that case?
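For what I mean by that: if each edge's weight at a given time were driven by a hidden traffic state with Markov transitions, something like the standard Viterbi recurrence could recover the most likely state sequence from observed travel times. The states and probabilities below are entirely made up for illustration; this is not anything from the paper or the library.

```python
# Generic Viterbi decoder; the traffic-state HMM here is a toy I invented.
def viterbi(states, start_p, trans_p, emit_p, observations):
    """Most likely hidden-state sequence for a sequence of observations."""
    # V[t][s] = (best probability of ending in s at t, path achieving it)
    V = [{s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}]
    for obs in observations[1:]:
        row = {}
        for s in states:
            # Best predecessor r for landing in state s at this step.
            p, prev = max((V[-1][r][0] * trans_p[r][s], r) for r in states)
            row[s] = (p * emit_p[s][obs], V[-1][prev][1] + [s])
        V.append(row)
    return max(V[-1].values())[1]
```

Whether this actually buys anything for routing queries (rather than just for calibrating the weight model offline) is exactly what I'm unsure about.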
The current big open source routing engines tend to be written in C++ or Java. Do you think using Python makes a significant difference in query or preprocessing times? *
*Edit: Just noticed the Numba dependency, which probably answers this question.