Is the underlying spike-based computational model more fundamental or necessary than the rate-based one? If so, could higher-level computational models built on each still be computationally equivalent? Or is there a significant difference in their expressive power, e.g. in their ability to achieve Turing completeness?
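To make the distinction I'm asking about concrete, here is a minimal sketch (my own illustration, not a standard implementation; all names and parameter values are hypothetical) contrasting the two model classes: a rate-based unit outputs a continuous firing rate, while a leaky integrate-and-fire (LIF) unit emits discrete spikes, whose count over a window plays the role of a rate.

```python
def rate_neuron(drive, gain=30.0):
    # Rate-based model: output is a smooth, continuous firing rate
    # (a rectified-linear transfer function, chosen here for illustration).
    return gain * max(drive, 0.0)

def lif_spike_count(drive, T=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    # Spike-based model: leaky integrate-and-fire neuron simulated with
    # forward Euler; information is carried by discrete threshold crossings.
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += (dt / tau) * (drive - v)   # leaky integration toward `drive`
        if v >= v_th:                    # threshold crossing emits a spike
            spikes += 1
            v = v_reset
    return spikes  # spikes per window of length T

# Subthreshold drive produces no spikes at all, while the rate model
# still outputs a graded value; above threshold, the spike count grows
# monotonically with drive, approximating a rate code.
print(rate_neuron(0.5), lif_spike_count(0.5))
print(rate_neuron(1.2), lif_spike_count(1.2))
print(rate_neuron(1.5), lif_spike_count(1.5))
```

The question is whether this difference at the bottom (continuous rates vs. discrete, timed events) survives abstraction, or whether networks built from either primitive end up in the same computational class.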