I don't think Moore's Law is even relevant here. Brains are simple systems running in massive parallel. If you're taking the emulation approach, you can simply stack up enough systems to do the job.
We already have a good understanding of whole systems running in parallel. That's literally how every modern AI works.
If you've seen the word "tensor" used to describe them, that's what it means: a huge chain of small constant-throughput units, where each unit feeds directly into the next.
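To make that "chain of units" picture concrete, here's a loose sketch (toy sizes and random weights, purely illustrative): each stage is just a fixed-shape tensor operation whose output feeds directly into the next stage, so throughput per stage is constant.

```python
import numpy as np

# Toy illustration of a tensor pipeline: each layer is a fixed-shape
# operation (matrix multiply + nonlinearity) feeding the next layer.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(4)]  # assumed toy weights

x = rng.standard_normal(64)  # input signal
for w in layers:
    x = np.maximum(w @ x, 0.0)  # one constant-throughput unit in the chain

print(x.shape)  # the data stays a fixed-size tensor at every stage
```

The point isn't the math itself, just that nothing in the loop depends on the data's content: every stage costs the same, which is what makes massive parallel stacking straightforward.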
The problem we're running into NOW is a whole other "efficiency layer" around that: it costs orders of magnitude more energy to emulate a neuron than to run a hardware neuron, and it's that much slower as well.
Essentially, it's far more direct to build an "analog difference circuit" than a digital adder, and this is the next threshold we need to cross with regard to Moore's law. There's a reason the floating-point operation is the standard unit of compute: they're expensive.
It's also expensive to convert between analog and digital signals; ADC data rates have been the bottleneck in most systems I've used them in.
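A rough back-of-envelope shows why ADC throughput becomes the bottleneck. The numbers below are illustrative assumptions, not specs for any particular converter:

```python
# Back-of-envelope ADC data rate, with assumed illustrative numbers.
sample_rate_hz = 100e6   # a fast-ish 100 MS/s converter
bits_per_sample = 16

raw_bps = sample_rate_hz * bits_per_sample          # bits per second, one channel
print(f"{raw_bps / 8 / 1e6:.0f} MB/s per channel")  # 200 MB/s
```

One channel at those rates already saturates a lot of bus and storage bandwidth; multiply by channel count and the conversion layer, not the compute, sets the ceiling.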
There's a massive difference in efficiency there, and the brain is NOT a simple system by any means. Neural systems quickly become more complicated than any other computational network we build.
You're missing a critical point, though: you skipped over the scanning technology, and it's critically relevant.
We already have sub-5-micron non-destructive MRI. Destructive scanning achieves even finer resolution. And if we want to go finer still, we have long had the technology to bring a living body down to a chilled temperature (we do this for open heart surgery), at which point the body is, as I understand it, essentially filled with very cold saline. From there we can bring it colder and colder until the head can be removed while still "viable", sectioned, and scanned layer by layer at resolutions far smaller than 5 microns.
The scanning technology has long existed for the non-squeamish among us; ironically, it's the data storage technology that is recent (storing that many images at that resolution means the data requirements are enormous).
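A quick estimate shows how the storage requirement explodes with resolution. The inputs here are assumptions for illustration: a brain volume of roughly 1.2 liters and one byte per voxel.

```python
# Assumed: brain volume ~1.2 L, 1 byte per voxel (both illustrative).
brain_volume_um3 = 1.2e-3 * 1e18  # 1.2 liters expressed in cubic microns

def storage_bytes(voxel_um: float) -> float:
    """Bytes needed to store a full voxel grid at the given voxel edge length."""
    return brain_volume_um3 / voxel_um**3

print(f"{storage_bytes(5) / 1e12:.1f} TB at 5 micron voxels")  # ~9.6 TB
print(f"{storage_bytes(1) / 1e15:.1f} PB at 1 micron voxels")  # ~1.2 PB
```

Going from 5 microns to 1 micron is a 125x blowup, and sub-micron synapse-level detail multiplies it again, which is why the storage side, not the scanner, is the recent development.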
When it comes to operating "at speed"... well, when you can literally suspend your mind to time-travel forward, as you said before, fast-forwarding until that problem is solved becomes an option.
Another "hint" that the scanning element is sufficient: we already have AI systems that can reconstruct linguistic embeddings describing the data moving through active human brains. In less technical language: we have an AI that can read a mind, so we can already scan well enough to read the configuration and structures of a mind.
I suspect the final solution will be some combination of reading general activation states at a lower resolution while someone is "alive and warm", chilling them, sectioning the brain to capture all the fine structures, and then applying the scanned activation pattern on top of the reconstructed fine-structure network to "wake it up".
The riskiest aspect of any of this is: "do you trust the demon putting you in the phylactery to put you into it exactly as you are, or do you worry it will make changes to suit its own goals?"
Strictly speaking, this process would be the creation of a lich.