Why is there no memory loss when a brain's hemisphere is removed?
When it comes to the physical architecture of a computer/brain for an artificial mind, I envision a design much like the brain's, where you can remove a symmetrical half and the rest re-adapts.
To implement something like this with electronics, I imagine a cluster of computer modules (much like a Raspberry Pi cluster) where you can simply replace one module with a new one.
For something like this to work, memory can't live in a single place; it has to be distributed across a global, shared world model that contains all the relationships. In this view, memory is just associations: connections between nodes. So, for example, someone's phone number is part of the mental model of that person; it belongs to the stored essence of that concept.
A key point here is that the essence of a concept, in this case a person and their phone number, can change.
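A minimal sketch of this idea, assuming a toy `ConceptNode` class (hypothetical, for illustration only): each concept is a node, a fact like a phone number is just an edge to another node, and updating the association changes the concept's "essence".

```python
# Hypothetical sketch: memory as a graph of associations.
# ConceptNode and the relation labels are assumptions, not a real API.

class ConceptNode:
    def __init__(self, name):
        self.name = name
        self.links = {}  # relation label -> another ConceptNode

    def associate(self, relation, other):
        # Storing (or overwriting) a link IS the act of remembering.
        self.links[relation] = other

alice = ConceptNode("Alice")
alice.associate("phone_number", ConceptNode("+1-555-0100"))

# Recall is just following the association:
print(alice.links["phone_number"].name)  # prints "+1-555-0100"

# The association can be updated: the concept's "essence" changes.
alice.associate("phone_number", ConceptNode("+1-555-0199"))
```

Nothing here lives at a fixed "address"; the number is only reachable through its relationship to the person, which matches the associative picture above.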
Change only becomes hard to "save" when emotional priorities push against it. For example, changing a core belief about the world could amount to a kind of death, given what the mind has learned and the priorities it holds, such as sustaining the self.
So, getting back to the real implementation, I think the "brain" would need to know when a module is added or removed. The real optimization from more modules is parallelism: the processing gets delegated across the whole cluster.
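A rough sketch of that behavior, under stated assumptions (the `Cluster` class, module names, and round-robin scheduling are all hypothetical, not a real Raspberry Pi API): the cluster tracks membership, and work is re-spread over whatever modules are currently present.

```python
# Hypothetical sketch: a cluster that notices modules joining or
# leaving and delegates tasks across all currently present modules.

class Cluster:
    def __init__(self):
        self.modules = []

    def add_module(self, name):
        self.modules.append(name)

    def remove_module(self, name):
        self.modules.remove(name)

    def delegate(self, tasks):
        # Round-robin: each task goes to the next module in turn.
        if not self.modules:
            raise RuntimeError("no modules available")
        return {task: self.modules[i % len(self.modules)]
                for i, task in enumerate(tasks)}

cluster = Cluster()
cluster.add_module("pi-0")
cluster.add_module("pi-1")
plan = cluster.delegate(["vision", "speech", "planning"])
# With two modules, tasks alternate between pi-0 and pi-1.

cluster.remove_module("pi-1")  # a module is pulled out
plan = cluster.delegate(["vision", "speech", "planning"])
# The same tasks now all land on the remaining module.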
This would also make it portable to handheld devices with less power, and it could be made faster by extending the handheld device with modules over the network. The problem comes with clones. You can't have clones: they diverge as they act, and merging them back could produce strange results. So you can only run one instance of a mind at a time, whether on your local network at home, remotely, or offline on your handheld device.
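One way to enforce the "only one running mind" rule is an exclusive lease that a device must hold before it may run the mind. This is a sketch under assumptions (the `Lease` class and device names are invented for illustration); a real system would need the lease to survive crashes, e.g. with a timeout.

```python
# Hypothetical sketch: an exclusive lease so only one device runs
# the mind at a time. Lease is an assumption, not a real API.

class Lease:
    def __init__(self):
        self.holder = None

    def acquire(self, device):
        if self.holder is None:
            self.holder = device
            return True
        return False  # another device is already running the mind

    def release(self, device):
        if self.holder == device:
            self.holder = None

lease = Lease()
lease.acquire("handheld")   # the handheld starts running the mind
lease.acquire("home-net")   # returns False: the home cluster must wait
lease.release("handheld")   # the handheld hands the mind off
lease.acquire("home-net")   # now the home cluster may run it
```

Because acquisition fails while another device holds the lease, no two copies of the mind can be live simultaneously, so the divergence-and-merge problem never arises.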