When you make a cup of tea, you process thousands of micro-instructions, many of them simultaneously. You gather information from multiple sources, many simultaneously, and simultaneously with processing those micro-instructions. You carry out hundreds of micro-tasks. Even taking a single step requires a stream of tasks, each with multiple instructions drawing on multiple streams of data and information.
You don’t even know you are doing it.
Machines aren’t like that. They don’t know what to do unless they are told what to do.
Machines cannot exist in that white hole because it’s full of physics (which they use), mathematics (which is how they get their instructions), chemistry (not that relevant to our present discussion but vitally important for others) and biology. If we add in philosophy, that is, thinking about thinking, then the challenges of the white hole are far beyond the capability of machines. Hell, they are far beyond the capability of most people – and I include myself in that. Even though I enjoy peering over the edge, sometimes sitting there with my legs dangling over it, I know that I don’t have the knowledge to adequately acquit myself when it comes to asking the correct who, when, why, where, (w)how questions.
But there is a big question that I keep coming back to. I’m not thick; I’m reasonably well, and very widely, read. So if I’m certain that the white hole won’t produce adequate responses, why are so many people convinced they don’t even have to ask?
Why does it seem as if the whole “artificial intelligence” theorem is an artifice built on the unsustainable assumption that machines have comprehension, a state of mind? And as if “machine learning” is actually nothing more than adding information to a database, paired with a piece of code that knows what to check and what to do if certain conditions are satisfied?
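That caricature can be made concrete. Here is a minimal sketch, in Python, of the kind of system the question describes: a database of stored facts plus code that checks conditions and emits instructions. All names here are hypothetical, invented purely for illustration; no real machine-learning system is this simple, which is rather the point of the question.

```python
# A sketch of the caricature: no comprehension, no state of mind,
# just stored facts plus code that checks conditions.

# The "knowledge": nothing but key/value pairs in a database.
database = {
    "kettle_boiled": True,
    "cup_present": True,
    "tea_bag_in_cup": False,
}

def decide(db):
    """Check conditions against the database and return an instruction.
    The code does not 'understand' tea; it only matches stored state."""
    if not db["cup_present"]:
        return "fetch cup"
    if not db["tea_bag_in_cup"]:
        return "add tea bag"
    if db["kettle_boiled"]:
        return "pour water"
    return "wait"

# "Learning", in this caricature, is nothing more than adding
# information to the database.
database["tea_bag_in_cup"] = True

print(decide(database))
```

The machine here does exactly, and only, what it is told: the conditions it can check and the actions it can take were all written in advance by a person.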