Maybe just a few short years from now, AGI will be announced: a superintelligent AI that thinks like a human, solves problems, and, of course, writes new AGI programs. How would such an AI affect humanity? Would we ever need to invent anything again? Would we ever again need to think up solutions to problems? Could we leave everything to the AGI?
Or maybe you think such an AI would be dangerous. Perhaps we would need one more invention: a way to destroy it. Should we open that Pandora’s box?
Do we need an AGI? Can we manage our own future without it? Would such an AGI necessarily be self-aware? Or would we be ruled by a mindless program that could not be controlled?
Chance favours the prepared mind, so come along to GeekSpeak on Saturday at noon SLT to discuss this upcoming technology shock. Bring your friends!