Before we embark on a thought experiment, some necessary caveats:

- We don’t know if artificial general intelligence (AGI) is even possible with our current approaches.
- If it is possible, we (obviously) haven’t figured out how to get there yet.
- Even if we get there, we have no idea what it will look or act like.
On to the fun part.
Emergence 🌐
In 1997, historian George Dyson (son of the late polymath Freeman Dyson - yes, he of the Dyson sphere*) published Darwin Among the Machines. In the book, Dyson presents the idea that the internet might be, or become, the first true digital sentience.
(* based on Olaf Stapledon’s brilliant fiction in Star Maker.)
The idea and book are based on Samuel Butler’s essay of the same name published in 1863. Even then, there was a worry about the robot uprising. In Butler’s words:
… it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.
Sounds a lot like the evil robot/AI overlord scenario Hollywood is so fond of, doesn’t it?
While I don’t think Dyson’s sentient internet is here yet - let’s hope not, it’d be a cat meme-obsessed, porn-addicted troll - there’s a neat idea in there. We’re so obsessed with (and worried about) building AGI that we don’t consider that it might emerge unintentionally. All the sentient creatures we know of so far evolved; they were not designed. Somewhere along the evolutionary yellow brick road, consciousness, sentience, self-awareness, whatever you want to call it, emerged. It evolved, probably gradually, like a set of dimmer switches instead of a single on-off switch. (Very cool recent work suggests - correctly, I think - several dimensions of consciousness, each with its own sliding scale. Reminds me of my recent post on the minds evolution forgot.)
So, why wouldn’t AGI emerge instead of being designed? Remember, we’re in thought experiment town, but there are a few reasons why AGI might be more likely to emerge (unintentionally?) from crowdsourced efforts than to be deliberately designed by this company or that.
Task specificity ✍🏽
We’ve encountered something similar when letting our minds roam into the realm of creative AI. Current AI systems are for something. Solve this puzzle, play this game, identify signs of cancer on this image, etc. Machine learning systems intended for wide deployment are almost always developed by companies. As such, these systems have to be useful. In capitalist lingo, this translates to profitable. This is not a swipe at capitalism (okay, maybe a little one). To survive in our current system, a company has to make a profit.
But let’s imagine a group of people developing a machine learning system without this constraint. There doesn’t have to be a profitable end goal. Just making a system that learns to learn. Sure, at first, this will involve established tasks, such as image recognition, natural language processing, and so on. Yet, the open-ended nature of this collaborative development might allow more and more ‘modules’ (perhaps related to the different dimensions of consciousness mentioned earlier?) to be added. Perhaps, at some point, the system itself begins to contribute? Hello, baby AGI? (Open world games might actually be a good starting point for this. More on that here.)
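To make the ‘open-ended modules’ idea a bit more concrete, here’s a minimal Python sketch. Everything in it is hypothetical - no module actually learns anything; it only illustrates how many contributors could bolt new capabilities onto a shared system over time:

```python
# Hypothetical sketch: an open-ended registry of task 'modules'.
# Each module is just a callable; contributors can add new ones
# without a predefined, profitable end goal in mind.

from typing import Callable, Dict

class ModularSystem:
    def __init__(self) -> None:
        self.modules: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        """Any contributor can add a new capability under a name."""
        self.modules[name] = fn

    def run(self, name: str, data: str) -> str:
        """Route an input to the named module."""
        return self.modules[name](data)

system = ModularSystem()
# Stand-ins for 'established tasks' like image recognition or NLP:
system.register("reverse", lambda s: s[::-1])
system.register("shout", lambda s: s.upper())
print(system.run("shout", "hello"))  # HELLO
```

The interesting (and entirely speculative) step would be a module whose job is to propose further modules - the point at which the system itself begins to contribute.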
All this, of course, only works if you have the resources to do so. This brings me to…
The power of many 🖧
With more people who have personal computers and increased internet connectivity, citizen science has taken off. You can help identify galaxies or explore the human genome from the comforts of your own home.
IT companies might have the expertise and the server stacks, but there is another crucial component needed for building massive machine learning systems: man-hours (woman-hours? person-hours?). Crowdsourced efforts have potential access to (tens of) thousands of person-hour equivalents at any given time. The people contributing also don’t all have to be experts or even be familiar with programming.
Consider DALL·E 2, Midjourney, or DreamStudio. The companies and groups building these text-to-image AIs opened their beta versions to people not involved in the development. Why? Testing. No matter how good your developers are, having a substantial group of people test different prompts can flag unforeseen issues, biases, and bugs far better and faster than a very select group of experts. (Involving experts still has its uses and is crucial, of course. Actually, I’d venture a guess that quite a few company developers moonlight in open-source and crowdsourced initiatives.)
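The mechanics of that crowd-testing loop are simple enough to sketch. Below is a toy Python example (the reports and thresholds are invented for illustration): many testers submit flags on prompts, and the prompts flagged most often bubble up for expert review.

```python
# Hypothetical sketch: aggregating crowd feedback on model outputs.
# Each report is a (prompt, flagged) pair from one tester; prompts
# flagged by enough independent testers are surfaced for review.

from collections import Counter

def triage(reports, min_flags=2):
    """Return prompts flagged by at least `min_flags` testers."""
    flags = Counter(prompt for prompt, flagged in reports if flagged)
    return [prompt for prompt, n in flags.items() if n >= min_flags]

reports = [
    ("a doctor", True),   # several testers flag the same biased output...
    ("a doctor", True),
    ("a sunset", False),  # ...while unflagged or one-off reports stay below
    ("a cat", True),      # the review threshold
]
print(triage(reports))  # ['a doctor']
```

The point isn’t the code, which is trivial; it’s that thousands of testers feeding a loop like this will surface problems no in-house team would ever hit on its own.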
For great results, the group of people involved should probably be quite diverse.
Diverse perspectives 🧑🏿‍🤝‍🧑🏻
Even setting aside AGI for a sentence, bias is one of the most pressing and pernicious problems in today’s AI research. Whatever seed systems will lead to AGI (assuming it’s possible to begin with), we’d be better off if they weren’t excessively biased.
Here too, crowdsourcing can come to the (partial) rescue. While some tech companies are increasingly aware of the issue and several initiatives address algorithmic bias, there is power in numbers. The more people involved, the more likely that diverse perspectives will be present and - hopefully - included.
But this isn’t only about avoiding bias (as much as possible); this is a thought experiment about emerging AGI. A wide diversity of perspectives can be helpful here as well. After all, with more varied ways of looking at things, the chance increases that someone stumbles upon a unique idea that allows AI to take the large and unfathomable leap to AGI.
Three, two, one… jump?
This was a ‘let go of the reins’ thought experiment, but there are indications in the real world that there are worthwhile ideas along these lines. Check out Stability AI, for example, the collective behind Stable Diffusion, the model that has taken AI-generated imagery by storm. While not exactly crowdsourced, they work with developer communities totaling over 20,000 members and explicitly state:
We trust that our differences make us more robust, and so we seek reason within every difference of perspective.