AInventor? 🤖
A few weeks ago I came across this interesting article by two scholars, one in law and the other in AI. In the piece, they claim that some current AI systems break patent law. I won’t pretend to understand the small print, but the general idea is that patent law (obviously) assumes that the inventors listed on a patent are human (or ‘natural persons’).
That’s no longer necessarily the case.
In 2019, two patent applications were filed naming the AI system DABUS as the inventor. The patents were not granted.
But DABUS will not be the last. I’ve written about AI systems in science and art before, and their influence is only growing. Perhaps the best-known example is DeepMind’s AlphaFold, a system that can predict a protein’s structure from its amino acid sequence. Structure prediction has long stymied scientists, which is annoying because a protein’s structure (and changes in it) is key to understanding its function. Now, AlphaFold is rapidly expanding our databases of known protein structures, already providing the groundwork for new drugs. Other AI systems are stirring the pot too, for example by discovering new antibiotics. It’s not just medicine either: AI systems are giving materials science a big boost as well, to pick only one of several examples.
“But,” you might argue, “can’t we simply list the developers of the system as inventors?”
The issue here is that a lot of these systems are deep neural networks. Deep being the key word. Stuff goes in, stuff comes out. What happens in between is anyone’s guess. I mentioned these ‘black boxes’ in an earlier newsletter in a different context.
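To make the black-box point concrete, here’s a toy sketch in Python (just NumPy and random weights, so a stand-in for a trained system, not anything like DABUS or AlphaFold). Every intermediate value is fully inspectable, yet none of it comes with a human-readable meaning attached:

```python
import numpy as np

# A toy "deep" network: eight layers of random weights standing in
# for whatever a real, trained system has learned.
rng = np.random.default_rng(42)
layers = [rng.standard_normal((16, 16)) for _ in range(8)]

x = rng.standard_normal(16)        # stuff goes in...
for i, w in enumerate(layers):
    x = np.maximum(0.0, w @ x)     # ReLU nonlinearity at each layer
    # We can print every intermediate activation, but what any single
    # number 'means' is anyone's guess:
    print(f"layer {i}: {np.round(x[:4], 2)} ...")

print("output:", np.round(x[:4], 2), "...")  # ...stuff comes out.
```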
I’ll leave it to the lawyers to figure out how to handle the intellectual property and patent madness. I’m interested in something else.
Intuitive leaps 🎨
Usually, a claimed invention has to tick a few boxes before being considered patentable. It has to involve patentable subject matter (which can include software); it has to be new (obviously); it has to be ‘industrially useful’ (haha, hello capitalism); and it has to involve an ‘inventive’ or ‘non-obvious’ step compared to existing products or processes in the field of application.
That last fancy part basically means: there has to be an intuitive leap somewhere.
Without getting lost in the woods of what intuition actually means, that sounds like something beyond current AI systems. Or is it? When AlphaGo beat human Go champion Lee Sedol in 2016, many commentators were taken aback by the machine learning system’s unexpected moves. Some called this (a step toward) creativity; others were much less convinced.
Intuitive leap? You tell me.
I guess most of us would still say: “No, there’s something lacking. Intuition, insight, creativity, whatever you want to call it, is more than advanced pattern recognition (which is what AI still is today).”
Then what is missing? Intuition, creativity, and their mental siblings are notoriously hard to define, but many attempts at a definition include ‘combining different ideas/concepts into something new’. That sounds a lot like pattern recognition to me. Most of that combining is probably subconscious. It happens (mostly) in our own black box.
But who's to say what happens in an AI’s black box?
Intention and selection 👇
Don’t worry, perhaps we can still save creativity. One argument against current AI creativity is the lack of intentionality. When we human meatbags set out to do something creative, we intend to create something. We point our mental attention to specific actions that will hopefully result in the creative output we envision. (To add some confusion: one of those ‘intentional’ actions can be mindless doodling. Sometimes you have to convince your mind to be mindless.)
Yet I’m not sure that the intentionality argument is enough. There is, for example, some work on intentional AI in card games and medical settings. The latter paper directly mentions:
…the technically mediated intentions manifest in AI systems,…
Wait a sec. We can still save (part of) the intentionality idea. Technically mediated? That implies:
The intentions have been (at least partially) programmed. That in itself is not enough. You could make the case that some of our intentions - conscious or not - have been ‘evolutionarily programmed’. Find resources, find a partner, and so on.
But technically mediated also suggests that the AI’s intentionality is task-constrained. I’d like to think human creative intentionality is broader than that. Yes, you could see ‘make something new’ as a task in itself, but it comes with many more degrees of creative freedom than current AI systems have in terms of the inspirational source material they can draw on.
That last long sentence hints at something important, I think. We - the meatbags - can be both more selective and random in what we expose ourselves to. Sure, AI systems can sail through the entire corpus of digitized world literature faster and more comprehensively than any human can ever aspire to, but we can read a book, listen to a song, go for a run in the morning, hug a loved one, and watch a sunset. All of those things can inspire us. The breadth of different experiences we have access to is (still?) larger than what AI systems can experience. (Can they even ‘experience’ anything? Leave a comment to let me know what you think.)
We can choose to expose ourselves to randomness more than any current AI system, and our available randomness comes in many more flavors.
To be creative you need creative freedom and creative constraints. The limits of an AI’s programming and available datasets give it the latter, but they also rob it of the former. Truly creative AI will require the capacity to twiddle its virtual thumbs and daydream. (Open-world gaming environments with imposed ‘physical’ constraints could be an interesting avenue here, but more on that later. Subscribe so you don’t miss it. 😉)