Time for May’s nuggets. Here are some things that caught my attention this past month. Tiny tentacled monsters, remote science, AI doing IQ tests, and more.
I’ll try to do this monthly. You can get it in your inbox if you…
Of course, maybe you have already subscribed (a million thanks). Don’t worry, there’s a button for you too.
Bonus tool
Because you can never have enough interesting stuff to read: check out Refind. When you sign up (for free), you can choose topics you’re interested in. Refind then scours the web and sends you 7 links to articles, blogs, and so on every day at a time of your choosing. I’ve been using it for a few weeks so far and it has led me to a few interesting reads along the way. A nice morning injection of ideas.
Science
Consciousness or something. A good question to ask if you want a scientist’s eyes to glaze over is ‘What is consciousness?’ There are plenty of theories out there, and little evidence. In this review, the authors try to identify key characteristics of several of these theories to outline a research program that might distinguish between them empirically.
Hybrid origins. In the April nuggets, there was a little piece about a metabolism-first scenario for the origin of life. Now, it’s time for an RNA-protein hybrid. This is a variation on the RNA world hypothesis of life’s origin. RNA, so the idea goes, can replicate itself. Screw the molecular middleman. That RNA could then eventually evolve to encode amino acids that link into proteins. Let there be life. RNA sucks at building proteins by itself, though. This new research illustrates how a molecule that is part RNA, part protein could bridge that theoretical gap.
Remote science. Generally, remote science has been less ‘innovative’ than on-site, in-person research. Well, until around 2010, that is. This report finds a reversal in the innovative potential of remote science, which the authors attribute to better remote work technologies. Personally, I’d also add an ever-increasing shift toward computational tools and cloud-stored data. Here’s a thread by one of the authors:
Tiny tentacled monsters. The good kind, though. Here’s a nice visual of a little tentacled monster that is actually an immune cell checking out its environment, like a cop asking for people’s IDs.
Technology
AI IQ. AI systems are getting a lot better at doing IQ tests, regularly acing them. At the same time, these systems can make really dumb mistakes. Stickers on stop signs can confuse self-driving cars, for example. Let’s hope that’s been rectified; that was five years ago, after all. Still, researchers are questioning the benchmarks we use for ‘smart’ systems. One initiative to do that is Stanford’s WILDS, where developers can test their systems on diverse datasets.
TrAIders. This one reminds me of my previous newsletter offering where I started with a brief story of how financial trading algorithms sometimes do their own thing. A new publication by Deepmind now finds ‘emergent bartering behavior in multi-agent reinforcement learning’. In short, a bunch of virtual agents that produce and consume virtual resources started to build their own economy. Who needs people anyway?
Gato. Another Deepmind advance is Gato, what they call a ‘single generalist agent’, which:
…can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.
One of the researchers launched this tweet thread to claim that all that’s left now is upscaling before we arrive at a true artificial general intelligence:
Not everyone agrees, though. I’m in the ‘disagree’ camp myself.
Your own AI research assistant. This is another interesting thing I’ve been playing around with: Elicit. You basically ask it a question and the system goes through millions of research papers to find the best ones to answer it. Of course, it has its limitations: not all papers are indexed, research is nuanced in ways that don’t always come through in Elicit’s output, and so on. Elicit can give you papers to look into, but that looking into is still something you’ll have to do yourself.
Philosophy
Modeling the simulation. One of the more mind-bending philosophical arguments that has hit the mainstream recently is the simulation argument. It’s built on various older skeptical arguments but has been popularized in its current form by Nick Bostrom in a 2003 paper. Basically, are we living in the Matrix? And how do we know? A new paper now uses mathematical modeling to show that no, we’re probably not in the Matrix. Then again, that’s exactly what they’d want us to think, isn’t it?
The shape of knowledge. This very interesting Noema article goes into how humans have structured knowledge differently over time: from a chain of being, through cosmological spheres and trees of life, to a rhizome-like network today, with the wonderful conclusion:
As we look outward, we sense future epistemes, shimmering over the event horizon of knowability, that have not yet taken shape.