Tool use and intelligence: A conversation

Published on the CLR blog, where researchers are free to explore their own ideas on how humanity can best reduce suffering.
Cross-posted from my website on cause prioritization research.

 

This post is a discussion between Lukas Gloor and Tobias Baumann on the meaning of tool use and intelligence, which is relevant to our thinking about the future of (artificial) intelligence and the likelihood of AI scenarios. To help distinguish the participants, I use different background colors.

See also: Magnus Vinding's response to this conversation.

The trigger of the discussion was this statement:

> Intelligence is the only advantage we have over lions.

Tobias Baumann:

I think this framing is a bit confused. The advantage over lions comes from tools (weapons), which resulted from a process of cultural evolution spanning thousands of years, not just from "intelligence". An individual human without technology rules the Earth just as little as a lion does.

Lukas Gloor:

Hominids drove megafauna into extinction across several continents, which is a fairly large 'accomplishment' on a species-level scale.

More importantly, cultural evolution greatly increased our intelligence (in the “goal-achieving capacity” sense). Agriculture led to specialization and more time for people to learn new skills; the printing press led to accumulation of knowledge; better nutrition led to higher IQs; possibly gene-culture co-evolution increased IQ on surprisingly short timescales; etc. There’s a risk of conceptual confusion here because the way we normally use "intelligence" is underspecified. I suggest we distinguish between

1) differences in innate cognitive algorithms

and

2) what difference the above makes when coupled with a lifetime of goal-directed learning and becoming proficient in the use of (computer) tools.

There is a sizeable threshold effect here between lions (and chimpanzees) and humans: past a certain threshold of intelligence, you are also able to reap all the benefits of culture. (There might be an analogous threshold for self-improvement FOOM benefits in AI.)

> The advantage over lions comes from tools (weapons), which resulted from a process of cultural evolution spanning thousands of years, not just from "intelligence".

This is an oversimplification because the lion could not make use of the tools. The availability of tools in an environment amplifies the returns you can get from being intelligent, but the effect need not be proportional. A superintelligence in the Stone Age would most likely be lost, never achieving anything special, because it might simply run out of electricity/resources before it could influence the course of the world in ways that favor its survival and goals. A superintelligence today already has a much better shot at attaining dominance, because more tools are plausibly within its reach. A superintelligence in 100 years may have it even easier, if the tools are not protected from its reach. (E.g. think of the Die Hard movie where a supercomputer controls all the street lights or something – this only works if society is already pretty computerized.)

So the point is:

There are some (very simple and tool-empty) environments where intelligence differences don't have much of a practical effect (and domain-specific “intelligence,” i.e. evolutionary adaptiveness, is more relevant in these environments, which is why the lion is going to eat Einstein but not necessarily the indigenous hunter who has experience with anti-lion protection). But there are also (more interesting, complex) environments where intelligence differences play a greatly amplified role, and we currently live in such an environment.

Magnus Vinding seems to think that because humans do all the cool stuff "only because of tools," innate intelligence differences are not very consequential. This seems wrong to me; among other things, we can observe that e.g. von Neumann’s intellectual accomplishments were so much greater than, and in a sense out of reach of, the accomplishments that would be possible with an average human brain.

Tobias Baumann:

> Hominids drove megafauna into extinction across several continents, which is a fairly large 'accomplishment' on a species-level scale.

I feel uneasy about this because it's talking at the species level, which cannot be modeled as an agent-like process. I would say humans have occupied an ecological niche (being a technological civilization), which transforms the world (suddenly there are weapons) in ways that cause some species to go extinct because they were not adapted to the change.

> cultural evolution greatly increased our intelligence.

Can we agree on intelligence meaning "innate cognitive ability"? I would agree that the statement is true even under this definition (Flynn effect). One can also talk about goal-achieving ability, but I would simply call that "power". (The two are correlated, of course.)

One difference in our thinking seems to be that you see the existence of tools (e.g. guns) as an environment that amplifies how powerful your intelligence is, whereas I look at it as "there are billions of agents, past and present, that have contributed to you being more powerful, by building tools for humans, by transmitting knowledge, and so on". In this picture, the lion just got unlucky because no one worked to make it more powerful. The "threshold" between chimps and humans just reflects the fact that all the tools, knowledge, and so on were tailored to humans (or maybe tailored to individuals with superior cognitive ability).

However, this seems to be mostly a semantic point, not a genuine disagreement. I would agree that intelligence correlates strongly with power in the environment we live in. Whether a smarter-than-human AI would be able to achieve dominance depends on whether it would be able to tap into this collection of tools, knowledge, and so on. This is plausible for certain domains (e.g. knowledge that can be read on Wikipedia) and less plausible for other domains (e.g. implicit knowledge that is not written down anywhere, or tools that require a physical body of some sort).

I would assign a decent probability to a smarter-than-human AGI actually being able to achieve dominance (e.g. it might find ways to coerce humans into doing the things it can't access on its own). What I find more problematic is the notion of AGI itself, or more precisely the idea that there's a single measure of intelligence. It seems more likely to me that we will see machines achieve superhuman ability in more and more domains (areas like Go and image recognition have fallen to machines in recent years), but not in all of them at once, and there will be some areas that are very difficult (e.g. conceptual thinking, big-picture strategic thinking, "common sense", social skills).

It is plausible (though not clear-cut) that machines would eventually master all these areas, but this would not be a foom-like scenario because they would not become superhuman in all of them at once. Also, it's perfectly possible that an AI masters enough domains to radically transform society even while lacking some crucial components (e.g. social intuitions, or the ability to access human tools), which would also be a scenario that differs from the usual superintelligence picture in relevant ways.

Side note: One might argue that empirical evidence confirms the existence of a meaningful single measure of intelligence in the human case. I agree with this, but I think it's a collection of modules that happen to correlate in humans for some reason that I don't yet understand.

[Note: Robin Hanson makes the point that “most mental tasks require the use of many modules, which is enough to explain why some of us are smarter than others. There’d be a common “g” factor in task performance even with independent module variation.”]
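
To make the bracketed point above concrete, here is a minimal simulation sketch (Python with NumPy; all numbers are illustrative assumptions, not something from the discussion): each simulated person gets independent abilities on many cognitive “modules”, each task draws on a random subset of those modules, and task performances nonetheless correlate positively, with a dominant first principal component playing the role of a “g” factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, made-up parameters.
n_people, n_modules, n_tasks, modules_per_task = 2000, 50, 12, 20

# Each person has independent ability on each cognitive "module".
module_ability = rng.normal(size=(n_people, n_modules))

# Each task draws on a random subset of modules; performance on a task is
# the mean ability over the modules that task requires.
task_modules = [rng.choice(n_modules, size=modules_per_task, replace=False)
                for _ in range(n_tasks)]
task_scores = np.stack(
    [module_ability[:, mods].mean(axis=1) for mods in task_modules], axis=1)

# Even though modules vary independently, tasks that share modules correlate
# positively, and the first principal component behaves like a "g" factor.
corr = np.corrcoef(task_scores, rowvar=False)
off_diag = (corr.sum() - n_tasks) / (n_tasks * (n_tasks - 1))
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(f"mean off-diagonal task correlation: {off_diag:.2f}")
print(f"variance share of first component:  {eigvals[0] / eigvals.sum():.2f}")
```

With these made-up parameters the mean pairwise correlation comes out clearly positive and the first component explains a large share of the variance, which is all Hanson's observation requires; nothing here settles whether human "g" actually arises this way.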

Lukas Gloor:

> The "threshold" between chimps and humans just reflects the fact that all the tools, knowledge, and so on were tailored to humans (or maybe tailored to individuals with superior cognitive ability).

So there's a possible world full of lion-tailored tools where the lions are making our lives miserable all day?

Further down you acknowledge that the difference is "or maybe tailored to individuals with superior cognitive ability" – but what would it mean for a tool to be tailored to inferior cognitive ability? The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment. Edit: In the sense of cognitive ability being defined/measured in a way that tends to correlate with this.

Tobias Baumann:

> So there's a possible world full of lion-tailored tools where the lions are making our lives miserable all day?

Yes. If not for lions, it’s at least possible for chimps or elephants.

[Note: The claim is not that such a world is plausible – why would anyone build tools for lions – just that it is physically possible.]

> Further down you acknowledge that the difference is "or maybe tailored to individuals with superior cognitive ability" – but what would it mean for a tool to be tailored to inferior cognitive ability?

Hmm, fair point, but it's at least conceivable that tools are such that they can be used by those with lower cognitive ability, too.

> The whole point of cognitive ability is to be good at making the most out of tool-shaped parts of the environment.

I’m not sure it's the whole point. Intelligence can also be about how best to find sexual mates, gain status, or something like that. Elephant brains are bigger than human brains, but elephant tool use is along the lines of "use branches to scratch yourself", so I have a hard time believing that exploiting tools is the only reason for cognitive ability.

Lukas Gloor:

I don't think tool use and intelligence work that way, but it's an interesting idea and the mental pictures I'm generating are awesome!

-----------------------------------------------------------------

Addendum by Lukas Gloor, from a related discussion:  

Magnus takes the human vs. chimp analogy to mean that intelligence is largely “in the (tool-and-culture-rich) environment.” But I would put it differently: “General intelligence” is still the most crucial factor (after all, something has to explain intra-human variation in achievement prospects), but intelligence differences – here’s where Magnus’ example with the lone human vs. lone chimpanzee comes in – seem to matter a lot more in some environments than in others. Early cultural evolution created an environment for runaway intelligence selection (possibly via the Baldwin effect), and so the more “culturally developed” the environment became, the bigger the returns from small increases in general intelligence.

I’m not sure “culturally developed” is exactly the thing that I was looking for above. I mean something that corresponds to “environments containing a varied range of potentially useful tools,” but it could be fruitful to think more about the specific features of an environment that affect how much of an “edge” you get over less intelligent agents competing in it. Factors for environments where intelligence gives the greatest returns seem to be things like the following:

– Lawfulness

– Complexity/predictability (though maybe you want things to be “just difficult enough” rather than as easy as possible?)

– Variation: many different aspects to explore

– Optimized features, sub-environments: Optimization processes have previously been at work, crafting features of the environment in ways that might be useful for convergent instrumental goals of the competing agents

– Transferability of insights/inventions: Common “language” of the environment (e.g. literal language giving you lots of options in the world/society; programming languages giving you lots of options because of their universal application in computers, etc.)

– Availability of data

– Upwards potential: How many novel insights are left to discover, as opposed to just learning and perfecting pre-existing tricks (Kaj’s “How feasible is the rapid development of superintelligence” paper seems relevant here)

– etc.

(Re the last point: It is probably true that the best poker player today has a lower edge (in terms of how much they win in expectation) over the average player than the best player five years ago had over the average player back then. This is (among other reasons) due to more popular teaching sites and the game in general being closer to the “mostly solved” stage. So even though there nowadays exist more useful tools to improve one’s poker game (including advanced “solver software”), the smartest minds have less of an advantage over the competition. Maybe Magnus would view this as a counterpoint to the points I suggested above, but I’d reply that poker, as a narrow game where humans have already explored the low-hanging fruit, is relevantly different from ‘let’s gain lots of influence and achieve our goals in the real world’.)

My view implies that quick AI takeover becomes more likely as society advances technologically. Intelligence would not be in the tools, but tools amplify how far you can get by being more intelligent than the competition (this might be mostly semantics, though).  

(Sure, to some extent the humans are cheating because there is no welfare state and no free education for chimpanzee children. But the point is that even if there were, even if today’s society optimized for producing highly capable chimpanzees, they wouldn’t get very far.)

