In Our Brains…

In our brains, almost everything is connected to the world outside of our brains. My friends Ted and Brandon are asking for help thinking about artificial intelligence (AI) (see http://concerning.ai). In my humble opinion: if you want to "get somewhere", you need to think "outside of the box".

What I'm writing here mainly relates to things Brandon and Ted talk about in episode 10. In episodes 11 and 12 they also talk with Evan Prodromou, a "practitioner" in the field. Evan raises (at least) two fascinating issues: 1. procedural code and 2. training sets. I will come back to both of them below.

When I said above that there is a need to "think outside of the box", I was alluding to much larger systems than what is usually considered (note that Evan, Ted and Brandon also touched on a notion of "open systems"). For example: language. So-called "natural language" is extremely complex. To give just a glimmer of its enormous complexity, consider the "threshold" anecdote Ted shared at the beginning of episode 11. A threshold is both a very concrete thing and an abstract concept. When people use the term "threshold", other people can only understand its meaning by also considering the context in which it is being used. This is, for all practical purposes, an intractable problem for any computational device humans might construct in the coming century. Language does not exist in one person or one book; it is distributed among a large number of people belonging to the same linguistic community. The data is qualitative rather than quantitative. Only the most fantastically optimistic researchers would ever venture to try to "solve" language computationally (I was once such a researcher myself). I doubt humans will ever be able to build such a machine, not only because of the vast resources it might require, but also because the nature of (human) natural language is orthogonal to the approach of "being solvable" via procedural code.
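
To make the "threshold" point a little more tangible, here is a minimal sketch (the sense inventory and the lookup function are invented for illustration): a context-free lookup of the word can only return all of its candidate senses, because nothing in the word itself says whether the concrete or the abstract meaning is intended.

```python
# A toy, hypothetical sense inventory for the word "threshold".
senses = {
    "threshold": [
        "the strip of wood or stone at the bottom of a doorway",  # concrete sense
        "the point at which an effect begins to occur",           # abstract sense
    ],
}

def look_up(word):
    """Without the sentence, the speaker and the situation, all a
    procedure can return is the full list of candidate senses."""
    return senses.get(word, [])

print(look_up("threshold"))  # both senses come back; only context can decide
```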

Another anecdote I have often used to draw attention to how far-fetched the aim to "solve language" seems is Kurzweil's emphasis on pattern recognition. Patterns can only be recognized if they have been previously defined. To stay with another example from episode 11, it would require humans to walk from tree to tree and say "this is an ash tree" and "that is not an ash tree" over and over until the computational device were able to recognize some kind of pattern. However, the pattern recognized might be something like "any tree located at one of the places where ash trees were labeled". Indeed, the hope that increasing computational resources might make pattern recognition easier underscores the notion that such "brute force" procedures might be applied. Yet the machine would still not actually understand the term "ash tree". A computer can recognize what an ash tree is IFF (if and only if) a human first defines the term. And if a human must first define the term, then there is in fact no "artificial intelligence" happening at all.
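
To make that worry concrete, here is a minimal sketch (the data, the feature names and the deliberately naive "learner" are all hypothetical): a recognizer trained only on human-supplied labels can end up with a pattern like "any tree standing where ash trees were labeled", and it fails as soon as an ash tree turns up somewhere new.

```python
# Each tree is described by (location, leaf_shape); a human walks from tree
# to tree and supplies the label "ash" or "not ash".
training_data = [
    ({"location": "park_a", "leaf_shape": "pinnate"}, "ash"),
    ({"location": "park_a", "leaf_shape": "pinnate"}, "ash"),
    ({"location": "park_b", "leaf_shape": "lobed"},   "not ash"),
    ({"location": "park_b", "leaf_shape": "needle"},  "not ash"),
]

def train(data):
    """Memorize which locations carried the label 'ash'."""
    return {features["location"] for features, label in data if label == "ash"}

def predict(ash_locations, features):
    """The learned 'pattern' is just: any tree standing where ash trees were labeled."""
    return "ash" if features["location"] in ash_locations else "not ash"

model = train(training_data)

# An ash tree planted somewhere new is misclassified, because the machine never
# had any notion of "ash tree" beyond the human-supplied labels and locations.
print(predict(model, {"location": "park_c", "leaf_shape": "pinnate"}))  # -> "not ash"
```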

I have a hunch that human intelligence has evolved according to entirely different laws: "laws of nature" rather than "laws of computer science" (and/or "mathematical logic"). Part of my thinking here is quite similar to what Tim Ferriss has referred to as "not-to-do lists" (see "The 9 Habits to Stop Now"). Similarly, it is well known that Socrates referred to "divine signs" which prevented him from taking one or another course of action. You might also consider (from the field of psychology) Kurt Lewin's "field theory" (in particular the "force field analysis" of positive and negative forces) in this context, and/or (from the field of economics) the "random walk" hypothesis. The basic idea is as follows: our brains have evolved with a view towards being able to manage (or "deal with") situations we have never experienced before. Hence "training sets" are out of the question. We can make at best "educated" guesses about what we should do in any given moment. Language is a tool set which has evolved symbiotically with us in our environment (much as the air we breathe is also conducive to our survival). Moreover, both we and our language (like other aspects of our environment) continue to evolve. Taken to the extreme, this means that the coexistence of all things evolving in concert shapes the intelligence of each and every sub-system within the universe. To put it rather plainly: the evolution of birds and bees enables us to refer to them as birds and bees; the formation of rocks and stars enables us to refer to them as rocks and stars; and so on.
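
As a toy illustration of the point about training sets (the situations and actions below are invented for the example): an "intelligence" whose knowledge is just a training set has nothing to offer in a genuinely new situation and can only guess, which is roughly where the random-walk analogy leaves us.

```python
import random

# A hypothetical "training set": memorized situation -> action pairs.
training_set = {
    "crossing a street": "look both ways",
    "touching a hot stove": "pull your hand back",
}

def act(situation):
    """Return a memorized answer if the situation was trained on,
    otherwise fall back on an (un)educated guess."""
    if situation in training_set:
        return training_set[situation]
    return random.choice(["approach", "avoid", "wait"])

print(act("crossing a street"))        # covered by the training set
print(act("meeting a novel machine"))  # never seen before: a guess, not a lookup
```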

In case you find all of this theory a bit too theoretical, please feel free to check out one of my recently launched projects, in particular the "How to Fail" page over at bestopopular.com (which also uses the "negative thinking" approach described above).
