Literacy and Machine Readability: Some First Attempts at a Derivation of the Primary Implications for Rational Media

Online, websites are accessed exclusively via machine-readable text. Specifically, the character set prescribed by ICANN, IANA, and similar regulatory organizations consists of the 26 characters of the Latin alphabet, the “hyphen” character, and the 10 Arabic numerals (i.e., the digits 0-9). Several years ago, there was a move to accommodate other language character sets (this movement is generally referred to as “Internationalized Domain Names” [IDN]), but in reality this accommodation is nothing more than an algorithm which translates writing that uses such “international” symbols into strings from the regular Latin character set, and which uses reserved spaces (strings beginning with “xn--”) from the enormous set of strings managed by ICANN for such “international” strings. In reality, there is no way to register a string directly using such “international” characters. Another rarely mentioned tidbit is that this obviously means the set of IDN strings that can be registered is vastly smaller than the set of strings using only the standardized character set approved for direct registration.
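To make that translation concrete, here is a minimal Python sketch using the standard library’s built-in “idna” codec (one implementation of this translation); the name “münchen” is just an illustrative example:

```python
# Minimal sketch: how an "international" name is translated into the
# plain letters-digits-hyphen character set (the IDNA / Punycode scheme).
name = "münchen"  # a label containing a non-Latin character

ascii_form = name.encode("idna")  # -> b'xn--mnchen-3ya'
print(ascii_form)

# The translation is reversible; browsers decode it again for display.
print(ascii_form.decode("idna"))  # -> 'münchen'
```

Note the reserved “xn--” prefix: every translated IDN string lives in that small corner of the regular character set, which is why the registrable IDN space is so much smaller.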

All of that is probably much more than you wanted to know. The “long story short” is that all domain names are machine-readable (note, however, that – as far as I know – no search engine available today on the World Wide Web uses algorithms to translate IDN domain name strings into their intended “international” character strings). All of the web works exclusively via this approved character set. Even the so-called “dotted decimals” – the numbers which refer to individual computers [the “servers”] – are written exclusively with Arabic numerals, though in reality they are based on groups of bits: each number represents a “byte”-sized group of 8 bits, so each could be translated into a character set of 256 characters. In the past several years, there has also been a movement to extend the number of strings available to accommodate more computers, from 4 bytes (commonly referred to as IPv4 or “IP version 4”) to 16 bytes (commonly referred to as IPv6 or “IP version 6”), thereby accommodating 2^96 – roughly 8 x 10^28 – times as many computers as before. Note, however, that each computer can accommodate many websites / domains, and the number of domain names available exceeds the number of computers available by many orders of magnitude (coincidentally, the number of domain names available in each top-level domain [TLD] is approximately 1 x 10^100 – in the decimal system, that’s a one with one hundred zeros, also known as 1 googol).
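As a quick sanity check on those numbers, here is a small Python sketch (standard library only) showing that a dotted decimal is just four bytes written with Arabic numerals, and comparing the IPv4 and IPv6 address spaces:

```python
import ipaddress
import math

# A dotted decimal is just 4 bytes, each written as a number from 0 to 255.
addr = ipaddress.ip_address("192.0.2.1")  # an address from the documentation range
print(addr.packed)        # b'\xc0\x00\x02\x01' -- the raw 4 bytes
print(list(addr.packed))  # [192, 0, 2, 1] -- one symbol from a 256-"character" set per byte

# IPv4 uses 4 bytes (32 bits); IPv6 uses 16 bytes (128 bits).
print((2 ** 128) // (2 ** 32))  # 2**96 times as many addresses
print(math.log10(2 ** 96))      # ~28.9 -- nearly 29 orders of magnitude more
```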

Again: Very much more than you wanted to know. 😉

The English language has a much smaller number of words – a very large and extensive dictionary might have something like 100,000 entries. Even with variants such as plural forms and conjugated verb forms, that will still probably amount to far fewer than a million possible strings – in other words, about 94 orders of magnitude less than the number of strings available as domain names. What is more, most people you might meet on the street probably use only a couple thousand words in their daily use of “common” language. Beyond that, they will use even fewer than that when they use the web to search for information (for example: instead of searching for “sofa” directly, they may very well first search for something more general, like “furniture”).
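The arithmetic behind that comparison is easy to reproduce. Here is a back-of-the-envelope Python sketch, assuming the 37 permitted characters described above (26 letters, 10 digits, the hyphen) and the DNS limit of 63 characters per label under a TLD; it lands in the same ballpark as the googol figure:

```python
import math

ALPHABET = 37          # 26 letters + 10 digits + hyphen
MAX_LABEL_LENGTH = 63  # DNS limit for a single label under a TLD

# Count every possible label of length 1..63 (ignoring the rule that a
# label may not begin or end with a hyphen -- a rounding error at this scale).
domain_names = sum(ALPHABET ** n for n in range(1, MAX_LABEL_LENGTH + 1))
english_strings = 1_000_000  # generous bound: dictionary entries plus variants

print(math.log10(domain_names))                    # ~98.8, i.e. close to a googol
print(math.log10(domain_names / english_strings))  # ~92.8 -- on the order of the 94 cited above
```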

What does “machine-readable” mean? It means a machine can take in data and process it algorithmically to produce a result – you might call the result “information”. For example: there is a hope that machines will someday be able to process strings – or even groups of strings, such as this sentence – and thereby derive (“grok” or “understand”) the meaning. This hope is a dream that has already existed for decades, but the successes so far have been extremely limited. As I wrote over a decade ago (in my first “Wisdom of the Language” essay), it seems rather clear that languages change faster than machines will ever be able to understand them. Indeed, this is almost tautologically true, because machines (and so-called “artificial intelligence”) require training sets in order to learn – and such training sets drawn from so-called “natural language” must consist of expressions from the past, and not even just from the past, but expressions approved by speakers of the language, i.e., “literate” people. So-called “pattern recognition” – a crucial concept in the AI field – always recognizes patterns which have previously been defined by humans. You cannot train a machine to do anything without a human trainer, who designs a plan (i.e., an algorithmic set of instructions) which flows from human intelligence.
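To put the point about “pattern recognition” in the smallest possible terms, here is a hypothetical Python sketch (the words and labels are my own invented examples): the “trained” machine can only ever answer in categories a human defined in advance.

```python
# Hypothetical training set: every label ("furniture", "food") was chosen
# by a human beforehand. The machine cannot invent a category on its own.
training_data = [
    ("sofa",   "furniture"),
    ("table",  "furniture"),
    ("chair",  "furniture"),
    ("bread",  "food"),
    ("cheese", "food"),
]

# "Learning" here is just memorizing which human-defined label goes with
# which pattern of characters.
model = {word: label for word, label in training_data}

def classify(word: str) -> str:
    # The machine can only answer in terms of the labels humans supplied.
    return model.get(word, "unknown")

print(classify("sofa"))      # 'furniture' -- because a human said so during training
print(classify("recliner"))  # 'unknown'   -- no human ever labeled this pattern
```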

There was a very trendy movement, quite popular several years ago, which led to the view that data might self-organize – that trends might “emerge from the data” without the nuisance of consulting costly humans – and this movement eventually led to what is now commonly hyped as “big data”. All of this hype about “emergence” is hogwash. If you don’t know what I mean when I say “hogwash”, then please look it up in a dictionary. 😉
