2005-05-02
Information Technology and Trust
This essay, in two parts, is meant both as an introduction to the basic concepts of information technology in a context relevant for members of the legal community, and as a positing of IT as a channel for the promotion of trust, both complementary to, and in competition with, legal, mechanical, psychological, and socio-political systems.
The following texts are based on ideas originally presented in "An Essay on Information Technology and Trust" published in "Legal Management of Information Systems" Cecilia Magnusson Sjöberg (ed). Studentlitteratur (2005), and the preparatory notes for my lectures for the Master Programme in Law and Information Technology offered by the Law and Informatics Research Institute, Department of Law, University of Stockholm.
Table of contents
Part 1 – Technology
1.1 What’s different this time?
1.2 The great commonwealth
1.2.a Commonalization
1.2.b Commoditization
1.2.c Completeness
1.2.d Communalization
1.3 The evolution of Information Technology
1.4 World 3
1.5 dW3
1.6 Language
1.7 dW3 that works
1.8 It doesn’t count
1.9 Tiger Shot a Birdie
1.10 Local and Global Taxonomies
1.11 Contextual Frameworks
1.12 W3 technology
1.13 Herman’s punch cards
1.14 The design of a data system in terms of completeness
1.15 Who you gonna call?
1.16 Programs
1.17 After Hollerith
1.18 Task-specificity
1.19 Ask not what your dW3 can do for us
1.20 The Turing Test
1.21 If we think they think
1.22 Hi, I’m Lara Croft, can I help you?
Part 2 – Trust
2.1 The bell tower
2.2 Social Capital
2.2.a The China Syndrome
2.3 Transaction costs
2.4 Definitions of trust
2.5 Verospheres
2.6 Redundancy and Tolerance
2.7 Real time
2.8 A cow will wince
2.9 Nurture – Nature – Normal
2.10 Who’s buying this round?
2.11 If no logo – what?
2.12 The world wide web of content description
2.13 The Marlboro Man versus the Surgeon General
2.14 Six degrees
2005-04-28
Part One - Information Technology
1.1 What’s different this time?
Technological advancement, particularly rapid advancement, is by nature unsettling. We may hail technologies as revolutionary or evolutionary or even deterministically necessary, but no matter what we call them, there will always be an old way of doing things that must, often reluctantly or obstinately, make way for a new way of doing things, and there will be winners and losers in the scuffle. Of course the latter don’t write history.
In a startling display of enthusiasm for technological progress, the British Parliament, in 1812, passed an act stipulating the death sentence for those found guilty of damaging the new machines that were rationalizing the textile industry. The act was not an empty threat. In early 19th century England, thousands of workers, shut out literally overnight from their livelihood, desperate over their inability to feed their families and the discovery that their acumen and skills were redundant, violently revolted against the mechanical innovations taking place in their industry. Many were arrested, shot or hanged, including one Abraham Charleston, 12 years old, who is said to have cried out for his mother from the gallows. It is sadly ironic that this group of desperate workers, called Luddites, have given their name to those who are said to “irrationally” oppose technological progress.
In an economically networked free market society, long term resistance to rationalization in the interest of job retention can only be carried out by the state exercising its prerogative to redistribute wealth, or through individuals practicing vocational voluntarism – the willingness to work for less, for glory, or for nothing at all. Improvements in cost-benefit ratios for one sector of a networked economy will inevitably put pressure on all other sectors. As Vilfredo Pareto pointed out over a century ago1:
As the proportion between capital and labour changes, the former becomes less precious, while the latter grows in value. Wherever technically possible, the machine replaces man’s physical energy. This can be done economically, among civilized nations precisely because there is no shortage of capital; among the other nations the conversion, though technically possible, is not often economical, and therefore man has a greater share in the physical work2. Hence where there is a great abundance of capital, man turns necessarily to work in which the machine cannot compete with him.
Technology does cost people their jobs – and yes, it creates jobs as well, but that is scant solace for those who cannot migrate from the one to the other, scrambling to find work in which the machine cannot compete with them. The cost of labour in the developed world, affected by an abundance of capital, has risen almost without a hitch since Pareto wrote those words. Labour costs more – technology costs less – and the transfer of workloads from humans to machines will not decrease – on the contrary.3 The effects of abundant capital and technology not only promote rationalization, but also encourage the proliferation of services and goods that are immune to rationalization, as workers and entrepreneurs seek a safe haven from technological encroachment of their jobs.
Nevertheless, it is not just the inevitable loss of unprofitable jobs that causes us to resist technology, but also the perceived loss of control, or in the context of this essay: the undermining of trust. We mistrust those preaching disruptive change, we mistrust (selectively) technology: we fear loss of security and privacy, loss of psychological well-being, loss of social capital, and loss of cultural identity.4 And yet we continue to innovate, to rationalize, and to disrupt, because we can so readily see how, with the help of machines, our needs can be served – faster, safer, more accurately, and more economically than without them.
Progress, in a nutshell, is the extension of range. How far we can reach – spiritually, materially, socially, and epistemologically. We, albeit disproportionately, extend our reach outwards, internalizing goods, services, pleasures and knowledge via a coalescence of innumerable communication systems, some of which are the traditional fabric of human societies, some institutional, some the product of technological achievement. It is the goal of this essay to touch upon the interactions of these systems.
1The Rise and Fall of the Elites, p. 75 (Palgrave Macmillan, 1993)
2The well-to-do in the underdeveloped world thrive on cheap labour luxury. The well-to-do in the developed world thrive on cheap technology luxury materialized by cheap labour.
3This effect is of course mitigated by free trade between nations with widely divergent labour costs. Free trade optimists believe that specialization will eventually replace labour cost inequality as the prime mover of trade. But, of course, today specialization is very much a product of labour cost disequilibrium.
4We also fear things that think and move faster than we do, at least initially until we can master them.
1.2 The great commonwealth
1Ned Lud or General Ludd, who gave his name to the frame-smasher movement, might have been a mythical figure. See http://www.usu.edu/sanderso/multinet/lud1.html
2For contrasting arguments see Debora Spar’s Ruling the Waves (Harcourt Brace, 2001)
1.2.a Commonalization
It is generally agreed that Adam Smith, when he suggested that the division of labour leads to inventions because workmen engaged in specialised routine operations come to see better ways of accomplishing the same results, missed the main point. The important thing, of course, is that with the division of labour a group of complex processes is transformed into a succession of simpler processes, some of which, at least, lend themselves to the use of machinery.
Little could the economist Allyn Young, in writing the above in 1928, have dreamed to what extent "simpler processes" would come to dominate industry. Nor that one particular simple process – digitalization – would permeate all economic endeavour.
Information Technology thrives on an unparalleled commonalization of human effort. The common denominator, the simplest of all processes, is the bit, the material expression of what is logically true or false. In the IT world everything is cached, counted, carried and consumed in the form of digital bits: Chinese opera recordings, hip bone x-rays, customs declarations, pictures of nude people in compromising poses, building plans, court orders, Shrek II, barometric pressure at the North Pole, the simultaneous chatting of tens of millions of teenagers, the nerve system of your automobile.
Since so much enterprise and activity virtually share the same laboratories, production plants, and distribution channels, the efforts of each are to the benefit of all - one for all and all for one.
Long before the invention of the transistor Young wrote:
Every important advance in the organization of production, regardless of whether it is based upon anything which, in a narrow or technical sense, would be called a new "invention," or involves a fresh application of the fruits of scientific progress to industry, alters the conditions of industrial activity and initiates responses elsewhere in the industrial structure which in turn have a further unsettling effect. Thus change becomes progressive and propagates itself in a cumulative way.
Increasing Returns and Economic Progress, The Economic Journal, volume 38 (1928), pp. 527-42
Economists speak of network effects or path dependencies.1 If one person takes a particular route through a field it will be easier for those that follow to take the same path, and the more that take the path, the wider it becomes for those that follow2.
Classic examples of path dependencies in technology are the QWERTY keyboard and the VHS player/recorder3. Whether the network effects of these two solutions shut out technically superior competitors or not is still contentious, but regardless, their scope is limited by the singularity of their utility and purpose.
The path of digitalization is unbounded, compounding and self-reinforcing: the bit is a path, the computer is a path of paths, the digital network is a path of paths of...
As the advantages of digitally encoded music relegate analogue encoding and fossil-fuel-powered distribution to our museums of engineering, consumers, producers, distributors and equipment manufacturers will all, in their assessments and expectations of each other's choices, collectively supercharge this changeover. Producers choose digital because buyers choose digital; buyers choose digital because producers do so. This is a network effect.
But when the photographic industry takes the same digital path as the music industry and countless other industries, then all sectors collectively will supercharge that path. At the moment we are witnessing the technological commonalization of telecommunications, broadcast entertainment and data networks as telephones become mobile computers and computers work as telephones, televisions become computers and computers become televisions4.
1http://www.utdallas.edu/~liebowit/palgrave/palpd.html
2Further, path dependencies are psychologically reinforced. A decision maker's attitude towards a concept or a product's feature set tends to change ex post the decision, and her preferences will be readjusted in line with the actual outcome of her choice. For example, in a choice between apples and pears one might choose the former on the strength of their flavour despite the occurrence of fewer seeds in the latter. After a choice was made in favour of apples, the disadvantage of the seeds would be discounted. Dan Simon, Daniel C. Krawczyk, and Keith J. Holyoak, Construction of Preferences by Constraint Satisfaction http://lawweb.usc.edu/faculty/documents/ConstructionofPreferencesbyConstraintSatisfaction.pdf
3The keyboard skirmish concerned not just what sort of typewriter you or your company might buy but also as a consequence of that purchase, what sort of typing skills you would be investing in. And the video recorder fight entailed, beyond what make of machine you chose, the investment in media (cartridges and tapes) that would work in only one type of machine. Path dependencies have network effects. Eventually a large pool of QWERTY trained typists and VHS standard cassettes would seal the fate of the competing Dvorak and Betamax technologies. http://www2003.org/cdrom/papers/alternate/P552/p552 FitzPatrick.html
4Actually there is still one large stumbling block for the convergence of television and computing – the screen technologies: interlaced for television and progressive for computer monitors are not readily compatible.
1.2.b Commoditization
In every great technological epoch, commoditization - the standardization of methods, tools and parts in order to facilitate interchangeability - has served as a sister ship to the division of labour and task specialization. The power of commonalization as described in the previous section drives IT infrastructure into a Legoland of interchangeable modular parts in both hardware and software. The components of the former - processors, memory chips, storage devices, and communication channels - are primarily assemblies of commodity items working equally well in thousands of competitive offerings. For the latter, despite the strategy of industry leaders of building walled gardens of computing tools incompatible with those of rival firms, pools of interoperable, interworking software abound. This is what the current software trend known as web services1 is all about – proprietarily unencumbered application modules that facilitate successful interaction between firms and organizations with previously incompatible technology2.
The traditional view of specialization amongst economists is that larger markets provide for the division of labour, and a narrower range of skills. Yet, as Becker and Murphy point out, the various costs of "coordinating" specialized workers who perform complementary tasks, and the amount of general knowledge available, will set limits to just how far this specialization can reach.3
Due to the unprecedentedly high levels of commonalization and commoditization occurring in IT, coordinating costs are dramatically reduced as specialized tasks are minimized and shifted to the outer edges of a commons-based, generalized problem domain.
Of course, at some point specialization must interwork: the labourers in Adam Smith’s famous pin factory must be able to pass on the results of their specialized skills to the next in line along a serial path – in the production line, the output of one task is the designated input of the next. But the interworking of software modules, when carried out at appropriate levels of granularity, is not serial but networked. The output of one task is potentially the input of any other task, and vice versa.
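A toy sketch of the difference, assuming three made-up modules that all consume and produce the same commodity form (plain text): in a production line their order is fixed, but in a network any module's output can become any other module's input.

# Three made-up modules sharing one commodity data form: text in, text out.
def translate(text): return text.upper()        # stand-ins for real work
def annotate(text): return text + " [checked]"
def tidy(text): return " ".join(text.split())

# A production line fixes the order: the output of one task is the
# designated input of the next.
serial = tidy(annotate(translate("some  record")))

# A network does not: any module's output can be any other's input.
networked = annotate(tidy(translate("some  record")))
print(serial)      # SOME RECORD [checked]
print(networked)   # SOME RECORD [checked]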
In his amazingly concise, poignant 1943 “What is life?” lecture, the physicist Erwin Schrödinger noted the importance of size. He pointed out that molecular components in a system must be sufficiently large or significant in order to resist mutation, yet sufficiently small or insignificant in order to be replaceable. The stability of large systems, known in biology as homoeostasis, is attained when their molecular components (cells) are constantly replaced and regenerated. These dynamically reproducing “large systems” will in turn play a similar role as the replaceable components of still larger systems. We can say that molecular systems are commoditized.
In software, when we speak of the granularity of systems, we are paralleling Schrödinger’s observations on stable, yet evolving, organic life forms. In granular systems, software objects are large enough to resist mutation, yet small enough to join together as the replaceable parts of a larger whole. As in the poetic title of a book by David Weinberger, they become Small Pieces Loosely Joined.
1See http://www.w3.org/2002/ws/
2It should also be pointed out that, increasingly, consumers own and operate their own technology when interacting with firms. Technological interoperability is no longer merely a firm to firm consideration.
3Gary S. Becker and Kevin M. Murphy, The Division of Labor, Coordination Costs, and Knowledge, Quarterly Journal of Economics, Vol. 107, No. 4 (Nov. 1992), pp. 1137-1160
1.2.c Completeness
According to a "law" formulated by Robert Metcalfe, the inventor of Ethernet:
The value of a network increases in proportion to the square of the number of its nodes1
The catch to Metcalfe's law is that in speaking of value, he is not including the costs necessarily assumed in deriving it. To determine the economies of expanding networks, one would have to factor in the cost of adding nodes. In the construction of many sorts of networks these costs are prohibitive and non-uniform, wiping out any gains promised by Metcalfe's law.
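To make the arithmetic concrete, here is a toy Python sketch comparing Metcalfe's n-squared value with a simple linear cost of adding nodes. Both the value-per-link and the cost-per-node figures are invented for illustration.

# Metcalfe's n-squared value versus a linear cost of connecting nodes.
def metcalfe_value(nodes, value_per_link=1.0):
    return value_per_link * nodes ** 2

def connection_cost(nodes, cost_per_node=50.0):
    # In a physical network every added node carries a real cost.
    return cost_per_node * nodes

for n in (4, 10, 50, 100):
    value, cost = metcalfe_value(n), connection_cost(n)
    print(n, "nodes: value", value, "cost", cost, "net", value - cost)

With these made-up figures the squared value only overtakes the linear cost beyond fifty nodes – and if costs rise non-uniformly, as they do in real construction, the promised gains can be wiped out altogether.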
In this essay we are going to take up and differentiate between two types of networks. There are of course many, but we will concern ourselves here with those most apparently relevant to IT: the physical and the virtual. The physical IT network is comprised of hardware: computers, switches and cables. The virtual network is comprised of data.
The physical network (which we will later call a W1 network) cannot escape the effects of diminishing returns, nor for that matter can the virtual (which we will come to call a W3 network). But the evolution of digital technology radically favors the latter over the former. The advances made in hardware development, production and implementation, dramatic as they may be, are trivial in comparison with the advances made in exploiting the data that resides upon them.
As I point out further on, these two networks are not only interdependent upon each other - they are in competition. But, for the sake of describing a particular sort of completeness - the virtual - I will assume the pre-existence of a hardware network, albeit a definitely incomplete one. I am not discounting the importance of discussing hardware network completeness; there are all sorts of social, political and economic issues of rank to be hashed out - just not here.
When speaking of virtual networks residing upon pre-existent hardware networks, here is my complementary law to Metcalfe's:
The cost of a virtual network does not increase in proportion to the square of its nodes.
If both Metcalfe’s law and mine are correct, virtual networks would seem capable of reversing the law of diminishing returns, and the “the more – the merrier” axiom, bounded by no practical limits, should rule the day. At some point in maximizing a network's value, as the cost curve for increased membership consistently moves in the opposite (downward) direction to the upwards pointing value curve, the inclusion of each new node will be determined solely by the relevance of its potential contribution to other nodes.
The possibility of near-zero transaction cost environments gives rise to theories of completeness, where "all the bits that fit" are included in a virtual network, increasing the population until every member, and every bit of data, of even the slightest significance is admitted to the fold.
The net gain of our actions is the benefit derived minus the costs incurred. It is rational to assume that costs increase with effort, time and distance - the further one must walk to fill a bucket of water, the more energy expended, and the less time available to do something else. In a physical network, each connected node represents a cost, be it for cables, hardware, resource time, or electrical power. If a new node is added, the network designers must consider the extra costs incurred in relation to the benefits gained.
If we take telephone networks as an example, the infrastructure costs of adding lines and nodes is not uniform – land line installations in geographically remote areas will obviously be disproportionately costly compared to city installations, and many societies provide subsidies or demand bulk service commitments from their telcos to adjust for this. Yet taken as a whole, the costs in infrastructure and power consumption for the world's telephone network are easily offset by gains in utility. Though some infrastructural nodes are “cheap” to include and others “expensive”, in sum, benefits outweigh costs.
Now if we look past issues of infrastructure (hardware and lines) to matters of address-structure we will uncover a fascinating phenomenon. The ubiquity of telephones (in the developed world at least) is so commonplace and humdrum that we forget the startling fact that we have created a network that is immune to overcrowding. Our telephone system is essentially global and homogeneous – using globally accepted standards, and no matter how many new nodes are connected, the benefits of expanding membership outweigh the costs. There is no economic rationale for creating two or more non-connecting telephone networks, no matter how many new telephones, telephone companies, and telephone lines are added to the one that already exists.2
Our global postal network does the same trick - through the technology of distribution we extend past the limitations of time and space. Imagine the following ridiculous conversation.
- Call Fiona and tell her we have approved the plan.
- I can't call her – she is on GEC and we are on DBB
- You mean the two systems can't connect? How stupid. Well then write her a letter!
- I can't write her. She is on FedEx and we are on BRM. I can't email her either. She's on AOL and we are on Gmail.
- Why isn't she on Gmail too?
- Gmail is full - overcrowded. They have no more room for new subscribers.
Actually it is the complexity of addressing and connecting nodes that puts the heaviest burdens on digital networks and, once (and if) those problems are solved, the complexity of addressing and accessing specific content on or across nodes. Completeness is desirable if it adds to efficiency of access – if not, then the division of networks and the compartmentalisation of resources are preferable alternatives.
The complaint, so often heard in the early stages of Internet development – “too much information” – gradually loses its rationale, as each additional node (or web site) contributes to the description of the entire Internet corpus, making the discovery of appropriate resources easier – not more difficult.
From a state of completeness – we design tools of selectivity. Google – the catalogue of all the web pages, Amazon – the catalogue of all the books, and eBay – the catalogue of all the PEZ-dispensers, are prime examples of the advantages of cleverly filtered completeness, but above all it is the Internet itself, the medium of Google, Amazon and eBay, that proves the robustness and scalability of completeness in digital networks.
According to the reigning theory of communication, which we will deal with more further on, efficiency in transmitting information is increased by pre-existent knowledge – the more someone knows, the less you have to tell them, and of course inversely, the more you know, the less you have to ask. By removing once necessary but now artificial barriers between data stores in our networks we are increasing efficiency – not reducing it. More costs less. As Bakos and Brynjolfsson have written:
[...] the near-zero marginal costs of reproduction for digital goods make many types of aggregation more attractive. While it is uneconomical to provide goods to users who value them at less than the marginal cost of production, when the marginal cost is zero and users can freely dispose of goods they do not like, then no users will value the goods at less than their marginal cost. As a result economic efficiency and, often, profitability are maximized by providing the maximum number of such goods to the maximum number of people for the maximum amount of time.3
1 For example, if you have four nodes, or computers, on a network, say, an office intranet, its "value" would be four squared (4^2), or 16. If you added one additional node, or PC, then the value would increase to 25 (5^2). See http://www.mgt.smsu.edu/mgt487/mgtissue/newstrat/metcalfe.htm
2 Of course one might do so for reasons of secrecy.
3 Bakos and Brynjolfsson, Aggregation and Disaggregation of Information Goods, in Internet Publishing and Beyond (MIT Press, Cambridge, 2000)
1.2.d Communalization
IT is the first world-class demonstration of technical innovation as a social act. People have always made ideologically motivated contributions to society, or worked for no greater reward than self-satisfaction and self-esteem, but never on the scale made possible by the Internet. The IT commonwealth provides itself with remarkable tools of collaboration, enabling product development transparency, simultaneous consumer feedback, frictionless distribution and instant donor gratification.
IT rearranges the structure of social capital and challenges the traditional conception of infrastructure. To paraphrase Winston Churchill: never could so few create so much, of value to so many, at so little cost, and in so little time – and then give it away for free. Yet we should also keep in mind that open source software is not just the utopian vision of the digerati, but an integral part of the long-term strategies of several of the world’s largest software companies. Communalization is not the equivalent of “free” software or “open source” software1. Donor communities also form around commercial products.
Rationally, one would think that our willingness to give things away would depend upon the cost incurred in obtaining or creating them. A watch that took 300 hours in the making would probably seem more valuable to its creator than a watch she spent 30 hours making. A book that costs 40 Euros would be a more unselfish gift than a book that cost 10. But these are what we call material, non-replenishing goods: give them away and they are gone from your possession.
Goods in Information Technology, as has been pointed out by many writers, are non-depleting – give them away and yet you still have them, if no longer exclusively. It appears as if we are prepared to make gifts of non-depleting goods, no matter the cost incurred in obtaining them. We might hold tightly to a watch that took 30 hours to make and share freely ideas, solutions and methods that took thousands of hours to work out.
While the commons can be a physical resource owned jointly by all citizens or members of a community, it can also be seen as a social regime for managing common assets. One type of commons, the gift economy, is a powerful mode of collaboration and sharing that can be tremendously productive, creative and socially robust. The Internet is a fertile incubator of innovation precisely because it relies heavily upon gift-exchange. Scientific communities, too, are highly inventive and stable because they are rooted in an open, collaborative ethic. In some gift economies, the value of the collective output is greater as the number of participants grows — “the more, the merrier.” The result has been called a “comedy of the commons,” a windfall of surplus value that over the long term can actually make the commons more productive — and socially and personally satisfying — than conventional private markets. (New America Foundation, 20042)
As Adam Smith pointed out, the division of labour depends upon the extent of the market. As the "market" embodied in the Internet expands, so does room for specialization. But the modular, commoditized nature of Information Technology and the commonality of its building blocks create a "higher order" of specialization. Specialization in the majority of tasks deals with the assembling and arrangement of commonized solutions, often in the public domain, thus promoting fruitful interaction between specialists previously isolated by the highly specific nature of their tasks.
1A good summary of Open Source and Free Software can be found at http://www.dwheeler.com/oss_fs_why.html
2See http://www.newamerica.net/Download_Docs/pdfs/Pub_File_649_1.pdf
2005-04-27
1.3 The evolution of Information Technology
As we have noted, technological innovation seldom comes without resistance, be it from legacy technologies and business models, or in defence of the entrenched human position. “Will we be needed in the future?” asked Bill Joy in a famous article in Wired Magazine, contemplating an eventual droidian takeover of the planet. Our more salient fears concern robots – no doubt due to their anthropomorphic characterization in popular media. Yet the fear of machines is perhaps misdirected. We might show more foresight in fearing our books and plays and works of art – or any other objectification of knowledge you choose to think of. As the power of information technology increases, the relative influence of our computing machinery diminishes. Does that sound confusing? The evolution of IT, and of many other technologies, readily demonstrates that if there really were a struggle between machines and the data they work with, machines would be the losers – hands down.
Primitive machines place heavier constraints on what goes into them and what can come out, while sophisticated machines allow much greater flexibility at both ends. We should not look at technological progress merely in terms of increased power, speed and output, but also in measures of reduced distortion in the goods processed. The elimination of costly pre-process modifications to the raw materials of manufacturing is a keystone of industrial efficiency. Applied to information technology, this translates to the ability to work with our data as it is, without subjecting it to the contortions caused by stringent media constraints or grammatical, syntactical, or semantic rearrangement. We are entering an age where a modern purveyor of IT does not tell the customer what she must do to get her data to work with the product, but rather asks what must be done to service her data as it is.
1.4 World 3
There are definition problems when talking about the entities of “Information Technology”: what should we call the main ingredients? Intuitively it would seem that there is a technological domain, consisting of machines and programs that work with something called information. Inversely, information would be that which machines and programs can work with. Well, not only is that a recursive definition, but the word information is misleading. Particularly for students of communication theory, who learn that information can only be that which is not already known – information is news; once we have it, it ceases to be information. Does it then become knowledge? Do we really want to think of all the terabytes on the Internet as knowledge? Who really knows when we should say information, and when we should say knowledge? What is the difference anyway?
In this essay I will often speak of the hardware of IT interchangeably as machines and computers, and I choose to call the soft stuff dW3, digitalized world 3. Here is why.
Borrowing from a scheme of classification made famous by the philosopher of science Karl Popper1, we can avoid information–knowledge definition disputes by calling all data (knowledge, information, signification, representation, what have you) external to our minds world 3 or W3, and whatever goes on inside our minds – thoughts, memories, dreams, perceptions and so forth – world 2. Finally, world 1 is the physical world – all the atoms of the universe in whatever form they happen to combine – be it super novas or milkshakes, jumbo jets or comic books.
In W3, the production of our subjective minds finds a support structure and varying vestiges of permanence. Languages are the dominating vehicle of human discourse and consequently the aristocracy of W3. Joining language is art, architecture, calculation and symbolic science, bookkeeping, music, photography, sound recording, graphic imagery of processes and systems, cartography, signs and much more. W3 is the human mindprint. It might be information at some times, knowledge at others, or just somebody having aimless fun.
The significant thing about W3 is the degree of autonomy it enjoys. Existing outside our heads, objectified in some form of representation that is decipherable by more than one subjective mind with some degree of shared meaning or impetus, W3 lives on, though dependent on its W1 vestige – independent of its W2 origins.
It follows of course that our mental W2 production, once out of the chute, must have some form of W1 (something physical) to hang on to. Some W1 material must be rearranged to represent our mentalese; some marks scratched on a stone, or ashes on the wall of a cave. Take, for example, the idea of a house. Someone can think of a house and then go right out and build it from her vision of that house – off the top of her head so to speak. This W1 house becomes the manifestation of her W2 conception. Alternatively, she could have drawn some plans first and made up a list of the materials she wished to use. Both the plans for the house and the house itself are W3 objectifications of a mental vision, and they each have their advantages as such: You can’t live in the drawing plans and you can’t make 10 copies of the physical house at Kinko’s2.
Anything we can dream of, or reason about; anything we can hear, or see, or touch, can be replicated in W3, though the representation, the form and expression, of the reproductions will greatly vary. Some representations will seek exactitude – others beauty and eloquence, or even purposeful distortion. Yet once represented, once incarnated in W3, all such representations are themselves reproducible.
1Karl Popper is no longer with us to protest against the simplification I have made with his three worlds. He would no doubt argue that I have confused objective with objectified; that world 3 status should be reserved for objective knowledge, or that world 2 is also capable of holding world 3 content – these points are somewhat vague in his texts, and I make no pretence of accurately representing his position. I believe the utility of defining worlds 1-2-3 as I have done here is apparent. There are of course other classification schemes – some of which would also distinguish between types of knowledge based on their a priori truthfulness. I mention this distinction here, because a priori truths, whether they are true or not, are the base of all digital computer programming.
2Kinko’s is a chain of stores in the United States where one goes to print and copy documents.
1.5 dW3
We don’t store physical objects on our computers, only some representational extraction or picture of them; nor do we have, at least for the present I should add, any way of getting our W2 thoughts unintermediated into our computers, or back again. We must always go through some in-between W3 representation; drawings, words, sound waves, whatever.
Our machines can swallow any W3 representation we can make – as the symbolic manifestation of an idea, but not as a corporeal object. Ideas expressed in written language, or numbers, or some notational system such as that of music composition, can be passed on to our computers quite smoothly, since written language and other notational systems are already quantified to the equivalent of digitalization. But ideas as expressed in physical houses and bridges and paintings and music must be extracted and quantified first (a process which of course preceded computers). Once that is accomplished, these expressions may be assimilated along with language and other notational systems into the digital realm. W3 becomes dW3, as the data of all media is digitalized; reduced to a serial representation of ones and zeros. The silicon wafers, magnetic filings, reflecting surfaces, and light years of copper and glass cables of the “world3” age become the hat rack for dW3.
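A minimal illustration of this reduction, in Python: a word and a number, two already quantified W3 representations, end up as the same kind of serial bit string.

# Two already quantified W3 representations - a word and a number -
# reduced to the serial ones and zeros of dW3.
text = "house"
print(" ".join(format(byte, "08b") for byte in text.encode("utf-8")))
# 01101000 01101111 01110101 01110011 01100101

number = 529
print(format(number, "b"))
# 1000010001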
But the process of transferring W3 into dW3 is not unproblematic. Remember, W3 is already itself a code. If we see a beautiful landscape and wish to “immortalize” it, then we can paint it, photograph it, write or even sing about it. We express something about the landscape, and almost unfailingly we will apply style to these expressions – we will use the media of choice to personalize our perception of reality. And the media of choice, via its physical attributes, will stylistically channel our efforts.
If one takes a snapshot in black and white today, it is rarely because colour is not available, but because we hope, by voluntarily removing colour, to dramatize the effect of the picture. If, when digitalizing our analogue b&w photos, we were forced to accept that they would be given back the colours we artfully removed, we would certainly be annoyed. The example is silly perhaps; we would expect the opposite if anything – like turning a nice colour photo into a grainy b&w copy by sending it as a fax, but the point is that expression gained in the codification of W3 can be lost when transferring to dW3.1
When writing, we use the layout of text on the page to express meaning, and certainly penmanship is a language of its own. To your child, who hasn’t picked up a thing off her floor for the last two weeks, you might write a note “Clean up your room!” There will no doubt be a great deal of determination in the way you lay out your message, the size of your letters, the thickness of your pen strokes. When transferring W3 into dW3, there has always been a temptation to forgo these expressions of meaning through form and style, in order to save on bandwidth or storage space, or due to primitive technology. The precedent for this was of course the typewriter – though you could at least sign your letters. Though it might seem trivial to some, to others the inability of email to facilitate a handwritten signature is a serious deficiency.
Style can achieve a formal status in the codification of contracts, formulas and laws. There are many cases where the intent of a document can only be derived through the combined interpretation of words, formatting and layout. One of the difficulties in transferring paper-based W3 into dW3, other than as digitalized photographic replicas, is the perceived loss of tacit intent expressed in text formatting and layout.
You would be forgiven for assuming that this problem is not insurmountable, but the difficulty is exacerbated when no formal laws govern or describe the methods of layout and style that have evolved through praxis. This is the case in Sweden, which still has no viable solution for giving legal status to dW3 encoded laws.
Once W3 becomes dW3, it gains entrance to the digital commonwealth. It can be stored in computers and transported over networks; but though this brings convenience, in many cases truly remarkable convenience, it is only the beginning of what we can do with dW3.
1Though some actually applaud this development: See http://www.ifla.org/documents/infopol/copyright/lanham1.htm
1.6 Natural Language
If words were nuts and bolts, people could make any bolt fit into any nut: they’d just squish the one into the other, as in some surrealistic painting, where everything goes soft. Language in human hands becomes more like a fluid despite the coarse grain of its components. – Douglas Hofstadter
Natural language, with its extensions in domain-specific vocabularies, is the cornerstone of human interaction. Due to contextual harmonization – our shared knowledge of people and things and the contextual frameworks we live in – we are able to reuse natural language for an enormous spectrum of human endeavour: the same language is used for poetry, gossip and dirty jokes as is used in the discourse of science, technology, politics and, not least, the law. As David Mellinkoff wrote in 1963, in his book The Language of the Law: ”The law is a profession of words”.
The success of language lies in its fungibility and ambiguity. All natural language is ambiguous.
When one person uses a word, he does not mean by it the same thing as another person means by it. I have often heard it said that that is a misfortune. That is a mistake. It would be absolutely fatal if people meant the same things by their words. It would make all intercourse impossible, and language the most hopeless and useless thing imaginable, because the meaning you attach to your words must depend on the nature of the objects you are acquainted with, and since different people are acquainted with different objects, they would not be able to talk to each other unless they attached quite different meanings to their words.
The preceding quote from Bertrand Russell flies in the face of how we would normally like to look at law. If words are ambiguous, then what is the worth of written law, legal documentation, or case history? Of course students of jurisprudence know that things are not this simple. The post-World War I Freirechtsschule movement in Germany was a reaction to literal and sometimes absurd adherence to the letter of the codified law. Their target was Begriffsjurisprudenz, the jurisprudence of concepts, which imagined it had constructed a seamless network of rules which answered all problems scientifically, and excluded all extraneous values.
Unfortunately, under the National Socialist regime the idea of departing from the strict language of statute and looking instead at values (which were likely to be subjectively and unpredictably appraised) like the “spirit” of the law [...] was taken to sinister extremes. [...] An amendment of the German Criminal Code in June 1935 imported a new § 2 which read as follows:1
Punishment is to be inflicted on any person who commits an act declared by the law to be punishable, or which, in the light of the basic purpose of criminal law, and according to healthy popular feeling, deserves to be punished. If no specific criminal law applies directly to such an act, it is to be punished according to whatever law, in its basic purpose, best applies to it2
In science, we use numbers, symbols and formalized logic to obviate ambiguity, and all fields of human endeavour create domain specific taxonomies to the same ends. We look for exactitude, but exactitude comes at a cost. In artistic or cultural or social intercourse we try to avoid exactitude – it is tedious, and we cultivate vagueness in its place. The interface between exactitude and vagueness is always problematic. Think of a trial where the court attempts to create ex post facto exactitude in the carryings-on of people who were only carrying on quite vaguely. Our creative use of ambiguity and vagueness is the greatest problem of all in our interaction with machines, which don’t really know what to make of it.
Here is an interesting problem. Almanacs and appointment books are great tools; in the form of dW3 they are even better when we wish to synchronize our activities with others. If you are invited to someone’s house for dinner, the time of your arrival will normally be stipulated and you will be expected to come roughly at this moment, but very rarely will a host tell you in advance that you must leave at a certain time: that would be considered almost rude. If you want to enter this dinner date in your dW3 appointment book, it will invariably ask you to enter a time when the party is over, which you don’t know and don’t usually want to think about, unless perhaps you have a babysitter. Computerized almanacs that organize our appointments always want to know when we are going home, even when we don’t want to tell them.
Machines are, at least theoretically, unambiguous. Once correctly constructed and in the absence of material failure, they are expected to act uniformly when given uniform input or instructions. The various parts of a machine communicate with each other, again in theory, unambiguously, and machines in concert – machines that interwork – are expected to continue this unambiguous chain of communication. There is no parallel in natural language. The famous game of Grapevine, in which a circle of players will gradually garble a message as it is passed between them, wouldn’t be as much fun if machines were invited to the party. Machines are built not to garble messages.
1J.M. Kelly, A Short History of Western Legal Theory (Oxford University Press, 1992)
2ibid
1.7 dW3 that works
We say that language works because we can, at least most of the time, understand each other, and mathematics works because we multiply and divide entities with confidence that the answers will be correct. If you explain something to me, then I can perhaps fit it together with something else I know and formulate an opinion, or ask someone to do something for me based on your explanation, without resorting to numerical calculation.
I might have a book on my shelf containing all sorts of interesting W3. I can take it down and read that Damascus is the capital of Syria and lies to the south. I have another book next to the first and I can take it down and read that Syria is mostly landlocked, with only a short patch of Mediterranean coastline in the north. Someone asks me if the Syrian capital has a nice harbour and I answer that it seems not to be the case from what I have read.
Now let’s say I also have on my shelf a stack of little formulas like π = 3 and the square root of X, and someone writes me a letter and asks if I know what the square root of 529 times π is. I take down the two appropriate formulas and I figure out that the answer is 69. I write that down and send it back to my friend. So far, these examples are similar. In both cases I looked up relevant data and came up with an answer. Of course, I wasn’t totally sure about Damascus, north and south can sort of bleed into each other, and I only assumed that a town without an ocean wouldn’t have a harbour.
Suppose I take all my books and formulas and transfer them into dW3 and keep them on my computer. My computer already has a function for finding the square root of a number and it also has a value for π – though it is not 3; even if the Old Testament thinks so, my computer doesn’t. Now, when asked the same questions all over again, I will have an easier time of it – I can use my search tool to find all references to Damascus quickly – and as for the math problem, well, I merely have to submit the appropriate numbers to my software calculator.
As long as problems are couched in terms that can be interworked with numbers and logic, my computer can easily and unambiguously deal with them, but problems or queries framed in natural language are another matter. Even if I have hundreds of volumes in my computer pertaining to Syria, if the information stored, or the questions asked, cannot be framed in logic and numbers, I am not going to get straight answers. I ask if Damascus has a harbour, but since it really isn’t on the ocean, the chances that some text would actually bother to say that Damascus has no harbour are small. Even if that information were available, how could I frame the question to address it and receive a correct answer? Unless W3 is formulated in something computers can unambiguously calculate with, it doesn’t count.
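Here is the asymmetry in a small Python sketch, under the assumption that my digital shelf is nothing more than a list of text passages. The numeric question is answered exactly; the harbour question is at the mercy of what the texts happen to say.

import math

# The numeric question: unambiguous, always answerable.
print(math.sqrt(529) * math.pi)   # about 72.26 - my computer's pi is not 3

# The natural-language question, put to a toy bookshelf of passages.
shelf = [
    "Damascus is the capital of Syria and lies to the south.",
    "Syria is mostly landlocked, with a short Mediterranean coastline in the north.",
]
print([p for p in shelf if "Damascus" in p and "harbour" in p])
# [] - no text bothers to say that Damascus has no harbour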
1.8 It doesn’t count
The phrase “it doesn’t count” is quite telling. When taken literally, it means that to be important, things have to be countable – in numbers. My laptop is connected to the Internet via WiFi, so that I can wander about in the house and work wherever I please. There is an indicator on my screen that tells me the quality of the signal I am transmitting with. Some clever heuristics engineer has decided that I would rather have this information in a human fashion – with words rather than numbers. My signal is either excellent, very good, good, poor or very poor, according to this indicator. I don’t really have a problem with this; I can figure out the order of connection quality intended. But what the computer can dish out – it cannot take in return. I can’t ask my computer to do something excellently or poorly, unless an arrangement has been made in advance as to the numerical value of these terms.
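Such an arrangement is easy enough to make in advance. A sketch in Python, with threshold values I have invented:

# An agreed mapping between signal strength (a number the machine
# understands) and the adjectives shown to the human. Thresholds invented.
SCALE = [(80, "excellent"), (60, "very good"), (40, "good"),
         (20, "poor"), (0, "very poor")]

def describe(signal_percent):
    # Outbound: the machine maps its number onto an agreed adjective.
    for threshold, word in SCALE:
        if signal_percent >= threshold:
            return word

def interpret(word):
    # Inbound: words only count because numbers were agreed in advance.
    for threshold, adjective in SCALE:
        if adjective == word:
            return threshold
    raise ValueError("no numerical value agreed for " + repr(word))

print(describe(73))          # very good
print(interpret("good"))     # 40
try:
    interpret("dodgy")
except ValueError as err:
    print(err)               # it doesn't count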
I have often thought how much more fun it would be if the engineers had worked out a scale of 99 adjectives, using words like remarkable, fantastic, dodgy, cool, so-so, pretty good, wonderful, miserable, horrendous, paltry, acceptable, catastrophic, and so on. It would be fun to see if I could learn them in relation to the reception I was experiencing.
Some sporting contests are very difficult to count – not downhill ski racing, where we would never know who was the winner without stopwatches accurate to 100ths of a second, but disciplines like figure skating, diving, and synchronized swimming. Here the aggregated results of a bench of judges determine the winner. If judges had only the 99 adjectives I fantasized about above, even if all were words familiar to them from daily usage, with no mapping allowed to any sort of numbering or ordering system, there would be no way of determining a winner. The judges could argue just as much about the relative values of the adjectives as they could about the perceived quality of the competitors’ performances. Practitioners of law will recognize the imbroglio.
1.9 Tiger Shot a Birdie
There are several solutions to dealing with ambiguous language. One is to gather about you all the contextual help you can find. If you know the context it is formulated in, then the title of this section is no longer ambiguous for you – if it ever was. It is a statement about golf. I could have written “Tiger Woods shot a birdie on the 17th hole”, but then I would have made the riddle too easy for you. Yet for a dumb machine, it would make little difference if I wrote Tiger, or Tiger Woods, or Tiger Woods the Nike guy, or anything else. “What do I know?”, the computer would say.
Imagine that a machine is used to keep score in a golf tournament. The correct procedure is to feed it the results of every hole for every player. The input routine might be something like this:
Input number of hole: ??
Input contestant’s number: ??
Input score: ??
An operator has been giving the machine the holes, players and scores as stipulated – in numbers, but on the 17th hole she forgets herself and writes “Tiger shot a birdie on the Road”. What does the machine think now? Well, for starters, most machines are not going to let you treat them like that; especially dumb machines are going to demand that if you want to tell them anything, it has to be said their way – not yours. Such a machine would tell her, “I don’t know what you are talking about – just numbers in the correct order, thank you!”
But if the machine was semi-smart, it might accept free text or natural language input, and have a go at figuring out what the operator meant. To do this it would need to have a rough idea of sentence structure, the rules of grammar and a dictionary. It would also need a taxonomy of the terms most often used within a particular domain, in this case Golf.
Some people are surprised to hear that machine translation or transcription works best for discourse which, as outsiders, we normally consider difficult. We might find the language of doctors obscure and hard to understand, and wonder why a machine would have an easier time with a lot of arcane terms than with light everyday conversation. The answer is of course that machines prefer obscure and arcane words because they are less likely to have ambivalent meanings. They are the code words of a domain – their use is constrained within a limited discourse. When the machine knows that now we are talking medicine, it knows that within medicine these words have crisp definitions.
1.10 Local and Global Taxonomies
The meaning of “birdie”, “Tiger”, and “the Road” could all be mapped within the machine of our example to the numbers it was originally asking for. The use of “shot” in golf is ambivalent, but in our example the machine is looking for the total score of a particular hole and not whether Tiger shot into the woods or into a bunker. So the value of “shot” is the value of score.
An important detail in the example is the need for certain local and temporal facts that are not part of the generalized corpus of golfing knowledge. Knowing the numerical value of a birdie is dependent upon the par value of the actual hole it is scored on. A birdie is one stroke under par. Tiger’s competition number is not a permanent part of his identity, but rather a designator assigned to him for this particular tournament. “The Road” is the nickname of the 17th hole at St Andrews, the world’s oldest golf course, and would probably have another numerical equivalent if it was used to designate holes at other courses.
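A sketch in Python of how a semi-smart scorer might resolve the operator's sentence, assuming small local tables for this particular tournament. The contestant number is invented, and the par value is assumed, for the example.

# Local, event-specific taxonomies for this tournament. Values invented,
# except that "the Road" is the 17th at St Andrews and a birdie is one
# stroke under par (par 4 assumed here).
HOLE_NICKNAMES = {"the road": 17}
HOLE_PARS = {17: 4}
CONTESTANT_NUMBERS = {"tiger": 42}   # tournament-assigned, not permanent
RELATIVE_SCORES = {"eagle": -2, "birdie": -1, "par": 0, "bogey": 1}

def parse_score(sentence):
    words = sentence.lower()
    hole = next(n for nick, n in HOLE_NICKNAMES.items() if nick in words)
    player = next(n for name, n in CONTESTANT_NUMBERS.items() if name in words)
    relative = next(d for term, d in RELATIVE_SCORES.items() if term in words)
    return hole, player, HOLE_PARS[hole] + relative

print(parse_score("Tiger shot a birdie on the Road"))   # (17, 42, 3)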
In the interest of commonalization and completeness, the taxonomy of golf, both that which has been established by tradition and that which is topical, localized and event specific, could be shared by the human race, just as the human race shares the dictionaries and encyclopaedias of its many written languages.
Though just who should assume responsibility for such a task is debatable, many IT thinkers believe this is the logical extension of the Internet, the next step in the IT revolution. Provided the costs of such an enterprise could be distributed in a feasible manner, the savings would be significant. Though golf might not be their first concern, governments could build infrastructures of taxonomies for utilitarian purposes in order to create efficiency in computer aided transactions.1
Ambiguity is not eliminated in our use of natural language unless we disambiguate words themselves. “Hole”, for example, has many meanings even in golf. A hole is not just a section of the course; holes are everywhere – the actual cup in the centre of a green, wherever an animal decides to dig, in our pockets, etc. So even given the eventual existence of a universal taxonomy of golf, there must be a way of discerning what sort of hole is meant.
One method currently in vogue is to use double-speak. The general idea is that we would use natural language as it is customary for us to do, and then on top of that we would add an extra layer of reference pointers for words and phrases to clear up any doubts about their meaning. The reference pointers are addresses to a source of authority. If, for example, we were to write “a kilo of gold”, then on top of that we could also write the address of an authority, perhaps somewhere in Paris, where the meaning of “kilo” and “gold” could be resolved. For golf the double-speak score notation could look like this:
Tiger (as defined at address A) shot (as defined at address B) a birdie (as defined at address C) on the Road (as defined at address D).
This technique, called mark-up, has probably been around since the Sumerians discovered that a written language, as great as it was for counting crops, still lacked precision, but it passed a milestone in the 1970s with the invention of standardized mark-up languages such as SGML, which will be discussed in detail in other parts of this book. Unfortunately double-speak in natural language and SGML is tremendously burdensome and resource demanding, and consequently only a few large corporations and military establishments have adopted the language for use in their daily activities.
Of course any formalized interaction in the absence of hard-wiring uses common points of reference: this is, for example, what standards are all about, and the modern successor of SGML, called XML, does so in a clever way. It utilizes the already proven addressing and hyper-linking technologies of the Internet as its unique addresses. But what is at the other end of such an address? If a computer agent busily parsing information came upon the predicate phrase “is the owner of” coupled to the address of some authority for the canonical definition of that phrase – what would it find there, if not a definition written – in natural language? What is a poor machine looking for logic and numbers to think of that?
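To make the double-speak concrete, here is roughly what the golf sentence might look like when marked up. Every address and element name is invented for the example; the snippet is held in a Python string so the standard parser can walk it.

import xml.etree.ElementTree as ET

# A hypothetical mark-up of the score sentence. Every address is
# invented; the point is only that each term carries a pointer to an
# authority where its meaning can be resolved.
score = """
<statement>
  <subject ref="http://example.org/players/tiger-woods">Tiger</subject>
  <predicate ref="http://example.org/golf/terms#shot">shot</predicate>
  <object ref="http://example.org/golf/terms#birdie">a birdie</object>
  <place ref="http://example.org/courses/st-andrews/17">the Road</place>
</statement>
"""

for element in ET.fromstring(score):
    print(element.text, "->", element.attrib["ref"])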
1This is actually what happens when trading systems such as EDI, Electronic Data Interchange and EDIFACT are created. Though EDI is a commercial initiative, EDIFACT is sponsored by the United Nations. See http://www.itworld.com/Man/3830/CWD010703EDIXML/
1.11 Contextual Frameworks
Orality, the act of speaking, qualifies as a world 3 construct, even if its W1 vehicle is only ephemeral air pressure, carrying it on short hops in and out of the subjective realm. With the invention of writing, which Walter Ong calls "The technologizing of the word", the human tribe entered a new world of autonomous discourse.
A deeper understanding of pristine or primary orality enables us better to understand the new world of writing, what it truly is, and what functionally literate human beings really are: beings whose thought processes do not grow out of simply natural powers but out of these powers as structured, directly or indirectly, by the technology of writing. Without writing, the literate mind would not and could not think as it does... More than any other single invention, writing has transformed human consciousness. – Walter J. Ong, Orality and Literacy1
An initial and revolutionary effect of literacy was the temporal-spatial decoupling of the written word from its immediate surroundings, and this applies, though non-uniformly, to all W3 constructs. Yet once W3 objects are removed in time and space from their W2 origins, the faithful reproduction of original intention becomes dependent upon the degree to which the contextual framework (the environment and circumstances) of their origin is available to interpreters.
It is easy to understand that the records of some lost culture stored on microfilm would be unreadable if we had destroyed all our microfilm readers, yet that problem is only technical and could perhaps be solved with some alternative apparatus. But if we did not understand the language used, or if we did not know who or what the people and things named were, or what they did; if we did not grasp the motives, the methods of reasoning, the norms and conventions underlying decisions made, then those microfilms would have little meaning for us, even if they had explicit meaning for their creators.
There is a legal term, "the four corners of an instrument", which audaciously implies that there are documents where all there is to know about their contents lies within the four corners of the paper they are printed on, without need of reference to any extrinsic factors. This contention is myopic: it simply neglects to accept that shared context is an absolute necessity for understanding anything at all. See Borges's spoof of such self-sufficiency below.
Economy in all communication lies in shared context. The famous example of brevity in correspondence between Victor Hugo and his publisher
"?", wrote Hugo.
"!" answered his Publisher
exemplifies this. The writer was enquiring about the reception of his book and his publisher was answering that it was doing marvellously. In order for these two marks of punctuation to express any meaning at all, both correspondents had to share a considerable amount of knowledge and contextual harmonization.
An example at the opposite extreme is the Argentine author Borges's tale On Exactitude in Science.
In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography2.
The above examples help to illustrate "The Mathematical Theory of Communication" as formulated by Claude Shannon in one of the most seminal papers in the history of modern technology. In layman's terms it goes roughly like this.
The amount of energy and bandwidth needed to send signals between a transmitter and a receiver is determined by three factors (a loose formalization is sketched after the list):
a. the pre-existing knowledge shared at both ends of the communications channel;
b. the ratio of what-is-said to that which could-be-said;
c. the interference (or noise) inherent in the channel.
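In Shannon's own terms, roughly sketched (this is my gloss, not a full statement of his theory): a message x drawn with probability p(x) from the set of possible messages carries

I(x) = -\log_2 p(x) \quad \text{bits},

and a noisy channel of bandwidth B and signal-to-noise ratio S/N can carry at most (the Shannon-Hartley law)

C = B \log_2\left(1 + \frac{S}{N}\right) \quad \text{bits per second}.

Shared knowledge (a) raises p(x) for the messages actually exchanged, so each one costs fewer bits to send; noise (c) eats into the capacity C.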
In terms of a, Victor Hugo and his publisher were being extremely efficient because they shared so much pre-existing knowledge. They were contextually harmonized and could express a great deal with minimal effort because of that.
In terms of b, Victor Hugo and his publisher were being inefficient because their communications channel, written language, allows so much to be said, and they were saying so little. Of all the letters, words and punctuation marks at their disposal, they were using only two of the latter.
I realize this might seem counter-intuitive, but think of it this way: if these two men had both spent a great deal of time learning their language and internalizing all sorts of knowledge, and all they did was write a lot of letters to each other using just these two punctuation marks, then they would have been making wasteful use not only of their learning, but also of the communications channel built to handle a much richer set of signals.
Writing was invented in order to say anything you could speak, nothing more or less. A written sentence is one in a set of all possible sentences. A communications channel is designed to transmit one particular message from a set of all possible messages. The efficiency of any system is the ratio of what is done, used or said to that which could be done, used or said.
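Shannon gave this ratio a name (again a loose sketch of his definitions): if H is the entropy of what a source actually says, and H_max is the greatest entropy the same set of symbols could carry, then

\text{relative entropy} = \frac{H}{H_{max}}, \qquad \text{redundancy} = 1 - \frac{H}{H_{max}}.

Hugo and his publisher were using the channel of written French at a minuscule relative entropy, which is factor b stated formally.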
In terms of a, Borges's cartographers were being extremely inefficient because their solution precluded all use of contextual harmonization. Without the use of shared context they were forced to build a hopelessly over-dimensioned channel - the 1:1 map.
In terms of b, the cartographers acted efficiently, because they were making maximum utilization of their channel.
For Shannon, channels were wires and airwaves, and signals carried coded text, video and audio, but in a wider perspective channels of communication are myriad: the flow of energy in machines, Adam Smith's invisible hand, the facial expressions of lovers, the firing of neurons in our brains, the evolution of species by genetic code, and so on. And in every instance Shannon's laws apply. We will return to this in the discussion of trust in the second part of this essay.
1Walter J. Ong, Orality and Literacy: The Technologizing of the Word (1982).
2Translated by Andrew Hurley - copyright Penguin 1999
W3 technology
Early artefacts, such as those produced by chisel on stone, charcoal on wall, hand on clay or pen on paper, were, of course, instrumental in the distribution of W3, and qua technologies, their rationale was clear: an increase in the economy, permanence, portability, duplicability and veracity of W3. All of these objectives, weighted according to alternating needs, served to drive the evolution of media technology on the heels, or in many cases at the forefront, of scientific discovery. We will presently take up the story of that progress at one historical moment: the first large-scale introduction of computing powered by electricity, 20 years after Charles Babbage’s marvellous vision of an “Analytic Engine”, a steam-powered contraption beyond the technical feasibilities of its time, which was fated to be buried together with its inventor.
The 19th century saw significant advancement in media technology. Through the harnessing of electricity, a greater understanding of chemistry, and increasingly sophisticated mechanical engineering, lithography (1798), the telegraph, the camera, the typewriter, rotary printing machines, the Wharfedale cylinder press, mechanised paper manufacturing, wireless radio, the telephone, the gramophone, the motion picture camera, and Linotype all saw the light of day.
This was the great analogue age, and it continued until the 1950s, by which time practically all major discovery and invention in analogue technology had been made. From then on, digitalization and medium commonalization have ruled the day, as all media channels merge into a common stream. And though digitalization was the key to matching up binary-coded W3 production with logical machines, it is the ensuing commonalization of media channels which vies to be the most significant event in human technology.
Herman’s punch cards
Censi (or censuses if you will) are a big deal. They help to determine the tax base and the make-up of political constituencies, and they underlie the decisions of government. At the end of the 19th century the US Census Bureau, despite being the single largest employer in the land at the time1, was having a rough time keeping up with the rapidly expanding population, swelled by millions of immigrants. By federal law the census was to be taken every ten years, and since the 1880 census had taken eight years to complete, it was feared that the 1890 census would not be finished before the 1900 count was due to begin.
Herman Hollerith put together one of three entries in a Census Bureau contest, staged in order to find a way to speed up the process of tabulating census records. He had knowledge of the Jacquard loom, invented almost a hundred years earlier, and during a stint of working for the railroads he had observed the use of what was called a "punch photograph": conductors, in order to discourage free riders, would punch notches in the edges of a passenger’s ticket denoting their height, colour of eyes and hair, etc. Hollerith amalgamated these technologies into a set of machines that were able to tabulate the collected information on 62 million individuals in a matter of months. The punch cards designed by Hollerith, which were purposely the size of the US dollar bill, were still in use up into the 1970s, and the company which he founded eventually became IBM.
Herman Hollerith rationalized the US census with the help of tabulating machines and a warehouse full of punch cards. But there wasn’t a lot of room on the punch cards for the information desired; room for W3 has traditionally come at a premium. Hollerith’s machines calculated statistics that were gathered by an army of census workers called enumerators, part-time workers who roamed the country gathering facts about the populace. The enumerator carried scorecards, known as schedules, on which to notate relevant figures.
Any enumerator is going to see and experience all sorts of things in her work, which she might reflect upon, but which will have no appropriate notch on her scorecard, and for that matter, be of no interest to her employer. In the end, census figures are about the generalities of a populace – not an individual’s personal details. But we may note that any attempt to record even a fraction of the W2 observations of one single enumerator, if so desired, would easily outweigh, in terms of data storage, the relevant statistics of the entire populace. So the filtering of excess W3 begins at the source of its collection.
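As a toy illustration of that filtering at the source (the fields and codes below are invented; Hollerith's actual card layout was different): everything the enumerator observes is squeezed into a handful of coded columns, and whatever has no column is simply lost.

# A hypothetical, drastically simplified schedule: three coded columns.
FIELDS = {
    "sex":            {"M": "1", "F": "2"},
    "marital_status": {"single": "1", "married": "2", "widowed": "3"},
    "employed":       {True: "1", False: "2"},
}

def punch(observation):
    # Reduce a rich W2 observation to the few codes the card can hold;
    # everything without a column is silently dropped.
    return "".join(codes[observation[field]]
                   for field, codes in FIELDS.items())

observation = {
    "sex": "F",
    "marital_status": "married",
    "employed": True,
    # The enumerator also noticed a leaking roof and a friendly dog,
    # but the schedule has no notch for either.
}
print(punch(observation))  # -> 221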
1Today the Census Bureau has nearly 12,000 employees. The workforce expands dramatically when the census is taken every 10 years. About 860,000 temporary workers were hired for Census 2000.
The design of a data system in terms of completeness
What can we feasibly know? What do we want to know of “what we can feasibly know”? What can we feasibly do with “what we want to know of what we can feasibly know”? Well, for starters, we don’t want to know everything. We don’t want to deal with 1:1 maps of reality. But there is no question that our willingness to accumulate knowledge is influenced by our potential for doing so, including the potential for actually making use of that which we accumulate. In Hollerith’s own words, arguing for increasing the statistical base and the computations carried out:
To know simply the number of single, married, widowed, and divorced persons among our people would be of great value, still it would be of very much greater value to have the same information in combination with age, with sex, with race, with nativity, with occupation, or with various sub-combinations of these data. If the data regarding the relationship of each person to the head of the family were properly compiled, in combination with various other data, a vast amount of valuable information would be obtained. So again, if the number of months unemployed were properly enumerated and compiled with reference to age, to occupation, etc., much information might be obtained of great value to the student of the economic problems affecting our wage-earners.
And in the words of his boss, General Francis A. Walker, Superintendent of the Tenth Census, in a letter to Hollerith:
In the census of a country so populous as the United States the work of tabulation might be carried on almost literally without limit, and yet not cease to obtain new facts and combinations of facts of political, social, and economic significance. With such a field before the statistician, it is purely a question of time and money where he shall stop.
The savings in “time and money” delivered by Hollerith’s tabulator were astounding. Rather than taking ten years to add up the results of the census, the automated tabulation took only weeks. It was only natural for Hollerith and Walker to wish to reinvest these savings in more elaborate statistical models. From a modern perspective, with the realization that we could carry out the entire 1990 census tabulation on our home PCs while taking a coffee break, the 1890 constraints of computation seem very distant, but the collection of W3 is still limited by time and money, even if there are exceptions.
The Ministry for State Security – better known as the Stasi – was the "shield and sword" of East Germany’s state party, the Socialist Unity Party of Germany (SED). [...] At the time just before the Berlin Wall fell in 1989, the feared secret police had 91,000 full-time employees and around 175,000 unofficial informers whose job it was to spy on people in the German Democratic Republic (GDR). This they did to an extent that is barely imaginable and that took on almost grotesque proportions.
"We must know everything," was the mantra that the Minister for State Security, Erich Mielke, never tired of repeating to his employees, who numbered approximately 5.5 people for every 1,000 citizens. And they took him seriously. Over four decades they gathered information about their victims, writing down even the smallest details and accumulating 184,000 metres of written material in the process. Not to mention 986,000 photographic documents, 89,000 films, videos and sound recordings and 17,870 electronic data storage devices – and this is just the material at the Berlin headquarters1.
Apparently a state harbouring the paranoiac suspicion that any citizen could be a covert agent of subversion would want to keep tabs on, well, everyone, damn the cost. The Stasi believed they could feasibly eavesdrop on the GDR’s 16 million inhabitants and seem to have had the resources to do so. Though just how they actually accessed and drew conclusions from those 184,000 metres of written material, plus pictures and sound recordings, etc., is something I know little about, I would assume there were problems. After all, they didn’t have Google.
Most directories are not as complete as the Stasi’s, because such completeness is neither practical nor affordable: decisions must be made about what to include and what to exclude. The design of the Hollerith punch cards was made at a time when the limits of physical space and technological feasibility still played a decisive role in determining the quality and quantity of what was stored on them, and until quite recently this has always been the case. W3 has always been at the mercy of the medium built to hold it and the tabulating technology meant to calculate it.
1Goethe Institute web page – http://www.goethe.de/kug/ges/ztg/thm/en162253.htm
Who you gonna call?
A famous American newspaper has as its slogan, “All the news that’s fit to print”. Without examining exactly what that implies in the way of editorial criteria, it seems that a more appropriate motto would be “All the news that fits in print”, since the paper-based distribution format cannot possibly contain more than a fraction of all the news. As nice as paper is to read from, and as convenient as it is to have it delivered to your door in sync with your morning coffee, it is not the most suitable medium for completeness.
Consider the paper-based telephone catalogue, the size and scope of which is determined by the material it is printed on, the area of distribution intended, the relevance of information to users, various utilitarian considerations, and, not trivially, the business model of the publisher. Further, some (private) subscribers will not want to find themselves listed and other (commercial) subscribers might prefer that their competitors were not.
The digitalization of catalogue media transforms these constraints from being factors determining inclusion and exclusion to factors determining effective selection through filtering. There is literally no technical hindrance to an Internet-based catalogue of all the world’s telephone subscribers with their telephone numbers, street addresses, email addresses, professed profession or whatever. And there is, technically, no need for the existence of more than one such catalogue service.
In such a catalogue, proximity relevance is maintained by proximity-relevant search criteria like: find me all the Pizzerias in Durban, or all the Marias on my block. Commercial exposure can still be made possible by paid advertising appearing in conjunction with search results.
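A minimal sketch of selection-through-filtering over one complete catalogue (the records and field names are made up for illustration):

# One hypothetical world-wide catalogue; nothing is excluded up front,
# and all selection happens at query time.
CATALOGUE = [
    {"name": "Maria Santos",      "type": "private",  "city": "Durban"},
    {"name": "Luigi's Pizzeria",  "type": "pizzeria", "city": "Durban"},
    {"name": "Pizza Napoli",      "type": "pizzeria", "city": "Stockholm"},
]

def search(catalogue, **criteria):
    # Filter the complete catalogue down to what is locally relevant.
    return [entry for entry in catalogue
            if all(entry.get(key) == value
                   for key, value in criteria.items())]

# "Find me all the Pizzerias in Durban"
for hit in search(CATALOGUE, type="pizzeria", city="Durban"):
    print(hit["name"])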
Or what if you wished to buy a car? Perhaps you are thinking – well, maybe a used car, that is all I can afford at the moment, and you set out to find some likely prospects. You look in the newspapers and on the Internet and maybe shop around at car lots. But the more extensively you search, the less effective that search becomes, because an ever smaller portion of the potential cars that would suit your taste and budget will turn up as you access new sources, and you will begin to see duplicate listings as well. If all the cars for sale were listed at one source, then your task would be different. Rather than having to worry about finding enough potential cars to make an optimal choice from, you would have to worry about having too many good alternatives to bother to evaluate them all.
Why is this not the case, then? Why are there so few complete directories? Is it because legacy business models built on outdated technologies are so firmly entrenched in the market? Yes, this is partly true: when the accumulation of data is costly, it often leads to a few specialist firms slicing up the pie between themselves. Path dependencies play their part, and wannabe competitors find that the thresholds to market entry are high. But there is more to it than that.
If there are things to buy or know about, then we can assume that somebody owns them or somebody knows about them. These owners and knowers normally have some way of representing their holdings. They create W3 or dW3 representations: descriptions, abstracts, catalogues, menus and so forth. Often these representations are created for use within an interpersonal or corporate contextual framework. Without knowledge of the framework and the taxonomies used, representations can be ambiguous or meaningless to outsiders, even in dW3 formats made available over the Internet.
Directory middlepeople, or infomediaries as they are sometimes called, map the taxonomies and contextual frameworks of holders and seekers. Sally wants a Culowop; Kim has an Undel. Depak the infomediary, knowing that a Culowop actually is the same thing as an Undel, helps Sally by mapping between the two terms. Sally, when looking for Culowops, is shown Kim’s Undel in Depak’s directory.
Or Depak creates his own term for Culowops and Undels, so that both Kim and Sally must learn to map their own taxonomies to Depak’s in order to find each other. If this is the case, Depak will effectively isolate Kim and Sally from any eventual harmonisation of their taxonomies, making himself indispensable to both.
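Reduced to dW3 terms, Depak's service is little more than a mapping table between taxonomies. A toy sketch, using the essay's invented terms plus "gizmo" as a stand-in for Depak's own coinage:

# Depak's private taxonomy: both outside terms map to his own label.
DEPAK_TERM = {"culowop": "gizmo", "undel": "gizmo"}

LISTINGS = {"gizmo": ["Kim's Undel"]}

def find(seeker_term):
    # Map the seeker's term into Depak's taxonomy, then look it up.
    internal = DEPAK_TERM.get(seeker_term.lower())
    return LISTINGS.get(internal, [])

print(find("Culowop"))  # Sally's search surfaces Kim's Undel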
At the same time, Depak might offer other services to Kim and Sally. He might provide some degree of quality assurance to both buyers and sellers. He might personalize interactions, making recommendations based on his knowledge of a particular field. He might offer an appealing solution to the complexity of flimsily structured markets and ambivalent information flows. But above all he offers entrance into a network, albeit a very primitive one, limited by the technology available.
Here is a story I heard at a conference: an attractive position at Charlotte and Bob’s firm had been advertised, and 200 applications came in through the post. Bob took 180 of them off the top of the stack and threw them in the trash. Charlotte was shocked: “What in the hell are you doing?” Bob pointed to the trash and answered, “We don’t want to hire anybody who is that unlucky, do we?”
It’s a funny story, but it is possible that Bob and Charlotte just didn’t have the resources to thoroughly check out all those 200 CVs anyway, and that Bob’s action was not that irrational. After all, a great deal of selective choice is made quite arbitrarily. What Bob and Charlotte needed was more information working on its own. They needed a better filtering system to avoid making arbitrary choices.