Physical digits: how has the history of objectivity led to phygitality?



Objectivity is independent of us. It exists by itself. It is not we who cause it to exist. It is not we who sustain it in its existence. But how accurately does such an assumption reflect its status?

If an object is our product, is it also subjective? And if not, what is there in it that prevents it from dissolving into our will?

And these questions grow even more complicated with the advent of a reality that, though called virtual, is no less real. To answer them, let us trace the path from the natural to the digital object, and at the same time examine what made the latter possible and what could come after it.

I. Objectivity on its own.

As Willard Quine has argued (Quine 1951), the condition of possibility for any object is its ability to undergo the actions described by verbs. For this it must be endowed with a minimum of stability, namely, it must be distinguishable from the act of thinking.

Carrying out this distinction is the main problem of cognition. What matters is how things are for us, not how they are outside of us. What the world is beyond our perceptions does not matter; what matters is our inductions about it. If the latter are wrong, we will not survive.

To put it another way: every physical object is a logical construction of sense data. All of them are artificial mediators between the world and our consciousness. Their adequacy is confirmed by practice: if the consequences of such premises are consistent, they are true; if not, not.

So, the more efficient our treatment of things, the better our position. But if objectivity is not homogeneous, then neither are the ways of treating it. What, then, exactly are we addressing?

II. Natural object.

By and large, science, until the second half of the twentieth century, attempted to grasp and dissect natural objects. Their initial obscurity had to be clarified, their inaccessibility to our thought converted into accessibility.

All such things were originally on their own, and were so when we were not. They derive themselves from themselves, not from something else. We meet them already finished. They had a determinacy of their own, so to define them is first of all to eliminate ourselves from them. In other words, our concept of them must be their own and reflect the way they were before us.

That is, a natural object contains in itself that principle which, on the one hand, identifies it with itself and, on the other hand, distinguishes it from everything else. Aristotle called this latter essence - τό τί ήν είναι, quod quid erat esse - what it meant for this thing to be itself (Caujolle-Zaslawsky 1981). Essence renders the thing indivisible and inseparable with respect to itself, or more precisely, equal to itself, αυτό έστιν αυτό, idem (Aristotle Metaph. VII 17, 1041a 18f).

Such a being is independent of us because the cause of its unity is not us, but itself. It has an inner form by virtue of which it is one in relation to its extrinsic characteristics. It is the “subject”, υποκείμενον, subjectum, which gathers together its predicates. Therefore, the thought of it can be unified - the intellect can bind into a coherent structure the manifold sense data about it, but so that the origin of this structure is not the intellect but the thing itself (Aubenque 1983 [1962], 456-466).

Yet, what if the object is our production? For Aristotle, any manufactured thing is equally part of an interconnected whole, or cosmos. It reveals the possibilities inherent in nature without conveying anything new to it. For nothing can surpass its source: everything arises from being, ἐξ ὄντος γίγνεται πάντα (Metaph. 1069b 19).

III. Technical object.

It took a long time to debunk these views of the idle ancient Greek slave owners. Christianity, with its creation from nothing, deprived nature of self-sufficiency. The Renaissance, with its mechanisation, converted natural necessity into freedom. All this at once theoretically and practically interrupted cosmic continuity (Blumenberg 1974).

Nevertheless, what an artificial thing is remained unconceptualised until Gilbert Simondon.

He was the first to investigate the object through the relations (a) between its constituent parts and (b) between them and their milieu. On this principle, not only natural and technical, but also animate and inanimate entities are shown to be analogous (Simondon MEOT 2012 [1958], 56-58).

Each of them is a process of individuation (Simondon ILFI 2013 [2005], 190), whose by-product is the individual in its current formation (ILFI 31-32, 311-312). The latter appears at the intersection of different orders of reality and is a way of holding them together. For instance, a plant relates the sunlight necessary for its photosynthesis to the mineral salts necessary for its nutrition (ILFI 34-35).

The object is gradually organised through the recurrence of causality (la récurrence de causalité) between it and its environment (ILFI 162; MEOT 70). As a result, what conditions it starts to be conditioned by it in return. The same plant takes nutrients from the soil but gives organic material back to it and regulates its chemical composition (Waring et al 2015). And with each such iteration, the individual and its associated milieu become increasingly specified with regard to each other.

With this approach Simondon was able to explicate technical phenomena as well. They are concretised in a similar way (MEOT 22-23). 

To illustrate, consider the evolution from the diode to the triode (MEOT 49-55).

In a diode, a negatively charged cathode inside a vacuum tube is heated and releases electrons by thermionic emission. The anode is not heated and can either attract them or not. When its voltage is positive, the electrons move towards it, in accordance with Coulomb’s law. When it is negative, they do not.

In a triode, a control grid is placed between the anode and the cathode. The more negative its charge, the more electrons it repels and the smaller the current. The less negative its charge, the more electrons it lets through and the larger the current. Thus, with a signal of a few volts, a much larger voltage can be controlled.
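
A minimal numerical sketch of this control relation, using the textbook three-halves-power approximation for a triode; the amplification factor mu and perveance k below are illustrative values, not taken from the text.

```python
# Illustrative only: a few volts on the grid steering milliamps of anode current,
# via the textbook three-halves-power approximation for a triode.

def anode_current(v_grid, v_anode, mu=20.0, k=2e-3):
    """Approximate anode current in amperes; mu and k are made-up example values."""
    drive = v_grid + v_anode / mu                    # effective controlling voltage
    return k * drive ** 1.5 if drive > 0 else 0.0    # cut off when the grid dominates

for v_g in (-6.0, -4.0, -2.0, 0.0):                  # small swings on the grid...
    print(v_g, round(anode_current(v_g, v_anode=200.0), 4))   # ...large swings in current
```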

In the course of the described concretisation, some functionality of the old elements is transposed to the new one, which extends the scope of their application - the strength of the electron flux can now be adjusted. This enhances the coherence of the technical object (MEOT 361-362).

As can be seen, there is no essence anywhere here: nothing becomes what it already was from eternity. The individual is not just in relation, but is itself a relation (ILFI 63). No form is superimposed externally on matter; on the contrary, the very dualism between them is abolished (Barthélémy 2008, 13-19).

But although technical beings subsist without substance, they cannot be without a complementary system. A lamp will never work without a lampholder, wiring, and power plant. 

To the extent we use technics, we are connected to the network supporting them. But while the former can still be built alone and from scratch, the latter never can (MEOT 302).

So in the industrial age comes a new, deeper type of alienation - no longer socio-economic, but psycho-somatic (MEOT 165-167). A machine is never a part of our body whoever owns it. It is part of a much larger entity that transcends whoever we are.

IV. Digital object.

In summary, a technical object has certain attributes and behaviours, and on their basis it can interact with its environment. Although it does not perform its operation outside the network, its structure is nonetheless preserved: a lamp does not have to shine.

If digitalisation is a superstructure over a technical system to optimise its performance, then the digital object must adopt the characteristics of the technical one. Specifically, it must combine structure and operation (Kurtov 2016, 257).

How is this possible if it does not exist outside the computer? How can a machine provide a new objectivity?

As shown above, every individual has (a) a mechanism to keep itself internally stable and (b) an associated milieu that conditions it. Consequently, a digital object appears when it acquires both.

Let us trace how it arrived.

1. Computing (more than) thing.

The computer is the first technical object whose operation is undefined and is defined anew each time (MEOT 13). Its functions are not known in advance. They can be anything, or rather can be encoded in any way (Grosman 2016, 248-250).

Its origin is the difference between 0 and 1. This is the binary code presented by Gottfried Wilhelm Leibniz in 1703 to reconstruct the world from the simplest propositions (Leibniz 1703). For him it was a reflection of divine creation out of nothing.

If anything can be a series of 0s and 1s, then anything can be replicated in a computer. Hence, like natural language, the machine’s language ends where our cognition ends. One thing creates within itself a new, digital type of things that surpasses the analogue.
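
A minimal sketch of this reduction; the sample string and the UTF-8 convention are merely illustrative choices.

```python
# Illustrative only: any symbolic content reduces to a series of 0s and 1s
# and can be recovered from that series without loss.

text = "cosmos"
bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))    # symbols -> bit string
print(bits)                                                        # 011000110110111101110011...

restored = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")
print(restored == text)                                            # True: the same content, recovered
```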

Such operational openness gives priority to operation over structure (Kurtov 2016, 256-257). Here, a correctly posed task is its solution. For a computing machine, what matters is not “what” to perform but “how”: it passes from one set of data to another by itself, while the programmer elaborates the logic of this transition.

Its functioning unfolds at several levels of abstraction at once: files for the user, binary codes for the operating system, electrical signals for the circuit boards (Colburn 2000, 186-189). And in each one, the same process takes place. Therein nothing is closed for the sake of inaccessibility, only hidden for the sake of better accessibility.

Therefore, contrary to mathematics and engineering, in programming one structure does not absorb another (Abelson, Sussman, and Sussman 1996, 438-441). New code segments do not cancel out previous ones. Instead, they are all simultaneous with each other, existing at once synchronically and diachronically. And it is possible to move between them and bring them into new relations without loss of content. Anything can be reconnected to anything.

Moreover, this is true both for a single programme and the whole programming history. Omitting differences in apparatuses, all computer languages are the ways of dealing with the same machine code (Breton 1990, 178-182). For machines, nothing changes; for humans, what changes is the way of communicating with them.

But how did the programme morph into a relatively sovereign entity?

It took several stages of evolution to arrive at this point. They encompass (a) the means of communication, or information transfer (input and output tools) and (b) the means of expressing the content being transferred (programming languages).

2. From number to symbol.

Everything commenced with the batch interface, which combined binary operation codes and punched cards. The latter, although inducing a certain activity in the processor, existed outside of it. Because of this, the computer was not a carrier of its own states. It was a technical object without predetermined functioning. Programming acted on it to make it work in a certain way.

Binary instructions were used for commands; there were no letter designations. 0 and 1 were the only available characters, the simplest symbolisation of what was happening at the level of the boards.

All programme instructions referred to memory locations (Wilkes 1957, 1-7). If one inserted an instruction anywhere other than at the end, all the address references following it would break. Writing and editing such programmes was extremely rigid (Wilkes 1957, 218-220). This was the case with the Harvard Mark I, the Electronic Delay Storage Automatic Calculator (EDSAC), the Electronic Discrete Variable Automatic Computer (EDVAC), and other early devices.
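
A toy sketch of this fragility; the instruction names and addresses are invented for illustration and do not reproduce any particular machine.

```python
# Illustrative only: with absolute addressing, a jump names a fixed location,
# so inserting an instruction in the middle shifts everything after it and
# silently redirects the jump to the wrong target.

program = [
    ("LOAD",  3),   # 0
    ("JUMP",  3),   # 1: jump to the instruction at address 3
    ("ADD",   4),   # 2
    ("STORE", 5),   # 3: the intended jump target
]

program.insert(2, ("SUB", 4))          # patch the programme in the middle...

print(program[1], "->", program[3])    # ('JUMP', 3) -> ('ADD', 4): STORE has moved to address 4
```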

The situation was amended with the MIT Whirlwind I. On it, Douglas Taylor Ross managed to connect the programming and computing parts (Ross 1988, 57, 83-87). To do so, in 1956 he used the following mechanism: commands were typed on a keyboard, the printer recorded the input, which was then transferred to a punched tape and entered into the computer (Ross 1956). Typing was unidirectional: individual characters could not be deleted; the whole line had to be cancelled instead.

An early automatic coding system was introduced at the same place in 1954: “A Program for Translation of Mathematical Equations for Whirlwind I” (Laning, Zierler 1954). The user could input algebraic formulas, and it would translate them into binary form, which Whirlwind would execute. But this language was not general purpose (Ceruzzi 2003, 86).

The assemblers that ensued were in one-to-one correspondence with machine code and were specific to each processor architecture. Though they afforded only a minimum of abstraction, this was an important shift from numbers to letters: the assembler could symbolise memory locations in order to track them as the programme ran (Ceruzzi 2003, 87).
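
A toy sketch of what that symbolisation buys, continuing the previous example; the crude label-resolving “assembler” below is purely illustrative.

```python
# Illustrative only: jumps now name labels, and numeric addresses are computed
# only at translation time, so inserting an instruction no longer breaks them.

source = [
    (None,   "LOAD",  "X"),
    (None,   "JUMP",  "DONE"),
    (None,   "ADD",   "Y"),
    ("DONE", "STORE", "Z"),              # labelled jump target
]

source.insert(2, (None, "SUB", "Y"))     # patch the middle of the programme

labels = {lab: addr for addr, (lab, _, _) in enumerate(source) if lab}
binary = [(op, labels.get(arg, arg)) for _, op, arg in source]

print(binary[1])                         # ('JUMP', 4): the target moved, and the reference moved with it
```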

Also in 1954, the IBM 704 was released with built-in index addressing. The address mapping was no longer broken by every operation inserted in the wrong place (IBM 1954), and the modification of programmes became easier.

But despite all the advances, operations and structures, or data and manipulations with them, stayed separate in programming. The first computer languages demonstrate this split well.

3. First languages.
a. Fortran.

Fortran, specified in 1954 (Backus, Herrick, Ziller 1954) and published in 1957 (Backus et al 1957) specifically for the IBM 704, gave priority to operation. It continued the work of Laning and Zierler, since its syntax was close to algebraic notation. Two-character alphabetical names were available for functions. Its aim was to formulate problems for the IBM 704 in terms of mathematical notation.

A programme written in it was dynamic. Its states, or unique configurations of information, could differ at different points in time. This was due to assignment operators, which could associate a new value with a variable - with the memory location behind its name (Backus et al 1957, 17). As a result, the meaning of expressions varied as the code unfolded.
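
A minimal sketch of this operation-first, stateful style, written here in Python for brevity rather than Fortran; the variable names are arbitrary.

```python
# Illustrative only: assignment rebinds a name, so the "same" expression
# denotes different values at different moments of execution.

x = 2
y = x * 10        # 20 at this point in time
x = 5             # reassignment changes the programme's state...
z = x * 10        # ...so the same expression now yields 50

print(y, z)       # 20 50: the meaning of `x * 10` depended on when it was evaluated
```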

b. LISP.

Further, not only numeric information had to be interpreted, but symbolic information too. Fortran could not cope with this. So in 1960 John McCarthy and his colleagues at MIT developed LISP I - the LISt Processor - to run on the same IBM 704 (McCarthy et al 1960).

The language was outwardly similar to the lambda calculus (Hudak 1989, 368; McCarthy 1960, 185-186). It had two types of data - atoms and lists - and they could finally be added and removed at arbitrary places in the programme (Sebesta 2012, 47-51). Nor were there statements, only functions (Jones et al 1990, 11). All calculation was done by applying them to arguments.

It ushered in functional programming (Hudak 1989, 367-369). However, its purely functional version, Lispkit Lisp, did not emerge until 1980 (Henderson 1980; Henderson, Jones G., Jones S. 1983). In the latter, the value of each variable was assigned once - reassignment was impossible. The absolute condition was referential transparency: given the same input, a function had to produce the same output at any point in time. Expressions had a constant meaning throughout the execution of the programme, which was a static system of mappings.

Evidently, structure was prioritised in this case.
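
A minimal sketch of this structure-first, referentially transparent style, again in Python rather than Lisp; the function below is arbitrary.

```python
# Illustrative only: no reassignment, and a referentially transparent function
# returns the same output for the same input wherever and whenever it is applied.

def scaled(x):
    return x * 10            # depends only on its argument, never on hidden state

y = scaled(2)                # 20
z = scaled(2)                # 20 again: the expression's meaning is constant

print(y == z)                # True
```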

In the second half of the twentieth century, the computer integrated more and more elements that had not previously belonged to it. With the launch of the IBM 305 RAMAC (Random Access Method of Accounting and Control) in 1957 (IBM 1957), the hard drive replaced paper and magnetic-tape storage. The programme was no longer stored externally; it was now part of the machine (Bashe 1986, 297-300).

Nevertheless, the programme remained a sequence of characters wholly reliant on its environment. Its existence was disembodied. But what could embody the code?

If the technical object is the unity of structure and operation, then the digital one must be the same. This requires, on the one hand, hardware capable of translating machine text into a visible image and, on the other hand, software capable of not dividing this image over again.

c. Smalltalk.

How could these goals be achieved if all the computing machines of the late 1960s were batch-oriented? As Alan Kay diagnosed, they could not be used by non-programmers (Kay 1969, 8-9). To rectify this, their use had to become interactive.

For this purpose, he proposed the Flex system. In it, the model for such communication was a screen with many windows on it, of which the topmost was the most visible. The user could act on them with manipulators (Kay 1969, 127-132, 235-238).

Later, at the Xerox Palo Alto Research Center, Kay headed a group to implement his paradigm. The result was the Xerox Alto hardware, originally called the Interim Dynabook, and the Smalltalk-72 programming system (Kay 1996, 533-535).

And it altered the whole landscape of computer science. 

In Smalltalk, every entity that could be handled at all, from an integer to a programme, was an object. Each could receive messages and respond to them (Goldberg, Kay 1976, 44-45). All of them communicated with one another. In doing so, objects coalesced local data and its possible behaviour, or structure and operation. The graphical interface was itself one such object.
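
A minimal sketch of this arrangement in present-day Python terms; the Counter object and its messages are invented for illustration, not taken from Smalltalk-72.

```python
# Illustrative only: an object bundles its local data (structure) with its
# possible behaviour (operation) and is addressed solely through messages.

class Counter:
    def __init__(self):
        self.value = 0                       # local data

    def receive(self, message):              # behaviour: respond to messages
        if message == "increment":
            self.value += 1
            return self.value
        if message == "report":
            return self.value
        return f"not understood: {message}"

c = Counter()
print(c.receive("increment"))   # 1
print(c.receive("report"))      # 1: state and behaviour travel together
```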

Smalltalk did not merely instruct a computer; it simulated one. The object was capable of doing the same things the machine did - storing, processing and transmitting information. The machine now contained virtual semblances of itself, relatively independent of their actual host. And thanks to the Xerox Alto, they were discernible to us: “programme segments can now be entirely identified with their displayed representation” (Kay 1969, 236). We could address each of them as an autonomous being.

All this culminated in the connection, on the hardware side, of display, keyboard and mouse; on the software side, of windows, icons and menus (Thacker et al 1981, 549-572). Thus, in place of the input and output of formalisms came an interactivity never previously formalised. This heralded the inception of digital objectivity.

4. Digital associated milieu.

But objects do not exist by themselves. To be, they must correlate with their associated milieu. What can this be if there is no physical causality for them?

In the late 1960s, the problem arose of integrating a text editing application with an information retrieval system and a page composition programme, for they could not run simultaneously (Hui 2016, 59-60). Following this, in 1969 Charles Goldfarb, Edward Mosher, and Raymond Lorie devised a special set of macros for text layout called the Generalized Markup Language, or GML, which was issued in 1978 (Goldfarb 1999, 76-78).

GML standardised the composition of a document through markup tags and document type definitions (IBM 1978, 5-10). In this way, different contents were given universal forms (paragraphs, headings, lists, tables, etc.). By customising these, the text’s presentation could be tailored to a particular device - teletype, screen, or other terminals (Ibid., 99).

The external characteristics of data became unified and thereby identifiable. This allowed one set of code to invoke the operation of another - for example, a word processor could send a file to a printer driver for printing. 
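
A minimal sketch of this division of labour in Python terms; the tag names and device rules are invented for illustration and do not reproduce GML’s actual syntax.

```python
# Illustrative only: content carries universal forms (heading, paragraph),
# while each device supplies its own rendering of those forms, so one piece
# of code can invoke the operation of another.

document = [
    (":h1", "Digital associated milieu"),
    (":p",  "Objects do not exist by themselves."),
]

renderers = {
    "teletype": {":h1": str.upper,              ":p": lambda s: s},
    "screen":   {":h1": lambda s: f"** {s} **", ":p": lambda s: "  " + s},
}

def compose(doc, device):
    rules = renderers[device]                      # the formatter invokes the device's rules
    return "\n".join(rules[tag](text) for tag, text in doc)

print(compose(document, "teletype"))
print(compose(document, "screen"))
```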

Such invocation is the recurrence of causality that determines and develops them. Because of it, applications are constantly being updated.

This established the first associated milieu for digital objects. From then on, it was not the hardware architecture that schematised the software, but the logic of its own conduct.

V. Phygital object.

What might arrive next? Simondon asserted that the technical object evolves towards ever greater autonomy. And the less its dependence on the environment, the more it incorporates the latter. At the limit, it must comprise within itself the conditions of its own possibility (MEOT 68).

However, complete sovereignty is hardly attainable. It cannot even be a target. A machine without an interface, one with which no interaction is possible, could never be instantiated (Chazal 2002, 152-160). For to be for us means to answer the appeal addressed to it while retaining its selfhood.

a. Bank card.

Be that as it may, the concretisation of digital objects must eventually carry them beyond computers.

One of the first attempts to do this is the debit card. Visa (then BankAmericard) presented it in the United States in 1975 and named it Entrée (Stearns 2011, 175-177). In it, the virtual bank account gained its actual equivalent. The card worked in ATM networks, but to pay with it at a point of sale one needed telephone authorisation, along with a lot of paperwork. This continued until Visa, in 1979, developed requirements for electronic payment terminals (Stearns 2011, 149-155). These devices were widely deployed in the US in the early 1980s. This shaped the physical part of the card’s associated milieu.

The digital one was formed, on the one hand, by the card encryption standards that Visa had adopted by 1979 (Stearns 2011, 147-149); and on the other, by the transaction processing systems launched earlier, in the first half of the 1970s: the Interbank Network for Electronic Transfer (INET) for MasterCard (at the time Master Charge) and the BankAmericard System Exchange (BASE I) for the present-day Visa (Mandell 1990, 61-62; Stearns 2011, 82-85).

To summarise, in the case considered, the material and the virtual object are coupled through a third instance: the former is not a mere piece of plastic, the latter not a mere series of numbers. But though they act together, whatever happens to the first, the second is not affected. Their duality is never surmounted. For unlike the account, the card is not the property of the user. Moreover, both will permanently remain what they are: manipulations can transform them quantitatively (appearance, money balance), but never qualitatively (the magnetic stripe will not become a chip, and the depositor will not become a banker).

b. Tangible User Interface.

This situation is partly remedied by the Tangible User Interface (TUI). 

The graphical display, keyboard and mouse reduce our experience of digital objects mostly to visibility. Some of our sensory faculties - such as touch - are switched off.

In turn, TUI promises to convert any item into a manipulator. Anything in the physical dimension must be linkable to the digital one. In other words, their connection must be ubiquitous (Ishii, Ullmer 1997a, 234-236).

Unfortunately, instead of extending to the whole of space, TUI confined itself to a fixed part of it. For example, the metaDESK (Ullmer, Ishii 1997b), by reifying windows, icons, and menus into physical units on a dedicated table, narrowed their possible uses. Such a machine was no longer general purpose. Its speciality was the modelling of geographical spaces, e.g. for urban planning (Ishii, Ullmer 1997a, 237; Ullmer, Ishii 1997b, 225-226).

Other TUI examples are similarly specialised: T(ether) (Lakatos et al 2014) enables the collaborative handling of 3D bodies; Materiable (Nakagaki et al 2016), of deformable surfaces; ZeroN (Lee et al 2011), of levitating elements such as planets.

Owing to this, as opposed to the GUI, it cannot programme the machine but is itself programmed. Its operation is predefined, and it has no means of escaping that definition. It gives access to particular programmes rather than to computers as such.

The virtual, however realistic, is always a map and never a territory (Weiser 1999 [1991], 3). Accordingly, the goal is not to institute another variation of it, nor to actualise some part of it, but to de-virtualise it completely. That is, to withdraw it from the computer and insert it into the world.

No matter how vast and substantial the universe inside the machine is, no one can enter it without special devices. It is enclosed within itself, and as long as it is, no activity within it will develop into anything beyond it.

For technology to work, it must not be an isolated part of our surroundings but the surroundings themselves. It must merge with our environment, not force us into its own. Obviously, TUI is not suited to this.

c. Towards universal medium.

What would approximate this?

In themselves, digital objects are bundles of formalisms; for us, they are representations on a screen. Hence their reconfigurability and mobility. Then, for the correspondence between the virtual and the physical realm to occur, the latter must be equally reconfigurable and mobile.

Accordingly, the condition of its possibility is twofold. It is at once

  • things whose shape is permutable depending on the code;
  • code whose record is rewritable depending on the thing.

Its fulfilment will grant a strong phygitality.

But how can materials be analogous to pixels? And assuming they can, why would some repeat the movements of others?

And this is where the Material User Interface (MUI) steps in. This is a hypothetical construct by Hiroshi Ishii and his team based on their concept of Radical Atoms (Ishii et al 2012, 48). By combining advances in science and technology, they could conceivably constitute a new objectivity: matter that is programmable whilst not dematerialisable.

As long as there is no such thing, what remains is to develop what is. Fortunately, a weak phygitality - one whose physical and digital states are modified simultaneously - is technically feasible today. An object that meets this requirement, although it does not sublate the duality of these levels, can move between them without loss of content.
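
A minimal, purely speculative sketch of that requirement in code; the lamp object, its fields, and its update rule are hypothetical, standing in for whatever sensor and actuator would actually couple the two levels.

```python
# Illustrative only: "weak phygitality" as a single update that modifies the
# physical and the digital state of one object in the same step, so neither
# can drift away from the other.

from dataclasses import dataclass

@dataclass
class PhygitalLamp:
    physically_on: bool = False      # state of the material fixture (stand-in for an actuator)
    digitally_on: bool = False       # state of its record in software

    def switch(self, on: bool):
        self.physically_on = on      # both levels are rewritten together...
        self.digitally_on = on       # ...which is the whole requirement here

lamp = PhygitalLamp()
lamp.switch(True)
print(lamp.physically_on == lamp.digitally_on)   # True: the two states never diverge
```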

The question of the milieu for such movements is still open. How global will it be? And how freely will they move within it?

Currently, it is not a stop but a destination. Whether digital and technical objects can be one and the same will decide the future of second nature.

Our objectivity is already inseparable from the information network, and its fragments will one day be reassembled.

Petr Zavisnov

Bibliography.

  1. Abelson, Harold, Gerald Jay Sussman, Julie Sussman (1996). Structure and Interpretation of Computer Programs. Second edition. New York: MIT Press
  2. Aubenque, Pierre (1983). Le Problème De L‘être Chez Aristote : Essai Sur La Problématique Aristotélicienne. 5e ed. Paris: Presses universitaires de France.
  3. Backus, John Warner et al (1957). The FORTRAN Automatic Coding System. // IRE-AIEE-ACM ‘57: Papers presented at the February 26-28, 1957, Western Joint Computer Conference. New York: Association for Computing Machinery - pp. 188–198
  4. Backus, John Warner; Beeber, R. J.; Best, Sheldon F.; Goldberg, Richard; Herrick, Harlan L.; Hughes, R. A.; Mitchell, L. B.; Nelson, Robert A.; Nutt, Roy; Sayre, David; Sheridan, Peter B.; Stern, Harold; Ziller, Irving (1956). The FORTRAN Automatic Coding System for the IBM 704 EDPM: Programmer‘s Reference Manual. New York: International Business Machines Corporation, October 15
  5. Backus, John Warner; Herrick, Harlan and Ziller, Irving (1954). Specifications for the IBM Mathematical FORmula TRANslating system, FORTRAN. New York: International Business Machines Corporation, Programming Research Group. November 10
  6. Barthélémy, Jean-Hugues (2008). Simondon Ou L‘encyclopédisme Génétique. 1re éd. Paris: Presses universitaires de France.
  7. Bashe, Charles (1986). IBM‘s Early Computers : A Technical History. Cambridge, Mass.: MIT Press.
  8. Blumenberg, Hans (1974). On a Lineage of the Idea of Progress. Social Research, volume 41, no. 1: pp. 5–27
  9. Breton, Philippe (1990). Une Histoire de L‘informatique. Nouv. éd. Paris: Editions La Découverte.
  10. Ceruzzi, Paul E. (2003). A History of Modern Computing. 2nd ed. London: MIT Press.
  11. Chazal, Gérard (2002). Interfaces : Enquêtes Sur Les Mondes Intermédiaires. Seyssel: Champ Vallon : Diffusion, Presses universitaires de France.
  12. Colburn, Timothy R. 2000. Philosophy and Computer Science. Armonk, N.Y.: M.E. Sharpe.
  13. Caujolle-Zaslawsky, Françoise (1981). Sur quelques traductions récentes de To Ti Hn Einai. Revue de Théologie et de Philosophie 113, pp. 61-75
  14. Goldberg, Adele; Alan Kay. (1976). Smalltalk-72 : Instruction Manual. Palo Alto, CA: Xerox Corp.
  15. Goldfarb, Charles F. (1999). The roots of SGML: A personal recollection. Technical Communication. Volume 46. Issue 1. Society for Technical Communication. Washington - pp. 75-78
  16. Grosman, Jérémy (2016). Simondon et l‘informatique II // Gilbert Simondon ou l‘invention du futur. Paris: Klincksieck
  17. Henderson, Peter (1980). Functional Programming : Application and Implementation. Englewood Cliffs, N.J.: Prentice-Hall International.
  18. Henderson, Peter, Geraint A. Jones, and Simon B. Jones (1983). The LispKit Manual. Oxford: Oxford University, Programming Research Group
  19. Hui, Yuk (2016). On the Existence of Digital Objects. Minneapolis: University of Minnesota Press.
  20. IBM (1954). 704 Electronic Data-Processing Machines: Manual of Operation. New York: International Business Machines Corporation
  21. IBM (1957). 305 RAMAC (Random Access Method of Accounting and Control). Manual of Operation. New York: International Business Machines Corporation.
  22. IBM (1978). Document Composition Facility : Generalized Markup Language (GML) : User‘s Guide. 1st ed. San Jose, California: International Business Machines Corporation
  23. Ishii, Hiroshi, Dávid Lakatos, Leonardo Bonanni, and Jean-Baptiste Labrune (2012). Radical atoms: beyond tangible bits, toward transformable materials. Interactions 19, no. 1: pp. 38-51
  24. Ishii, Hiroshi; Ullmer, Brygg (1997a). Tangible bits: towards seamless interfaces between people, bits and atoms // Proceedings of the ACM SIGCHI Conference on Human factors in computing systems, pp. 234-241
  25. Ishii, Hiroshi; Ullmer, Brygg (1997b). The metaDESK: models and prototypes for tangible user interfaces // Proceedings of the 10th annual ACM symposium on User interface software and technology, pp. 223-232
  26. Jones, Robin; Maynard, Clive; Stewart, Ian (1990). The Art of Lisp Programming. London: Springer-Verlag
  27. Kay, Alan Curtis (1969) The reactive engine. Ph.D. Ann Arbor. Michigan: University of Utah, University Microfilms Inc.
  28. Kay, Alan Curtis (1996). The early history of Smalltalk // History of Programming Languages. Volume II. New York: Addison-Wesley Pub. Co. - pp. 511-598.
  29. Kurtov, Michael (2016). Simondon et l‘informatique III. L‘évolution des langages de programmation à la lumière de l‘allagmatique // Gilbert Simondon ou l‘invention du futur. Paris: Klincksieck
  30. Lakatos, David, Matthew Blackshaw, Alex Olwal, Zachary Barryte, Ken Perlin, and Hiroshi Ishii (2014). T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation // Proceedings of the 2nd ACM symposium on Spatial user interaction, pp. 90-93.
  31. Laning, J. Halcombe, and Neal Zierler (1954). A Program for Translation of Mathematical Equations for Whirlwind I. Engineering Memorandum E-364. Cambridge, Mass: Instrumentation Laboratory, Massachusetts Institute of Technology, January
  32. Lee, Jinha, Rehmi Post, and Hiroshi Ishii (2011) ZeroN: mid-air tangible interaction enabled by computer controlled magnetic levitation // Proceedings of the 24th annual ACM symposium on User interface software and technology, pp. 327-336.
  33. Leibniz, Gottfried Wilhelm (1703). Explication de l‘arithmétique binaire, qui utilise seulement les caractères 1 et 0, avec quelques remarques sur son utilité, et sur la lumière qu‘elle jette sur les anciennes figures chinoises de Fu Xi. Histoire de l‘Académie royale des sciences, Paris, Charles-Estienne Hochereau
  34. Mandell, Lewis (1990). The Credit Card Industry : A History. Boston: Twayne Publishers.
  35. McCarthy, John (1960). Recursive functions of symbolic expressions and their computation by machine, Part I. Communications of the ACM, 3(4), pp. 184-195.
  36. McCarthy, John et al. (1960). LISP I Programmer's Manual. Cambridge: Massachusetts Institute of Technology, Computation Center and Research Laboratory of Electronics.
  37. Nakagaki, Ken, Luke Vink, Jared Counts, Daniel Windham, Daniel Leithinger, Sean Follmer, and Hiroshi Ishii (2016). Materiable: Rendering dynamic material properties in response to direct physical touch with shape changing interfaces // Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 2764-2772.
  38. Quine, Willard Van Orman (1951). Main Trends in Recent Philosophy: Two Dogmas of Empiricism. The Philosophical Review 60, no. 1: 20–43.
  39. Ross, Douglas Taylor (1956) Flexowriter Keyboard Input to WWI, Preliminary Specifications. Cambridge, Mass.: MIT Servo Lab. Memo 7138-M-148, March 14 - 3 pp.
  40. Ross, Douglas Taylor (1988). A personal view of the personal work station: Some firsts in the fifties. // A history of personal workstations. New York: Addison-Wesley - pp. 51-114.
  41. Sebesta, Robert W. (2012). Concepts of Programming Languages. Tenth edition. Boston: Pearson.
  42. Simondon, Gilbert (2005). [ILFI] L‘individuation À La Lumière Des Notions De Forme Et D‘information. Grenoble: Millon.
  43. Simondon, Gilbert (2012). [MEOT] Du Mode D‘existence Des Objets Techniques. Nouvelle édition revue et corrigée. Paris: Aubier.
  44. Stearns, David L. (2011). Electronic Value Exchange : Origins of the VISA Electronic Payment System. London: Springer.
  45. Svigals, Jerome (2012). The long life and imminent death of the mag-stripe card. IEEE Spectrum 49, no. 6: 72-76.
  46. Thacker, Charles P.; McCreight, Ed; Lampson, Butler; Sproull, Robert; Boggs, David (1981). Alto: A personal computer // Computer Structures: Principles and Examples (2nd ed.). McGraw-Hill. - pp. 549–572
  47. Waring, Bonnie G., Leonor Álvarez-Cansino, Kathryn E. Barry, Kristen K. Becklund, Sarah Dale, Maria G. Gei, Adrienne B. Keller et al. (2015). Pervasive and strong effects of plants on soil chemistry: a meta-analysis of individual plant Zinke effects. Proceedings of the Royal Society B: Biological Sciences. Volume 282, Issue 1812, pp. 1-8
  48. Weiser, Mark (1999 [1991]). The computer for the 21st century. ACM SIGMOBILE Mobile Computing and Communications Review, Volume 3, Issue 3: pp. 3-11.
  49. Wilkes, Maurice Vincent (1957). The Preparation of Programs for an Electronic Digital Computer. 2d ed. Reading, Mass.: Addison-Wesley Pub. Co.