Science increases our command over the environment. All development presupposes it as a condition of its possibility. Thanks to it, humanity is freed from the bare struggle to survive. Everything we have around us, our very surroundings, is a consequence of its activity. Even creativity that can hardly be called scientific is mediated by creativity that can. And any problems that arise from scientific progress are solved through it alone.
But what is science? How far do we rule it, and ourselves through it? And how are we not to lose it? To answer this, let us trace its stages and take stock of their intermediate outcome.
Its point of departure is the Cartesian method (Lecourt 2010, 6-11).
Its first rule is to acknowledge as reliable nothing that is not thought clearly and precisely (Alquié 2005, 39-69). And such is only that which has no parts and admits of no variation. Where there is no difference, there is nothing to doubt. For example, two quantities equal to a third are equal to each other, since the converse can have no content. The first truths are beyond doubt because they are simple, and therefore they ground our thinking (Descartes 1984, 111-150).
On this account, for any compound substance, a series of operations should be performed:
first divide it into elementary ones, or elevate it to its first premises;
then derive it from them by proceeding from the simple to the complex without missing any links (so that the validity of the simple elements is preserved in the transition to the compound, which would otherwise not be valid).
Following this, it is possible to achieve completeness of knowledge (registers so detailed and lists so comprehensive that nothing is missing) in two ways:
through a movement in breadth, or by deriving all inferences from initial concepts (e.g. Euclid constructs all possible geometrical figures, their relations, etc. from the first five postulates);
through a movement in depth, or by producing increasingly complex and narrow concepts from simple and general ones (in which case the maximally concrete ramifications of the speculative construction become indistinguishable from experience).
This means that sensible data can be the product of logical and mathematical deduction.
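For illustration, the first truth cited above - that two quantities equal to a third are equal to each other - can be written as a single elementary inference:

$$ (a = c) \wedge (b = c) \;\Rightarrow\; a = b, $$

and every compound claim of the method is meant to be a finite chain of such steps, each as transparent as this one.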
With the Cartesian method, science was able to categorise the world into clear and precise representations. Their simplicity allowed them to be quantified (Le Moigne 2007, 14-40).
As a result, anything however dark could now be unfolded into its constituent elements and then folded into an already transparent construction. All relations determining the internal structure of anything could be reliably established. So nature became completely computable.
Therefore, the notion of scientific objectivity points first of all to the totality of calculation: everything can be calculated (Milet 2000, 9-12). Any quality here is converted into quantity - things cease to be self-contained substances and turn out to be sets of measurable parameters. Why they are like this and behave like this can be reconstructed mathematically.
In this manner, science translates the indefinable complexity of things into our uncertainty about them. It works with the inaccessible to place it within the accessible.
Furthermore, the presence of its object in our experience is secondary. Through modelling, what cannot be sensory can be mathematical (Sinaceur 2006, 756-759). When, in the nineteenth century, the electromagnetic theory of light replaced the aether as the medium for the propagation of optical vibrations, neither the interaction of charged particles nor the all-pervasive material carrier could be observed by our senses. In both cases the model was primary; reality was just a correspondence found for it.
Consider in more detail what a model is.
Latin modulus meant a measure for setting proportions between parts of an architectural structure. In the Middle Ages it evolved into French moule, English mould and German model. In the Renaissance, Italian modello generated French modèle, English model and German modell (Armatte, Dahan 2004, 245). All this time it was a simplified materialisation of some entity, or a maquette used mainly by sculptors, stonemasons and architects. Eighteenth century encyclopaedists employed the word “system” in a similar sense (Blanchard, Olsen 2002).
Modelling was not practised in physics until the early 20th century (Israël 1996, 211). Only in 1902, in the tenth edition of the Encyclopaedia Britannica, did Ludwig Boltzmann mark the shift from the artistic to the scientific usage of the model (Boltzmann 1902, 788-791). It now schematised empirical facts in order to facilitate theorising and experimenting with them. Its aim was to seek mathematical expressions and equations adequate to them. In this way, a large number of observations could be generalised by a relatively small number of parameters.
This was caused by the rejection of mechanism and realism and the advent of physical analogies. James Clerk Maxwell defined the latter as partial similarities between the laws of one science and another, which allow them to illustrate each other (Maxwell 1890, 156). Even earlier, in 1859, Gabriel Lamé had noted that the theories of magnetism and heat are so close that a solution correct in one is also correct in the other (Lamé 1859, IX). But this necessitated their unified formalisation - the physical quantities of both must belong to the same mathematical class (Armatte 2005, 95).
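A standard textbook illustration of such an analogy (not drawn from Lamé's own text): in a region without sources, both the steady temperature and the electric or magnetic potential satisfy the same Laplace equation,

$$ \nabla^{2} u = \frac{\partial^{2} u}{\partial x^{2}} + \frac{\partial^{2} u}{\partial y^{2}} + \frac{\partial^{2} u}{\partial z^{2}} = 0, $$

so any solution worked out for heat can be read off as a solution for potential, and vice versa - provided both are formalised as quantities of the same mathematical class.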
For one science to be convertible into another, they must all have a common foundation. Hence David Hilbert's project to formalise all mathematics by proving its consistency (Hilbert 1996 [1922], 1115-1134). In line with this, the Vienna Circle proposed to reduce all fields of knowledge to a formal system built on protocol statements expressing elementary sensible data (Hahn, Neurath, Carnap 1996 [1929], 321-340). This would have enabled logical analysis to be applied to any empirical material and thus would have ushered in a unified science.
However, as Kurt Gödel and Alfred Tarski showed, a formal theory is unable to demonstrate its own consistency: relying on certain rules of inference, one cannot establish from within whether they are true (Blanché 1962, 57-58).
This implies there is no universal modelling accurate in all possible cases. It must be ascertained anew each time. As Rudolf Carnap recognised in 1935, mathematics is not identical with one system, it is an infinite series of ever richer languages (Carnap 1964, 222).
All the phases described have led to the modern concept of the model (Israël 1996, 75-82). It is a mathematical reconstruction that schematises different but isomorphic realities while remaining probabilistic (or representing its object only partially). On this principle, Balthasar van der Pol in 1928 first described the heartbeat by analogy with relaxation oscillations (van der Pol 1928). Earlier, the model of the atom put forward by Niels Bohr in 1913 to cover all elements proved suitable solely for hydrogen (Bensaude-Vincent, Stengers 1996, 236-237).
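In its now-standard form (modern notation, not that of the 1928 paper), the van der Pol oscillator reads

$$ \ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + x = 0, $$

where for large values of the parameter $\mu$ the solutions become the abrupt, self-sustained relaxation oscillations to which the heartbeat was likened.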
In total, calculations of the possible implications of some assumption are confirmed (or not) exclusively by other calculations. The findings are no longer immediate: the atom and the heartbeat are unperceivable; their models are perceptible. Meanwhile, no mathematisation can be totalised. Science henceforth deals with noumena without depriving them of their noumenality.
Here a dualism seems to emerge between calculation and experience.
Fortunately, its presence is only an effect of perceiving science from the outside.
The primacy of technical and numerical operations has accompanied science since the scientific revolution of the 16th century. And in the course of progress, the intensity of their application has only grown. As Galileo Galilei himself admitted, the book of nature is written in the language of mathematics (Galileo 1896, 232).
This means that in scientific cognition, calculation is primary and experience secondary. And from the late 19th and early 20th centuries onwards, this truth could no longer be ignored. Among other things, the history of telecommunications testifies to it. James Clerk Maxwell devised the formulae for the propagation of electromagnetic waves in 1861-1862, Heinrich Hertz experimentally confirmed them in 1886-1888, and on their basis Guglielmo Marconi invented radio in 1896 (Hong 2001).
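In modern vector notation (not Maxwell's original 1861-1862 form), his equations combine in a vacuum into a wave equation,

$$ \nabla^{2}\mathbf{E} = \mu_{0}\varepsilon_{0}\,\frac{\partial^{2}\mathbf{E}}{\partial t^{2}}, \qquad c = \frac{1}{\sqrt{\mu_{0}\varepsilon_{0}}} \approx 3 \times 10^{8}\ \mathrm{m/s}, $$

whose propagation speed coincides with the measured speed of light - a result obtained on paper decades before Hertz's apparatus produced the waves themselves.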
Since then, physical objectivity could only be specified technically: without this, observation would not take place at all. Thus, in 1911 Ernest Rutherford derived an equation describing the scattering of alpha particles on the assumption that the atom has a positively charged nucleus (Rutherford 1911). In 1913 Hans Geiger and Ernest Marsden built a special apparatus to measure this scattering and conducted a series of experiments with it, eventually verifying his model (Geiger, Marsden 1913).
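In modern notation, Rutherford's prediction is the scattering law

$$ \frac{d\sigma}{d\Omega} = \left( \frac{Z_{1}Z_{2}e^{2}}{16\pi\varepsilon_{0}E_{k}} \right)^{2} \frac{1}{\sin^{4}(\theta/2)}, $$

where $E_{k}$ is the kinetic energy of the alpha particle and $\theta$ the deflection angle; it was this $1/\sin^{4}(\theta/2)$ dependence that the counts of Geiger and Marsden reproduced.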
Capturing these changes, Gaston Bachelard wrote in 1934 about technical science and its new spirit (Bachelard 1973, 17; Hottois 2004). This latter consists in antisubstantialism, or rather in the identity of construction and contemplation. Nature and technology are from now on indistinguishable there: the former cannot be known without the latter. Nevertheless, in this regard, for him it was not so much a question of a new qualitative leap as of understanding the essence of what had already happened (Hottois 1984, 60).
But if scientific endeavour has not undergone any fundamental alteration since the 16th century, the advent of what does the term technoscience name?
Jean-Marc Lévy-Leblond attributes the turning point that heralded its arrival to the Manhattan Project, which yielded the first atomic weapons (Lévy-Leblond 2013). At that time, the scientific community was occupied not with elucidating nature, but with transcending it. All the previous study of the atom and everything discovered in its course was put at the service of a political task. Unintentional discoveries have been supplanted by their utilisation in the name of a certain intention. It was not research that dictated the direction of development, but on the contrary, development assigned goals to research. For the first time, science was seized by its own results.
Today, its practice outpaces theory: we learn the “how” before the “why”. The story of 20th-21st century physics illustrates this. Contrary to the theories of the time, Heike Kamerlingh Onnes revealed in 1911 a sharp drop in the electrical resistance of mercury at about 4 Kelvin (roughly -269 °C). Not until 1935 were the brothers Fritz and Heinz London able to explain it. The later theory proposed by John Bardeen, Leon Cooper and John Schrieffer in 1957 implied that superconductivity was impossible at temperatures above about 30 Kelvin. The record of 23.2 Kelvin was reached in the early 1970s. But already in 1986, with new materials, it was attained at 35 Kelvin (Bednorz, Müller 1986), in 1987 - at 90 (Wu et al 1987), and in 2019 - at 250 (under a pressure of 170 gigapascals), approaching room temperature (Drozdov et al 2019). Yet no new theoretical basis for all these changes has been offered since the days of Bardeen, Cooper and Schrieffer.
With non-zero probability, we will soon be in a position, if not already there, where it is impossible to report either on the current techno-scientific condition or on its future course. What is happening to the second nature is escaping our grasp.
Although we know how to manufacture technical objects, we do not know what they are (and increasingly so). The reason is the escalating complexity of technology, which has outgrown the capacity of individual consciousness to comprehend it. Our brain cannot process the volume of information necessary for its functioning. To whom, then, can this excess of intellectual labour be delegated?
The more intricate the system, the more difficult it is to regulate. A single centre coordinating many efforts simultaneously in many areas is hardly possible. Nevertheless, the computing power of machines is sufficient to hold together as many variables and their combinations as could represent the workings of large systems.
This is where simulation comes in. The first of its kind was the design of the hydrogen bomb, which began in 1946 at Los Alamos (Galison 2011, 99-100). There was no object for which a model could be built and then validated. There was no theory to predict the unfolding of the trial. Equally, there was no experiment to test the conjecture. Neither the physical conditions nor the mathematical properties of the examined process were known in advance. Moreover, they could not be reproduced in the laboratory. The spread of radiation, the hydrodynamics of materials, possible hydrogen compositions and so on not only fell into non-overlapping scientific fields, but also had to be analysed simultaneously, under a shock wave and at the temperature of the sun's surface (Armatte, Dahan 2004, 263-265).
The development of thermonuclear arms thus required an artificial environment with natural laws - one that replicated nature free of all its constraints (Galison 2011, 117-122). Stanislaw Ulam, Nicholas Metropolis, Enrico Fermi and John von Neumann recreated this with ENIAC, the first Turing-complete digital computer, and a series of computational algorithms based on random sampling (Ibid., 98). Relying on this, they were able to re-enact the nuclear chain reaction - the scattering of neutrons and their collisions with uranium nuclei. This method was later called Monte Carlo (Metropolis, Ulam 1949).
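To make the idea concrete, here is a minimal sketch of Monte Carlo sampling - a toy neutron-transport game, not the Los Alamos code; the mean free path, absorption probability and slab thickness below are invented for illustration only:

    import random

    # Toy Monte Carlo sketch: follow neutrons through a slab of material,
    # drawing random free-flight lengths and random scattering directions,
    # and count how many are reflected, absorbed or transmitted.

    MEAN_FREE_PATH = 1.0    # assumed average distance between collisions (arbitrary units)
    ABSORPTION_PROB = 0.3   # assumed chance that a collision absorbs the neutron
    SLAB_THICKNESS = 5.0    # slab depth along x (arbitrary units)

    def simulate_neutron(rng: random.Random) -> str:
        """Follow one neutron until it is reflected, transmitted or absorbed."""
        x = 0.0
        direction = 1.0  # cosine of the flight angle relative to the x axis
        while True:
            x += direction * rng.expovariate(1.0 / MEAN_FREE_PATH)
            if x < 0.0:
                return "reflected"
            if x > SLAB_THICKNESS:
                return "transmitted"
            if rng.random() < ABSORPTION_PROB:
                return "absorbed"
            direction = rng.uniform(-1.0, 1.0)  # scatter into a new random direction

    def estimate_fates(n: int = 100_000, seed: int = 0) -> dict:
        """Repeat the random history many times and average the outcomes."""
        rng = random.Random(seed)
        counts = {"reflected": 0, "transmitted": 0, "absorbed": 0}
        for _ in range(n):
            counts[simulate_neutron(rng)] += 1
        return {fate: c / n for fate, c in counts.items()}

    if __name__ == "__main__":
        print(estimate_fates())

No closed-form theory and no laboratory experiment is consulted here: the answer is produced by letting the machine repeat a random history many times.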
Finally, the machine ceased to be a tool and became a receptacle of numerous fictitious realities, less limited but no less real. And over time they grew ever larger and more complicated. In 1950, on the basis of the same ENIAC, Jule Charney simulated the first 24-hour weather forecast (Gramelsberger 2011, 140). In 1972, to determine the limits of demographic and economic growth with depleting resources, Dennis Meadows and his colleagues built the system dynamics model World3, which outlined 12 future scenarios for humanity (Meadows et al 1972). In 1995, as part of TRANSIMS, NISAC (National Infrastructure Simulation and Analysis Center) simulated Portland's transport network with all its streets, buildings, canals and vehicles, and in addition its 1.6 million inhabitants, each engaged in their daily activities (Smith, Beckman, Baggerly 1995). In 2018, the simultaneous use of 2000 processors at the University of Arizona's high-performance computing cluster over 3 weeks reconstructed the development of about 8 million universes from the big bang to the present (Behroozi et al 2019).
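For a sense of what such a system dynamics model is, here is a deliberately crude sketch in the spirit of World3 - two coupled stocks stepped forward year by year; the coefficients and the structure are invented for illustration and bear no relation to the actual World3 equations:

    # Toy system-dynamics sketch: population grows while it consumes a
    # non-renewable resource, and its growth slows as the resource runs out.
    # All coefficients are invented for illustration only.

    def run(years: int = 200):
        population, resource = 1.0, 100.0   # arbitrary initial stocks
        history = []
        for year in range(years):
            growth = 0.03 * population * (resource / (resource + 10.0))
            decline = 0.01 * population
            consumption = 0.05 * population
            population = max(population + growth - decline, 0.0)
            resource = max(resource - consumption, 0.0)
            history.append((year, population, resource))
        return history

    if __name__ == "__main__":
        for year, pop, res in run()[::20]:
            print(f"year {year:3d}  population {pop:7.2f}  resource {res:7.2f}")

Each run of such a loop is one scenario: the future states are neither deduced from a theory nor observed, they are computed.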
So, simulation is the absorption of non-technical and non-digital phenomena by technology and digitalisation. It repeats the creation of things, at the limit, of everything (Quéau 1986, 254-255). The boundary between the artificial and the natural does not exist for it. Any process can be simulated and brought to its probable end. Any quality can be decomposed into quantity. Everything is divided into sequences of 0 and 1, for 0 and 1 are the matter of which multiple virtual worlds are composed (Vial 2012). Therefore, the Cartesian method finds its apex here.
In simulation, computation stops being an external manipulation (Quéau 1986, 122-124). It acquires autonomy and unfolds along its own trajectory, while the researcher merely records its outcome. It becomes a reality in which 0 and 1 operate on themselves and, in contrast to formal and natural languages, are able to authenticate themselves. Hypothesis and experiment are indistinguishable here - to put forward one is to carry out the other.
And though simulation transforms the previously unthinkable into at once thinkable and real, its reality remains virtual. Hence, it just partially overcomes our alienation from the technical system.
To accomplish this, simulation must enter our lives, or rather our practice must become its realisation. Moreover, such experimentation must test possible social relations, and in such a way that this testing extends the frontiers of our sociality.
Society has always been a construct resulting from organisational efforts and antecedent conditions. Once the origins of modern nations have been investigated, they no longer appear to be an immutable givenness (Gellner 1983; Hobsbawm 1990; Anderson 1983). They are as artificial as everything around us. They once unified local diversity through a single literary language and a common political form, and today they themselves are a local context gradually eroding into a global one.
Regrettably, merely knowing about the artificiality of all social forms does not make them approachable to us. For the majority, their engineering is unattainable; for the minority, it is impossible owing to the complexity of its target.
To defeat this unattainability and impossibility, social relations must be programmable. More precisely, they must be digital objects, or series of 0 and 1 entirely transparent and changeable.
This demands a digital environment that involves many agents (potentially, everyone), is open to their influence and at the same time does not belong to any of them.
It arose with Ethereum, the first blockchain with a Turing-complete programming language (Wood 2014). Each of its nodes runs a virtual machine executing code in which anything can be encoded. Compared with Bitcoin's scripts, which admit strictly one transaction programme, Ethereum lets anyone write that programme. Accordingly, it has arbitrarily many ways of transiting from one network state to another. Each of them is a protocol in which any logic can be inscribed. And as long as that logic is executed by many agents simultaneously, it cannot but be social.
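Schematically (in Python rather than an actual contract language, with invented names), "a protocol in which any logic can be inscribed" means that every node applies the same user-written state-transition rule to the shared state and must arrive at the same result:

    # Toy sketch of a shared state-transition rule, not the Ethereum Virtual Machine.
    from typing import Callable, Dict, List

    State = Dict[str, int]        # toy shared state: balances by account name
    Tx = Dict[str, object]        # toy transaction payload

    def transfer(state: State, tx: Tx) -> State:
        """One possible user-written rule: move value if the sender has funds."""
        sender, receiver, amount = tx["from"], tx["to"], tx["amount"]
        if state.get(sender, 0) < amount:
            return state                      # an invalid transfer changes nothing
        new_state = dict(state)
        new_state[sender] -= amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
        return new_state

    def apply_block(state: State, txs: List[Tx],
                    rule: Callable[[State, Tx], State]) -> State:
        """Every node runs the same rule over the same transactions in order."""
        for tx in txs:
            state = rule(state, tx)
        return state

    if __name__ == "__main__":
        genesis = {"alice": 10, "bob": 0}
        block = [{"from": "alice", "to": "bob", "amount": 3}]
        # two independent "nodes" compute the next state and must agree
        assert apply_block(genesis, block, transfer) == apply_block(genesis, block, transfer)
        print(apply_block(genesis, block, transfer))   # {'alice': 7, 'bob': 3}

The social point lies in the last lines: the rule itself is arbitrary, but its execution is collective, since every participant runs it and checks everyone else's result.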
This renders web3 a real-time simulation of institutions probing possible forms of collective connections. All its participants are both subject and object of the global experiment. Thus collectivity loses its immediacy - it is no longer a givenness, but our engineering product. We are now its creators. With some assumptions, web3 is technoscience expanded from physical to cultural realms.
Be that as it may, all this remains virtual. The outputs of this simulation cannot be incorporated into life without coupling with actual practices. For the new to be accepted, it must be relatable, i.e. it must be an improved analogy of the old. If there is no link to what was trusted before, nothing will ever succeed.
Hence, contrary to many claims (Caseau, Soudoplatoff 2016), trust cannot be ensured purely algorithmically. The inability to find such an equivalent predetermined the collapse of non-fungible tokens (Conlon, Corbet 2023). Having failed to fulfil their social function, they ended up symbolising unfulfilled hopes.
To surmount this desperate situation, there must at least be a protocol capable of transferring the status of virtual objects into an actual one: for instance, guaranteeing ownership of tokens not technologically but socially, in the form of property rights. In other words, for web3 the shift from digitalisation to phygitalisation is existential.
How much of the freedom proclaimed online will also hold offline is proportional to how freely one can pass from the one to the other. If simulation cannot transmute the second nature, it never existed.
Petr Zavisnov