God (programmer)

Is God a Programmer? Analyzing the deep-universe simulation hypothesis at the Planck scale

The simulation hypothesis is the proposal that all of reality, including the Earth and the rest of the universe, could be an artificial simulation, such as a computer simulation. Neil deGrasse Tyson put the odds at 50-50 that our entire existence is a program on someone else's hard drive [1]. David Chalmers noted: "We in this universe can create simulated worlds and there's nothing remotely spooky about that. Our creator isn't especially spooky, it's just some teenage hacker in the next universe up. Turn the tables, and we are essentially gods over our own computer creations" [2] [3] [4].

The commonly postulated ancestor-simulation approach, which Nick Bostrom called "the simulation argument", argues for "high-fidelity" simulations of ancestral life that would be indistinguishable from reality to the simulated ancestors. However, this simulation variant can be traced back to an 'organic base reality' (the original programmer ancestors and their physical planet).

The Programmer God hypothesis[5] conversely states that a (deep-universe) simulation began with the big bang and was programmed by an external intelligence (external to the physical universe), the Programmer being by definition a God in the creator-of-the-universe context. Our universe in its entirety, down to the smallest detail and including life-forms, is within the simulation; the laws of nature, at their most fundamental level, are coded rules running on top of the simulation operating system. The operating system itself is mathematical (and potentially the origin of mathematics).


Any candidate for a Programmer-God simulation-universe source code must satisfy these conditions:

1. It can generate physical structures from mathematical forms.
2. The sum universe is dimensionless (simply data on a celestial hard disk).
3. We must be able to use it to derive the laws of physics (because the source code is the origin of the laws of nature, and the laws of physics are our observations of the laws of nature).
4. The mathematical logic must be unknown to us (the Programmer is a non-human intelligence).
5. The coding should have an 'elegance' commensurate with the Programmer's level of skill.


Philosophy

Physicist Eugene Wigner ("The Unreasonable Effectiveness of Mathematics in the Natural Sciences")[6]:

"The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve."


Discussion

Philosophy of mathematics is that branch of philosophy which attempts to answer questions such as: ‘why is mathematics useful in describing nature?’, ‘in which sense, if any, do mathematical entities such as numbers exist?’ and ‘why and how are mathematical statements true?’ This reasoning comes about when we realize (through thought and experimentation) how the behavior of Nature follows mathematics to an extremely high degree of accuracy. The deeper we probe the laws of Nature, the more the physical world disappears and becomes a world of pure math. Mathematical realism holds that mathematical entities exist independently of the human mind. We do not invent mathematics, but rather discover it. Triangles, for example, are real entities that have an existence [7].

The mathematical universe refers to universe models whose underlying premise is that the physical universe has a mathematical origin, the physical (particle) universe is a construct of the mathematical universe, and as such physical reality is a perceived reality. It can be considered a form of Pythagoreanism or Platonism in that it proposes the existence of mathematical objects; and a form of mathematical monism in that it denies that anything exists except these mathematical objects.

Physicist Max Tegmark in his book "Our Mathematical Universe: My Quest for the Ultimate Nature of Reality"[8][9] proposed that "our external physical reality is a mathematical structure".[10] That is, the physical universe is not merely described by mathematics, but is mathematics (specifically, a mathematical structure). Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Any "self-aware substructures will subjectively perceive themselves as existing in a physically 'real' world".[11]

The principal constraints on any mathematical universe simulation hypothesis are:

1. the computational resources required. The ancestor simulation can resolve this by adopting the virtual-reality approach, where only the observed region is simulated, and only to the degree required; and

2. that any 'self-aware structures' (humans for example) within the simulation must "subjectively perceive themselves as existing in a physically 'real' world".[12] Succinctly, our computer games may be able to simulate our physical world, but they are still only simulations of a physical reality (regardless of how realistic they may seem) ... we are not yet able to program actual physical dimensions of mass, space and time from mathematical structures, and indeed this may not be possible with a computer hardware architecture that can only process binary data.


Deep-universe source code

As a deep-universe simulation is programmed by an external (external to the universe) intelligence (the Programmer God hypothesis), we cannot presume a priori knowledge regarding the simulation source code other than that from this code the laws of nature emerged, and so any deep-universe simulation model we try to emulate must be universal, i.e. independent of any system of units, of the dimensioned physical constants (G, h, c, e ...), and of any numbering systems. Furthermore, although a deep-universe simulation source code may use mathematical forms (circles, spheres ...) we are familiar with (for ultimately the source code is the origin of these forms), it will have been developed by a non-human intelligence, and so we may have to develop new mathematical tools to decipher the underlying logic. By implication, therefore, any theoretical basis for a source code that fits the above criteria (and from which the laws of nature will emerge) could be construed as our first tangible evidence of a non-human intelligence.


Constants

A physical universe is characterized by measurable quantities (size, mass, color, texture ...), and so a physical universe can be measured and defined. Contrast then becomes information: the statement 'this is a big apple' requires a small apple against which 'big' becomes a relative term. For analytical purposes we select a reference value, for example 0°C or 32°F, and measure all temperatures against this reference. The finer the resolution of the measurements, the greater the information content (the file size of a 32 mega-pixel photo is larger than that of a 4 mega-pixel photo). A simulation universe may be presumed to also have a resolution dictating how much information the simulation can store and manipulate.

To measure the fundamental parameters of our universe physics uses physical constants. A physical constant is a physical quantity that is generally believed to be both universal in nature and have a constant value in time. These can be divided into


1) dimensioned (measured using physical units kg, m, s, A ...), such as the speed of light c, the gravitational constant G, the Planck constant h ..., as in the table below

Physical constants
constant              symbol   SI units
Speed of light        c        m s⁻¹
Planck constant       h        J s
Elementary charge     e        C (A s)
Boltzmann constant    kB       J K⁻¹

Physicist Lev Okun noted: "Theoretical equations describing the physical world deal with dimensionless quantities and their solutions depend on dimensionless fundamental parameters. But experiments, from which these theories are extracted and by which they could be tested, involve measurements, i.e. comparisons with standard dimension-ful scales. Without standard dimension-ful units and hence without certain conventions physics is unthinkable" [13].


2) dimensionless, such as the fine structure constant α. A dimensionless constant does not measure any physical quantity (it has no units; units = 1).


3) dimensionless mathematical constants, most notably pi = 3.14159265358979... and e = 2.718281828459.... Although these are transcendental numbers, they can be constructed from integers in a series, and so for a universe expanding incrementally (see simulation time), these constants could be formed within the simulation, one term per time step, as sketched below.
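A minimal sketch (the standard series below are an illustration only, not necessarily the series the source code would use) of how such constants could be assembled from integers, one term per increment of a time variable t:

 from math import factorial

 def e_at(t):
     # partial sum of e = 1/0! + 1/1! + 1/2! + ... after t increments
     return sum(1 / factorial(n) for n in range(t + 1))

 def pi_at(t):
     # partial sum of the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ... after t increments
     return 4 * sum((-1) ** n / (2 * n + 1) for n in range(t + 1))

 print(e_at(20))       # ~2.718281828459045
 print(pi_at(10**6))   # ~3.14159... (this particular series converges slowly)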


We may now define a fundamental physical constant as a parameter specifically chosen by the Programmer and encoded directly into the simulation code, and so whilst it may be inferable, it is not derived from other constants (Richard Feynman on the fine structure constant). It should also be dimensionless, otherwise the simulation itself becomes dimensioned (if the simulation is running on a celestial computer it is merely data; it has no physical size or shape), and so the dimensioned constants (G, h, c, e ...) must all be derivable (derived from within the simulation) via the fundamental physical constants embedded in the source code (of which the fine structure constant alpha may be an example). Although pi and e are dimensionless, they can be derived internally (from integers), and as they have application in the mathematical realm, they can be referred to as mathematical constants.




Planck scale

The Planck scale refers to the magnitudes of space, time, energy and other units, below which (or beyond which) the predictions of the Standard Model, quantum field theory and general relativity are no longer reconcilable, and quantum effects of gravity are expected to dominate (quantum gravitational effects only appear at length scales near the Planck scale).

Although particles may not be cognizant of our 'laws of physics', they do know the 'laws of nature'. These laws of nature would run directly off the universe OS (operating system), and so below this OS, 'physics' as we know it must necessarily break down. At present the Planck scale is the lowest known level; consequently any attempt to detect evidence of an underlying simulation coding must consider (if not actually begin at) the Planck scale[14].

The dimensioned SI (MKSA) units are: meter (length), kilogram (mass), second (time), ampere (electric current). There are Planck units corresponding to these SI units, and so a simulation could use them as discrete building blocks: the Planck length (the smallest possible unit of length), the Planck mass (the unit of mass), the Planck time (the smallest possible unit of time), the Planck charge (the unit of charge). The speed of light then becomes c = 1 Planck length / 1 Planck time. These units would define the resolution, and so the information-carrying capacity, of the simulation universe.
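As a sketch (using the SI/CODATA values; G is known only to limited precision, and the derivation of the Planck units from ħ, G and c is standard physics rather than anything specific to this article), the Planck units can be written down and the speed of light recovered as 1 Planck length per 1 Planck time:

 import math

 c = 299792458            # speed of light, m/s (exact in SI)
 h = 6.62607015e-34       # Planck constant, J s (exact in SI)
 G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2 (measured)
 hbar = h / (2 * math.pi)

 l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
 t_p = math.sqrt(hbar * G / c**5)   # Planck time,   ~5.4e-44 s
 m_p = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg

 print(l_p / t_p)                   # recovers c: 1 Planck length per 1 Planck time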



Numbering systems

As well as our decimal system, computers apply binary and hexadecimal numbering systems. In particular, the decimal and hexadecimal systems are of terrestrial origin and may not be considered 'universal'. Furthermore, numbering systems measure only the frequency of an event and contain no information as to the event itself. The number 299 792 458 could refer to the speed of light (299 792 458 m/s) or could equally be referring to the number of apples in a container (299 792 458 apples). As such, numbers require a 'descriptive', whether m/s or apples. Numbers also do not include their history: is 299 792 458, for example, a derivation of other base numbers?

Present universe simulations use the laws of physics with the physical constants built in; however both these laws and the physical constants are known only to a limited precision, and so a simulation with 10^62 iterations (the present age of the universe in units of Planck time) will accumulate errors. Number-based computing may be sufficient for ancestor-simulation models where only the observed region needs to be calculated, but it has inherent limitations for deep-universe simulations where the entire universe is continuously updated. The actual computational requirements for a Planck-scale universe simulation based on a numbering system with the laws of physics embedded would be unknown, and consequently lead to a 'non-testable' hypothesis. This is a commonly applied reason for rejecting the deep-universe simulation.
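A small-scale illustration of the accumulation problem (ordinary floating-point rounding standing in for the limited precision of the embedded constants; the figures here are illustrative only):

 # repeatedly adding a value with no exact binary representation drifts away
 # from the exact answer; over 10^62 iterations such errors would compound
 total = 0.0
 for _ in range(10**7):
     total += 0.1             # 0.1 is not exactly representable in binary
 print(total)                 # not exactly 1000000.0 - rounding has accumulated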



Geometrical objects

A mathematical constant such as pi refers to a geometrical construct (the ratio of a circle's circumference to its diameter) and so is not constrained by any particular numbering system (in the decimal system π = 3.14159...), and so may be considered both universal and eternal. Likewise, by assigning geometrical objects instead of numbers to the Planck units, the problems with a numbering system can be resolved. These objects would however have to fulfill the following conditions:


1) embedded attribute - for example the object for length must embed the function of length such that a descriptive (km, mile ... ) is not required. Electron wavelength would then be measurable in terms of the length object, as such the length object must be embedded within the electron (the electron object). Although the mass object would incorporate the function mass, the time object the function time ..., it is not necessary that there be an individual physical mass or physical length or physical time ..., but only that in relation to the other units, the object must express that function (i.e.: the mass object has the function of mass when in the presence of the objects for space and time). The electron could then be a complex event (complex geometrical object) constructed by combining the objects for mass, length, time and charge into 1 event, and thus electron charge, wavelength, frequency and mass would then be different aspects of that 1 geometry (the electron event) and not independent parameters (independent of each other).


2) The objects for mass, length, time and charge must be able to combine with each other Lego-style to form more complex objects (events) such as electrons and apples whilst still retaining the underlying information (the mass of the apple derives from the mass objects embedded within that apple); a sketch of this unit bookkeeping follows this list. Not only must these objects be able to form complex events such as particles, but these events are themselves geometrical objects and so must likewise function according to their geometries. Electrons would orbit protons according to their respective electron and proton geometries, these orbits being the result of geometrical imperatives and not of any built-in laws of physics (the orbital path is a consequence of all the underlying geometries). However, as orbits follow regular and repeating patterns, they can be described (by us) using mathematical formulas. As the events grow in complexity (from atoms to molecules to planets), so too will the patterns (and likewise the formulas we use to describe them). Consequently the laws of physics would then become our mathematical descriptions of the underlying geometrically imposed patterns.


3) These objects would replace coded instructions (the instruction sets would be built into the objects) thereby instigating a geometrically autonomous universe. The electron 'knows' what to do by virtue of the information encoded within its geometry, no coded electron CALL FUNCTION is required. This would be equivalent to combining the hardware, software and CPU together such that the 'software' changes (adjusts) to the changing 'hardware' (DNA may be an analogy).
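A minimal sketch (purely illustrative; the class, the unit bookkeeping and the numerical values are assumptions, not the model's actual objects) of objects that carry their own dimensional attributes and combine Lego-style, cancelling only when the right combination is formed:

 from dataclasses import dataclass

 @dataclass
 class Obj:
     value: float
     units: dict                        # e.g. {'kg': 1} for a mass object

     def __mul__(self, other):
         u = dict(self.units)
         for k, n in other.units.items():
             u[k] = u.get(k, 0) + n
             if u[k] == 0:
                 del u[k]               # units cancel only in the right combination
         return Obj(self.value * other.value, u)

 L = Obj(1.616e-35, {'m': 1})           # a length object (Planck length, m)
 T = Obj(5.391e-44, {'s': 1})           # a time object (Planck time, s)
 T_inv = Obj(1 / 5.391e-44, {'s': -1})  # an inverse-time object

 v = L * T_inv                          # a derived object keeps its units: {'m': 1, 's': -1}
 print(v.value, v.units)                # ~2.998e8 m/s, i.e. the speed of light
 print((T * T_inv).units)               # {} : the time units have cancelled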


Note: a purely mathematical universe has no limits in size and can be infinitely large and infinitely small. A geometrical universe (one that uses objects) has limitations: it can be no smaller than the smallest object, for example, and has discrete parts (those objects). The philosophy of the TOE (theory of everything) therefore includes a debate between the mathematical universe and the geometrical universe; however this distinction between mathematical and geometrical would only be apparent at the Planck scale.



Evidence of a simulation

The laws of physics are our incomplete observations of the natural universe, and so evidence of a simulation may be found in ambiguities or anomalies within these laws. Furthermore, if complexity arises over time, then at unit time the 'handiwork' of the Programmer may be notable by a simplicity and elegance of the geometries employed ... for the Programmer by definition has God-level programming skills. Here is a notable example.


Mass, space and time from the number 1

The dimensions of mass, space and time are considered by science to be independent of each other; we cannot measure the distance from Tokyo to London using kilograms and amperes, or measure mass using space and time. Indeed, what characterizes a physical universe, as opposed to a simulated universe, is the notion that there is a fundamental structure underneath, that in some sense mass 'is', that time 'is' and space 'is' ... thus we cannot write the kg in terms of the m and the s. To do so would render our concepts of a physical universe meaningless. The 26th General Conference on Weights and Measures (the 2019 redefinition of the SI base units) assigned exact numerical values to 4 physical constants (h, c, e, kB) independently of each other (thereby confirming these as fundamental constants), and as they are measured in SI units (kg, m, s, A, K), these units must also be independent of each other (i.e. these are fundamental units; for example, if we could define the m using the A, then the speed of light could be derived from, and so would depend upon, the value of the elementary charge e, and so the value for c could not be assigned independently of e).

Physical constants (exact values assigned by the 2019 SI redefinition)
constant              symbol   value (SI units)
Speed of light        c        299 792 458 m s⁻¹
Planck constant       h        6.626 070 15 × 10⁻³⁴ J s
Elementary charge     e        1.602 176 634 × 10⁻¹⁹ C
Boltzmann constant    kB       1.380 649 × 10⁻²³ J K⁻¹


We are familiar with inverse properties: plus charge and minus charge, matter and anti-matter ... and we can observe how these may form and/or cancel each other. A simulation universe, however, is required to be dimensionless (in sum total), for the simulated universe does not 'exist' in any physical sense outside of the 'Computer' (it is simply data on a celestial hard disk).

Our universe does not appear to have inverse properties such as anti-mass (-kg), anti-time (-s) or anti-space (anti-length, -m); therefore the first problem the Programmer must solve is how to create the physical scaffolding (of mass, space and time). For example, the Programmer can start by selecting 2 dimensioned quantities; here r and v are used [15].

The quantities r and v are chosen so that no unit (kg, m, s, A) can cancel another unit (i.e. the kg cannot cancel the m or the s ...), and so we have 4 independent units (we still cannot define the kg using the m or the s ...); however, if 3 (or more) units are combined together in a specific ratio, they can cancel (in a certain ratio our r and v become inverse properties and so cancel each other; units = 1).

This ratio, denoted fX, would be a dimensionless mathematical structure (units = 1), even though the dimensioned structures for mass, time and length are embedded within it. Thus we may create as much mass, time and length as we wish, the only proviso being that they are created in fX ratios, so that regardless of how massive, old and large our universe becomes, it is still in sum total dimensionless.

Defining the dimensioned quantities r, v in SI unit terms then gives corresponding expressions for mass, length and time [15].


And so, although fX is a dimensionless mathematical structure, we can embed within it the (mass, length, time ...) structures along with their dimensional attributes (kg, m, s, A ...). In the mathematical electron model (discussed below), the electron itself is an example of an fX structure; it (f_electron) is a dimensionless geometrical object that embeds the physical electron parameters of wavelength, frequency and charge (note: A·m, ampere-meter, are the units for a magnetic monopole).

We may note that at the macro-level (of planets and stars) these fX ratios are not found, and so this level is the domain of the observed physical universe; at the quantum level, however, fX ratios do appear (f_electron as an example), the mathematical and physical domains then blurring. This would also explain why physics can measure precisely the parameters of the electron (wavelength, mass ...), but has never found the electron itself.





Programming

“God vs. science debates tend to be restricted to the premise that a God does not rely on science and that science does not need a God. As science and God are thus seen as mutually exclusive there are few, if any, serious attempts to construct mathematical models of a universe whose principle axiom does require a God. However, if there is an Intelligence responsible for the 14 billion year old universe of modern physics, being the universe of Einstein and Dirac, and beginning with the big bang as the act of 'creation', then we must ask how it might be done? What construction technique could have been used to set the laws of physics in motion?” [16]



Simulation Time

The (dimensionless) simulation clock-rate would be defined as the minimum 'time variable' (age) increment to the simulation. Using a simple loop as an analogy: at age = 1 the simulation begins (the big bang), certain processes occur, and when these are completed age increments (age = 2, then 3, then 4 ...) until age reaches the_end and the simulation stops.

 'begin simulation
 FOR age = 1 TO the_end                       'big bang = 1                 
         conduct certain processes ........ 
 NEXT age 
 'end simulation


Quantum spacetime and quantum gravity models refer to Planck time as the smallest discrete unit of time, and so the incrementing variable age could be used to generate units of Planck time (and other Planck units, the physical scaffolding of the universe). In a geometrical model, geometrical objects could be assigned to these Planck units, for example:

 Initialize_physical_constants;
 FOR age = 1 TO the_end                                                    
         generate 1 unit of (Planck) time;       '1 time 'object' T
         generate 1 unit of (Planck) mass;       '1 mass 'object' M
         generate 1 unit of (Planck) length;     '1 length 'object' L
         ........ 
 NEXT age 


The variable age is the simulation clock-rate; it is simply a counter (1, 2, 3 ...) and so is a dimensionless number. The object T is the geometrical Planck-time object; it is dimensioned and is measured by us in seconds. If age is the origin of Planck time (1 increment to age generates 1 T object), then age ≈ 10^62, based on the present age of the universe, which, at 14 billion years, equates to about 10^62 units of Planck time.

For each age, certain operations are performed; only after they are finished does age increment (there is no time interval between increments). As noted, age, being dimensionless, is not the same as dimensioned Planck time, which is the geometrical object T, and this T, being dimensioned, can only appear within the simulation. The analogy would be the frames of a movie: each frame contains dimensioned information, but there is no time interval between frames.

 FOR age = 1 TO the_end (of the movie)                                                   
         display frame{age}
 NEXT age 

Although operations (between increments to age) may be extensive, self-aware structures within the simulation would have no means to determine this; they could only perceive themselves as being in real time (for them the smallest unit of time is 1 T, just as the smallest unit of time in a movie is 1 frame). Their (those self-aware structures') dimension of time would then be a measure of relative motion (a change of state), and so although ultimately deriving from the variable age, their time would not be the same as age. If there was no motion, if all particles and photons were still (no change of state), then their time dimension could not update (if every frame in a movie was the same, then actors within that movie could not register a change in time); age, however, would continue to increment.

Thus we have 3 time structures: 1) the dimensionless simulation clock-rate variable age, 2) the dimensioned time unit (object T), and 3) time as change of state (the observer's time). Observer time requires a memory of past events against which a change of state can be perceived.
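A toy sketch (illustrative only; the 'frames' list is an arbitrary assumption) contrasting the dimensionless counter age with observer time as perceived change of state:

 # the counter 'age' always increments; an observer inside the simulation only
 # registers time when the state changes from one frame to the next
 frames = ['A', 'A', 'A', 'B', 'B', 'C']   # states of the simulated world

 age = 0
 observer_time = 0
 previous = None
 for state in frames:
     age += 1                              # simulation clock-rate
     if previous is not None and state != previous:
         observer_time += 1                # time as a change of state
     previous = state

 print(age, observer_time)                 # 6 increments of age, 2 perceived changes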


The forward increment to age would constitute the arrow of time. Reversing this would reverse the arrow of time; the universe would likewise shrink in size and mass accordingly (just as a white hole is the (time) reversal of a black hole).

 FOR age = the_end TO 1 STEP -1                                 
         delete 1 unit of Planck time;
         delete 1 unit of Planck mass;
         delete 1 unit of Planck length;
         ........ 
 NEXT age 


Adding mass, length and time objects per increment to age would force the universe expansion (in size and mass), and as such an anti-gravitational dark energy would not be required; however these objects are dimensioned and so are generated within the simulation. This means that they must somehow combine in a specific ratio whereby they (the units for mass, length, time, charge; kg, m, s, A) in sum total cancel each other, leaving the sum universe (the simulation itself) residing on that celestial hard disk.

We may introduce a theoretical dimensionless (fX) geometrical object denoted as fPlanck within which are embedded the dimensioned objects MLTA (mass, length, time, charge), and from which they may be extracted.

 FOR age = 1 TO the_end                                                    
         add 1 fPlanck                     'dimensionless geometrical 'object'
         {
               extract 1 unit of (Planck) time;       '1 time 'object' T
               extract 1 unit of (Planck) mass;       '1 mass 'object' M
               extract 1 unit of (Planck) length;     '1 length 'object' L
         }
         ........ 
 NEXT age 

This then means that the simulation, in order to create time T, must also create mass M and space L (become larger and more massive). Thus no matter how small or large the physical universe is (when seen internally), in sum total (when seen externally), there is no universe - it is merely data without physical form.


Universe time-line

As the universe expands outwards (through the constant addition of units of mass and length via fPlanck), and if this expansion pulls particles with it (if it is the origin of motion), then now (the present) would reside on the surface of the universe (which is constantly expanding at the speed of light; c = 1 Planck length / 1 Planck time), and so the 'past' could be retained, for the past cannot be over-written by the present in an expanding universe (if now is always on the surface). As this expansion occurs at the Planck scale, information even below quantum states, down to the Planck scale, could be retained; the analogy would be the storing of every keystroke, a Planck-scale version of the Akashic records ... for if our deeds (the past) are both stored and cannot be over-written (by the present), then we have a candidate for the 'karmic heavens' (Matthew 6:19 "But lay up for yourselves treasures in 'heaven', where neither moth nor rust doth corrupt, and where thieves do not break through nor steal").

This also forms a universe time-line against which previous information can be compared with new information (a 'memory' of events), without which we could not link cause with effect.


Singularity

In a simulation, the data (software) requires a storage device that is ultimately hardware (RAM, HD ...). In a data world of 1's and 0's, such as a computer game, characters within that game may analyze other parts of their 1's-and-0's game, but they have no means to analyze the hard disk upon which they (and their game) are stored, for the hard disk is an electro-mechanical device; it is not part of their 1's-and-0's world, it is a part of the 'real world', the world of their Programmer. Furthermore, the rules programmed into their game would constitute for them the laws of physics (the laws by which their game operates), but these may or may not resemble the laws that operate in the 'real world' (the world of their Programmer). Thus any region where the laws of physics (the laws of their game world) break down would be significant. A singularity inside a black hole is such a region [17].

For the black-hole electron, its black-hole center would then be analogous to a storage address on a hard disk, the interface between the simulation world and the real world. A massive (galactic) black hole could be an entire data sector.

The surface of the black hole would then belong to the simulation world, the size of the black-hole surface reflecting the stored information; the interior of the black hole, however, would be the interface between the data world and the 'hard disk' of the real world, and so would not exist in any 'physical' terms. It is external to the simulation. As an analogy, we may discuss the surface area of a black hole but not its volume (interior).



Laws of Physics

The scientific method is built upon testable hypotheses and reproducible results. Water always boils, under defined conditions, at 100°C. In a geometrical universe particles will behave according to geometrical imperatives, the geometry of the electron and proton ensuring that electrons will orbit nuclei in repeating and predictable patterns. The laws of physics would then be a set of mathematical formulas that describe these repeating patterns; the more complex the orbits, the more complex the formulas required to describe them, and so forth. However, if there is a source code from which these geometrical conditions were programmed, then there may also be non-repeating events, back-doors built into the code (a common practice among terrestrial programmers); these by definition would lie outside the laws of physics and so be labelled as miracles, yet they would be no less valid.



Determinism

Figure: an animation of the figure-8 solution to the three-body problem over a single period T ≃ 6.3259.[18]

Particles form more complex structures such as atoms and molecules via a system of orbitals: nuclear, atomic and gravitational. The three-body problem is the problem of taking the initial positions and velocities (or momenta) of three or more point masses and solving for their subsequent motion according to Newton's laws of motion and Newton's law of universal gravitation.[19] Simply put, this means that although a simulation using gravitational orbitals of similar mass may have a pre-determined outcome, it seems that for gods and men alike the only way to know what that outcome will be is to run the simulation itself.
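To illustrate the point, a short sketch (a simple leapfrog integrator written for this discussion, not taken from the article) that 'runs the simulation' for the figure-8 orbit using the initial conditions quoted in [18], with G = 1 and unit masses:

 import numpy as np

 G = 1.0
 m = np.array([1.0, 1.0, 1.0])                       # three equal point masses

 # initial positions and velocities from Chenciner & Montgomery (2000), see [18]
 r = np.array([[-0.97000436,  0.24308753],
               [ 0.0,         0.0       ],
               [ 0.97000436, -0.24308753]])
 v = np.array([[ 0.4662036850,  0.4323657300],
               [-0.93240737,   -0.86473146  ],
               [ 0.4662036850,  0.4323657300]])

 def acceleration(r):
     # Newtonian gravitational acceleration on each body
     a = np.zeros_like(r)
     for i in range(3):
         for j in range(3):
             if i != j:
                 d = r[j] - r[i]
                 a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
     return a

 # leapfrog (kick-drift-kick) integration over one period T ~ 6.3259
 dt, T = 1.0e-4, 6.3259
 for _ in range(int(T / dt)):
     v += 0.5 * dt * acceleration(r)
     r += dt * v
     v += 0.5 * dt * acceleration(r)

 print(r)   # the bodies have (approximately) returned to their starting positions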





Geometry coded universe

Modelling a Planck scale simulation universe using geometrical forms (links)





References

  1. Are We Living in a Computer Simulation? https://www.scientificamerican.com/article/are-we-living-in-a-computer-simulation/
  2. https://www.youtube.com/watch?v=yqbS5qJU8PA, David Chalmers, Serious Science
  3. The Matrix as Metaphysics, David Chalmers http://consc.net/papers/matrix.pdf
  4. Are We Living in a Computer Simulation? https://www.scientificamerican.com/article/are-we-living-in-a-computer-simulation/
  5. https://codingthecosmos.com/podcast/ AI generated podcasts
  6. Wigner, E. P. (1960). "The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959". Communications on Pure and Applied Mathematics 13: 1–14. doi:10.1002/cpa.3160130102.
  7. http://plato.stanford.edu/entries/platonism-mathematics/
  8. Tegmark, Max (November 1998). "Is "the Theory of Everything" Merely the Ultimate Ensemble Theory?". Annals of Physics 270 (1): 1–51. doi:10.1006/aphy.1998.5855.
  9. M. Tegmark 2014, "Our Mathematical Universe", Knopf
  10. Tegmark, Max (February 2008). "The Mathematical Universe". Foundations of Physics 38 (2): 101–150. doi:10.1007/s10701-007-9186-9.
  11. Tegmark (1998), p. 1.
  12. Tegmark (1998), p. 1.
  13. Michael J. Duff et al, Journal of High Energy Physics, Volume 2002, JHEP03(2002)
  14. https://www.youtube.com/watch?v=AMQRXGYCDrY Planck scale, Brian Greene
  15. Macleod, Malcolm J. (22 March 2018). "Programming Planck units from a mathematical electron; a Simulation Hypothesis". Eur. Phys. J. Plus 113: 278. doi:10.1140/epjp/i2018-12094-x.
  16. The Programmer God, are we in a Simulation
  17. https://www.youtube.com/watch?v=W5j7umtZYB4 a black hole singularity as the interface between worlds
  18. Here the gravitational constant G has been set to 1, and the initial conditions are r1(0) = −r3(0) = (−0.97000436, 0.24308753); r2(0) = (0,0); v1(0) = v3(0) = (0.4662036850, 0.4323657300); v2(0) = (−0.93240737, −0.86473146). The values are obtained from Chenciner & Montgomery (2000).
  19. Barrow-Green, June (2008), "The Three-Body Problem", in Gowers, Timothy; Barrow-Green, June; Leader, Imre (eds.), The Princeton Companion to Mathematics, Princeton University Press, pp. 726–728