Holographic principle (Universe)

Pigeon
Posts: 18059
Joined: Thu Mar 31, 2011 3:00 pm

Holographic principle (Universe)

Post by Pigeon » Thu Dec 13, 2012 4:36 pm

The holographic principle is a property of quantum gravity and string theories which states that the description of a volume of space can be thought of as encoded on a boundary to the region—preferably a light-like boundary like a gravitational horizon.

First proposed by Gerard 't Hooft, it was given a precise string-theory interpretation by Leonard Susskind who combined his ideas with previous ones of 't Hooft and Charles Thorn. As pointed out by Raphael Bousso, Thorn observed in 1978 that string theory admits a lower dimensional description in which gravity emerges from it in what would now be called a holographic way.

In a larger and more speculative sense, the theory suggests that the entire universe can be seen as a two-dimensional information structure "painted" on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and at low energies.

Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a finite area and grows with time.

The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected.
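
For a rough sense of the numbers, here is a minimal Python sketch of that area scaling, using the standard Bekenstein-Hawking formula S = k·A·c^3 / (4Għ); the radii are just illustrative values.

[code]
import math

# physical constants (SI, rounded)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K

def max_entropy_bits(radius_m):
    """Bekenstein-Hawking bound for a spherical boundary of the given radius,
    converted to bits. Note that it scales with the area (radius squared)."""
    area = 4 * math.pi * radius_m ** 2
    entropy = k_B * c ** 3 * area / (4 * G * hbar)   # entropy in J/K
    return entropy / (k_B * math.log(2))             # dimensionless, in bits

# doubling the radius quadruples the bound (area scaling, not volume scaling)
print(max_entropy_bits(1.0), max_entropy_bits(2.0))
[/code]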

In the case of a black hole, the insight was that the informational content of all the objects which have fallen into the hole can be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory.


Pigeon
Posts: 18059
Joined: Thu Mar 31, 2011 3:00 pm

Re: Holographic principle (Universe)

Post by Pigeon » Thu Dec 13, 2012 4:48 pm

A team of researchers at the US Department of Energy's Fermilab is trying to measure the fabric of spacetime and show that there is a finite unit that makes up the universe. To do so, they have built the Holometer, a holographic interferometer billed as the world's most accurate clock.

Is reality a 3D hologram of a 2D universe? This is the question the researchers are asking more than a hundred years after physicist Max Planck introduced natural units of length and time, now called the Planck length and Planck time. Stephen Hawking built on this concept to suggest that there is a discrete fidelity or resolution to the universe – sort of like pixels in a picture.
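
As a quick illustration of the scales involved, the Planck length and Planck time follow directly from the fundamental constants (a minimal Python sketch, values rounded):

[code]
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

planck_length = (hbar * G / c ** 3) ** 0.5   # ~1.6e-35 m
planck_time = planck_length / c              # ~5.4e-44 s

print(planck_length, planck_time)
[/code]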

Further credence was given to the idea when German scientists working on the GEO600 project noticed distortions in their observations while studying the gravitational waves created by black holes. The distortion was suggested to arise because the experiment was approaching the lower limit of the universe's resolution; if so, they might have been the first to see the fabric of reality. So why would the universe be two-dimensional? Because at these smallest of distances (Planck-scale units), time and the third dimension of space become intertwined, leaving only two spatial dimensions, a 2D universe.

“If there is a minimum interval of time, or a maximum frequency in nature, there is a corresponding limit on the fidelity of space and time. Everyone is familiar these days with the blurry and pixelated images, or noisy sound transmission, associated with poor internet bandwidth. The holometer seeks to detect the equivalent blurriness or noise in reality itself, associated with the ultimate frequency limit imposed by nature,” the Fermilab team said.

“More recently, theoretical studies of black holes, and later in string theory and other forms of unification, have suggested that physics on the Planck scale is holographic. It is conjectured that space is two dimensional, and the third dimension is inextricably linked with time.”

Link


Pigeon
Posts: 18059
Joined: Thu Mar 31, 2011 3:00 pm

Re: Holographic principle (Universe)

Post by Pigeon » Thu Dec 13, 2012 8:23 pm

The Planck constant (denoted h, also called Planck's constant) is a physical constant that is the quantum of action in quantum mechanics. The Planck constant was first described as the proportionality constant between the energy (E) of a photon and the frequency (ν) of its associated electromagnetic wave. This relation between the energy and frequency is called the Planck relation:

E = hν

Since the frequency ν, wavelength λ, and speed of light c are related by λν = c, the Planck relation can also be expressed as

E = hc / λ
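
As a quick numerical check of the relation (a minimal Python sketch; the 532 nm wavelength is just an example value):

[code]
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

wavelength = 532e-9          # green laser light, metres (example value)
frequency = c / wavelength   # from lambda * nu = c
energy = h * frequency       # Planck relation E = h * nu

print(frequency)   # ~5.6e14 Hz
print(energy)      # ~3.7e-19 J, about 2.3 eV
[/code]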

The Planck constant is named after Max Planck, one of the founders of quantum theory, who discovered it in 1900. Classical statistical mechanics requires the existence of h (but does not define its value).

Planck discovered that physical action could not take on any indiscriminate value. Instead, the action must be some multiple of a very small quantity (later to be named the "quantum of action" and now called Planck's constant).

This inherent granularity is counterintuitive in the everyday world, where it is possible to "make things a little bit hotter" or "move things a little bit faster". This is because the quanta of action are very, very small in comparison to everyday macroscopic human experience.

Thus, on the macroscopic scale, quantum mechanics and classical physics converge—the classical limit. Nevertheless, it is impossible, as Planck found out, to explain some phenomena without accepting the fact that action is quantized.

In many cases, such as for monochromatic light or for atoms, this quantum of action also implies that only certain energy levels are allowed, and values in-between are forbidden. In 1923, Louis de Broglie generalized the Planck relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but any particle. This was confirmed by experiments soon afterwards.
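
A minimal sketch of the de Broglie relation λ = h/p, comparing an electron with an everyday object (the speeds are illustrative):

[code]
h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg

# electron moving at 1e6 m/s: wavelength comparable to atomic spacings
print(h / (m_e * 1e6))   # ~7e-10 m

# a 0.15 kg ball thrown at 30 m/s: wavelength far too small to ever observe
print(h / (0.15 * 30))   # ~1.5e-34 m
[/code]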

The Planck constant is related to the quantization of light and matter. Therefore, the Planck constant can be seen as an atomic-scale constant. In a unit system adapted to atomic scales, the electronvolt is the appropriate unit of energy and the petahertz the appropriate unit of frequency. In such a system, the Planck constant is indeed described by a number of order 1.
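
For example, converting the SI value into electronvolts per petahertz gives a number of order 1 (a minimal sketch):

[code]
h_si = 6.626e-34   # Planck constant, J s
eV = 1.602e-19     # joules per electronvolt
PHz = 1e15         # hertz per petahertz

# Planck constant expressed in eV per PHz
print(h_si / eV * PHz)   # ~4.14
[/code]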

The numerical value of the Planck constant depends entirely on the system of units used to measure it. When it is expressed in SI units, it is one of the smallest constants used in physics. This reflects the fact that, on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, Planck's constant, the quantum of action, is very small.

Equivalently, the smallness of Planck's constant reflects the fact that everyday objects and systems are made of a large number of particles.

The Planck frequency, the inverse of the Planck time, is of order 10^43 to 10^44 cycles per second.

Link


Pigeon
Posts: 18059
Joined: Thu Mar 31, 2011 3:00 pm

Re: Holographic principle (Universe)

Post by Pigeon » Thu Dec 13, 2012 8:41 pm

Photoelectric effect

The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, when his predictions had been confirmed by the experimental work of Robert Andrews Millikan.

To put it another way, in 1921 at least, Einstein's theories on the photoelectric effect were considered more important than his theory of relativity (a name coined, as it happens, by Max Planck).

Prior to Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterise different types of radiation.

The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time (and hence consumes more electricity) than the ordinary bulb, even though the colour of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their own intensity.

However, the energy accounting of the photoelectric effect didn't seem to agree with the wave description of light.

The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a kinetic energy for the photoelectrons of zero or less), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect)

Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.

Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of Planck's relation:

E = hν

Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light (ν) and the kinetic energy of photoelectrons (E) was shown to be equal to the Planck constant (h).
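
A minimal Python sketch of that energy balance, using Einstein's photoelectric equation E_k = hν − φ (the sodium work function of about 2.28 eV is an illustrative value, not something taken from the post):

[code]
h = 6.626e-34    # Planck constant, J s
eV = 1.602e-19   # joules per electronvolt

def photoelectron_energy(frequency_hz, work_function_ev):
    """Maximum kinetic energy of a photoelectron in eV, or None if the
    photon energy is below the threshold set by the work function."""
    e_k = h * frequency_hz / eV - work_function_ev
    return e_k if e_k > 0 else None

# sodium, work function ~2.28 eV:
print(photoelectron_energy(4.5e14, 2.28))   # red light: None, no emission
print(photoelectron_energy(1.0e15, 2.28))   # ultraviolet: ~1.86 eV
# raising the intensity changes neither result, only the number of electrons
[/code]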


Pigeon
Posts: 18059
Joined: Thu Mar 31, 2011 3:00 pm

Re: Holographic principle (Universe)

Post by Pigeon » Thu Dec 13, 2012 11:18 pm

"An atom is essentially a point with a cloud of electrons," Czysz noted. "If you took a human and removed all the space between the electrons and nucleus of each atom in his or her body and then condensed it down, the human would be smaller than the point of a pin."

In other words, each of us -- like a hologram -- is "basically a diffraction pattern that appears to be solid," he suggested. Solid as we may feel, we're mostly just empty space.
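
A back-of-the-envelope check of that claim (a minimal sketch; the radii and body volume are typical round numbers, and electron volume and packing details are ignored):

[code]
atom_radius = 1e-10      # typical atomic radius, m
nucleus_radius = 1e-15   # typical nuclear radius, m
human_volume = 0.07      # m^3, roughly a 70 kg person at water density

# fraction of an atom's volume actually occupied by the nucleus
filled_fraction = (nucleus_radius / atom_radius) ** 3   # ~1e-15

compressed_volume = human_volume * filled_fraction      # ~7e-17 m^3
side = compressed_volume ** (1 / 3)                     # ~4e-6 m, a few microns

print(side)
[/code]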

In any case, the idea of a holographic universe can be approached and thought about in multiple ways. It is, potentially, holographic in a loose sense: the solid objects around us are mostly empty space, and their mass is essentially condensed energy.

The world only appears solid, in other words.


Pigeon
Posts: 18059
Joined: Thu Mar 31, 2011 3:00 pm

Re: Holographic principle (Universe)

Post by Pigeon » Thu Dec 13, 2012 11:38 pm

Membrane (M-Theory)

The central idea is that the visible, four-dimensional universe is restricted to a brane (short for membrane, a lower-dimensional surface) inside a higher-dimensional space called the "bulk". If the additional dimensions are compact, the observed universe contains the extra dimensions and no reference to the bulk is needed. In the bulk model, at least some of the extra dimensions are extensive (possibly infinite), and other branes may be moving through this bulk. Interactions with the bulk, and possibly with other branes, can influence our brane and thus introduce effects not seen in more standard cosmological models.

Why gravity is weak

The model can explain the weakness of gravity relative to the other fundamental forces of nature, thus solving the so-called hierarchy problem.

In the brane picture, the other three forces (electromagnetism and the weak and strong nuclear forces) are localized on the brane, but gravity has no such constraint and so much of its attractive power "leaks" into the bulk. As a consequence, the force of gravity should appear significantly stronger on small (subatomic or at least sub-millimetre) scales, where less gravitational force has "leaked". Various experiments are currently under way to test this.
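
One common way to make this concrete is the large-extra-dimensions (ADD-type) picture, in which the inverse-square law steepens below the size of the extra dimensions; the post doesn't specify a model, so this toy scaling is only an illustration (constants dropped, sizes hypothetical):

[code]
def gravity_force_scaling(r, extra_dim_size, n_extra):
    """Toy radial scaling of the gravitational force when n_extra compact
    extra dimensions of size extra_dim_size are present (ADD-type picture).
    Overall constants are dropped; only the r-dependence is meaningful."""
    if r > extra_dim_size:
        return 1.0 / r ** 2   # ordinary inverse-square law at large distances
    # below the compactification scale, the "leaked" flux spreads into the
    # extra dimensions: stronger than the naive 1/r^2 extrapolation
    return (1.0 / extra_dim_size ** 2) * (extra_dim_size / r) ** (2 + n_extra)

# compare a millimetre and a micron separation for two 0.1 mm extra dimensions
print(gravity_force_scaling(1e-3, 1e-4, 2))
print(gravity_force_scaling(1e-6, 1e-4, 2))
[/code]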

Might this explain some UFO events?

Royal
Posts: 10565
Joined: Mon Apr 11, 2011 5:55 pm

Re: Holographic principle (Universe)

Post by Royal » Tue Dec 18, 2012 10:03 pm

In 2003, Swedish philosopher Nick Bostrom proposed the idea that the world we live in, and indeed we ourselves, may be a computer simulation run by people in the far, far future. Now, a team of physicists from the University of Washington report that they may have a way to test that idea.
Bostrom's 'Simulation Hypothesis' proposes that at least one of the following three possibilities is true:

1) The human race will go extinct before it ever reaches a "post-human" stage of evolution that possesses a technological level capable of running complex computer simulations of our universe (what he calls "ancestor simulations"),

2) The human race evolves to a "post-human" level and is capable of running ancestor simulations, but does not do so — possibly due to the energy requirements needed, or perhaps due to ethical concerns about simulating self-aware beings,

3) We (in the here and now) are almost certainly living in a computer simulation.

Furthermore (and this is where it gets a bit 'trippy'), unless we are currently living in a computer simulation, it is highly unlikely that the human race will evolve to a post-human level that runs ancestor simulations. By "post-human", Bostrom meant that we would evolve to a point where we would be unrecognizable from what we currently define as 'human'. Either we evolve, biologically, to be completely different, or we transcend physical form into creatures of pure intellect, or we achieve the 'singularity' and merge ourselves with machines. Beings at this post-human level would, presumably, still be curious about the universe around them, and quite possibly about where they came from. With the fantastic levels of technology available to them, they could harness vast energies — such as tapping directly into the fusion furnace of a star — to power incredibly advanced computers which would run simulations of the universe to show them how they came to evolve from single-celled organisms all the way to their current state.

Bostrom's argument continues that if these post-humans ran ancestor simulations, they would be capable of running a very high number of them, and unless they constrained themselves (as in possibility 2), they indeed would run a very high number of them. Thus, because there would be thousands — possibly millions — of these simulated universes and only one real universe, just by looking at the probabilities, it is far more likely that we are currently living in a simulated universe, rather than the real universe.
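
The probability step in that argument is just counting: one base universe against many indistinguishable simulations. A one-function sketch (the simulation count is of course made up):

[code]
def chance_of_base_reality(num_simulations):
    """With one base universe and num_simulations indistinguishable ancestor
    simulations, a uniform guess over all of them gives this probability."""
    return 1.0 / (num_simulations + 1)

print(chance_of_base_reality(1_000_000))   # ~1e-6
[/code]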

Professor Martin Savage, from the physics department at the University of Washington, is already running some of these kinds of simulations. Savage works in a field of physics called lattice quantum chromodynamics (lattice QCD), which deals with the motions and behaviours of the tiniest of sub-atomic particles: quarks and gluons.

Lattice QCD is basically using very powerful supercomputers to simulate the universe from 'first principles' - starting with just the basic laws of physics and building up from there. With current computers, the biggest thing they can simulate is about the size of an atomic nucleus, but that will expand as computing power increases in the future.

"If you make the simulations big enough, something like our universe should emerge," Savage said, according to Science Daily. Savage is well-aware of the resource restrictions of his own simulations, which use certain assumptions and physical constraints in order to conserve computer power, such as limiting the size of the simulation and laying down the framework of the simulation on a cube-shaped grid or lattice. He believes that these same kinds of assumptions and constraints would show up in future simulations, including the ones run by Bostrom's post-humans. Proving whether we are living in a computer simulation or not would just be a matter of finding evidence of those constraints.

The constraint that would stand out the most, according to the research paper, is the spacing of the lattice. The more powerful your computer, the smaller the spacing of the lattice could be, essentially giving you a better and better 'resolution' for your simulated universe.
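
The resource argument behind that 'resolution' is easy to see: for a fixed physical box, the number of lattice sites (and hence memory and compute cost) blows up as the spacing shrinks. A minimal sketch (box size and spacings are illustrative):

[code]
def lattice_sites(box_length_fm, spacing_fm, dims=4):
    """Number of sites in a hypercubic space-time lattice with the given
    physical side length and spacing. Halving the spacing multiplies the
    site count by 2**dims."""
    per_side = int(box_length_fm / spacing_fm)
    return per_side ** dims

# a 10 fm box at 0.1 fm spacing versus 0.05 fm spacing
print(lattice_sites(10.0, 0.1))    # 100^4 = 1e8 sites
print(lattice_sites(10.0, 0.05))   # 200^4 = 1.6e9 sites
[/code]
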
http://ca.news.yahoo.com/blogs/geekquin ... 33266.html

Pigeon
Posts: 18059
Joined: Thu Mar 31, 2011 3:00 pm

Re: Holographic principle (Universe)

Post by Pigeon » Wed Dec 26, 2012 4:20 am

The constraint that would stand out the most, according to the research paper, is the spacing of the lattice. The more powerful your computer, the smaller the spacing of the lattice could be, essentially giving you a better and better 'resolution' for your simulated universe.

Stephen Hawking built on this concept to suggest that there is a discrete fidelity or resolution to the universe – sort of like pixels in a picture.

A team of researchers at the US Department of Energy's Fermilab is trying to measure the fabric of spacetime and show that there is a finite unit that makes up the universe. To do so, they have built the Holometer, a holographic interferometer billed as the world's most accurate clock.

But does that mean it is a simulation? Might we not be able to actually determine this?

Royal
Posts: 10565
Joined: Mon Apr 11, 2011 5:55 pm

Re: Holographic principle (Universe)

Post by Royal » Thu Sep 03, 2015 4:50 am

Deepak's on board:

Royal
Posts: 10565
Joined: Mon Apr 11, 2011 5:55 pm

Re: Holographic principle (Universe)

Post by Royal » Sat Jul 17, 2021 7:41 pm

AI Designs Quantum Physics Experiments beyond What Any Human Has Conceived

Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.

“The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn, Anton Zeilinger of the University of Vienna and their colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.
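
The article doesn't describe MELVIN's internals; purely as an illustration of the 'mix and match the building blocks, keep what scores well' idea, here is a toy random-search loop (the toolbox, scoring function and setup length are all invented):

[code]
import random

TOOLBOX = ["beam_splitter", "mirror", "hologram", "dove_prism", "detector"]

def score(setup):
    """Stand-in for a physics simulation that would rate how interesting the
    entangled state produced by a candidate setup is (hypothetical)."""
    return random.random()

def random_search(n_trials=1000, max_elements=6):
    """Randomly assemble optical setups from the toolbox and keep the best."""
    best_setup, best_score = None, float("-inf")
    for _ in range(n_trials):
        setup = [random.choice(TOOLBOX) for _ in range(max_elements)]
        s = score(setup)
        if s > best_score:
            best_setup, best_score = setup, s
    return best_setup, best_score

print(random_search())
[/code]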

https://www.scientificamerican.com/arti ... conceived/

