We've Moved! Visit our NEW FORUM to join the latest discussions. This is an archive of our previous conversations...

You can find the login page for the old forum here.
CHATPRIVACYDONATELOGINREGISTER
DMT-Nexus
FAQWIKIHEALTH & SAFETYARTATTITUDEACTIVE TOPICS
Is Particle Physics Dead? Options
 
#1 Posted : 6/21/2018 12:47:14 AM
DMT-Nexus member

ModeratorSenior Member

Posts: 4612
Joined: 17-Jan-2009
Last visit: 07-Mar-2024
Going nowhere fast

Quote:
After the success of the Standard Model, experiments have stopped answering to grand theories. Is particle physics in crisis?


Quote:
In recent years, physicists have been watching the data coming in from the Large Hadron Collider (LHC) with a growing sense of unease. We’ve spent decades devising elaborate accounts for the behaviour of the quantum zoo of subatomic particles, the most basic building blocks of the known universe. The Standard Model is the high-water mark of our achievements to date, with some of its theoretical predictions verified to within a one-in-ten-billion chance of error – a simply astounding degree of accuracy. But it leaves many questions unanswered. For one, where does gravity come from? Why do matter particles always possess three, ever-heavier copies, with peculiar patterns in their masses? What is dark matter, and why does the universe contain more matter than antimatter?


Quote:
Behind the question of mass, an even bigger and uglier problem was lurking in the background of the Standard Model: why is the Higgs boson so light? In experiments it weighed in at 125 times the mass of a proton. But calculations using the theory implied that it should be much bigger – roughly ten million billion times bigger, in fact.

This super-massive Higgs boson is meant to be the result of quantum fluctuations: an ultra-heavy particle-antiparticle pair, produced for a fleeting instant and then subsequently annihilated. Quantum fluctuations of ultra-heavy particle pairs should have a profound effect on the Higgs boson, whose mass is very sensitive to them. The other particles in the Standard Model are shielded from such quantum effects by certain mathematical symmetries – that is, things don’t change under transformation, like a square turned through 90 degrees – but the Higgs boson is the odd one out, and feels the influence very keenly.

Except that it doesn’t, because the mass of the Higgs appears to be so small. One logical option is that nature has chosen the initial value of the Higgs boson mass to precisely offset these quantum fluctuations, to an accuracy of one in 1016. However, that possibility seems remote at best, because the initial value and the quantum fluctuation have nothing to do with each other. It would be akin to dropping a sharp pencil onto a table and having it land exactly upright, balanced on its point. In physics terms, the configuration of the pencil is unnatural or fine-tuned. Just as the movement of air or tiny vibrations should make the pencil fall over, the mass of the Higgs shouldn’t be so perfectly calibrated that it has the ability to cancel out quantum fluctuations.

However, instead of an uncanny correspondence, maybe the naturalness problem with the Higgs boson could be explained away by a new, more foundational theory: supersymmetry. To grasp supersymmetry, we need to look a bit more closely at particles. Particles behave a bit like tiny spinning tops, although the amount of their spin is restricted. For example, all electrons in the universe have the same amount of spin; all photons have double this amount, and all Higgs bosons have no spin at all. The fundamental unit of spin is the spin of the electron. Other particles may only have spins equal to some whole number multiplied by the electron’s spin.


Quote:
A major consequence of supersymmetry is that every particle we know about should have a copy (a ‘superpartner’) with exactly the same properties – except for two things. One, its spin should differ by one unit. And two, the superpartner should be heavier. The mass of the superpartner is not fixed, but the heavier one makes them, the less exact the cancellation between the particle and its superpartner, and the more you have to rely on the mass of the particle itself being fine-tuned. One can make superpartners have a mass of around 1,000 times that of a proton, and they still function reasonably well. But increase the mass by a factor of 10 and the theory goes back to looking quite unnatural.

By smashing protons together, the LHC should be able to produce these superpartners, provided they weigh around 1,000 times the mass of a proton. To do this, you change the energy of the proton beams into the mass of the predicted superpartners, via Einstein’s equation of special relativity: E=mc2 (energy equals the square of the mass). Each collision is a quantum process, however, which means it’s inherently random and you can’t predict exactly what will happen.


Quote:
As you can already tell, finding out what happens at the point of the protons colliding involves a lot of detective work. In this case, you try to check how often supersymmetric particles are produced by watching them decay into more ordinary particles. The positions of these byproducts are measured by huge detectors, machines placed around crossing points in the counter-rotating beams of the LHC that act like enormous three-dimensional cameras.

The signature of supersymmetric particles was meant to be the production of a heavy invisible particle, which could sneak through the detector like a thief, leaving no trace. These very weakly interacting particles are candidates for the origin of dark matter in the universe; the strange, invisible stuff that we know from cosmological measurement should be about four times more prevalent than ordinary matter. The red flag for their presence was meant to be theft of momentum from a collision, meaning that the momentum before and after the collision doesn’t balance.

My colleagues and I watched the LHC closely for such tell-tale signs of superpartners. None have been found. We started to ask whether we might have missed them somehow. Perhaps some of the particles being produced were too low in energy for the collisions to be observed. Or perhaps we were wrong about dark matter particles – maybe there was some other, unstable type of particle.

In the end, these ideas weren’t really a ‘get-out-of-jail-free’ card. Using various experimental analysis techniques, they were also hunted out and falsified. Another possibility was that the superpartners were a bit heavier than expected; so perhaps the mass of the Higgs boson did have some cancellation in it (one part in a few hundred, say). But as the data rolled in and the beam energy of the LHC was ramped up, supersymmetry became more and more squeezed as a solution to the Higgs boson naturalness problem.

The bleakest sign is that the naturalness problem isn’t confined to the Higgs boson

The trouble is that it’s not clear when to give up on supersymmetry. True, as more data arrives from the LHC with no sign of superpartners, the heavier they would have to be if they existed, and the less they solve the problem. But there’s no obvious point at which one says ‘ah well, that’s it – now supersymmetry is dead’. Everyone has their own biased point in time at which they stop believing, at least enough to stop working on it. The LHC is still going and there’s still plenty of effort going into the search for superpartners, but many of my colleagues have moved on to new research topics. For the first 20 years of my scientific career, I cut my teeth on figuring out ways to detect the presence of superpartners in LHC data. Now I’ve all but dropped it as a research topic.

It could be that we got the wrong end of the stick with how we frame the puzzle of the Higgs boson. Perhaps we’re missing something from the mathematical framework with which we calculate its mass. Researchers have worked along these lines and so far come up with nothing, but that doesn’t mean there’s no solution. Another suspicion relates to the fact that the hypothesis of heavy particles relies on arguments based on a quantum theory of gravity – and such a theory has not yet been verified, although there are mathematically consistent constructions.

Perhaps the bleakest sign of a flaw in present approaches to particle physics is that the naturalness problem isn’t confined to the Higgs boson. Calculations tell us that the energy of empty space (inferred from cosmological measurements to be tiny) should be huge. This would make the outer reaches of the universe decelerate away from us, when in fact observations of certain distant supernovae suggest that the outer reaches of our universe are accelerating. Supersymmetry doesn’t fix this conflict. Many of us began to suspect that whatever solved this more difficult issue with the universe’s vacuum energy would solve the other, milder one concerning the mass of the Higgs.

All these challenges arise because of physics’ adherence to reductive unification. Admittedly, the method has a distinguished pedigree. During my PhD and early career in the 1990s, it was all the rage among theorists, and the fiendishly complex mathematics of string theory was its apogee. But none of our top-down efforts seem to be yielding fruit. One of the difficulties of trying to get at underlying principles is that it requires us to make a lot of theoretical presuppositions, any one of which could end up being wrong. We were hoping by this stage to have measured the mass of some superpartners, which would have given us some data on which to pin our assumptions. But we haven’t found anything to measure.

This doesn’t mean we need to give up on the unification paradigm. It just means that incrementalism is to be preferred to absolutism

Instead, many of us have switched from the old top-down style of working to a more humble, bottom-up approach. Instead of trying to drill down to the bedrock by coming up with a grand theory and testing it, now we’re just looking for any hints in the experimental data, and working bit by bit from there. If some measurement disagrees with the Standard Model’s predictions, we add an interacting particle with the right properties to explain it. Then we look at whether it’s consistent with all the other data. Finally, we ask how the particle and its interactions can be observed in the future, and how experiments should sieve the data in order to be able to test it.

The bottom-up method is much less ambitious than the top-down kind, but it has two advantages: it makes fewer assumptions about theory, and it’s tightly tethered to data. This doesn’t mean we need to give up on the old unification paradigm, it just suggests that we shouldn’t be so arrogant as to think we can unify physics right now, in a single step. It means incrementalism is to be preferred to absolutism – and that we should use empirical data to check and steer us at each instance, rather than making grand claims that come crashing down when they’re finally confronted with experiment.

A test case for the bottom-up methodology is the bottom meson, a composite particle made of something called a bottom quark and another known as a lighter quark. Bottom mesons appear to be decaying with the ‘wrong’ probabilities. Experiments in the LHC have measured billions of such decays, and it seems that the probability of getting a muon pair from particular interactions is about three-quarters of the probability of what the Standard Model says it should be. We can’t be totally sure yet that this effect is in strong disagreement with the Standard Model – more data is being analysed to make sure that the result is not due to statistics, or some subtle systematic error.

Some of us are busy speculating on what these findings might mean. Excitations of two different types of new, unobserved, exotic particles – known as Z-primes and leptoquarks, each buried deep within the bottom mesons – could be responsible for the bottom mesons misbehaving. However, the trouble is that one doesn’t know which (if either) type of particle is responsible. In order to check, ideally we’d produce them in LHC collisions and detect their decay products (these decay products should include muons with a certain energy). The LHC has a chance of producing Z-primes or leptoquarks, but it’s possible they’re just too heavy. In that case, one would need to build a higher energy collider: an ambitious plan for a beam of energy of seven times the intensity of the LHC would be a good option.

In the meantime, my colleagues and I ask: ‘Why should the new particles be there?’ A new mathematical symmetry might be the answer for Z-primes: it requires the Z-prime’s existence to hold. From this symmetry, one then gets additional theoretical constraints, and also some predictions for likely experimental signatures which could be checked with experiments in the future. Often, the bottom mesons are predicted to decay in other ways with some probability – for example, to something called an antimuon-tau. The LHC will be actively analysing their data for such signals in the future.

We began with an experimental signature (the particular bottom meson decays that disagree with Standard Model predictions), then we tried to ‘bung in’ a new hypothesised particle to explain it. Its predictions must be compared with current data to check that the explanation is still viable. Then we started building an additional theoretical structure that predicted the existence of the particle, as well as its interactions. This theory will allow us to make predictions for future measurements of decays, as well as search for the direct production of the new particle at the LHC. Only after any hints from these measurements and searches have been taken into account, and the models tweaked, might we want to embed the model in a larger, more unified theoretical structure. This may drive us progressively on the unification road, rather than attempting to jump to it in one almighty leap.
 

STS is a community for people interested in growing, preserving and researching botanical species, particularly those with remarkable therapeutic and/or psychoactive properties.
 
dragonrider
#2 Posted : 6/22/2018 3:12:47 PM

DMT-Nexus member

Moderator

Posts: 3090
Joined: 09-Jul-2016
Last visit: 03-Feb-2024
Maybe the idea of a grand unifying theory is tempting, but impossible. Maybe every theory will eventually suffer from a sort of Gödel-incompleteness thing. In mathematics, whenever a system get's complex enough, you start getting these kind of problems: you know that some things must be true within that system, but it is simply impossible to prove them. Or worse, you can prove something that is actually incorrect (incorrectness theorem).

In the field of neuropsychology, people also hope to bridge the gap between neurosiences and psychology. But maybe, grand unifying theories simply cannot work.
Maybe every theory does eventually have a perspective from witch it sees things. No matter how hard you try to be objective. Maybe your perspective is decided already, the very moment you have an axiom to start with. And any added axiom will change your perspective, so that eventually you will see things from one perspective that you simply cannot ever expect to see from the other perspective.

In such a case, a grand unifying theory will always come at a cost. You will have to give up things from the theories you want to unify.

In our daily lives, we don't realy notice any problems, though we unify a lot of stuff in our heads. But we never do it all at once. I think we're constantly rewriting our code. What we can 'prove' today, we can't prove tomorrow because we've learned something new and have a new perspective.
 
kensho2
#3 Posted : 10/2/2018 11:24:33 AM
DMT-Nexus member


Posts: 5
Joined: 28-Jan-2017
Last visit: 18-Feb-2019
Location: Oslo
dragonrider wrote:
[...] Or worse, you can prove something that is actually incorrect (incorrectness theorem).

I think you're mistaken on this point, soundness is not broken as far as I know..
 
swimer
#4 Posted : 10/2/2018 7:39:37 PM

DMT-Nexus member


Posts: 61
Joined: 21-Jan-2018
Last visit: 27-Apr-2022
as Grof mentioned in his book there is no truth, everything is just a paradigm and as times goes it changes. We need more time until someone will come up with need theory (and like always everyone will laugh at him becase of his insane ideas) and after as time goes he will prove right and more and more people will turn to his site. This way old paradigm will become outdated and new will take place.

As we all know universe is infinite so the deeper we dig new things we will find. It's cool that we do it as we progress thx to it as a society but thinking we can find something that will never become outdated is wrong kind of mindset.
 
dragonrider
#5 Posted : 10/3/2018 12:59:51 PM

DMT-Nexus member

Moderator

Posts: 3090
Joined: 09-Jul-2016
Last visit: 03-Feb-2024
kensho2 wrote:
dragonrider wrote:
[...] Or worse, you can prove something that is actually incorrect (incorrectness theorem).

I think you're mistaken on this point, soundness is not broken as far as I know..

Maybe not. But the problem is that you'd need to know "the truth", if you'd realy want to be sure. And normally we don't see the truth of a theory and the way of describing or proving that truth as separate, because usually they both are adequate. But what if your method of proof is nearly perfect? How do you determine that it is faulty in some very rare cases?

I think you risk incorporating faults when you keep expanding and expanding a theory. Or to put it differently, you risk sacrificing correctness for completeness.

In formal logic, many philosophers and mathematicians have tried to reconcile the formal systems, with paradoxes like "this sentence is false" or "there is a list of all lists that do not contain themselves". Often, their way of "solving" these problems is by extending the formal systems, like adding a third status beside "true" and "false". And though you could save the logical system, formally, from these nasty little problems, it is often somewhat disputable whether it is still completely correct. Mathematics and logic are very exact. But that is on a formal level only. The "truth" of mathematics, the soundness of the formal system, relies for a great part on common sense. And usually common sense works. But with these paradoxes, it's realy hard to tell whether a solution actually is sound or not.

Graham priest for instance, is a famous logician who claims to have solved the problem, by, in some cases, allowing statements to be both true and false at the same time. It all works out formally. But is it valid?
 
Nitegazer
#6 Posted : 10/3/2018 2:52:08 PM

DMT-Nexus member


Posts: 368
Joined: 09-Jun-2011
Last visit: 27-Nov-2020
I think that the greatest limitations we face have less to do with the nature of systems, measurement or the complexities of nature, and more to do with the limitations of the human mind.

The concept of a unified theory is a human construct based on aesthetics. The notion that the universe must be restricted by a human idea (or ideal) is absurd. Every model (even math) is a simplification and introduces error at some point.

Arthur Clarke's third law states that "any sufficiently advanced technology is indistinguishable from magic." How then could the full grandeur of the cosmos strike us as anything more penetrable?

In the area of theoretical physics, we have nearly reached the limits of what the human mind can understand without the assistance of computers. I think the next great breakthroughs will be grasped by computers built by computers (second order computers) and will be so far from our capabilities of understanding that we will depend on first order computers to provide only a rough translation.
 
tryptographer
#7 Posted : 10/3/2018 9:21:00 PM

tryptamine photographer


Posts: 760
Joined: 01-Jul-2008
Last visit: 21-Aug-2023
The author shows some humility, going from a top-down to a bottom-up approach, sounds like Quark behavior Pleased

Thanks Tatt, interesting article!
 
Jonabark
#8 Posted : 10/15/2018 4:07:22 AM

DMT-Nexus member


Posts: 123
Joined: 01-Sep-2018
Last visit: 16-Jul-2023
kensho2 wrote:
dragonrider wrote:
[...] Or worse, you can prove something that is actually incorrect (incorrectness theorem).

I think you're mistaken on this point, soundness is not broken as far as I know..


String theory , which is a part of the search for unified theory and elements of which are being discussed here seems like an example of the ability to create incorrect proofs. It is my rough understanding that by allowing 11 dimensions , various physics theorists create proofs with contradictions between the different theories.

In my oversimplifying mind It is hard for me to believe that mathematics is the only language in which lying , self deception or profound errors are impossible. Anyway a cool article even if I can barely follow the meaning of the physics concepts.

I would like to know more about soundness and what it means in this context so any reference would be appreciated.
 
 
Users browsing this forum
Guest

DMT-Nexus theme created by The Traveler
This page was generated in 0.039 seconds.