
Quantum computing and AI join forces for particle physics

23 October 2025 at 13:57

This episode of the Physics World Weekly podcast explores how quantum computing and artificial intelligence can be combined to help physicists search for rare interactions in data from an upgraded Large Hadron Collider.

My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as holding an appointment at Perimeter, Toledo-Marín is associated with the TRIUMF accelerator centre in Vancouver.

Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.


This podcast is supported by Delft Circuits.

As gate-based quantum computing continues to scale, Delft Circuits provides the I/O solutions that make it possible.


How to solve the ‘future of physics’ problem

22 October 2025 at 10:00

I hugely enjoyed physics when I was a youngster. I had the opportunity both at home and school to create my own projects, which saw me make electronic circuits, crazy flying models like delta-wings and autogiros, and even a gas chromatograph with a home-made chart recorder. Eventually, this experience made me good enough to repair TV sets, and work in an R&D lab in the holidays devising new electronic flow controls.

That enjoyment continued beyond school. I ended up doing a physics degree at the University of Oxford before working on the discovery of the gluon at the DESY lab in Hamburg for my PhD. Since then I have used physics in industry – first with British Oxygen/Linde and later with Air Products & Chemicals – to solve all sorts of different problems, build innovative devices and file patents.

While some students have a similarly positive school experience and subsequent career path, not enough do. Quite simply, physics at school is the key to so many important, useful developments, both within and beyond physics. But we have a physics education problem, or to put it another way – a “future of physics” problem.

There are just not enough school students enjoying and learning physics. On top of that there are not enough teachers enjoying physics and not enough students doing practical physics. The education problem is bad for physics and for many other subjects that draw on physics. Alas, it’s not a new problem but one that has been developing for years.

Problem solving

Many good points about the future of physics learning were made by the Institute of Physics in its 2024 report Fundamentals of 11 to 19 Physics. The report called for more physics lessons to have a practical element and encouraged more 16-year-old students in England, Wales and Northern Ireland to take AS-level physics at 17 so that they carry their GCSE learning at least one step further.

Doing so would furnish students who are aiming to study another science or a technical subject with the necessary skills and give them the option to take physics A-level. Another recommendation is to link physics more closely to T-levels – two-year vocational courses in England for 16–19 year olds that are equivalent to A-levels – so that students following that path get a background in key aspects of physics, for example in engineering, construction, design and health.

But do all these suggestions solve the problem? I don’t think they are enough and we need to go further. The key change to fix the problem, I believe, is to have student groups invent, build and test their own projects. Ideally this should happen before GCSE level so that students have the enthusiasm and background knowledge to carry them happily forward into A-level physics. They will benefit from “pull learning” – pulling in knowledge and active learning that they will remember for life. And they will acquire wider life skills too.

Developing skillsets

During my time in industry, I did outreach work with schools every few weeks and gave talks with demonstrations at the Royal Institution and the Franklin Institute. For many years I also ran a Saturday Science club in Guildford, Surrey, for pupils aged 8–15.

Based on this, I wrote four Saturday Science books about the many playful and original demonstrations and projects that came out of it. Then at the University of Surrey, as a visiting professor, I had small teams of final-year students who devised extraordinary engineering – designing superguns for space launches, 3D printers for full-size buildings and volcanic power plants inter alia. A bonus was that other staff working with the students got more adventurous too.

But that was working with students already committed to a scientific path. So lately I’ve been working with teachers to get students to devise and build their own innovative projects. We’ve had 14–15-year-old state-school students in groups of three or four, brainstorming projects, sketching possible designs and gathering background information. We help them, and we bring in A-level students to help too, who gain teaching experience in the process. Students not only learn physics better but also pick up important life skills like brainstorming, team-working, practical work, analysis and presentations.

We’ve seen lots of ingenuity and some great projects such as an ultrasonic scanner to sense wetness of cloth; a system to teach guitar by lighting up LEDs along the guitar neck; and measuring breathing using light passing through a band of Lycra around the patient below the ribs. We’ve seen the value of failure, both mistakes and genuine technical problems.

Best of all, we’ve also noticed what might be dubbed the “combination bonus” – students having to think about how they combine their knowledge of one area of physics with another. A project involving a sensor, for example, will often involve electronics as well as the physics of the sensor, and so students’ knowledge of both areas is enhanced.

Some teachers may question how you mark such projects. The answer is don’t mark them! Project work and especially group work is difficult to mark fairly and accurately, and the enthusiasm and increased learning by students working on innovative projects will feed through into standard school exam results.

Not trying to grade such projects will mean more students go on to study physics further, potentially to do a physics-related extended project qualification – equivalent to half an A-level where students research a topic to university level – and do it well. Long term, more students will take physics with them into the world of work, from physics to engineering or medicine, from research to design or teaching.

Such projects are often fun for students and teachers. Teachers are often intrigued and amazed by students’ ideas and ingenuity. So, let’s choose to do student-invented project work at school and let’s finally solve the future of physics problem.


A recipe for quantum chaos

22 October 2025 at 09:44

The control of large, strongly coupled, multi-component quantum systems with complex dynamics is a challenging task.

It is, however, an essential prerequisite for the design of quantum computing platforms and for the benchmarking of quantum simulators.

A key concept here is that of quantum ergodicity. This is because quantum ergodic dynamics can be harnessed to generate highly entangled quantum states.

In classical statistical mechanics, an ergodic system evolving over time will explore all possible microstates uniformly. Mathematically, this means that a sufficiently large collection of random samples from an ergodic process can represent the average statistical properties of the entire process.

Quantum ergodicity is simply the extension of this concept to the quantum realm.

Closely related to this is the idea of chaos. A chaotic system is one that is very sensitive to its initial conditions: small changes can be amplified over time, causing large changes in the future.

The ideas of chaos and ergodicity are intrinsically linked as chaotic dynamics often enable ergodicity.

Until now, it has been very challenging to predict which experimentally preparable initial states will trigger quantum chaos and ergodic dynamics over a reasonable time scale.

In a new paper published in Reports on Progress in Physics, a team of researchers have proposed an ingenious solution to this problem using the Bose–Hubbard Hamiltonian.

They took as an example ultracold atoms in an optical lattice (a typical choice for experiments in this field) to benchmark their method.
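
For readers who have not met it before, the Bose–Hubbard Hamiltonian has a compact standard form – quoted here in its textbook version as an aid to the reader, not as a formula reproduced from the paper itself:

$$\hat{H} = -J\sum_{\langle i,j\rangle}\hat{b}_i^{\dagger}\hat{b}_j + \frac{U}{2}\sum_i \hat{n}_i(\hat{n}_i-1) - \mu\sum_i \hat{n}_i ,$$

where $J$ is the hopping amplitude between neighbouring lattice sites, $U$ is the on-site interaction energy, $\mu$ is the chemical potential and $\hat{n}_i=\hat{b}_i^{\dagger}\hat{b}_i$ counts the bosons on site $i$. The competition between hopping and interactions is what gives the model dynamics rich enough to turn chaotic.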

The results show that there are certain tangible threshold values which must be crossed in order to ensure the onset of quantum chaos.

These results will be invaluable for experimentalists working across a wide range of quantum sciences.


This jumping roundworm uses static electricity to attach to flying insects

17 October 2025 at 14:30

Researchers in the US have discovered that a tiny jumping worm uses static electricity to increase the chances of attaching to its unsuspecting prey.

The parasitic roundworm Steinernema carpocapsae, which lives in soil, is already known to leap some 25 times its body length into the air. It does this by curling into a loop and springing into the air, rotating hundreds of times a second.

If the nematode lands successfully, it releases bacteria that kill the insect within a couple of days, after which the worm feasts on the carcass and lays its eggs. If it fails to attach to a host, however, it faces death itself.

While static electricity plays a role in how some non-parasitic nematodes detach from large insects, little is known about whether static helps their parasitic counterparts attach to an insect.

To investigate, researchers at Emory University and the University of California, Berkeley, conducted a series of experiments in which they used high-speed microscopy techniques to film the worms as they leapt onto a fruit fly.

They did this by tethering a fly with a copper wire that was connected to a high-voltage power supply.

They found that a potential of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force between it and the positively charged fly.

Carrying out simulations of the worm jumps, they found that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing. For 880 V, for example, the probability was 80%.
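
To make the flavour of such a calculation concrete, here is a minimal Monte Carlo sketch – emphatically not the authors’ model, and with every parameter (mass, launch speed, charges, capture radius) an illustrative guess – showing how an attractive electrostatic force raises the fraction of random launches that reach a small target:

```python
# Toy Monte Carlo: launch a tiny "worm" projectile at a charged target and
# count how often it passes within a capture radius. All numbers are
# illustrative guesses, not values from the study.
import math
import random

K = 8.99e9   # Coulomb constant, N m^2 C^-2
G = 9.81     # gravitational acceleration, m s^-2

def hit_fraction(q_product, n_trials=400, seed=1):
    """Fraction of launches that reach the target. q_product = |q_worm*q_fly|
    in C^2; the force is treated as purely attractive (opposite charges)."""
    rng = random.Random(seed)
    m = 1e-7                  # worm mass, kg
    tx, ty = 5e-3, 5e-3       # fly position, m
    capture = 5e-4            # capture radius, m
    dt = 1e-5                 # time step, s
    hits = 0
    for _ in range(n_trials):
        v0 = rng.gauss(1.0, 0.2)               # launch speed, m/s
        theta = rng.gauss(math.pi / 4, 0.15)   # launch angle, rad
        x, y = 0.0, 0.0
        vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
        for _ in range(2000):
            dx, dy = tx - x, ty - y
            d = math.hypot(dx, dy)
            if d < capture:
                hits += 1
                break
            a = K * q_product / (m * d * d)    # attraction towards the fly
            vx += (a * dx / d) * dt
            vy += (a * dy / d - G) * dt
            x += vx * dt
            y += vy * dt
    return hits / n_trials

print("no electrostatics:", hit_fraction(0.0))
print("charged          :", hit_fraction(5e-21))
```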

The team also carried out experiments using a wind tunnel, finding that the presence of wind helped the nematodes drift and this also increased their chances of attaching to the insect.

“Using physics, we learned something new and interesting about an adaptive strategy in an organism,” notes Emory physicist Ranjiangshang Ran. “We’re helping to pioneer the emerging field of electrostatic ecology.”


Wearable UVA sensor warns about overexposure to sunlight

17 October 2025 at 08:09
Illustration showing the operation of the UVA detector
Transparent healthcare Illustration of the fully transparent sensor that reacts to sunlight and allows real-time monitoring of UVA exposure on the skin. The device could be integrated into wearable items, such as glasses or patches. (Courtesy: Jnnovation Studio)

A flexible and wearable sensor that allows the user to monitor their exposure to ultraviolet (UV) radiation has been unveiled by researchers in South Korea. Based on a heterostructure of four different oxide semiconductors, the sensor’s flexible, transparent design could vastly improve the real-time monitoring of skin health.

UV light in the A band has wavelengths of 315–400 nm and comprises about 95% of UV radiation that reaches the surface of the earth. Because of its relatively long wavelength, UVA can penetrate deep into the skin. There it can alter biological molecules, damaging tissue and even causing cancer.
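
As a quick back-of-the-envelope aside (standard physics, not a figure from the study), the corresponding photon energies follow from

$$E = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV nm}}{\lambda}, \qquad E(400\ \text{nm}) \approx 3.1\ \text{eV}, \qquad E(315\ \text{nm}) \approx 3.9\ \text{eV},$$

which is why semiconductors with bandgaps of roughly 3 eV or more can absorb UVA while remaining largely transparent to visible light.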

While covering up with clothing and using sunscreen are effective at reducing UVA exposure, researchers are keen on developing wearable sensors that can monitor UVA levels in real time. These can alert users when their UVA exposure reaches a certain level. So far, the most promising advances towards these designs have come from oxide semiconductors.

Many challenges

“For the past two decades, these materials have been widely explored for displays and thin-film transistors because of their high mobility and optical transparency,” explains Seong Jun Kang at Soongsil University, who led the research. “However, their application to transparent ultraviolet photodetectors has been limited by high persistent photocurrent, poor UV–visible discrimination, and instability under sunlight.”

While these problems can be avoided in more traditional UV sensors based on materials such as gallium nitride and zinc oxide, those materials are opaque and rigid – making them unsuitable for use in wearable sensors.

In their study, Kang’s team addressed these challenges by introducing a multi-junction heterostructure, made by stacking multiple ultrathin layers of different oxide semiconductors. The four semiconductors they selected each had wide bandgaps, which made them more transparent in the visible spectrum but responsive to UV light.

The structure included zinc and tin oxide layers as n-type semiconductors (doped with electron-donating atoms) and cobalt and hafnium oxide layers as p-type semiconductors (doped with electron-accepting atoms) – creating positively charged holes. Within the heterostructure, this selection created three types of interface: p–n junctions between hafnium and tin oxide; n–n junctions between tin and zinc oxide; and p–p junctions between cobalt and hafnium oxide.

Efficient transport

When the team illuminated their heterostructure with UVA photons, the electron–hole charge separation was enhanced by the p–n junction, while the n–n and p–p junctions allowed for more efficient transport of electrons and holes respectively, improving the design’s response speed. When the illumination was removed, the electron–hole pairs could quickly decay, avoiding any false detections.

To test their design’s performance, the researchers integrated their heterostructure into a wearable detector. “In collaboration with UVision Lab, we developed an integrated Bluetooth circuit and smartphone application, enabling real-time display of UVA intensity and warning alerts when an individual’s exposure reaches the skin-type-specific minimal erythema dose (MED),” Kang describes. “When connected to the Bluetooth circuit and smartphone application, it successfully tracked real-time UVA variations and issued alerts corresponding to MED limits for various skin types.”
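
The dose-alert logic described here is straightforward to sketch. The snippet below is a minimal illustration only – it is not the UVision Lab application, and the MED thresholds are placeholder values chosen for the demonstration, not clinical data:

```python
# Minimal sketch of a UVA dose-alert loop: accumulate dose from sensor
# readings and warn when it crosses a skin-type threshold.
# The MED values are illustrative placeholders, not clinical data.
from dataclasses import dataclass

MED_PLACEHOLDER_J_M2 = {"I": 20e3, "II": 25e3, "III": 35e3, "IV": 45e3}

@dataclass
class DoseTracker:
    skin_type: str
    dose: float = 0.0      # accumulated UVA dose, J/m^2
    alerted: bool = False

    def update(self, irradiance_w_m2: float, dt_s: float) -> None:
        """Add one sensor sample: dose += irradiance * time step."""
        self.dose += irradiance_w_m2 * dt_s
        if not self.alerted and self.dose >= MED_PLACEHOLDER_J_M2[self.skin_type]:
            self.alerted = True
            print(f"ALERT: UVA dose {self.dose:.0f} J/m^2 has reached the "
                  f"type-{self.skin_type} threshold")

if __name__ == "__main__":
    tracker = DoseTracker(skin_type="II")
    # Pretend the wearable reports ~30 W/m^2 of UVA every 10 s.
    for _ in range(200):
        tracker.update(irradiance_w_m2=30.0, dt_s=10.0)
```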

As well as maintaining over 80% transparency, the sensor proved highly stable and responsive, even in direct outdoor sunlight and across repeated exposure cycles. Based on this performance, the team is now confident that their design could push the capabilities of oxide semiconductors beyond their typical use in displays and into the fast-growing field of smart personal health monitoring.

“The proposed architecture establishes a design principle for high-performance transparent optoelectronics, and the integrated UVA-alert system paves the way for next-generation wearable and Internet-of-things-based environmental sensors,” Kang predicts.

The research is described in Science Advances.


Ask me anything: Scott Bolton – ‘It’s exciting to be part of a team that’s seeing how nature works for the first time’

29 September 2025 at 10:00

What skills do you use every day in your job?

As a planetary scientist, I use mathematics, physics, geology and atmospheric science. But as the principal investigator of Juno, I also have to manage the Juno team, and interface with politicians, people at NASA headquarters and other administrators. In that capacity, I need to be able to talk about topics at various technical levels, because many of the people I’m speaking with are not actively researching planetary science. I need a broad range of skills, but one of the most important is to be able to recognize when I don’t have the right expertise and need to find someone who can help.

The surface of Jupiter
Pretty amazing Hurricane-like spiral wind patterns near Jupiter’s north pole as seen by NASA’s Juno mission, of which Scott Bolton is principal investigator. (Courtesy: NASA/JPL-Caltech/SwRI/MSSS/Gerald Eichstädt/Seán Doran)

What do you like best and least about your job?

I really love being part of a mission that’s discovering new information and new ideas about how the universe works. It’s exciting to be at the edge of something, where you are part of a team that’s seeing an image or an aspect of how nature works for the first time. The discovery element is truly inspirational. I also love seeing how a mixture of scientists with different expertise, skills and backgrounds can come together to understand something new. Watching that process unfold is very exciting to me.

Some tasks I like least are related to budget exercises, administrative tasks and documentation. Some government rules and regulations can be quite taxing and require a lot of time to ensure forms and documents are completed correctly. Occasionally, an urgent action item will appear that requires an immediate response, and I have to drop current work to fit in the new task. As a result, my normal work gets delayed, and this can be frustrating. I consider one of my main jobs to be sheltering the team from these extraneous tasks so they can get their work done.

What do you know today that you wish you’d known at the start of your career?

The most important thing I know now is that if you really believe in something, you should stick to it. You should not give up. You should keep trying, keep working at it, and find people who can collaborate with you to make it happen. Early on, I didn’t realize how important it was to combine forces with people who complemented my skills in order to achieve goals.

The other thing I wish I had known is that taking time to figure out the best way to approach a challenge, question or problem is beneficial to achieving one’s goals.  That was a very valuable lesson to learn. We should resist the temptation to rush into finding the answer – instead, it’s worthwhile to take the time to think about the question and develop an approach.


Discovery of the Higgs boson at CERN inspires new stained-glass artwork

25 September 2025 at 14:02

London-based artist Oksana Kondratyeva has created a new stained-glass artwork – entitled Discovery – that is inspired by the detection of the Higgs boson at CERN’s Large Hadron Collider (LHC) in 2012.

Born in Ukraine, Kondratyeva has a PhD in the theory of architecture and has an artist residency at the Romont Glass Museum (Vitromusée Romont) in Switzerland, where Discovery is currently exhibited.

In 2023 Kondratyeva travelled to visit the LHC at CERN, which she notes represents “more than a laboratory [but] a gateway to the unknown”.

“Discovery draws inspiration from the awe I felt standing at the frontier of human knowledge, where particles collide at unimaginable energies and new forms of matter are revealed,” Kondratyeva told Physics World.

Kondratyeva says that the focal point of the artwork – a circle structured with geometric precision – represents the collision of two high-energy protons.

The surrounding lead lines in the panel trace the trajectories of particle decays as they move through a magnetic field: right-curved lines represent positively charged particles, left-curved lines indicate negatively charged ones, while straight lines signify neutral particles unaffected by the magnetic field.

The geometric composition within the central circle reflects the hidden symmetries of physical laws – patterns that only emerge when studying the behaviour of particle interactions.

Kondratyeva says that the use of mouth-blown flashed glass adds further depth to the piece, with colours and subtle shades moving from hot and luminous at the centre to cooler, more subdued tones toward the edges.

“Through glass, light and colour I sought to express the invisible forces and delicate symmetries that define our universe – ideas born in the realm of physics, yet deeply resonant in artistic expression,” notes Kondratyeva. “The work also continues a long tradition of stained glass as a medium of storytelling, reflecting the deep symmetries of nature and the human drive to find order in chaos.”

In 2022 Kondratyeva teamed up with Rigetti Computing to create a piece of art inspired by the packaging for a quantum chip. Entitled Per scientiam ad astra (through science to the stars), the artwork was displayed at the 2024 British Glass Biennale at the Ruskin Glass Centre in Stourbridge, UK.


Imagining alien worlds: we explore the science and fiction of exoplanets

25 September 2025 at 11:00

In the past three decades astronomers have discovered more than 6000 exoplanets – planets that orbit stars other than the Sun. Many of these exoplanets are very unlike the eight planets of the solar system, making it clear that the cosmos contains a rich and varied array of alien worlds.

Weird and wonderful planets are also firmly entrenched in the world of science fiction, and the interplay between imagined and real planets is explored in the new book Amazing Worlds of Science Fiction and Science Fact. Its author Keith Cooper is my guest in this episode of the Physics World Weekly podcast and our conversation ranges from the amazing science of “hot Jupiter” exoplanets to how the plot of a popular Star Trek episode could inform our understanding of how life could exist on distant exoplanets.


Gyroscopic backpack improves balance for people with movement disorder

25 September 2025 at 08:00

A robotic backpack equipped with gyroscopes can enhance stability for people with severe balance issues and may eventually remove the need for mobility walkers. Designed to dampen unintended torso motion and improve balance, the backpack employs similar gyroscopic technology to that used by satellites and space stations to maintain orientation. Individuals with the movement disorder ataxia put the latest iteration of the device – the GyroPack – through its paces in a series of standing, walking and body motion exercises.

In development for over a decade, GyroPack is the brainchild of a team of neurologists, biomechanical engineers and rehabilitation specialists at the Radboud University Medical Centre, Delft University of Technology (TU Delft) and Erasmus Medical Centre. The first tests of its ability to improve balance performance with ataxia-impacted adults, described in npj Robotics, produced encouraging enough results to continue the GyroPack’s development as a portable robotic wearable for individuals with neurological conditions.

Degenerative ataxias, a variety of diseases of the nervous system, cause progressive cerebellar dysfunction manifesting as symptoms including lack of coordination, imbalance when standing and difficulty walking. Ataxia can afflict people of all ages, including young children. Managing the progressive symptoms may require lifetime use of cumbersome, heavily weighted walkers as mobility aids and to prevent falling.

GyroPack design

The 6 kg version of the GyroPack tested in this study contains two control moment gyroscopes (CMGs), attitude-control devices that maintain orientation relative to a specific inertial frame of reference. Each CMG consists of a flywheel and a gimbal, which together generate the change in angular momentum that’s exerted onto the wearer to resist unintended torso rotations. Each CMG also contains an inertial measurement unit to determine the orientation and angular rate of change of the CMG.

The backpack also holds two independent, 1.5 kg miniaturized actuators designed by the team that convert energy into motion. The system is controlled by a laptop and powered through a separate power box that filters and electrically isolates the signals for safety. All activities can be immediately terminated when an emergency stop button is pushed.

Lead researcher Jorik Nonnekes of Radboud UMC describes how the system works: “The change of orientation imposed by the gimbal motor, combined with the angular momentum of the flywheels, causes a free moment, or torque, that is exerted onto the system the CMG is attached to – which in this study is the human upper body,” he explains. “A cascaded control scheme reliably deals with actuator limitations without causing undesired disturbances on the user. The gimbals are controlled in such a way that the torque exerted on the trunk is proportional and opposite to the trunk’s angular velocity, which effectively lets the system damp rotational motion of the wearer. This damping has been shown to make balancing easier for unimpaired subjects and individuals post-stroke.”
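
The control law Nonnekes describes – a torque proportional and opposite to the trunk’s angular velocity – can be captured in a few lines. The following is a toy sketch with made-up numbers, not the GyroPack’s actual controller:

```python
# Toy sketch of velocity-proportional damping: tau = -c * omega.
# All values are illustrative, not GyroPack parameters.
I = 1.2    # trunk moment of inertia, kg m^2
C = 3.0    # damping gain, N m s / rad
DT = 0.01  # time step, s

def damping_torque(omega: float) -> float:
    """Command a torque that opposes the trunk's angular velocity."""
    return -C * omega

omega = 0.8  # initial unintended trunk sway, rad/s
for step in range(301):
    if step % 50 == 0:
        print(f"t = {step * DT:4.2f} s   omega = {omega:6.3f} rad/s")
    tau = damping_torque(omega)
    omega += (tau / I) * DT    # forward-Euler integration of I*domega/dt = tau
```

Because the commanded torque always opposes the measured sway, the angular velocity decays with a time constant of roughly I/C – the kind of damping the researchers say makes balancing easier.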

Performance assessment

Study participant wearing the GyroPack
Exercise study A participant wearing the GyroPack. (Courtesy: npj Robot. 10.1038/s44182-025-00041-4)

For the study, 14 recruits diagnosed with degenerative ataxia performed five tasks: standing still with feet together and arms crossed for up to 30 s; walking on a treadmill for 2 min without using the handrail; making a clockwise and a counterclockwise 360° turn-in-place; performing a tandem stance with the heel of one foot touching the toes of the other for up to 30 s; and testing reactive balance by applying two forward and two backward treadmill perturbations.

The participants performed these tasks under three conditions, two whilst wearing the backpack and one without as a baseline. In one scenario, the backpack was operated in assistive mode to investigate its damping power and torque profiles. In the other, the backpack was in “sham mode”, without assistive control but with sound and motor vibrations indistinguishable from normal operation.

The researchers report that when fully operational, the GyroPack increased the user’s average standing time compared with not wearing the backpack at all. When used during walking, it reduced the variability of trunk angular velocity and the extrapolated centre-of-mass, two common indicators of gait stability. The trunk angular velocity variability also showed a significant reduction when comparing assistive to sham GyroPack modes. However, the performance of turn-in-place and perturbation recovery tasks were similar for all three scenarios.

Interestingly, wearing the backpack in the sham scenario improved walking tasks compared with not wearing a backpack at all. The researchers suggested this could be because the extra weight at the torso improves body stabilization, or simply a placebo effect.

Next, the team plans to redesign the device to make it lighter and quieter. “It’s not yet suitable for everyday use,” says Nonnekes in a press statement. “But in the future, it could help people with ataxia participate more freely in daily life, like attending social events without needing a walker, which many find bulky and inconvenient. This could greatly enhance their mobility and overall quality of life.”


The pros and cons of reinforcement learning in physical science

17 September 2025 at 10:30

Today’s artificial intelligence (AI) systems are built on data generated by humans. They’re trained on huge repositories of writing, images and videos, most of which have been scraped from the Internet without the knowledge or consent of their creators. It’s a vast and sometimes ill-gotten treasure trove of information – but for machine-learning pioneer David Silver, it’s nowhere near enough.

“I think if you provide the knowledge that humans already have, it doesn’t really answer the deepest question for AI, which is how it can learn for itself to solve problems,” Silver told an audience at the 12th Heidelberg Laureate Forum (HLF) in Heidelberg, Germany, on Monday.

Silver’s proposed solution is to move from the “era of human data”, in which AI passively ingests information like a student cramming for an exam, into what he calls the “era of experience” in which it learns like a baby exploring its world. In his HLF talk on Monday, Silver played a sped-up video of a baby repeatedly picking up toys, manipulating them and putting them down while crawling and rolling around a room. To murmurs of appreciation from the audience, he declared, “I think that provides a different perspective of how a system might learn.”

Silver, a computer scientist at University College London, UK, has been instrumental in making this experiential learning happen in the virtual worlds of computer science and mathematics. As head of reinforcement learning at Google DeepMind, he helped develop AlphaZero, an AI system that taught itself to play the ancient stones-and-grid game of Go. It did this via a so-called “reward function” that pushed it to improve over many iterations, without ever being taught the game’s rules or strategy.
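
For readers who have not met reinforcement learning before, the core loop can be shown with a toy example. The sketch below is tabular Q-learning on a one-dimensional “walk to the goal” task – vastly simpler than AlphaZero’s self-play and neural networks, but it illustrates how behaviour can emerge from a reward signal alone:

```python
# Toy reinforcement learning (tabular Q-learning), not AlphaZero:
# an agent on a 1-D corridor is rewarded only for reaching the last cell.
import random

N_STATES = 6          # cells 0..5; cell 5 is the goal
ACTIONS = (-1, +1)    # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def choose(s):
    """Epsilon-greedy action choice, breaking ties at random."""
    if rng.random() < EPS or q[(s, -1)] == q[(s, +1)]:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(s, a)])

for episode in range(200):
    s = 0
    for _ in range(100):                       # cap episode length
        a = choose(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = s_next
        if s == N_STATES - 1:
            break

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print("greedy policy (should be all +1):", policy)
```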

More recently, Silver coordinated a follow-up project called AlphaProof that treats formal mathematics as a game. In this case, AlphaZero’s reward is based on getting correct proofs. While it isn’t yet outperforming the best human mathematicians, in 2024 it achieved silver-medal standard on problems at the International Mathematical Olympiad.

Learning in the physics playroom

Could a similar experiential learning approach work in the physical sciences? At an HLF panel discussion on Tuesday afternoon, particle physicist Thea Klaeboe Åarrestad began by outlining one possible application. Whenever CERN’s Large Hadron Collider (LHC) is running, Åarrestad explained, she and her colleagues in the CMS experiment must control the magnets that keep protons on the right path as they zoom around the collider. Currently, this task is performed by a person, working in real time.

Four people sitting on a stage with a large screen in the background. Another person stands beside them
Up for discussion: A panel discussion on machine learning in physical sciences at the Heidelberg Laureate Forum. l-r: Moderator George Musser, Kyle Cranmer, Thea Klaeboe Åarrestad, David Silver and Maia Fraser. (Courtesy: Bernhard Kreutzer/HLFF)

In principle, Åarrestad continued, a reinforcement-learning AI could take over that job after learning by experience what works and what doesn’t. There’s just one problem: if it got anything wrong, the protons would smash into a wall and melt the beam pipe. “You don’t really want to do that mistake twice,” Åarrestad deadpanned.

For Åarrestad’s fellow panellist Kyle Cranmer, a particle physicist who works on data science and machine learning at the University of Wisconsin-Madison, US, this nightmare scenario symbolizes the challenge with using reinforcement learning in physical sciences. In situations where you’re able to do many experiments very quickly and essentially for free – as is the case with AlphaGo and its descendants – you can expect reinforcement learning to work well, Cranmer explained. But once you’re interacting with a real, physical system, even non-destructive experiments require finite amounts of time and money.

Another challenge, Cranmer continued, is that particle physics already has good theories that predict some quantities to multiple decimal places. “It’s not low-hanging fruit for getting an AI to come up with a replacement framework de novo,” Cranmer said. A better option, he suggested, might be to put AI to work on modelling atmospheric fluid dynamics, which are emergent phenomena without first-principles descriptions. “Those are super-exciting places to use ideas from machine learning,” he said.

Not for nuclear arsenals

Silver, who was also on Tuesday’s panel, agreed that reinforcement learning isn’t always the right solution. “We should do this in areas where mistakes are small and it can learn from those small mistakes to avoid making big mistakes,” he said. To general laughter, he added that he would not recommend “letting an AI loose on nuclear arsenals”, either.

Reinforcement learning aside, both Åarrestad and Cranmer are highly enthusiastic about AI. For Cranmer, one of the most exciting aspects of the technology is the way it gets scientists from different disciplines talking to each other. The HLF, which aims to connect early-career researchers with senior figures in mathematics and computer science, is itself a good example, with many talks in the weeklong schedule devoted to AI in one form or another.

For Åarrestad, though, AI’s most exciting possibility relates to physics itself. Because the LHC produces far more data than humans and present-day algorithms can handle, Åarrestad explained, much of it is currently discarded. The idea that, as a result, she and her colleagues could be throwing away major discoveries sometimes keeps her up at night. “Is there new physics below 1 TeV?” Åarrestad wondered.

Someday, maybe, an AI might be able to tell us.


Are we heading for a future of superintelligent AI mathematicians?

16 September 2025 at 19:54

When researchers at Microsoft released a list of the 40 jobs most likely to be affected by generative artificial intelligence (gen AI), few outsiders would have expected to see “mathematician” among them. Yet according to speakers at this year’s Heidelberg Laureate Forum (HLF), which connects early-career researchers with distinguished figures in mathematics and computer science, computers are already taking over many tasks formerly performed by human mathematicians – and the humans have mixed feelings about it.

One of those expressing disquiet is Yang-Hui He, a mathematical physicist at the London Institute for Mathematical Sciences. In general, He is extremely keen on AI. He’s written a textbook about the use of AI in mathematics, and he told the audience at an HLF panel discussion that he’s been peddling machine-learning techniques to his mathematical physics colleagues since 2017.

More recently, though, He has developed concerns about gen AI specifically. “It is doing mathematics so well without any understanding of mathematics,” he said, a note of wonder creeping into his voice. Then, more plaintively, he added, “Where is our place?”

AI advantages

Some of the things that make today’s gen AI so good at mathematics are the same as the ones that made Google’s DeepMind so good at the game of Go. As the theoretical computer scientist Sanjeev Arora pointed out in his HLF talk, “The reason it’s better than humans is that it’s basically tireless.” Put another way, if the 20th-century mathematician Alfréd Rényi once described his colleagues as “machines for turning coffee into theorems”, one advantage of 21st-century AI is that it does away with the coffee.

Arora, however, sees even greater benefits. In his view, AI’s ability to use feedback to improve its own performance – a technique known as reinforcement learning – is particularly well-suited to mathematics.

In the standard version of reinforcement learning, Arora explains, the AI model is given a large bank of questions, asked to generate many solutions and told to use the most correct ones (as labelled by humans) to refine its model. But because mathematics is so formalized, with answers that are so verifiably true or false, Arora thinks it will soon be possible to replace human correctness checkers with AI “proof assistants” such as Lean. Indeed, he’s working on one such effort himself with his colleagues at Princeton University in the US.

Humans in the loop?

But why stop there? Why not use AI to generate mathematical questions as well as producing and checking their solutions? Indeed, why not get it to write a paper, peer review it and publish it for its fellow AI mathematicians – which are, presumably, busy combing the literature for information to help them define new questions?

Arora clearly thinks that’s where things are heading, and many of his colleagues seem to agree, at least in part. His fellow HLF panellist Javier Gómez-Serrano, a mathematician at Brown University in the US, noted that AI is already generating results in a day or two that would previously have taken a human mathematician months. “Progress has been quite quick,” he said.

The panel’s final member, Maia Fraser of the University of Ottawa, Canada, likewise paid tribute to the “incredible things that are possible with AI now”.  But Fraser, who works on mathematical problems related to neuroscience, also sounded a note of caution. “My concern is the speed of the changes,” she told the HLF audience.

The risk, Fraser continued, is that some of these changes may end up happening by default, without first considering whether humans want or need them. While we can’t un-invent AI, “we do have agency” over what we want, she said.

So, do we want a world in which AI mathematicians take humans “out of the loop” entirely? For He, the benefits may outweigh the disadvantages. “I really want to see a proof of the Riemann hypothesis,” he said,  to ripples of laughter. If that means that human mathematicians “become priests to oracles”, He added, so be it.


Space–time crystal emerges in a liquid crystal

16 September 2025 at 14:27

The first-ever “space–time crystal” has been created in the US by Hanqing Zhao and Ivan Smalyukh at the University of Colorado Boulder. The system is patterned in both space and time and comprises a rigid lattice of topological solitons that are sustained by steady oscillations in the orientations of liquid crystal molecules.

In an ordinary crystal atomic or molecular structures repeat at periodic intervals in space. In 2012, however, Frank Wilczek suggested that systems might also exist with quantum states that repeat at perfectly periodic intervals in time – even as they remain in their lowest-energy state.

First observed experimentally in 2017, these time crystals are puzzling to physicists because they spontaneously break time–translation symmetry, which states that the laws of physics are the same no matter when you observe them. In contrast, a time crystal continuously oscillates over time, without consuming energy.

A space–time crystal is even more bizarre. In addition to breaking time–translation symmetry, such a system would also break spatial symmetry, just like the repeating molecular patterns of an ordinary crystal. Until now, however, a space–time crystal had not been observed directly.

Rod-like molecules

In their study, Zhao and Smalyukh created a space–time crystal in the nematic phase of a liquid crystal. In this phase the crystal’s rod-like molecules align parallel to each other and also flow like a liquid. Building on computer simulations, they confined the liquid crystal between two glass plates coated with a light-sensitive dye.

“We exploited strong light–matter interactions between dye-coated, light-reconfigurable surfaces, and the optical properties of the liquid crystal,” Smalyukh explains.

When the researchers illuminate the top plate with linearly polarized light at constant intensity, the dye molecules rotate to align perpendicular to the direction of polarization. This reorients nearby liquid crystal molecules, and the effect propagates deeper into the bulk. However, the influence weakens with depth, so that molecules farther from the top plate are progressively less aligned.

As light travels through this gradually twisting structure, its linear polarization is transformed, becoming elliptically polarized by the time it reaches the bottom plate. The dye molecules there become aligned with this new polarization, altering the liquid crystal alignment near the bottom plate. These changes propagate back upward, influencing molecules near the top plate again.

Feedback loop

This is a feedback loop, with the top and bottom plates continuously influencing each other via the polarized light passing through the liquid crystal.

“These light-powered dynamics in confined liquid crystals leads to the emergence of particle-like topological solitons and the space–time crystallinity,” Smalyukh says.

In this environment, particle-like topological solitons emerge as stable, localized twists in the liquid crystal’s orientation that do not decay over time. Like particles, the solitons move and interact with each other while remaining intact.

Once the feedback loop is established, these solitons emerge in a repeating lattice-like pattern. This arrangement not only persists as the feedback loop continues, but is sustained by it. This is a clear sign that the system exhibits crystalline order in time and space simultaneously.

Accessible system

Having confirmed their conclusions with simulations, Zhao and Smalyukh are confident this is the first experimental demonstration of a space–time crystal. The discovery that such an exotic state can exist in a classical, room-temperature system may have important implications.

“This is the first time that such a phenomenon is observed emerging in a liquid crystalline soft matter system,” says Smalyukh. “Our study calls for a re-examining of various time-periodic phenomena to check if they meet the criteria of time-crystalline behaviour.”

Building on these results, the duo hope to broaden the scope of time crystal research beyond a purely theoretical and experimental curiosity. “This may help expand technological utility of liquid crystals, as well as expand the currently mostly fundamental focus of studies of time crystals to more applied aspects,” Smalyukh adds.

The research is described in Nature Materials.


Physicists set to decide location for next-generation Einstein Telescope

10 September 2025 at 09:30

A decade ago, on 14 September 2015, the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) in Hanford, Washington, and Livingston, Louisiana, finally detected a gravitational wave. The LIGO detectors – two L-shaped laser interferometers with 4 km-long arms – had measured tiny differences in laser beams bouncing off mirrors at the end of each arm. The variations in the length of the arms, caused by the presence of a gravitational wave, were converted into the now famous audible “chirp signal”, which indicated the final approach between two merging black holes.
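
To get a sense of the scale involved (standard numbers, not specific to this article), the strain $h$ of a gravitational wave is the fractional change in arm length,

$$h = \frac{\Delta L}{L}, \qquad \Delta L \sim hL \approx 10^{-21} \times 4\ \text{km} \approx 4\times10^{-18}\ \text{m},$$

roughly a thousandth of the diameter of a proton – which gives an idea of why the measurement is so hard.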

Since that historic detection, which led to the 2017 Nobel Prize for Physics, the LIGO detectors, together with VIRGO in Italy, have measured several hundred gravitational waves – from mergers of black holes to neutron-star collisions. More recently, they have been joined by the KAGRA detector in Japan, which is located some 200 m underground, shielding it from vibrations and environmental noise.

Yet the current number of gravitational waves could be dwarfed by what the planned Einstein Telescope (ET) would measure. This European-led, third-generation gravitational-wave detector would be built several hundred metres underground and be at least 10 times more sensitive than its second-generation counterparts, including KAGRA. Capable of “listening” to a volume of the universe a thousand times larger, the new detector would be able to spot many more sources of gravitational waves. In fact, the ET will be able to gather in a day what it took LIGO and VIRGO a decade to collect.

The ET is designed to operate in two frequency domains. The low-frequency regime – 2–40 Hz – is below current detectors’ capabilities and will let the ET pick up waves from more massive black holes. The high-frequency domain, on the other hand, would operate from 40 Hz to 10 kHz  and detect a wide variety of astrophysical sources, including merging black holes and other high-energy events. The detected signals from waves would also be much longer with the ET, lasting for hours. This would allow physicists to “tune in” much earlier as black holes or neutron stars approach each other.

Location, location, location

But all that is still a pipe dream, because the ET, which has a price tag of €2bn, is not yet fully funded and is unlikely to be ready until 2035 at the earliest. The precise costs will depend on the final location of the experiment, which is still up for grabs.

Three regions are vying to host the facility: the Italian island of Sardinia, the Belgian-German-Dutch border region and the German state of Saxony. Each candidate is currently investigating the suitability of its preferred site (see box below), the results of which will be published in a “bid book” by the end of 2026. The winning site will be picked in 2027 with construction beginning shortly after.

Other factors that will dictate where the ET is built include logistics in the host region, the presence of companies and research institutes (to build and exploit the facility) and government support. With the ET offering high-quality jobs, economic return, scientific appeal and prestige, that could give the German-Belgian-Dutch candidacy the edge given the three nations could share the cost.

Another major factor is the design of the ET. One proposal is to build it as an equilateral triangle with each side being 10 km. The other is a twin L-shaped design in which each detector has 15 km-long arms and the two detectors are located far from each other. The latter design is similar to the two LIGO over-ground detectors, which are 3000 km apart. If the “2L design” is chosen, the detector would then be built at two of the three competing sites.

The 2L design is being investigated by all three sites, but those behind the Sardinia proposal strongly favour this approach. “With the detectors properly oriented relative to each other, this design could outperform the triangular design across all key scientific objectives,” claims Domenico D’Urso, scientific director of the Italian candidacy. He points to a study by the ET collaboration in 2023 that investigated the impact of the ET design on its scientific goals. “The 2L design enables, for example, more precise localization of gravitational wave sources, enhancing sky-position reconstruction,” he says. “And it provides superior overall sensitivity.”

Where could the next-generation Einstein Telescope be built?

Three sites are vying to host the Einstein Telescope (ET), with each offering various geological advantages. Lausitz in Saxony benefits from being a former coal-mining area. “Because of this mining past, the subsurface was mapped in great detail decades ago,” says Günther Hasinger, founding director of the German Center for Astrophysics, which is currently being built in Lausitz and would house the ET if picked. The granite formation in Lausitz is also suitable for a tunnel complex because the rock is relatively dry. Not much water would need to be pumped away, causing less vibration.

Thanks to the former lead, zinc and silver mine of Sos Enattos, meanwhile, the subsurface near Nuoro in Sardinia – another potential location for the ET – is also well known. The island is on a very stable, tectonic microplate, making it seismically quiet. Above ground, the area is undeveloped and sparsely populated, further shielding the experiment from noise.

The third ET candidate, lying near the point where Belgium, Germany and the Netherlands meet, also has a hard subsurface, which is needed for the tunnels. It is topped by a softer, clay-like layer that would dampen vibrations from traffic and industry. “We are busy investigating the suitability of the subsurface and the damping capacity of the top layer,” says Wim Walk of the Dutch Center for Subatomic Physics (Nikhef), which is co-ordinating the candidacy for this location. “That research requires a lot of work, because the subsurface here has not yet been properly mapped.”

Localization is important for multimessenger astronomy. In other words, if a gravitational-wave source can be located quickly and precisely in the sky, other telescopes can be pointed towards it to observe any eventual light or other electromagnetic (EM) signals. This is what happened after LIGO detected a gravitational wave on 17 August 2017, originating from a neutron-star collision. Dozens of ground-based telescopes and space-based satellites were able to pick up a gamma-ray burst and the subsequent EM afterglow.

The triangle design, however, is favoured by the Belgian-German-Dutch consortium. It would be the Earth-based equivalent of the European Space Agency’s planned LISA space-based gravitational-wave detector, which will consist of three spacecraft in a triangular configuration and is set for launch in 2035, the same year that the ET could open. LISA would detect gravitational waves at much lower frequencies, coming, for example, from mergers of supermassive black holes.

While the Earth-based triangle design would not be able to locate the source as precisely, it would – unlike the 2L design – be able to do “null stream” measurements. These would yield  a clearer picture of the noise from the environment and the detector itself, including  “glitches”, which are bursts of noise that overlap with gravitational-wave signals. “With a non-stop influx of gravitational waves but also of noise and glitches, we need some form of automatic clean-up of the data,” says Jan Harms, a physicist at the Gran Sasso Science Institute in Italy and member of the scientific ET collaboration. “The null stream could provide that.”
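
For a triangular detector made of three identical, co-planar interferometers, the null stream is, in the idealized case, simply the sum of the three outputs,

$$s_{\text{null}}(t) = s_1(t) + s_2(t) + s_3(t),$$

in which the gravitational-wave contributions cancel by symmetry, leaving only noise and glitches to be characterized. (This is the textbook picture; in practice, calibration differences between the interferometers complicate it.)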

However, it is not clear if that null stream would be a fundamental advantage for data analysis, with Harms and colleagues thinking more work is needed. “For example, different forms of noise could be connected to each other, which would compromise the null stream,” he says. Another issue is that a detector with a null stream has not yet been realized, and that applies to the triangle design in general, “while the 2L design is well established in the scientific community”, adds D’Urso.

Backers of the triangle design see the ET as being part of a wider, global network of third-generation detectors, where the localization argument no longer matters. Indeed, the US already has plans for an above-ground successor to LIGO. Known as the Cosmic Explorer, it would feature two L-shaped detectors with arm lengths of up to 40 km. But with US politics in turmoil, it is questionable how realistic these plans are.

Matthew Evans, a physicist at the Massachusetts Institute of Technology and member of the LIGO collaboration, recognizes the “network argument”. “I think that the global gravitational waves community are double counting in some sense,” he says. Yet for Evans it is all about the exciting discoveries that could be made with a next-generation gravitational-wave detector. “The best science will be done with ET as 2Ls,” he says.


Garbage in, garbage out: why the success of AI depends on good data

1 September 2025 at 12:00

Artificial intelligence (AI) is fast becoming the new “Marmite”. Like the salty spread that polarizes taste-buds, you either love AI or you hate it. To some, AI is miraculous, to others it’s threatening or scary. But one thing is for sure – AI is here to stay, so we had better get used to it.

In many respects, AI is very similar to other data-analytics solutions in that how it works depends on two things. One is the quality of the input data. The other is the integrity of the user to ensure that the outputs are fit for purpose.

Previously a niche tool for specialists, AI is now widely available for general-purpose use, in particular through Generative AI (GenAI) tools. Also known as Large Language Models (LLMs), these are accessible through, for example, OpenAI’s ChatGPT, Microsoft Copilot, Anthropic’s Claude, Adobe Firefly or Google Gemini.

GenAI has become possible thanks to the availability of vast quantities of digitized data and significant advances in computing power. Based on neural networks, models of this size would in fact have been impossible without these two fundamental ingredients.

GenAI is incredibly powerful when it comes to searching and summarizing large volumes of unstructured text. It exploits unfathomable amounts of data and is getting better all the time, offering users significant benefits in terms of efficiency and labour saving.

Many people now use it routinely for writing meeting minutes, composing letters and e-mails, and summarizing the content of multiple documents. AI can also tackle complex problems that would be difficult for humans to solve, such as climate modelling, drug discovery and protein-structure prediction.

I’d also like to give a shout out to tools such as Microsoft Live Captions and Google Translate, which help people from different locations and cultures to communicate. But like all shiny new things, AI comes with caveats, which we should bear in mind when using such tools.

User beware

LLMs, by their very nature, have been trained on historical data. They can’t therefore tell you exactly what may happen in the future, or indeed what may have happened since the model was originally trained. Models can also be constrained in their answers.

Take the Chinese AI app DeepSeek. When the BBC asked it what had happened at Tiananmen Square in Beijing on 4 June 1989 – when Chinese troops cracked down on protestors – the Chatbot’s answer was suppressed. Now, this is a very obvious piece of information control, but subtler instances of censorship will be harder to spot.

We also need to be conscious of model bias. At least some of the training data will probably come from social media and public chat forums such as X, Facebook and Reddit. Trouble is, we can’t know all the nuances of the data that models have been trained on – or the inherent biases that may arise from this.

One example of unfair gender bias was when Amazon developed an AI recruiting tool. Based on 10 years’ worth of CVs – mostly from men – the tool was found to favour men. Thankfully, Amazon ditched it. But then there was Apple’s gender-biased credit-card algorithm that led to men being given higher credit limits than women of similar ratings.

Another problem with AI is that it sometimes acts as a black box, making it hard for us to understand how, why or on what grounds it arrived at a certain decision. Think about those online Captcha tests we have to take when accessing online accounts. They often present us with a street scene and ask us to select those parts of the image containing a traffic light.

The tests are designed to distinguish between humans and computers or bots – the expectation being that AI can’t consistently recognize traffic lights. However, AI-based advanced driver assist systems (ADAS) presumably perform this function seamlessly on our roads. If not, surely drivers are being put at risk?

A colleague of mine, who drives an electric car that happens to share its name with a well-known physicist, confided that the ADAS in his car becomes unresponsive, especially when at traffic lights with filter arrows or multiple sets of traffic lights. So what exactly is going on with ADAS? Does anyone know?

Caution needed

My message when it comes to AI is simple: be careful what you ask for. Many GenAI applications will store user prompts and conversation histories and will likely use this data for training future models. Once you enter your data, there’s no guarantee it’ll ever be deleted. So think carefully before sharing any personal data, such as medical or financial information. It also pays to keep prompts non-specific (avoiding using your name or date of birth) so that they cannot be traced directly to you.

Democratization of AI is a great enabler and it’s easy for people to apply it without an in-depth understanding of what’s going on under the hood. But we should be checking AI-generated output before we use it to make important decisions and we should be careful of the personal information we divulge.

It’s easy to become complacent when we are not doing all the legwork. We are reminded under the terms of use that “AI can make mistakes”, but I wonder what will happen if models start consuming AI-generated erroneous data. Just as with other data-analytics problems, AI suffers from the old adage of “garbage in, garbage out”.

But sometimes I fear it’s even worse than that. We’ll need a collective vigilance to avoid AI being turned into “garbage in, garbage squared”.

The post Garbage in, garbage out: why the success of AI depends on good data appeared first on Physics World.

Nano-engineered flyers could soon explore Earth’s mesosphere

21 August 2025 at 11:00

Small levitating platforms that can stay airborne indefinitely at very high altitudes have been developed by researchers in the US and Brazil. Using photophoresis, the devices could be adapted to carry small payloads in the mesosphere, where flight is notoriously difficult. The technique could even be used in the atmospheres of moons and other planets.

Photophoresis occurs when light illuminates one side of a particle, heating it slightly more than the other. The resulting temperature difference in the surrounding gas means that molecules rebound with more energy on the warmer side than the cooler side – producing a tiny but measurable push.

For most of the time since its discovery in the 1870s, the effect was little more than a curiosity. But with more recent advances in nanotechnology, researchers have begun to explore how photophoresis could be put to practical use.

“In 2010, my graduate advisor, David Keith, had previously written a paper that described photophoresis as a way of flying microscopic devices in the atmosphere, and we wanted to see if larger devices could carry useful payloads,” explains Ben Schafer at Harvard University, who led the research. “At the same time, [Igor Bargatin’s group at the University of Pennsylvania] was doing fascinating work on larger devices that generated photophoretic forces.”

Carrying payloads

These studies considered a wide variety of designs, from artificial aerosols to thin disks with surfaces engineered to boost the effect. Building on this earlier work, Schafer’s team investigated how lightweight photophoretic devices could be optimized to carry payloads in the mesosphere: the atmospheric layer at about 50–80 km above Earth’s surface, where the sparsity of air creates notoriously difficult flight conditions for conventional aircraft or balloons.

“We used these results to fabricate structures that can fly in near-space conditions, namely, under less than the illumination intensity of sunlight and at the same pressures as the mesosphere,” Schafer explains.

The team’s design consists of two alumina membranes – each 100 nm thick and perforated with nanoscale holes. The membranes are positioned a short distance apart and connected by ligaments. In addition, the bottom membrane is coated with a light-absorbing chromium layer, causing it to heat the surrounding air more than the top layer as it absorbs incoming sunlight.

As a result, air molecules move preferentially from the cooler top side toward the warmer bottom side through the membranes’ perforations: a photophoretic process known as thermal transpiration. This one-directional flow creates a pressure imbalance across the device, generating upward thrust. If this force exceeds the device’s weight, it can levitate and even carry a payload. The team also suggests that the devices could be kept aloft at night using the infrared radiation emitted by Earth into space.

Simulations and experiments

Through a combination of simulations and experiments, Schafer and his colleagues examined how factors such as device size, hole density, and ligament distribution could be tuned to maximize thrust at different mesospheric altitudes – where both pressure and temperature can vary dramatically. They showed that platforms 10 cm in radius could feasibly remain aloft throughout the mesosphere, powered by sunlight at intensities lower than those actually present there.

Based on these results, the team created a feasible design for a photophoretic flyer with a 3 cm radius, capable of carrying a 10 mg payload indefinitely at an altitude of 75 km. With an optimized design, they predict payloads as large as 100 mg could be supported during daylight.
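As a rough, back-of-the-envelope check (mine, not the team’s), the pressure difference that thermal transpiration must sustain to levitate such a flyer follows from its weight and area. The sketch below assumes solid 100 nm alumina films of bulk density and ignores the perforations and the thin chromium coating.

```python
# Back-of-envelope check (illustrative, not from the paper): what pressure
# difference must thermal transpiration sustain across a 3 cm-radius flyer
# carrying a 10 mg payload? Membrane mass assumes solid 100 nm alumina films
# (bulk density ~3950 kg/m^3), ignoring the perforations and the chromium layer.
import math

g = 9.81                        # m/s^2
radius = 0.03                   # m
area = math.pi * radius**2      # m^2

membrane_thickness = 100e-9     # m, each of the two alumina membranes
alumina_density = 3950          # kg/m^3 (assumed bulk value)
membrane_mass = 2 * area * membrane_thickness * alumina_density

payload_mass = 10e-6            # kg (10 mg payload)
total_weight = (membrane_mass + payload_mass) * g

required_pressure = total_weight / area
print(f"membrane mass ~{membrane_mass*1e6:.1f} mg, required delta-p ~{required_pressure*1e3:.0f} mPa")
# -> roughly 2 mg of membrane and a required pressure difference of a few tens
# of millipascals, well below the ambient pressure of a few pascals at 75 km
```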

“These payloads could support a lightweight communications payload that could transmit data directly to the ground from the mesosphere,” Schafer explains. “Small structures without payloads could fly for weeks or months without falling out of the mesosphere.”

With this proof of concept, the researchers are now eager to see photophoretic flight tested in real mesospheric conditions. “Because there’s nothing else that can sustainably fly in the mesosphere, we could use these devices to collect ground-breaking atmospheric data to benefit meteorology, perform telecommunications, and predict space weather,” Schafer says.

Requiring no fuel, batteries, or solar panels, the devices would be completely sustainable. And the team’s ambitions go beyond Earth: with the ability to stay aloft in any low-pressure atmosphere with sufficient light, photophoretic flight could also provide a valuable new approach to exploring the atmosphere of Mars.

The research is described in Nature.

The post Nano-engineered flyers could soon explore Earth’s mesosphere appeared first on Physics World.

Deep-blue LEDs get a super-bright, non-toxic boost

21 August 2025 at 08:00

A team led by researchers at Rutgers University in the US has discovered a new semiconductor that emits bright, deep-blue light. The hybrid copper iodide material is stable, non-toxic, can be processed in solution and has already been integrated into a light-emitting diode (LED). According to its developers, it could find applications in solid-state lighting and display technologies.

Creating white light for solid-state lighting and full-colour displays requires bright, pure sources of red, green and blue light. While stable materials that efficiently emit red or green light are relatively easy to produce, those that generate blue light (especially deep-blue light) are much more challenging. Existing blue-light emitters based on organic materials are unstable, meaning they lose their colour quality over time. Alternatives based on lead-halide perovskites or cadmium-containing colloidal quantum dots are more stable, but also toxic for humans and the environment.

Hybrid copper-halide-based emitters promise the best of both worlds, being both non-toxic and stable. They are also inexpensive, with tuneable optical properties and a high luminescence efficiency, meaning they are good at converting power into visible light.

Researchers have already used a pure inorganic copper iodide material, Cs3Cu2I5, to make deep-blue LEDs. This material emits light at the ideal wavelength of 445 nm, is robust to heat and moisture, and re-emits 87–95% of the excitation photons it absorbs as luminescence photons, giving it a high photoluminescence quantum yield (PLQY).

However, the maximum ratio of photon output to electron input (known as the maximum external quantum efficiency, EQEmax) for LEDs based on this material is very low, at just 1.02%.

Strong deep-blue photoluminescence

In the new work, a team led by Rutgers materials chemist Jing Li developed a hybrid copper iodide with the chemical formula 1D-Cu4I8(Hdabco)4 (CuI(Hda)), where Hdabco is 1,4-diazabicyclo-[2.2.2]octane-1-ium. This material emits strong deep-blue light at 449 nm with a PLQY near unity (99.6%).

Li and colleagues opted to use CuI(Hda) as the sole light-emitting layer and built a thin-film LED out of it using a solution process. The new device has an EQEmax of 12.6% with colour coordinates (0.147, 0.087) and a peak brightness of around 4000 cd/m². It is also relatively stable, with an operational half-lifetime (T50) of approximately 204 hours under ambient conditions. These figures mean that its performance rivals the best existing solution-processed deep-blue LEDs, Li says. The team also fabricated a large-area device measuring 4 cm² to demonstrate that the material could be used in real-world applications.
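To see why a near-unity PLQY does not automatically translate into a near-unity EQE, it helps to picture the external quantum efficiency as a product of the emitter’s own radiative efficiency, how effectively injected charges form excitons on the emitter, and how much of the generated light escapes the device. The factor values in the sketch below are illustrative assumptions, not numbers from the paper.

```python
# Illustrative sketch (factor values are assumptions, not from the paper):
# external quantum efficiency (photons out per electron injected) as a product
# of the emitter's radiative efficiency (PLQY), charge/exciton utilization and
# light out-coupling.
def external_quantum_efficiency(plqy, exciton_utilization, outcoupling):
    return plqy * exciton_utilization * outcoupling

# A near-unity PLQY emitter can still yield only ~13% EQE once electrical and
# optical losses are included:
print(external_quantum_efficiency(plqy=0.996, exciton_utilization=0.63, outcoupling=0.20))
# -> ~0.126, i.e. roughly the 12.6% EQEmax reported for the CuI(Hda) device
```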

Interfacial hydrogen-bond passivation strategy

The low efficiency of previous such devices is partly due to the fact that charge carriers (electrons and holes) in these materials rapidly recombine in a non-radiative way, typically because of surface and bulk defects, or traps. The charge carriers also have a low radiative recombination rate, which is associated with a small exciton (electron-hole pair) binding energy.

Li and colleagues overcame this problem in their new device thanks to a dual interfacial hydrogen-bond passivation (DIHP) strategy that involves introducing hydrogen bonds via an ultrathin sheet of poly(methyl methacrylate) (PMMA) and a carbazole-phosphonic acid-based self-assembled monolayer (Ac2PACz) at the two interfaces of the CuI(Hda) emissive layer. This effectively passivates both heterojunctions of the hybrid copper-iodide light-emitting layer and optimizes exciton binding energies. “Such a synergistic surface modification dramatically boosts the performance of the deep-blue LED by a factor of fourfold,” explains Li.

According to Li, the study suggests a promising route for developing blue emitters that are both energy-efficient and environmentally benign, without compromising on performance. “Through the fabrication of blue LEDs using a low cost, stable and nontoxic material capable of delivering efficient deep-blue light, we address major energy and ecological limitations found in other types of solution-processable emitters,” she tells Physics World.

Li adds that the hydrogen-bonding passivation technique is not limited to the material studied in this work. It could also be applied to minimize interfacial energy losses in a wide range of other solution-based, light-emitting optoelectronic systems.

The team is now pursuing strategies for developing other solution-processable, high-performance hybrid copper iodide-based emitter materials similar to CuI(Hda). “Our goal is to further enhance the efficiency and extend the operational lifetime of LEDs utilizing these next-generation materials,” says Li.

The present work is detailed in Nature.

The post Deep-blue LEDs get a super-bright, non-toxic boost appeared first on Physics World.

NASA launches TRACERS mission to study Earth’s ‘magnetic shield’

13 August 2025 at 11:02

NASA has successfully launched a mission to explore the interactions between the Sun’s and Earth’s magnetic fields. The Tandem Reconnection and Cusp Electrodynamics Reconnaissance Satellites (TRACERS) craft was sent into low-Earth orbit on 23 July from Vandenberg Space Force Base in California by a SpaceX Falcon 9 rocket. Following a month of calibration, the twin-satellite mission is expected to operate for a year.

The spacecraft will observe particles and electromagnetic fields in the Earth’s northern magnetic “cusp region”, which encircles the North Pole where the Earth’s magnetic field lines curve down toward Earth.

This unique vantage point allows researchers to study how magnetic reconnection — when field lines connect and explosively reconfigure — affects the space environment. Such observations will help researchers understand how processes change over both space and time.

The two satellites will collect data from over 3000 cusp crossings during the one-year mission with the information being used to understand space-weather phenomena that can disrupt satellite operations, communications and power grids on Earth.

Each nearly identical octagonal satellite – weighing less than 200 kg – features six instruments, including magnetometers, electric-field instruments and devices to measure the energy of ions and electrons in the plasma around the spacecraft.

The pair will operate in a Sun-synchronous orbit about 590 km above ground, with the satellites following one behind the other in close separation, passing through regions of space at least 10 seconds apart.
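To put that 10 second gap in perspective, here is a quick estimate (mine, not NASA’s) of the along-track separation it implies at that altitude.

```python
# Rough estimate (illustrative): the along-track distance between the two
# TRACERS satellites if they pass the same point 10 s apart in a ~590 km orbit.
import math

GM_EARTH = 3.986e14      # m^3/s^2
R_EARTH = 6.371e6        # m
altitude = 590e3         # m

r = R_EARTH + altitude
v = math.sqrt(GM_EARTH / r)          # circular orbital speed
print(f"orbital speed ~{v/1e3:.1f} km/s, 10 s gap ~{10*v/1e3:.0f} km along track")
# -> roughly 76 km between the two spacecraft
```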

“TRACERS is an exciting mission,” says Stephen Fuselier from the Southwest Research Institute in Texas, who is the mission’s deputy principal investigator. “The data from that single pass through the cusp were amazing. We can’t wait to get the data from thousands of cusp passes.”

The post NASA launches TRACERS mission to study Earth’s ‘magnetic shield’ appeared first on Physics World.

Jet stream study set to improve future climate predictions

13 August 2025 at 09:35
Driven by global warming The researchers identified which factors influence the jet stream in the southern hemisphere. (Courtesy: Leipzig University/Office for University Communications)

An international team of meteorologists has found that half of the recently observed shifts in the southern hemisphere’s jet stream are directly attributable to global warming – and pioneered a novel statistical method to pave the way for better climate predictions in the future.

Prompted by recent changes in the behaviour of the southern hemisphere’s summertime eddy-driven jet (EDJ) – a band of strong westerly winds located at a latitude of between 30°S and 60°S – the Leipzig University-led team sifted through historical measurement data to show that wind speeds in the EDJ have increased, while the wind belt has moved consistently toward the South Pole. They then used a range of innovative methods to demonstrate that 50% of these shifts are directly attributable to global warming, with the remainder triggered by other climate-related changes, including warming of the tropical Pacific and the upper tropical atmosphere, and the strengthening of winds in the stratosphere.

“We found that human fingerprints on the EDJ are already showing,” says lead author Julia Mindlin, research fellow at Leipzig University’s Institute for Meteorology. “Global warming, springtime changes in stratospheric winds linked to ozone depletion, and tropical ocean warming are all influencing the jet’s strength and position.”

“Interestingly, the response isn’t uniform, it varies depending on where you look, and climate models are underestimating how strong the jet is becoming. That opens up new questions about what’s missing in our models and where we need to dig deeper,” she adds.

Storyline approach

Rather than collecting new data, the researchers used existing, high-quality observational and reanalysis datasets – including the long-running HadCRUT5 surface temperature data, produced by the UK Met Office and the University of East Anglia, and a variety of sea surface temperature (SST) products including HadISST, ERSSTv5 and COBE.

“We also relied on something called reanalysis data, which is a very robust ‘best guess’ of what the atmosphere was doing at any given time. It is produced by blending real observations with physics-based models to reconstruct a detailed picture of the atmosphere, going back decades,” says Mindlin.

To interpret the data, the team – which also included researchers at the University of Reading, the University of Buenos Aires and the Jülich Supercomputing Centre – used a statistical approach called causal inference to help isolate the effects of specific climate drivers. They also employed “storyline” techniques to explore multiple plausible futures rather than simply averaging qualitatively different climate responses.

“These tools offer a way to incorporate physical understanding while accounting for uncertainty, making the analysis both rigorous and policy-relevant,” says Mindlin.

Future blueprint

For Mindlin, these findings are important for several reasons. First, they demonstrate “that the changes predicted by theory and climate models in response to human activity are already observable”. Second, she notes that they “help us better understand the physical mechanisms that drive climate change, especially the role of atmospheric circulation”.

“Third, our methodology provides a blueprint for future studies, both in the southern hemisphere and in other regions where eddy-driven jets play a role in shaping climate and weather patterns,” she says. “By identifying where and why models diverge from observations, our work also contributes to improving future projections and enhances our ability to design more targeted model experiments or theoretical frameworks.”

The team is now focused on improving understanding of how extreme weather events, like droughts, heatwaves and floods, are likely to change in a warming world. Since these events are closely linked to atmospheric circulation, Mindlin stresses that it is critical to understand how circulation itself is evolving under different climate drivers.

One of the team’s current areas of focus is drought in South America. Mindlin notes that this is especially challenging due to the short and sparse observational record in the region, and the fact that drought is a complex phenomenon that operates across multiple timescales.

“Studying climate change is inherently difficult – we have only one Earth, and future outcomes depend heavily on human choices,” she says. “That’s why we employ ‘storylines’ as a methodology, allowing us to explore multiple physically plausible futures in a way that respects uncertainty while supporting actionable insight.”

The results are reported in the Proceedings of the National Academy of Sciences.

The post Jet stream study set to improve future climate predictions appeared first on Physics World.

Physicists get dark excitons under control

12 August 2025 at 14:30
Dark exciton control: Researchers assemble a large cryostat in an experimental physics laboratory, preparing for ultra-low temperature experiments with quantum dots on a semiconductor chip. (Courtesy: Universität Innsbruck)

Physicists in Austria and Germany have developed a means of controlling quasiparticles known as dark excitons in semiconductor quantum dots for the first time. The new technique could be used to generate single pairs of entangled photons on demand, with potential applications in quantum information storage and communication.

Excitons are bound pairs of negatively charged electrons and positively charged “holes”. When these electrons and holes have opposite spins, they recombine easily, emitting a photon in the process. Excitons of this type are known as “bright” excitons. When the electrons and holes have parallel spins, however, direct recombination by emitting a photon is not possible because it would violate the conservation of spin angular momentum. This type of exciton is therefore known as a “dark” exciton.

Because dark excitons are not optically active, they have much longer lifetimes than their bright cousins. For quantum information specialists, this is an attractive quality, because it means that dark excitons can store quantum states – and thus the information contained within these states – for much longer. “This information can then be released at a later time and used in quantum communication applications, such as optical quantum computing, secure communication via quantum key distribution (QKD) and quantum information distribution in general,” says Gregor Weihs, a quantum photonics expert at the Universität Innsbruck, Austria who led the new study.

The problem is that dark excitons are difficult to create and control. In semiconductor quantum dots, for example, Weihs explains that dark excitons tend to be generated randomly, for example when a quantum dot in a higher-energy state decays into a lower-energy state.

Chirped laser pulses lead to reversible exciton production

In the new work, which is detailed in Science Advances, the researchers showed that they could control the production of dark excitons in quantum dots by using laser pulses that are chirped, meaning that the frequency (or colour) of the laser light varies within the pulse. Such chirped pulses, Weihs explains, can turn one quantum dot state into another.

“We first bring the quantum dot to the (bright) biexciton state using a conventional technique and then apply a (storage) chirped laser pulse that turns this biexciton occupation (adiabatically) into a dark state,” he says. “The storage pulse is negatively chirped – its frequency decreases with time, or in terms of colour, it turns redder.” Importantly, the process is reversible: “To convert the dark exciton back into a bright state, we apply a (positively chirped) retrieval pulse to it,” Weihs says.
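For readers unfamiliar with the term, here is a minimal sketch of what a negative chirp looks like for a linearly chirped pulse (the numbers are illustrative, not the experimental parameters).

```python
# Minimal sketch of a linearly chirped pulse (illustrative values, not the
# experimental parameters). For E(t) ~ exp[i*(w0*t + 0.5*b*t**2)] the
# instantaneous angular frequency is w(t) = w0 + b*t, so a negative chirp
# rate b makes the pulse "redder" as time goes on.
import numpy as np

w0 = 2 * np.pi * 375e12          # carrier frequency (~800 nm light), assumed
b = -1e27                        # chirp rate in rad/s^2; negative => frequency falls
t = np.linspace(-200e-15, 200e-15, 5)   # femtosecond timescale

print((w0 + b * t) / (2 * np.pi * 1e12))  # instantaneous frequency in THz, decreasing across the pulse
```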

One possible application for the new technique would be to generate single pairs of entangled photons on demand – the starting point for many quantum communication protocols. Importantly, Weihs adds that this should be possible with almost any type of quantum dot, whereas an alternative method known as polarization entanglement works for only a few quantum dot types with very special properties. “For example, it could be used to create ‘time-bin’ entangled photon pairs,” he tells Physics World. “Time-bin entanglement is particularly suited to transmitting quantum information through optical fibres because the quantum state stays preserved over very long distances.”

The study’s lead author, Florian Kappe, and his colleague Vikas Remesh describe the project as “a challenging but exciting and rewarding experience” that combined theoretical and experimental tools. “The nice thing, we feel, is that on this journey, we developed a number of optical excitation methods for quantum dots for various applications,” they say via e-mail.

The physicists are now studying the coherence time of the dark exciton states, which is an important property in determining how long they can store quantum information. According to Weihs, the results from this work could make it possible to generate higher-dimensional time-bin entangled photon pairs – for example, pairs of quantum states called qutrits that have three possible values.

“Thinking beyond this, we imagine that the technique could even be applied to multi-excitonic complexes in quantum dot molecules,” he adds. “This could possibly result in multi-photon entanglement, such as so-called GHZ (Greenberger-Horne-Zeilinger) states, which are an important resource in multiparty quantum communication scenarios.”

The post Physicists get dark excitons under control appeared first on Physics World.

MOND versus dark matter: the clash for cosmology’s soul

12 August 2025 at 10:00

The clash between dark matter and modified Newtonian dynamics (MOND) can get a little heated at times. On one side is the vast majority of astronomers who vigorously support the concept of dark matter and its foundational place in cosmology’s standard model. On the other side is the minority – a group of rebels convinced that tweaking the laws of gravity rather than introducing a new particle is the answer to explaining the composition of our universe.

Both sides argue passionately and persuasively, pointing out evidence that supports their view while discrediting the other side. Often it seems to come down to a matter of perspective – both sides use the same results as evidence for their cause. For the rest of us, how can we tell who is correct?

As long as we still haven’t identified what dark matter is made of, there will remain some ambiguity, leaving a door ajar for MOND. However, it’s a door that dark-matter researchers hope will be slammed shut in the not-too-distant future.

Crunch time for WIMPs

In part two of this series, where I looked at the latest proposals from dark-matter scientists, we met University College London’s Chamkaur Ghag, who is the spokesperson for LUX-ZEPLIN. This experiment is searching for “weakly interacting massive particles” or WIMPs – the leading dark-matter candidate – down a former gold mine in South Dakota, US. A huge seven-tonne tank of liquid xenon, surrounded by an array of photomultiplier tubes, watches patiently for the flashes of light that may occur when a passing WIMP interacts with a xenon atom.

Running since 2021, the experiment just released the results of its most recent search through 280 days of data, which uncovered no evidence of WIMPs above a mass of 9 GeV/c² (Phys. Rev. Lett. 135 011802). These results help to narrow the range of possible dark-matter theories, as the new limits impose constraints on WIMP parameters that are almost five times more stringent than the previous best. Another experiment at the INFN Laboratori Nazionali del Gran Sasso in Italy, called XENONnT, is also hoping to spot the elusive WIMPs – in its case by looking for rare nuclear recoil interactions in a liquid xenon target chamber.

Deep underground The XENON Dark Matter Project is hosted by the INFN Gran Sasso National Laboratory in Italy. The latest detector in this programme is the XENONnT (pictured) which uses liquid xenon to search for dark-matter particles. (Courtesy: XENON Collaboration)

LUX-ZEPLIN and XENONnT will cover half the parameter space of masses and energies that WIMPs could in theory have, but Ghag is more excited about a forthcoming, next-generation xenon-based WIMP detector dubbed XLZD that might settle the matter. XLZD brings together both the LUX-ZEPLIN and XENONnT collaborations to design and build a single, common multi-tonne experiment that will hopefully leave WIMPs with no place to hide. “XLZD will probably be the final experiment of this type,” says Ghag. “It’s designed to be much larger and more sensitive, and is effectively the definitive experiment.”

If WIMPs do exist, then this detector will find them, and it could happen on UK shores. Several locations around the world are in the running to host the experiment, including Boulby Mine Underground Laboratory near Whitby Bay on the north-east coast of England. If everything goes to plan, XLZD – which will contain between 40 and 100 tonnes of xenon – will be up and running and providing answers by the 2030s. It will be a huge moment for dark matter, and a nervous one for its researchers.

“I think none of us are ever going to fully believe it completely until we’ve found [a WIMP] and can reproduce it in a lab and show that it’s not just some abstract stuff that we call dark matter, but that it is a particular particle that we can identify,” says astronomer Richard Massey of the University of Durham, UK.

But if WIMPs are in fact a dead-end, then it’s not a complete death-blow for dark matter – there are other dark-matter candidates and other dark-matter experiments. For example, the Forward Search Experiment (FASER) at CERN’s Large Hadron Collider is looking for less massive dark-matter particles such as axions (read more about them in part 2). However, WIMPs have been a mainstay of dark-matter models since the 1980s. If the xenon-based experiments turn up empty-handed it will be a huge blow, and the door will creak open just a little bit more for MOND.

Galactic frontier

MOND’s battleground isn’t in particle detectors – it’s in the outskirts of galaxies and galaxy clusters, and its proof lies in the history of how our universe formed. This is dark matter’s playground too, with the popular models for how galaxies grow being based on a universe in which dark matter forms 85% of all matter. So it’s out in the depths of space where the two models clash.

The current standard model of cosmology describes how the growth of the large-scale structure of the universe, over the 13.8 billion years of cosmic history since the Big Bang, is influenced by a combination of dark matter and dark energy (responsible for the accelerated expansion of the universe). Essentially, density fluctuations in the cosmic microwave background (CMB) radiation reflect the clumping of dark matter in the very early universe. As the cosmos aged, these clumps thinned out into the cosmic web of matter. This web is a universe-spanning network of dark-matter filaments along which matter congregates, separated by voids that are far less densely packed. Galaxies can form inside “dark matter haloes”, and at the densest points in the dark-matter filaments, galaxy clusters coalesce.

Simulations in this paradigm – known as lambda cold dark matter (ΛCDM) – suggest that galaxy and galaxy-cluster formation should be a slow process, with small galaxies forming first and gradually merging over billions of years to build up into the more massive galaxies that we see in the universe today. And it works – kind of. Recently, the James Webb Space Telescope (JWST) peered back in time to between just 300 and 400 million years after the Big Bang and found the universe to be populated by tiny galaxies perhaps just a thousand or so light-years across (ApJ 970 31). This is as expected, and over time they would grow and merge into larger galaxies.

1 Step back in time

(a) Infrared image showing thousands of stars and galaxies. (Courtesy: NASA/ESA/CSA/STScI/ Brant Robertson, UC Santa Cruz/ Ben Johnson, CfA/ Sandro Tacchella, University of Cambridge/ Phill Cargile, CfA)

(b) Graph of brightness versus wavelength of light, showing a clear peak at roughly 1.8 microns. (Courtesy: NASA/ESA/CSA/ Joseph Olmsted, STScI/ S Carniani, Scuola Normale Superiore/ JADES Collaboration)

Data from the James Webb Space Telescope (JWST) form the basis of the JWST Advanced Deep Extragalactic Survey (JADES). (a) This infrared image from the JWST’s NIRCam highlights galaxy JADES-GS-z14-0. (b) The JWST’s NIRSpec (Near-Infrared Spectrograph) obtained this spectrum of JADES-GS-z14-0. A galaxy’s redshift can be determined from the location of a critical wavelength known as the Lyman-alpha break. For JADES-GS-z14-0 the redshift value is 14.32 (+0.08/–0.20), making it the second most distant galaxy known at less than 300 million years after the Big Bang. The current record holder, as of August 2025, is MoM-z14, which has a redshift of 14.4 (+0.02/–0.02), placing it less than 280 million years after the Big Bang (arXiv:2505.11263). Both galaxies belong to an era referred to as the “cosmic dawn”, following the epoch of reionization, when the universe became transparent to light. JADES-GS-z14-0 is particularly interesting to researchers not just because of its distance, but also because it is very bright. Indeed, it is much more intrinsically luminous and massive than expected for a galaxy that formed so soon after the Big Bang, raising more questions on the evolution of stars and galaxies in the early universe.
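As a quick consistency check on the numbers in the caption (my arithmetic, not part of the JADES analysis), the observed position of the Lyman-alpha break follows directly from the redshift.

```python
# Quick check (illustrative): where the Lyman-alpha break should sit for
# JADES-GS-z14-0. Rest-frame Lyman-alpha lies at 121.6 nm; at redshift z it is
# observed at (1 + z) x 121.6 nm.
lyman_alpha_rest_nm = 121.6
z = 14.32                                 # redshift reported for JADES-GS-z14-0
observed_micron = lyman_alpha_rest_nm * (1 + z) / 1000.0
print(f"{observed_micron:.2f} micron")    # ~1.86 micron, matching the break seen in the NIRSpec spectrum
```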

Yet the deeper we push into the universe, the more we observe challenges to the ΛCDM model, which ultimately threatens the very existence of dark matter. For example, those early galaxies that the JWST has observed, while being quite small, are also surprisingly bright – more so than ΛCDM predicts. This has been attributed to an initial mass function (IMF – the property that determines the average mass of stars that form) that skews more towards higher-mass stars and therefore more luminous stars than today. It does sound reasonable, except that astronomers still don’t understand why the IMF is what it is today (favouring the smallest stars; massive stars are rare), never mind what it might have been over 13 billion years ago.

Not everyone is convinced, and this is compounded by slightly later galaxies, seen around a billion years after the Big Bang, which continue the trend of being more luminous and more massive than expected. Indeed, some of these galaxies sport truly enormous black holes, hundreds of times more massive than the black hole at the heart of our Milky Way. Just a couple of billion years later, strikingly large galaxy clusters are already present, earlier than one would have surmised with ΛCDM.

The fall of ΛCDM?

Astrophysicist and MOND advocate Pavel Kroupa, from the University of Bonn in Germany, highlights giant elliptical galaxies in the early universe as an example of what he sees as a divergence from ΛCDM.

“We know from observations that the massive elliptical galaxies formed on shorter timescales than the less massive ellipticals,” he explains. This phenomenon has been referred to as “downsizing”, and Kroupa declares it is “a big problem for ΛCDM” because the model says that “the big galaxies take longer to form, but what we see is exactly the opposite”.

To quantify this problem, a 2020 study (MNRAS 498 5581) by Australian astronomer Sabine Bellstedt and colleagues showed that half the mass in present-day elliptical galaxies was in place 11 billion years ago, compared with other galaxy types that only accrued half their mass on average about 6 billion years ago. The smallest galaxies only accrued that mass as recently as 4 billion years ago, in apparent contravention of ΛCDM.

Observations (ApJ 905 40) of a giant elliptical galaxy catalogued as C1-23152, which we see as it existed 12 billion years ago, show that it formed 200 billion solar masses’ worth of stars in just 450 million years – a huge firestorm of star formation that ΛCDM simulations just can’t explain. Perhaps it is an outlier – we’ve only sampled a few parts of the sky, not conducted a comprehensive census yet. But as astronomers probe these cosmic depths more extensively, such explanations begin to wear thin.

Kroupa argues that by replacing dark matter with MOND, such giant early elliptical galaxies suddenly make sense. Working with Robin Eappen, a PhD student at Charles University in Prague, he modelled a giant gas cloud in the very early universe collapsing under gravity according to MOND, rather than in the presence of dark matter.

“It is just stunning that the time [of formation of such a large elliptical] comes out exactly right,” says Kroupa. “The more massive cloud collapses faster on exactly the correct timescale, compared to the less massive cloud that collapses slower. So when we look at an elliptical galaxy, we know that thing formed from MOND and nothing else.”

Elliptical galaxies are not the only thing with a size problem. In 2021 Alexia Lopez, a PhD student at the University of Central Lancashire, UK, discovered a “Giant Arc” of galaxies spanning 3.3 billion light-years, some 9.2 billion light-years away. And in 2023 Lopez spotted another gigantic structure, a “Big Ring” (shaped more like a coil) of galaxies 1.3 billion light-years in diameter, but with a circumference of about 4 billion light-years. At the opposite extreme of these giant structures are the massive under-dense voids that take up space between the filaments of the cosmic web. The KBC Void (sometimes called the “Local Hole”), for example, is about two billion light-years across, and the Milky Way, among a host of other galaxies, sits inside it. The trouble is, simulations in ΛCDM, with dark matter at the heart of it, cannot replicate structures and voids this big.

“We live in this huge under-density; we’re not at the centre of it but we are within it and such an under-density is completely impossible in ΛCDM,” says Kroupa, before declaring, “Honestly, it’s not worthwhile to talk about the ΛCDM model anymore.”

A bohemian model

Such fighting talk is dismissed by dark-matter astronomers because although there are obviously deficiencies in the ΛCDM model, it does such a good job of explaining so many other things. If we’re to kill ΛCDM because it cannot explain a few large ellipticals or some overly large galaxy groups or voids, then there needs to be a new model that can explain not only these anomalies, but also everything else that ΛCDM does explain.

“Ultimately we need to explain all the observations, and some of those MOND does better and some of those ΛCDM does better, so it’s how you weigh those different baskets,” says Stacy McGaugh, a MOND researcher from Case Western Reserve University in the US.

As it happens, Kroupa and his Bonn colleague Jan Pflamm-Altenburg are working on a new model that they think has what it takes to overthrow dark matter and the broader ΛCDM paradigm. They call it the Bohemian model (the name has a double meaning – Kroupa is originally from Czechia); it incorporates MOND as its main pillar, and Kroupa describes the results they are getting from their simulations in this paradigm as “stunning” (A&A 698 A167).

But Kroupa admits that not everybody will be happy to see it published. “If it’s published, a lot of experts at Ivy League universities will say it’s all completely impossible,” he says. “But I know for a fact that there is part of the community, the ‘bright part’ as I call them, which is just itching to have a completely different model.”

Kroupa is staying tight-lipped on the precise details of his new model, but says that according to simulations the puzzle of large-scale structure forming earlier than expected, and growing larger faster than expected, is answered by the Bohemian model. “These structures [such as the Giant Arc and the KBC Void] are so radical that they are not possible in the ΛCDM model,” he says. “However, they pop right out of this Bohemian model.”

Binary battle

Whether you believe Kroupa’s promises of a better model or whether you see it all as bluster, the fact remains that a dark-matter-dominated universe still has some problems. Maybe they’re not serious, and all it will take is a few tweaks to make those problems go away. But maybe they’ll persist, and require new physics of some kind, and it’s this possibility that continues to leave the door open for MOND. For the rest of us, we’re still grasping for a definitive statement one way or another.

For MOND, perhaps that definitive statement could still turn out to be binary stars, as discussed in the first article in this series. Researchers have been particularly interested in so-called “wide binaries” – pairs of stars that are more than 500 AU apart. Thanks to the vast distance between them, the gravitational impact of each star on the other is weak, making it a perfect test for MOND. Indranil Banik, of the University of St Andrews, UK, controversially concluded that there was no evidence for MOND operating on the smaller scales of binary-star systems. However, other researchers such as Kyu-Hyun Chae of Sejong University in South Korea argue that they have found evidence for MOND in binary systems, and have hit out at Banik’s findings.

Indeed, after the first part of this series was published, Chae reached out to me, arguing that Banik had analysed the data incorrectly. Chae specifically points out that the fraction of wide binaries with an extra unseen close stellar companion to one or both of the binary stars (a factor designated fmulti) must be calibrated for when performing the MOND calculations. Often when two stars are extremely close together, their angular separation is so small that we can’t resolve them and don’t realize that they are binary, he explains. So we might mistake a triple system, with two stars so close together that we can’t distinguish them and a third star on a wider circumbinary orbit, for just a wide binary.

“I initially believed Banik’s claim, but because what’s at stake is too big and I started feeling suspicious, I chose to do my own investigation,” says Chae (ApJ 952 128). “I came to realize the necessity of calibrating fmulti due to the intrinsic degeneracy between mass and gravity (one cannot simultaneously determine the gravity boost factor and the amount of hidden mass).”

The probability of a wide binary having an unseen extra stellar companion is the same as for shorter binaries (those that we can resolve). But for shorter binaries the gravitational acceleration is high enough that they obey regular Newtonian gravity – MOND only comes into the picture at wider separations. Therefore, the mass uncertainty in the study of wide binaries in a MOND regime can be calibrated for using those shorter-period binaries. Chae argues that Banik did not do this. “I’m absolutely confident that if the Banik et al. analysis is properly carried out, it will reveal MOND’s low-acceleration gravitational anomaly to some degree.”

So perhaps there is hope for MOND in binary systems. Given that dark matter shouldn’t be present on the scale of binary systems, any anomalous gravitational effect could only be explained by MOND. A detection would be pretty definitive, if only everyone could agree upon it.

Bullet time and mass This spectacular new image of the Bullet Cluster was created using NASA’s James Webb Space Telescope and Chandra X-ray Observatory. The new data allow for an improved measurement of the thousands of galaxies in the Bullet Cluster. This means astronomers can more accurately “weigh” both the visible and invisible mass in these galaxy clusters. Astronomers also now have an improved idea of how that mass is distributed. (X-ray: NASA/CXC/SAO; near-infrared: NASA/ESA/CSA/STScI; processing: NASA/STScI/ J DePasquale)

But let’s not kid ourselves – MOND still has a lot of catching up to do on dark matter, which has become a multi-billion-dollar industry with thousands of researchers working on it and space missions such as the European Space Agency’s Euclid space telescope. Dark matter is still in pole position, and its own definitive answers might not be too far away.

“Finding dark matter is definitely not too much to hope for, and that’s why I’m doing it,” says Richard Massey. He highlights not only Euclid, but also the work of the James Webb Space Telescope in imaging gravitational lensing on smaller scales and the Nancy Grace Roman Space Telescope, which will launch later this decade on a mission to study weak gravitational lensing – the way in which small clumps of matter, such as individual dark matter haloes around galaxies, subtly warp space.

“These three particular telescopes give us the opportunity over the next 10 years to catch dark matter doing something, and to be able to observe it when it does,” says Massey. That “something” could be dark-matter particles interacting, perhaps in a cluster merger in deep space, or in a xenon tank here on Earth.

“That’s why I work on dark matter rather than anything else,” concludes Massey. “Because I am optimistic.”

  • In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales
  • In the second part of the series, Keith Cooper explored competing theories of dark matter

The post MOND versus dark matter: the clash for cosmology’s soul appeared first on Physics World.
