
4 Weird Things You Can Turn into a Supercapacitor

22 October 2025 at 16:00


What do water bottles, eggs, hemp, and cement have in common? They can be engineered into strange, but functional, energy-storage devices called supercapacitors.

As their name suggests, supercapacitors are like capacitors with greater capacity. Like batteries, they can store a lot of energy, but they can also charge and discharge quickly, like a capacitor. They’re usually found where a lot of power is needed quickly and for a limited time, such as providing nearly instantaneous backup electricity for a factory or data center.

Typically, supercapacitors are made up of two activated carbon or graphene electrodes, electrolytes to introduce ions to the system, and a porous sheet of polymer or glass fiber to physically separate the electrodes. When a supercapacitor is fully charged, all of the positive ions gather on one side of the separating sheet, while all of the negative ions are on the other. When it’s discharged, the ions are randomly distributed, and it can switch between these states much faster than batteries can.

Some scientists believe that supercapacitors could become more super. They see the potential to make these devices more sustainably, at lower cost, and perhaps with better performance if they’re built from better materials.

And maybe they’re right. Last month, a group from Michigan Technological University reported making supercapacitors from plastic water bottles that had a higher capacitance than commercial ones.

Does this finding mean recycled plastic supercapacitors will soon be everywhere? The history of similar supercapacitor sustainability experiments suggests not.

About 15 years ago, it seemed like supercapacitors were going to be in high demand. Then, because of huge investments in lithium-ion technology, batteries became tough competition, explains Yury Gogotsi, who studies materials for energy-storage devices at Drexel University, in Philadelphia. “They became so much cheaper and so much faster in delivering energy that for supercapacitors, the range of application became more limited,” he says. “Basically, the trend went from making them cheaper and available to making them perform where lithium-ion batteries cannot.”

Still, some researchers remain hopeful that environmentally friendly devices have a place in the market. Yun Hang Hu, a materials scientist on the Michigan Technological University team, sees “a promising path to commercialization [for the water-bottle-derived supercapacitor] once collection and processing challenges are addressed,” he says.

Here’s how scientists make supercapacitors with strange, unexpected materials:

Water Bottles

It turns out your old Poland Spring bottle could one day store energy instead of water. Last month in the journal Energy & Fuels, the Michigan Technological University team published a new method for converting polyethylene terephthalate (PET), the material that makes up single-use plastic water bottles, into both electrodes and separators.

As odd as it may seem, this process is “a practical blueprint for circular energy storage that can ride the existing PET supply chain,” says Hu.

To make the electrodes, the researchers first shredded bottles into 2-millimeter grains and then added powdered calcium hydroxide. They heated the mixture to 700 °C in a vacuum for 3 hours and were left with an electrically conductive carbon powder. After removing residual calcium and activating the carbon (increasing its surface area), they could shape the powder into a thin layer and use it as an electrode.

The process to produce the separators was much less intensive—the team cut bottles into squares about the size of a U.S. quarter or a 1-euro coin and used hot needles to poke holes in them. They optimized the pattern of the holes for the passage of current using specialized software. PET is a good material for a separator because of its “excellent mechanical strength, high thermal stability, and excellent insulation,” Hu says.

Filled with an electrolyte solution, the resulting supercapacitor not only demonstrated potential for eco- and finance-friendly material usage, but also slightly outperformed traditional materials on one metric. The PET device had a capacitance of 197.2 farads per gram, while an analogous device with a glass-fiber separator had a capacitance of 190.3 farads per gram.
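To put those capacitance figures in rough perspective, the energy stored in a capacitor scales as E = ½CV². The sketch below (not from the paper) converts specific capacitance into specific energy; the 1-volt cell voltage is an assumed placeholder typical of aqueous electrolytes, not a value reported by the team.

```python
# Rough illustration: converting specific capacitance to stored energy.
# The 197.2 F/g and 190.3 F/g figures come from the devices described in
# the article; the 1.0 V cell voltage is an assumed placeholder, not a
# value from the study.

def specific_energy_wh_per_kg(capacitance_f_per_g: float, voltage_v: float) -> float:
    """E = 1/2 * C * V^2, converted from joules per gram to watt-hours per kilogram."""
    joules_per_gram = 0.5 * capacitance_f_per_g * voltage_v ** 2
    return joules_per_gram * 1000 / 3600  # J/g -> Wh/kg

print(specific_energy_wh_per_kg(197.2, 1.0))  # PET separator: ~27.4 Wh/kg
print(specific_energy_wh_per_kg(190.3, 1.0))  # glass-fiber separator: ~26.4 Wh/kg
```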

Eggs

Wait, don’t make your breakfast sandwich just yet! You could engineer a supercapacitor from one of your ingredients instead. In 2019, a University of Virginia team showed that electrodes, electrolytes, and separators could all be made from parts of a single object—an egg.

First, the group purchased grocery store chicken eggs and sorted their parts into eggshells, eggshell membranes, and the whites and yolks.

They ground the shells into a powder and mixed them with the egg whites and yolks. The slurry was freeze-dried and brought up to 950 °C for an hour to decompose. After a cleaning process to remove calcium, the team performed heat and potassium treatments to activate the remaining carbon. They then smoothed the egg-derived activated carbon into a film to be used as electrodes. Finally, by mixing egg whites and yolks with potassium hydroxide and letting it dry for several hours, they formed a kind of gel electrolyte.

To make separators, the group simply cleaned the eggshell membranes. Because the membranes naturally have interlaced micrometer-size fibers, their inherent structures allow for ions to move across them just as manufactured separators would.

Interestingly, the resulting fully egg-based supercapacitor was flexible, with its capacitance staying steady even when the device was twisted or bent. After 5,000 cycles, the supercapacitor retained 80 percent of its original capacitance—low compared to commercial supercapacitors, but fairly on par for others made from natural materials.

Hemp

Some people may know cannabis better for its medicinal uses, but it has potential in energy storage, too. In 2024, a group from Ondokuz Mayıs University in Türkiye used hemp plants to produce activated carbon for an electrode.

They started by drying stems of the hemp plants in a 110 °C oven for a day and then ground the stems into a powder. Next, they added sulfuric acid and heat to create a biochar, and, finally, activated the char by saturating it with potassium hydroxide and heating it again.

After 2,000 cycles, the supercapacitor with hemp-derived electrodes still retained 98 percent of its original capacitance, which is, astoundingly, in range of those made from nonbiological materials. The carbon itself had an energy density of 65 watt-hours per kilogram, also in line with commercial supercapacitors.

Cement

It may have a hold over the construction industry, but is cement coming for the energy sector, too? In 2023, a group from MIT shared how they designed electrodes from water, nearly pure carbon, and cement. Using these materials, they say, creates a “synergy” between the hydrophilic cement and hydrophobic carbon that aids the electrodes’ ability to hold layers of ions when the supercapacitor is charged.

To test the hypothesis, the team built eight electrodes using slightly different proportions of the three ingredients, different types of carbon, and different electrode thicknesses. The electrodes were saturated with potassium chloride—an electrolyte—and capacitance measurements began.

Impressively, the cement supercapacitors were able to maintain capacitance with little loss even after 10,000 cycles. The researchers also calculated that one of their supercapacitors could store around 10 kilowatt-hours—enough to serve about one third of an average American’s daily energy use—though the number is only theoretical.

How Badly Is AI Cutting Early-Career Employment?

24 September 2025 at 13:00


As AI tools become more common in people’s everyday work, researchers are looking to uncover their effects on the job market—especially for early-career workers.

A paper from the Stanford Digital Economy Lab, part of the Stanford Institute for Human-Centered AI, has now found early evidence that employment has taken a hit for young workers in the occupations that use generative AI the most. Since the widespread adoption of AI tools began in late 2022, a split has appeared, and early-career software engineers are among the hardest hit.

The researchers used data from the largest payroll provider in the United States, Automatic Data Processing (ADP), to gain up-to-date employment and earning data for millions of workers across industries, locations, and age groups. While other data may take months to come out, the researchers published their findings in late August with data through July.

Although there has been a rise in demand for AI skills in the job market, generative AI tools are getting much better at doing some of the same tasks typically associated with early-career workers. What AI tools don’t have is the experiential knowledge gained through years in the workforce, which makes more-senior positions less vulnerable.


These charts show how employment over time compares among early-career, developing, and senior workers (all occupations). Each age group is divided into five groups, based on AI exposure, and normalized to 1 in October 2022—roughly when popular generative AI tools became available to the public.
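For readers who want to reproduce that kind of normalization on their own data, here is a minimal sketch of the bookkeeping described in the caption. The ADP payroll records are not reproduced here, so the table below uses made-up numbers; only the divide-by-October-2022 step mirrors the study.

```python
import pandas as pd

# Minimal sketch of normalizing employment to 1 at October 2022, by age
# group and AI-exposure quintile. The headcounts below are made-up
# illustrative values, not the study's data.
df = pd.DataFrame({
    "month": pd.to_datetime(["2022-10-01", "2023-10-01"] * 2),
    "age_group": ["22-30"] * 2 + ["41-49"] * 2,
    "exposure_quintile": [5, 5, 5, 5],
    "headcount": [1000, 940, 800, 820],
})

# Baseline headcount for each (age group, quintile) pair at October 2022.
base = (df[df["month"] == "2022-10-01"]
        .set_index(["age_group", "exposure_quintile"])["headcount"])

df["index_vs_oct2022"] = df.apply(
    lambda row: row["headcount"] / base.loc[(row["age_group"], row["exposure_quintile"])],
    axis=1,
)
print(df)
```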

The trend may be a harbinger for more widespread changes, and the researchers plan to continue tracking the data. “It could be that there are reversals in these employment declines. It could be that other age groups become more or less exposed [to generative AI] and have differing patterns in their employment trends. So we’re going to continue to track this and see what happens,” says Bharat Chandar, one of the paper’s authors and a postdoctoral fellow at the Stanford Digital Economy Lab. In the most “AI exposed” jobs, AI tools can assist with or perform more of the work people do on a daily basis.

So, what does this mean for engineers?

Software Developers Among Most AI-Exposed

With the rise of AI coding tools, software engineers have been the subject of a lot of discussion—both in the media and research. “There have been conflicting stories about whether that job is being impacted by AI, especially for entry-level workers,” says Chandar. He and his colleagues wanted to find data on what’s happening now.

Since late 2022, early-career software engineers (between 22 and 30 years old) have experienced a decline in employment. At the same time, midlevel and senior employment has remained stable or grown. This is happening across the most AI-exposed jobs, and software engineering is a prime example.


Since late 2022, employment for early-career software developers has dropped. Employment for other age groups, however, has seen modest growth.

Chandar cautions that, for specific occupations, the trend may not be driven by AI alone; other changes in the tech industry could also be causing the drop. Still, the fact that it holds across industries suggests that there’s a real effect from AI.

The Stanford team also looked at a broader category of “computer occupations” based on the U.S. Bureau of Labor classifications—which includes hardware engineers, Web developers, and more—and found similar results.


Growth in employment between October 2022 and July 2025 by age and AI-exposure group. Quintiles 1–3 represent the lowest AI-exposure groups, which experienced 6–13 percent growth. Quintiles 4 and 5 are the most AI-exposed jobs; employment for the youngest workers in these jobs fell 6 percent.

Augmentation vs. Automation

Part of the analysis uses data from the Anthropic Economic Index, which provides information about how Anthropic’s AI products are being used, including estimates of whether the types of queries used for certain occupations are more likely to automate work, potentially replacing employees, or augment an existing worker’s output.

With this data, the researchers were able to estimate whether an occupation’s use of AI generally complements employees’ work or replaces it. Jobs in which AI tools augment work did not see the same declines in employment, compared with roles involving tasks that could be automated.

This part of the analysis was based on Anthropic’s index alone. “Ideally, we would love to get more data on AI usage from the other AI companies as well, especially OpenAI and Google,” Chandar says. (A recent paper from researchers at Microsoft did find that Copilot usage aligned closely with the estimates of AI exposure the Stanford team used.)

Going forward, the team also hopes to expand to data on employment outside of the United States.

How to Measure Nothing Better

24 September 2025 at 11:02


There’s no such thing as a complete vacuum. Even in the cosmic void between galaxies, there’s an estimated density of about one hydrogen or helium atom per cubic meter. But these estimates are largely theoretical—no one has yet launched a sensor into intergalactic space and beamed back the result. On top of that, we have no means of measuring vacuums that low.

At least, not yet.

Researchers are now developing a new vacuum-measurement tool that may be able to detect lower densities than any existing techniques can. This new quantum sensor uses individual atoms, cooled to just shy of absolute zero, to serve as targets for stray particles to hit. These atom-based vacuum measurers can detect lower atomic concentrations than ever before, and they don’t require calibration, making them a good candidate to serve as a standard.

This article is part of The Scale Issue.

“The atom was already our standard for time and frequency,” says Kirk Madison, professor of physics at the University of British Columbia (UBC), in Vancouver, and one of the pioneers of cold-atom-based vacuum-measurement technology. “Wouldn’t it be cool if we could make an atom the standard for vacuum measurement as well?”

This quantum-sensor technology promises a dual achievement in scale: Not only does it extend our ability to measure incredibly rarefied conditions with unprecedented sensitivity, it also establishes the fundamental reference point that defines the scale itself. By eliminating the need for calibration and serving as a primary standard, this atom-based approach doesn’t just measure the farthest edges of the density spectrum—it could become the very ruler by which all other vacuum measurements are compared.

Vacuum measurement on Earth

While humans haven’t yet succeeded in making vacuum as pure as it is in deep space, many earthly applications still require some level of emptiness. Semiconductor manufacturing, large physics experiments in particle and wave detection, some quantum-computing platforms, and surface-analysis tools, including X-ray photoelectron spectroscopy, all require so-called ultrahigh vacuum.

At these low levels of particles per unit volume, vacuum is parameterized by pressure, measured in pascals. Regular atmospheric pressure is 10⁵ Pa. Ultrahigh vacuum is considered to be anything less than about 10⁻⁷ Pa. Some applications require as low as 10⁻⁹ Pa. The deepest depths of space still hold the nothingness record, reaching below 10⁻²⁰ Pa.
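For a sense of what those pressures mean in terms of particle counts, the ideal-gas relation n = P/(k_BT) converts pressure to number density. The short sketch below assumes room temperature, which is reasonable for the lab settings but not for deep space.

```python
# Back-of-envelope conversion between pressure and particle number density
# using the ideal-gas relation n = P / (k_B * T). Room temperature (293 K)
# is an assumption that fits the lab cases; deep space is far colder.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def atoms_per_m3(pressure_pa: float, temperature_k: float = 293.0) -> float:
    return pressure_pa / (K_B * temperature_k)

for label, p in [("atmosphere", 1e5), ("ultrahigh vacuum", 1e-7),
                 ("demanding applications", 1e-9), ("deep space", 1e-20)]:
    print(f"{label:>22}: {atoms_per_m3(p):.2e} particles/m^3")
```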

The method of choice for measuring pressure in the ultrahigh vacuum regime is the ionization gauge. “They work by a fairly straightforward mechanism that dates back to vacuum tubes,” says Stephen Eckel, a member of the cold-atom vacuum-measurement team at the National Institute of Standards and Technology (NIST).

A portable cold-atom vacuum-measurement tool [top] detects the fluorescence of roughly 1 million lithium atoms [bottom], and infers the vacuum pressure based on how quickly the fluorescence decays. Photos: Jayme Thornton


Indeed, an ionization gauge has the same basic components as a vacuum tube. The gauge contains a heated filament that emits electrons into the chamber. The electrons are accelerated toward a positively charged grid. En route to the grid, the electrons occasionally collide with atoms and molecules flying around in the vacuum, knocking off their electrons and creating positively charged ions. These ions are then collected by a negatively charged electrode. The current generated by these positive ions is proportional to the number of atoms floating about in the vacuum, giving a pressure reading.
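In simplified form, the ion current in such a gauge is proportional to both the pressure and the electron emission current, I_ion ≈ S·P·I_e, where S is a gauge- and gas-dependent sensitivity. The sketch below inverts that relation with illustrative numbers; the sensitivity value is a placeholder, not a spec for any particular gauge.

```python
# Simplified hot-cathode ion gauge model: I_ion = S * P * I_emission,
# so P = I_ion / (S * I_emission). The sensitivity below is an
# illustrative placeholder; real gauges must be calibrated, and the
# sensitivity also depends on the gas species, which is the
# factor-of-four caveat discussed later in the article.

def pressure_from_ion_current(i_ion_a: float, i_emission_a: float,
                              sensitivity_per_pa: float) -> float:
    return i_ion_a / (sensitivity_per_pa * i_emission_a)

# Example: 4 mA emission current, 0.1 /Pa sensitivity, 40 pA ion current
print(pressure_from_ion_current(40e-12, 4e-3, 0.1))  # ~1e-7 Pa
```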

Ion gauges are relatively cheap (under US $1,000) and commonplace. However, they come with a few difficulties. First, although the current in the ion gauge is proportional to the pressure in the chamber, that proportionality constant depends on a lot of fine details, such as the precise geometry of the filament and the grid. The current cannot be easily calculated from the electrical and physical characteristics of the setup—ion gauges require thorough calibrations. “A full calibration run on the ion gauges is like a full month of somebody’s time,” says Daniel Barker, a physicist at NIST who’s also working on the cold-atom vacuum-measurement project.

Second, the calibration services provided by NIST (among others) calibrate down to only 10⁻⁷ Pa. Performance below that pressure is questionable, even for a well-calibrated gauge. What’s more, at lower pressures, the heat from the ion gauge becomes a problem: Hotter surfaces emit atoms in a process called outgassing, which pollutes the vacuum. “If you’re shooting for a vacuum chamber with really low pressures,” Madison says, “these ionization gauges actually work against you, and many people turn them off.”

Third, the reading on the ion gauge depends very strongly on the types of atoms or molecules present in the vacuum. Different types of atoms could produce readings that vary by up to a factor of four. This variance is fine if you know exactly what’s inside your vacuum chamber, or if you don’t need that precise a measurement. But for certain applications, especially in research settings, these concerns are significant.

How a cold-atom vacuum standard works

The idea of a cold-atom vacuum-measurement tool developed as a surprising side effect of the study of cold atoms. Scientists first started cooling atoms down in an effort to make better atomic clocks back in the 1970s. Since then, cooling atoms and trapping them has become a cottage industry, giving rise to optical atomic clocks, atomic navigation systems, and neutral-atom quantum computers.

These experiments have to be done in a vacuum, to prevent the surrounding environment from heating the atoms. For decades, the vacuum was thought of as merely a finicky factor to be implemented as well as possible. “Vacuum limitations on atom traps have been known since the dawn of atom traps,” Eckel says. Atoms flying around the vacuum chamber would collide with the cooled atoms and knock them out of their trap, leading to loss. The better the vacuum, the slower that process would go.

The most common vacuum-measurement tool in the high-vacuum range today is the ion gauge, basically a vacuum tube in reverse: A hot filament emits electrons that fly toward a positively charged grid, ionizing background atoms and molecules along the way. Jayme Thornton

UBC’s Kirk Madison and his collaborator James Booth (then at the British Columbia Institute of Technology, in Burnaby), were among the first to turn that thinking on its head back in the 2000s. Instead of battling the vacuum to preserve the trapped atoms, they thought, why not use the trapped atoms as a sensor to measure how empty the vacuum is?

To understand how they did that, consider a typical cold-atom vacuum-measurement device. Its main component is a vacuum chamber filled with a vapor of a particular atomic species. Some experiments use rubidium, while others use lithium. Let’s call it lithium between friends.

A tiny amount of lithium gas is introduced into the vacuum, and some of it is captured in a magneto-optical trap. The trap consists of a magnetic field with zero intensity at the center of the trap, increasing gradually away from the center. Six laser beams point toward the center from above, below, the left, the right, the front, and the back. The magnetic and laser forces are arranged so that any lithium atom that might otherwise fly away from the center is most likely to absorb a photon from the lasers, getting a momentum kick back into the trap.

The trap is quite shallow, meaning that hot atoms—above 1 kelvin or so—will not be captured. So the result is a small, confined cloud of really cold atoms, at the center of the trap. Because the atoms absorb laser light occasionally to keep them in the trap, they also reemit light, creating fluorescence. Measuring this fluorescence allows scientists to calculate how many atoms are in the trap.

To use this setup to measure vacuum, you load the atoms into the magneto-optical trap and measure the fluorescence. Then, you turn off the light and hold the atoms in just the magnetic field. During this time, background atoms in the vacuum will chance upon the trapped atoms, knocking them out. After a little while, you turn the light back on and check how much the fluorescence has decreased. This measures how many atoms got knocked out, and therefore how many collisions occurred.
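In the simplest picture, that loss is a single exponential, N(t) = N₀e^(−Γt), with a loss rate Γ proportional to the density of background gas. The sketch below shows how a pressure could be inferred from two fluorescence readings under that assumption; the loss-rate coefficient is an illustrative placeholder, not the carefully calculated value the NIST team relies on.

```python
import math

# Sketch: inferring pressure from the decay of trapped-atom fluorescence.
# Assumes single-exponential loss, N(t) = N0 * exp(-gamma * t), with
# gamma = n * k_loss, where n = P / (k_B * T) is the background density.
# The loss-rate coefficient k_loss is an illustrative placeholder.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure_from_decay(n0: float, n_later: float, hold_time_s: float,
                        k_loss_m3_per_s: float, temperature_k: float = 293.0) -> float:
    gamma = -math.log(n_later / n0) / hold_time_s   # measured loss rate, 1/s
    density = gamma / k_loss_m3_per_s               # background particles per m^3
    return density * K_B * temperature_k            # ideal-gas pressure, Pa

# Example: trapped-atom number drops from 1.0e6 to 0.8e6 over a 10 s hold
print(pressure_from_decay(1.0e6, 0.8e6, 10.0, k_loss_m3_per_s=2e-15))  # ~5e-8 Pa
```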

The reason you need the trap to be so shallow and the atoms to be so cold is that these collisions are very weak. “A few collisions are quite energetic, but most of the background gas particles fly by and, like, whisper to the trapped atom, and it just gently moves away,” Madison says.

This method has several advantages over the traditional ion-gauge measurement. The atomic method does not need calibration; the rate at which fluorescence dims depending on the vacuum pressure can be calculated accurately. These calculations are involved, but in a paper published in 2023 the NIST team demonstrated that the latest method of calculation shows excellent agreement with their experiment. Because this technique does not require calibration, it can serve as a primary standard for vacuum pressure, and even potentially be used to calibrate ion gauges.

The cold-atom measurement is also much less finicky when it comes to the actual contents of the vacuum. Whether the vacuum is contaminated with helium or plutonium, the measured pressure will vary by perhaps only a few percent, while the ion gauge sensitivity and reading for these particles might differ by an order of magnitude, Eckel says.

Cold atoms could also potentially measure much lower vacuum pressures than ion gauges can. The current lowest pressure they’ve reliably measured is around 10⁻⁹ Pa, and NIST scientists are working on figuring out what the lower boundary might be. “We honestly don’t know what the lower limit is, and we’re still exploring that question,” Eckel says.


A chart of vacuum pressures in the universe

No vacuum is completely empty. The degree to which vacuum pressure approaches pure nothingness is measured in pascals, with Earth’s atmosphere clocking in at 10⁵ Pa and intergalactic space at a measly 10⁻²⁰ Pa. In between, the new cold-atom vacuum gauges can measure further along the emptiness scale than the well-established ionization gauges can.

Sources: S. Eckel (cold-atom gauge, ionization gauge); K. Zou (molecular-beam epitaxy, chemical vapor deposition); L. Monteiro, “1976 Standard Atmosphere Properties” (Earth’s atmosphere); E.J. Öpik, Planetary and Space Science (1962) (Mars, moon atmosphere); A. Chambers, “Modern Vacuum Physics” (2004) (interplanetary and intergalactic space)

Of course, the cold-atom approach also has drawbacks. It struggles at higher pressure, above 10⁻⁷ Pa, so its applications are confined to the ultrahigh vacuum range. And, although there are no commercial atomic vacuum sensors available yet, they are likely to be much more expensive than ion gauges, at least to start.

That said, there are many applications where these devices could unlock new possibilities. At large science experiments, including LIGO (the Laser Interferometer Gravitational-Wave Observatory) and ones at CERN (the European Organization for Nuclear Research), well-placed cold-atom vacuum sensors could measure the vacuum pressure and also help determine where a potential leak might be coming from.

In semiconductor development, a particularly promising application is molecular-beam epitaxy (MBE). MBE is used to produce the thin, highly pure semiconductor layers used in laser diodes and devices for high-frequency electronics and quantum technologies. The technique functions in ultrahigh vacuum, with pure elements heated in separate containers at one end of the vacuum chamber. The elements travel across the vacuum until they hit the target surface, where they grow one layer at a time.

Precisely controlling the proportion of the ingredient elements is essential to the success of MBE. Normally, this requires a lot of trial and error, building up thin films and checking whether the proportions are correct, then adjusting as needed. With a cold-atom vacuum sensor, the quantity of each element emitted into the vacuum can be detected on the fly, greatly speeding up the process.
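One way to see the connection between a pressure reading and growth control: the rate at which atoms strike the substrate follows the Hertz–Knudsen relation, Φ = P/√(2πmk_BT). The sketch below uses illustrative numbers for a gallium source; they are assumptions for the example, not values from an actual MBE recipe.

```python
import math

# Sketch: relating a measured beam-equivalent pressure to the flux of atoms
# arriving at the substrate, via the Hertz-Knudsen relation
#   flux = P / sqrt(2 * pi * m * k_B * T)
# The source temperature and pressure below are illustrative placeholders.
K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def impingement_flux(pressure_pa: float, mass_amu: float, temperature_k: float) -> float:
    """Atoms striking the surface per square meter per second."""
    m = mass_amu * AMU
    return pressure_pa / math.sqrt(2 * math.pi * m * K_B * temperature_k)

# Example: gallium (~69.7 amu) at an assumed 1e-5 Pa beam-equivalent pressure
print(f"{impingement_flux(1e-5, 69.7, 1200):.2e} atoms / (m^2 s)")
```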

“If this technique could be used in molecular-beam epitaxy or other ultrahigh vacuum environments, I think it will really benefit materials development,” says Ke Zou, an assistant professor of physics at UBC who studies molecular-beam epitaxy. In these high-tech industries, researchers may find that the ability to measure nothing is everything.

This article appears in the October 2025 print issue.

In Nigeria, Why Isn’t Broadband Everywhere?

6 August 2025 at 14:00


Under the shade of a cocoa tree outside the hamlet of Atan, near Ibadan, Nigeria, Bolaji Adeniyi holds court in a tie-dyed T-shirt. “In Nigeria we see farms as father’s work,” he says. Adeniyi’s father taught him to farm with a hoe and a machete, which he calls a cutlass. These days, he says, farming in Nigeria can look quite different, depending on whether the farmer has access to the Internet or not.

Not far away, farmers are using drones to map their plots and calculate their fertilizer inputs. Elsewhere, farmers can swipe through security camera footage of their fields on their mobile phones. That saves them from having to patrol the farm’s perimeter and potentially dangerous confrontations with thieves. To be able to do those things, Adeniyi notes, the farmers need broadband access, at least some of the time. “Reliable broadband in Atan would attract international cocoa dealers and enable access to agricultural extension agents, which would aid farmers,” he says.

Adeniyi has a degree in sociology and in addition to growing cocoa trees, works as a criminologist and statistician. When he’s in Ibadan, a city of 4 million that’s southeast of Atan, he uses a laptop and has good enough Internet. But at his farm in Atan, he carries a candy-bar mobile phone and must trek to one of a few spots around the settlement if he wants better odds of getting a signal. “At times,” Adeniyi says, “it’s like wind bringing the signal.”

RELATED: Surf Africa: What to Do With a Shiny New Fiber-Optic Undersea Cable

On paper, Nigeria has plenty of broadband capacity. Eight undersea cables bring about 380 terabits of capacity to Nigeria’s coast. The first undersea cable to arrive, SAT-3/WASC, made land in 2001; the most recent is 2Africa, which landed in 2024. They’re among the 75 cables that now connect coastal Africa to the rest of the world. Nigeria’s big telecom operators continue to build long-distance, high-capacity fiber-optic networks from the cables to the important commercial nodes in the cities. But distribution to the urban peripheries and to rural places such as Atan is still incomplete.


Incomplete is an understatement: Less than half of the country’s 237 million people have regular access to broadband, with that access mostly happening through mobile devices rather than more stable fixed connections. Nigeria’s Federal Ministry of Communications, Innovation, and Digital Economy has set a goal to almost double the length of the country’s fiber-optic backbone and for broadband to reach 70 percent of the population by the end of this year. But the ministry also claimed in 2024 that it would connect Nigeria’s 774 local governments to the broadband backbone; as of February 2025, it had reached only 51. The broadband buildout has been seriously hampered by Nigeria’s unreliable power grid. Beyond the mere inconvenience of frequent outages, the poor quality of electricity drives up costs for operators and customers alike.

During a visit to Nigeria earlier this year, I talked to dozens of people about broadband’s impact on their lives. For more than two decades, the country has possessed an incredible portal to the world, and so I had hoped to hear stories of transformation. In some cases, I did. But that experience was far from uniform, with much work left to do.

Where Nigeria’s broadband has arrived

Broadband is enabling all kinds of changes in Nigeria, Africa’s most populous country. All eight undersea cables make landfall in Lagos, the cultural, commercial, and one-time federal capital of Nigeria, and one of the cables also lands near Port Harcourt to the southeast. The country’s fiber-optic backbones—which in early 2025 consisted of about 50,000 to 60,000 kilometers of fiber-optic cable—connect the undersea links to the cities.

From 2008 to 2025, Nigeria has experienced extraordinary growth in both the number of undersea high-speed cables landing on its shores and the buildout of broadband networks, especially in its cities. Still, fixed-line broadband is unaffordable for most Nigerians, and about half of the population has no access. Africa Bandwidth Maps

“Virtually everywhere in Nigeria is covered with long-haul cables,” says Abdullateef Aliyu, general manager for projects at Phase3 Telecom, which is responsible for perhaps 10,000 km of those cables. Most Nigerian cities have at least one fiber-optic backbone, and the biggest have more than half a dozen.

The result is that the most densely populated areas enjoy competing Internet service providers offering fiber optics or satellite to the home. Connecting the other half of Nigerians, the rural majority, will become profitable someday, says Stanley Jegede, executive chairman of Phase3 Telecom, but it had better be “patient money.”

A Phase3 Telecom worker [left] installs fiber-optic cables on power poles in Abuja, Nigeria. Abdullateef Aliyu [right], Phase3’s general manager for projects, says the country is using only around 25 percent of the capacity of its undersea cables. Andrew Esiebo

Unsurprisingly, the customers that got broadband first were those with impatient money, those that could offer the best return to the telecom firms: the oil companies that dominate Nigerian exports, the banks that have since boomed, the Nollywood studios that compete with Bollywood and Hollywood.

The impatient money showed up first in flash Victoria Island in Lagos. If you want to serve international customers or do high-speed stock trading, you need a reliable link to the outside world, and in Nigeria that means Victoria Island.

Here, the fiber-optic cables rise like thick vines in gray rooms on the ground floors or in the basements of the office towers that house the banks powering Nigerian finance. Between the towers, shopping plazas host foreign fast-food franchises and cafés.

From their perch near the submarine network, the banks realized that mobile broadband would allow them to reach exponentially more customers, especially once those customers could take advantage of Nigeria’s instant-payment system, launched by the central bank in 2011. Using mobile payments, bank apps, and other financial apps, Nigerians can conduct convenient cellphone transactions for anything from street food to airplane tickets. The central bank’s platform was such a success that until recently, it handled more money than its U.S. equivalents.

RELATED: As Nigeria’s Cashless Transition Falters, POS Operators Thrive

Just as important as convenience is trust. Nigerians trust each other so little that a university guesthouse I stayed in had its name printed on the wall-mounted air conditioner units to discourage theft. But Nigerians trust mobile payments. Uber drivers think nothing of sharing their bank account numbers with passengers, so that the passengers can pay their fares via instant payment. A Nigerian engineer explained to me that many people prefer that to disclosing their bank-card information on the Uber platform.

Broadband has also brought change to Nollywood, Nigeria’s vast film industry, second only to India’s Bollywood in terms of worldwide film output. On the one hand, broadband transformed Nollywood’s distribution model from easily pirated DVDs to paywalled streaming platforms. On the other hand, streaming platforms made it easier for Nigerians to access foreign video content, cutting into local producers’ market share. The platforms also empowered performers and other content producers to bypass the traditional Nollywood gatekeepers. Instead, content creators can publish straight to YouTube, which will pay them if they achieve enough views.

Emmanuella Njoku, a computer science major at the University of the People, an online school, is interested in a graphics or product-design job when she graduates. But a broadband-enabled side hustle is starting to look like a viable alternative, she told me in January. She edits Japanese anime recaps and publishes them to her YouTube channel. “I have 49,000 followers right now, but I need 100,000 followers and 10 million views in the last 90 days to monetize,” Njoku said.

Computer science student Emmanuella Njoku has found a broadband-enabled side gig: creating YouTube videos. Andrew Esiebo

A friend of hers had recently crossed the 100,000-follower threshold with YouTube videos focused on visits to high-end restaurants around Lagos. The friend expected restaurants and other companies to start paying her for visits, in addition to collecting her tiny cut of YouTube’s ad revenue.

Both women said they’d prefer jobs that allow them to telecommute, a more realistic prospect in Nigeria in the last few years thanks to the availability of broadband. More companies are open to remote work and hybrid work, says telecom analyst Fola Odufuwa. That’s especially true in Lagos, where fuel shortages and world-class traffic jams encourage people to minimize the number of days they commute.

For academics, broadband can make it easier to collaborate on research. In 2004, IEEE Spectrum reported on a Federal University of Technology researcher in Owerri carrying handwritten messages to a contact, who had a computer with an Internet connection and would type up the messages and send them as emails. Today researchers at the Federal University of Technology campus in Minna collaborate virtually with colleagues in Europe on an Internet of Things demonstration project. While some events take place in person, the collaborators also exchange emails, meet by videoconference, and work on joint publications via the Internet.

Why broadband rollout in Nigeria has been so slow

The undersea cables and fiber-optic backbones have also been a boon for Nigeria’s telecom industry, which now accounts for 14 percent of GDP, third only to agriculture (23 percent) and international trade (15 percent).

Computer Village in Lagos is Nigeria’s main hub for electronics. Andrew Esiebo

Alcatel (now part of Nokia) connected SAT-3 to Nigeria’s main switching station in December 2001, just a couple of years into the first stable democratic government since independence in 1960. The state-run telephone monopoly, Nigerian Telecommunications (Nitel), was mainly responsible for the rollout of SAT-3 within the country. Less than 1 percent of the 130 million Nigerians had phone lines in 2002, so the government established a second carrier, Globacom, to try to accelerate competition in the telecom market.

But a mixture of mismanagement and wider difficulties contributed to the sluggish spread of broadband, as Spectrum reported in 2004. Broadband access has soared since then, and yet Aliyu of Phase3 Telecom estimates that the country is using only around 25 percent of the total capacity of its undersea cables.

Nigeria’s unreliable electricity drives up telecom prices, making it harder for poor Nigerians to afford broadband. The spotty power grid means that standard telecom equipment needs backup power. But battery- or diesel-powered cellphone towers attract theft, which in turn undermines network reliability. Power outages occur with such frequency that when the lights and air conditioning go out during in-person meetings, it arouses no comment.

RELATED: Nigerians Look to Get Out From Under the Nation’s Grid

A visit to Nitel’s former headquarters, a 32-story skyscraper with antennas and a lighthouse perched on top, is revealing. Telecom consultant Jubril Adesina leads the way into the once-grand entrance, where armed guards wave visitors past inoperative turnstiles.

NTEL’s chief information officer, Anthony Adegbola, inspects broadband equipment at the company’s data center in Lagos, which still houses obsolete coaxial cable boxes [top]. Andrew Esiebo

Our destination is NTEL, a private firm that inherited much of Nitel’s mantle, on the 17th floor. Adesina is explaining how a recent mobile tariff increase will improve mobile penetration, but when we reach the elevator lobby, he stops talking. The power is out again. His eyes turn to the unlit indicator alongside the shut elevators, then he looks at the stairs and whispers, “We can’t.”

Instead, Adesina walks around to the back of the building and greets NTEL chief information officer Anthony Adegbola, who along with a small team of engineers and technicians guards another relic of Nigeria’s telecom past. We walk along a hallway past rooms with empty desks and old desktop computers and down a short staircase. Cables snake along the ceiling and above a door. Beyond the door, the men point proudly to SAT-3, Nigeria’s first high-speed undersea cable, rising alongside an electrical grounding cable from the tiled floor. Server racks house obsolete coaxial cable boxes, displayed as if in a museum, next to today’s fiber-optic boxes. Since the last time Spectrum visited, engineers have expanded SAT-3’s capacity from 120 gigabits per second to 1.4 terabits per second, Adegbola says, thanks to improvements in data transmission via different wavelengths, and better receiving boxes in the room. NTEL backs up the grid electricity with a battery bank and two generators.

In Nigeria, mobile broadband is popular

What is often missing in Nigeria is the local connection, the last few kilometers leading to customers. In the developed world, that connection works like this: Internet service providers (ISPs) plug into the nearest backbone via one of several technologies and deliver a small slice of bandwidth to their business and residential customers. A switching station called a point of presence (PoP) serves as an on- and off-ramp between the backbone and the ISPs. The ISPs are responsible for installing the fiber-optic cables that lead to their customers; they may also use microwave antennas to beam a signal to customers.

But in Nigeria, fiber-optic ISPs have been sluggish to capture market share. Of the country’s 300,000 or so fixed-line broadband subscribers—roughly 0.1 percent of Nigerians—about a third are served by the leading ISP, Spectranet. By comparison, the average fixed broadband penetration rate among countries in the Organisation for Economic Co-operation and Development (OECD) was 42.5 percent in 2023, led by South Korea, with 89.6 percent penetration.

Starlink’s satellite-based service, introduced in Nigeria in 2023, is now the second biggest broadband ISP, with about 60,000 subscribers. That’s almost triple the third biggest ISP, FiberOne. Satellite is outcompeting fiber because it’s more reliable and has higher speeds and tolerable latency, even though it costs more. A Starlink satellite terminal can serve up to 200 subscribers and retails for about US $200 plus a $37 monthly fee. A comparable fiber-to-the-home plan in Abuja, where the median monthly take-home pay is $280, costs about $19 a month.
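Using only the figures quoted above, a rough first-year cost comparison looks like this (treating the Starlink terminal as a single household’s purchase rather than a shared one):

```python
# Rough first-year cost comparison built from the figures quoted in the
# article. Median monthly take-home pay in Abuja: $280.
median_monthly_pay = 280.0

starlink_first_year = 200 + 12 * 37   # terminal purchase + monthly fee
fiber_first_year = 12 * 19            # comparable fiber-to-the-home plan

for name, cost in [("Starlink", starlink_first_year), ("Fiber", fiber_first_year)]:
    share = cost / (12 * median_monthly_pay)
    print(f"{name}: ${cost} in year one, ~{share:.0%} of median annual take-home pay")
```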

In Lagos’s Computer Village, you can buy or sell a mobile phone or computer, or get yours repaired. Andrew Esiebo

Meanwhile, Nigeria has 142 million cellular subscriptions, and so most Nigerians access the Internet wirelessly, via a mobile network. In other words, Nigeria’s mobile market is nearly 500 times as big as the market for fixed broadband. The mobile networks also rely on the fiber-optic backbones, but instead of using PoP gateways, they link to cellular base stations, each of which can reach up to thousands of mobile devices but may not offer ideal quality of service.

Mobile Internet is a good thing for people who can afford it, which is most Nigerians, according to the International Telecommunication Union. The cost of fixed-line broadband is still around five times as much, which explains why its market share is so tiny. But mobile Internet isn’t enough to run many businesses, nor do mobile network operators guarantee network speeds or low latency, which are crucial factors for high-frequency trading, telemedicine, and e-commerce, and for white-collar jobs requiring streaming video calls.

Nigeria is 129th in the world in Internet speeds

Internet speeds across Nigeria vary, but broadband tester Ookla’s spring 2025 median for fixed broadband was 28 megabits per second for downloads and 15 Mb/s for uploads, with latency of 25 milliseconds. That puts Nigeria 129th in the world for fixed broadband. In May, Starlink delivered download speeds between 44 and 50 Mb/s, uploads of around 12 Mb/s, and latency of around 61 ms. The top country, Singapore, averaged 393 Mb/s down and 286 Mb/s up, with 4 ms latency. And those numbers for Nigeria don’t capture the effect of unpredictable electricity cuts.

Steve A. Adeshina, a computer engineering professor and machine-vision expert at Nile University, in the capital city of Abuja, says he routinely runs up against the limits of Nigeria’s broadband network. That’s why he keeps two personal cellular modems on his desk. His university contracts with several Internet providers, but the broadband in his lab is still intermittent. For machine-vision research, with its huge datasets, failing to upload data stored on his local machine to the more powerful cloud processor where he runs his experiments means failing to work. “We have optical fiber, but we are not getting value for money,” Adeshina says. If he wakes up to a failed overnight data upload, he has to start it all over again.
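To make the stakes concrete, here is a back-of-envelope calculation of upload times at the speeds quoted above. The 50-gigabyte dataset size is a hypothetical example, not a figure from Adeshina’s lab.

```python
# Why a flaky overnight upload hurts: time to push a dataset at the upload
# speeds quoted above. The 50 GB dataset size is a hypothetical example.
def upload_hours(size_gb: float, upload_mbps: float) -> float:
    bits = size_gb * 8e9                      # gigabytes -> bits
    return bits / (upload_mbps * 1e6) / 3600  # seconds -> hours

for label, mbps in [("Nigeria fixed (median)", 15), ("Starlink (approx.)", 12),
                    ("Singapore (average)", 286)]:
    print(f"{label}: {upload_hours(50, mbps):.1f} hours for 50 GB")
```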

RELATED: The Engineer Who Secured Nigeria’s Democracy

Fiber-optic cable spills from an open manhole in Lagos. Local gangs may cut the cables or steal components. Andrew Esiebo

There are many causes for the slow Internet, but chief among them are frequent cable cuts—50,000 in 2024, according to the federal government. The problem is so bad that in February, the government established a committee to prevent network blackouts due to cable cuts during road construction, which it blamed for 60 percent of the incidents.

“The challenge is reaching the hinterland,” Aliyu of Phase3 Telecom says, and keeping lines intact once there. To make his point, Aliyu, dressed in a snappy three-piece suit and red tie, drives a company pickup truck from Phase3’s well-appointed offices in a leafy part of Abuja to a nearby ring road. He pulls over in the shade of an overpass and steps onto the dirt shoulder. A concrete manhole cover sits perched along one edge of an open manhole, looking like the lid of a sarcophagus.

Pointing at the hole, Aliyu explains how easy it is for local gangs, called area boys, to steal components or cut the cables, forcing backbone providers and ISPs to strike unofficial security deals with the boys, or the more powerful, shadowy men behind them. Of course, part of the problem is self-inflicted: Sloppy work crews leave manholes open and expose the cables to potential damage from nesting animals or a stray cigarette butt that ignites tumbleweed and melts the cables.

Phase3 and other telecom companies are also contending with the expense of replacing the first generation of fiber-optic cables, now about 20 years old, as well as upgrading PoP hardware to increase capacity. They’re spending money not just to reach new customers, but also to provide competitive service to existing customers.

For mobile operators such as Globacom, there’s the additional challenge of ensuring reliable power for their base stations. They often rely on diesel or gasoline generators to back up grid power, but fuel scarcity, infrastructure theft, and supply chain issues can undermine base station reliability.

How Nigeria’s offline half lives

The hamlet of Tungan Ashere is 3 km northwest of the major international airport serving Abuja. To get here, you leave the highway and drive past cinder-block huts with traditional reed roofs. The side of the dirt road is adorned with concrete pylons waiting to be strung with power lines but still naked as the day they were installed in 2021. People here farm cassava, watermelon, yam, and corn. Some keep small herds of goats and cattle. To get to market, they can ride on one of a handful of dirt-bike taxis.

In Tungan Ashere, the Internet hub operated by the Centre for Information Technology and Development attracts residents. Andrew Esiebo

When someone in Tungan Ashere wants to make an announcement, they stroll to a prominent tree and ring a green bar of scrap metal wedged at about head height in the tree’s branches. The metal resonates, not quite like a church bell, but it serves a similar purpose. “The bell, it’s to tell everybody to go to sleep, to wake up, if there’s an announcement. It’s an ancient way of communicating,” explains Lukman Aliu, a telecom engineer who drove me here.

The concept of connectivity in the village differs from just a few kilometers away at the airport, where passengers can enjoy free high-speed Wi-Fi in the comfort of a café. Yet the potential benefits of affordable broadband access for people living in places like Tungan Ashere are enormous.

Usman Isah Dandari is trying to meet that need. He is a technical assistant at the Centre for Information Technology and Development (CITAD), a nonprofit based in Kano, Nigeria. Dandari coordinates a handful of community networking projects, including one in Tungan Ashere. Better broadband here would help farmers track market prices, help students complete their homework, and make it easier for farmers and craftspeople to advertise their goods. CITAD uses a mixture of hardware, including Starlink terminals and cellular modems, to offer relatively reliable broadband to areas neglected by commercial operators. The group is also considering using Nigeria’s national satellite operator, NigComSat, and working with the Nigerian Communications Commission to lower the costs.

Usman Isah Dandari [standing] coordinates several projects like the one in Tungan Ashere, to provide affordable broadband access. Andrew Esiebo

A few meters away from the scrap-metal bell in Tungan Ashere is a one-story building painted rust red, topped with a pastel green corrugated metal roof and eight solar panels, which power a computer lab inside. There’s no grid electricity here, but the solar panels are enough to run a CITAD-provided cellular modem, a few desktop computers, and a formidable floor fan some of the time.

Many of the people in the village once lived where the airport is now. The Nigerian government displaced them when it chose the region as the new federal capital territory in 1991. Since then, successive local governments have provided services piecemeal, usually in the runup to elections. The result is a string of communities like Tungan Ashere—10,000 people in all—that still lack running water, paved roads, grid electricity, and reliable Internet. These people may live on the edge of Nigeria’s broadband backbone, but they reap few of its benefits.

A private undersea cable shows how to do it

Not every undersea cable rollout has been fraught. In 2005, electrical engineer Funke Opeke was working at Verizon Communications in the United States. MTN, an African telecom company, hired her to help it build its submarine cables. Then Nitel hired her to help manage its privatization. There, she saw up close how the organization was failing to get the Internet from SAT-3 into Nigerians’ lives.

Funke Opeke founded MainOne to build Nigeria’s first private undersea fiber-optic cable. George Osodi/Bloomberg/Getty Images

“I don’t think it was a question of capital or return on investment, policy, or interest,” Opeke says. Instead, officials favored suppliers offering kickbacks over those with competent bids.

Seeing an opportunity for a well-managed submarine cable, Opeke approached private investors about developing a cable of their own. The result is the MainOne cable, which arrived in Lagos in 2010 and is operated by the company of the same name. MainOne offered the first private competition to Nitel’s SAT-3 and Globacom’s Glo-1, which began service in 2010. (MTN’s two cables landed in Nigeria in 2011.)

At first, the MainOne cable suffered the same problem as the others—its capacity wasn’t reaching users. “After we built, there was no distribution,” Opeke, who’s now an advisor with MainOne, says. So the company got its own ISP license and began building fiber links into major metro areas—eventually more than 1,200 km in states near its undersea-cable landing site. It ended up offering a more complete service than originally intended, bringing the Internet from overseas, onshore, across Nigeria, and the last kilometers into businesses and homes, and it attracted more than 800 business clients.

MainOne’s success forced the publicly held telecoms and the mobile providers to compete. “The mobile networks were built for voice, and they were not investing fast enough” in data capacity, Opeke says. MainOne did invest, helping to create the broadband capacity needed for Nigeria’s first data centers. It then diversified into data centers, and in 2022 sold its whole business to American data-center giant Equinix.

Other companies, including the major mobile operators, also began building fiber between Nigerian cities, duplicating each other’s infrastructure. The problem is they didn’t offer competitive prices to independent ISPs that wanted to piggyback on those new fiber-optic links, says the telecom analyst Odufuwa.

And neither the public sector nor the private sector is meeting the needs of Nigerians at the bottom of the market, especially in rural communities such as Tungan Ashere and Atan. A crucial first step will be to improve the reliability of the electrical grid, Opeke says, which will help drive down costs for telecom operators and other businesses, and create a virtuous cycle for further growth.

Almost everyone Spectrum interviewed for this story said security is another challenge: If Nigerian states and the federal government could ensure the security of the infrastructure, telecom operators would invest more in expanding their networks. Building telecom infrastructure is well within the reach of Nigerian engineers. “Nigeria doesn’t have a skill problem,” Opeke says. “It has an opportunity problem.”

If the bureaucrats, businesspeople, and engineers can overcome those policy and technical hurdles, the unconnected half of Nigerians stand to gain a lot. Reliable broadband in Atan would draw more young people to agriculture, says the farmer and sociologist Bolaji Adeniyi: “It will provide jobs.” Then, like Adeniyi, maybe those young connected Nigerians will reconsider whether farming is just father’s work—perhaps it could be their future, too.

Special thanks to IEEE Senior Member John Funso-Adebayo for his assistance with the logistics and reporting for this story.

A Cold War Kit for Surviving a Nuclear Attack

1 August 2025 at 15:00


On 29 August 1949, the Soviet Union successfully tested its first nuclear weapon. Over the next year and a half, U.S. President Harry S. Truman resurrected the Office of Civilian Defense (which had been abolished at the end of World War II) and signed into law the Federal Civil Defense Act of 1950, which mobilized government agencies to plan for the aftermath of a global nuclear war. With the Cold War underway, that act kicked off a decades-long effort to ensure that at least some Americans survived nuclear armageddon.

As the largest civilian federal agency with a presence throughout the country, the U.S. Post Office Department was in a unique position to monitor local radiation levels and shelter residents. By the end of 1964, approximately 1,500 postal buildings had been designated as fallout shelters, providing space and emergency supplies for 1.3 million people. Occupants were expected to remain in the shelters until the radioactivity outside was deemed safe. By 1968, about 6,000 postal employees had been trained to use radiological equipment, such as the CD V-700 pictured at top, to monitor beta and gamma radiation. And a group of postal employees organized a volunteer ham radio network to help with communications should the regular networks go down.

What was civil defense in the Cold War?

The basic premise of civil defense was that many people would die immediately in cities directly targeted by nuclear attacks. (Check out Alex Wellerstein’s interactive Nukemap for an estimate of casualties and impact should your hometown—or any location of your choosing—be hit.) It was the residents of other cities, suburbs, and rural communities outside the blast area that would most benefit from civil defense preparations. With enough warning, they could shelter in a shielded site and wait for the worst of the fallout to decay. Anywhere from a day or two to a few weeks after the attack, they could emerge and aid any survivors in the harder-hit areas.

In 1957, a committee of the Office of Defense Mobilization drafted the report Deterrence and Survival in the Nuclear Age, for President Dwight D. Eisenhower. Better known as the Gaither Report, it called for the creation of a nationwide network of fallout shelters to protect civilians. Government publications such as The Family Fallout Shelter encouraged Americans who had the space, the resources, and the will to construct shelters for their homes. City dwellers in apartment buildings warranted only half a page in the booklet, with the suggestion to head to the basement and cooperate with other residents.

This model fallout shelter from 1960 was designed for four to six people. Bettmann/Getty Images

Ultimately, very few homeowners actually built a fallout shelter. But Rod Serling, creator of the television series “The Twilight Zone,” saw an opportunity for pointed social commentary. Aired in the fall of 1961, the episode “The Shelter” showed how quickly civilization (epitomized by a suburban middle-class family and their friends) broke down over decisions about who would be saved and who would not.

Meanwhile, President John F. Kennedy had started to shift the national strategy from individual shelters to community shelters. At his instruction, the U.S. Army Corps of Engineers began surveying existing buildings suitable for public shelters. Post offices, especially ones with basements capable of housing at least 50 people, were a natural fit.

Each postmaster general was designated as the local shelter manager and granted complete authority to operate the shelter, including determining who would be admitted or excluded. The Handbook for Fallout Shelter Management gave guidance for everything from sleeping arrangements to sanitation standards. Shelters were stocked with food and water, medicine, and, of course, radiological survey instruments.

What to do in case of a nuclear attack

These community fallout shelters were issued a standard kit for radiation detection. The kit came in a cardboard box that contained two radiation monitors, the CD V-700 (a Geiger counter, pictured at top) and the CD V-715 (a simple ion chamber survey meter); two cigar-size CD V-742 dosimeters, to measure a person’s total exposure while wearing the device; and a charger for the dosimeters. Also included was the Handbook for Radiological Monitors, which provided instructions on how to use the equipment and report the results.

Photo of a cardboard box labeled “CD V-777 Radiological Defense Operational Set.” Post office fallout shelters were issued standard kits for measuring radioactivity after a nuclear attack. National Postal Museum/Smithsonian Institution

Black and white image of several pieces of equipment with the heading “Shelter Radiation Kit (CD V-777-1).” The shelter radiation kit included two radiation monitors, two cigar-size dosimeters, and a charger for the dosimeters. Photoquest/Getty Images

In the event of an attack, the operator would take readings with the CD V-715 at selected locations in the shelter. Then, within three minutes of finishing the indoor measurements, he would go outside and take a reading at least 25 feet (7.6 meters) from the building. If the radiation level outside was high, there were procedures for decontamination upon returning to the shelter. The “protection factor” of the shelter was calculated by dividing the outside reading by the inside reading. (Today the Federal Emergency Management Agency, FEMA, recommends a PF of at least 40 for a fallout shelter.) Operators were directed to retake the measurements and recalculate the protection factor at least once every 24 hours, or more frequently if the radiation levels changed rapidly.
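To make the arithmetic concrete, here is a minimal sketch of the operator’s calculation in Python, using invented readings; the function name and the dose-rate values are illustrative, not taken from the handbook.

```python
# A minimal sketch of the protection-factor arithmetic described above.
# The dose-rate readings are invented for illustration.

def protection_factor(outside_reading: float, inside_reading: float) -> float:
    """Ratio of the dose rate outside the shelter to the dose rate inside."""
    return outside_reading / inside_reading

outside = 40.0   # hypothetical reading taken 25 feet from the building, in R/hr
inside = 0.5     # hypothetical reading at a monitoring point inside, in R/hr

pf = protection_factor(outside, inside)
print(f"PF = {pf:.0f}")                        # PF = 80
print("Meets FEMA's minimum of 40:", pf >= 40)
```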

The CD V-700 was intended for detecting beta and gamma radiation during cleanup and decontamination operations, and also for detecting any radioactive contamination of food, water, and personnel.

RELATED: DIY Gamma-Ray Spectroscopy With a Raspberry Pi Pico

Each station would report its dose rates to a regional control center, so that the civil defense organization could determine when people could leave their shelter, where they could go, what routes to take, and what facilities needed decontamination. But if you’ve lived through a natural or manmade disaster, you’ll know that in the immediate aftermath, communications don’t always work so well. Indeed, the Handbook for Radiological Monitors acknowledged that a nuclear attack might disrupt communications. Luckily, the U.S. Post Office Department had a backup plan.

In May 1958, Postmaster General Arthur E. Summerfield made an appeal to all postal employees who happened to be licensed amateur radio operators, to form an informal network that would provide emergency communications in the event of the collapse of telephone and telegraph networks and commercial broadcasting. The result was Post Office Net (PON), a voluntary group of ham radio operators; by 1962, about 1,500 postal employees in 43 states had signed on. That year, PON was opened up to nonemployees who had the necessary license.

RELATED: The Uncertain Future of Ham Radio

Although PON was never activated due to a nuclear threat, it did transmit messages during other emergencies. For example, in January 1967, after an epic blizzard blanketed Illinois and Michigan with heavy snow, the Michigan PON went into action, setting up liaisons with county weather services and relaying emergency requests, such as rescuing people stranded in vehicles on Interstate 94.

Vintage Civil Defense exhibit with equipment and a sign urging enrollment. A 1954 civil defense fair featured a display of amateur radios. The U.S. Post Office recruited about 1,500 employees to operate a ham radio network in the event that regular communications went down. National Archives

The post office retired the network on 30 June 1974 as part of its shift away from civil defense preparedness. (A volunteer civil emergency-response ham radio network still exists, under the auspices of the American Radio Relay League.) And by 1977, laboratory tests indicated that most of the food and medicine stockpiled in post office basements was no longer fit for human consumption. In 1972 the Office of Civil Defense was replaced by the Defense Civil Preparedness Agency, which was eventually folded into FEMA. And with the end of the Cold War, the civil defense program officially ended in 1994, fortunately without ever being needed for a nuclear attack.

Do we still need civil defense?

The idea for this column came to me last fall, when I was doing research at the Linda Hall Library, in Kansas City, Mo., and I kept coming across articles about civil defense in magazines and journals from the 1950s and ’60s. I knew that the Smithsonian’s National Postal Museum, in Washington, D.C., had several civil defense artifacts (including the CD V-700 and a great “In Time of Emergency” public service announcement record album).

As a child of the late Cold War, I remember being worried by the prospect of nuclear war. But then the Cold War ended, and so did my fears. I envisioned this month’s column capturing the intriguing history of civil defense and the earnest preparations of the era. That chapter of history, I assumed, was closed.

Little did I imagine that by the time I began to write this, the prospect of a nuclear attack, if not an all-out war, would suddenly become much more real. These days, I understand the complexities and nuances of nuclear weapons much better than when I was a child. But I’m just as concerned that a nuclear conflict is imminent. Here’s hoping that history repeats itself, and it does not come to that.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2025 print issue.

References


The November 1951 issue of Electrical Engineering summarized a civil defense conference held at the General Electric Co.’s Electronics Park in Syracuse, N.Y., earlier that year. Two hundred eighty federal, state, county, and city officials from across the United States and Canada attended, which got me thinking about the topic.

Many of the government’s civil defense handbooks are available through the Internet Archive. The U.S. Postal Bulletins have also been digitized, and the USPS historian’s office wrote a great account, “The Postal Service’s Role in Civil Defense During the Cold War.” Although I’ve highlighted artifacts from the National Postal Museum, the Smithsonian Institution has many other objects across multiple museums.

Eric Green has been collecting civil defense material since 1978 and has made much of it available through his virtual Civil Defense Museum.

Alex Wellerstein, a historian of nuclear technology at the Stevens Institute of Technology, writes the Substack newsletter Doomsday Machines, where he gives thoughtful commentary on how we think about the end of times, in both fiction and reality. His interactive Nukemap is informative and scary.

STEM Immigration’s Impact on U.S. Workforce Diversity

21 July 2025 at 14:00


For decades, the United States has attracted students and employees from across the globe aiming to pursue careers in engineering and other STEM disciplines. Foreign-born individuals are a significant part of the U.S. workforce. In recent years, many policymakers and researchers have also sought to better understand and improve the racial and gender diversity of the STEM workforce, but these efforts have largely focused on domestic students.

Byeongdon (Don) Oh, an assistant professor of sociology and the director of the Diversity, Equity, Inclusion, and Belonging (DEIB) Research Center at SUNY Polytechnic Institute, hopes to gain insight into how immigration status intersects with efforts to foster a more diverse and inclusive STEM workforce. In a recent study, Oh examined data from a national survey of college graduates on race, gender, and immigration status in the U.S. STEM workforce. He found that many immigrants pursue STEM—about one-third of U.S. STEM graduates are foreign-born—but disparities by race and gender are more pronounced among these students than among their U.S.-born counterparts in higher education.

IEEE Spectrum spoke to Oh about the factors driving STEM immigration trends, racial disparities, and the future of STEM immigration. The following has been edited for length and clarity.

What is STEM immigration?

Byeongdon Oh: STEM immigration refers to the growing influx of foreign-born individuals seeking STEM degrees or careers in the United States. This increase is certainly influenced by individual choice, because individuals with STEM skills have a good chance at a good career and income in the United States. But it’s not only about individual choice. So many other social forces shape STEM immigration.

Higher-education institutions have attracted talented, foreign-born students to support institutional development and generate tuition revenue. Both international students majoring in STEM and high-ranked U.S. universities benefit from each other. There’s also a mutually favorable relationship between foreign-born individuals seeking STEM careers and U.S. employers. Policymakers and employers have expressed a continued need for more STEM workers to support economic growth.

The government also knows that, and the U.S. immigration laws have evolved to attract more students and workers with high-level STEM skills. For example, international students cannot work off-campus during their studies. But after graduation, they are allowed to work for one year through the Optional Practical Training program. STEM graduates are eligible for a two-year extension of this period. Many graduates apply for H-1B and permanent residency during that time. STEM immigration is not only an individual choice; it’s increased by many social and structural factors.

What did you find in your research?

Oh: My study finds that about 30 percent of STEM degree holders living in the United States are immigrants. Much of the discussion so far has focused on how immigration affects the U.S. economy and how increasing STEM immigration affects the salaries of native-born workers. This is the first study focusing on how STEM immigration affects the diversity profile of the U.S. STEM workforce.

Graphs show predicted probability by race, gender, and immigration for degree holders. Oh’s research divides college-educated immigrants into three groups: first generation, 1.25 generation, and 1.5 generation. Byeongdon Oh; National Survey of College Graduates

Compared to U.S.-born white graduates, immigrants—regardless of race—are just as likely, if not more so, to hold STEM degrees. However, race and gender disparities are more pronounced among immigrants than among U.S.-born graduates. The gap is already significant among U.S.-born individuals, but it’s even wider among immigrants.

I subdivided college-educated immigrants into first generation, 1.25 generation, and 1.5 generation. The first generation refers to immigrants who complete all of their education outside the United States. The second generation is born in the United States. In my study, 1.5 generation refers to immigrants who obtained a high school diploma in the United States. The 1.25 generation completed a high school diploma abroad but attended college in the United States. The race and gender gaps in STEM representation are actually widest among the 1.25 generation.

What do you think is causing these disparities?

Oh: This does not come from my data yet, but I suspect there are three major causes. The first originates in the country of origin: Like in the United States, racial and gender disparities can exist within the country of origin. And it’s not only inequality in education or STEM skills. The racial majority and men may also have a better chance to migrate to the United States.

The second factor stems from between-country inequalities. Many white and Asian immigrants come from countries in the Global North, where stronger economies and greater investment in R&D are generally associated with higher-quality STEM education.

The third factor relates to the U.S. immigration process. The immigration process is long, and racial minorities and women can be particularly vulnerable to socioeconomic struggles during this long waiting time. Also, employers may hold biases that certain racial groups are better for STEM, or that men are more qualified. That kind of stereotype or discrimination can have an effect. So these could be three major causes, but honestly, we don’t know which one plays the largest role.

Previously, we focused so much on [diversity in] K-12 STEM education and particularly native-born students. But as I said, the 1.25 generation has the widest gap, and there is a substantial volume. So without considering those immigrants, social interventions aimed at diversifying the U.S. STEM workforce will remain limited in their impact.

How can we better support international individuals?

Oh: We need collective social interventions and policy changes. You can think of a short-term and long-term strategy.

The short-term strategy is to include more immigrants in our policy discussion and debate. Many STEM students and workers are not just coming here as tourists and going back after one or two years. There is a high chance they will stay. If we really want to improve diversity and inclusion in the U.S. STEM workforce, we should include them and learn from their experiences to improve immigration policy.

And long term, we need better data collection. Many government datasets on the immigration process are inaccessible. Immigration researchers really want to have that data, but the government hasn’t granted access to it. Additionally, the federal government requires all higher-education institutions to report racial and ethnic profiles annually, using categories similar to those used in the census. But the federal guidelines for higher ed list international students in a separate category. If they are international students, they don’t count race or ethnicity. Many institutions collect that information, but when they report, they place all international students in one category. That’s one example of how we have overlooked race and diversity issues among immigrants.

With recent federal immigration policy changes, we’re seeing early indications that international students may be turning away from seeking higher education in the United States. How does that potential trend relate to your findings?

Oh: The recent policy changes may have short-term negative effects on STEM immigration. When prospective immigrants don’t believe they can successfully settle down in the United States, they may hesitate to start the process. If they see tension between their country and the United States, that can discourage them from pursuing education or employment here. In that way we will lose STEM talent.

In the longer term, I think STEM immigration will continue. There are factors drawing them, like the economy and education. The structural demand for high-skilled STEM students and workers is unlikely to disappear anytime soon.

During the first Trump presidency, many STEM immigrants, particularly with graduate degrees, continued to use National Interest Waivers [an exemption from job offer requirements for advanced degree workers applying for certain visas]. If you have STEM graduate degrees, this provides an expedited pathway to permanent residency. I remember it didn’t decrease. Although immigration is often portrayed in political discourse as a threat to jobs or public safety, having high-skilled immigrants helps economic growth. If we lose all STEM immigrants, domestic employers will have a problem.

What’s next for your research?

Oh: I’m pursuing two directions. One is focused on STEM degree holders and the likelihood of entering STEM occupations. Not all STEM degree holders have STEM jobs, and race and gender inequalities may contribute to this education-occupation mismatch. I want to see if these disparities differ by immigration status.

The second direction is qualitative interviews. In my institution, there are many international students and immigrant faculty members. I’m planning to conduct qualitative interviews with them. I’m also a visiting research professor at University of California, Berkeley, so I want to compare UC Berkeley and my institution. Ultimately, I hope this line of research can help reframe how we think about diversity—not just in terms of race or gender within the United States, but also across borders and generations.

The Violinist Who Fell in Love With Machine Learning

26 June 2025 at 14:00


Music and engineering might seem like career paths that are almost diametrically opposed. But for Javier Orman, the transition from professional violinist to machine learning engineer at LinkedIn was a surprisingly natural one.

Growing up in Montevideo, Uruguay, Orman excelled at both music and math, and he double-majored in the subjects at college. But music was his true passion, and after university he pursued a career as a professional violinist, performing, teaching, and helping to record and produce other artists.

Javier Orman


Employer:

LinkedIn

Occupation:

Machine learning engineer

Education:

Bachelor’s degrees in music and mathematics, College of Charleston; master’s degree in music, University of Michigan

But during the turmoil of the COVID-19 pandemic, a conversation with a grad-school friend who had given up music for software development piqued his curiosity. After taking some free online courses in Python and machine learning, he quickly became immersed in a fascinating new world of data and algorithms. It didn’t take long for him to realize he wanted to make a career of it.

Machine learning algorithms were “almost like magic” to Orman. “I became enamored by the methodology, the math behind it.”

A Double Prodigy

Orman’s unusual career trajectory can be traced back to his early childhood. Both of his parents were software engineers, and he grew up with computers around the house. But the family was also a musical one: His mother enjoyed playing the piano, and his father the trumpet.

Musician passionately playing a violin on stage. Orman’s path from music to machine learning shows how those with nontechnical backgrounds can succeed in software. Desmond “Des Money” Owusu/Instagram

His musical journey started at 4 years old when he saw a class of around 100 children playing violin as a group. Orman was captivated and immediately told his mother he wanted to play the violin too. By his teens he had begun touring with Uruguay’s national youth orchestra and entering music competitions. At the same time though, he had discovered a natural aptitude for math and was entering Math Olympiads. But Orman says math was essentially a hobby he did on the side while he focused on his music career.

After completing two degrees, in music and math, at the College of Charleston in South Carolina in 2006, Orman went on to a master’s degree in music at the University of Michigan. By 2009, he was playing in orchestras at Carnegie Hall in New York City and touring South America with a chamber music group.

Interested in pursuing a more creative avenue, Orman started composing music for short films and taught himself music production, eventually building a small studio where he would record and produce other artists. Over time, he built up a sustainable music career by combining this patchwork of creative projects with teaching violin.

A New Direction

In early 2020, as COVID-19 was upending the world, Orman found himself reevaluating his future and looking for new challenges. Near the start of the pandemic, he spoke to a friend from graduate school who had recently made the transition from professional violinist to software engineer. She told him about programming languages and what the career was like, and out of curiosity he decided to take an online Python course. Soon after he also began exploring the world of machine learning.

“I started taking online courses, but I also started looking for data on things that just interested me,” he says. For example, Orman created an animated heat map showing the rate of COVID-19 hospitalizations in each state. “Once I figured out how to make cool plots and investigate the data a bit more, that became actually fun to do.”

Within six months, Orman realized this was something he wanted to pursue as a career. “I noticed that I was having a hard time stopping for meal breaks or to go to sleep,” he says. “So I began to take it seriously.”

In April 2021, Orman got a job doing data preparation at New York City–based startup Koios Medical, which develops cancer-detection algorithms. But his big break came just a few months later when he discovered LinkedIn’s Reach apprenticeship program, which provides a way into the tech industry for people with nontraditional educational or career backgrounds. He applied and started as an apprentice in machine learning software engineering that July.

Learning Machine Learning

Orman was assigned to LinkedIn’s Feed AI team, which develops the recommendation algorithms that determine what posts a user is shown. There are multiple layers to this system, which gradually filters down millions of potential posts to determine those most of interest to a specific user.

In 2022, after one year in the Reach program, Orman was promoted to software engineer and now works on a model known as the “second-pass ranker,” the final layer of AI in this system. It decides what posts are most relevant to the user based on factors like their propensity to click or comment on similar kinds of posts.

Much of his work involves experimenting with new machine learning techniques or making small tweaks to the model to squeeze out extra performance. “It’s a pretty complex system,” says Orman. “It’s also a very mature system, so we measure gains in terms of tenths or hundredths of a percent.”
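The multistage design Orman describes can be pictured with a small sketch. The code below is not LinkedIn’s system; the features, weights, and candidate counts are invented to show how a cheap first pass narrows millions of posts before a second-pass ranker scores what remains on predicted engagement.

```python
# Illustrative two-stage feed ranking; all features and weights are made up.

def first_pass(posts, user, keep=500):
    # Cheap filter: the most recent posts from the user's own network.
    candidates = [p for p in posts if p["author"] in user["connections"]]
    return sorted(candidates, key=lambda p: p["age_hours"])[:keep]

def second_pass_score(post, user):
    # Stand-in for a learned model: blend the user's predicted click and
    # comment propensities for the post's topic.
    return (0.6 * user["click_propensity"].get(post["topic"], 0.0)
            + 0.4 * user["comment_propensity"].get(post["topic"], 0.0))

def rank_feed(posts, user, feed_length=20):
    candidates = first_pass(posts, user)
    ranked = sorted(candidates, key=lambda p: second_pass_score(p, user), reverse=True)
    return ranked[:feed_length]
```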

But he relishes the challenge and the push to continually learn new things. That’s something he believes his background in music, which requires constant dedication and practice, has set him up well to do.

There are also profound mathematical underpinnings to music, and Orman thinks those connections have helped in his new career. “Those intersections run deep and they are hard to describe,” he says. “But they do feel like they both tickle my brain in a particular way.”

And Orman has some advice for others coming to engineering from nontechnical backgrounds: Focus on developing an intuition for how a technology works before diving into the nitty-gritty details. “Spending time understanding and just getting a feel for how things work on an intuitive level just makes everything easier,” he says. “And then you start practicing the nuts and bolts.”

One day, he hopes to marry his two major passions by working on recommendation algorithms for music. In the meantime, he’s content for them to play separate but complementary roles in his life. “I like to take breaks from my job and play Bach,” he says. “It just feels like a nice balance to go back and forth between the two.”

Anti-Distraction Systems Shut Down Smartphone Use

11 June 2025 at 14:00


As mobile phone use continues to be a leading cause of vehicle accidents, a range of technologies designed to combat distracted driving has emerged. From mobile apps to hardware-integrated systems, these tools aim to limit phone use behind the wheel. But a closer look reveals significant differences in how effectively they prevent distractions—especially in fleet vehicles.

While apps like AT&T’s DriveMode and Apple’s built-in Do Not Disturb While Driving offer basic protections, they rely heavily on driver cooperation. Many can be bypassed with a swipe or a second phone, limiting their effectiveness when liability and safety are paramount.

“We think technologies that reduce visual-manual interaction with phones are obviously a good thing,” Ian Reagan, a senior research scientist at the Insurance Institute for Highway Safety, told IEEE Spectrum. “But most are opt-in. We’d like to see them as opt-out by default.”

Now, a new generation of anti-distraction technology is shifting from soft nudges to hard enforcement. And for companies managing fleets of drivers, the stakes—and the solutions—are getting more serious.

The Need for Enforceable Solutions

“There’s a difference between tools that monitor and tools that prevent,” says Ori Gilboa, CEO of SaverOne, a Tel Aviv–area startup leading a new wave of hardware-integrated solutions that make driver cooperation a nonissue. “That distinction matters when lives are on the line.”

SaverOne’s system uses a passive sensor network to scan the vehicle cabin for phones, identify the driver’s device, and place it into “safe mode”—automatically blocking risky apps while allowing essential functions like navigation and preapproved voice calls. Crucially, the system works even if the driver tries to cheat by disabling Bluetooth or by bringing a second phone.

Designed to Be Driverproof

The system consists of four small hidden sensors and a central receiver—about the size of an iPhone—installed inside the vehicle. It can pinpoint mobile devices to within centimeters and distinguish between driver and passenger phones. If the driver’s phone is active and doesn’t connect to SaverOne’s app, a buzzer sounds until the issue is resolved.
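As described, the enforcement boils down to a simple check over the phones the sensors can localize. The sketch below is purely illustrative; SaverOne has not published its implementation, and every class name and threshold here is hypothetical.

```python
# Hypothetical sketch of in-cabin phone enforcement; not SaverOne's code.
from dataclasses import dataclass

@dataclass
class Phone:
    distance_to_driver_seat_m: float   # position estimated by the sensor array
    connected_to_safety_app: bool      # phone has entered "safe mode"
    active: bool                       # screen on or apps in use

def cabin_check(phones: list[Phone], driver_zone_radius_m: float = 0.8) -> str:
    for phone in phones:
        in_driver_zone = phone.distance_to_driver_seat_m <= driver_zone_radius_m
        if in_driver_zone and phone.active and not phone.connected_to_safety_app:
            return "sound buzzer"      # persists until the driver's phone is in safe mode
    return "ok"

# A second phone on the passenger side (1.6 m away) is ignored; the active,
# unconnected phone near the driver still triggers the buzzer.
print(cabin_check([Phone(0.4, False, True), Phone(1.6, False, True)]))
```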

“What sets us apart is our prevention-first approach,” says Gilboa. “Most systems focus on what went wrong after the fact. We stop the distraction before it happens.”

Gilboa said the system’s design respects driver usability, preserving tools like turn-by-turn navigation and voice calls to approved contacts. “We want drivers to be reachable—but not distracted,” he adds.

Global Expansion, Measurable Impact

Since launching its second-generation product in 2022, SaverOne has rapidly expanded. After early pilot deployments with Israeli fleet operators such as Bynet Data Communications, Israel Electric Corporation, and ice-cream purveyor Froneri, the company gained traction, securing deals with a broader array of Israeli companies. By mid-2023, Cemex Israel, the global cement giant’s local subsidiary, had agreed to deploy the driver-distraction-prevention system on its 380-vehicle fleet. In January 2024, following a successful trial with 17 trucks, Strauss Group, one of Israel’s largest food and beverage companies, decided to install the SaverOne system on its fleet of 80 food-delivery trucks. Though smaller than the Cemex Israel contract, that agreement proved significant because Strauss accumulated data demonstrating a statistically significant reduction in accident rates among the equipped vehicles. That news has helped SaverOne in its bid to go global. CEMEX has since outfitted trucks in fleets across Europe. In the United States, SaverOne is now being adopted by FedEx contractors in North Carolina and Philadelphia, says Gilboa.

Some fleet operators report as much as a 60 percent reduction in accident rates post-installation. While those figures are difficult to verify independently, a more concrete metric is phone interaction. Fleet managers have observed a dramatic drop—from drivers checking their phones 10 times per hour to near zero.

“The system educates through behavior,” says Gilboa. “It’s not about punishment—it’s about making the right choice automatic.”

But Reagan cautions that long-term behavioral change remains unproven, comparing it to early intelligent speed assistance trials in Europe using systems that detected vehicles’ locations, used digital maps to keep track of local speed limits, and reduced engine power to prevent the vehicles from exceeding the legal limit. “When the limiter was on,” Reagan says, “people obeyed the posted speed limits. When it was turned off, they sped again. Whether tech like this [driver-distraction-prevention system] creates lasting change—well, we just don’t know yet.”

Could Regulation Be the Tipping Point?

Despite promising results, broader adoption—particularly in the consumer market—may hinge on regulation. IIHS’s Reagan notes that although distracted driving officially accounts for about 10 percent of crash fatalities, or roughly 3,500 deaths per year, the real figure is likely far higher. Despite the undercount, the urgency is still hard to ignore. As Reagan put it, “Phones let you mentally escape the car, even when you’re barreling down the highway at 115 kilometers per hour [about 70 miles per hour]. That’s the real danger.”

He adds that government regulation requiring carmakers to install systems like SaverOne’s could be a game changer. “The tech exists,” Reagan said. “What we need is the political will to mandate it.”

SaverOne is still focused on fleet customers, but the company is in discussions with insurers exploring offering discounts to young or high-risk drivers who use distraction-prevention systems, Gilboa says.

“Mobile use while driving is an addiction,” he says. “We needed a system that prevents distraction without waiting for the driver to choose safety. That’s what we built.”

Three Steps to Stopping Killer Asteroids

11 June 2025 at 12:00


Impact was imminent. Occasional gasps arose as the asteroid took shape and a jagged, rocky surface filled the view. Then the images abruptly stopped.

The mission control room at Johns Hopkins University Applied Physics Lab in Laurel, Md., erupted in cheers. “We have impact!” said the lead engineer, who gave a two-handed high five to a nearby colleague. Others waved their hands in the air in victory and slapped each other on the back.

This had been a test, and humanity had passed it, taking one crucial step closer to protecting Earth from an asteroid impact. The test was the culmination of NASA’s Double Asteroid Redirection Test (DART) mission, for which I was the coordination lead. On 26 September 2022, the DART spacecraft had successfully crashed into Dimorphos, a roughly 150-meter-diameter asteroid that was 11 million kilometers from Earth. The collision nudged the asteroid and modified its trajectory.

Diagram of DART impacting Dimorphos, altering its orbit around asteroid Didymos. In 2022, NASA’s Double Asteroid Redirection Test slammed a golf-cart-size spacecraft, DART, into the near-Earth asteroid Dimorphos (1). DART—which first deployed a small observer craft, LICIACube, to observe the collision (2)—bumped Dimorphos’s trajectory (3) enough to alter its future course (4). GyGinfographics; Source: NASA

The celebrations in the control room were the culmination of years of effort to prove that the momentum from a golf-cart-size spacecraft can alter an asteroid’s future path. And DART’s collision with asteroid Dimorphos kicked off a new era in space exploration, in which technologies for planetary defense are now taking shape.

If one day an asteroid like Dimorphos is discovered to be headed toward Earth, an interceptor craft like DART could collide with the asteroid years in advance to avert disaster. Here’s how that might work.

Step 1: Find and Track Near-Earth Asteroids

The first step in averting an asteroid impact with Earth is just to know what near-Earth objects (NEOs) are out there.

The University of Hawaii’s Asteroid Terrestrial-impact Last Alert System (ATLAS) station, in Chile, plays a critical role in these observations of NEOs, asteroids whose orbits bring them close to Earth’s. In late December, it detected a previously unknown NEO during a routine sweep of the skies. The asteroid was given the name 2024 YR4, following the standard astronomical convention for new objects. “2024 Y” represents the 24th half-month of the year 2024—that is, 16 to 31 December. The “R4” encodes the sequence of discovery—in this case, that it was the 117th object found during the year’s final couple of weeks.
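The designation can be unpacked mechanically. Below is a minimal sketch of that decoding, written to match the convention as described here; it is illustrative and ignores the scheme’s edge cases.

```python
# Decode a provisional minor-planet designation such as "2024 YR4".
# The letter I is never used; the first letter gives the half-month, and the
# second letter plus the trailing number give the order of discovery.

LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"

def decode_designation(designation: str):
    year, code = designation.split()
    half_month = LETTERS[:24].index(code[0]) + 1   # 1 = first half of January
    cycles = int(code[2:] or 0)                    # completed runs through the alphabet
    sequence = cycles * 25 + LETTERS.index(code[1]) + 1
    return year, half_month, sequence

print(decode_designation("2024 YR4"))   # ('2024', 24, 117): 24th half-month, 117th object
```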

Hera 


Illustration of a yellow satellite with two blue solar panels deployed.

This European Space Agency mission will rendezvous with the Didymos–Dimorphos asteroid system and study the aftereffects of NASA’s DART impact close up.

Launch:

2024

Rendezvous:

2026


Until that point in the year, more than 3,000 NEOs had already been discovered. Nothing about 2024 YR4 initially stood out as concerning. It was a seemingly run-of-the-mill asteroid. However, further observations soon suggested it wasn’t ordinary at all.

Throughout the first weeks of 2025, the probability of a 2024 YR4 collision with Earth kept growing. On 29 January, astronomers calculated its odds of eventual impact to be 1.3 percent. And in crossing the 1 percent threshold, 2024 YR4 triggered an alert from the International Asteroid Warning Network to the United Nations’ Office for Outer Space Affairs about the potential impact. Such alerts are posted publicly on the IAWN’s website. The 29 January notice assessed the regions of the planet at highest risk from 2024 YR4 (also known as its risk corridor), as well as the expected damage if the asteroid did crash into Earth.

On average, an object of 2024 YR4’s size—estimated at 60 meters across—slams into our planet once every thousand years. It’s considered a “city-killer” asteroid—not big enough to trigger a mass extinction, like the estimated 10-km one that likely killed the dinosaurs, but still big enough to be deadly up to roughly 50 km from the impact location. Fortunately, by 24 February, further observations by telescopes across the globe had refined the asteroid’s trajectory enough to rule out near-term Earth impact.

Yet when it comes to asteroids and Earth, there won’t always be such an uncomplicated, happy ending. Another asteroid that size or even larger will eventually be on a collision course with the planet [see chart below].


Chart: Near-Earth object threats compared by size, showing impact frequency, damage, energy, and the percentage discovered.


The world’s space agencies track an estimated 95 percent of NEOs greater than 1 km in diameter. The International Asteroid Warning Network and a related Space Mission Planning Advisory Group (SMPAG) are global coordinating bodies that monitor these efforts. And thankfully, none of the giant NEOs being tracked pose an impact risk to Earth for at least the next hundred years. (Meanwhile, comet impacts with Earth are even rarer than those of asteroids.)

But you can only track the NEOs that are known. And plenty of city-killer asteroids remain lurking and undiscovered, potentially still posing a real risk to life on the planet. In the 50-meter range, a meager 7 percent of NEOs have been found. That’s not for lack of trying; small asteroids are simply harder to find because they appear dimmer than larger ones.

NASA’s & FEMA’s 2024 Planetary-Defense Exercise


Last year, NASA and the Federal Emergency Management Agency sponsored an Interagency Tabletop Exercise around a hypothetical asteroid impact threat. In the fictional scenario, telescope observations detected an NEO, yielding a 72 percent chance of the object colliding with Earth in 2038. I served as a facilitator for this tabletop exercise, which aimed to further discussion and to provide opportunities to stress-test new approaches to a realistic “killer asteroid” scenario.

One complicating factor we introduced was that the NEO’s size remained alarmingly difficult to pin down: Was it a 60-meter city killer? Or was it an 800-meter object that could devastate a country? If the latter, it would have risked the lives and livelihoods of more than 10 million people.

To keep the exercise focused, we centered it around the hypothetical NEO’s detection—and the decisions and next actions that would follow. We tracked the unfolding discussions and decisions in the aftermath of detection. Several U.S. agencies and organizations participated, as did the U.N. Office of Outer Space Affairs and international partners.

One of the key gaps identified was the limited readiness to quickly deploy space missions for reconnaissance of the asteroid threat and for preventing Earth impact. The scenario’s large uncertainties underscored the need for capabilities to rapidly obtain better information about the asteroid.

“I know what I would prefer [to do],” said one anonymous participant quoted in the exercise’s quick-look report. “But Congress will tell us to wait.” —N.L.C.


New hardware is clearly needed. Sometime soon, the Vera C. Rubin Observatory, in Chile, is expected to see first light. The observatory will survey the entire visible sky every few nights, through a 3,200-megapixel camera on an 8.4-meter telescope. No Earth-based telescope in the history of the NEO hunt can match its capabilities. Adding to our NEO search will be NASA’s NEO Surveyor, an infrared space telescope scheduled to launch as soon as 2027. Together, the two new facilities are expected to discover thousands of new-to-us near-Earth asteroids. For objects 140 meters and larger, the two telescopes will locate an anticipated 90 percent of the entire population.

Once an NEO has been discovered, astronomers routinely track its orbit and extrapolate its trajectory over the coming century. So any NEO already on the books (for example, in NASA’s database or ESA’s database) is quite likely to come with decades of warning. Ideally, that should leave ample time to develop and deploy a spacecraft to learn more about it and redirect the wayward space rock if necessary.

Step 2: Send an NEO Reconnaissance Mission

Imagine that the probability of 2024 YR4 colliding with Earth rose instead of fell, with the estimated impact to take place sometime in 2032. Here’s why that would have been especially worrying.

Asteroid 2024 YR4’s elongated orbit made it unobservable from Earth after mid-May of this year. So we wouldn’t have been able to see it with even the most sensitive telescopes until its next swing through our region of the solar system—around June 2028.

In that alternate universe, we would’ve had to wait three years to launch a reconnaissance mission to study the object up close. Only then would we have known the next steps to take to redirect the asteroid away from Earth before its fated visit four years later.

As it happens, SMPAG held preliminary discussions about 2024 YR4 in late January and early February. However, because the asteroid’s risk of collision with Earth soon dwindled to zero, the group didn’t develop specific recommendations.

Hayabusa2#


Illustration of a yellow satellite with blue solar panels in space.

The Japan Aerospace Exploration Agency has extended a previous mission (Hayabusa2) to encounter two more near-Earth asteroids over the next six years.

Flyby:

2026

Rendezvous:

2031


DART would have provided a foundation for a 2028 reconnaissance mission, as would NASA’s Lucy mission, which flew past the asteroid Dinkinesh in 2023. Reconnaissance flybys provide as little as a few precious seconds to capture the needed data about the target asteroid. Of course, inserting a reconnaissance craft into orbit around the asteroid would allow more detailed measurements. However, few NEO trajectories offer the opportunity for any maneuver other than a flyby—especially when time is of the essence.

Whatever the trajectory, the most important question for a reconnaissance mission would be whether the asteroid was in fact on a collision course with Earth in 2032. If so, where on the planet would it hit? That future impact location could potentially be narrowed down to within a hundred kilometers.

The mission might also uncover some complications. For starters, we might discover that the asteroid is actually plural. Some 15 percent of NEOs are believed to have secondary objects orbiting them—they’re asteroids with moons. And some asteroids are essentially a flying jumble of rocks.

Another wrinkle comes in determining the asteroid’s mass. We need to know the mass to calculate the damage it could cause on impact, as well as the oomph required to divert it.

Unfortunately, the technology to measure the mass of a city-killer asteroid doesn’t exist. The mass of a larger, kilometer-size asteroid is measured by determining the gravitational pull on the reconnaissance spacecraft, but that trick doesn’t work for smaller asteroids. Right now, the best we can do is estimate the mass by measuring the asteroid’s physical size from closeup imaging during a flyby and then inferring the composition.
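In practice, that estimate is little more than geometry plus a guessed bulk density. A rough sketch follows, with numbers chosen purely for illustration.

```python
# Back-of-the-envelope asteroid mass estimate from a measured diameter and an
# assumed bulk density; both inputs carry large uncertainties in practice.
import math

def estimated_mass_kg(diameter_m: float, density_kg_m3: float) -> float:
    radius = diameter_m / 2
    volume = (4 / 3) * math.pi * radius**3   # treat the body as a sphere
    return volume * density_kg_m3

# A 60-meter rocky body at an assumed 2,000 kg per cubic meter comes out
# near 2.3e8 kg, but shape and composition can swing this by a factor of a few.
print(f"{estimated_mass_kg(60, 2000):.2e} kg")
```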

These challenges will need to be mastered in time for the reconnaissance mission, as the spacecraft—traveling at up to 90,000 kilometers per hour—flies past the potentially irregularly shaped object or objects half-shrouded in darkness. So it probably makes sense to tackle those challenges now rather than waiting until an actual threat emerges.

Step 3: Change NEO’s Course With Interceptor

If the reconnaissance mission does conclude that a killer asteroid is on the way and narrows down the date of impact, then what? Returning to 2024 YR4, that might make 22 December 2032 a very bad day for one city-size region of the planet. Even if it fell in the ocean, we’d need to look at geological and oceanic computer models to forecast the tsunami risk. If that risk is small, then world leaders and NEO advisors might opt to let the asteroid proceed.

On the other hand, if the asteroid is on course to strike a highly populated area, then launching a spacecraft to deflect the asteroid and prevent impact might be warranted.

NEO Surveyor 


Diagram of the EM Spectrum Explorer satellite design with shaded components.

NASA’s infrared space telescope has been designed to detect and track near-Earth object (NEO) asteroids that are potentially hazardous to Earth.

Launch:

as early as 2027

Here, lessons from DART are instructive. For one thing, a spacecraft impact can pack only so much punch. It’s unclear whether a deflection spacecraft the size of DART would be able to nudge a 2024 YR4–like asteroid with enough force to avoid Earth. It’s also possible the impactor’s nudge could inadvertently cause the asteroid to land in an even worse spot, inflicting more damage. And if the asteroid is only weakly held together, a DART-like collision might break it into multiple, smaller rubble piles—one or more of which could still reach Earth. So any kind of deflection mission has to be carefully considered.
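The back-of-the-envelope physics helps explain that caution. In the simplest model, the velocity change scales with the spacecraft’s momentum divided by the asteroid’s mass, boosted by a poorly known momentum-enhancement factor from the ejecta; every number below is an assumption for illustration, not a mission value.

```python
# Kinetic-impactor deflection, simplest possible model. The factor beta
# accounts for extra momentum carried away by ejecta; all inputs are assumed.

def deflection_delta_v(m_spacecraft_kg, v_rel_m_s, m_asteroid_kg, beta=1.0):
    """Velocity change imparted to the asteroid, in meters per second."""
    return beta * m_spacecraft_kg * v_rel_m_s / m_asteroid_kg

# A ~600-kg spacecraft hitting a ~5e9-kg asteroid at 6 km/s with beta of 3
# changes its speed by about 2 millimeters per second, which is why the
# nudge has to come years before the predicted impact date.
print(deflection_delta_v(600, 6_000, 5e9, beta=3))   # ~0.0022 m/s
```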

Other asteroid defense technologies are also worth considering. These other options are still untested, but we might as well get started now, while nothing’s yet at stake.

If you have decades of lead time, for instance, a rendezvous spacecraft could be dispatched to orbit the killer asteroid and slowly and continually act on it. Researchers have suggested using such a spacecraft’s gravity to tug the asteroid off its path or ion-beam engines to gradually push it. The spacecraft could use one or both techniques over the span of years or decades to cause a large enough change in the asteroid’s trajectory to prevent Earth impact.

But if time is short, there are far fewer options. If the situation is dire enough, with a monster asteroid likely heading for a populated area, then using a nuclear explosive to break up or divert the asteroid could be on the table. That’s the premise of the 1998 blockbuster Armageddon (as well as the 2021 Netflix satire Don’t Look Up). Absurd, yes, but worth considering if you’re otherwise out of options.

Of course, the whole idea of planetary defense is to have options and to do as much advance preparation as possible. A number of countries have planetary-defense missions currently in space or planned in the next few years.

ESA’s Hera mission launched last year and is on its way to rendezvous late next year with the asteroid system that DART struck, to investigate the aftermath of DART’s 2022 deflection test. The Japan Aerospace Exploration Agency’s Hayabusa2# is set to fly by an NEO in 2026 and rendezvous with a different asteroid in 2031. It’s the next chapter of JAXA’s original Hayabusa2 mission, which brought back samples of the asteroid Ryugu in 2020. China plans to perform a kinetic impactor demonstration similar to DART, with an observer spacecraft to watch, scheduled to launch in 2027.

And in 2029, a 340-meter asteroid called Apophis—after the Egyptian god of chaos and darkness—will pass within 32,000 km of Earth, which is closer than some geosynchronous satellites. This will happen on 13 April 2029—Friday the 13th, that is. Apophis won’t hit Earth, but its close pass has prompted the U.N. to designate 2029 the International Year of Asteroid Awareness and Planetary Defense. The asteroid will be bright enough to be seen by the naked eye across parts of Europe, Asia, and Africa. And NASA has redirected its OSIRIS-REx spacecraft (which returned samples of the asteroid Bennu to Earth in 2023) to rendezvous with Apophis. The renamed OSIRIS-APEX mission will give astronomers an important opportunity to further refine how we measure and characterize NEO asteroids.

While NEO researchers will continue to collect new data and develop new insights and perspectives, leading toward, we hope, better and stronger planetary defense, one perennial will hold as true in the future as it does today: In this very high-stakes game, you never get to pick the asteroid. The asteroid always picks you.

IEEE’s 5 New E-Books Provide On-ramp to Engineering

9 June 2025 at 18:00


As the home for IEEE’s preuniversity resources, activities, and hands-on experiences, TryEngineering serves as a hub for educators, parents, and IEEE volunteers to teach school-age children about engineering.

With support from IEEE partners, TryEngineering has launched a series of e-books. Bolstered by input from IEEE members who are experts in their field, the e-books use open-source, free materials written to teach complex engineering topics in an age-appropriate way. Visually appealing, the books use colorful charts and graphs to grab children’s attention.

Each of the five English-language publications provides an overview of a technology or topic. The books include stories about engineers, technologists, and early pioneers.

Engineering disciplines, solutions, and ethics

Engineers Make the World a Better Place was created with funding from the IEEE New Initiatives Committee. The book introduces students to engineering disciplines and explains how engineers improve society by solving challenging problems, such as improving access for children with limited physical mobility.

With support from Onsemi’s Giving Now program, IEEE semiconductor experts wrote Microchip Adventures: A Journey Into the World of Semiconductors. It includes an introduction to the field, a list of commonly used terms, an explanation of how chips are made, and an overview of the technology’s history.

Wave Wonders: A Signal Processing Journey was written with experts from the IEEE Signal Processing Society. It teaches students how to tell the difference between digital and analog signals. The e-book introduces readers to the inventor of the telegraph, Samuel Morse. Also included is the Electric Messages lesson plan, which explains how early telegraphs worked.

Ocean Engineering Heroes: Making the Oceans and the World a Better Place was created in partnership with the IEEE Oceanic Engineering Society. It includes video interviews with several society leaders about oceans and ways to help keep them clean. It also discusses the impact of pollution, including sound pollution from ships and other sources. Included are links to resources from other organizations, such as the National Oceanic and Atmospheric Administration.

AI Adventures: Exploring the World of Artificial Intelligence was written with assistance from the IEEE Computer Society. The publication describes how AI models work and explains commonly used terms including machine learning and neural networks. The book covers the importance of ethics when using AI.

Visit the TryEngineering website for the e-books and many other resources for educators, parents, and volunteers. To help expand the site’s pool of offerings, consider donating to the IEEE TryEngineering Fund.

Doctors Could Hack the Nervous System With Ultrasound

9 June 2025 at 13:00


Inflammation: It’s the body’s natural response to injury and infection, but medical science now recognizes it as a double-edged sword. When inflammation becomes chronic, it can contribute to a host of serious health problems, including arthritis, heart disease, and certain cancers. As this understanding has grown, so too has the search for effective ways to manage harmful inflammation.

Doctors and researchers are exploring various approaches to tackle this pervasive health issue, from new medications to dietary interventions. But what if one of the most promising treatments relies on a familiar technology that’s been in hospitals for decades?

Enter focused ultrasound stimulation (FUS), a technique that uses sound waves to reduce inflammation in targeted areas of the body. It’s a surprising new application for ultrasound technology, which most people associate with prenatal checkups or diagnostic imaging. And FUS may help with many other disorders too, including diabetes and obesity. By modifying existing ultrasound technology, we might be able to offer a novel approach to some of today’s most pressing health challenges.

Our team of biomedical researchers at the Institute of Bioelectronic Medicine (part of the Feinstein Institutes for Medical Research), in Manhasset, N.Y., has made great strides in learning the electric language of the nervous system. Rather than treating disease with drugs that can have broad side effects throughout the body, we’re learning how to stimulate nerve cells, called neurons, to intervene in a more targeted way. Our goal is to activate or inhibit specific functions within organs.

The relatively new application of FUS for neuromodulation, in which we hypothesize that sound waves activate neurons, may offer a precise and safe way to provide healing treatments for a wide range of both acute and chronic maladies. The treatment doesn’t require surgery and potentially could be used at home with a wearable device. People are accustomed to being prescribed pills for these ailments, but we imagine that one day, the prescriptions could be more like this: “Strap on your ultrasound belt once per day to receive your dose of stimulation.”

How Ultrasound Stimulation Works

Ultrasound is a time-honored medical technology. Researchers began experimenting with ultrasound imaging in the 1940s, bouncing low-energy ultrasonic waves off internal organs to construct medical images, typically using intensities of a few hundred milliwatts per square centimeter of tissue. By the late 1950s, some doctors were using the technique to show expectant parents the developing fetus inside the mother’s uterus. And high-intensity ultrasound waves, which can be millions of milliwatts per square centimeter, have a variety of therapeutic uses, including destroying tumors.

The use of low-intensity ultrasound (with intensities similar to that of imaging applications) to alter the activity of the nervous system, however, is relatively unexplored territory. To understand how it works, it’s helpful to compare FUS to the most common form of neuromodulation today, which uses electric current to alter the activity of neurons to treat conditions like Parkinson’s disease. In that technique, electric current increases the voltage inside a neuron, causing it to “fire” and release a neurotransmitter that’s received by connected neurons, which triggers those neurons to fire in turn. For example, the deep brain stimulation used to treat Parkinson’s activates certain neurons to restore healthy patterns of brain activity.

How It Works


Neuron impulse transmission showing ion flow through cell membrane.


In FUS, by contrast, the sound waves’ vibrations interact with the membrane of the neuron, opening channels that allow ions to flow into the cell, thus indirectly changing the cell’s voltage and causing it to fire. One promising use is transcranial ultrasound stimulation, which is being tested extensively as a noninvasive way to stimulate the brain and treat neurological and psychiatric diseases.

We’re interested in FUS’s effect on the peripheral nerves—that is, the nerves outside the brain and spinal cord. We think that activating specific nerves in the abdomen that regulate inflammation or metabolism may help address the root causes of related diseases, rather than just treating the symptoms.

FUS for Inflammation

Inflammation is something that we know a lot about. Back in 2002, Kevin Tracey, currently the president and CEO of the Feinstein Institutes, upset the conventional wisdom that the nervous system and the immune system operate independently and serve distinct roles. He discovered the body’s inflammatory reflex: a two-way neural circuit that sends signals between the brain and body via the vagus nerve and the nerves of the spleen. These nerves control the release of cytokines, which are proteins released by immune cells to trigger inflammation. Tracey and colleagues found that stimulating nerves in this neural circuit suppressed the inflammatory response. The discoveries led to the first clinical trials of electrical neuromodulation devices to treat chronic inflammation and launched the field of bioelectronic medicine.

Hacking the Immune System


Ultrasound transducer scans kidney, showing bacteria spreading and evading immune response.


Tracey has been a pioneer in treating inflammation with vagus nerve stimulation (VNS), in which electrical stimulation of the vagus nerve activates neurons in the spleen. In animals and humans, VNS has been shown to reduce harmful inflammation in both chronic diseases such as arthritis and acute conditions such as sepsis. But direct VNS requires surgery to place an implant in the body, which makes it risky for the patient and expensive. That’s why we’ve pursued noninvasive ultrasound stimulation of the spleen.

Working with Tracey, collaborators at GE Research, and others, we first experimented with rodents to show that ultrasound stimulation of the spleen affects an anti-inflammatory pathway, just as VNS does, and reduces cytokine production as much as a VNS implant does. We then conducted the first-in-human trial of FUS for controlling inflammation.

We initially enrolled 60 healthy people, none of whom had signs of chronic inflammation. To test the effect of a 3-minute ultrasound treatment, we measured the amount of a molecule called tumor necrosis factor (TNF), which is a biomarker of inflammation that’s released when white blood cells go into action against a perceived pathogen. At the beginning of the study, 40 people received focused ultrasound stimulation, while 20 others, serving as the control group, simply had their spleens imaged by ultrasound. Yet, when we looked at the early data, everyone had lower levels of TNF, even the control group. It seemed that even imaging with ultrasound for a few minutes had a moderate anti-inflammatory effect! To get a proper control group, we had to recruit 10 more people for the study and devise a different sham experiment, this time unplugging the ultrasound machine.


After the subjects received either the real or sham stimulation, we took blood samples from all of them. We next simulated an infection by adding a bacterial toxin to the blood in the test tubes, then measured the amount of TNF released by the white blood cells to fight the toxin. The results, which we published in the journal Brain Stimulation in 2023, showed that people who had received FUS treatments had lower levels of TNF than the true control group. We saw no problematic side effects of the ultrasound: The treatment didn’t adversely affect heart rate, blood pressure, or the many other biomarkers that we checked.

The results also showed that when we repeated the blood draw and experiment 24 hours later, the treatment groups’ TNF levels had returned to baseline. This finding suggests that if FUS becomes a treatment option for inflammatory diseases, people might require regular, perhaps even daily, treatments.

One surprising result was that it didn’t seem to matter which location within the spleen we targeted—all the locations we tried produced similar results. Our hypothesis is that hitting any target within the spleen activates enough nerves to produce the beneficial effect. What’s more, it didn’t matter which energy intensity we used. We tried intensities ranging from about 10 to 200 milliwatts per square centimeter, well within the range of intensities used in ultrasound imaging; remarkably, even the lowest intensity level caused subjects’ TNF levels to drop.

Our big takeaway from that first-in-human study was that targeting the spleen with FUS is not just a feasible treatment but could be a game changer for inflammatory diseases. Our next steps are to investigate the mechanisms by which FUS affects the inflammatory response, and to conduct more animal and human studies to see whether prolonged administration of FUS to the spleen can treat chronic inflammatory diseases.

FUS for Obesity and Diabetes

For much of our research on FUS, we’ve partnered with GE Research, whose parent company is one of the world’s leading makers of ultrasound equipment. One of our first projects together explored the potential of FUS as a treatment for the widespread inflammation that often accompanies obesity, a condition that now affects about 890 million people around the world. In this study, we fed lab mice a high-calorie and high-fat “Western diet” for eight weeks. During the following eight weeks, half of them received ultrasound stimulation while the other half received daily sham stimulation. We found that the mice that received FUS had lower levels of cytokines—and to our surprise, those mice also ate less and lost weight.

In related work with our GE colleagues, we examined the potential of FUS as a treatment for diabetes, which now affects 830 million people around the world. In a healthy human body, the liver stores glucose as a reserve and releases it only when it registers that glucose levels in the bloodstream have dropped. But in people with diabetes, this sensing system is dysfunctional, and the liver releases glucose even when blood levels are already high, causing a host of health problems.

Hacking the Metabolic System


[Illustration: An ultrasound scan diagram showing the connection between the brain and the liver through neural pathways.]

For diabetes, our ultrasound target was the network of nerves that transmit signals between the liver and the brain: specifically, glucose-sensing neurons in the porta hepatis, which is essentially the gateway to the liver. We gave diabetic rats 3-minute daily ultrasound stimulation over a period of 40 days. Within just a few days, the treatment brought down the rats’ glucose levels from dangerously high to normal range. We got similar results in mice and pigs, and published these exciting results in 2022 in Nature Biomedical Engineering.

Those diabetes experiments shed some light on why ultrasound had this effect. We decided to zero in on a brain region called the hypothalamus, which controls many crucial automatic body functions, including metabolism, circadian rhythms, and body temperature. Our colleagues at GE Research started investigating by blocking the nerve signals that travel from the liver to the hypothalamus in two different ways—both cutting the nerves physically and using a local anesthetic. When we then applied FUS, we didn’t see the beneficial decrease in glucose levels. This result suggests that the ultrasound treatment works by changing glucose-sensing signals that travel from the liver to the brain—which in turn changes the commands the hypothalamus issues to the metabolic systems of the body, essentially telling them to lower glucose levels.

The next steps in this research involve both technical development and clinical testing. Currently, administering FUS requires technical expertise, with a sonographer looking at ultrasound images, locating the target, and triggering the stimulation. But if FUS is to become a practical treatment for a chronic disease, we’ll need to make it usable by anyone and available as an at-home system. That could be a wearable device that uses ultrasound imaging to automatically locate the anatomical target and then delivers the FUS dose: All the patient would have to do is put on the device and turn it on. But before we get to that point, FUS treatment will have to be tested clinically in randomized controlled trials for people with obesity and diabetes. GE HealthCare recently partnered with Novo Nordisk to work on the clinical and product development of FUS in these areas.

FUS for Cardiopulmonary Diseases

FUS may also help with chronic cardiovascular diseases, many of which are associated with immune dysfunction and inflammation. We began with a disorder called pulmonary arterial hypertension, a rare but incurable disease in which blood pressure increases in the arteries within the lungs. At the start of our research, it wasn’t clear whether inflammation around the pulmonary arteries was a cause or a by-product of the disease, and whether targeting inflammation was a viable treatment. Our group was the first to try FUS of the spleen in order to reduce the inflammation associated with pulmonary hypertension in rats.

The results, published last year, were very encouraging. We found that 12-minute FUS sessions reduced pulmonary pressure, improved heart function, and reduced lung inflammation in the animals in the experimental group (as compared to animals that received sham stimulation). What’s more, in the animals that received FUS, the progression of the disease slowed significantly even after the experiment ended, suggesting that this treatment could provide a lasting effect.

One day, an AI system might be able to guide at-home users as they place a wearable device on their body and trigger the stimulation.

This study was, to our knowledge, the first to successfully demonstrate an ultrasound-based therapy for any cardiopulmonary disease. And we’re eager to build on it. We’re next interested in studying whether FUS can help with congestive heart failure, a condition in which the heart can’t pump enough blood to meet the body’s needs. In the United States alone, more than 6 million people are living with heart failure, and that number could surpass 8 million by 2030. We know that inflammation plays a significant role in heart failure by damaging the heart’s muscle cells and reducing their elasticity. We plan to test FUS of the spleen in mice with the condition. If those tests are successful, we could move toward clinical testing in humans.

The Future of Ultrasound Stimulation

We have one huge advantage as we think about how to bring these results from the lab to the clinic: The basic hardware for ultrasound already exists, it’s already FDA approved, and it has a stellar safety record through decades of use. Our collaborators at GE have already experimented with modifying the typical ultrasound devices used for imaging so that they can be used for FUS treatments.

Once we get to the point of optimizing FUS for clinical use, we’ll have to determine the best neuromodulation parameters. For instance, what are the right acoustic wavelengths and frequencies? Ultrasound imaging typically uses higher frequencies than FUS does, but human tissue absorbs more acoustic energy at higher frequencies than it does at lower frequencies. So to deliver a good dose of FUS, researchers are exploring a wide range of frequencies. We’ll also have to think about how long to transmit that ultrasound energy to make up a single pulse, what rate of pulses to use, and how long the treatment should be.
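As a rough illustration of how those dosing parameters interact, here is a small calculation with hypothetical pulse values that are not drawn from any clinical protocol. It shows how the burst length and pulse rate set the duty cycle, and how the duty cycle scales the time-averaged intensity and the total energy delivered in one session.

```python
# Illustrative arithmetic only: hypothetical pulse parameters, not a clinical protocol.

pulse_duration_s = 200e-6       # length of one ultrasound burst (200 microseconds, assumed)
pulse_rate_hz = 500             # bursts per second (assumed)
peak_intensity_mw_cm2 = 200.0   # pulse intensity, at the top of the 10-200 mW/cm^2 study range
treatment_time_s = 3 * 60       # a 3-minute session, as in the spleen study

# Duty cycle: the fraction of time the transducer is actually emitting.
duty_cycle = pulse_duration_s * pulse_rate_hz

# Time-averaged intensity is the pulse intensity scaled by the duty cycle.
avg_intensity_mw_cm2 = peak_intensity_mw_cm2 * duty_cycle

# Total acoustic energy delivered per square centimeter over the session, in millijoules.
energy_mj_cm2 = avg_intensity_mw_cm2 * treatment_time_s

print(f"duty cycle: {duty_cycle:.1%}")                                  # 10.0%
print(f"time-averaged intensity: {avg_intensity_mw_cm2:.1f} mW/cm^2")   # 20.0 mW/cm^2
print(f"energy per session: {energy_mj_cm2:.0f} mJ/cm^2")               # 3600 mJ/cm^2
```

With these assumed numbers, a 200-microsecond burst repeated 500 times per second is emitting only 10 percent of the time, so even a 200 mW/cm² pulse averages out to 20 mW/cm² over the 3-minute treatment. Changing any one of those knobs (burst length, pulse rate, intensity, or session length) changes the delivered dose, which is why settling on the best parameters is a key step on the path to clinical use.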

In addition, we need to determine how long the beneficial effect of the treatment lasts. For some of the ailments that researchers are exploring, like FUS of the brain to treat chronic pain, a patient might be able to go to the doctor’s office once every three months for a dose. But for diseases associated with inflammation, a regular, several-times-per-week regimen might prove most effective, which would require at-home treatments.

For home use to be possible, the wearable device would have to locate the targets automatically via ultrasound imaging. Because vast databases of human ultrasound images of the liver, spleen, and other organs already exist, it seems feasible to train a machine-learning algorithm to detect targets automatically and in real time. One day, an AI system might be able to guide at-home users as they place a wearable device on their body and trigger the stimulation. A few startups are working on building such wearable devices, which could take the form of a belt or a vest. For example, the company SecondWave Systems, which has partnered with the University of Minnesota, in Minneapolis, has already conducted a small pilot study of its wearable device, trying it out on 13 people with rheumatoid arthritis and seeing positive outcomes.
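To make the automatic-targeting idea concrete, here is a toy sketch in which simple intensity thresholding on a simulated image stands in for the trained segmentation network a real wearable would use; every name and number in it is hypothetical.

```python
import numpy as np

def locate_target(frame: np.ndarray, threshold: float = 0.7):
    """Return the (row, col) centroid of the bright region, or None if no target is in view.

    The thresholding here is a stand-in for a trained segmentation network; a real
    wearable would run a model trained on labeled spleen or liver ultrasound images.
    """
    mask = frame >= threshold
    if not mask.any():
        return None                      # target not in view; ask the wearer to reposition
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Simulated B-mode frame: background speckle plus one bright "organ" patch.
rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 0.5, size=(128, 128))
frame[60:80, 40:70] += 0.5               # hypothetical target region

centroid = locate_target(frame)
if centroid is None:
    print("Target not found; reposition the device.")
else:
    print(f"Target centroid at {centroid}; steer the FUS beam here and trigger the dose.")
```

A production system would replace the thresholding stand-in with a network trained on the kinds of labeled organ-image databases mentioned above and would track the target continuously as the wearer moves.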

While it will be many years before FUS treatments are approved for clinical use, and likely still more years for wearable devices to be proven safe enough for home use, the path forward looks very promising. We believe that FUS and other forms of bioelectronic medicine offer a new paradigm for human health, one in which we reduce our reliance on pharmaceuticals and begin to speak directly to the body electric.

This article appears in the July 2025 print issue as “Hacking the Nervous System With Ultrasound.”

Disaster Awaits if We Don’t Secure IoT Now

2 June 2025 at 16:00


In 2015, Ukraine experienced a string of unexpected power outages that left hundreds of thousands of people in the dark. A U.S. investigation concluded that the blackouts were caused by a Russian state cyberattack on Ukrainian computers running critical infrastructure.

In the decade that followed, cyberattacks on critical infrastructure and near misses continued. In 2017, a nuclear power plant in Kansas was the subject of a Russian cyberattack. In 2021, Chinese state actors reportedly gained access to parts of the New York City subway computer system. Later in 2021, a cyberattack temporarily closed down beef processing plants. In 2023, Microsoft reported a cyberattack on its IT systems, likely by Chinese-backed actors.

The risk is growing, particularly when it comes to Internet of Things (IoT) devices. Just below the veneer of popular fad gadgets (does anyone really want their refrigerator to automatically place orders for groceries?) is a growing army of more prosaic Internet-connected devices that keep our world running. This is particularly true of a subclass called the Industrial Internet of Things (IIoT): devices that run our communication networks or control infrastructure such as power grids and chemical plants. IIoT devices can be small components like valves or sensors, but they can also be very substantial pieces of gear, such as an HVAC system, an MRI machine, a dual-use aerial drone, an elevator, a nuclear centrifuge, or a jet engine.

The number of IoT devices in operation is growing rapidly. In 2019, there were an estimated 10 billion; by the end of 2024, that figure had almost doubled, to approximately 19 billion. It is set to more than double again by 2030. Cyberattacks aimed at those devices, motivated by political or financial gain, can cause very real physical-world damage to entire communities, far beyond the damage to the devices themselves.

Security for IoT devices is often an afterthought, because they tend to have little need for a “human interface” (a valve in a chemical plant may only need commands to Open, Close, and Report), and usually they don’t hold information that would be viewed as sensitive (a thermostat doesn’t store credit card numbers, and a medical device doesn’t have a Social Security number). What could go wrong?

Of course, “what could go wrong” depends on the device, but carefully planned, at-scale attacks have already shown that a lot can. For example, armies of poorly secured, Internet-connected security cameras have been put to use in coordinated distributed-denial-of-service attacks, in which each camera makes a few seemingly harmless requests of a victim service, and the combined load causes the service to collapse.

How to Secure IoT Devices

Measures to defend these devices generally fall into two categories: basic cybersecurity hygiene and defense in depth.

Cybersecurity hygiene consists of a few rules: Don’t use default passwords on admin accounts; apply software updates regularly to remove newly discovered vulnerabilities; require cryptographic signatures to validate those updates; and understand your software supply chain: where your software comes from, and where your supplier obtains the components it may simply be passing along from open-source projects.
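As a minimal sketch of the rule about validating updates with cryptographic signatures, the example below refuses to install any firmware image whose signature doesn’t check out. To stay self-contained it uses an HMAC with a device-provisioned key; real devices typically verify an asymmetric signature (for example, Ed25519 or RSA) so that no signing secret ever resides on the device.

```python
import hashlib
import hmac

# Hypothetical key provisioned into the device at manufacture. Illustration only:
# real devices verify an asymmetric signature so no signing secret lives on the device.
DEVICE_UPDATE_KEY = bytes.fromhex("5f" * 32)

def update_is_authentic(image: bytes, signature: bytes) -> bool:
    """Accept the update image only if its signature verifies."""
    expected = hmac.new(DEVICE_UPDATE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)   # constant-time comparison

def install_update(image: bytes, signature: bytes) -> None:
    if not update_is_authentic(image, signature):
        raise RuntimeError("Rejected update: signature check failed")
    # ... write the image to the inactive firmware slot and schedule a reboot ...
    print(f"Update accepted ({len(image)} bytes); writing to the inactive slot.")

# A properly signed image passes; a tampered one is refused.
image = b"\x00" * 1024
good_signature = hmac.new(DEVICE_UPDATE_KEY, image, hashlib.sha256).digest()
install_update(image, good_signature)
try:
    install_update(image + b"\xff", good_signature)
except RuntimeError as err:
    print(err)
```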

The rapid profusion of open-source software has prompted the U.S. government to promote the Software Bill of Materials (SBOM), a document that conveys supply-chain provenance by listing which versions of which packages went into the product’s software. Both IIoT device suppliers and device users benefit from accurate SBOMs, which shorten the path to determining whether a specific device’s software contains a version of a package that is vulnerable to attack. If the SBOM shows an up-to-date package version in which the vulnerability has been addressed, both the IIoT vendor and the user can breathe easy; if the package version listed in the SBOM is vulnerable, remediation may be in order.
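As a rough illustration of how an SBOM shortens that check, the sketch below scans a CycloneDX-style component list for package versions that appear on a vulnerability list. The advisory data, package names, and versions are invented for the example.

```python
import json

# Hypothetical advisory feed: package name -> set of known-vulnerable versions.
KNOWN_VULNERABLE = {
    "openssl": {"3.0.1", "3.0.2"},
    "busybox": {"1.33.0"},
}

# Minimal CycloneDX-style SBOM for an imaginary IIoT valve controller.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "3.0.2"},
    {"name": "busybox", "version": "1.36.1"},
    {"name": "mqtt-client", "version": "2.4.0"}
  ]
}
"""

def vulnerable_components(sbom: dict) -> list:
    """Return 'name version' strings for listed components with known-bad versions."""
    hits = []
    for component in sbom.get("components", []):
        bad_versions = KNOWN_VULNERABLE.get(component.get("name"), set())
        if component.get("version") in bad_versions:
            hits.append(f"{component['name']} {component['version']}")
    return hits

hits = vulnerable_components(json.loads(sbom_json))
if hits:
    print("Remediation may be in order for:", ", ".join(hits))
else:
    print("No listed components match known-vulnerable versions.")
```

In practice the advisory side of this check would be fed by vulnerability databases rather than a hard-coded dictionary, but the SBOM’s job is the same: it tells you exactly which package versions to look up.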

Defense in depth is less well-known, and deserves more attention.

It is tempting to implement the easiest approach to cybersecurity, a “hard and crunchy on the outside, soft and chewy inside” model. This emphasizes perimeter defense, on the theory that if hackers can’t get in, they can’t do damage. But even the smallest IoT devices may have a software stack that’s too complex for the designers to fully comprehend, usually leading to obscure vulnerabilities in dark corners of the code. As soon as these vulnerabilities become known, the device transitions from tight, well-managed security to no security, as there’s no second line of defense.

Defense in depth is the answer. A National Institute of Standards and Technology publication breaks down this approach to cyber-resilience into three basic functions: protect, meaning use cybersecurity engineering to keep hackers out; detect, meaning add mechanisms to detect unexpected intrusions; and remediate, meaning take action to expel intruders to prevent subsequent damage. We will explore each of these in turn.

Protect

Systems that are designed for security use a layered approach, with most of the device’s “normal behavior” in an outer layer, while the inner layers form a series of shells, each with smaller, more constrained functionality. That makes the inner shells progressively simpler to defend. These layers often correspond to the sequence of steps followed during the device’s initialization: The device starts in the innermost layer with the smallest possible functionality, just enough to get the next stage running, and so on until the outer layer is functional.

To ensure correct operation, each layer must also perform an integrity check on the next layer before starting it. In each ring, the current layer computes a fingerprint or signature of the next layer out.

[Diagram: Concentric circles labeled, from the center out, hardware root of trust (if present), firmware, operating system loader, operating system kernel, and application software. To make a defensible IoT device, the software needs to be layered, with each layer running only if the previous layer has deemed it safe. Guy Fedorkow, Mark Montgomery]
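Here is a minimal sketch of that chain of checks, assuming each layer ships with an expected SHA-256 digest of the next layer. In a real device the expected values would be cryptographically signed and the first check would be anchored in the Root of Trust described next; the image contents and names below are placeholders.

```python
import hashlib

# Hypothetical boot images, innermost layer first (on a real device these are flash regions).
layers = {
    "firmware": b"firmware image bytes",
    "os_loader": b"loader image bytes",
    "os_kernel": b"kernel image bytes",
    "application": b"application image bytes",
}

# Expected digests that the previous layer uses to check the next one before starting it.
# They are computed in place here for the demo; a real device stores signed values.
expected = {name: hashlib.sha256(image).hexdigest() for name, image in layers.items()}

def boot_chain(order):
    """Start each layer only if its measured digest matches the expected one."""
    for name in order:
        measured = hashlib.sha256(layers[name]).hexdigest()
        if measured != expected[name]:
            raise SystemExit(f"Integrity check failed at {name}; halting boot.")
        print(f"{name}: digest OK, starting layer.")

boot_chain(["firmware", "os_loader", "os_kernel", "application"])
```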

But there’s a puzzle here. Each layer checks the next one before starting it, but who checks the first one? No one! The innermost layer, whether that first checker is implemented in hardware or in firmware, must be implicitly trusted for the rest of the system to be worthy of trust. That’s why it’s called a Root of Trust (RoT).

Roots of Trust must be carefully protected, because a compromise of the Root of Trust may be impossible to detect without specialized test hardware. One approach is to put the firmware that implements the Root of Trust into read-only memory that can’t be modified once the device is manufactured. That’s great if you know your RoT code doesn’t have any bugs and uses algorithms that will never go obsolete. Few of us live in that world, so at a minimum we usually protect the RoT code with some simple hardware that leaves the firmware writable during its startup phase, allowing carefully vetted, cryptographically signed updates, and then locks it read-only once it has done its job.

Newer processor chips move this Root of Trust one step back, into the processor chip itself, creating a hardware Root of Trust. This makes the RoT much more resistant to firmware vulnerabilities and hardware-based attacks, because firmware boot code is usually stored in nonvolatile flash memory, where it can be reprogrammed by the system manufacturer (and also by hackers). An RoT inside the processor can be made much more difficult to hack.

Detect

With a reliable Root of Trust in place, we can arrange for each layer to check the next one for signs of tampering. This process can be augmented with remote attestation, in which the fingerprints (called attestation evidence) gathered by each layer during the startup process are collected and reported. We can’t just ask the outer application layer whether it’s been hacked; of course, any good hacker would ensure the answer is “No way! You can trust me!”, no matter what.

Remote attestation addresses this by adding a small piece of hardware, such as the Trusted Platform Module (TPM) defined by the Trusted Computing Group. The TPM collects evidence in shielded locations: special-purpose, hardware-isolated memory cells that can’t be directly changed by the processor at all. It provides a protected capability that lets new information be added to the shielded locations while ensuring that previously stored information cannot be changed, and another protected capability that attaches a cryptographic signature to the contents of the shielded locations, using a key known only to the Root of Trust hardware, called an Attestation Key (AK). That signature serves as evidence of the state of the machine.

Given these functions, the application layer has no choice but to accurately report the attestation evidence, as proven by the use of the RoT’s secret Attestation Key. Any attempt to tamper with the evidence would invalidate the signature provided by the AK. At a remote location, a verifier can then validate the signature and check that all the reported fingerprints line up with known, trusted versions of the device’s software. These known-good fingerprints, called endorsements, must come from a trusted source, such as the device manufacturer.

[Flow chart: The device manufacturer supplies information to both the attester and the verifier. To verify that it’s safe to turn on an IoT device, one can use an attestation and verification protocol provided by the Trusted Computing Group. Guy Fedorkow, Mark Montgomery]
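The sketch below mimics the core of that flow: measurements are folded into a hash chain the way a TPM extends a platform configuration register, the result is signed with an attestation key, and a verifier checks the signature and compares the value against known-good endorsements. It substitutes an HMAC for the TPM’s asymmetric attestation key to stay self-contained, so treat it as an outline of the structure rather than the real protocol.

```python
import hashlib
import hmac

# --- Device side ---------------------------------------------------------
AK_SECRET = b"attestation-key-held-by-the-RoT"   # stand-in for the TPM's attestation key

def extend(register: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new value = H(old value || H(measurement)).
    Earlier measurements can never be rewritten, only built upon."""
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

boot_images = [b"firmware image", b"loader image", b"kernel image", b"application image"]
pcr = bytes(32)                      # the register starts at all zeros
for image in boot_images:
    pcr = extend(pcr, image)         # each layer records what it is about to start

quote = hmac.new(AK_SECRET, pcr, hashlib.sha256).digest()   # signed attestation evidence

# --- Verifier side -------------------------------------------------------
# Endorsements: the verifier recomputes the expected value from reference images
# supplied by the device manufacturer.
golden = bytes(32)
for image in boot_images:
    golden = extend(golden, image)

signature_ok = hmac.compare_digest(hmac.new(AK_SECRET, pcr, hashlib.sha256).digest(), quote)
evidence_ok = hmac.compare_digest(pcr, golden)
print("Attestation passed" if signature_ok and evidence_ok else "Do not trust this device")
```

The essential properties are the ones described above: evidence can only be appended, never rewritten, and the signature ties the reported values to a key that never leaves the Root of Trust.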

In practice, the Root of Trust may contain several separate mechanisms that protect individual functions, such as boot integrity, attestation, and device identity. The device designer is always responsible for assembling the components most appropriate for the device and then carefully integrating them. But organizations like the Trusted Computing Group offer guidance and specifications that can help considerably, such as those for the TPM commonly used in many larger computer systems.

Remediate

Once an anomaly is detected, a wide range of remediation actions is possible. A simple option is power-cycling the device or refreshing its software. But trusted components inside the device itself may also help with remediation, through authenticated watchdog timers or other mechanisms that cause the device to reset itself if it can’t demonstrate good health. The Trusted Computing Group’s cyber-resilience specifications provide guidance for these techniques.
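As an illustration of the authenticated-watchdog idea, the sketch below defers a pending reset only when it receives a fresh ticket signed over the current nonce, which a verifier would issue only after the device passes attestation. The key handling, ticket format, and time window are all simplified assumptions.

```python
import hashlib
import hmac
import time

WATCHDOG_KEY = b"secret-shared-with-the-verifier"   # hypothetical key, simplified handling
DEFERRAL_WINDOW_S = 60 * 60                         # reset unless deferred every hour (assumed)

def issue_ticket(nonce: bytes) -> bytes:
    """The verifier signs the watchdog's nonce only after the device passes attestation."""
    return hmac.new(WATCHDOG_KEY, nonce, hashlib.sha256).digest()

class AuthenticatedWatchdog:
    """Trusted on-device component that forces a reset unless it keeps getting valid tickets."""

    def __init__(self) -> None:
        self.deadline = time.monotonic() + DEFERRAL_WINDOW_S
        self.nonce = hashlib.sha256(b"initial-nonce").digest()

    def feed(self, ticket: bytes) -> None:
        """Extend the deadline only for a ticket signed over the current nonce."""
        expected = hmac.new(WATCHDOG_KEY, self.nonce, hashlib.sha256).digest()
        if hmac.compare_digest(expected, ticket):
            self.deadline = time.monotonic() + DEFERRAL_WINDOW_S
            self.nonce = hashlib.sha256(self.nonce).digest()   # fresh nonce for the next round

    def expired(self) -> bool:
        """When True, the hardware would reset the device into known-good recovery firmware."""
        return time.monotonic() > self.deadline

wd = AuthenticatedWatchdog()
wd.feed(issue_ticket(wd.nonce))      # healthy, attested device: reset deferred
print("reset pending" if wd.expired() else "device healthy, reset deferred")
```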

The mechanisms outlined here have been available and used in specialized high-security applications for some years, and many of the attacks they block have been known for a decade. In the last few years, Root of Trust implementations have become widely used in some laptop families. But until recently, blocking Root of Trust attacks has been challenging and expensive even for cybersecurity experts in the IIoT space. Fortunately, many of the silicon vendors that supply the underlying IoT hardware now include these high-security mechanisms even in budget-minded embedded chips, and reliable software stacks have evolved to make Root of Trust defenses available to any designer who wants to use them.

While the IIoT device designer has the responsibility to provide these cybersecurity mechanisms, it’s up to system integrators, who are responsible for the security of the overall service interconnecting IoT devices, to require these features from their suppliers and to coordinate the features inside the device with external resilience and monitoring mechanisms. Only then can they take full advantage of the improved security that is now more readily available than ever.

Mind your roots of trust!

Exploring the Science and Technology of Spoken Language Processing

23 May 2025 at 14:00


This is a sponsored article brought to you by BESydney.

Bidding for and hosting an international conference requires strong leadership, team support, and expert planning. With over 50 years’ experience, Business Events Sydney (BESydney) offers academic leaders bidding advice, professional services, funding, and delegate promotion to help your committee deliver a world-class conference experience.

Associate Professor Michael Proctor from Macquarie University’s Department of Linguistics recently spoke about his experience of working on the successful bid to host the Interspeech 2026 Conference in Sydney, on behalf of the Australasian Speech Science and Technology Association (ASSTA).

Why Bid for a Global Event?

Interspeech is the world’s largest and most comprehensive conference on the science and technology of spoken language processing. The conference will feature expert speakers, tutorials, oral and poster sessions, challenges, exhibitions, and satellite events, and will draw around 1,200 participants from around the world to Sydney. Interspeech conferences emphasize interdisciplinary approaches addressing all aspects of speech science and technology.

Associate Professor Proctor is Director of Research in the Department of Linguistics at Macquarie University, where he leads the Phonetics Laboratories. Under the leadership of Professor Felicity Cox at Macquarie University, Associate Professor Proctor worked in partnership with Associate Professor Beena Ahmed and Associate Professor Vidhya Sethu at the University of NSW (UNSW) to prepare the bid on behalf of ASSTA.

Every breakthrough begins with a conversation. Become a Global Conference Leader and be the voice that starts it all. BESydney’s Global Conference Leaders share their voice and leadership vision to bid for and host a global conference that drives change and shapes the future of academic and industry sectors, with BESydney’s trusted advice, guidance, and support every step of the way. BESydney

“Organizing a major international conference is an important service to the scientific community,” says Associate Professor Proctor. A primary motivation for bringing Interspeech 2026 to Sydney was to highlight the rich multilingual landscape of Australasia and refocus the energies of speech researchers and industry on under-resourced languages and speech in all its diversity. These themes guided the bid development and resonated with the international speech science community.

“Australasia has a long tradition of excellence in speech research but has only hosted Interspeech once before in Brisbane in 2008. Since then, Australia has grown and diversified into one of the most multilingual countries in the world, with new language varieties emerging in our vibrant cities,” stated Associate Professor Proctor.

Navigating the Bid Process

Working with BESydney, the bid committee were able to align the goals and requirements of the conference with local strengths and perspectives, positioning Sydney as the right choice for the next rotation of the international conference. Organizing a successful bid campaign can offer broader perspectives on research disciplines and academic cultures by providing access to global networks and international societies that engage in different ways of working.

“Organizing a major international conference is an important service to the scientific community. It provides a forum to highlight our work, and a unique opportunity for local students and researchers to engage with the international community.” —Associate Professor Michael Proctor, Macquarie University

“Although I have previously been involved in the organization of smaller scientific meetings, this is the first time I have been part of a team bidding for a major international conference,” says Associate Professor Proctor.

He added that “Bidding for and organizing a global meeting is a wonderful opportunity to reconsider how we work and to learn from other perspectives and cultures. Hosting an international scientific conference provides a forum to highlight our work, and a unique opportunity for local students and researchers to engage with the international community in constructive service to our disciplines. It has been a wonderful opportunity to learn about the bidding process and to make a case for Sydney as the preferred destination for Interspeech.”

Showcasing Local Excellence

One of the primary opportunities associated with hosting your global meeting in Sydney is to showcase the strengths of your local research, industries and communities. The Interspeech bid team wanted to demonstrate the strength of speech research in Australasia and provide a platform for local researchers to engage with the international community. The chosen conference theme, “Diversity and Equity – Speaking Together,” highlights groundbreaking work on inclusivity and support for under-resourced languages and atypical speech.

Interspeech 2026 in Sydney will provide significant opportunities for Australasian researchers – especially students and early career researchers – to engage with a large, international association. This engagement is expected to catalyze more local activity in important growth areas such as machine learning and language modeling.

Interspeech 2026 will be an important milestone for ASSTA. After successfully hosting the International Congress of Phonetic Sciences (ICPhS) in Melbourne in 2019, this will be an opportunity to host another major international scientific meeting with a more technological focus, attracting an even wider range of researchers and reaching across a more diverse group of speech-related disciplines.

“It will also be an important forum to showcase work done by ASSTA members on indigenous language research and sociophonetics – two areas of particular interest and expertise in the Australasian speech research community,” says Associate Professor Proctor.

Looking Ahead

Interspeech 2026 will be held at the International Convention Centre (ICC) Sydney in October, with an estimated attendance of over 1,200 international delegates.

The larger bid team included colleagues from all major universities in Australia and New Zealand with active involvement in speech science, and they received invaluable insights and support from senior colleagues at the International Speech Communication Association (ISCA). This collaborative effort ensured the development of a compelling bid that addressed all the necessary aspects, from scientific content to logistical details.

As preparations for Interspeech 2026 continue, the Sydney 2026 team are focused on ensuring the conference is inclusive and representative of the diversity in speech and language research. They are planning initiatives to support work on lesser-studied languages and atypical speech and hearing, to make speech and language technologies more inclusive.

“In a time of increasing insularity and tribalism,” Associate Professor Proctor says, “we should embrace opportunities to bring people together from all over the world to focus on common interests and advancement of knowledge, and to turn our attention to global concerns and our shared humanity.”

For more information on how to become a Global Conference Leader, sign up here.

Maximizing Solar ROI with Smarter Balance-of-System Solutions

8 September 2025 at 17:55


This white paper addresses the challenge of rising balance-of-system (BOS) costs in solar energy projects, which now make up a larger share of total system expenses due to falling solar module prices. It provides valuable insights for engineers, developers, and EPCs on how to optimize BOS components for efficiency, reliability, and lower total cost of ownership. Readers will learn how to reduce labor, avoid costly installation errors, and improve long-term performance through better product selection, installation tools, mock-up testing (golden rows), and Panduit’s comprehensive BOS solutions that bundle, connect, protect, and identify system elements.

Download this free whitepaper now!
