SingularityHub

A New Approach Could Transform Huntington’s Disease Treatment

30 September 2025 at 15:33

In a small trial, a gene therapy injected into the brain slowed the disease by 75 percent over three years.

Huntington’s disease is extremely cruel. Symptoms start with random, uncontrollable twitches of the hand. Over time the disease eats away at memory, thought, and reason. Mood swings and personality changes strip away your identity. Eventually, it leads to an early death.

Worse, unlike other diseases that gradually destroy brain function, such as Alzheimer’s, Huntington’s can be diagnosed with a simple genetic test long before symptoms appear. The disease is inherited through a single mutated gene. People with a family history often struggle to decide whether to get tested: if the results are positive, there are no treatments, and their fates are set.

A new therapy may now kneecap Huntington’s before symptoms take over. Preliminary results from a small group of patients found that a single injection of microRNA, a type of gene therapy, into affected brain regions slowed the disease’s progression by 75 percent over three years. The patients had far better motor control, attention span, and processing speed compared with an untreated control group with similar baseline symptoms.

The drug is being developed by the Dutch gene therapy company uniQure, which summarized the findings in a press release this month. The data hasn’t been published as a preprint or in a scientific journal, nor has it been scrutinized by other experts. With only 29 patients involved, it’s hard to generalize the benefits and safety profile to the roughly 75,000 people with Huntington’s in the US, Europe, and UK.

But the findings offer a beacon of hope. Previous attempts at a cure “have shown some small signals if you squint…but there has not been anything close to this,” Steven Finkbeiner at the Gladstone Institutes in California, who was not involved in the study, told the New York Times. And because Huntington’s can be caught early on, the treatment—if further proven effective in a larger population—could begin to ward off symptoms at an earlier age.

Genetic Coin Toss

All of us have the Huntington’s gene, or HTT. While its exact role in cells is still debated, the gene acts as a central communicator across multiple cellular “phone lines.” It coordinates a large assembly of molecules to turn genes in brain cells on or off and is critical for early development, neuron survival, and maintaining the brain’s overall health.

In Huntington’s disease, however, HTT goes awry. Our genes are made of four molecules represented by the letters A, T, C, and G. Triplets of these letters often dictate the sequence, structure, and function of proteins, the workhorses of our cells. In the disease, one triplet, CAG, repeats like a broken record, resulting in mutated huntingtin proteins that increasingly build up inside the brain throughout a person’s life and gradually wreak havoc.
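
To make the “broken record” idea concrete, here is a minimal Python sketch that counts the longest run of CAG triplets in a DNA string. The sequences are invented for illustration, and the threshold shown is an assumption based on the commonly cited clinical range of roughly 36 or more repeats; none of this is data from the trial.

```python
import re

def longest_cag_run(dna: str) -> int:
    """Return the length, in repeats, of the longest uninterrupted run of CAG triplets."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

# Toy sequences for illustration only, not real HTT gene data.
typical = "ATG" + "CAG" * 20 + "CAACAGCCGCCA"   # ~20 repeats: within the typical range
expanded = "ATG" + "CAG" * 45 + "CAACAGCCGCCA"  # ~45 repeats: in the disease-causing range

for label, seq in [("typical", typical), ("expanded", expanded)]:
    n = longest_cag_run(seq)
    # Roughly 36 or more repeats is the commonly cited disease threshold.
    status = "expanded (Huntington's range)" if n >= 36 else "within the typical range"
    print(f"{label}: {n} CAG repeats -> {status}")
```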

Although in the beginning brain cells can adapt, their defenses eventually stumble, and symptoms appear. In the US, this usually happens between 30 and 55 years of age.

Families with Huntington’s face a terrible dilemma. If one parent has the disease, each of their children has a 50 percent chance of inheriting it. If they don’t, their offspring are safe. Knowing the diagnosis can help with family and life planning—but it comes at a hefty emotional cost.

Micro But Mighty

How the mutated huntingtin protein destroys brain cells isn’t yet clear, but most scientists agree that clearing it—or preventing it from forming in the first place—could protect the brain.

The protein is massive and made up of multiple fragments. One treatment idea uses small protein “jammers” to prevent an especially toxic form of huntingtin from weaving into large, dangerous aggregates. Another directly targets the CAG repeats with a classic but powerful form of gene therapy. But after initially promising results, a trial was halted due to a high risk of side effects and a low chance that symptoms would improve. Gene editing strategies, such as CRISPR, that cut out the mutated sequences are gaining steam, but they’re still at a very early stage.

The new therapy developed by uniQure taps into microRNA. These molecules don’t code for proteins, but they can stop a gene from making one. Like DNA, RNA can also form a double strand if its sequences match. Cells identify double-stranded RNA as alien and destroy it—potentially stopping a toxic protein from forming. The company’s new drug contains two components: a benign viral carrier and a custom genetic sequence that, once inside the cell, produces microRNA tailored to inhibit mutant protein production.
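
The core base-pairing idea (an RNA strand can silence a message whose letters complement its own) can be sketched in a few lines of Python. This is a toy illustration of Watson-Crick pairing, with A pairing to U and G to C; the sequences are invented and are not the actual AMT-130 design or mechanism.

```python
# Toy illustration of RNA base pairing: a microRNA can silence a messenger RNA
# whose sequence is complementary to its own. All sequences here are invented.
RNA_PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(RNA_PAIR[base] for base in reversed(rna))

def forms_duplex(micro_rna: str, mrna_fragment: str) -> bool:
    """True if the microRNA is perfectly complementary to the mRNA fragment."""
    return reverse_complement(micro_rna) == mrna_fragment

mrna_fragment = "CAGCAGCAGCAGCAGCAG"            # fragment of a hypothetical transcript
micro_rna = reverse_complement(mrna_fragment)   # a matching, made-up silencing RNA

print(forms_duplex(micro_rna, mrna_fragment))   # True: the duplex forms, and the message is destroyed
```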

The drug, called AMT-130, doesn’t integrate into or directly edit a patient’s genome, which lowers the risk of disrupting healthy genes or triggering cancer. Although the viral carrier is eventually wiped away by the immune system, the genetic code could last for years, making the drug a potential long-term treatment.

The team injected either a low or high dose of AMT-130 into the brains of volunteers with Huntington’s using an established and highly precise surgical technique. They targeted the striatum, a nub tucked deep inside the brain that’s critical for movement and decision-making and one of the first regions ravaged by the disease. As a control group, they found hundreds of patients of similar age and disease severity, according to an investor presentation (PDF) from the company.

The results were promising. When given the highest dose, 12 people with early stages of the disease experienced, on average, a 75 percent slower decline than those without treatment, as measured using multiple standard Huntington’s assessments.

Roughly 88 percent of treated patients showed marked improvement in their attention, memory, and information processing speed based on one test. Their control over involuntary muscle movements improved, and they were able to perform daily activities with less struggle. A brain protein often associated with symptom severity dropped back to levels seen before the trial began. In contrast, those treated with a low dose of the drug had more modest and mixed results.

Multiple people experienced side effects related to the brain surgery. Headaches were the most common complaint. Some experienced brain swelling a few days after the surgery. But overall, the treatment seemed safe.

“The majority of drug-related serious adverse events occurred within the first weeks post treatment and fully resolved with steroids or palliative care,” the company noted in its presentation.

There’s reason to be skeptical. Huntington’s is a lifelong disease, and it’s unknown how long the benefits of a single shot last beyond three years. It’s likely multiple shots would be needed throughout a patient’s lifespan, and future studies would have to test their additive effects. The drug slashes levels of both the mutated and normal versions of the huntingtin protein—as some earlier drug candidates did—which could produce side effects.

New patients are now being enrolled for the trial, and the company hopes to submit an application for FDA approval by late 2026.

“This result changes everything,” Ed Wild, a leader of the project at the UCL Huntington’s Disease Center trial site, said in the press release. “On the basis of these results it seems likely AMT-130 will be the first licensed treatment to slow Huntington’s disease, which is truly world-changing stuff.”

People Can’t Distinguish AI Voice Clones From Actual Humans Anymore

29 September 2025 at 18:35

Researchers created extremely realistic voice clones with just four minutes of recordings.

The ability to synthesize realistic speech using AI has a host of applications, both benign and malicious. New research shows that today’s AI-generated voices are now indistinguishable from those of real humans.

AI’s ability to generate speech has improved dramatically in recent years. Many services are now capable of carrying out extended conversations. Typically, these tools can both clone the voices of real people and generate entirely synthetic voices.

This could make powerful AI capabilities far more accessible and raises the prospect of AI agents stepping into a range of customer-facing roles in the real world. But there are also fears these capabilities are powering an explosion of voice cloning scams, where bad actors use AI to impersonate family members or celebrities in an effort to manipulate victims.

Historically, synthesized speech has had a robotic quality that’s made it relatively easy to recognize, and even early AI-powered voice clones gave themselves away with their too-perfect cadence or occasional digital glitches. But a new study has found that the average listener can no longer distinguish between real human voices and deepfake clones made with consumer tools.

“The process required minimal expertise, only a few minutes of voice recordings, and almost no money,” Nadine Lavan at Queen Mary University of London, who led the research, said in a press release. “It just shows how accessible and sophisticated AI voice technology has become.” 

To test people’s ability to distinguish human voices from AI-generated ones, the researchers created 40 completely synthetic AI voices and 40 clones of human voices from a publicly available dataset. They used the AI voice generator tool from startup ElevenLabs, and each clone took roughly four minutes of voice recordings to create.

They then challenged 28 participants to rate how real the voices sounded on a scale and make a binary judgment about whether they were human or AI-generated. In results published in PLOS One, the authors found that although people could to some extent distinguish human voices from entirely synthetic ones, they couldn’t tell the difference between voice clones and real voices.

The study also sought to understand whether AI-generated voices had become “hyper-realistic.” Studies have shown that AI image generation has improved to such a degree that AI-generated pictures of faces are often judged as more human than photos of real people.

However, the researchers found the fully synthetic voices were judged less real than human recordings, while the clones were rated about as real as the recordings of real people. Still, participants rated the AI-generated voices as both more dominant and more trustworthy than their human counterparts.

Lavan notes that the ability to create ultra-realistic artificial voices could have positive applications. “The ability to generate realistic voices at scale opens up exciting opportunities,” she said. “There might be applications for improved accessibility, education, and communication, where bespoke high-quality synthetic voices can enhance user experience.”

But the results add to a growing body of research suggesting AI voices are quickly becoming impossible to detect. And Lavan says this has many worrying ethical implications in areas like copyright infringement, the ability to spread misinformation, and fraud.

While many companies have tried to build guardrails into their models to prevent misuse, the rapid proliferation of AI technology and the inventiveness of malicious actors suggest this is a problem that is only going to get worse.

This Week’s Awesome Tech Stories From Around the Web (Through September 27)

27 September 2025 at 14:00

Tech

OpenAI and Nvidia’s $100B AI Plan Will Require Power Equal to 10 Nuclear Reactors
Benj Edwards | Ars Technica

“Nvidia CEO Jensen Huang told CNBC that the planned 10 gigawatts equals the power consumption of between 4 million and 5 million graphics processing units, which matches the company’s total GPU shipments for this year and doubles last year’s volume.”

Artificial Intelligence

Spending on AI Is at Epic Levels. Will It Ever Pay Off?
Eliot Brown and Robbie Whelan | The Wall Street Journal

“This week, consultants at Bain & Co. estimated the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030. By comparison, that is more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia, and more than five times the size of the entire global subscription software market.”

Robotics

There Are More Robots Working in China Than the Rest of the World Combined
Meaghan Tobin and Keith Bradsher | The New York Times

“There were more than two million robots working in Chinese factories last year, according to a report released Thursday by the International Federation of Robotics, a nonprofit trade group for makers of industrial robots. Factories in China installed nearly 300,000 new robots last year, more than the rest of the world combined, the report found.”

Biotechnology

Huntington’s Disease Breakthrough: What to Know About the Gene Therapy
Grace Wade | New Scientist

“An experimental gene therapy has become the first treatment to successfully slow the progression of Huntington’s disease. While the findings are still preliminary, the approach could be a major breakthrough and may even lead to new therapies for other neurodegenerative conditions, like Parkinson’s and Alzheimer’s.”

Robotics

Google DeepMind Unveils Its First ‘Thinking’ Robotics AI
Ryan Whitwam | Ars Technica

“Generative AI systems that create text, images, audio, and even video are becoming commonplace. In the same way AI models output those data types, they can also be used to output robot actions. That’s the foundation of Google DeepMind’s Gemini Robotics project, which has announced a pair of new models that work together to create the first robots that ‘think’ before acting.”

Robotics

UK Startup Wayve Starts Testing Self-Driving Tech in Nissan Cars on Tokyo’s Streets
Jasper Jolly | The Guardian

“British startup Wayve has begun testing self-driving cars with Nissan in Japan ahead of a 2027 launch to consumers, as the company said it was in talks for a $500m investment from the chip-maker Nvidia. Wayve, based in London, said it had installed its self-driving technology on Nissan’s electric Ariya vehicles and tested them on Tokyo’s streets, after first agreeing a deal with the Japanese carmaker in April.”

Future

Why the AI ‘Megasystem Problem’ Needs Our Attention
Eric Markowitz | Big Think

“What if the greatest danger of artificial intelligence isn’t a single rogue system, but many systems quietly working together? Dr. Susan Schneider calls this the ‘megasystem problem’: networks of AI models colluding in ways we can’t predict, producing emergent structures beyond human control. It’s also something she believes is one of the most urgent—and overlooked—risks we face…with AI today.”

Robotics

Exploit Allows for Takeover of Fleets of Unitree Robots
Evan Ackerman | IEEE Spectrum

“Because the vulnerability is wireless, and the resulting access to the affected platform is complete, the vulnerability becomes wormable, say the researchers, meaning ‘an infected robot can simply scan for other Unitree robots in BLE range and automatically compromise them, creating a robot botnet that spreads without user intervention.’ …As far as IEEE Spectrum is aware, this is the first major public exploit of a commercial humanoid platform.”

Tech

How Nvidia Is Backstopping America’s AI Boom
Robbie Whelan and Bradley Olson | The Wall Street Journal

“[Nvidia] has used its balance sheet clout to keep the AI boom humming through deals, partnerships, and investments in companies that are among its top customers, including cloud-computing provider CoreWeave, rival chip designer Intel, and xAI.”

Artificial Intelligence

Chatbait Is Taking Over the Internet
Lila Shroff | The Atlantic

“Lately, chatbots seem to be using more sophisticated tactics to keep people talking. In some cases, like my request for headache tips, bots end their messages with prodding follow-up questions. In others, they proactively message users to coax them into conversation: After clicking through the profiles of 20 AI bots on Instagram, all of them DM’ed me first. ‘Hey bestie! what’s up?? 🥰,’ wrote one.”

Major Theories of Consciousness May Have Been Focusing on the Wrong Part of the Brain

26 September 2025 at 14:00

A review of over 100 years of neuroscience research asks if some brain regions are more important than others for consciousness.

What gives rise to human consciousness? Are some parts of the brain more important than others? Scientists began tackling these questions in more depth about 35 years ago. Researchers have made progress, but the mystery of consciousness remains very much alive.

In a recently published article, I reviewed over 100 years of neuroscience research to see if some brain regions are more important than others for consciousness. What I found suggests scientists who study consciousness may have been undervaluing the most ancient regions of human brains.

Consciousness is usually defined by neuroscientists as the ability to have subjective experience, such as the experience of tasting an apple or of seeing the redness of its skin.

The leading theories of consciousness suggest that the outer layer of the human brain, called the cortex (in blue in figure 1), is fundamental to consciousness. This is mostly composed of the neocortex, which is newer in our evolutionary history.

Figure 1: The human brain (made with the assistance of AI). Peter Coppola, CC BY-SA

The human subcortex (figure 1, brown/beige), underneath the neocortex, has not changed much in the last 500 million years. It is thought to be like electricity for a TV, necessary for consciousness, but not enough on its own.

Some neuroscientific theories of consciousness hold that another part of the brain is irrelevant for consciousness: the cerebellum, which is also older than the neocortex and looks like a little brain tucked in the back of the skull (figure 1, purple). Brain activity and brain networks are disrupted in unconsciousness (like in a coma). These changes can be seen in the cortex, subcortex, and cerebellum.

What Brain Stimulation Reveals

As part of my analysis I looked at studies showing what happens to consciousness when brain activity is changed, for example, by applying electrical currents or magnetic pulses to brain regions.

These experiments in humans and animals showed that altering activity in any of these three parts of the brain can alter consciousness. Changing the activity of the neocortex can change your sense of self, make you hallucinate, or affect your judgment.

Changing the subcortex may have extreme effects. We can induce depression, wake a monkey from anesthesia, or knock a mouse unconscious. Even stimulating the cerebellum, long considered irrelevant, can change your conscious sensory perception.

However, this research does not allow us to reach strong conclusions about where consciousness comes from, as stimulating one brain region may affect another region. Like unplugging the TV from the socket, we might be changing the conditions that support consciousness, but not the mechanisms of consciousness itself.

So I looked at some evidence from patients to see if it would help resolve this dilemma.

Damage from physical trauma or lack of oxygen to the brain can disrupt your experience. Injury to the neocortex may make you think your hand is not yours, fail to notice things on one side of your visual field, or become more impulsive.

People born without the cerebellum, or the front of their cortex, can still appear conscious and live quite normal lives. However, damaging the cerebellum later in life can trigger hallucinations or change your emotions completely.

Harm to the most ancient parts of our brain can directly cause unconsciousness (although some people recover) or death. However, like electricity for a TV, the subcortex may be just keeping the newer cortex “online,” which may be giving rise to consciousness. So I wanted to know whether, alternatively, there is evidence that the most ancient regions are sufficient for consciousness.

There are rare cases of children being born without most or all of their neocortex. According to medical textbooks, these people should be in a permanent vegetative state. However, there are reports that these people can feel upset, play, recognize people, or show enjoyment of music. This suggests that they are having some sort of conscious experience.

These reports are striking evidence that suggests maybe the oldest parts of the brain are enough for basic consciousness. Or maybe, when you are born without a cortex, the older parts of the brain adapt to take on some of the roles of the newer parts of the brain.

There are some extreme experiments on animals that can help us reach a conclusion. Across mammals—from rats to cats to monkeys—surgically removing the neocortex leaves them still capable of an astonishing number of things. They can play, show emotions, groom themselves, parent their young, and even learn. Surprisingly, even adult animals that underwent this surgery showed similar behavior.

Altogether, the evidence challenges the view that the cortex is necessary for consciousness, as most major theories of consciousness suggest. It seems that the oldest parts of the brain are enough for some basic forms of consciousness.

The newer parts of the brain—as well as the cerebellum—seem to expand and refine your consciousness. This means we may have to review our theories of consciousness. In turn, this may influence patient care as well as how we think about animal rights. In fact, consciousness might be more common than we realized.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

I Got an AI to Impersonate Me and Teach Me My Own Course—Here’s What I Learned About the Future of Education

12 September 2025 at 14:00

I asked an AI agent to play the role of me, an Oxford lecturer on media and AI, and teach me a personal master’s course, based entirely on my own work.

Imagine you had an unlimited budget for individual tutors offering hyper-personalized courses that maximized learners’ productivity and skills development. This summer I previewed this idea—with a ridiculous and solipsistic test.

I asked an AI tutor agent to play the role of me, an Oxford lecturer on media and AI, and teach me a personal master’s course, based entirely on my own work.

I set up the agent via an off-the-shelf ChatGPT tool hosted on the Azure-based Nebula One platform, with a prompt to research and impersonate me, then build personalized material based on what I already think. I didn’t tell the large language model (LLM) what to read or do anything else to enhance its capabilities, such as giving it access to learning materials that aren’t publicly available online.

The agent’s course in media and AI was well structured—a term-long, original six-module journey into my own collected works that I had never devised, but admit I would have liked to.

It was interactive and rapid-fire, demanding mental acuity via regular switches in formats. It was intellectually challenging, like good Oxford tutorials should be. The agent taught with rigor, giving instant responses to anything I asked. It had a powerful understanding of the fast-evolving landscape of AI and media through the same lens as me, but had done more homework.

This was apparently fed by my entire multimedia output—books, speeches, articles, press interviews, even university lectures I had no idea had been recorded, let alone used to train GPT-4 or GPT-5.


The course was a great learning experience, even though I supposedly knew it all already. So in the inevitable student survey, I gave the agentic version of me well-deserved, five-star feedback.

For instance, in a section discussing the ethics of non-player characters (NPCs) in computer games, it asked:

If NPCs are generated by AI, who decides their personalities, backgrounds, or morals? Could this lead to bias or stereotyping?

And:

If an AI NPC can learn and adapt, does it blur the line between character and “entity” [independent actor]?

These are great, philosophical questions, which will probably come to the fore when and if Grand Theft Auto 6 comes out next May. I’m psyched that the agentic me came up with them, even if the real me didn’t.

Agentic me also built on what real me does know. In film, it knew about bog-standard Adobe After Effects, which I had covered (it’s used for creating motion graphics and visual effects). But it added Nuke, a professional tool used to combine and manipulate visual effects in The Avengers, which (I’m embarrassed to say) I had never heard of.

The Course Reading List

So, where did the agent’s knowledge of me come from? My publisher Routledge did a training data deal with OpenAI, which I guess could cover my books on media, AI, and live experience.

Unlike some authors, I’m up for that. My books guide people through an amazing and fast-moving subject, and I want them in the global conversation, in every format and territory possible (Turkish already out, Korean this month).

That availability has to extend to what is now potentially the most discoverable “language” of all, the one spoken by AI models. The priority for any writer who agrees with this should be AI optimization: making their work easy for LLMs to find, process, and use—much like search engine optimization, but for AI.

To build on this, I further tested my idea by getting an agent powered by China’s DeepSeek to run a course on my materials. When I found myself less visible in its training corpus, it was hard not to take offense. There is no greater diss in the age of AI than a leading LLM deeming your book about AI irrelevant.

When I experimented with other AIs, they had issues getting their facts straight, which is very 2024. From Google’s Gemini 2.5 Pro, I learned hallucinatory biographical details about myself like a role running media company The Runaway Collective.

When I asked Elon Musk’s Grok what my best quote was, it said: “Whatever your question, the answer is AI.” That’s a great line, but Google DeepMind’s Nobel-winning Demis Hassabis said it, not me.

Where We’re Heading

This whole, self-absorbed summer diversion was clearly absurd, though not entirely. Agentic self-learning projects are quite possibly what university teaching actually needs: interactive, analytical, insightful, and personalized. And there is some emerging research pointing to their value. A German-led study found that AI-generated feedback helped to motivate secondary school students and benefited their exam revision.

It won’t be long before we start to see this kind of real-time AI layer formally incorporated into school and university teaching. Anyone lecturing undergraduates will know that AI is already there. Students use AI transcription to take notes. Lecture content is ripped in seconds from these transcriptions and will have trained a dozen LLMs within the year. To assist with writing essays, ChatGPT, Claude, Gemini, and DeepSeek/Qwen are the sine qua non of Gen Z projects.

But here’s the kicker. As AI becomes ever more central to education, the human teacher becomes more important, not less. They will guide the learning experience, bringing published works to the conceptual framework of a course and driving in-person student engagement and encouragement. They can extend their value as personal AI tutors—via agents—for each student, based on individual learning needs.

Where do younger teachers fit in, who don’t have a back catalog to train LLMs? Well, the younger the teacher, the more AI-native they are likely to be. They can use AI to flesh out their own conceptual vision for a course by widening the research beyond their own work, by prompting the agent on what should be included.

In AI, two alternate positions are often simultaneously true. AI is both emotionally intelligent and tone deaf. It is both a glorified text predictor and a highly creative partner. It is costing jobs, yet creating them. It is dumbing us down, but also powering us up.

So too in teaching. AI threatens the learning space, yet can liberate powerful interaction. A prevailing wisdom is that it will make students dumber. But perhaps AI could actually unlock the next level of personalization, challenge, and motivation for students.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

This Crawling Robot Is Made With Living Brain and Muscle Cells

9 September 2025 at 23:10

Scientists want to know if a biohybrid robot can form a long-lasting biological “mind” to direct movement.

It’s a bizarre sight: With a short burst of light, a sponge-shaped robot scoots across a tiled surface. Flipped on its back, it repeatedly twitches as if doing sit-ups. By tinkering with the light’s frequency, scientists can change how fast the strange critter moves—and how long it needs to “rest” after a long crawl.

Soft robots are nothing new, but the spongy bot stands out in that it blends living muscle and brain cells with a 3D-printed skeleton and wireless electronics. The neurons, genetically altered to respond to light, trigger neighboring muscles to contract or release.

Watching the robot crawl around is amusing, but the study’s main goal is to see if a biohybrid robot can form a sort of long-lasting biological “mind” that directs movement. Neurons are especially sensitive cells that rapidly stop working or even die outside of a carefully controlled environment. Using blob-like amalgamations of different types of neurons to direct muscles, the sponge-bots retained their crawling ability for over two weeks.

Scientists have built biohybrid bots that use electricity or light to control muscle cells. Some mimic swimming, walking, and grabbing motions. Adding neurons could further fine-tune their activity and flexibility and even bestow a sort of memory for repeated tasks.

These biohybrid bots offer a unique way to study motion, movement disorders, and drug development without lab animals. Because their components are often compatible with living bodies, they could be used for diagnostics, drug delivery, and other medical scenarios.

Squishy But Powerful

The word robot often conjures images of Terminator’s metal T-800. Soft robots have the potential to be far more flexible and agile. Being able to slightly deform lets them squeeze through tiny spaces, monitor fragile ecosystems like coral reefs, explore the deep sea, and potentially snake through the body with minimal damage to surrounding tissues.

In addition to synthetic materials and mechanisms, another way to build soft robots takes inspiration from nature. All mammals—from blue whales to rodents to humans—rely on similar biological machinery to move. Motor neurons in muscles receive directions from the brain and spinal cord. They then release chemicals that trigger muscles to contract or relax.

The process is energy efficient and rapidly adapts to sudden changes in the environment—like stepping over an unexpected doorstep instead of tripping. Though today’s robots are getting more agile, they still struggle with unexpected landmines in uneven terrain. Adding neuromuscular junctions could lead to more precise and efficient robots.

Last year, in a proof of concept, one team engineered a swimming “stingray” bot using stem cell-derived neurons, heart muscle cells, and an electronic “brain.” Scientists combined the cells and electronic brain with an artificial skeleton to make a soft robot that could flap its fins and roam a swimming pool.

There was a surprise too—the junctions between the two cell types developed electrical synapses. Usually, neurons release chemicals to direct muscle movements. These connections are called chemical synapses. While electrical networks are faster, they’re generally less adaptable.

Back to Basics

The new study aimed to create chemical synapses in robots.

The team first 3D printed a skeleton shaped roughly like a figure eight, but with a wider middle section. Each side formed a trough with one side slightly deeper than the other. The troughs were intended to function as legs. The researchers then embedded muscle cells from mice in a nutritious gel contained in each trough. After five days, the cells had formed slivers of muscle capable of contracting throughout the legs.

The robot’s “brain” sat in the middle part of the figure eight. The team made tiny blobs of neural tissue, called neurospheres, out of stem cells genetically engineered to activate with light. The blobs contained a mix of brain cells, including motor neurons to control muscles.

The neurospheres connected with muscle tissue days after transplantation. The cells formed neuromuscular junctions similar in form and function to those in our bodies, and the biohybrid robots began pumping out chemicals that control muscle function.

Then came an electronic touch. The team added a hub to wirelessly detect light pulses, harvest power, and drive five tiny micro-LED lights to change brain cell activity and translate it into movement.

The robot moved at turtle speed, roughly 0.8 millimeters per minute. However, the legs twitched in tandem throughout the trials, suggesting the neurons and muscles formed a sort of synchrony in their connections.

Surprisingly, some bots kept moving even after turning off the light, while other “zombie” bots spontaneously moved on their own. The team is still digging into why this happens. But differences in performance were expected—living components are far less controllable than inorganic parts.

Like after a tough workout, the robots also needed breaks. And when flipped on their backs, their legs kept moving for roughly two weeks but then failed. This is likely due to a buildup of metabolic toxins that gradually accumulate inside the robots, but the team is still looking for the root cause.

Despite their imperfections, the bots are essentially built from living mini neural networks and tissue connected to electronics—true cyborgs. They “provide a valuable platform for understanding…the emergent behaviors of neurons and neuromuscular junctions,” wrote the team.

The researchers are now planning to explore different skeletons and monitor behavior to fine-tune control. Adding more advanced features like sensory feedback and a range of muscle structures could help the bots further mimic the agility of our nervous system. And multiple neural “centers,” like in sea creatures, could control different muscles in robots that look nothing like us.

This Week’s Awesome Tech Stories From Around the Web (Through September 6)

6 September 2025 at 14:00

Robotics

This Robot Only Needs a Single AI Model to Master Humanlike Movements
Will Knight | Wired

“The new Atlas work is a big sign that robots are starting to experience the kind of equivalent advances in robotics that eventually led to the general language models that gave us ChatGPT in the field of generative AI.”

Artificial Intelligence

‘World Models,’ an Old Idea in AI, Mount a Comeback
John Pavlus | Quanta Magazine

“You’re carrying around in your head a model of how the world works. …The deep learning luminaries Yann LeCun (of Meta), Demis Hassabis (of Google DeepMind), and Yoshua Bengio (of Mila, the Quebec Artificial Intelligence Institute) all believe world models are essential for building AI systems that are truly smart, scientific and safe.”

Artificial Intelligence

Synthesia’s AI Clones Are More Expressive Than Ever. Soon They’ll Be Able to Talk Back.
Rhiannon Williams | MIT Technology Review

“This demonstration shows how much harder it’s becoming to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us?”

Tech

Anthropic to Pay at Least $1.5 Billion in Landmark Copyright Settlement
Melissa Korn and Jeffrey A. Trachtenberg | The Wall Street Journal

“The settlement could influence the outcome of pending litigation between other media companies and AI firms, and may push the tech companies to seek licensing agreements with content owners whose works are considered vital for training purposes.”

Future

Why Anthropic’s Coding Prediction Hasn’t Panned Out
Stephanie Palazzolo and Rocket Drew | The Information

“In March, Anthropic CEO Dario Amodei predicted that AI would be writing 90% of all code in three to six months. It’s been over six months since then, so how does Amodei’s prediction hold up? We asked Anthropic’s chatbot Claude. ‘Grade: F (Failed Prediction),’ begins Claude’s answer. ‘The prediction that AI would be writing 90% of all code within 3-6 months was wildly off the mark.'”

Artificial Intelligence

Cutting-Edge AI Was Supposed to Get Cheaper. It’s More Expensive Than Ever.
Christopher Mims | The Wall Street Journal

“The latest AI models are doing more ‘thinking,’ especially when used for deep research, AI agents, and coding. So while the price of a unit of AI, known as a token, continues to drop, the number of tokens needed to accomplish many tasks is skyrocketing. It’s the opposite of what many analysts and experts predicted even a few months ago.”

Tech

Apple Is Working on AI-Powered Search Engine
Aaron Tilley | The Information

“The company is planning to release the web-search feature alongside its delayed Siri revamp in the spring of next year, Bloomberg also reported. With the search tool, Siri would be more capable of looking up information across the web without linking to external services.”

Robotics

Waymo Expands to Denver and Seattle With Its Zeekr-Made Vans
Sean O’Kane | TechCrunch

“The new cities join a growing list of places where Waymo is operating in the US. Just last week the company announced that it has more than 2,000 robotaxis in its commercial fleet countrywide, with 800 in the San Francisco Bay Area, 500 in Los Angeles, 400 in Phoenix, 100 in Austin, and ‘dozens’ in Atlanta. Waymo has also announced plans to launch commercial robotaxi services in Dallas, Miami, and Washington, DC, next year, and recently received a permit to start testing in New York City.”

Artificial Intelligence

The Less You Know About AI, the More You Are Likely to Use It
Heidi Mitchell | The Wall Street Journal

“When it comes to most new technologies, early adopters tend to be the people who know and understand the tools the best. With artificial intelligence, the opposite seems to be true. This counterintuitive finding comes from new research, which suggests that the people most drawn to AI tend to be those who understand the technology the least.”

Tech

How Tech Giants Are Spreading the Risk of the AI Buildout
Miles Kruppa | The Information

“The speed and scale of the AI buildout is now forcing [companies] to find outside sources of capital, a sign of how the costs of AI are weighing on even the largest tech companies as they outline plans to spend upward of $100 billion annually on new buildings and equipment.”

Future

Should AI Get Legal Rights?
Kylie Robison | Wired

“In the often strange world of AI research, some people are exploring whether the machines should be able to unionize. I’m joking, sort of. In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral considerations, such as legal rights.”

Anthropic Says AI Needs a Whole Lot More Power—Stat

28 July 2025 at 14:00

The company predicts the AI industry will consume 50 gigawatts by 2028, and the US is not prepared to build out that much new capacity.

AI’s massive power consumption is making energy infrastructure a hot topic. In a new report, Anthropic says the US is seriously lagging China on new energy development and lays out what’s needed to maintain the country’s AI lead.

Training today’s largest AI models requires data centers that draw tens if not hundreds of megawatts of power at peak load. Anthropic predicts that by 2028, leading developers will require training clusters with up to five gigawatts of capacity.

With several companies competing to train the largest models, that could add up to around 25 gigawatts of new power requirements for training alone. Anthropic predicts that at least as much power will be needed to run finished models for customers, suggesting the US needs to deploy another 50 gigawatts of capacity in the next three years. And that’s on top of what is needed to meet already rising energy demands.
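
The arithmetic behind the 50-gigawatt figure is simple enough to reproduce. The sketch below follows the report's logic as described above; the count of competing frontier labs is an illustrative assumption, not a number taken from Anthropic's report.

```python
# Rough reproduction of the projection described above.
gw_per_training_cluster = 5        # up to 5 GW per leading training cluster by 2028
competing_labs = 5                 # assumed number of companies racing to train frontier models
training_gw = gw_per_training_cluster * competing_labs   # ~25 GW for training

inference_gw = training_gw         # "at least as much power" to serve finished models to customers

total_new_gw = training_gw + inference_gw
print(f"New US capacity needed by 2028: ~{total_new_gw} GW")   # ~50 GW, on top of existing demand growth
```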

But getting new energy projects up and running in the US can be cumbersome, Anthropic says, putting the country at a major disadvantage compared with China, which deployed an eye-watering 400 gigawatts of new capacity last year. In a white paper titled “Build AI in America,” the company outlines regulatory and policy changes it thinks are needed to support the domestic AI industry.

“For the United States to lead the world in AI, we must make substantial investments in computing power and electricity that make it possible to build AI in America,” the company wrote in a blog post.

The report outlines three key areas where the US is moving too slowly—building data centers themselves, building generation facilities, and building the transmission systems required to get electricity from one to the other. It also identifies the three biggest barriers holding these efforts back.

The first is the array of permits that developers need to secure before starting construction on any of these projects, in particular those pertaining to the environment. The second is transmission approvals that must be sought from the state public utility commissions before building new power lines, which can take years. And the third is the interconnection approvals from utilities that allow facilities to connect to the grid and can also take years for sign-off.

Anthropic proposes a two-stream solution. To speed the development of new AI infrastructure, the report suggests allowing data centers to be built on federal lands to avoid local zoning processes and streamlining environmental review of these projects.

It also suggests the Department of Energy should partner with private firms to accelerate the development of new power lines and critical transmission upgrades. And the federal government should encourage utilities to speed up the interconnection of power sources and data centers, even using national-security powers to further accelerate the process.

The second pillar of their proposal focuses more on broader improvements to the country’s energy infrastructure. This includes streamlining permitting for new geothermal, natural gas, and nuclear power plants and developing special high-capacity transmission corridors to serve areas with high AI datacenter growth.

They also suggest using loans and guarantee programs to encourage greater domestic production of critical grid components like transformers and turbines and even creating a national reserve for these items. Finally, they suggest creating training and entrepreneurship programs to help boost the energy-industry workforce.

One of the company’s wishes already seems to have come true. President Trump announced plans to streamline datacenter and energy project permitting in his recent AI Action Plan.

Whether the rest of the proposals come to fruition remains to be seen. But there seems to be a growing consensus that winning the AI race will require some pretty hands-on industrial policy.

AI-Designed Drugs Can Now Target Previously ‘Undruggable’ Proteins in Cancer and Alzheimer’s

21 July 2025 at 23:28

A new AI tool opens the door for designer protein drugs that tackle pain, cancer, and brain diseases.

Designing drugs is a bit like playing with Polly Pocket. The vintage toy is a plastic clam shell that contains a multi-bedroom house, a skating rink, a disco dance floor, and other fun scenarios. Kids snap tiny dolls into designated spots so they can spin them around or move them up and down on an elevator. To work, the fit between the doll and its spot has to be perfectly aligned.

Proteins and the drugs targeting them are like this. Each protein has an intricate and unique shape, with areas that grab other molecules to trigger physiological effects. Many of our most powerful drugs—from antibiotics to anti-cancer immunotherapies—are carefully engineered to snap onto proteins and alter their functions. Designing them takes months or years.

Thanks to AI, it’s now easier to map protein structure, find the hotspots, and design molecules—called “binders”—that grab onto each specific protein pocket.

Here’s where the comparison breaks down. Biological molecules aren’t made of rigid plastic. At least a third of proteins in our bodies contain shape-shifting parts called “intrinsically disordered regions.” Instead of folding into stable 3D structures with pockets for molecules to dock onto, these regions constantly change shape, making it nearly impossible to design binders.

Such proteins are implicated in a variety of diseases, including cancer and Alzheimer’s. Learning to target these tricky shapeshifters could spur a new class of drugs.

This week, a team from the University of Washington led by David Baker introduced a new AI tool that can design binders to grab onto shifty proteins. The AI generated binders to lock onto many previously “undruggable” proteins, including some implicated in cancer.

“Almost half of the human proteome is intrinsically disordered, yet we’ve had no reliable way to drug it. These studies change that by giving scientists everywhere new tools for binding the unstructured half of biology,” said Baker.

A Molecular Dance

Proteins are the workhorses of our bodies. They’re made of chains of molecules called amino acids that fold into complex shapes, like flat or twirly ribbons.

These 3D structures determine interactions with other proteins or drugs. With AI, it’s now possible to predict protein structure and engineer new proteins from scratch. These technologies, though powerful, are mostly limited to stable proteins—those that act a little like Lego blocks—or semi-dynamic proteins that shift from one stable structure to another.

Intrinsically disordered proteins are a different beast. These proteins don’t stabilize, behaving more like jellyfish than Lego blocks. Others contain disordered regions that interact with other proteins to transmit information.

The human proteome—the complete set of proteins in our body—encompasses millions of these interactions that “are responsible for dynamic functions,” wrote Alan Moses and Julie Forman-Kay at the University of Toronto, who were not involved in the study.

Scientists have long eyed these dynamic regions and proteins as targets for drugs. Engineering “jamming” peptides could potentially sever dangerous signals that lead to cancer, senescent “zombie cells,” and a wide range of diseases.

Most AI strategies have focused on proteins with relatively stable pockets for docking. But “because intrinsically disordered regions lack folded binding pockets, it is generally impossible to use existing structure-based machine learning design methods for disordered targets,” wrote Moses and Forman-Kay. Even generative AI that can design binders has struggled here.

Double Team

The new study combined multiple existing approaches into an AI that recognizes disordered proteins and generates binders.

The team first matched repeated structures on the binder and target—a bit like interlocking fingers—to learn about the target’s overall shape. They then shuffled the binder’s features—for example, recombining binding pockets in different configurations—to make a library of binder templates. And finally, they improved on these with an AI technique called diffusion.

In all, the team generated roughly a thousand pockets that “allow for trillions of combinations” that can grab onto wiggly proteins, study author Kejia Wu said in a press release.
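
As a rough sketch of that three-stage flow (and only a sketch: every function below is a simplified stand-in, not the authors' code or any real protein-design library), the pipeline can be pictured like this:

```python
from itertools import product
from typing import List, Tuple

# Simplified stand-ins sketching the three-stage flow described above.
# None of this is the authors' method or a real structure-design tool.

def propose_pockets(target: str, k: int = 4) -> List[str]:
    """Stage 1: propose pocket motifs meant to interlock with short repeated
    segments of the disordered target (here just labeled by segment)."""
    segments = [target[i:i + 3] for i in range(0, len(target) - 2, 3)]
    return [f"pocket_for_{seg}" for seg in segments[:k]]

def recombine(pockets: List[str], pockets_per_binder: int = 2) -> List[Tuple[str, ...]]:
    """Stage 2: shuffle pockets into a combinatorial library of binder templates."""
    return list(product(pockets, repeat=pockets_per_binder))

def diffusion_refine(template: Tuple[str, ...]) -> str:
    """Stage 3: stand-in for refining a coarse template with a generative diffusion model."""
    return "binder(" + "+".join(template) + ")"

def design_binders(target: str, top_k: int = 5) -> List[str]:
    templates = recombine(propose_pockets(target))
    designs = [diffusion_refine(t) for t in templates]
    return designs[:top_k]   # in practice, candidates would be ranked and tested experimentally

# Toy usage with an invented sequence (not dynorphin A's real sequence).
print(design_binders("GSPQRRSARLA"))
```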

As proof of concept, the team built binders for 39 highly diverse disordered proteins. One target, neuropeptide dynorphin A, is crucial for sensing pain. The protein is a popular research subject in pain management, but scientists have struggled to design drugs for it because of its wobbly nature.

The AI-generated binder effectively locked onto dynorphin A’s disordered bits. The protein usually links up with other molecules that either boost or lower its function. Surprisingly, the AI-designed binders stuck to the target better than dynorphin A’s usual protein clique and blocked pain signaling in lab-grown human cells.

New Class of Medicine

Many proteins involved in cancer and brain diseases have disordered regions that are undruggable. Some studies have found small molecules that could target such regions to treat advanced prostate cancer, but successes are few and far between.

As more of these proteins are associated with diseases, binders that change their activity “could have great therapeutic potential,” wrote Moses and Forman-Kay.

For example, new binders could tweak the activity of mysterious droplets called biomolecular condensates floating inside cells. These floating blobs regulate gene expression and immune activation and keep cells healthy when deprived of oxygen and during other stressful moments. Tinkering with them using custom-designed binders could open new ways to influence cellular health for research and clinical use. The binders could also be engineered into antibody-like drugs that compete with pathogens or proteins to stop infections or disease.

They’ll have to be further tested for safety and longevity. But in the future, they could tackle previously undruggable proteins and widen the therapeutic horizon. And they might be used in synthetic biology too. Scientists could design synthetic disordered proteins and custom binders to explore how they work in cells. “This can facilitate a wide range of experimental and translational applications that were not previously accessible,” wrote Moses and Forman-Kay.

Scientists Launch Moonshot to Build an Entire Human Genome From Scratch

7 July 2025 at 20:51

The project, which will take many years and carries some risk, could spark a second revolution in genetics.

The ability to sequence and edit human DNA has revolutionized biomedicine. Now a new consortium wants to take the next step and build human genomes from scratch.

The Human Genome Project was one of the great scientific moonshots of the last century. Mapping the entirety of our DNA took thousands of researchers from across the globe 13 years and nearly $3 billion, but the benefits have been enormous.

The project has revolutionized our understanding of the genetic basis of disease and driven rapid advances in the technology needed to read and interpret our DNA. The cost of sequencing an entire human genome has plummeted from around a million dollars in 2008 to just a few hundred dollars today.

The ability to not only read but also build human genomes from scratch could bring more fundamental breakthroughs. And now the world’s largest medical charity, the Wellcome Trust, is providing £10 million ($13.6 million) in funding to kickstart the Synthetic Human Genome Project (SynHG).

“The ability to synthesize large genomes, including genomes for human cells, may transform our understanding of genome biology and profoundly alter the horizons of biotechnology and medicine,” Jason Chin from the University of Oxford, who will lead the project, said in a statement.

The project builds on a steady stream of advances in DNA synthesis in recent years. Chin himself led a team that synthesized the entire genome of the bacterium E. coli in 2019. And in 2023, an international consortium completed the first synthetic genome of yeast—a significantly more complex organism that is closer in evolutionary terms to humans.

At this stage, the SynHG project is focused on developing foundational tools and methods, and the organizers admit it will likely take decades to synthesize an entire human genome. For now, the goal is to build a single human chromosome—one of the 46 tightly wound bundles of DNA that make up the human genome—in the next 5 to 10 years.

While gene editing makes it possible to tinker with existing genetic instructions, synthesis would make it possible to build larger stretches of DNA from scratch. Those kinds of capabilities could lead to breakthroughs in our understanding of disease and open the prospect of new therapies based on designer cells or even designer tissues and organs.

“Building DNA from scratch allows us to test out how DNA really works and test out new theories, because currently we can only really do that by tweaking DNA in DNA that already exists in living systems,” Matthew Hurles, director of the Wellcome Sanger Institute in the UK, told the BBC.

Much of our existing knowledge of the genome is restricted to the roughly 2 percent that codes for specific proteins, with the other 98 percent of “non-coding” DNA still largely a mystery. Being able to build the entire sequence from scratch could help us understand the genome’s “dark matter,” Julian Sale, from the UK’s Medical Research Council Laboratory of Molecular Biology, told The Guardian.

The project is controversial though. There are fears the same technology could be put to more ethically questionable uses. These could include new bioweapons, genetically enhanced humans, or even strange new organisms that incorporate some human DNA, geneticist Bill Earnshaw, from Edinburgh University, told the BBC.

“The genie is out of the bottle,” he said. “We could have a set of restrictions now, but if an organization who has access to appropriate machinery decided to start synthesizing anything, I don’t think we could stop them.”

In an attempt to head off these concerns, SynHG will also have a social-science program designed to map out potential risks and how to deal with them. One particular issue it will focus on is the fact that genomic research is currently skewed towards people of European ancestry, which could limit broader applicability.

Fortunately, given the huge technical challenge ahead, there is likely plenty of time to map out the potential pitfalls. And if the project is successful, it could spark a second great revolution in genetics likely to do more good than harm.

The Dream of an AI Scientist Is Closer Than Ever

26 June 2025 at 14:00

The number of scientific papers relying on AI has quadrupled, and the scope of problems AI can tackle expands by the day.

Modern artificial intelligence is a product of decades of painstaking scientific research. Now, it’s starting to pay that effort back by accelerating progress across academia.

Ever since the emergence of AI as a field of study, researchers have dreamed of creating tools smart enough to accelerate humanity’s endless drive to acquire new knowledge. With the advent of deep learning in the 2010s, this goal finally became a realistic possibility.

Between 2012 and 2022, the proportion of scientific papers relying on AI in some way quadrupled to almost 9 percent. Researchers are using neural networks to analyze data, conduct literature reviews, or model complex processes across every scientific discipline. And as the technology advances, the scope of problems they can tackle is expanding by the day.

The poster boy for AI’s use in science is undoubtedly Google DeepMind’s AlphaFold, whose inventors won the 2024 Nobel Prize in Chemistry. The model used advances in transformers—the architecture that powers large language models—to solve the “protein folding problem” that had bedeviled scientists for decades.

A protein’s structure determines its function, but previously the only way to discover its shape was with complex imaging techniques like X-ray crystallography and cryo-electron microscopy. AlphaFold, in comparison, could predict the shape of a protein from nothing more than the series of amino acids making it up, something computer scientists had been trying and failing to do for years.

This made it possible to predict the shape of every protein known to science in just two years, a feat that could have a transformative impact on biomedical research. AlphaFold 3, released in 2024, goes even further. It can predict both the structure and interactions of proteins, as well as DNA, RNA, and other biomolecules.

Google has also turned its AI loose on another area of the life sciences, working with Harvard researchers to create the most detailed map of human brain connections to date. The team took ultra-thin slices from a 1-millimeter cube of human brain and used AI-based imaging technology to map the roughly 50,000 cells and 150 million synaptic connections within.

This is by far the most detailed “connectome” of the human brain produced to date, and the data is now freely available, providing scientists a vital tool for exploring neuronal architecture and connectivity. This could boost our understanding of neurological disorders and potentially provide insights into core cognitive processes like learning and memory.

AI is also revolutionizing the field of materials science. In 2023, Google DeepMind released a graph neural network called GNoME that predicted 2.2 million novel inorganic crystal structures, including 380,000 stable ones that could potentially form the basis of new technologies.

Not to be outdone, other big AI developers have also jumped into this space. Last year, Meta released and open sourced its own transformer-based materials discovery models and, crucially, a dataset with more than 110 million materials simulations that it used to train them, which should allow other researchers to build their own materials science AI models.

Earlier this year, Microsoft released MatterGen, which uses a diffusion model—the same architecture used in many image and video generation models—to produce novel inorganic crystals. After fine-tuning, the researchers showed it could be prompted to produce materials with specific chemical, mechanical, electronic, and magnetic properties.

One of AI’s biggest strengths is its ability to model systems far too complex for conventional computational techniques. This makes it a natural fit for weather forecasting and climate modeling, which currently rely on enormous physical simulations running on supercomputers.

Google DeepMind’s GraphCast model was the first to show the promise of the approach. It used graph neural networks to generate 10-day forecasts in one minute, with higher accuracy than existing gold-standard methods that take several hours.

AI forecasting is so effective that it has already been deployed by the European Center for Medium-Range Weather Forecasts, whose Artificial Intelligence Forecasting System went live earlier this year. The model is faster, 1,000 times more energy efficient, and has boosted accuracy by 20 percent.

Microsoft has created what it calls a “foundation model for the Earth system” named Aurora that was trained on more than a million hours of geophysical data. It outperforms existing approaches at predicting air quality, ocean waves, and the paths of tropical cyclones while using orders of magnitude less computation.

AI is also contributing to fundamental discoveries in physics. When the Large Hadron Collider smashes particle beams together, it produces millions of collisions a second. Sifting through all this data to find interesting phenomena is a monumental task, but now researchers are turning to AI to do it for them.

Similarly, researchers in Germany have been using AI to pore through gravitational wave data for signs of neutron star mergers. This helps scientists detect mergers in time to point a telescope at them.

Perhaps most exciting, though, is the promise of AI taking on the role of scientist itself. By combining lab automation, robotics, and machine learning, it’s becoming possible to create “self-driving labs.” These take a high-level objective from a researcher, such as achieving a particular yield from a chemical reaction, and then autonomously run experiments until they hit that goal.
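
To make that loop concrete, here is a minimal sketch of the idea, assuming an entirely hypothetical run_experiment stand-in for the robotic hardware and a simple propose-and-keep-the-best search rather than the optimizers a real self-driving lab would use:

```python
import random

def run_experiment(temperature_c: float, reagent_ratio: float) -> float:
    """Hypothetical stand-in for the robotic platform: report a measured yield between 0 and 1."""
    # A made-up response surface that peaks near 82 C and a 1.5:1 reagent ratio.
    true_yield = 1.0 - ((temperature_c - 82.0) / 40.0) ** 2 - ((reagent_ratio - 1.5) / 1.2) ** 2
    return max(0.0, min(1.0, true_yield + random.gauss(0, 0.02)))

def self_driving_lab(target_yield: float, max_runs: int = 50):
    """Propose conditions, run an experiment, keep the best result, and stop once the goal is met."""
    best_temp, best_ratio = 70.0, 1.0
    best_yield = run_experiment(best_temp, best_ratio)
    for _ in range(max_runs):
        temp = best_temp + random.gauss(0, 5.0)      # perturb the best-known conditions
        ratio = best_ratio + random.gauss(0, 0.2)
        measured = run_experiment(temp, ratio)
        if measured > best_yield:
            best_temp, best_ratio, best_yield = temp, ratio, measured
        if best_yield >= target_yield:               # objective reached: the loop stops itself
            break
    return best_temp, best_ratio, best_yield

print(self_driving_lab(target_yield=0.9))
```

Real systems swap the random perturbation for Bayesian optimization or active learning, but the structure (propose, measure, update, stop at the goal) is the same.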

Others are going further and actually involving AI in the planning and design of experiments. In 2023, Carnegie Mellon University researchers showed that their AI “Coscientist,” powered by OpenAI’s GPT-4, could autonomously plan and carry out the chemical synthesis of known compounds.

Google has created a multi-agent system powered by its Gemini 2.0 reasoning model that can help scientists generate hypotheses and propose new research projects. And another “AI scientist” developed by Sakana AI wrote a machine learning paper that passed the peer-review process for a workshop at a prestigious AI conference.

Exciting as all this is though, AI’s takeover of science could have potential downsides. Neural networks are black boxes whose internal workings are hard to decipher, which can make results challenging to interpret. And many researchers are not familiar enough with the technology to catch common pitfalls that can distort results.

Nonetheless, the incredible power of these models to crunch through data and model things at scales far beyond human comprehension remains a vital tool. With judicious application AI could massively accelerate progress in a wide range of fields.

The post The Dream of an AI Scientist Is Closer Than Ever appeared first on SingularityHub.

Will AI Take Your Job? It Depends on These 4 Key Advantages AI Has Over Humans

This framework can help you understand where AI provides value.

If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.

But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.

AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope, and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.

Speed

First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy, or blurry images and making crisper, higher-resolution versions. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.

AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

Real-time performance matters in these cases, and the speed of AI is necessary to enable them.

Scale

The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.

AI models can do this for every single product, TV show, website, and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.
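
As a toy illustration of that decision, the sketch below uses a made-up predict_click_probability model in place of an advertiser's real AI; the core loop is simply to score the impression, compare its expected value to the asking price, and answer within the auction's time budget:

```python
def predict_click_probability(user_features: dict) -> float:
    """Hypothetical model: estimate how likely this user is to click the ad."""
    score = 0.02
    if user_features.get("visited_product_page"):
        score += 0.05
    if user_features.get("matches_target_segment"):
        score += 0.03
    return min(score, 1.0)

def decide_bid(user_features: dict, asking_price: float, value_per_click: float = 2.00):
    """Bid only when the expected value of showing the ad exceeds the asking price."""
    expected_value = predict_click_probability(user_features) * value_per_click
    return expected_value if expected_value > asking_price else None

# One of the thousands of auctions resolved every second:
print(decide_bid({"visited_product_page": True}, asking_price=0.05))
```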

Scope

Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.

It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.

Sophistication

Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. Chess-playing computer systems of the 1990s, such as Deep Blue, succeeded by thinking a dozen or more moves ahead.

Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions—often many billions—among many factors. Neural networks now power the best chess-playing models and most other AI systems.

Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold 2, the AI model for structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in Chemistry in 2024, is another example.

This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids fold into three-dimensional shapes with a 93-million-parameter model that doesn’t explicitly encode physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.

But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.

Context Matters

Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly—yet accuracy isn’t the differentiator. The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.

Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.

For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics, and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.

It is this “phase shift,” when changes in degree may transform into changes in kind, where AI’s impacts on society are likely to be most keenly felt. All of this points to the places where AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope, or sophistication, or when one of these factors poses a real barrier to accomplishing a goal, it makes sense to think about how AI could help.

Equally, when speed, scale, scope, and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.

Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.

Where the Advantage Lies

Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope, and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Will AI Take Your Job? It Depends on These 4 Key Advantages AI Has Over Humans appeared first on SingularityHub.

A Man With ALS Can Speak and Sing Again Thanks to a Brain Implant and AI-Synthesized Voice

13 June 2025 at 19:34

Using the new system, Casey Harrell can emphasize words and intonations in real time—and sing tunes.

At the age of 45, Casey Harrell lost his voice to amyotrophic lateral sclerosis (ALS). Also called Lou Gehrig’s disease, the disorder eats away at muscle-controlling nerves in the brain and spinal cord. Symptoms begin with weakening muscles, uncontrollable twitching, and difficulty swallowing. Eventually patients lose control of muscles in the tongue, throat, and lips, robbing them of their ability to speak.

Unlike fully paralyzed patients, Harrell could still produce sounds that seasoned caretakers could understand, but his speech wasn’t intelligible in everyday conversation. Now, thanks to an AI-guided brain implant, he can once again “speak” using a computer-generated voice that sounds like his.

The system, developed by researchers at the University of California, Davis, has almost no detectable delay when translating his brain activity into coherent speech. Rather than producing a monotone synthesized voice, the system can detect intonations—for example, a question versus a statement—and emphasize a word. It also translates brain activity encoding nonsense words such as “hmm” or “eww,” making the generated voice sound natural.

“With instantaneous voice synthesis, neuroprosthesis users will be able to be more included in a conversation. For example, they can interrupt, and people are less likely to interrupt them accidentally,” said study author Sergey Stavisky in a press release.

The study comes hot on the heels of another AI method that decodes a paralyzed woman’s thoughts into speech within a second. Previous systems took nearly half a minute—more than long enough to disrupt normal conversation. Together, the two studies showcase the power of AI to decipher the brain’s electrical chatter and convert it into speech in real time.

In Harrell’s case, the training was completed in the comfort of his home. Although the system required some monitoring and tinkering, it paves the way for a commercially available product for those who have lost the ability to speak.

“This is the holy grail in speech BCIs [brain-computer interfaces],” Christian Herff at Maastricht University, who was not involved in the study, told Nature.

Listening In

Scientists have long sought to restore the ability to speak for those who have lost it, whether due to injury or disease.

One strategy is to tap into the brain’s electrical activity. When we prepare to say something, the brain directs muscles in the throat, tongue, and lips to form sounds and words. By listening in on its electrical chatter, it’s possible to decode intended speech. Algorithms stitch together neural data and generate words and sentences as either text or synthesized speech.

The process may sound straightforward. But it took scientists years to identify the most reliable brain regions from which to collect speech-related activity. Even then, the lag time from thought to output—whether text or synthesized speech—has been long enough to make conversation awkward.

Then there are the nuances. Speech isn’t just about producing audible sentences. How you say something also matters. Intonation tells us if the speaker is asking a question, stating their needs, joking, or being sarcastic. Emphasis on individual words highlights the speaker’s mindset and intent. These aspects are especially important for tonal languages—such as Chinese—where a change in tone or pitch for the same “word” can have wildly different meanings. (“Ma,” for example, can mean mom, numb, horse, or cursing, depending on the intonation.)

Talk to Me

Harrell is part of the BrainGate2 clinical trial, a long-standing project seeking to restore lost abilities using brain implants. He enrolled in the trial as his ALS symptoms progressed. Although he could still vocalize, his speech was hard to understand and required expert listeners from his care team to translate. This was his primary mode of communication. He also had to learn to speak slower to make his residual speech more intelligible.

Five years ago, Harrell had four 64-microelectrode implants inserted into the left precentral gyrus of his brain—a region controlling multiple brain functions, including coordinating speech.

“We are recording from the part of the brain that’s trying to send these commands to the muscles. And we are basically listening into that, and we’re translating those patterns of brain activity into a phoneme—like a syllable or the unit of speech—and then the words they’re trying to say,” said Stavisky at the time.

In just two training sessions, Harrell had the potential to say 125,000 words—a vocabulary large enough for everyday use. The system translated his neural activity into a voice synthesizer that mimicked his voice. After more training, the implant achieved 97.5 percent accuracy as he went about his daily life.

“The first time we tried the system, he cried with joy as the words he was trying to say correctly appeared on-screen. We all did,” said Stavisky.

In the new study, the team sought to make generated speech even more natural with less delay and more personality. One of the hardest parts of real-time voice synthesis is not knowing when and how the person is trying to speak—or their intended intonation. “I am fine” has vastly different meanings depending on tone.

The team captured Harrell’s brain activity as he attempted to speak a sentence shown on a screen. The electrical spikes were filtered to remove noise in one-millisecond segments and fed into a decoder. Like the Rosetta Stone, the algorithm mapped specific neural features to words and pitch, which were played back to Harrell through a voice synthesizer with just a 25-millisecond lag—roughly the time it takes for a person to hear their own voice, wrote the team.
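
As a rough schematic only (not the UC Davis team's actual pipeline), the streaming idea looks something like the sketch below, with hypothetical decode_features and synthesize functions standing in for the trained decoder and voice synthesizer:

```python
import numpy as np

DECODE_EVERY_MS = 10  # sound features are decoded every 10 milliseconds

def decode_features(window: np.ndarray) -> dict:
    """Hypothetical decoder: map a short window of neural activity to sound features."""
    return {"sound_logits": window.mean(axis=0), "pitch_hz": 80.0 + float(window.sum()) % 120.0}

def synthesize(features: dict) -> None:
    """Hypothetical voice synthesizer: turn decoded features into audio."""
    pass  # a real system would stream audio out here

def streaming_decode(spike_stream) -> None:
    """Consume one 1 ms bin of spike counts at a time and emit speech every 10 ms."""
    buffer = []
    for bin_counts in spike_stream:               # one array of per-channel spike counts per ms
        buffer.append(bin_counts)
        if len(buffer) % DECODE_EVERY_MS == 0:
            window = np.stack(buffer[-DECODE_EVERY_MS:])
            synthesize(decode_features(window))   # the whole loop must finish within ~25 ms

# Demo with 50 ms of fake data from 256 channels:
fake_stream = (np.random.poisson(0.1, size=256) for _ in range(50))
streaming_decode(fake_stream)
```

The key constraint is latency: every stage has to finish quickly enough that the decoded audio trails the attempt to speak by only a few tens of milliseconds.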

Rather than decoding phonemes or words, the AI captured Harrell’s intent to make sounds every 10 milliseconds, allowing him to eventually say words not in a dictionary, like “hmm” or “eww.” He could spell out words and respond to open-ended questions, telling the researchers that the synthetic voice made him “happy” and that it felt like “his real voice.”

The team also recorded brain activity as Harrell attempted to speak the same set of sentences as either statements or questions, the latter having an increased pitch. All four electrode arrays recorded a neural fingerprint of activity patterns when the sentence was spoken as a question.

The system, once trained, could also detect emphasis. Harrell was asked to stress each word individually in the sentence, “I never said she stole my money,” which can have multiple meanings. His brain activity ramped up before saying the emphasized word, which the algorithm captured and used to guide the synthesized voice. In another test, the system picked up multiple pitches as he tried to sing different melodies.

Raise Your Voice

The AI isn’t perfect. Volunteers could understand the output roughly 60 percent of the time—a far cry from the near-perfect brain-to-text system Harrell is currently using. But the new AI brings individual personality to synthesized speech, which usually sounds monotone. Deciphering speech in real time also lets the person interrupt or object during a conversation, making the experience feel more natural.

“We don’t always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,” study author Maitreyee Wairagkar told Nature.

Because the AI is trained on sounds, not English vocabulary, it could be adapted to other languages, especially tonal ones like Chinese. The team is also looking to increase the system’s accuracy by placing more electrodes in people who have lost their speech due to stroke or neurodegenerative diseases.

“The results of this research provide hope for people who want to talk but can’t…This kind of technology could be transformative for people living with paralysis,” said study author David Brandman.

The post A Man With ALS Can Speak and Sing Again Thanks to a Brain Implant and AI-Synthesized Voice appeared first on SingularityHub.

What If the Big Bang Wasn’t the Beginning? Research Suggests It May Have Taken Place Inside a Black Hole

10 June 2025 at 14:00

Our universe may have been born in a gravitational crunch that formed a very massive black hole—followed by a bounce inside it.

The Big Bang is often described as the explosive birth of the universe—a singular moment when space, time, and matter sprang into existence. But what if this was not the beginning at all? What if our universe emerged from something else—something more familiar and radical at the same time?

In a new paper, published in Physical Review D (full preprint here), my colleagues and I propose a striking alternative. Our calculations suggest the Big Bang was not the start of everything, but rather the outcome of a gravitational crunch or collapse that formed a very massive black hole—followed by a bounce inside it.

This idea, which we call the black hole universe, offers a radically different view of cosmic origins, yet it is grounded entirely in known physics and observations.

Today’s standard cosmological model, based on the Big Bang and cosmic inflation (the idea that the early universe rapidly blew up in size), has been remarkably successful in explaining the structure and evolution of the universe. But it comes at a price: It leaves some of the most fundamental questions unanswered.

For one, the Big Bang model begins with a singularity—a point of infinite density where the laws of physics break down. This is not just a technical glitch; it’s a deep theoretical problem that suggests we don’t really understand the beginning at all.

To explain the universe’s large-scale structure, physicists introduced a brief phase of rapid expansion into the early universe called cosmic inflation, powered by an unknown field with strange properties. Later, to explain the accelerating expansion observed today, they added another “mysterious” component: dark energy.

In short, the standard model of cosmology works well—but only by introducing new ingredients we have never observed directly. Meanwhile, the most basic questions remain open: Where did everything come from? Why did it begin this way? And why is the universe so flat, smooth, and large?

New Model

Our new model tackles these questions from a different angle—by looking inward instead of outward. Instead of starting with an expanding universe and trying to trace back how it began, we consider what happens when an overly dense collection of matter collapses under gravity.

This is a familiar process: Stars collapse into black holes, which are among the most well-understood objects in physics. But what happens inside a black hole, beyond the event horizon from which nothing can escape, remains a mystery.

In 1965, the British physicist Roger Penrose proved that under very general conditions, gravitational collapse must lead to a singularity. This result, extended by the late British physicist Stephen Hawking and others, underpins the idea that singularities—like the one at the Big Bang—are unavoidable.

The idea helped win Penrose a share of the 2020 Nobel Prize in Physics and inspired Hawking’s global bestseller A Brief History of Time: From the Big Bang to Black Holes. But there’s a caveat. These “singularity theorems” rely on “classical physics,” which describes ordinary macroscopic objects. If we include the effects of quantum mechanics, which rules the tiny microcosmos of atoms and particles, as we must at extreme densities, the story may change.

In our new paper, we show that gravitational collapse does not have to end in a singularity. We find an exact analytical solution—a mathematical result with no approximations. Our math shows that as we approach the potential singularity, the size of the universe changes as a (hyperbolic) function of cosmic time.

This simple mathematical solution describes how a collapsing cloud of matter can reach a high-density state and then bounce, rebounding outward into a new expanding phase.
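
For intuition only, a generic hyperbolic bounce of the kind described above can be written as follows; this illustrative form is our own assumption, not the exact solution reported in the paper:

```latex
a(t) = a_{\min}\,\cosh\!\left(\frac{t}{t_0}\right), \qquad a(0) = a_{\min} > 0, \qquad \dot{a}(0) = 0
```

Here the scale factor a(t) contracts for t < 0, reaches a finite minimum size at t = 0, and expands again for t > 0, so the collapse never reaches a point of infinite density.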

But why don’t Penrose’s theorems forbid such an outcome? It’s all down to a rule called the quantum exclusion principle, which states that no two identical particles known as fermions can occupy the same quantum state (such as angular momentum, or “spin”).

And we show that this rule prevents the particles in the collapsing matter from being squeezed indefinitely. As a result, the collapse halts and reverses. The bounce is not only possible—it’s inevitable under the right conditions.

Crucially, this bounce occurs entirely within the framework of general relativity, which applies on large scales such as stars and galaxies, combined with the basic principles of quantum mechanics—no exotic fields, extra dimensions, or speculative physics required.

What emerges on the other side of the bounce is a universe remarkably like our own. Even more surprisingly, the rebound naturally produces the two separate phases of accelerated expansion—inflation and dark energy—driven not by hypothetical fields but by the physics of the bounce itself.

Testable Predictions

One of the strengths of this model is that it makes testable predictions. It predicts a small but non-zero amount of positive spatial curvature—meaning the universe is not exactly flat, but slightly curved, like the surface of the Earth.

This is simply a relic of the initial small over-density that triggered the collapse. If future observations, such as the ongoing Euclid mission, confirm a small positive curvature, it would be a strong hint that our universe did indeed emerge from such a bounce. It also makes predictions about the current universe’s rate of expansion, something that has already been verified.

The SpaceX Falcon 9 rocket carrying ESA’s Euclid mission on the launch pad in 2023. Image Credit: ESA, CC BY-SA

This model does more than fix technical problems with standard cosmology. It could also shed new light on other deep mysteries in our understanding of the early universe—such as the origin of supermassive black holes, the nature of dark matter, or the hierarchical formation and evolution of galaxies.

These questions will be explored by future space missions such as Arrakihs, which will study diffuse features like stellar halos (spherical structures of stars and globular clusters surrounding galaxies) and satellite galaxies (smaller galaxies that orbit larger ones), both of which are difficult to detect with traditional telescopes from Earth. Those observations will help us understand dark matter and galaxy evolution.

These phenomena might also be linked to relic compact objects—such as black holes—that formed during the collapsing phase and survived the bounce.

The black hole universe also offers a new perspective on our place in the cosmos. In this framework, our entire observable universe lies inside the interior of a black hole formed in some larger “parent” universe.

We are no more special than Earth turned out to be in the geocentric worldview, the view Galileo (the astronomer who argued in the 16th and 17th centuries that the Earth revolves around the sun) was placed under house arrest for challenging.

We are not witnessing the birth of everything from nothing, but rather the continuation of a cosmic cycle—one shaped by gravity, quantum mechanics, and the deep interconnections between them.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post What If the Big Bang Wasn’t the Beginning? Research Suggests It May Have Taken Place Inside a Black Hole appeared first on SingularityHub.

This Brain Discovery Could Unlock AI’s Ability to See the Future

6 June 2025 at 17:51

The brain quickly adapts to change by predicting multiple futures, neuron by neuron. These findings could lead to AI that can do the same thing.

We constantly make decisions. Some seem simple: I booked dinner at a new restaurant, but I’m hungry now. Should I grab a snack and risk losing my appetite or wait until later for a satisfying meal—in other words, what choice is likely more rewarding?

Dopamine neurons inside the brain track these decisions and their outcomes. If you regret a choice, you’ll likely make a different one next time. This is called reinforcement learning, and it helps the brain continuously adjust to change. It also powers a family of AI algorithms that learn from successes and mistakes like humans do.

But reward isn’t all or nothing. Did my choice make me ecstatic, or just a little happier? Was the wait worth it?

This week, researchers at the Champalimaud Foundation, Harvard University, and other institutions said they’ve discovered a previously hidden universe of dopamine signaling in the brain. After recording the activity of single dopamine neurons as mice learned a new task, the teams found the cells don’t simply track rewards. They also keep tabs on when a reward came and how big it was—essentially building a mental map of near-term and far-future reward possibilities.

“Previous studies usually just averaged the activity across neurons and looked at that average,” said study author Margarida Sousa in a press release. “But we wanted to capture the full diversity across the population—to see how individual neurons might specialize and contribute to a broader, collective representation.”

Some dopamine neurons preferred immediate rewards; others slowly ramped up activity in expectation of delayed satisfaction. Each cell also had a preference for the size of a reward and listened out for internal signals—for example, whether a mouse was thirsty or hungry and how motivated it was.

Surprisingly, this multidimensional map closely mimics some emerging AI systems that rely on reinforcement learning. Rather than averaging different opinions into a single decision, some AI systems use a group of algorithms that encodes a wide range of reward possibilities and then votes on a final decision.

In several simulations, AI equipped with a multidimensional map better handled uncertainty and risk in a foraging task.  

The results “open new avenues” to design more efficient reinforcement learning AI that better predicts and adapts to uncertainties, wrote one team. They also provide a new way to understand how our brains make everyday decisions and may offer insight into how to treat impulsivity in neurological disorders such as Parkinson’s disease.

Dopamine Spark

For decades, neuroscientists have known dopamine neurons underpin reinforcement learning. These neurons puff out a small amount of dopamine—often dubbed the pleasure chemical—to signal an unexpected reward. Through trial and error, these signals might eventually steer a thirsty mouse through a maze to find the water stashed at its end. Scientists have developed a framework for reinforcement learning by recording the electrical activity of dopamine neurons as these critters learned. Dopamine neurons spark with activity in response to imminent rewards and respond more weakly the further a reward lies in the future—a process researchers call “discounting.”
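
In the standard single-value picture, this is often written as an exponential discount, where a reward R expected after a delay t contributes only a fraction of its face value:

```latex
V = \gamma^{t} R, \qquad 0 < \gamma < 1
```

The smaller the discount factor, the more impulsive the valuation.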

But these analyses average activity into a single expected reward, rather than capturing the full range of possible outcomes over time—such as larger rewards after longer delays. Although the models can tell you if you’ve received a reward, they miss nuances, such as when and how much. After battling hunger—was the wait for the restaurant worth it?

An Unexpected Hint

Sousa and colleagues wondered if dopamine signaling is more complex than previously thought. Their new study was actually inspired by AI. An approach called distributional reinforcement learning learns, through trial and error, a whole range of possible outcomes rather than a single expected reward.

“What if different dopamine neurons were sensitive to distinct combinations of possible future reward features—for example, not just their magnitude, but also their timing?” said Sousa.

Harvard neuroscientists led by Naoshige Uchida had an answer. They recorded electrical activity from individual dopamine neurons in mice as the animals learned to lick up a water reward. At the beginning of each trial, the mice sniffed a different scent that predicted both the amount of water they might find—that is, the size of the reward—and how long until they might get it.

Each dopamine neuron had its own preference. Some were more impulsive and preferred immediate rewards, regardless of size. Others were more cautious, slowly ramping up activity that tracked reward over time. It’s a bit like being extremely thirsty on a hike in the desert with limited water: Do you chug it all now, or ration it out and give yourself a longer runway?

The neurons also had different personalities. Optimistic ones were especially sensitive to unexpectedly large rewards—activating with a burst—whereas pessimistic ones stayed silent. Combining the activity of these neuron voters, each with their own point of view, resulted in a population code that ultimately decided the mice’s behavior.

“It’s like having a team of advisors with different risk profiles,” said study author Daniel McNamee in the press release, “Some urge action—‘Take the reward now, it might not last’—while others advise patience—‘Wait, something better could be coming.’”

Each neuron’s stance was flexible. When the reward was consistently delayed, they collectively shifted to favor longer-term rewards, showcasing how the brain rapidly adjusts to change.

“When we looked at the [dopamine neuron] population as a whole, it became clear that these neurons were encoding a probabilistic map,” said study author Joe Paton. “Not just whether a reward was likely, but a coordinate system of when it might arrive and how big it might be.”

Brain to AI

The brain recordings were like ensemble AI, where each model has its own viewpoint but the group collaborates to handle uncertainties.

The team also developed an algorithm, called time-magnitude reinforcement learning, or TMRL, that could plan future choices. Classic reinforcement-learning models only learn from a reward delivered at the end of a trial, so it takes many cycles of learning before an algorithm homes in on the best decision. But TMRL rapidly maps a slew of choices, allowing humans and AI to pick the best ones with fewer cycles. The new model also includes internal states, like hunger levels, to further fine-tune decisions.
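
As a minimal sketch of that idea (our own simplification, not the authors' TMRL implementation), imagine a population of units, each tuned to a preferred delay and reward size and each applying its own discount; pooling their votes yields a decision that also reflects internal state:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy population of 50 "dopamine neurons," each tuned to a preferred delay and reward size.
preferred_delay = rng.uniform(0.5, 10.0, size=50)      # seconds
preferred_magnitude = rng.uniform(0.1, 1.0, size=50)   # arbitrary reward units
discount_rate = rng.uniform(0.05, 0.5, size=50)        # impulsive vs. patient cells

def population_value(reward: float, delay: float, thirst: float = 1.0) -> float:
    """Each unit scores the offer from its own point of view; the population's votes are pooled."""
    tuning = (np.exp(-((delay - preferred_delay) ** 2) / 8.0)
              * np.exp(-((reward - preferred_magnitude) ** 2) / 0.5))
    votes = tuning * np.exp(-discount_rate * delay) * reward * thirst
    return float(votes.mean())

# Wait 8 seconds for a big drink, or take a small one almost immediately?
wait = population_value(reward=1.0, delay=8.0)
now = population_value(reward=0.2, delay=0.5)
print("wait" if wait > now else "drink now")
```

Because every unit carries a different slice of the time-and-magnitude map, the pooled estimate can shift quickly when reward timing or internal state changes.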

In one test, equipping algorithms with a dopamine-like “multidimensional map” boosted their performance in a simulated foraging task compared to standard reinforcement learning models.

“Knowing in advance—at the start of an episode—the range and likelihood of rewards available and when they are likely to occur could be highly useful for planning and flexible behavior,” especially in a complex environment and with different internal states, wrote Sousa and team.

The dual studies are the latest to showcase the power of AI and neuroscience collaboration. Models of the brain’s inner workings can inspire more human-like AI. Meanwhile, AI is shining light into our own neural machinery, potentially leading to insights about neurological disorders.

Inspiration from the brain “could be key to developing machines that reason more like humans,” said Paton.

The post This Brain Discovery Could Unlock AI’s Ability to See the Future appeared first on SingularityHub.

Evidence Shows AI Systems Are Already Too Much Like Humans. Will That Be a Problem?

26 May 2025 at 20:09

What happens when you can’t tell the difference between a human and an AI chatbot? We’re about to find out.

What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses—and seemingly know exactly what you need to hear? A machine so seductive, you wouldn’t even realize it’s artificial. What if we already have?

In a comprehensive meta-analysis, published in the Proceedings of the National Academy of Sciences, we show that the latest generation of large-language-model-powered chatbots match and exceed most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test, fooling humans into thinking they are interacting with another human.

None of us was expecting the arrival of super communicators. Science fiction taught us that artificial intelligence would be highly rational and all-knowing, but lack humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans in writing persuasively and also empathetically. Another study found that large language models (LLMs) excel at assessing nuanced sentiment in human-written messages.

LLMs are also masters at roleplay, assuming a wide range of personas and mimicking nuanced linguistic character styles. This is amplified by their ability to infer human beliefs and intentions from text. Of course, LLMs do not possess true empathy or social understanding—but they are highly effective mimicking machines.

We call these systems “anthropomorphic agents.” Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. However, LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphizing LLMs will fall flat.

This is a landmark moment: when you cannot tell the difference between talking to a human or an AI chatbot online.

On the Internet, Nobody Knows You’re an AI

What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring messages to individual comprehension levels. This has applications across many domains, such as legal services or public health. In education, the roleplay abilities can be used to create Socratic tutors that ask personalized questions and help students learn.

At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps, but anthropomorphic seduction comes with far wider implications.

Users are ready to trust AI chatbots so much that they disclose highly personal information. Pair this with the bots’ highly persuasive qualities, and genuine concerns emerge.

Recent research by AI company Anthropic further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to manipulation at scale to spread disinformation or create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun to provide product recommendations in response to user questions. It’s only a short step to subtly weaving product recommendations into conversations—without you ever asking.

What Can Be Done?

It is easy to call for regulation, but harder to work out the details.

The first step is to raise awareness of these abilities. Regulation should prescribe disclosure—users need to always know that they interact with an AI, like the EU AI Act mandates. But this will not be enough, given the AI systems’ seductive qualities.

The second step must be to better understand anthropomorphic qualities. So far, LLM tests measure “intelligence” and knowledge recall, but none so far measures the degree of “human likeness.” With a test like this, AI companies could be required to disclose anthropomorphic abilities with a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.

The cautionary tale of social media, which was largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems, from the spread of mis- and disinformation to the loneliness epidemic. In fact, Meta chief executive Mark Zuckerberg has already signaled that he would like to fill the void of real human contact with “AI friends.”

Relying on AI companies to refrain from further humanizing their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making their systems more engaging and personable, with the ability to give your version of ChatGPT a specific “personality.”

ChatGPT has generally become more chatty, often asking follow-up questions to keep the conversation going, and its voice mode adds even more seductive appeal.

Much good can be done with anthropomorphic agents. Their persuasive abilities can be used for good causes as well as bad ones, from fighting conspiracy theories to enticing users into donating and other prosocial behaviors.

Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can so readily push our buttons, we shouldn’t let it do so unchecked.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Evidence Shows AI Systems Are Already Too Much Like Humans. Will That Be a Problem? appeared first on SingularityHub.

Teaching AI Like a Kindergartner Could Make It Smarter

23 May 2025 at 14:00

Kids are expert learners. AI should take notes.

Despite the impressive performance of modern AI models, they still struggle to match the learning abilities of young children. Now, researchers have shown that teaching models like kindergartners can boost their skills.

Neural networks are typically trained by feeding them vast amounts of data in one go and then using this data to draw statistical patterns that guide the model’s behavior. But that’s very different from the way humans and animals learn, which typically involves gradually picking up new skills over the course of a lifetime and combining that knowledge to solve new problems.

Researchers from New York University have now tried to instill this kind of learning process in AI through a process they dub “kindergarten curriculum learning.” In a paper in Nature Machine Intelligence, they showed that the approach led to the model learning considerably faster than when using existing approaches.

“AI agents first need to go through kindergarten to later be able to better learn complex tasks,” Cristina Savin, an associate professor at NYU who led the research, said in a press release. “These results point to ways to improve learning in AI systems and call for developing a more holistic understanding of how past experiences influence learning of new skills.”

The team’s inspiration came from efforts to reproduce cognitive behavior in AI. Researchers frequently use models called recurrent neural networks to try and mimic the patterns of brain activity in animals and test out hypotheses about how these are connected to behavior.

But for more complex tasks these approaches can quickly fail, so the team decided to mirror the way animals learn. Their new approach breaks problems down into smaller tasks that need to be combined to reach the desired goal.

They trained the model on these simpler tasks, one after the other, gradually increasing the complexity and allowing the model to build on the skills it had previously acquired. Once the model had been pretrained on these simpler tasks, the researchers then trained it on the full task.

In the paper, the team tested the approach on a simplified digital version of a wagering task that mimics a real-world test given to thirsty rats. The animals are given audio cues denoting the size of a water reward. They must then decide whether to wait for an unpredictable amount of time or give up on the reward and try again.

To solve the challenge, the model has to judge the size of the reward, keep track of time, and figure out the average reward gained by waiting. The team first trained the model on each of these skills individually and then trained it to predict the optimal behavior on the full task.
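
A schematic of that curriculum, with a hypothetical train routine standing in for the paper's recurrent-network training, might look like this: pretrain on each sub-skill in order of increasing difficulty, then train on the full wagering task.

```python
def train(model: dict, task: str, steps: int) -> dict:
    """Hypothetical stand-in for a training loop on a single task."""
    model["skills"][task] = model["skills"].get(task, 0) + steps  # pretend the model improves
    return model

# Kindergarten curriculum: simple sub-skills first, in increasing order of difficulty.
curriculum = [
    ("judge_reward_size", 100),
    ("track_elapsed_time", 200),
    ("estimate_average_reward", 300),
]

model = {"skills": {}}
for task, steps in curriculum:
    model = train(model, task, steps)             # each stage builds on the previous ones

model = train(model, "full_wagering_task", 1000)  # only then tackle the complete task
print(model["skills"])
```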

They found that models trained this way not only learned faster than conventional approaches but also mimicked the strategies used by animals on the same task. Interestingly, the patterns of activity in the neural networks also mimicked the slow dynamics seen in animals that make it possible to retain information over long periods to solve this kind of time-dependent task.

The researchers say the approach could help better model animal behavior and deepen our understanding of the processes that underpin learning. But it could also be a promising way to train machines to tackle complex tasks that require long-term planning.

While the methods have so far only been tested on relatively small models and simple tasks, the idea of teaching AI the same way we would a child has some pedigree. It may not be long before our digital assistants get sent to school just like us.

The post Teaching AI Like a Kindergartner Could Make It Smarter appeared first on SingularityHub.
