Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.
An AI system has taught itself to recognize words using the experiences of a single infant — a breakthrough that could help scientists answer the hotly debated question of how children learn language. The algorithm was trained on 61 hours of video recorded with a head-camera worn by a baby for short periods of time. It was also given transcriptions of words spoken to the child during the recording. The system learnt words such as ‘crib’ and ‘ball’ by building associations between images and words without any other prior knowledge about language. That challenges the theory that babies need some innate knowledge about how language works, says AI researcher Wai Keen Vong, who co-authored the research.
Nature | 5 min read
Reference: Science paper
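For readers curious about the mechanics, the model builds those image–word associations with a contrastive objective: embeddings of video frames and of the words spoken around them are pulled together, while mismatched pairs are pushed apart. A minimal, illustrative sketch of such a loss (toy code with our own names, not the study's implementation) might look like this:

```python
# Toy sketch of a contrastive image-word objective (illustrative, not the study's code):
# co-occurring frame/word pairs are pulled together in embedding space,
# mismatched pairs are pushed apart.
import torch
import torch.nn.functional as F

def contrastive_loss(frame_emb, word_emb, temperature=0.07):
    """frame_emb, word_emb: (batch, dim) embeddings of co-occurring frame/word pairs."""
    frame_emb = F.normalize(frame_emb, dim=-1)
    word_emb = F.normalize(word_emb, dim=-1)
    logits = frame_emb @ word_emb.T / temperature   # similarity of every frame to every word
    targets = torch.arange(len(frame_emb))          # the i-th frame matches the i-th word
    # symmetric cross-entropy: frames should retrieve their own words, and vice versa
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
```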
For nearly half of the world’s population, it’s an election year — and with it come worries about online misinformation such as AI-generated deepfakes. Yet researchers who monitor social media’s political reach find themselves in the worst position they’ve been in for years. For example, X (formerly Twitter) has stopped providing free research access to its data. Many hope that new legislation in Europe will change that; others are exploring workarounds such as interviewing people who use the platforms. “We have to learn how to get insights from more limited sets of data,” says computer scientist Kate Starbird.
Nature | 7 min read
Brain-computer interface company Neuralink has reportedly implanted its ‘brain-reading’ device into a person for the first time. The implant records and decodes individual neurons’ activity, with the aim of allowing a person with severe paralysis to control a device such as a robotic arm. Experts are cautiously excited: this is the first fully wireless system of its kind and it has more brain connections than other devices. But there is frustration about Neuralink’s lack of transparency: there’s little information about the study, and a tweet by the company’s founder, controversial entrepreneur Elon Musk, is the only confirmation that the trial has begun.
Nature | 6 min read
Conversations with ChatGPT can change people’s attitudes about divisive issues, at least a little. More than 3,000 people with varying opinions on climate change and the Black Lives Matter movement were asked to discuss these topics with the chatbot. The conversations moved all groups towards supporting the scientific consensus, but the shift in opinion among those who were initially unsupportive was nearly six percentage points greater. This group also rated their experience as significantly worse — a potential dilemma for chatbot creators, says communication researcher and study co-author Kaiping Chen. “You want to make your user happy, otherwise they’re going to use other chatbots,” she explains. “But if you make them happy, maybe they’re not going to learn much from the conversation.”
Grist | 6 min read
Reference: Scientific Reports paper
Image of the week
‘Roboteryx’ is demonstrating that having feathered forelimbs makes it easier to terrify grasshoppers. The robot is modelled on Caudipteryx, a winged, but flightless, dinosaur. The makers of Roboteryx suggest that the ability to flush out prey insects might be one of the reasons why wings evolved before flight. (The New York Times | 5 min read)
Reference: Scientific Reports paper
Features & opinion
When bioinformatics researcher Hunter Moseley and his colleagues reviewed biochemistry algorithms, they found that three papers had a catastrophic ‘data leakage’ problem: the data used for training and the data used for evaluation were cross-contaminated with duplicated entries. The good news is that the authors of two of the papers had made their data, code and results fully available, so the problem could be found and addressed. The authors of the third paper had not, making it impossible to evaluate their results properly. The lesson, writes Moseley, is about more than knee-jerk retractions of flawed research: in the midst of a data-driven science boom, good reproducibility practices are more important than ever.
Nature | 6 min read
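To picture what such leakage looks like in practice, a check along the following lines flags evaluation entries that also appear in the training set. This is a minimal sketch assuming tabular data in pandas; the function and column names are hypothetical, not taken from the papers Moseley reviewed.

```python
# Minimal leakage check: flag evaluation rows whose identifying columns
# also appear in the training split. Names here are placeholders.
import pandas as pd

def leaked_rows(train: pd.DataFrame, test: pd.DataFrame, key_columns: list) -> pd.DataFrame:
    """Return test rows that duplicate training rows on the given key columns."""
    train_keys = set(map(tuple, train[key_columns].itertuples(index=False)))
    overlap = test[key_columns].apply(tuple, axis=1).isin(train_keys)
    return test[overlap]

# If this returns any rows, evaluation scores partly reflect memorisation of
# training entries rather than genuine generalisation.
```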
Universities are struggling to address the potential risks to academic integrity posed by students’ use of AI tools. Public health lecturer Zheng Feei Ma and dean of learning and teaching Antony Hill have tips for teachers on ethical and informed AI use in the classroom:
• Use AI tools instead of recorded lectures for more interactive asynchronous learning — and focus on developing an active classroom dynamic in the time saved.
• Exploit AI systems’ accuracy limitations and biases to foster students’ critical-thinking skills.
• Engage students in analysing AI-generated counter-arguments.
• Ask students to share their ideas for using AI tools for learning.
Times Higher Education | 5 min read
DeepMind researchers performed a “fantastic feat” with the protein-structure prediction tool AlphaFold2, says computational biologist Jennifer Listgarten. But it might be the only scientific problem that AI could have tackled so successfully. “The most interesting and impactful questions may not yet be formulated at all, let alone in a manner suitable for machine learning,” she explains. Most scientific fields don’t have the huge volumes of data required to train an AI system, and it’s no use asking another algorithm to churn out more. “Fresh information must be injected into the system one way or another for there to be a win,” Listgarten says. “For this, we’ll just need to get back to the bench and do more experiments.”
Nature Biotechnology | 9 min read