Taking Stock: AI and all that
OpenAI + Jony Ive | 2 paths for AI | AlphaFold | AI for medicine | AI for climate | AI for Nigerian schoolchildren | AI in suspect health papers | Rational livestock-sourced foods
Sam Altman and Jony Ive will force A.I. into your life—Kyle Chayka
The founder of OpenAI and the designer behind the iPhone are teaming up on a gadget that they promise to ship out “faster than any company” ever has. What could go wrong?
‘Last Wednesday, OpenAI announced that it was acquiring a company called io, an artificial-intelligence-forward product-development firm co-founded, last year, by Jony Ive, the vastly influential designer known for his work with Steve Jobs at Apple. Ive led the designs of the original iMac, the iPad, and the Apple Watch, among other era-defining products. Then, in 2019, he left Apple to start his own design firm called LoveFrom. The news of his move to OpenAI felt something like learning that LeBron James was joining the Miami Heat: Ive had become synonymous with Apple’s success, perhaps second only to Jobs. Now, after a period of independence, he was choosing a new team.
‘The announcement of the deal with OpenAI—for a reported $6.5 billion in OpenAI equity—came via a press release, featuring a rather cuddly portrait of Ive with OpenAI’s C.E.O. and co-founder, Sam Altman (shot by the British fashion photographer Craig McDean) and a faux-casual videotaped interview session between the two at San Francisco’s Cafe Zoetrope. In it, Altman describes “a family of devices that would let people use A.I. to create all sorts of wonderful things,” enabled by “magic intelligence in the cloud.” The symbolism of the partnership was clear: Altman is the new Jobs, and together he and Ive promise to create the next ur-device, a personal technology that will reshape our lives just as the iPhone did. Once it’s ready, they say, they’ll ship a hundred million devices “faster than any company” ever has.
‘We don’t know what it will look like just yet, but Altman swears that it will be “the coolest piece of technology that the world will have ever seen.” Ming-chi Kuo, a respected analyst of Apple’s Chinese manufacturing, posted on X that the product is planned to be “as compact and elegant as an iPod Shuffle” and that it will have “cameras and microphones for environmental detection.” It might resemble other early A.I. devices announced or launched in the past year, such as Friend, another pendant-like chatbot companion; Humane, an A.I. pin with a laser projector; or Rabbit, a small handheld gadget. . . .
‘Generative A.I. has already been integrated into many of our daily digital experiences, whether we want it there or not. iPhones now summarize text threads using A.I. and allow users to generate custom emojis. Google recently announced an “AI Mode” that it intends to supplant its traditional search box with, a development that threatens to slow open-web traffic down to a trickle. Meta’s “AI Glasses,” a collaboration with Ray-Ban, integrate voice chatting and live translation with the company’s A.I. assistant. And chatbots with distinct personalities, like Replika and Character.ai, are becoming increasingly popular as they get better at mimicking human connection.
‘Perhaps Altman and Ive’s machine will mingle all of these functionalities: it might listen to and interpret the sounds around you; it might respond with predictive text, delivered to you instantaneously and in a customizable tone; and it might become your main avenue for accessing information, like a personal concierge. It will reportedly not attempt to supplant the other technologies you depend on: according to the Wall Street Journal, Altman described it as a kind of third device, meant to work within an ecosystem that includes your laptop and smartphone. . . .’
Altman and Ive are positioning their device as a solution to screen fatigue. . .
Speculative mockups online imagine an A.I. companion device that looks simple, like a rounded metal amulet—it would be Ive’s style to make the design approachable yet austere. . . .
Watch Sam Altman and Jony Ive discuss their deal, and trade glowing compliments (for each other and for San Francisco) in (of course) Francis Ford Coppola's cozy San Francisco café.
Two Paths for AI—Joshua Rothman
The technology is complicated, but our choices are simple: we can remain passive, or assert control.
‘Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He’d become convinced that the company wasn’t prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in “alignment,” he told me—the suite of techniques used to insure that A.I. acts in accordance with human commands and values—were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn’t control. . . . He’d concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared.
‘Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” In it, Kapoor and Narayanan, who study technology’s integration with society, advanced views that were diametrically opposed to Kokotajlo’s. They argued that many timelines of A.I.’s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world’s inherent complexity, even powerful A.I. would change it only slowly. . . .
‘Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published “AI 2027,” a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which “superintelligent” A.I. systems either dominate or exterminate the human race by 2030. It’s meant to be taken seriously, as a warning about what might really happen. Meanwhile, Kapoor and Narayanan, in a new paper titled “AI as Normal Technology,” insist that practical obstacles of all kinds—from regulations and professional standards to the simple difficulty of doing physical things in the real world—will slow A.I.’s deployment and limit its transformational potential. While conceding that A.I. may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain “normal”—that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision—for the foreseeable future. . . .
Which is it: business as usual or the end of the world?
‘. . . When experts get together to make a unified recommendation, it’s hard to ignore them; when they divide themselves into duelling groups, it becomes easier for decision-makers to dismiss both sides and do nothing. Currently, doing nothing appears to be the plan. . . . We need to make sense of the safety discourse now, before the game is over. . . .
‘There are always trade-offs. If you aim for reliable, levelheaded conservatism, you risk downplaying unlikely possibilities; if you bring imagination to bear, you might dwell on what’s interesting at the expense of what’s likely. . . .
‘“AI 2027” is imaginative, vivid, and detailed. It “is definitely a prediction,” Kokotajlo told me recently, “but it’s in the form of a scenario, which is a particular kind of prediction.” . . .
[T]he bottom line, Kokotajlo said, is that, ‘more likely than not, there is going to be an intelligence explosion, and a crazy geopolitical conflict over who gets to control the A.I.s.’
‘It’s the details of that “intelligence explosion” that we need to follow. The scenario in “AI 2027” centers on a form of A.I. development known as “recursive self-improvement,” or R.S.I., which is currently largely hypothetical. In the report’s story, R.S.I. begins when A.I. programs become capable of doing A.I. research for themselves (today, they only assist human researchers); these A.I. “agents” soon figure out how to make their descendants smarter, and those descendants do the same for their descendants, creating a feedback loop. This process accelerates as the A.I.s start acting like co-workers, trading messages and assigning work to one another, forming a “corporation-within-a-corporation” that repeatedly grows faster and more effective than the A.I. firm in which it’s ensconced. Eventually, the A.I.s begin creating better descendants so quickly that human programmers don’t have time to study them and decide whether they’re controllable.
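The arithmetic of that feedback loop is simple enough to sketch. Below is a toy simulation of the dynamic "AI 2027" describes, with every number invented for illustration: each generation of A.I. speeds up the research that produces the next, and everything hinges on whether a ceiling exists.

```python
# Toy model of the "recursive self-improvement" loop described above:
# each AI generation speeds up the research that builds the next one.
# All parameters are invented for illustration, not drawn from the report.

def simulate_rsi(generations=10, human_speed=1.0, gain_per_gen=1.5, ceiling=None):
    """Return the cumulative research pace after each generation.

    human_speed  -- baseline pace of purely human research
    gain_per_gen -- multiplier each AI generation applies to that pace
    ceiling      -- optional hard limit (a stand-in for the "speed limits"
                    Kapoor and Narayanan argue for); None means unbounded
    """
    speed = human_speed
    history = []
    for gen in range(1, generations + 1):
        speed *= gain_per_gen              # descendants improve their descendants
        if ceiling is not None:
            speed = min(speed, ceiling)    # progress saturates at the limit
        history.append((gen, speed))
    return history

for gen, speed in simulate_rsi(ceiling=None):
    print(f"gen {gen}: research running at {speed:.1f}x human pace")
```

Run with ceiling=None, the pace explodes geometrically; give it any finite ceiling and growth plateaus. That, in miniature, is the disagreement between the two reports.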
‘Seemingly every science-fiction novel ever written about A.I. suggests that implementing recursive self-improvement is a bad idea. The big A.I. companies identify R.S.I. as risky, but don’t say that they won’t pursue it; instead, they vow to strengthen their safety measures if they head in that direction. At the same time, if it works, its economic potential could be extraordinary. The pursuit of R.S.I. is “definitely a choice that people are eager to make in these companies,” Kokotajlo said. “It’s the plan. OpenAI and Anthropic, their plan is to automate their own jobs first.”
‘Could this type of R.S.I. work? . . . If R.S.I. took hold, would its progress hit a ceiling, or continue until the advent of “artificial superintelligence”—a level of intelligence that exceeds what human minds are capable of? (“It would be a very strange coincidence if the limit on intelligence happened to be just barely above the human range,” Kokotajlo said.)
‘The possibilities compound. Would superintelligence-driven innovation inspire a militarized arms race? Could superintelligent A.I.s end up manipulating or eliminating us while pursuing their own inscrutable ends? (In “AI 2027,” they use up the Earth’s resources while conducting scientific research we’re not smart enough to understand.) Or, in a happier development, might they solve the alignment problem for us, either domesticating themselves or becoming benevolent gods, depending on your point of view?
‘No one really knows for sure. . . .
‘Unlike “AI 2027,” “AI as Normal Technology” has an East Coast sensibility. . . . Narayanan and Kapoor aren’t too concerned about superintelligence or a possible intelligence explosion. They believe that A.I. faces “speed limits” that will prevent hyper-rapid progress, and argue that, even if superintelligence is possible, it will take decades to invent, giving us plenty of time to pass laws, institute safety measures, and so on. To some extent, the speed limits they discern have to do with A.I. in particular—they flow from the high cost of A.I. hardware, the dwindling supply of training data, and the like. But Kapoor and Narayanan also think they’re inherent to technology in general, which typically changes the world more slowly than people predict.
‘The understandable focus of A.I. researchers on “intelligence,” Kapoor and Narayanan argue, has been misleading. A harsh truth is that intelligence alone is of limited practical value. In the real world, what matters is power—“the ability to modify one’s environment.” They note that, in the history of innovation, many technologies have possessed astonishing capabilities but failed to deliver much power to their inventors or users. It’s incredible, for instance, that some cars can drive themselves. But, in the United States, driverless cars are confined to a handful of cities and operated, as robo-taxis, by a small number of companies. The technology is capable, but not powerful. It will probably transform transportation—someday. . . .
‘New inventions take a long time to “diffuse” through society, from labs outward. “AI 2027” entertains the possibility of “cures for most diseases” arriving as soon as 2029. But, according to Kapoor and Narayanan’s view, even if the intellectual work of creating those cures could be rapidly accelerated through A.I., we would still have to wait a long time before enjoying them. . . . “My favorite example is Moderna,” Kapoor told me, referring to the pharmaceutical company. After Chinese researchers sequenced the genome of SARS-CoV-2, the virus which causes COVID-19, it took Moderna “less than a week to come up with the vaccine. But then it took about a year to roll it out.” Perhaps A.I. could design vaccines even faster—but clinical trials, which depend on human biological processes, simply take time. . . .
‘The world, in this view, is already a pretty well-regulated place—and artificial intelligence will have to be integrated slowly into its web of rules. One question to ask is, Do we believe that those in charge of A.I. will have to follow the rules? Kapoor and Narayanan note “one important caveat” to their analysis: “We explicitly exclude military AI . . . as it involves classified capabilities and unique dynamics that require a deeper analysis.” “AI 2027,” meanwhile, is almost entirely focussed on the militarization of artificial intelligence, which unfolds quickly once its defense implications (“What if AI undermines nuclear deterrence?”) make themselves known. The two reports, taken together, suggest that we should keep a close watch on military applications of A.I. “AI as Normal Technology,” for its part, offers concrete advice for those in charge in many areas of society. Don’t wait, passively, for A.I. firms to “align” their models. Instead, start monitoring the use of A.I. in your field. Find ways to track evidence of its risks and failures. And shore up, or create, rules that will make people and institutions more resilient as the technology spreads. . . .
‘A lot of us may soon find ourselves working on cognitive factory floors. Whatever we do, we could be doing it alongside, or with, machines. Since the machines can automate some of our thinking, it will be tempting to take our hands off the controls. But in such a factory, if a workplace accident occurs, or if a defective product is sold, who will be accountable? Conversely, if the factory is well run, and if its products are delightful, then who will get the credit? . . .
‘It’s only superficially that artificial intelligence seems to relieve us of the burdens of agency. In fact, A.I. challenges us to recognize that, at the end of the day, we’ll always be in charge.’
newyorker.com | @NewYorker
Science: A protein-predicting machine—Bryan Walsh
‘Whenever anyone asks me about an unquestionably good use of AI, I point to one thing: AlphaFold. After all, how many other AI models have won their creators an actual Nobel Prize?
‘AlphaFold, which was developed by the Google-owned AI company DeepMind, is an AI model that predicts the 3D structures of proteins based solely on their amino acid sequences. That’s important because scientists need to predict the shape of a protein to better understand how it might function and how it might be used in products like drugs.
‘That’s known as the “protein-folding problem”—and it was a problem because while human researchers could eventually figure out the structure of a protein, it would often take them years of laborious work in the lab to do so. AlphaFold, through machine-learning methods I couldn’t explain to you if I tried, can make predictions in as little as five seconds, with accuracy that is almost as good as gold-standard experimental methods.
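For readers who want to poke at it, DeepMind and EMBL-EBI host precomputed predictions for hundreds of millions of proteins in the public AlphaFold Protein Structure Database. Below is a minimal sketch of fetching one; the endpoint and JSON field names reflect the database's public API as I understand it from alphafold.ebi.ac.uk, so verify them before building on this.

```python
# Minimal sketch: pull AlphaFold's precomputed prediction for one protein
# from the public AlphaFold Protein Structure Database (EMBL-EBI).
# Endpoint and field names are assumptions from the public API docs.
import requests

UNIPROT_ACCESSION = "P69905"  # human hemoglobin subunit alpha, as an example

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ACCESSION}",
    timeout=30,
)
resp.raise_for_status()
entry = resp.json()[0]  # the API returns a list of model entries

# Download the predicted 3D structure as a PDB file for use in any viewer.
pdb = requests.get(entry["pdbUrl"], timeout=30)
with open(f"{UNIPROT_ACCESSION}.pdb", "wb") as f:
    f.write(pdb.content)

print("Saved prediction for", entry.get("uniprotDescription", UNIPROT_ACCESSION))
```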
‘By speeding up a basic part of biomedical research, AlphaFold has already managed to meaningfully accelerate drug development in everything from Huntington’s disease to antibiotic resistance. And Google DeepMind’s decision last year to open source AlphaFold3, its most advanced model, for non-commercial academic use has greatly expanded the number of researchers who can take advantage of it.’
link.vox.com | @voxdotcom | @bryanrwalsh
Medicine: The AI will hear you now—Bryan Walsh
‘You wouldn’t know it from watching medical dramas like The Pitt, but doctors spend a lot of time doing paperwork—two hours of it for every one hour they actually spend with a patient, by one count. Finding a way to cut down that time could free up doctors to do actual medicine and help stem the problem of burnout.
‘That’s where AI is already making a difference. As the Wall Street Journal reported this week, health care systems across the country are employing “AI scribes”—systems that automatically capture doctor-patient discussions, update medical records, and generally automate as much as possible around the documentation of a medical interaction. In one pilot study employing AI scribes from Microsoft and a startup called Abridge, doctors cut back daily documentation time from 90 minutes to under 30 minutes.
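Under the hood, these products are a pipeline: capture audio, transcribe it, then have a language model draft a structured note for the clinician to review. The sketch below shows only that shape; both model calls are stubs, since the actual Microsoft and Abridge APIs aren't described in the article.

```python
# Shape of the "AI scribe" pipeline the article describes:
# ambient audio -> speech-to-text -> LLM -> structured clinical note.
# The two model calls are placeholders, not any vendor's real API.

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text service."""
    return ("Doctor: What brings you in today? "
            "Patient: A cough for two weeks, worse at night.")

def draft_note(transcript: str) -> str:
    """Placeholder for an LLM call that turns a raw transcript into a
    SOAP-style note (Subjective, Objective, Assessment, Plan)."""
    prompt = ("Summarize this visit as a SOAP note:\n" + transcript)
    # In a real deployment, `prompt` would go to the scribe vendor's model.
    return "S: Cough x2 weeks, nocturnal. O: ... A: ... P: ..."

note = draft_note(transcribe("visit_audio.wav"))
print(note)  # the doctor edits and approves; the AI never signs the chart
```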
‘Not only do ambient-listening AI products free doctors from much of the need to make manual notes, but they can eventually connect new data from a doctor-patient interaction with existing medical records and ensure connections and insights on care don’t fall between the cracks. “I see it being able to provide insights about the patient that the human mind just can’t do in a reasonable time,” Dr. Lance Owens, regional chief medical information officer at University of Michigan Health, told the Journal.’
link.vox.com | @voxdotcom | @bryanrwalsh
Climate: High tech for the very poor—Bryan Walsh
‘A timely warning about a natural disaster can mean the difference between life and death, especially in already vulnerable poor countries. That is why Google Flood Hub is so important.
‘An open-access, AI-driven river-flood early warning system, Flood Hub provides seven-day flood forecasts for 700 million people in 100 countries. It works by marrying a global hydrology model that can forecast river levels even in basins that lack physical flood gauges with an inundation model that converts those predicted levels into high-resolution flood maps. This allows villagers to see exactly what roads or fields might end up underwater.
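That two-stage design is easy to caricature in a few lines: forecast river levels, compare them against a flood threshold, and turn exceedances into lead time. The toy sketch below uses invented numbers and a single made-up threshold; the real system's hydrology and inundation models are vastly more sophisticated.

```python
# Toy illustration of the two-stage design described above: a hydrology
# model forecasts river levels, and an "inundation" step turns levels
# into an actionable warning. All numbers here are invented.

FLOOD_STAGE_M = 4.0  # river level (metres) at which this toy basin floods

# Pretend 7-day river-level forecast for an ungauged basin (model output).
forecast = {f"day {d}": level
            for d, level in enumerate([2.1, 2.8, 3.5, 4.3, 4.9, 4.1, 3.2], start=1)}

# Flag the days on which the predicted level tops flood stage.
flood_days = [day for day, level in forecast.items() if level >= FLOOD_STAGE_M]

if flood_days:
    print(f"Flood warning for {', '.join(flood_days)}: forecast level tops "
          f"{FLOOD_STAGE_M} m; alerts (or anticipatory cash aid) can go out now.")
else:
    print("No flooding expected in the next 7 days.")
```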
‘Flood Hub, to my mind, is one of the clearest examples of how AI can be used for good for those who need it most. Though many rich countries like the US are included in Flood Hub, they mostly already have infrastructure in place to forecast the effects of extreme weather. (Unless, of course, we cut it all from the budget.) But many poor countries lack those capabilities. AI’s ability to drastically reduce the labor and cost of such forecasts has made it possible to extend those lifesaving capabilities to those who need it most.
‘One more cool thing: The NGO GiveDirectly—which provides direct cash payments to the global poor—has experimented with using Flood Hub warnings to send families hundreds of dollars in cash aid days before an expected flood to help them prepare for the worst. As the threat of extreme weather grows, thanks to climate change and population movement, this is the kind of cutting-edge philanthropy we need.’
link.vox.com | @voxdotcom | @bryanrwalsh
AI for good—if we let it—Bryan Walsh
‘Even the seemingly best applications for AI can come with drawbacks.
‘The same kind of AI technology that allows AlphaFold to help speed drug development could conceivably be used one day to more rapidly design bioweapons. AI scribes in medicine raise questions about patient confidentiality and the risk of hacking. And while it’s hard to find fault in an AI system that can help warn poor people about natural disasters, the lack of access to the internet in the poorest countries can limit the value of those warnings — and there’s not much AI can do to change that.
‘But with the headlines around AI leaning so apocalyptic, it’s easy to overlook the tangible benefits AI already delivers. Ultimately AI is a tool. A powerful tool, but a tool nonetheless. And like any tool, what it will do — bad and good — will be determined by how we use it.’
link.vox.com | @voxdotcom | @bryanrwalsh
Can AI be trusted in schools?—The Economist
A pilot programme in Nigeria helped students make two years’ worth of progress in six weeks
‘. . . In the rich world, AI and other e-learning tools have yet to prove better than traditional teaching. . . . But in poorer countries, where classrooms are overcrowded and teachers scarce, low-cost teaching aids provide a real opportunity. One in six children across the world lives in extreme poverty (on less than $2.15 per day). In low- and middle-income countries an estimated 70% of ten-year-olds cannot read a simple story in any language. In sub-Saharan Africa the figure is closer to 90%. A working paper published in May by the World Bank suggests that AI may offer a partial solution.
‘The study followed 422 secondary-school students in Nigeria who took part in twelve 90-minute after-school sessions over six weeks. Pairs of pupils, supported by a teacher, interacted with Microsoft Copilot, a chatbot based on GPT-4, to improve their English grammar, vocabulary and writing skills.
‘The results were striking: by the end of the six weeks the children in the AI “treatment” group had made progress equivalent to nearly two years’ worth of their regular schooling, according to Martín De Simone, who led the study. Overall, the AI group’s test scores were about 10% higher than the control group’s. In end-of-year exams—which covered topics beyond the chatbot’s material—they still did better than their peers. (The final tests were done with pen and paper; the results reflected the children’s actual learning, not their use of the tool.)
‘This might be, in part, a reflection of how poor the baseline is. . . . In countries with better-resourced schools, the same intervention with AI might yield more modest results.
‘The findings also come with other caveats. At $48 per student, the programme was relatively cheap, though still more than the monthly minimum wage in Nigeria. The study could not fully isolate the effect of the chatbot from that of any extra study time with a teacher. And scaling up would require a stable internet connection and access to devices—neither of which is guaranteed. Some education reforms, although controversial, have succeeded by tightly standardising lesson plans without the need for extra technology.
‘Even so, the pilot programme outperformed 80% of more than 230 other education programmes in low- and middle-income countries. That should interest governments and donors seeking to improve basic skills in struggling school systems.’
economist.com | @TheEconomist
AI linked to boom in suspect health papers—Nature Briefing
‘A new study found that more than 300 biomedical papers published across 147 journals followed the exact same template: take a single variable from the AI-ready US National Health and Nutrition Examination Survey — an open data set of health records — and then associate it with a complex disorder such as depression or heart disease, ignoring that these have multiple contributing factors. “We have a sudden explosion in publication rates [of papers] that are extremely formulaic that could easily have been generated by large language models,” says biomedical scientist Matt Spick. The associations in many of the papers did not hold up to statistical scrutiny and seemed to contain cherry-picked data, raising concern that public databases might be used as fuel for an AI-driven increase in low-quality analysis.’
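The statistical failure mode here is old-fashioned data dredging, and it is easy to demonstrate without any real health data. The simulation below (pure noise, no NHANES records) screens a few hundred random "exposures" against a random outcome; at the conventional p < 0.05 threshold, roughly five per cent look publishable by chance alone. (It uses statistics.correlation, available in Python 3.10+.)

```python
# Why formulaic single-variable papers are suspect: screen enough random
# variables against an outcome and some will look "significant" by chance.
# Pure simulation -- no real survey data is touched.
import random
import statistics

random.seed(0)
N_SUBJECTS, N_VARIABLES = 500, 300

# One made-up "outcome" per subject, e.g. a depression score. Pure noise.
outcome = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]

false_hits = 0
for _ in range(N_VARIABLES):                                    # one candidate "paper" per variable
    exposure = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]  # also pure noise
    r = statistics.correlation(exposure, outcome)
    t = r * ((N_SUBJECTS - 2) / (1 - r * r)) ** 0.5             # t-statistic for H0: r = 0
    if abs(t) > 1.96:                                           # roughly p < 0.05, two-sided
        false_hits += 1

print(f"{false_hits} of {N_VARIABLES} pure-noise variables look 'significant'")
# Expect about 15: a shelf of publishable-looking associations from nothing.
```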
nature.com | @Nature
A rational balance to livestock-sourced food products: Integrating sustainability, health, and economic viability—Lindsay Falvey
‘The global debate surrounding meat consumption, animal welfare, and the environmental impact of livestock production has become more urgent as the world seeks sustainable solutions to climate change, zoonotic diseases and food security. The challenge includes the broader ethical question of how we can align humanity’s practices with a rational framework for universal well-being. . . .
‘Animal-sourced foods are critical sources of essential nutrients, especially in regions where plant-based diets may not meet the nutritional needs of vulnerable populations. . . . In the context of developing countries, where food insecurity and malnutrition remain significant issues, the consumption of small amounts of animal-sourced foods is an effective way to address micronutrient deficiencies. . . .
‘ILRI’s research has shown ways to improve livestock productivity in developing countries through enhanced genetic understanding, feed efficiency, disease control and livestock health management. Such advances improve the nutritional outcomes for rural populations while also fostering economic resilience. . . .
‘Achieving a rational balance between the environmental, health, and nutritional roles of livestock requires a multifaceted approach. In developed countries, reducing the overconsumption of meat and transitioning towards more sustainable forms of livestock farming can help mitigate environmental damage and improve the public’s health. If such a change could occur, care would be needed to avoid unnecessary undermining of the social and economic stability of rural communities.
‘In developing countries, improving livestock productivity and addressing health and nutritional challenges are key to human nutritional security. Institutions like ILRI are playing a vital role in this regard, developing climate-resilient livestock systems and improving feed efficiency to boost productivity in smallholder farming systems. . . .
‘Ultimately, the future of meat and livestock production lies in finding a balance that addresses both the needs of people and the planet. This balance will require continued investment in research and innovation, as well as global cooperation to ensure that the most vulnerable human populations are not left behind in the transition towards a more sustainable and equitable food system.’
researchgate.net | @ResearchGate | @LindsayFalvey
Arresting headlines
Regenerating Public Health: Making the case in less than 12 minutes: Malnutrition is our biggest challenge—by Peter Ballerstedt’s ‘Grass Based Health’ newsletter
More and more parents around the world prefer girls to boys: The bias in favour of boys is shrinking in developing countries even as a preference for girls emerges in the rich world. For perhaps the first time in humanity’s long history, in many parts of the world it is boys who are increasingly seen as a burden and girls who are a boon.—The Economist