The Disconnect Between Biden's Popular Policies and His Unpopularity
Over the past few years, many Democrats argued that there was a simple secret to electoral success: enact popular legislation.
President Joe Biden tried to make that theory a reality. He enacted a big stimulus plan and a bipartisan infrastructure bill, and he has made progress toward an ambitious $2 trillion spending bill, which has now passed the House.
But so far, popular policies haven't made for a popular president. His approval ratings have slipped into the mid-40s, even though virtually all of his legislation commands majority support in the same surveys. In poll after poll, voters seem to give Biden no credit for his agenda. They say he hasn't accomplished much. They even say he hasn't helped them personally, even though he sent direct stimulus payments to most households and even more to parents.
The disconnect between Biden's popular policies and his personal unpopularity is a little hard to understand. After all, voters do care about the issues. They've proved it by gradually sorting into ideologically divided parties over the past two decades. And it's clear that presidents can be punished for advancing an unpopular agenda. Just ask Barack Obama about the period after the Affordable Care Act was passed.
But if voters often punish a president for pushing unpopular policies, they rarely seem to reward a president for enacting legislation. Instead, voters seem to reward presidents for presiding over peace and prosperity — in a word, normalcy.
Today, Biden is not seen as presiding over the long-promised return to normalcy. Maybe that will change in the months ahead. But his policy agenda is unlikely to do much for his approval rating so long as Americans do not believe it responds to the most immediate issues facing the country.
The predominance of the economy in American public opinion is one of those basic and seemingly obvious findings of political science that is still somewhat hard to fully internalize. That's partly because it's at odds with how most individual voters — and especially politically engaged voters — think about politics. Most people back their party through the worst economic times; even the fastest economic growth wouldn't persuade them to back the president of the other party.
Can a Machine Learn Morality?

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.
Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn't. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not.
Morality, it seems, is as knotty for a machine as it is for humans.
Delphi, which has received more than 3 million visits over the past few weeks, is an effort to address what some see as a major problem in modern AI systems: They can be as flawed as the people who create them.
Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech, despite wide deployment of artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.
A growing number of computer scientists and ethicists are working to address those issues. And the creators of Delphi hope to build an ethical framework that could be installed in any online service, robot or vehicle.
"It's a first step toward making AI systems more ethically informed, socially aware and culturally inclusive," said Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project.
Delphi is by turns fascinating, frustrating and disturbing. It is also a reminder that the morality of any technological creation is a product of those who have built it. The question is: Who gets to teach ethics to the world's machines? AI researchers? Product managers? Mark Zuckerberg? Trained philosophers and psychologists? Government regulators?