Tricky Clicks: 'Dark Patterns' Flood the Web
When potential customers visit the online resale store ThredUp, messages on the screen regularly tell them just how much other users of the site are saving.
"Alexandra from Anaheim just saved $222 on her order" says one message next to an image of a bright, multicolored dress. It's a common technique on shopping websites, intended to capitalize on people's desire to fit in with others and to create a "fear of missing out."
But "Alexandra from Anaheim" did not buy the dress. She does not exist. Instead, the website's code pulled combinations from a preprogrammed list of names, locations and items and presented them as actual recent purchases.
The fake messages are an example of "dark patterns," devious online techniques that manipulate users into doing things they might not otherwise choose to.
Sometimes, the methods are clearly deceptive, as with ThredUp, but often they walk a fine line between manipulation and persuasion: Think of the brightly colored button that encourages you to agree to a service, while the link to opt out is hidden in a drop-down menu.
Web designers and consumers have been highlighting examples of dark patterns online since Harry Brignull, a user-experience consultant in Britain, coined the term in 2010. But interest in the tools of online influence has intensified in the past year, amid a series of high-profile revelations about Silicon Valley companies' handling of people's private information. An important element of that discussion is the notion of consent: what users are agreeing to do and share online, and how far businesses can go in leading them to make decisions.
The prevalence of dark patterns across the web is unknown, but in a study released last month, researchers from Princeton University began to quantify the phenomenon, focusing first on retail companies. The study is the first to systematically examine a large number of sites. The researchers developed software that automatically scanned more than 10,000 sites and found that more than 1,200 of them used techniques that the authors identified as dark patterns, including ThredUp's fake notifications.
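Automated detection of this kind can be illustrated with a toy text scan. This is not the Princeton team's actual tooling, which crawled live sites; it is a minimal sketch, assuming one simple heuristic: flagging page text that matches a common activity-message template such as "Name from City just saved $N".

```python
import re

# Toy pattern for fake activity messages of the form seen on ThredUp:
# a capitalized name, "from", a capitalized place, and a savings claim.
ACTIVITY_PATTERN = re.compile(
    r"\b[A-Z][a-z]+ from [A-Z][a-z]+ just saved \$\d+"
)

def looks_like_activity_message(page_text: str) -> bool:
    """Return True if the text contains a social-proof-style
    activity notification matching the template above."""
    return bool(ACTIVITY_PATTERN.search(page_text))
```

A real study would need many such patterns plus crawling infrastructure, but the core idea is pattern-matching page content at scale.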
The report coincides with discussions among lawmakers about regulating technology companies, including through a bill proposed in April by Sens. Deb Fischer, R-Neb., and Mark Warner, D-Va., that is meant to limit the use of dark patterns by making some of the techniques illegal and giving the Federal Trade Commission more authority to police the practice.
Artificial Intelligence Takes Over the Boss's Role
When Conor Sprouls, a customer service representative in the call center of insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he's doing.
Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.
Sound sleepy? The software displays an "energy cue," with a picture of a coffee cup. Not empathetic enough? A heart icon pops up.
For decades, people have fearfully imagined armies of hyper-efficient robots invading offices and factories, gobbling up jobs once done by humans. But in all of the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility that it will replace the bosses, too.
Sprouls and the other call center workers at his office in Warwick, Rhode Island, still have plenty of human supervisors. But the software on their screens — made by Cogito, an A.I. company in Boston — has become a kind of adjunct manager, always watching them. At the end of every call, Sprouls' Cogito notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the Cogito window by minimizing it, the program notifies his supervisor.
Cogito is one of several A.I. programs used in call centers and other workplaces. The goal, according to Joshua Feast, Cogito's chief executive, is to make workers more effective by giving them real-time feedback.
"There is variability in human performance," Feast said. "We can infer from the way people are speaking with each other whether things are going well or not."
The goal of automation has always been efficiency, but in this new kind of workplace, A.I. sees humanity itself as the thing to be optimized. Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who don't meet their targets, as The Verge uncovered this year.
(Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.) IBM has used Watson, its A.I. platform, during employee reviews to predict future performance and claims it has a 96% accuracy rate.
Then there are the startups. Cogito, which works with large insurance companies like MetLife and Humana as well as financial and retail firms, says it has 20,000 users. Percolata, a Silicon Valley company that counts Uniqlo and 7-Eleven among its clients, uses in-store sensors to calculate a "true productivity" score for each worker, and rank workers from most to least productive.