On 24 August 1965, Gloria Placente, a 34-year-old resident of Queens, New York, was driving to Orchard Beach in the Bronx. Dressed in shorts and sunglasses, the housewife was looking forward to some quiet time outdoors. But the moment she crossed the Willis Avenue bridge in her Chevrolet Corvair, Placente was surrounded by a dozen patrolmen. There were also 125 reporters, eager to witness the launch of the New York City police department's Operation Corral – an acronym for Computer Oriented Retrieval of Auto Larcenists. Fifteen months earlier, Placente had driven through a red light and failed to answer the summons, an offence that Corral was about to punish with a heavy dose of the techno-Kafkaesque. It worked like this: a police car stationed at one end of the bridge radioed the licence plates of oncoming cars to a teletypist, who fed them to a Univac 490 computer, a $500,000 toy (some $3.5m in today's dollars) on loan from Sperry Rand. The computer checked the numbers against a database of 110,000 cars that were either stolen or belonged to known offenders. In case of a match, the teletypist would alert a second patrol car at the bridge's other exit. On average, it took just seven seconds.

Compared with the impressive police gear of today – automatic number-plate recognition, CCTV cameras, GPS trackers – Operation Corral looks quaint. And the possibilities for surveillance will only expand. European officials have envisioned that cars entering the European market could carry a built-in mechanism allowing the police to stop a vehicle remotely. Speaking earlier this year, Jim Farley, a senior Ford executive, acknowledged that "we know everyone who breaks the law, we know when you're doing it. We have GPS in your car, so we know what you're doing. By the way, we don't supply that data to anyone." That last bit didn't sound very reassuring, and Farley later retracted his remarks.

As both cars and roads get "smart," they promise nearly perfect, real-time law enforcement. Instead of waiting for drivers to break the law, the authorities can simply prevent the crime. Thus a 50-mile stretch of the A14 between Felixstowe and Rugby is to be fitted with numerous sensors that would monitor traffic by sending signals to and from mobile phones in moving vehicles. The telecoms watchdog Ofcom envisions that such smart roads, connected to a centrally controlled traffic system, could automatically impose variable speed limits to smooth the flow of traffic, as well as direct cars "along diverted routes to avoid the congestion and even [manage] their speed."

Other gadgets – from smartphones to smart glasses – promise even more safety and security. In April, Apple patented technology that deploys sensors inside the smartphone to analyse whether the car is moving and whether the person using the phone is driving; if both conditions are met, it simply blocks the phone's texting feature. Intel and Ford are working on Project Mobii, a face-recognition system that, should it fail to recognise the driver's face, would not only prevent the car from being started but also send the picture to the car's owner (bad news for teenagers).
The car is emblematic of transformations in many other domains, from smart environments for "ambient assisted living," where carpets and walls detect when someone has fallen, to various master plans for the smart city, where municipal services dispatch resources only to the areas that need them. Thanks to sensors and internet connectivity, the most banal everyday objects have acquired tremendous power to regulate behaviour. Even public toilets are ripe for sensor-based optimisation: the Safeguard Germ Alarm, a smart soap dispenser developed by Procter & Gamble and used in some public restrooms in the Philippines, has sensors monitoring the doors of each stall. Once you leave the stall, an alarm starts ringing that can only be stopped by pressing the soap-dispensing button.

In this context, Google's latest plan to push its Android operating system onto smart watches, smart cars, smart thermostats and, one suspects, smart everything looks rather ominous. In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.

This "smartification" of everyday life follows a familiar pattern: there is primary data – a list of what is in your smart fridge and your shopping basket – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: hence smart mattresses (one recent model promises to track your breathing and heart rate, as well as how much you move during the night) and smart utensils that offer nutritional advice.

In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – "evidence-based" and "results-oriented," technology is here to help.

This new type of governance has a name: algorithmic regulation. In so far as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term "web 2.0"), has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the merits of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and for the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google cannot write rules to cover all the ingenious inventions of professional spammers. What it can do, though, is teach the system what makes a good rule, and spot when it is time to find another rule for finding a good rule – and so on. An algorithm can do this, but it is the constant, real-time feedback from users that allows the system to counter threats its designers never envisioned. And it is not just spam: your bank uses similar methods to detect credit-card fraud.
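
To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. It is not how Gmail or any production filter actually works – real systems use far more sophisticated statistical models – and every class name, token weight and threshold below is invented for the example. What it shows is the principle the essay describes: the rules are not written in advance but continuously rewritten by user reports.

```python
from collections import defaultdict

class FeedbackSpamFilter:
    """Toy spam filter that learns from user feedback instead of fixed rules.

    Hypothetical illustration only: the 'regulation' here is not a list of
    hand-written rules but a set of weights that user reports keep rewriting.
    """

    def __init__(self, threshold=1.0):
        self.weights = defaultdict(float)  # per-token spamminess score
        self.threshold = threshold

    def _tokens(self, message):
        return message.lower().split()

    def score(self, message):
        return sum(self.weights[t] for t in self._tokens(message))

    def is_spam(self, message):
        return self.score(message) >= self.threshold

    def report(self, message, user_says_spam):
        """Real-time feedback: every 'mark as spam' / 'not spam' click nudges
        the weights, so the filter adapts to tricks its designers never saw."""
        delta = 0.5 if user_says_spam else -0.5
        for t in self._tokens(message):
            self.weights[t] += delta

if __name__ == "__main__":
    f = FeedbackSpamFilter()
    f.report("win a free prize now", user_says_spam=True)
    f.report("lunch at noon tomorrow?", user_says_spam=False)
    print(f.is_spam("claim your free prize"))  # True, learned from feedback
    print(f.is_spam("see you at lunch"))       # False
```

The designer never enumerates what spam looks like; the behaviour of the system is set by the stream of feedback – which is precisely what makes the approach attractive as a model of regulation.
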
In his essay, O'Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on "a deep understanding of the desired outcome" (spam is bad!) and periodically check whether the algorithms are actually working as expected (are too many legitimate emails being flagged as spam?). O'Reilly presents such technologies as novel and unique – we are, after all, living through a digital revolution – but the principle behind "algorithmic regulation" would be familiar to the founders of cybernetics, a discipline that even in its name (it means "the science of governance") hints at its grand regulatory ambitions. This principle, which allows a system to maintain its stability by constantly learning and adapting to changing circumstances, is what the British psychiatrist Ross Ashby, one of the founding fathers of cybernetics, called "ultrastability."

To illustrate this, Ashby designed the homeostat. This clever device consisted of four interconnected RAF bomb control units – mysterious black boxes with many knobs and switches – that were sensitive to voltage fluctuations. If one unit stopped working properly – say, because of an unexpected external disturbance – the other three would rewire and regroup themselves, compensating for its malfunction and keeping the system's overall output stable.
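
The homeostat was an electromechanical device, but the principle it embodied can be caricatured in a few lines of code. The sketch below is a toy illustration of ultrastability, not a model of Ashby's actual circuitry, and all of its constants and variable names are invented for the example: a regulator keeps an "essential variable" within safe bounds, and whenever a disturbance pushes the variable out of bounds it re-draws its own parameter at random until order returns.

```python
import random

# Toy sketch of ultrastability (not Ashby's actual homeostat circuit).
# A regulator holds an essential variable x near zero. It does not know how
# the environment will respond to its actions; whenever x escapes the safe
# bounds, the regulator blindly re-draws its own parameter and tries again,
# until a configuration is found that restores stability.

random.seed(42)

LIMIT = 2.0      # safe bounds for the essential variable
env = 1.0        # how the environment reacts to the regulator's action
gain = -0.5      # regulator parameter, initially well matched to env
x = 1.0          # essential variable, should stay near zero
rewires = 0

for t in range(120):
    if t == 60:                        # unexpected external disturbance:
        env, x = -1.0, 1.5             # the environment flips and x is knocked away
    x = (1.0 + env * gain) * x         # one step of the closed-loop dynamics
    if abs(x) > LIMIT:                 # essential variable out of bounds, so...
        gain = random.uniform(-1, 1)   # ...rewire at random and try again
        x = max(-LIMIT, min(LIMIT, x))
        rewires += 1

print(f"final x = {x:.4f} after {rewires} random rewiring(s)")
```

Nothing in the loop anticipates the particular disturbance; the system only knows when it must rewire itself – which is exactly the property Ashby was after.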

Ashby's homeostat achieved "ultrastability" by constantly monitoring its internal state and cleverly redeploying its spare resources. Like the spam filter, it did not need to specify all possible disturbances – only the conditions for how and when it should be updated and redesigned. This is no trivial departure from how conventional technical systems, with their rigid if-then rules, operate: suddenly there is no need to devise procedures for governing every contingency, for – or so one hopes – algorithms and real-time, immediate feedback can do a better job than inflexible rules out of touch with reality.

Algorithmic regulation could certainly make the enforcement of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people's spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that the authorities know when you spend more than you earn (a toy version of this check is sketched below). Spain has expressed interest in a similar tool.

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from elaborate offshore schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is ideal for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems work as expected, we need to modify O'Reilly's question: for whom do they work? If the answer is the companies selling income-tracking software, it is hardly a democratic success.

With his belief that algorithmic regulation is based on "a deep understanding of the desired outcome," O'Reilly deftly disconnects the means of doing politics from its ends. But the how of politics matters as much as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health and security are "desired outcomes," but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analogue, or between dynamic feedback and static law, that ideological clarity has gone – as if the very choice of how to achieve those "desired outcomes" were apolitical and did not force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of endless feedback loops is so efficient that it transcends politics, the advocates of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O'Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore's leaders might believe they, too, have transcended politics, that does not mean their regime cannot be judged outside the linguistic swamp of efficiency and innovation – by using political rather than economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the how of politics is weakened. Silicon Valley's default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors and feedback loops – all provided by startups.
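
Returning for a moment to the income meter mentioned above: a deliberately simplified, purely hypothetical sketch of that kind of check might look like the following. The real redditometro rests on far richer statistical models and data sources; the Taxpayer fields, the sample figures and the 20% tolerance here are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical income-meter-style check: flag taxpayers whose recorded
# spending is implausibly high for their declared income. The data model
# and the tolerance are invented for this sketch.

@dataclass
class Taxpayer:
    name: str
    declared_income: float    # yearly income reported to the tax authority
    recorded_spending: float  # yearly spending visible in registered purchases

def flag_discrepancies(people, tolerance=0.20):
    """Return taxpayers spending more than (1 + tolerance) x declared income."""
    return [p for p in people
            if p.recorded_spending > p.declared_income * (1 + tolerance)]

people = [
    Taxpayer("A", declared_income=25_000, recorded_spending=24_000),
    Taxpayer("B", declared_income=18_000, recorded_spending=41_000),  # flagged
]

for p in flag_discrepancies(people):
    print(f"{p.name}: spends {p.recorded_spending:,.0f} on a declared {p.declared_income:,.0f}")
```

Even in this toy form, the political question is visible: the check is only as good as the choice of what gets recorded and whose spending is legible to it.
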
Earlier this year, Google's Eric Schmidt even promised that startups would provide the solution to economic inequality: the latter, it seems, can also be "disrupted." And where the innovators and disruptors lead, the bureaucrats follow. The intelligence services embraced solutionism before other government agencies did. Thus they reduced the topic of terrorism from a subject with some connection to history and foreign policy to an informational problem of identifying emerging terrorist threats via constant surveillance. They urged citizens to accept that instability is part of the game, that its root causes can be neither traced nor eliminated, and that the threat can only be pre-empted by out-innovating and out-surveilling the enemy with better means of communication.

Speaking in Athens last November, the Italian philosopher Giorgio Agamben discussed an epochal transformation in the idea of government, "whereby the traditional hierarchical relation between causes and effects is inverted, so that, instead of governing the causes – a difficult and expensive undertaking – governments simply try to govern the effects."

For Agamben, this shift is emblematic of modernity. It also explains why the liberalisation of the economy can coexist with the growing proliferation of control – by means of soap dispensers and remotely managed cars – into everyday life. "If government aims for the effects and not the causes, it will be obliged to extend and multiply control. Causes demand to be known, while effects can only be checked and controlled." Algorithmic regulation is an enactment of this political programme in technological form.

The true politics of algorithmic regulation become visible once its logic is applied to the safety nets of the welfare state. There are no calls to dismantle them, but citizens are nevertheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. "Health… is the opposite of healthcare," he said at a conference in Paris last December. "It's what keeps you out of the healthcare system in the first place." Thus we are invited to start using self-tracking apps and data-sharing platforms and to monitor our own vital signs, symptoms and discrepancies.

This fits nicely with recent policy proposals that seek to encourage healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for housing and council benefits to be linked to claimants' visits to the gym – with the help of smartcards. They might not even be needed: many smartphones already track how many steps we take every day (Google Now, the company's virtual assistant, counts such data automatically and periodically presents it to users, nudging them to walk more).

The possibilities that tracking devices open up have not gone unnoticed by O'Reilly. "You know the way that advertising turned out to be the native business model for the internet?" he asked at a recent conference. "I think that insurance is going to be the native business model for the internet of things." Things do seem to be moving in that direction: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest insurance company in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of "proactive protection."

An insurance company would gladly subsidise the cost of installing yet another sensor in your house – as long as it can automatically alert the fire department, or make the driveway lights flash, when your smoke detector goes off. For now, adopting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach the point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. "We propose 'payment by results', a financial reward for people who become proactive partners in their health, whereby if you, for example, keep your blood sugar levels down, stop smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus," they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help to achieve them, by constantly nagging us to do what is expected.
The implicit assumption of most such reports is that the unhealthy are not only a burden to society but deserve to be punished (fiscally, for now) for failing to take responsibility – for what else could explain their health problems but their personal failings? It is certainly not the power of the food companies, or class differences, or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even keep a close daily eye on one's faeces – as some self-tracking enthusiasts are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of thing a sensor can measure. The devil does not wear data. Social injustices are much harder to track than the everyday lives of the individuals they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to the perpetual electronic guidance of individuals, algorithmic regulation offers us the good old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analogue era – to be solved through data collection – rather than as the inevitable results of economic or ideological conflicts.

However, politics without politics does not mean politics without control or administration. As O'Reilly writes in his essay, "New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes." Thus it is a mistake to think that Silicon Valley wants to rid us of state institutions. Its dream is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive data servers – but the data-obsessed, data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology because its key founding principle is that, although we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O'Reilly's essay is to a 2012 speech entitled "Regulation: Looking Backward, Looking Forward" by Cass Sunstein, the prominent American legal scholar and chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America, which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems – supplanting other policy tools along the way. For all these tracking apps, algorithms and sensors to work, databases need interoperability, which is exactly what such pseudo-humanitarian organisations, with their ardent faith in open data, demand. And when the government proves too slow to move at Silicon Valley's speed, they simply move inside the government. Thus Jennifer Pahlka, the founder of Code for America and an O'Reilly protégée, became deputy chief technology officer of the US government while on a year-long "innovation fellowship" from the White House. Cash-strapped governments welcome such colonisation by technologists, especially if it helps to identify and clean up datasets that can be profitably sold to companies that need such data for advertising purposes.
The recent skirmish over the sale of student and health data in the UK is just a harbinger of the battles to come: once all other public assets have been privatised, data is the next target. For O'Reilly, open data is "a key driver of the measurement revolution." This "measurement revolution" seeks to quantify the efficiency of various social programmes, as if the rationale behind the safety nets that some of them provide were to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens could pursue their life projects relatively undisturbed. That vision did spawn a vast bureaucracy, and the critics of the welfare state on the left – most prominently Michel Foucault – were right to question its disciplining inclinations. Nonetheless, neither perfection nor efficiency was the "desired outcome" of this system. Thus, comparing the welfare state with the algorithmic state on these grounds is misleading.

But we can compare their respective visions of human fulfilment – and the role they assign to markets and the state. Silicon Valley's offer is clear: thanks to ubiquitous feedback loops, we can all become entrepreneurs and take care of our own affairs! As Brian Chesky, the chief executive of Airbnb, told the Atlantic last year: "What happens when everybody is a brand? When everybody has a reputation? Every person can become an entrepreneur." Under this vision, we will all code (for America!) in the morning, drive Uber cars in the afternoon, and rent out our kitchens as restaurants – courtesy of Airbnb – in the evening. As O'Reilly writes of Uber and similar companies, "these services ask every passenger to rate their driver (and drivers to rate their passenger). Drivers who provide poor service are weeded out. Reputation does a better job of ensuring a superb customer experience than any amount of government regulation."

The state behind the "sharing economy" does not wither away; it might be needed to ensure that the reputation accumulated on Uber, Airbnb and other "sharing economy" platforms is fully liquid and transferable, creating a world in which our every social interaction is recorded and assessed, erasing whatever differences still exist between social domains. Someone, somewhere, will eventually rate you as a passenger, a house guest, a student, a patient, a customer. Whether this ranking infrastructure will be decentralised, provided by a giant like Google, or rest with the state is not yet clear, but the overarching goal is to turn reputation into a feedback-friendly social net that could protect truly responsible citizens from the vicissitudes of deregulation.

Admiring the reputation models of Uber and Airbnb, O'Reilly wants governments to adopt them "where there are no demonstrable ill effects." But what counts as an "ill effect" – and how to demonstrate it – is itself a key question that belongs to the how of politics that algorithmic regulation wants to suppress. It is easy to demonstrate "ill effects" if the goal of regulation is efficiency, but what if it is something else? Surely there are some benefits – fewer visits to the psychoanalyst, perhaps – in not having your every social interaction ranked? The demand to demonstrate "results" already presupposes that the goal of policy is to optimise efficiency. However, as long as democracy is irreducible to a formula, its constituent values will always lose this battle: they are much harder to quantify. For Silicon Valley, though, the reputation-obsessed algorithmic state is the new welfare state. If you are honest and hardworking, your online reputation will reflect this, producing a highly personalised safety net.
It is "ultrastable" in Ashby's sense: while the welfare state assumes the existence of the particular social evils it is trying to fight, the algorithmic state makes no such assumptions. Future threats can remain entirely unknowable and yet entirely addressable – at the individual level.

Silicon Valley is certainly not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: do not take one job but several, do not take on debt, rely on your own expertise. It is all about resilience, risk-taking and, as Taleb puts it, "having skin in the game." As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project can even aspire to tame the proliferating threats to human existence – we can only hope to prepare to tackle them individually. "When policy-makers engage in the discourse of resilience," write Reid and Evans, "they do so in terms which aim explicitly at preventing people from conceiving of danger as a phenomenon from which they might seek to free themselves and even, on the contrary, as that to which they must now expose themselves."

What, then, is the progressive alternative? "The enemy of my enemy is my friend" does not work here: just because Silicon Valley is attacking the welfare state does not mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited room for fiscal manoeuvre, since the kind of discretionary spending needed to modernise the welfare state would never be approved by the global financial markets. And it is the rating agencies and the bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today, when the exact boundaries between welfare and security are so blurred. When Google's Android plays so large a role in our everyday life, the government's temptation to govern us through remotely controlled cars and alarm-fitted soap dispensers will be all too great. This will expand the state's hold over areas of life that were previously free from regulation.

With so much data, the government's favourite argument in fighting terror – if only citizens knew as much as we do, they too would impose all these legal exceptions – easily extends to other domains, from health to climate change. Consider a recent academic paper that used Google search data to study obesity patterns in the US and found a significant correlation between search keywords and body mass index levels. "The results suggest great promise for the idea of monitoring obesity through real-time Google Trends data," the authors note, adding that this would be "particularly attractive to public health institutions and private businesses such as insurance companies."

If Google senses a flu epidemic somewhere, it is hard to challenge its hunch – we simply lack the infrastructure to process that much data at such a scale. Google can be proven wrong after the fact – as happened recently with its flu trends data, which was shown to overestimate the number of infections, possibly because it failed to account for the intense media coverage of flu – but so it is with most terrorism alerts. It is the immediate, real-time nature of computer systems that makes them ideal allies of an endlessly expanding and pre-emption-obsessed state.
Perhaps the case of Gloria Placente and her failed trip to the beach was not just a historical oddity but an early omen of how real-time computing, combined with ubiquitous communication technologies, would reshape the state. One of the few to heed that omen was a little-known American publicist called Robert MacBride, who pushed the logic behind Operation Corral to its ultimate conclusions in his undeservedly forgotten 1967 book, The Automated State.

At the time, America was debating the merits of establishing a national data centre that would aggregate various national statistics and make them available to government agencies. MacBride attacked his contemporaries' inability to see how the state would exploit the metadata accrued as everything was becoming computerised. Instead of "a large-scale, up-to-date Austro-Hungarian empire," modern computer systems would produce "a bureaucracy of almost celestial capacity" that could "discern and define relationships in a manner which no human bureaucracy could ever hope to do."

"Whether one bowls on a Sunday or visits a library instead is of no consequence, since no one checks those things," he wrote. Not so when computer systems can aggregate data from different domains and spot correlations. "Our individual behaviour in buying and selling an automobile, a house, or a security, in paying our debts and acquiring new ones, and in earning money and being paid, will be noted meticulously and studied exhaustively," MacBride warned. Thus a citizen will soon discover that "his choice of magazine subscriptions… can be found to indicate accurately the probability of his maintaining his property or his interest in the education of his children." This sounds eerily similar to the recent case of the unfortunate father who discovered that his teenage daughter was pregnant from a coupon that Target, the retailer, sent to their home. Target's hunch was based on its analysis of the products – such as unscented lotion – that other pregnant women commonly buy.

The implication, for MacBride, was clear. "Political rights won't be violated but will resemble those of a small stockholder in a giant enterprise," he wrote. "The mark of sophistication and savoir-faire in this future will be the grace and flexibility with which one accepts one's role and makes the most of what it offers." In other words, we might as well make the most of it.

What is to be done? Technophobia is no solution. Progressives need technologies that stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it ran into a trap: by specifying the exact protections the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build a welfare system that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some social services are replaced by direct cash transfers to citizens – fits both criteria. Creating the right conditions for the emergence of political communities around the causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.
What can be specified is the kind of communications infrastructure needed to facilitate this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley's existing infrastructure is well suited to serving the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there is no reason to accept the status quo as either ideal or inevitable. Why, after all, appropriate what should belong to the people in the first place?

While many of the internet's creators lament how low their creation has fallen, their anger is misdirected. The fault lies not with that amorphous entity but, above all, with the absence of a robust technology policy on the left – a policy that could counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, it would be out-censored and out-droned.

To his credit, MacBride understood all of this back in 1967. "Given the resources of modern technology and planning techniques," he warned, "it is really no great trick to transform even a country like ours into a smoothly running corporation where every detail of life is a mechanical function to be taken care of." MacBride's fear is O'Reilly's master plan: the government, he writes, ought to be modelled on the "lean startup" approach of Silicon Valley, which "uses data to continuously revise and tune its approach to the market." It is exactly this approach that Facebook has recently adopted to maximise user engagement on the site: if showing users more happy stories does the trick, so be it.
