Got a few questions about two other posts (the one on prepping in depth and the tricks/"dying of the light" one), so I'm gonna try to smush them together into this post and address how to prep a generic K in depth, with tricks.
As an example I am going to use a K of “complexity theory” I wrote a few years ago for the NDT. Basically this argument came about like this:
debater: we should write a generic K and just prep out all the answers, like complexity
me: good idea, how about X, Y, or Z, or really anything other than complexity
debater: agreed!…. so complexity?
So there I was, stuck writing the complexity K that by all accounts was pretty stupid. Here is the process I went through:
1. Find everyone who has ever read this. Look at the wiki, talk to people (I know, it's the worst), Google, etc. I tried to compile as comprehensive a listing as I could of things that had already been read (cards, articles, etc.) so I could look at them before taking a crack at it. A key part of this process is deciding what, if anything, you are going to recut. The TOC is like a week away; that is plenty of time to write a new generic. It's not enough time to read every article ever written on most subjects. Time is a finite resource, and decisions must be made. A key factor here is that the sole purpose of reading is not to cut cards. Let me repeat that: the sole purpose of reading is not to cut cards. Reading helps you learn good, and being smart makes you better at all aspects of debate. After reading several articles about a topic you will be better at
-understanding what articles are saying
-explaining the argument to others
-knowing what is and what is not a good card
-knowing what likely aff answers will be so you can answer them
All of these things are good. So when doing complexity, I surveyed things and figured out there was a “main guy” writing about this (Kavalski), so I decided to read his like 4-5 articles and then reassess. After doing that it seemed like there was a bunch of stuff out there (that he cited) that people had generally been ignoring, so I decided to dive into that rather than recutting a bunch of the articles I already had. Now, this could have been a disaster: maybe these other books/articles wouldn't have good debate cards, maybe an amazing card would go unfound in one of the articles I decided not to cut. You can't obsess over things like this; you can only do so much.
So now I had a list of like a hundred potential books/articles to look at. Based on the titles I tried to sort them by relevance. For example, this was the legalization topic, so warming was not really a big part of it; several of the articles were about warming, so I moved them down the list. Tons of affs claimed economy advantages of one sort or another, so those moved up the list.
One thing people really underestimate is just how much time it takes to read/cut an article on a complex topic well. Say it's 20 pages. Sure, you can read/skim it in 20 minutes, but to actually understand it, produce good cards, underline them well, etc. is going to take you at least an hour, assuming the article has a reasonable number of cards in it. So let's say you have a week, and you are going to cut cards for 2-3 hours a day. At most that's 21 hours, or about 21 articles. If you have a list of 100, the most important thing then becomes picking the right 20. The articles you need are out there, but 1 in 5 is not the best odds, so you need to make educated judgments, something you will get better at over time.
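To make that time budget concrete, here's a quick back-of-the-envelope sketch (the numbers are just the ones from the paragraph above, not a rule; adjust them to your own schedule):

```python
# Rough prep-budget math for a week before the tournament.
days = 7               # days until the TOC
hours_per_day = 3      # realistic daily card-cutting time (the high end of 2-3)
hours_per_article = 1  # reading + cutting + underlining a ~20 page article well

total_hours = days * hours_per_day              # 21 hours of cutting
articles_cut = total_hours // hours_per_article # ~21 articles, tops
candidates = 100                                # articles on the to-read list
hit_rate = articles_cut / candidates            # ~0.21, i.e. roughly 1 in 5

print(total_hours, articles_cut, round(hit_rate, 2))
```

The point of the math is the last line: you only get to touch about a fifth of your list, so sorting by likely relevance before you start cutting is where the real value is.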
2. Now I've got the rough outline of a neg arg after reading those articles and compiling other people's cards. Next I moved on to figuring out what the common aff answers were going to be and how to deal with them. For the sake of length I will boil it down to three things we were worried about answering:
A. Predictions good
B. Must act now
C. The “complexity trap” (more on this later)
Basically complexity is a reps K, so A and B are the most common aff args vs any K like that: “gotta do something now!” For the rest of this post let's just look at A, predictions are good.
This aff claim can be broken down into different parts:
-predictions work- like mechanically we can make them and they are generally true
-predictions are desirable- normatively it is good that we make/debate predictions
-cede the predictable- if we don’t make predictions, someone else will (fitzsimmons)
-specific scenarios- even if predictions are generally wrong, this one is good
You could probably subdivide more, list more warrants, etc., but that is a decent initial list. So if you knew you had a week before you'd have to debate those points, what would you do? That may seem like focusing on minutiae, but if you are going for this generic K you basically know what they are going to say. So I set out to defeat this predictions argument.
Now, having worked on Ks before, this wasn't entirely new ground. Looking back through other files, I found the two best predictions cards I thought I already had:
Zachary Lockman is professor of modern Middle East history at New York University, PhD Harvard, Behind the Battles Over US Middle East Studies, January 2004 http://www.merip.org/mero/interventions/lockman_interv.html
Kramer claims in Ivory Towers that US Middle East scholars have repeatedly made predictions that did not come true. His accusations are sometimes on target, though he is rather selective. He does not, for example, take his colleague Daniel Pipes to task for inaccurately predicting in the early 1980s that Islamist activism would decline as oil prices fell. Nor, in his writings since the Iraq war, has he faulted Fouad Ajami of Johns Hopkins University’s School of Advanced International Studies — who is a favorite of the Bush administration — for claiming that all Iraqis would enthusiastically welcome US occupation. More broadly, Kramer’s fixation on accurate prediction as the chief (or even sole) gauge of good scholarship is itself highly questionable. Most scholars do not in fact seek to predict the future or think they can do so; they try to interpret the past, discern and explain contemporary trends, and, at most, tentatively suggest what might happen in the future if present trends continue, which they very often do not. Of course, governments want accurate predictions in order to shape and implement effective policies, but Kramer’s insistence that the primary goal of scholarship should be the satisfaction of that desire tells us a great deal about his conception of intellectual life and of the proper relationship between scholars and the state. Just as many of the Israeli scholars associated with the Dayan Center have seen themselves as producing knowledge that will serve the security and foreign policy needs of Israel, so American scholars of the Middle East should, Kramer suggests, shape their research agendas to provide the kinds of knowledge the US government will find most useful. His book demonstrates no interest whatsoever in the uses to which such knowledge might be put or in the question of the responsibility of intellectuals to maintain their independence, or indeed in what scholarship and intellectual life should really be about. 
His real complaint is that US Middle East studies has failed to produce knowledge useful to the state. Yet by ignoring larger political and institutional contexts, Kramer cannot understand or explain why so many scholars have grown less than enthusiastic about producing the kind of knowledge about the Middle East the government wants — or conversely, why it is that the government and the media now routinely turn to analysts based in think tanks, along with former military and intelligence personnel, for policy-relevant knowledge. But there is a larger issue at stake here. At the very heart of Kramer’s approach is a dubious distinction between the trendy, arcane “theorizing” of the scholarship he condemns as at best irrelevant and at worst pernicious, on the one hand, and on the other the purportedly hard-headed, clear-sighted, theory-free observation of, and research on, the “real Middle East” in which he and scholars like him see themselves as engaging. Kramer is not wrong to suggest that there has been some fashionable theory-mongering in academia, including Middle East studies. But in Ivory Towers he goes well beyond this by now banal observation, and beyond a rejection of post-structuralism, to imply that all theories, paradigms and models are distorting and useless, because they get in the way of the direct, unmediated, accurate access to reality that he seems to believe he and those who think like him possess. This is an extraordinarily naïve and unsophisticated understanding of how knowledge is produced, one that few scholars in the humanities and social sciences have taken seriously for a long time. Even among historians, once the most positivist of scholars, few would today argue that the facts “speak for themselves” in any simple sense. 
Almost all would acknowledge that deciding what should be construed as significant facts for the specific project of historical reconstruction in which they are engaged, choosing which are more relevant and important to the question at hand and which less so, and crafting a story in one particular way rather than another all involve making judgments that are rooted in some sense of how the world works — in short, in some theory or model or paradigm or vision, whether implicit or explicit, whether consciously acknowledged or not. Kramer’s inability or refusal to grasp this suggests a grave lack of self-awareness, coupled with an alarming disinterest in some of the most important scholarly debates over the past four decades or so. It is moreover a stance which Kramer does not maintain in practice. His assertions throughout the book are in fact based on a certain framework of interpretation, even as he insists that they are merely the product of his acute powers of observation, analysis and prediction. It is, for example, striking that at the very end of Ivory Towers Kramer explicitly lays out a political and moral judgment rooted in his own (theoretical) vision of the world: his insistence that a healthy, reconstructed Middle East studies must accept that the US “plays an essentially beneficent role in the world.” He does not bother to tell readers why they should accept this vision of the US role in the world as true, nor does he even acknowledge that it may be something other than self-evidently true. The assertion nonetheless undermines his avowed epistemological stance and graphically demonstrates that it is untenable.
Bruce Schneier is an internationally renowned security technologist and author, MA CS American Univ. 3-13-10 http://www.schneier.com/blog/archives/2010/05/worst-case_thin.html
At a security conference recently, the moderator asked the panel of distinguished cybersecurity leaders what their nightmare scenario was. The answers were the predictable array of large-scale attacks: against our communications infrastructure, against the power grid, against the financial system, in combination with a physical attack. I didn’t get to give my answer until the afternoon, which was: “My nightmare scenario is that people keep talking about their nightmare scenarios.” There’s a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism. Worst-case thinking means generally bad decision making for several reasons. First, it’s only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes. Second, it’s based on flawed logic. It begs the question by assuming that a proponent of an action must prove that the nightmare scenario is impossible. Third, it can be used to support any position or its opposite. If we build a nuclear power plant, it could melt down. If we don’t build it, we will run short of power and society will collapse into anarchy. If we allow flights near Iceland’s volcanic ash, planes will crash and people will die. If we don’t, organs won’t arrive in time for transplant operations and people will die. If we don’t invade Iraq, Saddam Hussein might use the nuclear weapons he might have. 
If we do, we might destabilize the Middle East, leading to widespread violence and death. Of course, not all fears are equal. Those that we tend to exaggerate are more easily justified by worst-case thinking. So terrorism fears trump privacy fears, and almost everything else; technology is hard to understand and therefore scary; nuclear weapons are worse than conventional weapons; our children need to be protected at all costs; and annihilating the planet is bad. Basically, any fear that would make a good movie plot is amenable to worst-case thinking. Fourth and finally, worst-case thinking validates ignorance. Instead of focusing on what we know, it focuses on what we don’t know — and what we can imagine. Remember Defense Secretary Rumsfeld’s quote? “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.” And this: “the absence of evidence is not evidence of absence.” Ignorance isn’t a cause for doubt; when you can fill that ignorance with imagination, it can be a call to action. Even worse, it can lead to hasty and dangerous acts. You can’t wait for a smoking gun, so you act as if the gun is about to go off. Rather than making us safer, worst-case thinking has the potential to cause dangerous escalation. The new undercurrent in this is that our society no longer has the ability to calculate probabilities. Risk assessment is devalued. Probabilistic thinking is repudiated in favor of “possibilistic thinking“: Since we can’t know what’s likely to go wrong, let’s speculate about what can possibly go wrong. Worst-case thinking leads to bad decisions, bad systems design, and bad security. 
And we all have direct experience with its effects: airline security and the TSA, which we make fun of when we’re not appalled that they’re harassing 93-year-old women or keeping first graders off airplanes. You can’t be too careful! Actually, you can. You can refuse to fly because of the possibility of plane crashes. You can lock your children in the house because of the possibility of child predators. You can eschew all contact with people because of the possibility of hurt. Steven Hawking wants to avoid trying to communicate with aliens because they might be hostile; does he want to turn off all the planet’s television broadcasts because they’re radiating into space? It isn’t hard to parody worst-case thinking, and at its extreme it’s a psychological condition. Frank Furedi, a sociology professor at the University of Kent, writes: “Worst-case thinking encourages society to adopt fear as one of the dominant principles around which the public, the government and institutions should organize their life. It institutionalizes insecurity and fosters a mood of confusion and powerlessness. Through popularizing the belief that worst cases are normal, it incites people to feel defenseless and vulnerable to a wide range of future threats.” Even worse, it plays directly into the hands of terrorists, creating a population that is easily terrorized — even by failed terrorist attacks like the Christmas Day underwear bomber and the Times Square SUV bomber. When someone is proposing a change, the onus should be on them to justify it over the status quo. But worst-case thinking is a way of looking at the world that exaggerates the rare and unusual and gives the rare much more credence than it deserves. It isn’t really a principle; it’s a cheap trick to justify what you already believe. It lets lazy or biased people make what seem to be cogent arguments without understanding the whole issue. 
And when people don’t need to refute counterarguments, there’s no point in listening to them.
Now these cards aren't “bad,” but they are:
-not about complexity
-older (esp. Lockman)
-could always be better
Remember, there is an ethos dimension to breaking a new arg vs. reading a K that's been around, and they are different. If you are going to read a new arg, the primary barrier you have to overcome is “judge ignorance,” i.e. they will have no idea what you are talking about because they aren't familiar with that area. When putting new whine in old bottles, your primary barrier is “judge apathy,” i.e. they have heard it before and think it's garbage. You need to break them out of that frame/way of thinking. Reading a bunch of old generic evidence REINFORCES this view rather than challenging it. “Oh, cap K with Zizek and Daly? Nice work.” So while I could fall back on these cards if my research efforts failed, I wanted to find something better/more specific.
So let's go back to our categories. First is “they work.” Now, proving something like “predictions work” or “don't work” is kind of silly in that “prediction” refers to billions of actions taken by billions of individuals, some of which turn out well. So really we need to think about what we are saying works/doesn't work and move away from trying to indict all attempts to guess the future.
Well, what we are talking about are predictions:
-by the government
-on a large scale
-designed to solve global calamity
So we really want to be thinking about big predictions, in IR, that are likely to come up in debates. I basically cut like 5-6 articles that looked to be just about predictions. And the results? Garbage. No good cards at all (or at least none better than the above). So naturally I gave up and just re-turned out fem IR. No, actually, what I did was redouble my efforts like a good Death Star mechanic and look again. This time I randomly came across a card in an article on another topic I was reading as a “break”:
Cudworth and Hobden, PhDs, 10
(Erika and Stephen, The Foundations of Complexity, the Complexity of Foundations, Philosophy of the Social Sciences XX(X) 1-25)
A key feature of complex systems is that they can behave in both linear and nonlinear ways. In a linear system we would expect there to be a constant and predictable relationship between cause and effect. For example, if I throw a ball twice as hard, I might expect it to go twice as far. Waltz (1979) anticipated a regular relationship between the number of great powers and the characteristics of international relations: a bipolar system will exhibit greater stability than a unipolar system. In a nonlinear system this relationship between variables breaks down. There is no predictable pattern in terms of the relationship between events, and there is no expectation that the same events will result in the same outcome. As Elliott and Kiel (1997, 68) observe, “Nonlinear dynamics . . . lead us to question the extent to which we may be capable of both prediction and control in social and policy systems.” Ultimately a complex approach to the study of the social world suggests that there are very definite limits to which predictability is possible. This is, of course, a less comfortable viewpoint, in particular for a discipline that originated in an attempt to put controls on the operation of the social world— specifically to find and put limits on the occurrence of warfare. This may indicate why complexity theory has, thus far, made little impact on the discipline (or where it has, primarily in terms of actor-based modeling approaches). It is much more reassuring to be able to offer predictions, and to suggest that there may be obvious connections between policy and outcomes. Complexity theory suggests that while complex systems can exhibit linear behavior, this may be the exception rather than the rule, and that prediction based on linearity is successful by coincidence rather than by correlation. As Baker (1993, 133) observes, “Order is always transitory. 
The pattern of interaction is repeated and then without warning, a change occurs.” It is more reassuring to seek order and predictability, and this may explain the persuasiveness of approaches that suggest that this is possible. However, as Bertuglia and Vaio (2005, 242) indicate A linear tool, even if substantially inadequate to describe natural and social phenomena, with the exception of a very limited number of cases, erroneously appears to be more useful and more correct than a nonlinear one, because the latter does not allow us to make predictions, whereas the former does. Complexity theory suggests that, although less comfortable, the possibilities for prediction-making are limited. Certainty is, of course, a good thing, but “if it is a false certainty, then this is very bad” (Morin 2008, 97). (15-16)
Now, unfortunately, I was running out of time, but if I had more time I would have tried to track down what this article was citing:
As Elliott and Kiel (1997, 68) observe,
As Baker (1993, 133)
as Bertuglia and Vaio (2005, 242)
then this is very bad” (Morin 2008, 97).
Now, ideally the 2NC would read 30 cards on every 2AC argument, but kids today are slow and that's not possible. So this would have to be the “complexity specific” prediction card instead of reading 5 more.
We were planning to read this K vs a lot of affs with economy advantages, so I started looking for an “economic predictions fail” card. This was much easier, as these come out like every day, so here it was more of a “quality control” problem: how to find the best one of the literally thousands out there. This involved a lot of skimming, going through articles quickly to gauge their quality.
This is a crucial skill a lot of people in debate don't get. If you are only going to cut cards for 2 hours, you need to maximize that time and get the most value out of it. If you download 500 articles but only get through 5 because of time constraints, and those 5 are garbage, then you're screwed. Too many people waste time cutting bad articles, or worse, cutting bad cards (and here I don't mean you cut a card and then D-heidt tells you it's bad; I mean kids during the summer who have already cut 20 “terrorism” links for the security K and then turn in 5 more). Unless a card is much better than cards you have already cut, don't cut it. If a card isn't a 10/10 and you know there are better ones out there, don't cut it. Cutting bad cards wastes a boatload of time: yours cutting it, debaters highlighting it, etc. Nip this in the bud.
So after some work I had the best econ prediction card I could find:
Zahn (Max, activist, 9-3, https://www.opendemocracy.net/transformation/max-zahn/compassionate-economics)
Dwelling in our collective imagination, the economist sits hunched over reams of data that appear to the uninitiated like hieroglyphics. Luckily he or she (and it’s usually a “he”) can quickly discern underlying patterns in the sheets that fly from the printer, as would a meteorologist when staring at a multi-colored map. That’s how the myth goes anyway: economists are smarter than the rest of us. You know those college classes that you tried to avoid like the dining hall’s weird-smelling seafood? The economists sought out those classes, and they excelled. So when things like the Great Recession of 2008 happen and societies desperately need a collective conversation about the economy, it’s no wonder that many people defer to the experts who’ve been thinking long and hard about this stuff. They are called “economists” after all. The problem is they also get things wrong. In their recent New York Times Op-Ed “What is Economics Good For,” Alex Rosenberg and Tyler Curtain argue that no one has any business referring to economics as a science. “The fact that the discipline of economics hasn’t helped us improve our predictive abilities,” they say, “suggests it is still far from being a science, and may never be.” Basically, economists make too many mistakes. Scientists hypothesize gravity, drop a bunch of household objects, and then confidently advise that you shouldn’t let go of your mug unless you want the kitchen floor littered with shards of the Paris skyline. Economists, on the other hand, guarantee the upward trajectory of the housing market and then stare at their shoelaces as the mortgage bubble bursts. Back in 2009, Paul Krugman took his indictment of economics one step further. “Few economists saw our current crisis coming, but this predictive failure was the least of the field’s problems,” he wrote. 
“More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy.” Not only do economists get things wrong, but they also nurse a glaring ideological blind spot. In this case, Krugman was referring to the market fundamentalism that suffused the outlook of economists like former Federal Reserve Chairman Alan Greenspan. It was Greenspan that helped to orchestrate the financial deregulation and low interest rates that directly precipitated the crash of 2008. Despite our hangover from the Great Recession, we should be wary of disregarding economic perspectives. The most important takeaway from the recent crisis is not that economics is unimportant, but that mathematical projection models in economics should be treated with great skepticism, especially those that make ideal market assumptions that are characteristic of the classical school. Professors Rosenberg and Curtain end their piece with a fittingly cautious prescription for any post-recession economic policy: “at this point, [the economy] is a craft, to be executed with wisdom, not algorithms, in the design and management of institutions.” The call for “wisdom, not algorithms” is an invitation to citizens as well as to economists. At root, the most profound economic questions of the day do not demand more sophisticated numerical calculations but more expansive imaginations and priorities. The task is to bring public advocacy into the formerly-sacred realm of economics, thereby helping to ensure a healthy society for ourselves and our neighbors. The recession should not instill a wholesale disavowal of economics; it should strengthen a popular commitment to reinvent the subject in a different image that we can co-create. Our current and future wellbeing depends on it. With that task in mind, the first step is to take an inventory of core values. 
By sorting out first principles, we enter the economic thicket with a machete that can chop through jargon and obfuscation. As a practicing Buddhist, I’ve chosen compassion as a cornerstone of my economic understanding. Literally meaning “to suffer with,” compassion – in the words of revered Vietnamese Zen Monk Thich Nhat Hanh – entails a commitment to “remove the suffering that is present in another.” This value reappears in the more extreme Mahayana Buddhist Bodhisattva vow to alleviate the suffering of all sentient beings. Notice that the language here is not one of optimizing pleasure but of minimizing pain. In this sense, it flips American rags-to-riches exceptionalism on its head. To understand what Buddhists mean by compassion, we need to revisit their conception of suffering. The Sanskrit term for suffering is “Samsara,” which refers to a persistent and deep sense of dissatisfaction – the sense that things aren’t the way that we want them to be. Buddhists believe that all sentient beings remain trapped in a cyclic feeling of this dissatisfaction, from which we constantly seek escape. The only way to liberate ourselves from Samsara is to transform the way we relate to the place in which this suffering arises: our own minds. Through practices like meditation, we gradually erode the urge to attach ourselves to pleasurable feelings or run away from painful ones. From that point on, we slowly attain a grounded sense of equilibrium and contentment which allow us to adapt fluidly to the ever-shifting circumstances of our lives. What kind of economic approaches could facilitate the liberation of all sentient beings from this kind of suffering? One of the Buddha’s primary teachings – that of the Middle Path – helps to clarify this point. The Buddha taught that both extreme asceticism and extreme excess are hindrances to the path of liberation. 
Self-denial clouds the mind with the vestiges of fatigue and malnutrition, while indulgence indefinitely postpones the necessary depth of engagement with internal experience. Therefore, a Buddhist economics should ensure that each and every sentient being has sufficient material and educational support to liberate themselves from suffering. For animals, this means that human consumption patterns should not cause avoidable harm. And for humans themselves, this approach calls for progressive taxation, redistributive social services, and accessible pre-college and college education. The conservative critique of this philosophy is easy to anticipate, and not to be dismissed. Conservatives argue that government intrusion in the marketplace will hinder the individual’s entrepreneurial and expressive freedoms. Frankly I’m much more concerned about civil liberties than I am about the right to turn a good idea into lots of money, though I concede that the two overlap. I think Buddhism has a novel response to this concern, and one that isn’t often aired in the mainstream public debate. For Buddhists, the individual presents a tricky paradox. On the one hand, the Buddha famously preached a belief in “no-self:” the concept that one cannot trace any inherent existence to his or herself due to the dual truths of interdependence and impermanence. That said, the Buddha also taught a challenging personal practice that counts on each individual to fully realize this truth of no-self, and thus attain enlightenment. So ironically, each individual Buddhist makes a personal commitment to a path that will eventually undo our most basic assumptions about individuated existence. In this regard, Buddhists should bring a circumspect attitude toward an entirely centralized economy that would impede freedom of thought or expression. It is this freedom to radically engage with our consciousness that allows each of us to liberate ourselves from suffering. 
At the same time, Buddhists should acknowledge that unregulated market capitalism inevitably causes extreme inequalities of wealth that inflict unnecessary harm on billions of people worldwide. Hence the need for a new middle path but not one that resembles bipartisanship, Washington consensus, or the thinly-veiled politics of economic technocrats. This middle path demands a radical and sustained commitment to a new kind of empowerment for all: the power to transform our relationship with suffering.
But that card is from an unqualified hippy, so it was rejected. Back to the drawing board.
Rosenberg and Curtain, PhDs, 13
(Alex Rosenberg is the R. Taylor Cole Professor of Philosophy and chair of the philosophy department at Duke University. He is the author of “Economics — Mathematical Politics or Science of Diminishing Returns” and, most recently, “The Atheist’s Guide to Reality.” Tyler Curtain is a philosopher of science and an associate professor of English and comparative literature at the University of North Carolina at Chapel Hill. He was recently named the 2013 recipient of the Robert Frost Distinguished Chair of Literature at the Bread Loaf School of English, Middlebury College, Vt. http://opinionator.blogs.nytimes.com/2013/08/24/what-is-economics-good-for/ 8-24)
Recent debates over who is most qualified to serve as the next chairman of the Federal Reserve have focused on more than just the candidates’ theory-driven economic expertise. They have touched on matters of personality and character as well. This is as it should be. Given the nature of economies, and our ability to understand them, the task of the Fed’s next leader will be more a matter of craft and wisdom than of science. When we put a satellite in orbit around Mars, we have the scientific knowledge that guarantees accuracy and precision in the prediction of its orbit. Achieving a comparable level of certainty about the outcomes of an economy is far dicier. The fact that the discipline of economics hasn’t helped us improve our predictive abilities suggests it is still far from being a science, and may never be. Still, the misperceptions persist. A student who graduates with a degree in economics leaves college with a bachelor of science, but possesses nothing so firm as the student of the real world processes of chemistry or even agriculture. Before the 1970s, the discussion of how to make economics a science was left mostly to economists. But like war, which is too important to be left to the generals, economics was too important to be left to the Nobel-winning members of the University of Chicago faculty. Over time, the question of why economics has not (yet) qualified as a science has become an obsession among theorists, including philosophers of science like us. It’s easy to understand why economics might be mistaken for science. It uses quantitative expression in mathematics and the succinct statement of its theories in axioms and derived “theorems,” so economics looks a lot like the models of science we are familiar with from physics. Its approach to economic outcomes — determined from the choices of a large number of “atomic” individuals — recalls the way atomic theory explains chemical reactions. 
Economics employs partial differential equations like those in a Black-Scholes account of derivatives markets, equations that look remarkably like ones familiar from physics. The trouble with economics is that it lacks the most important of science’s characteristics — a record of improvement in predictive range and accuracy. This is what makes economics a subject of special interest among philosophers of science. None of our models of science really fit economics at all. The irony is that for a long time economists announced a semiofficial allegiance to Karl Popper’s demand for falsifiability as the litmus test for science, and adopted Milton Friedman’s thesis that the only thing that mattered in science was predictive power. Mr. Friedman was reacting to a criticism made by Marxist economists and historical economists that mathematical economics was useless because it made so many idealized assumptions about economic processes: perfect rationality, infinite divisibility of commodities, constant returns to scale, complete information, no price setting. Mr. Friedman argued that false assumptions didn’t matter any more in economics than they did in physics. Like the “ideal gas,” “frictionless plane” and “center of gravity” in physics, idealizations in economics are both harmless and necessary. They are indispensable calculating devices and approximations that enable the economist to make predictions about markets, industries and economies the way they enable physicists to predict eclipses and tides, or prevent bridge collapses and power failures. But economics has never been able to show the record of improvement in predictive successes that physical science has shown through its use of harmless idealizations. In fact, when it comes to economic theory’s track record, there isn’t much predictive success to speak of at all. Moreover, many economists don’t seem troubled when they make predictions that go wrong. 
Readers of Paul Krugman and other like-minded commentators are familiar with their repeated complaints about the refusal of economists to revise their theories in the face of recalcitrant facts. Philosophers of science are puzzled by the same question. What is economics up to if it isn’t interested enough in predictive success to adjust its theories the way a science does when its predictions go wrong? Unlike the physical world, the domain of economics includes a wide range of social “constructions” — institutions like markets and objects like currency and stock shares — that even when idealized don’t behave uniformly. They are made up of unrecognized but artificial conventions that people persistently change and even destroy in ways that no social scientist can really anticipate. We can exploit gravity, but we can’t change it or destroy it. No one can say the same for the socially constructed causes and effects of our choices that economics deals with. Another factor economics has never been able to tame is science itself. These are the drivers of economic growth, the “creative destruction” of capitalism. But no one can predict the direction of scientific discovery and its technological application. That was Popper’s key insight. Philosophers and historians of science like Thomas S. Kuhn have helped us see why scientific paradigm shifts seem to come almost out of nowhere. As the rate of acceleration of innovation increases, the prospects of an economic theory that tames the economy’s most powerful forces must diminish — and with it, any hope of improvements in prediction declines as well. SO if predictive power is not in the cards for economics, what is it good for? Social and political philosophers have helped us answer this question, and so understand what economics is really all about. 
Since Hobbes, philosophers have been concerned about the design and management of institutions that will protect us from “the knave” within us all, those parts of our selves tempted to opportunism, free riding and generally avoiding the costs of civil life while securing its benefits. Hobbes and, later, Hume — along with modern philosophers like John Rawls and Robert Nozick — recognized that an economic approach had much to contribute to the design and creative management of such institutions. Fixing bad economic and political institutions (concentrations of power, collusions and monopolies), improving good ones (like the Fed’s open-market operations), designing new ones (like electromagnetic bandwidth auctions), in the private and public sectors, are all attainable tasks of economic theory. Which brings us back to the Fed. An effective chair of the central bank will be one who understands that economics is not yet a science and may never be. At this point it is a craft, to be executed with wisdom, not algorithms, in the design and management of institutions. What made Ben S. Bernanke, the current chairman, successful was his willingness to use methods — like “quantitative easing,” buying bonds to lower long-term interest rates — that demanded a feeling for the economy, one that mere rational-expectations macroeconomics would have denied him. For the foreseeable future economic theory should be understood more on the model of music theory than Newtonian theory. The Fed chairman must, like a first violinist tuning the orchestra, have the rare ear to fine-tune complexity (probably a Keynesian ability to fine-tune at that). Like musicians’, economists’ expertise is still a matter of craft. They must avoid the hubris of thinking their theory is perfectly suited to the task, while employing it wisely enough to produce some harmony amid the cacophony.
While not as rhetorically powerful, this card makes a better argument, AND it relates to complexity theory for those of you who read closely.
So now we have
- Predictions fail- complexity
- Predictions fail- economics
Donzo, right? Wrong. While this is certainly more work on this specific argument than most people have put in, it’s not an absolute crush- and really, isn’t that what debate is all about? Drinking the tears of your fallen enemies, not barely winning, is the goal. So we need to go deeper.
We also had this policymaking nonsense about predictions. So google “predictions policy relevance” etc. and bam
Engelhardt, MA Harvard, 15
In our era in Washington, whole careers have been built on grotesque mistakes. In fact, when it comes to our various conflicts, God save you if you’re right; no one will ever want to hear from you again. If you’re wrong, however… well, take the invasion of Iraq. Given the Islamic State, that creature of the American occupation, can anyone seriously believe that the invasion that blew a hole in the heart of the Middle East doesn’t qualify as one of the genuine disasters of our time, if not of any time? In the mad occupation that followed, Saddam Hussein’s well-trained army and officer corps were ushered into the chaos of post-invasion unemployment and, of course, insurgency. Meanwhile, at a cost of $25 billion, a whole new military was trained that, years later, summarily collapsed when faced with insurgents led by some of those formerly out-of-work officers. But the crew who pushed it all on Washington has never stopped yakking (or being listened to). They’ve been called back at every anniversary of the invasion to offer their wisdom in the New York Times and elsewhere, while those who counseled against such an invasion have been nowhere in sight. Some of the planners of the invasion and occupation are now advisers to Jeb Bush as he heads into the 2016 election campaign, while the policy wonks who went off to war with the generals (taking regular VIP tours of America’s battle zones) couldn’t be better thought of in Washington today. Take Michael O’Hanlon of the Brookings Institution. When it comes to American war, you can count on one thing: he’s a ray of sunshine on any gloomy day. It hardly matters what year you’re talking about — 2003, 2007, 2009, 2013, Iraq or Afghanistan — and “our odds of success” are invariably “rather good” (if the U.S. military just pursues the path O’Hanlon advocates). 
Things always seem to be trending in the right direction; there’s invariably “progress,” always carefully qualified; Washington’s troops remain forever steadfast; chances are good that… you fill it in: the invasion will be successful, the occupation a smash, the surge a triumph of an unconventional sort, the latest Afghan election a positive step forward in a tough world. And here’s the amazing thing: year after year, op-ed after op-ed, he never seems to end up on the right side of anything, which seems to work like a charm in Washington. In recent years, he’s made himself into an op-ed tag team with his former Princeton classmate David Petraeus. He began plugging General Petraeus as a “superb commander” back when and, despite the former CIA director’s recent misdemeanor plea deal for “providing his highly classified journals to a mistress,” he’s still touting him as a “national hero.” (“To my mind, what he did in Iraq was probably the greatest complex accomplishment by any American general since Washington in the Revolutionary War.”) Since 2013, on op-ed pages nationwide, he and Petraeus have been promoting the idea that these aren’t the years of America’s decline, but of its rise to greater glory as the leader of a new North American Century (a line that Republicans are passionately running with for campaign 2016). If this came from anyone else, perhaps it would be a debatable position, but not with the O’Hanlon guarantee attached to it. Let’s just say it: if he thinks America is ascending, there’s only one possibility: it’s going down. So many words and what are the odds that none of them would work out? Still, you might think that O’Hanlon is small potatoes in our large world. If so, think again. As Andrew Bacevich, author most recently of Breach of Trust: How Americans Failed Their Soldiers and Their Country, makes clear in “Rationalizing Lunacy,” O’Hanlon is part of a roiling mass of “policy intellectuals” who have given this country a distinctly hard time.
And in case the kids did some speed drills, blam #2
Bacevich, PhD Princeton, 15
Policy intellectuals — eggheads presuming to instruct the mere mortals who actually run for office — are a blight on the republic. Like some invasive species, they infest present-day Washington, where their presence strangles common sense and has brought to the verge of extinction the simple ability to perceive reality. A benign appearance — well-dressed types testifying before Congress, pontificating in print and on TV, or even filling key positions in the executive branch — belies a malign impact. They are like Asian carp let loose in the Great Lakes. It all began innocently enough. Back in 1933, with the country in the throes of the Great Depression, President Franklin Delano Roosevelt first imported a handful of eager academics to join the ranks of his New Deal. An unprecedented economic crisis required some fresh thinking, FDR believed. Whether the contributions of this “Brains Trust” made a positive impact or served to retard economic recovery (or ended up being a wash) remains a subject for debate even today. At the very least, however, the arrival of Adolph Berle, Raymond Moley, Rexford Tugwell, and others elevated Washington’s bourbon-and-cigars social scene. As bona fide members of the intelligentsia, they possessed a sort of cachet. Then came World War II, followed in short order by the onset of the Cold War. These events brought to Washington a second wave of deep thinkers, their agenda now focused on “national security.” This eminently elastic concept — more properly, “national insecurity” — encompassed just about anything related to preparing for, fighting, or surviving wars, including economics, technology, weapons design, decision-making, the structure of the armed forces, and other matters said to be of vital importance to the nation’s survival. National insecurity became, and remains today, the policy world’s equivalent of the gift that just keeps on giving. 
People who specialized in thinking about national insecurity came to be known as “defense intellectuals.” Pioneers in this endeavor back in the 1950s were as likely to collect their paychecks from think tanks like the prototypical RAND Corporation as from more traditional academic institutions. Their ranks included creepy figures like Herman Kahn, who took pride in “thinking about the unthinkable,” and Albert Wohlstetter, who tutored Washington in the complexities of maintaining “the delicate balance of terror.” In this wonky world, the coin of the realm has been and remains “policy relevance.” This means devising products that convey a sense of novelty, while serving chiefly to perpetuate the ongoing enterprise. The ultimate example of a policy-relevant insight is Dr. Strangelove’s discovery of a “mineshaft gap” — successor to the “bomber gap” and the “missile gap” that, in the 1950s, had found America allegedly lagging behind the Soviets in weaponry and desperately needing to catch up. Now, with a thermonuclear exchange about to destroy the planet, the United States is once more falling behind, Strangelove claims, this time in digging underground shelters enabling some small proportion of the population to survive. In a single, brilliant stroke, Strangelove posits a new raison d’être for the entire national insecurity apparatus, thereby ensuring that the game will continue more or less forever. A sequel to Stanley Kubrick’s movie would have shown General “Buck” Turgidson and the other brass huddled in the War Room, developing plans to close the mineshaft gap as if nothing untoward had occurred. The Rise of the National Insecurity State Yet only in the 1960s, right around the time that Dr. Strangelove first appeared in movie theaters, did policy intellectuals really come into their own. The press now referred to them as “action intellectuals,” suggesting energy and impatience. 
Action intellectuals were thinkers, but also doers, members of a “large and growing body of men who choose to leave their quiet and secure niches on the university campus and involve themselves instead in the perplexing problems that face the nation,” as LIFE Magazine put it in 1967. Among the most perplexing of those problems was what to do about Vietnam, just the sort of challenge an action intellectual could sink his teeth into. Over the previous century-and-a-half, the United States had gone to war for many reasons, including greed, fear, panic, righteous anger, and legitimate self-defense. On various occasions, each of these, alone or in combination, had prompted Americans to fight. Vietnam marked the first time that the United States went to war, at least in considerable part, in response to a bunch of really dumb ideas floated by ostensibly smart people occupying positions of influence. More surprising still, action intellectuals persisted in waging that war well past the point where it had become self-evident, even to members of Congress, that the cause was a misbegotten one doomed to end in failure. In his fine new book American Reckoning: The Vietnam War and Our National Identity, Christian Appy, a historian who teaches at the University of Massachusetts, reminds us of just how dumb those ideas were. As Exhibit A, Professor Appy presents McGeorge Bundy, national security adviser first for President John F. Kennedy and then for Lyndon Johnson. Bundy was a product of Groton and Yale, who famously became the youngest-ever dean of Harvard’s Faculty of Arts and Sciences, having gained tenure there without even bothering to get a graduate degree. For Exhibit B, there is Walt Whitman Rostow, Bundy’s successor as national security adviser. Rostow was another Yalie, earning his undergraduate degree there along with a PhD. While taking a break of sorts, he spent two years at Oxford as a Rhodes scholar. 
As a professor of economic history at MIT, Rostow captured JFK’s attention with his modestly subtitled 1960 book The Stages of Economic Growth: A Non-Communist Manifesto, which offered a grand theory of development with ostensibly universal applicability. Kennedy brought Rostow to Washington to test his theories of “modernization” in places like Southeast Asia. Finally, as Exhibit C, Appy briefly discusses Professor Samuel P. Huntington’s contributions to the Vietnam War. Huntington also attended Yale, before earning his PhD at Harvard and then returning to teach there, becoming one of the most renowned political scientists of the post-World War II era. What the three shared in common, apart from a suspect education acquired in New Haven, was an unwavering commitment to the reigning verities of the Cold War. Foremost among those verities was this: that a monolith called Communism, controlled by a small group of fanatic ideologues hidden behind the walls of the Kremlin, posed an existential threat not simply to America and its allies, but to the very idea of freedom itself. The claim came with this essential corollary: the only hope of avoiding such a cataclysmic outcome was for the United States to vigorously resist the Communist threat wherever it reared its ugly head. Buy those twin propositions and you accept the imperative of the U.S. preventing the Democratic Republic of Vietnam, a.k.a. North Vietnam, from absorbing the Republic of Vietnam, a.k.a. South Vietnam, into a single unified country; in other words, that South Vietnam was a cause worth fighting and dying for. Bundy, Rostow, and Huntington not only bought that argument hook, line, and sinker, but then exerted themselves mightily to persuade others in Washington to buy it as well. Yet even as he was urging the “Americanization” of the Vietnam War in 1965, Bundy already entertained doubts about whether it was winnable. 
But not to worry: even if the effort ended in failure, he counseled President Johnson, “the policy will be worth it.” How so? “At a minimum,” Bundy wrote, “it will damp down the charge that we did not do all that we could have done, and this charge will be important in many countries, including our own.” If the United States ultimately lost South Vietnam, at least Americans would have died trying to prevent that result — and through some perverted logic this, in the estimation of Harvard’s youngest-ever dean, was a redeeming prospect. The essential point, Bundy believed, was to prevent others from seeing the United States as a “paper tiger.” To avoid a fight, even a losing one, was to forfeit credibility. “Not to have it thought that when we commit ourselves we really mean no major risk” — that was the problem to be avoided at all cost. Rostow outdid even Bundy in hawkishness. Apart from his relentless advocacy of coercive bombing to influence North Vietnamese policymakers, Rostow was a chief architect of something called the Strategic Hamlet Program. The idea was to jumpstart the Rostovian process of modernization by forcibly relocating Vietnamese peasants from their ancestral villages into armed camps where the Saigon government would provide security, education, medical care, and agricultural assistance. By winning hearts-and-minds in this manner, the defeat of the communist insurgency was sure to follow, with the people of South Vietnam vaulted into the “age of high mass consumption,” where Rostow believed all humankind was destined to end up. That was the theory. Reality differed somewhat. Actual Strategic Hamlets were indistinguishable from concentration camps. The government in Saigon proved too weak, too incompetent, and too corrupt to hold up its end of the bargain. Rather than winning hearts-and-minds, the program induced alienation, even as it essentially destabilized peasant society. 
One result: an increasingly rootless rural population flooded into South Vietnam’s cities where there was little work apart from servicing the needs of the ever-growing U.S. military population — hardly the sort of activity conducive to self-sustaining development. Yet even when the Vietnam War ended in complete and utter defeat, Rostow still claimed vindication for his theory. “We and the Southeast Asians,” he wrote, had used the war years “so well that there wasn’t the panic [when Saigon fell] that there would have been if we had failed to intervene.” Indeed, regionally Rostow spied plenty of good news, all of it attributable to the American war. ”Since 1975 there has been a general expansion of trade by the other countries of that region with Japan and the West. In Thailand we have seen the rise of a new class of entrepreneurs. Malaysia and Singapore have become countries of diverse manufactured exports. We can see the emergence of a much thicker layer of technocrats in Indonesia.” So there you have it. If you want to know what 58,000 Americans (not to mention vastly larger numbers of Vietnamese) died for, it was to encourage entrepreneurship, exports, and the emergence of technocrats elsewhere in Southeast Asia. Appy describes Professor Huntington as another action intellectual with an unfailing facility for seeing the upside of catastrophe. In Huntington’s view, the internal displacement of South Vietnamese caused by the excessive use of American firepower, along with the failure of Rostow’s Strategic Hamlets, was actually good news. It promised, he insisted, to give the Americans an edge over the insurgents. The key to final victory, Huntington wrote, was “forced-draft urbanization and modernization which rapidly brings the country in question out of the phase in which a rural revolutionary movement can hope to generate sufficient strength to come to power.” By emptying out the countryside, the U.S. could win the war in the cities. 
“The urban slum, which seems so horrible to middle-class Americans, often becomes for the poor peasant a gateway to a new and better way of life.” The language may be a tad antiseptic, but the point is clear enough: the challenges of city life in a state of utter immiseration would miraculously transform those same peasants into go-getters more interested in making a buck than in signing up for social revolution. Revisited decades later, claims once made with a straight face by the likes of Bundy, Rostow, and Huntington — action intellectuals of the very first rank — seem beyond preposterous. They insult our intelligence, leaving us to wonder how such judgments or the people who promoted them were ever taken seriously. How was it that during Vietnam bad ideas exerted such a perverse influence? Why were those ideas so impervious to challenge? Why, in short, was it so difficult for Americans to recognize bullshit for what it was? Creating a Twenty-First-Century Slow-Motion Vietnam These questions are by no means of mere historical interest. They are no less relevant when applied to the handiwork of the twenty-first-century version of policy intellectuals, specializing in national insecurity, whose bullshit underpins policies hardly more coherent than those used to justify and prosecute the Vietnam War. The present-day successors to Bundy, Rostow, and Huntington subscribe to their own reigning verities. Chief among them is this: that a phenomenon called terrorism or Islamic radicalism, inspired by a small group of fanatic ideologues hidden away in various quarters of the Greater Middle East, poses an existential threat not simply to America and its allies, but — yes, it’s still with us — to the very idea of freedom itself. 
That assertion comes with an essential corollary dusted off and imported from the Cold War: the only hope of avoiding this cataclysmic outcome is for the United States to vigorously resist the terrorist/Islamist threat wherever it rears its ugly head. At least since September 11, 2001, and arguably for at least two decades prior to that date, U.S. policymakers have taken these propositions for granted. They have done so at least in part because few of the policy intellectuals specializing in national insecurity have bothered to question them. Indeed, those specialists insulate the state from having to address such questions. Think of them as intellectuals devoted to averting genuine intellectual activity. More or less like Herman Kahn and Albert Wohlstetter (or Dr. Strangelove), their function is to perpetuate the ongoing enterprise. The fact that the enterprise itself has become utterly amorphous may actually facilitate such efforts. Once widely known as the Global War on Terror, or GWOT, it has been transformed into the War with No Name. A little bit like the famous Supreme Court opinion on pornography: we can’t define it, we just know it when we see it, with ISIS the latest manifestation to capture Washington’s attention. All that we can say for sure about this nameless undertaking is that it continues with no end in sight. It has become a sort of slow-motion Vietnam, stimulating remarkably little honest reflection regarding its course thus far or prospects for the future. If there is an actual Brains Trust at work in Washington, it operates on autopilot. Today, the second- and third-generation bastard offspring of RAND that clutter northwest Washington — the Center for this, the Institute for that — spin their wheels debating latter day equivalents of Strategic Hamlets, with nary a thought given to more fundamental concerns. What prompts these observations is Ashton Carter’s return to the Pentagon as President Obama’s fourth secretary of defense. 
Carter himself is an action intellectual in the Bundy, Rostow, Huntington mold, having made a career of rotating between positions at Harvard and in “the Building.” He, too, is a Yalie and a Rhodes scholar, with a PhD. from Oxford. “Ash” — in Washington, a first-name-only identifier (“Henry,” “Zbig,” “Hillary”) signifies that you have truly arrived — is the author of books and articles galore, including one op-ed co-written with former Secretary of Defense William Perry back in 2006 calling for preventive war against North Korea. Military action “undoubtedly carries risk,” he bravely acknowledged at the time. “But the risk of continuing inaction in the face of North Korea’s race to threaten this country would be greater” — just the sort of logic periodically trotted out by the likes of Herman Kahn and Albert Wohlstetter. As Carter has taken the Pentagon’s reins, he also has taken pains to convey the impression of being a big thinker. As one Wall Street Journal headline enthused, “Ash Carter Seeks Fresh Eyes on Global Threats.” That multiple global threats exist and that America’s defense secretary has a mandate to address each of them are, of course, givens. His predecessor Chuck Hagel (no Yale degree) was a bit of a plodder. By way of contrast, Carter has made clear his intention to shake things up. So on his second day in office, for example, he dined with Kenneth Pollack, Michael O’Hanlon, and Robert Kagan, ranking national insecurity intellectuals and old Washington hands one and all. Besides all being employees of the Brookings Institution, the three share the distinction of having supported the Iraq War back in 2003 and calling for redoubling efforts against ISIS today. For assurances that the fundamental orientation of U.S. policy is sound — we just need to try harder — who better to consult than Pollack, O’Hanlon, and Kagan (any Kagan)? Was Carter hoping to gain some fresh insight from his dinner companions? 
Or was he letting Washington’s clubby network of fellows, senior fellows, and distinguished fellows know that, on his watch, the prevailing verities of national insecurity would remain sacrosanct? You decide. Soon thereafter, Carter’s first trip overseas provided another opportunity to signal his intentions. In Kuwait, he convened a war council of senior military and civilian officials to take stock of the campaign against ISIS. In a daring departure from standard practice, the new defense secretary prohibited PowerPoint briefings. One participant described the ensuing event as “a five-hour-long college seminar” — candid and freewheeling. “This is reversing the paradigm,” one awed senior Pentagon official remarked. Carter was said to be challenging his subordinates to “look at this problem differently.” Of course, Carter might have said, “Let’s look at a different problem.” That, however, was far too radical to contemplate — the equivalent of suggesting back in the 1960s that assumptions landing the United States in Vietnam should be reexamined. In any event — and to no one’s surprise — the different look did not produce a different conclusion. Instead of reversing the paradigm, Carter affirmed it: the existing U.S. approach to dealing with ISIS is sound, he announced. It only needs a bit of tweaking — just the result to give the Pollacks, O’Hanlons, and Kagans something to write about as they keep up the chatter that substitutes for serious debate. Do we really need that chatter? Does it enhance the quality of U.S. policy? If policy/defense/action intellectuals fell silent would America be less secure? Let me propose an experiment. Put them on furlough. Not permanently — just until the last of the winter snow finally melts in New England. Send them back to Yale for reeducation. Let’s see if we are able to make do without them even for a month or two. In the meantime, invite Iraq and Afghanistan War vets to consider how best to deal with ISIS. 
Turn the op-ed pages of major newspapers over to high school social studies teachers. Book English majors from the Big Ten on the Sunday talk shows. Who knows what tidbits of wisdom might turn up?
These 2 cards address all of the “predictions good” arguments that the earlier ev neglected.
Now, you are in a debate and the other side says predictions good- what do you do?
Well, you know the ev you have, and you know the ev the aff can read- this should all be planned out.
I would read the complexity card, and then one or both of the policy cards
I would read Lockman since it’s a more critical indict of the project of predictions, and then the complexity one
Vs. Generic ev about predictions working
I would read complexity, and the econ one
Now, how you set this up is a preference issue- maybe you want 25 blocks for predictions, maybe you want 1 block with options you can pick from- there is no right or wrong answer, and given paperless, if you want 400 copies of the same card in your doc, that’s your prerogative