Notes on the Paradigm Crisis

Moderators: Elvis, DrVolin, Jeff

Notes on the Paradigm Crisis

Postby Wombaticus Rex » Wed Apr 18, 2012 12:05 pm


American Society for Microbiology: ‘Has Modern Science Become Dysfunctional?’
March 29th, 2012

The recent explosion in the number of retractions in scientific journals is just the tip of the iceberg and a symptom of a greater dysfunction that has been evolving in the world of biomedical research, say the editors-in-chief of two prominent journals in a presentation before a committee of the National Academy of Sciences (NAS) today.

“Incentives have evolved over the decades to encourage some behaviors that are detrimental to good science,” says Ferric Fang, editor-in-chief of the journal Infection and Immunity, a publication of the American Society for Microbiology (ASM), who is speaking today at the meeting of the Committee of Science, Technology, and Law of the NAS along with Arturo Casadevall, editor-in-chief of mBio®, the ASM’s online, open-access journal.

In the past decade the number of retraction notices in scientific journals has increased more than 10-fold, while the number of journal articles published has increased by only 44%. While retractions still represent a very small percentage of the total, the increase is disturbing because it undermines society’s confidence in scientific results and in public policy decisions that are based on those results, says Casadevall. Some of the retractions are due to simple error, but many are the result of misconduct, including falsification of data and plagiarism.
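The implication of those two figures can be checked with back-of-envelope arithmetic, using only the numbers quoted above:

```python
# Back-of-envelope check of the figures quoted above:
# retraction notices up more than 10-fold, published articles up 44%.
retraction_growth = 10.0   # factor increase in retraction notices
article_growth = 1.44      # factor increase in articles published

# Growth in retractions *per article published* over the decade
rate_growth = retraction_growth / article_growth
print(f"Retraction rate per article grew roughly {rate_growth:.1f}x")
```

In other words, even after correcting for the growth in publishing volume, the per-article retraction rate rose roughly sevenfold.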

More concerning, say the editors, is that this trend may be a symptom of a growing dysfunction in the biomedical sciences, one that needs to be addressed soon. At the heart of the problem is an economic incentive system fueling a hypercompetitive environment that is fostering poor scientific practices, including frank misconduct.

Via: ... als_120418

Retraction Crisis Hits Scientific Journals
Wednesday, April 18, 2012

Three scientific journals have published articles over the past two years warning of the rise in retractions and misconduct by researchers who have fudged results.

The latest publication to do so was Infection and Immunity, which revealed it had been duped repeatedly by the same scientist, Naoki Mori of the University of the Ryukyus in Japan, who had also published questionable findings in other papers.

A former editor of the publication, Dr. Arturo Casadevall, blamed “a winner-take-all game” in science today that has created “perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct,” according to The New York Times.

Another journal, Nature, reported last year a tenfold increase in retractions over the past decade, even though the number of published papers increased by only 44%. Before that, the Journal of Medical Ethics published a study in 2010 attributing the rise in recent retractions to misconduct and “honest scientific mistakes.” It calculated that the number of retractions had more than tripled, from 50 in 2005 to 180 in 2009.

Dr. Ferric Fang, the editor-in-chief of Infection and Immunity, pointed out that the increased competition for jobs may be a major contributing factor to the falsification problem. According to The New York Times, “In 1973, more than half of biologists had a tenure-track job within six years of getting a Ph.D. By 2006 the figure was down to 15 percent.”

Via: ... 2P20120328

In cancer science, many "discoveries" don't hold up

(Reuters) - A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."

The failure to win "the war on cancer" has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.

Begley's experience echoes a report from scientists at Bayer AG last year. Neither group of researchers alleges fraud, nor would they identify the research they had tried to replicate.

But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

George Robertson of Dalhousie University in Nova Scotia previously worked at Merck on neurodegenerative diseases such as Parkinson's. While at Merck, he also found many academic studies that did not hold up.

"It drives people in industry crazy. Why are we seeing a collapse of the pharma and biotech industries? One possibility is that academia is not providing accurate findings," he said.


Over the last two decades, the most promising route to new cancer drugs has been one pioneered by the discoverers of Gleevec, the Novartis drug that targets a form of leukemia, and Herceptin, Genentech's breast-cancer drug. In each case, scientists discovered a genetic change that turned a normal cell into a malignant one. Those findings allowed them to develop a molecule that blocks the cancer-producing process.

This approach led to an explosion of claims of other potential "druggable" targets. Amgen tried to replicate the new papers before launching its own drug-discovery projects.

Scientists at Bayer did not have much more success. In a 2011 paper titled, "Believe it or not," they analyzed in-house projects that built on "exciting published data" from basic science studies. "Often, key data could not be reproduced," wrote Khusru Asadullah, vice president and head of target discovery at Bayer HealthCare in Berlin, and colleagues.

Of 47 cancer projects at Bayer during 2011, less than one-quarter could reproduce previously reported findings, despite the efforts of three or four scientists working full time for up to a year. Bayer dropped the projects.

Bayer and Amgen found that the prestige of a journal was no guarantee a paper would be solid. "The scientific community assumes that the claims in a preclinical study can be taken at face value," Begley and Lee Ellis of MD Anderson Cancer Center wrote in Nature. It assumes, too, that "the main message of the paper can be relied on ... Unfortunately, this is not always the case."

When the Amgen replication team of about 100 scientists could not confirm reported results, they contacted the authors. Those who cooperated discussed what might account for the inability of Amgen to confirm the results. Some let Amgen borrow antibodies and other materials used in the original study or even repeat experiments under the original authors' direction.

Some authors required the Amgen scientists to sign a confidentiality agreement barring them from disclosing data at odds with the original findings. "The world will never know" which 47 studies -- many of them highly cited -- are apparently wrong, Begley said.

The most common response by the challenged scientists was: "you didn't do it right." Indeed, cancer biology is fiendishly complex, noted Phil Sharp, a cancer biologist and Nobel laureate at the Massachusetts Institute of Technology.

Even in the most rigorous studies, the results might be reproducible only in very specific conditions, Sharp explained: "A cancer cell might respond one way in one set of conditions and another way in different conditions. I think a lot of the variability can come from that."


Other scientists worry that something less innocuous explains the lack of reproducibility.

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

Such selective publication is just one reason the scientific literature is peppered with incorrect results.

For one thing, basic science studies are rarely "blinded" the way clinical trials are. That is, researchers know which cell line or mouse got a treatment or had cancer. That can be a problem when data are subject to interpretation, as a researcher who is intellectually invested in a theory is more likely to interpret ambiguous evidence in its favor.

The problem goes beyond cancer.

On Tuesday, a committee of the National Academy of Sciences heard testimony that the number of scientific papers that had to be retracted increased more than tenfold over the last decade; the number of journal articles published rose only 44 percent.

Ferric Fang of the University of Washington, speaking to the panel, said he blamed a hypercompetitive academic environment that fosters poor science and even fraud, as too many researchers compete for diminishing funding.

"The surest ticket to getting a grant or job is getting published in a high-profile journal," said Fang. "This is an unhealthy belief that can lead a scientist to engage in sensationalism and sometimes even dishonest behavior."

The academic reward system discourages efforts to ensure a finding was not a fluke. Nor is there an incentive to verify someone else's discovery. As recently as the late 1990s, most potential cancer-drug targets were backed by 100 to 200 publications. Now each may have fewer than half a dozen.

"If you can write it up and get it published you're not even thinking of reproducibility," said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. "You make an observation and move on. There is no incentive to find out it was wrong."
User avatar
Wombaticus Rex
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Wed Apr 18, 2012 12:15 pm

Via: Harvard Fucking Business Review, No Lie

There Is No Invisible Hand

One of the best-kept secrets in economics is that there is no case for the invisible hand. After more than a century trying to prove the opposite, economic theorists investigating the matter finally concluded in the 1970s that there is no reason to believe markets are led, as if by an invisible hand, to an optimal equilibrium — or any equilibrium at all. But the message never got through to their supposedly practical colleagues who so eagerly push advice about almost anything. Most never even heard what the theorists said, or else resolutely ignored it.

Of course, the dynamic but turbulent history of capitalism belies any invisible hand. The financial crisis that erupted in 2008 and the debt crises threatening Europe are just the latest evidence. Having lived in Mexico in the wake of its 1994 crisis and studied its politics, I just saw the absence of any invisible hand as a practical fact. What shocked me, when I later delved into economic theory, was to discover that, at least on this matter, theory supports practical evidence.

Adam Smith suggested the invisible hand in an otherwise obscure passage in his Inquiry Into the Nature and Causes of the Wealth of Nations in 1776. He mentioned it only once in the book, while he repeatedly noted situations where "natural liberty" does not work. Let banks charge much more than 5% interest, and they will lend to "prodigals and projectors," precipitating bubbles and crashes. Let "people of the same trade" meet, and their conversation turns to "some contrivance to raise prices." Let market competition continue to drive the division of labor, and it produces workers as "stupid and ignorant as it is possible for a human creature to become."

In the 1870s, academic economists began seriously trying to build "general equilibrium" models to prove the existence of the invisible hand. They hoped to show that market trading among individuals, pursuing self-interest, and firms, maximizing profit, would lead an economy to a stable and optimal equilibrium.

Leon Walras, of the University of Lausanne in Switzerland, thought he had succeeded in 1874 with his Elements of Pure Economics, but economists concluded that he had fallen far short. Finally, in 1954, Kenneth Arrow, at Stanford, and Gerard Debreu, at the Cowles Commission at Yale, developed the canonical "general-equilibrium" model, for which they later won the Nobel Prize. Making assumptions to characterize competitive markets, they proved that there exists some set of prices that would balance supply and demand for all goods. However, no one ever showed that some invisible hand would actually move markets toward that level. It is just a situation that might balance supply and demand if by happenstance it occurred.

In 1960 Herbert Scarf of Yale showed that an Arrow-Debreu economy can cycle unstably. The picture steadily darkened. Seminal papers in the 1970s, one authored by Debreu, eliminated "any last forlorn hope," as the MIT theorist Franklin Fisher says, of proving that markets would move an economy toward equilibrium. Frank Hahn, a prominent Cambridge University theorist, sums up the matter: "We have no good reason to suppose that there are forces which lead the economy to equilibrium."
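The kind of instability Scarf demonstrated can be illustrated with a toy tâtonnement (price-adjustment) simulation. The excess-demand function below is invented for illustration — it satisfies Walras’ law (p · z(p) = 0) but is not Scarf’s actual three-good economy — yet it exhibits the same qualitative behavior: prices orbit the equilibrium indefinitely instead of converging to it.

```python
# Toy tatonnement: adjust prices in the direction of excess demand.
# The excess-demand function z is invented for illustration (it is NOT
# Scarf's construction), but it obeys Walras' law: p . z(p) = 0.
def excess_demand(p):
    p1, p2, p3 = p
    return (p2 - p3, p3 - p1, p1 - p2)   # p . z(p) = 0 identically

def tatonnement(p, step=0.01, iters=1000):
    for _ in range(iters):
        z = excess_demand(p)
        p = tuple(pi + step * zi for pi, zi in zip(p, z))
    return p

p0 = (0.5, 0.3, 0.2)                 # initial prices, summing to 1
eq = (1/3, 1/3, 1/3)                 # the unique equilibrium (z = 0)
dist = lambda p: sum((pi - ei)**2 for pi, ei in zip(p, eq)) ** 0.5

p_final = tatonnement(p0)
# Prices stay on the simplex (sum stays 1) but never approach equilibrium:
print(round(sum(p_final), 6), dist(p_final) >= dist(p0))
```

The Jacobian of this adjustment process has purely imaginary eigenvalues, so the equilibrium is a center, not an attractor: the "invisible hand" never pulls prices in.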

An engineering analogy may help. The invisible hand sees market economies as passenger planes, which, for all the miseries of air travel, are aerodynamically stable. Buffeted by turbulence, they just settle back into a slightly different flight path. General-equilibrium theory, as it developed in the 1960s and 1970s, suggests that economies are more like fighter jets. Buffeted by a gust, they wouldn't just settle into a slightly different path but would spin out of control and break asunder if "fly-by-wire" computer guidance systems did not continually redirect them to avert disaster.

Economists might call the fighter-jet analogy polemic, but no knowledgeable theorist would say that the so-called "general equilibrium" model is stable. The very word "equilibrium" is deeply misleading in this context because it describes a situation that is not an equilibrium, either in plain English or in engineering. Economic equilibrium — a stable state toward which an economy would move — reveals a hope on the part of economists, not a mechanism captured in an accepted model. Speaking of "equilibrium" allowed economists to fool themselves, and others.

The failure to model the invisible hand is ironically powerful. Any given economic model might well be implausible. But if the brightest economic minds failed for a century to show how some invisible hand could move markets toward equilibrium, can any such mechanism exist? Something outside markets — social norms, economic regulation, Ben Bernanke in his happier moments — must usually avert disaster.

How can some economic models continue to assume stability? Arrow-Debreu treats each individual, firm, and good as distinct. Supposedly practical economists develop models that aggregate — homogenize. They aggregate corn, iPods, and haircuts into one uniform quantity of stuff that they call "commodities" and label "Y." And they lump all diverse individuals into one "representative agent." You can easily build stability into such a model by pure assumption. But it is pure assumption. How could decentralized trading move markets to equilibrium if there is only one good?

In a tribute to academic insularity, most supposedly practical economists are dimly aware, if at all, of theorists' instability results. They might have briefly seen them in one theory course and ignored them as geeky and inconvenient. Others dismiss them. Milton Friedman once told Franklin Fisher he saw no point in studying the stability of general equilibrium because the economy is obviously stable — and if it isn't, "we are all wasting our time." Fisher quips that the point about economists' wasting their time was perceptive. The point about economies being obviously stable was not perceptive.

Believing far too credulously in an invisible hand, the Federal Reserve failed to see the subprime crisis coming. The principal models it used literally assumed that markets are always in instantaneous equilibrium, so how could a crisis occur? But after the crisis exploded, the Fed dropped its high-tech invisible-hand models and responded with full force to support the economy.

The powerful invisible-hand metaphor refused to die. It assured German Chancellor Angela Merkel, even though she grew up in East Germany under Communism, that slashing fiscal budgets and deregulating labor markets would end the euro crisis. Based on thinking dimmed by some invisible-hand fancy, European authorities have again and again been a day late and a euro short in responding to market gales. As a result, they made the euro crisis far worse than it had to be.

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Wed Apr 18, 2012 12:53 pm

Via: ... physicist/

Exponential Economist Meets Finite Physicist

Some while back, I found myself sitting next to an accomplished economics professor at a dinner event. Shortly after pleasantries, I said to him, “economic growth cannot continue indefinitely,” just to see where things would go. It was a lively and informative conversation. I was somewhat alarmed by the disconnect between economic theory and physical constraints—not for the first time, but here it was up-close and personal. Though my memory is not keen enough to recount our conversation verbatim, I thought I would at least try to capture the key points and convey the essence of the tennis match—with some entertainment value thrown in.

Physicist: I’ve been thinking a bit about growth and want to run an idea by you. I claim that economic growth cannot continue indefinitely.

Economist: [chokes on bread crumb] Did I hear you right? Did you say that growth cannot continue forever?

Physicist: That’s right. I think physical limits assert themselves.

Economist: Well sure, nothing truly lasts forever. The sun, for instance, will not burn forever. On the billions-of-years timescale, things come to an end.

Physicist: Granted, but I’m talking about a more immediate timescale, here on Earth. Earth’s physical resources—particularly energy—are limited and may prohibit continued growth within centuries, or possibly much shorter depending on the choices we make. There are thermodynamic issues as well.

Economist: I don’t think energy will ever be a limiting factor to economic growth. Sure, conventional fossil fuels are finite. But we can substitute non-conventional resources like tar sands, oil shale, shale gas, etc. By the time these run out, we’ll likely have built up a renewable infrastructure of wind, solar, and geothermal energy—plus next-generation nuclear fission and potentially nuclear fusion. And there are likely energy technologies we cannot yet fathom in the farther future.

Physicist: Sure, those things could happen, and I hope they do at some non-trivial scale. But let’s look at the physical implications of the energy scale expanding into the future. So what’s a typical rate of annual energy growth over the last few centuries?

Economist: I would guess a few percent. Less than 5%, but at least 2%, I should think.


Physicist: Right, if you plot the U.S. energy consumption in all forms from 1650 until now, you see a phenomenally faithful exponential at about 3% per year over that whole span. The situation for the whole world is similar. So how long do you think we might be able to continue this trend?

... Before we tackle that, we’re too close to an astounding point for me to leave it unspoken. At that 2.3% growth rate, we would be using energy at a rate corresponding to the total solar input striking Earth in a little over 400 years. We would consume something comparable to the output of the entire sun 1400 years from now. By 2500 years, we would use energy at the rate of the entire Milky Way galaxy—100 billion stars! I think you can see the absurdity of continued energy growth. 2500 years is not that long, from a historical perspective. We know what we were doing 2500 years ago. I think I know what we’re not going to be doing 2500 years hence.
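Those timescales follow from straightforward exponential arithmetic. A sketch, assuming roughly 12 TW of current world energy use (the figure the original blog works from; the luminosities are standard astronomical values):

```python
import math

# Years for consumption growing at 2.3%/yr to reach a target power level,
# assuming ~12 TW of current world energy use.
def years_to_reach(target_W, current_W=12e12, rate=0.023):
    return math.log(target_W / current_W) / math.log(1 + rate)

SOLAR_INPUT_EARTH = 1.74e17           # W, total sunlight striking Earth
SUN_OUTPUT        = 3.85e26           # W, total solar luminosity
MILKY_WAY         = 1e11 * SUN_OUTPUT # ~100 billion Sun-like stars

print(round(years_to_reach(SOLAR_INPUT_EARTH)))  # ~420: "a little over 400"
print(round(years_to_reach(SUN_OUTPUT)))         # ~1400 years
print(round(years_to_reach(MILKY_WAY)))          # ~2500 years
```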

Economist: That’s really remarkable—I appreciate the detour. You said about 1400 years to reach parity with solar output?

Physicist: Right. And you can see the thermodynamic point in this scenario as well. If we tried to generate energy at a rate commensurate with that of the Sun in 1400 years, and did this on Earth, physics demands that the surface of the Earth must be hotter than the (much larger) surface of the Sun. Just like 100 W from a light bulb results in a much hotter surface than the same 100 W you and I generate via metabolism, spread out across a much larger surface area.
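That claim follows from the Stefan–Boltzmann law: a surface radiating power P from area A must sit at temperature T = (P / σA)^(1/4). A quick check, treating Earth as a blackbody radiating the Sun’s entire output from its own surface area:

```python
import math

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)
SUN_OUTPUT = 3.85e26     # W, total solar luminosity
R_EARTH = 6.371e6        # m, Earth's radius
earth_area = 4 * math.pi * R_EARTH**2   # ~5.1e14 m^2

# Temperature needed to radiate the Sun's power from Earth's surface area
T = (SUN_OUTPUT / (SIGMA * earth_area)) ** 0.25
print(round(T))          # ~60,000 K, roughly ten times the Sun's ~5,800 K
```

Because Earth’s surface area is vastly smaller than the Sun’s, radiating the same power requires a far higher temperature — the physicist’s point exactly.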


Economist: So I’m as convinced as I need to be that growth in raw energy use is a limited proposition—that we must one day at the very least stabilize to a roughly constant yearly expenditure. At least I’m willing to accept that as a starting point for discussing the long term prospects for economic growth. But coming back to your first statement, I don’t see that this threatens the indefinite continuance of economic growth.

For one thing, we can keep energy use fixed and still do more with it in each passing year via efficiency improvements. Innovations bring new ideas to the market, spurring investment, market demand, etc. These are things that will not run dry. We have plenty of examples of fundamentally important resources in decline, only to be substituted or rendered obsolete by innovations in another direction.

Physicist: Yes, all these things happen, and will continue at some level. But I am not convinced that they represent limitless resources.

Economist: Do you think ingenuity has a limit—that the human mind itself is only so capable? That could be true, but we can’t credibly predict how close we might be to such a limit.

Physicist: That’s not really what I have in mind. Let’s take efficiency first. It is true that, over time, cars get better mileage, refrigerators use less energy, buildings are built more smartly to conserve energy, etc. The best examples tend to see factor-of-two improvements on a 35-year timeframe, translating to 2% per year. But many things are already as efficient as we can expect them to be. Electric motors are a good example, at 90% efficiency. It will always take 4184 joules to heat a liter of water one degree Celsius. In the middle range, we have giant consumers of energy—like power plants—improving much more slowly, at 1% per year or less. And these middling things tend to be something like 30% efficient. How many more “doublings” are possible? If many of our devices were 0.01% efficient, I would be more enthusiastic about centuries of efficiency-based growth ahead of us. But we may only have one more doubling in us, taking less than a century to realize.
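The numbers in that exchange are easy to verify; a sketch using the figures quoted above (a factor-of-two gain over 35 years, and a ~30% starting efficiency with 100% as a hard ceiling):

```python
import math

# A factor-of-two efficiency gain over 35 years implies this annual rate:
annual_rate = 2 ** (1 / 35) - 1          # ~0.02, i.e. ~2% per year

# Starting from ~30% efficiency, 100% is a hard physical ceiling, so the
# number of remaining doublings is bounded:
doublings_left = math.log2(1.0 / 0.30)   # ~1.7 doublings at most
years_left = doublings_left * 35         # ~60 years: under a century

print(round(annual_rate, 3), round(doublings_left, 2), round(years_left))
```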

Economist: Okay, point taken. But there is more to efficiency than incremental improvement. There are also game-changers. Tele-conferencing instead of air travel. Laptop replaces desktop; iPhone replaces laptop, etc.—each far more energy frugal than the last. The internet is an example of an enabling innovation that changes the way we use energy.

Physicist: These are important examples, and I do expect some continuation along this line, but we still need to eat, and no activity can get away from energy use entirely. [semi-reluctant nod/bobble] Sure, there are lower-intensity activities, but nothing of economic value is completely free of energy.

Economist: Some things can get awfully close. Consider virtualization. Imagine that in the future, we could all own virtual mansions and have our every need satisfied: all by stimulative neurological trickery. We would still need nutrition, but the energy required to experience a high-energy lifestyle would be relatively minor. This is an example of enabling technology that obviates the need to engage in energy-intensive activities. Want to spend the weekend in Paris? You can do it without getting out of your chair.

(More like an IV-drip-equipped toilet than a chair, the physicist thinks.)

Physicist: I see. But this is still a finite expenditure of energy per person. Not only does it take energy to feed the person (today at a rate of 10 kilocalories of energy input per kilocalorie eaten, no less), but the virtual environment probably also requires a supercomputer—by today’s standards—for every virtual voyager. The supercomputer at UCSD consumes something like 5 MW of power. Granted, we can expect improvement on this end, but today’s supercomputer eats 50,000 times as much as a person does, so there is a big gulf to cross. I’ll take some convincing. Plus, not everyone will want to live this virtual existence.


Physicist: But let’s leave the Matrix, and cut to the chase. Let’s imagine a world of steady population and steady energy use. I think we’ve both agreed on these physically-imposed parameters. If the flow of energy is fixed, but we posit continued economic growth, then GDP continues to grow while energy remains at a fixed scale. This means that energy—a physically-constrained resource, mind—must become arbitrarily cheap as GDP continues to grow and leave energy in the dust.

Economist: Yes, I think energy plays a diminishing role in the economy and becomes too cheap to worry about.

Physicist: Wow. Do you really believe that? A physically limited resource (read scarcity) that is fundamental to every economic activity becomes arbitrarily cheap? [turns attention to food on the plate, somewhat stunned]

Economist: [after pause to consider] Yes, I do believe that.

Physicist: Okay, so let’s be clear that we’re talking about the same thing. Energy today is roughly 10% of GDP. Let’s say we cap the physical amount available each year at some level, but allow GDP to keep growing. We need to ignore inflation as a nuisance in this case: if my 10 units of energy this year costs $10,000 out of my $100,000 income; then next year that same amount of energy costs $11,000 and I make $110,000—I want to ignore such an effect as “meaningless” inflation: the GDP “growth” in this sense is not real growth, but just a re-scaling of the value of money.

Economist: Agreed.

Physicist: Then in order to have real GDP growth on top of flat energy, the fractional cost of energy goes down relative to the GDP as a whole.

Economist: Correct.

Physicist: How far do you imagine this can go? Will energy get to 1% of GDP? 0.1%? Is there a limit?

Economist: There does not need to be. Energy may become of secondary importance in the economy of the future—like in the virtual world I illustrated.

Physicist: But if energy became arbitrarily cheap, someone could buy all of it, and suddenly the activities that comprise the economy would grind to a halt. Food would stop arriving at the plate without energy for purchase, so people would pay attention to this. Someone would be willing to pay more for it. Everyone would. There will be a floor to how low energy prices can go as a fraction of GDP.

Economist: That floor may be very low: much lower than the 5–10% we pay today.

Physicist: But is there a floor? How low are you willing to take it? 5%? 2%? 1%?

Economist: Let’s say 1%.

Physicist: So once our fixed annual energy costs 1% of GDP, the 99% remaining will find itself stuck. If it tries to grow, energy prices must grow in proportion and we have monetary inflation, but no real growth.

Economist: Well, I wouldn’t go that far. You can still have growth without increasing GDP.

Physicist: But it seems that you are now sold on the notion that the cost of energy would not naturally sink to arbitrarily low levels.

Economist: Yes, I have to retract that statement. If energy is indeed capped at a steady annual amount, then it is important enough to other economic activities that it would not be allowed to slip into economic obscurity.
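The endgame the two have just agreed on can be quantified. With energy use fixed and real GDP growing at rate g, energy’s share of GDP shrinks as e^(−gt); a sketch of how long the squeeze from today’s ~10% down to the conceded 1% floor would take (the 2% growth rate is an assumed illustrative value):

```python
import math

def years_until_share(start_share, floor_share, growth_rate):
    # With energy fixed and GDP growing at growth_rate, energy's share of
    # GDP decays exponentially; solve start * exp(-g * t) = floor for t.
    return math.log(start_share / floor_share) / growth_rate

# From the ~10% of GDP energy costs today down to the 1% floor,
# at an assumed 2% annual real growth rate:
print(round(years_until_share(0.10, 0.01, 0.02)))   # ~115 years of growth
```

On this arithmetic, even granting the economist’s 1% floor, a growth-based economy has roughly a century of headroom before the physicist’s constraint binds.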

Physicist: Even early economists like Adam Smith foresaw economic growth as a temporary phase lasting maybe a few hundred years, ultimately limited by land (which is where energy was obtained in that day). If humans are successful in the long term, it is clear that a steady-state economic theory will far outlive the transient growth-based economic frameworks of today. Forget Smith, Keynes, Friedman, and that lot. The economists who devise a functioning steady-state economic system stand to be remembered for a longer eternity than the growth dudes. [Economist stares into the distance as he contemplates this alluring thought.]

I recently was motivated to read a real economics textbook: one written by people who understand and respect physical limitations. The book, called Ecological Economics, by Herman Daly and Joshua Farley, states in its Note to Instructors:

…we do not share the view of many of our economics colleagues that growth will solve the economic problem, that narrow self-interest is the only dependable human motive, that technology will always find a substitute for any depleted resource, that the market can efficiently allocate all types of goods, that free markets always lead to an equilibrium balancing supply and demand, or that the laws of thermodynamics are irrelevant to economics.


Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Wed Apr 18, 2012 8:54 pm

Via: ... _page=true

“I—China—want to be the Godzilla of Asia, because that’s the only way for me—China—to survive! I don’t want the Japanese violating my sovereignty the way they did in the 20th century. I can’t trust the United States, since states can never be certain about other states’ intentions. And as good realists, we—the Chinese—want to dominate Asia the way the Americans have dominated the Western Hemisphere.” John J. Mearsheimer, the R. Wendell Harrison Distinguished Service Professor of Political Science at the University of Chicago, races on in a mild Brooklyn accent, banging his chalk against the blackboard and erasing with his bare hand, before two dozen graduate students in a three-hour seminar titled “Foundations of Realism.”

Mearsheimer writes ANARCHY on the board, explaining that the word does not refer to chaos or disorder. “It simply means that there is no centralized authority, no night watchman or ultimate arbiter, that stands above states and protects them.” (The opposite of anarchy, he notes, borrowing from Columbia University’s Kenneth Waltz, is hierarchy, which is the ordering principle of domestic politics.) Then he writes the uncertainty of intentions and explains: the leaders of one great power in this anarchic jungle of a world can never know what the leaders of a rival great power are thinking. Fear is dominant. “This is the tragic essence of international politics,” he thunders. “It provides the basis for realism, and people hate people like me, who point this out!” Not finished, he adds: “The uncertainty of intentions is my Sunday punch in defense of realism, whenever realism is attacked.”

Re: Notes on the Paradigm Crisis

Postby bks » Wed Apr 18, 2012 10:19 pm

Great stuff! Please keep posting related info here as you find/digest it.
Posts: 1093
Joined: Thu Jul 19, 2007 2:44 am
Blog: View Blog (0)

Re: Notes on the Paradigm Crisis

Postby JackRiddler » Thu May 10, 2012 9:57 am

Economics is the most obvious case of models crumbling. It was always obvious. The dominant thinking in that discipline has never been a science and has rarely been empirical. It's always been ideology and, in practice, a fraud in the service of power. (Where did that excellent piece about professional life go? Do you remember?) This is where the fight is underway and going to be most fateful to our age: in creating an economics that proceeds from a foundation in ecology; that acknowledges that all of our institutions are creations and conventions of our agency, and not functions of natural law; and that finally defines and understands money in a way that everyone can share and command. I've started studying modern monetary theory, here: ... rimer.html

Thread soon!

Physics, I have no idea where it will go, but it's clear enough that present understandings of cosmology will one day be junked and viewed as the last gasp of Genesis. All that we see will once again be understood to be within a horizon, not the whole universe, and the observation of red shift will cease to be viewed as sufficient evidence to extrapolate everything all the way back to a single pinprick waiting for God's command to let light be.

M.J. Disney: The Case Against Cosmology

But it will happen at some cost to those from within that community who first rise to the challenge.

Biology is sensitive because of the attack from the forces of endarkenment on evolution as history, and their attempt to seize and damage the culture as a whole. I'm thinking of the children, here! Nevertheless the theory of it is already in upheaval. Here's an interesting article on that. I shall not copy-paste the whole interview but just the top and the key parts on the paradigm shift itself, in Shapiro's view. (Go to the link to catch all the interesting politics they also discuss...) ... hift/print

May 07, 2012

An Interview With James Shapiro
The Evolution Paradigm Shift


“Given the exemplary status of biological evolution, we can anticipate that a paradigm shift in our understanding of that subject will have repercussions far outside the life sciences. . . . How such an evolutionary paradigm shift will play out in the physical and social sciences remains to be seen. But it is possible to predict that the cognitive (psychological) and social sciences will have an increased influence on biology, especially when it comes to the acquisition and processing of information.”

–James A. Shapiro, Evolution: A View from the 21st Century

I called University of Chicago microbiologist James Shapiro, who’s now also blogging on HuffPost about science, to arrange an interview after noticing that we’d both recently been bashed by Darwinist Jerry Coyne in the same column. I reached Shapiro at home. He was engaging, although he described himself as a “reclusive person” — which he says he finds key to serious thinking. The commotion was over Shapiro’s book, Evolution: A View from the 21st Century, since Coyne, also a University of Chicago professor, has an evolution text he’d like to keep relevant. I decided to have a look at Shapiro’s book and see exactly why Coyne was agitated.


And now Shapiro on his findings and hypotheses:

James Shapiro: I became interested in biology as an undergraduate. The topic of evolution just kept coming up. As a research student at Cambridge, when I began to focus on mutagenesis, evolution was again right there because mutation was the source of the raw material of evolution. My first big lesson in evolutionary science was that the mutations I was studying in bacteria were unexpected and unpredicted. People had actually missed them because they accepted the prevailing view of mutations as just point mutations. Here was something quite different. Pieces of DNA inserting themselves in the genome.

Later, I found unexpectedly that starvation triggers a big increase in DNA rearrangements. I also observed some genome changes occurring in patterns in bacterial colonies. All of that gave me a lively interest in evolutionary subjects.

On paradigm shifts (Kuhn).

James Shapiro: I would say research based on theories that will be superseded is inevitable. I was quite struck when I read Thomas Kuhn who understood that. I was sitting by a swimming pool in the Dominican Republic at a meeting on plasmids and he was writing about 18th century chemistry and physics. As I was reading I was saying to myself — “Wow, that’s the way biology operates today.”

Kuhn captured something very quintessentially human about the scientific enterprise: that you inevitably never capture nature as it is. You only capture a portion of it that you can figure out and theorize about. And you go on exploring that portion of nature. For some period of time the explorations are extremely productive. But over time and as technology develops, partly as a consequence of what the scientific enterprise is doing, new phenomena come up and can’t be explained any longer in the same way. In the end there are always a group of people who defend the existing belief system more than is justified by the empirical observations.

Tension arises between those who say the empirical observations are telling us something different and those who defend the intellectual framework which led to those empirical observations. I am not immune to being unable to appreciate where new approaches can lead. For example, I was one of the people who initially thought genome sequencing was just an excuse for using technology without any idea of what we were going to find. I believed that people had run out of useful ideas for experimental biology and were doing DNA sequencing as a substitute. I was totally wrong about that. It turned out that sequencing has been extraordinarily revealing and far from a waste of time. No matter what kind of ideas lay behind it, it’s opened up a treasure trove of new ways of thinking about genomes and DNA in evolution.

So the answer to your question about the money is that money is always being spent based on ideas which are ultimately going to prove fallible. As I put it in a blog, if Newton couldn’t get it right, what hope is there for the rest of us? But it’s not a waste of time and money as long as the research is based on real empirical science, because the observations then lead to a more sophisticated way of thinking about things.

Suzan Mazur: So is science now without an acceptable explanation as to how evolution happened?

James Shapiro: No, I don’t think so. We see bits and pieces of the whole process. Certainly we have paleontological evidence. We have the comparative biology. It started off as comparative anatomy but it’s gone much farther than that, of course. All of this tells us about relationships. And now we have the genome evidence, which solidifies our view of the evolutionary relationships. It complicates the picture, but it adds an element — which is the one I’ve been focusing on — the process of genome change itself, which is critical. That is what I call “natural genetic engineering.”

His book.


The production of the book was fine. The book cover is striking. I found that picture of the mimetic moth. FTPress wanted to put an iguana on the cover. I said an iguana was too traditional. Theirs was a beautiful iguana, but it was still an iguana. And I thought the moth would say a lot more than the iguana about some of the mysteries that need to be explained in evolution.

The moth cover was ultimately chosen because the kind of exquisite mimicry it represents is an evolutionary puzzle.

How does that come about? I think gradualist explanations are difficult to sustain in the case of mimicry. Recently it’s been discovered that there are master control regions, sort of like Hox complexes but more complicated, that control wing patterns in butterflies. I suspect as people analyze those we’ll know more about how the mimetic patterns evolved.

The book hasn’t been reviewed by any of the major journals yet. Nature and Science have not reviewed it. The National Center for Science Education is reviewing it in June. I’m interested to see whether they want to show that evolution science is alive and doing novel and controversial things.

The gist of his theoretical thinking:

James Shapiro: There are three components there.

(1) As I say in the book, cells do not act blindly. We know from physiology and biochemistry and molecular biology that cells are full of receptors. They monitor what goes on outside. They monitor what goes on inside. And they’re continually taking in that information and using it to adjust their actions, their biochemistry, their metabolism, the cell cycle, etc., so that things come out right. That’s why I use the word cognitive to apply to cells, meaning they do things based on knowledge of what’s happening around them and inside of them. Without that knowledge and the systems to use that knowledge they couldn’t proliferate and survive as efficiently as they do.

(2) We’ve learned a great deal about hereditary variation through molecular genetics studies. I was personally involved in this back in the late 60s and 70s and since then we’ve learned about a wide variety of biochemical systems that cells use to restructure their genomes as an active process. Genome change is not the result of accidents. If you have accidents and they’re not fixed, the cells die. It’s in the course of fixing damage or responding to damage or responding to other inputs — in the case I studied, it was starvation — that cells turn on the systems they have for restructuring their genomes. So what we have is something different from accidents and mistakes as a source of genetic change. We have what I call “natural genetic engineering.” Cells are acting on their own genomes in a large variety of well-defined non-random ways to bring about change.

This is consistent with what Barbara McClintock first discovered in the 30s when she was studying chromosome repair and then later in the 40s when her experiments uncovered transposable elements. All of these natural genetic engineering systems are regulated or sensitive to biological inputs. That sensitivity is what we’ve learned about cell regulation in general. As I say, cells don’t act blindly, and they don’t act blindly when they change their genomes.

(3) So if genetic change is not a series of accidents and not a series of necessarily small changes, then how does it work out in evolution? That’s where the DNA record from genome sequencing comes in and confirms what many of us had argued for a long time: namely, all of these systems of genetic change, of natural genetic engineering, have played a major role in evolutionary change. We have a new view of how cells operate in evolution, which is much more information technology friendly.

I think the first blog I put out was quoting a December 2011 paper where they went through the human genome using the 29 mammalian genomes that had recently been aligned. The authors concluded that, at a minimum, there were 280,000 different components, defined functional elements in the genome, that came from mobile genetic elements.

The point is that natural genetic engineering systems have played major roles in evolutionary change. We also see in the DNA record that evolutionary change has not just been a slow accumulation of random changes.

A good way of summarizing this is to compare the genome to storage systems in computers. The conventional view is that the genome is a read-only memory (ROM) system that changes only by copying errors. Incorporating what we have learned at the biochemical level about the cellular and molecular processes of DNA change, we can formulate a fundamentally different view. The contemporary idea is that the genome is a read-write (RW) storage system that changes by direct cell activity. How cell control circuits guide that change activity is the scientific issue of the moment.
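Shapiro's storage-system analogy lends itself to a toy sketch. Everything below is an illustrative invention (the class names, the stress trigger, the inserted segment are not from his book); it simply contrasts a genome that changes only by rare copying errors with one the cell actively rewrites in response to sensed inputs:

```python
import random

class ReadOnlyGenome:
    """Conventional (ROM) view: the genome changes only via rare,
    undirected copying errors during replication."""
    def __init__(self, seq, error_rate=1e-6):
        self.seq = list(seq)
        self.error_rate = error_rate

    def replicate(self):
        # Each base copies with a small chance of a random point error.
        copy = [random.choice("ACGT") if random.random() < self.error_rate
                else base for base in self.seq]
        return ReadOnlyGenome("".join(copy), self.error_rate)

class ReadWriteGenome(ReadOnlyGenome):
    """Shapiro's (RW) view: the cell actively restructures its own
    genome in response to sensed inputs (starvation, stress, etc.)."""
    def respond(self, stress):
        if stress:
            # A non-random, regulated rearrangement: a mobile element
            # inserting a defined segment at a defined position.
            midpoint = len(self.seq) // 2
            self.seq[midpoint:midpoint] = list("TRANSPOSON")

genome = ReadWriteGenome("ACGTACGTACGT")
genome.respond(stress=True)
print("".join(genome.seq))  # the sequence now carries the inserted segment
```

The point of the contrast is that the rewrite in the RW model is triggered by an input and lands at a defined place, rather than being a random copying error.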

Suzan Mazur: So what the gene is, how it first appeared and when are an old way of thinking about things.

James Shapiro: The gene first appeared at the beginning of the 20th century with the rediscovery of Mendelism. Gregor Mendel called them factors, which is fine because it’s nondescript. Then Wilhelm Johannsen came up with the term “gene.” And over time the gene became endowed with a whole bunch of properties. There’s a 1948 Scientific American article by George Beadle in which he called the gene the basic unit of life.

Suzan Mazur: I mean in evolutionary time. This thinking that the gene arrived at some point in the emergence of life. It seems to be an old way of thinking now because the definition of the gene has become much more ambiguous.

James Shapiro: When three scientists rediscovered Mendelism at the turn of the century, in 1900, breeders started seeing discrete hereditary differences that could be passed on from generation to generation. And so the idea that you could have a particulate or atomistic view of the genotype built up, and then the individual components were called genes.

We now have a more sophisticated understanding of heredity. You’ve got an integrated, super-sophisticated storage system called the genome. You can’t just try and reduce it to any one of its components.

I don’t use the word “gene” because it’s misleading.
There was a time when we were studying the rules of Mendelian heredity when it could be useful, but that time was almost a hundred years ago now.

The way I like to think of cells and genomes is that there are no “units.” There are just systems all the way down. This idea came to me unexpectedly in conversation during a visit to give a lecture at Michigan State. A colleague said that his goal was to discover the basic units in the genome. Without thinking about it consciously, I responded, “What if there are no units?” At that moment, I realized that this answer was something I had been thinking about for a long time.

There have been lots of surprises and lots of discoveries along the way to a systems view of the genome: coding sequences being broken up into exons and introns, non-coding sequences which serve as signals for expression of coding sequences, different ways of reading the coding sequences, and so forth. When you have all of that complexity in genome expression, you no longer can give any kind of simple unitary definition of what you mean by a particular piece of the genome.

With George Beadle and Edward Tatum in the 1940s, you had the one gene-one enzyme hypothesis. It was thought that we could say definitively that the business of “genes” is to determine the structure of proteins. But now we have all of this so-called “non-coding” information in the genome. In our own human genomes, “non-coding” sequences greatly exceed the protein-coding capacity. A lot of that “non-coding” DNA is clearly functional and very important for genome action. So we’re beginning to develop a far more sophisticated idea of what a genome is and how it operates. That’s all a part of bringing evolutionary science into the 21st century.

Suzan Mazur: But how far back in time would you say were cells talking to one another without genetic systems, i.e., programs?

James Shapiro: I think I make it explicit in the book that we don’t have enough knowledge yet of how cells came into being in the first place.

Suzan Mazur: When do you anticipate that might become more clear?

James Shapiro: We need to understand how the cells that exist today operate. That’s going to require another shift in our thinking because we have a very mechanical, again a very atomistic view of that.

We don’t yet understand how cells and organisms are integrated functionally and informationally. When we understand that integration, then we’ll have a better idea than we do right now of what the basic requirements are for life and for reproduction.

I expect there will also be technological changes in paleochemistry aiding the search for traces of early life. We don’t have this right now. It’s possible we may never have it. On the other hand, science always amazes us with what it’s able to find. I don’t want to be in a position to say we can’t work something out scientifically because very often we do succeed in unexpected ways.

Suzan Mazur: When did multicellularity first happen?

James Shapiro: At the first cell division. Life for as long as we know it has been multicellular. The single celled organism is — not exclusively, but by and large — a synthetic construct devised partly to analyze how cells operate and partly as a consequence of Koch’s postulates and the germ theory of disease. In studying bacterial pathogenesis, the emphasis was on isolating a pure culture from a single cell. But in nature very few cells exist isolated from other cells.

Suzan Mazur: When do you think evolution began? How do you think about it?

James Shapiro: This is part of what I think is a new understanding of what it takes to be alive. I would include the ability to change as a fundamental feature of living organisms, as a basic vital function.

Suzan Mazur: Are we including pre-biotic evolution?

James Shapiro: There are people who want to speculate about pre-biotic evolution. I don’t think we can talk about it in a serious scientific way.

Suzan Mazur: Interesting.

James Shapiro: I think we need to come to terms with the biology that exists in front of us before we’re able to speculate about what might have preceded it. And I think we’re very far from being finished with that enterprise.

Suzan Mazur: So you must have some interesting things to say about astrobiology.

James Shapiro: I don’t have anything interesting to say about astrobiology.

Summing up:

Suzan Mazur: Would you wrap up your view of 21st Century evolution and where we’re headed?

James Shapiro: We have the three components, which are:

(1) Cells act in what I call a cognitive way or an information processing way. Some people like to say “computational.” The only reason that I don’t use the word computational is that it doesn’t include the sensory aspect of how cells operate. And the sensing and its molecular bases are all very firmly established scientifically. There’s no question about it.

What we don’t understand is how everything is integrated, how the information is processed and how the cells end up doing the appropriate thing. We know a lot about the components involved in signal transfer and decision-making, but we don’t know how the whole system works. That I think is the key frontier in the 21st century. The research will not only impact biology, but it will possibly revolutionize computation as well.

(2) Cells engineer their own genomes and they do it in a wide variety of ways that are subject to sensory inputs and which can be targeted within the genome. I document that pretty extensively in the book.

(3) We know from the DNA record that natural genetic engineering systems have been important in the evolution of new life forms.

The key questions that I see in evolution science besides learning more about those three components are:

(i) What is the link between ecological change and genome change in organisms?

(ii) What is it about the natural genetic engineering processes and how they are regulated and controlled that biases them towards creating new functionalities?

We know we can stimulate rapid genome change in the laboratory by starving cells, or putting them under pressure or in high salt and other stress conditions. Similarly, by manipulating their genomes the way McClintock did so they don’t operate normally. Or by hybridizing, as in horticulture, having different species mate or different populations mate. All of those things will trigger very significant episodes of genome restructuring. And we know genome restructuring has played a role in evolution and evolution is marked by the appearance of biological functional innovations.
We meet at the borders of our being, we dream something of each other's reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

User avatar
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Mon May 21, 2012 8:54 pm


Harvard To Be Tried for Alzheimer's Research Fraud
Thursday, 10 May 2012
The US Court of Appeals for the 1st Circuit overturned a summary judgment by a lower court, ordering a whistleblower lawsuit filed by Dr. Kenneth Jones against Harvard Medical School; its teaching hospitals, Brigham and Women's and Massachusetts General Hospital; Dr. Marilyn Albert (Principal Investigator); and Dr. Ronald Killiany to proceed to trial.

The case involves the largest Alzheimer's disease [AD] research grants awarded by the National Institutes of Health (from 1980 through 2007) for a large project aimed at identifying early physical signs of Alzheimer's by scanning certain regions of the brain with MRIs.

Dr. Jones was the chief statistician for the NIH grant. He blew the whistle after realizing that measurements used to demonstrate the reliability of the study had been secretly altered. Without these alterations, Dr. Jones explained, there was no statistical significance to the major findings of the study. When he insisted that the altered measurements be subjected to an independent reliability study, and that the manipulated results could not be presented as part of a $15 million federal grant extension application, he was terminated and his career came to an end.

The allegations in the suit concern multiple research fraud: data manipulation, significant deviations from the protocol, altered and re-traced MRI scans. To get positive results, Dr. Jones alleges, Dr. Killiany "fraudulently altered the MRI study data prior to 1998 to produce false results of a statistically significant correlation between conversion to AD and volume of the EC [entorhinal cortex]." US ex rel. Jones v. Brigham and Women's Hospital and Harvard University.

He further alleged that Dr. Albert and Dr. Killiany violated federal regulations (43 CFR 50.103(c)(3)) by making false statements in the NIH grant application: statements "predicated on falsified data" that the defendants, knowing of this falsity, failed to correct or disavow.

In overturning the lower court and ordering the case to proceed to trial, the Court of Appeals cited the lower court's failure to consider substantial evidence of research fraud, including relevant testimony from three expert witnesses presented by Dr. Jones:

A statistician who confirmed that the alterations were responsible for the statistical significance of the study results; a medical researcher who determined that the altered results could not be justified and were changed to establish a predetermined outcome; and a third expert who confirmed that NIH would not have funded the study had the falsity of the data been revealed during the application process, and that Harvard failed to adequately investigate allegations of research fraud.

The Court of Appeals decision states:

"the essential dispute is about whether Killiany falsified scientific data by intentionally exaggerating the re-measurements of the EC to cause proof of a particular scientific hypothesis to emerge from the data, and whether statements made in the Application about having used blinded, reliable methods to produce those results were true."

Michael D. Kohn, one of the lead attorneys for Dr. Jones said:

"This is a major breakthrough holding universities accountable for the integrity of reported research results. Fraud committed in order to obtain NIH funding not only robs taxpayers, but also sets back long-term medical research goals. The facts of this case indicate that the report of false data misdirected research efforts at other institutions."

This case also underscores an inconvenient truth about the financial stakes that drive clinical trials. Those who are persuaded to serve as human subjects "for the good of humanity" and "to help medical progress" believe in the integrity and high mindedness of medical researchers--especially those at premier academic institutions. That trust, however, is all too often misplaced. Vulnerable human subjects are being shamelessly exploited in invalid, most often commercially driven experiments.

Indeed, the rationale behind the Harvard brain scanning experiment was to justify early interventions. Another example is Eli Lilly's Alzheimer's imaging detection test (Amyvid), launched last month.

Inasmuch as no effective, safe treatment for Alzheimer's exists, and ALL such screening tests have been demonstrably inaccurate and inconsistent, such an "early intervention" approach in clinical practice is unethical and controversial.

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Wed Jul 11, 2012 8:34 pm

Big thanks to Vanlose Kid!! Great find.

Clive Stafford Smith: 'The jury system in this country is utter insanity'

The lawyer and founder of Reprieve on defending clients on death row, why the whole justice system is flawed – and his fear of appearing sanctimonious

Decca Aitkenhead, Sunday 8 July 2012 20.00 BST

Clive Stafford Smith's son Wilf is only three, but must have formed a pretty low opinion of the police already, because when he found out that a friend's father was an officer, his mother had to explain that there are some good policemen. This revelation clearly made quite an impression on the toddler. He promptly sent his parents to prison.

"He comes into the living room the very next day," his father giggles, "and says: 'Daddy, into the kitchen', and shuts the door behind me. So I'm saying: 'Wilf, we're going a little far here, now let's talk about the presumption of innocence, the need for evidence. I mean, what am I even supposed to have done?'" Not for the first time in Stafford Smith's life, his appeal to due process fell on deaf ears. The headteacher at his son's nursery took him aside later that week and told him: "Do you know, this morning Wilf sent the entire class to prison!" Stafford Smith hoots with laughter. "So he's going to become either a hedge fund manager or a prosecutor."

Stafford Smith has always been one of my heroes, but I'd always been slightly apprehensive about meeting the lawyer. What if he turned out to be dreadfully earnest – a sort of humourless Peter Tatchell of death row? To rightwingers who deploy the term "do-gooder" as an insult, his CV reads like their worst nightmare: head boy at an Oxfordshire public school, he turned down a place at Cambridge to study journalism in the US, with a hazy but heartfelt plan to "put an end to America's tryst with the death penalty". Staying on to study law, he came home in the holidays to work for his cousin's ready-mix concrete company, because "I wanted to be reminded that if these men carried their lunch boxes to work every winter's morning for less than £1 an hour, then I should accept the same pay to do labour that I loved."

It's the kind of sentiment that has Daily Telegraph readers reaching for the sickbag, but it's precisely what he went on to do, toiling for a pittance to save convicts from execution for 20 years, before coming home in 2004 to run the charity Reprieve, representing prisoners on death row all over the world. Personally, I would give anything for a CV like his, but do see its scope for tiresome sanctimony.

But Stafford Smith turns out to be positively bouncy with good humour and silly jokes and an eye for the absurd – so much so, in fact, that at 53 he retains an almost teenage quality. At public talks someone always asks if there is anyone he wouldn't represent, to which he replies: "Well, I'm sorry, I just couldn't represent a Tory." The only time he regretted it was when "I'd forgotten – oh fuck, my mother's in the room – and she's voted Tory since 1642." He can cope with serial killers – "It's just serial people who vote Conservative all the time I find very difficult to understand" – and he and his wife Emily, a fellow lawyer, were married in jail by one of his clients after they had got his death sentence commuted to life. "It was so sweet, it was gorgeous. I loved it, I'm so proud of it." Was the murderer authorised to officiate? "No! He's not authorised, but who cares?"

Stafford Smith's new book, Injustice: Life and Death in the Courtrooms of America, isn't exactly a barrel of laughs. But it reads like a thriller, gripping and appalling by turns, following the case of a client convicted of a double murder and sentenced to death in 1986. In itself it's an extraordinary story, exposing incompetence and corruption, dodgy coppers and Colombian drug cartels, and to this day its protagonist remains behind bars – which is why Stafford Smith wrote the book. "I feel so guilty for failing him, and I thought I've got to take his case to the court of public opinion." But in doing so he tells a greater and more troubling tale still, about the Kafkaesque madness of a justice system that appears engineered to deliver anything but justice.

"You start thinking, how could I have failed Kris [Maharaj, his client] so palpably, for so long? I started thinking about things, and I feel an idiot, but I'd just never really stopped to think about how the system is structured in a way that's so flawed. It gets you at every turn. I've often said, in a sort of glib way: 'I hate representing innocent people.' But I'd never really stopped to think how it is that an innocent person is so certain that they didn't do it, that they can't fathom that 12 people could find them guilty."

Maharaj was a successful businessman when he was charged with the murder of two business associates. Not believing for a minute that he could be found guilty, he hired a spectacularly hopeless lawyer, in large part because he offered his services for an affordable fixed rate. Why, Maharaj reasoned, spend a fortune when it's obvious I'm innocent? But a lawyer on a fixed rate has a financial disincentive to spend more than the bare minimum of time on a case. Once convicted, the defendant can hire a new and better lawyer to represent his appeals – but due to a bizarre law known as a "procedural bar", that new lawyer can't submit any new evidence that the rubbish one should have brought up at trial, because the defence case is deemed to have "waived the right" to mention it by omitting it the first time around.

Opposing him are police and prosecutors whose professional careers – and, Stafford Smith would argue, moral sanity – depend upon an implacable belief that no defendant could conceivably be innocent. There is a common view that the justice system is skewed in favour of the defendant, but nothing, he says, could be further from the truth.

"What made me think about this was a conversation with two cops, who were lovely, and who had 54 years of service between them. And I was curious, so I asked them: 'How many times in 54 years do you think that maybe, maybe, you arrested the wrong guy?' And they said: 'Oh, never.'"

Not one of the judges on the US supreme court has ever worked as a defence counsel, and district attorneys are elected – so they have a vested interest in appearing draconian. Stafford Smith cites a study that found that more than half of US prosecutors do not believe in the presumption of innocence at all. "Fascinating, isn't it? But the thing you have to ask yourself on a human level is, could you, as a person, drive to work every day saying: 'I wonder if I'm going to put an innocent person in prison today or not'? You just can't do that as a human. So naturally, the people who do this job believe that everyone is guilty. And it's something the system doesn't take account of. But it's sort of obvious, isn't it?" As a consequence, the system legislates for malpractice, if not outright corruption, by police and prosecutors who believe they are merely doing whatever it takes to deliver justice.

Even the concept of "reasonable doubt" is no safeguard against miscarriages, Stafford Smith points out, because nobody knows what it means. As a lawyer he's prohibited from even asking jurors what they think it means, or discussing the question in court – so the entire judicial process rests upon a concept shrouded in mystery. "We cannot loudly proclaim that the burden of proof is central to the system," he writes, "yet then assert that we cannot begin to define it."

What does he think it means? "I think it means: can you imagine that this could have happened some other way than the prosecution says?" But if the prosecution tells a jury there's only a one-in-a-million chance the explanation offered by the defence could be true, jurors assume the defence must be lying. "Because if the probability of something happening is one in a million, you think: 'Well, it's never going to happen to me.' But the probability of it happening to you in Britain, say, where 60 million people live, is that of course it's going to happen. A one in a million chance can come up all the time. But the thing is that no one will believe it."
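Stafford Smith's base-rate point can be checked with back-of-the-envelope arithmetic (a minimal sketch in Python, using only the figures quoted above):

```python
# A "one in a million" chance, applied across Britain's population.
p = 1 / 1_000_000          # probability for any single person
population = 60_000_000    # roughly Britain's population, as quoted

# Expected number of people the event actually happens to.
expected = p * population
print(round(expected))     # 60: the "impossible" defence story fits dozens of real cases
```

So while any individual juror is right to think the event will never happen to them, across a whole population it is near-certain to happen to somebody.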

Maharaj's death sentence was eventually overturned by Stafford Smith. But surely the risk of execution was not the only feature that made his a uniquely American story? Stafford Smith concedes there is "probably" less corruption in the British justice system, and that defendants are better funded. "But pretty much all of Kris's story could happen here. There's no less bias here." In some ways, in fact, it is worse.

"The jury system in this country is utter insanity, because you're not allowed to talk to jurors before or after the trial. There's no way of knowing if they did their job properly. And the idea that the defence has to rely on the police for the investigation? Total insanity. I've never met a defence lawyer here who has done any factual investigation for themselves. Total insanity. And the whole notion of a barrister – that he shouldn't have an emotional relationship with his client? Insanity. You cannot represent someone, and meaningfully put them across to the jury, if you don't have a relationship with them."

Britain's adoption of American-style victim impact statements has only made matters worse, he argues. "The victims' families have been told their catharsis is going to come from punishment. And it's just cruel, because it doesn't. They just get exploited. I think that's probably the area in which we've been most unkind to victims."

Our system is less disastrous than the US's in just one crucial respect: "Our results are not as catastrophic, because we don't kill people."

Stafford Smith thinks the US will unquestionably abolish the death penalty: "In my lifetime, hopefully, depending upon how many gin and tonics I drink." He has witnessed six executions, all of clients he failed to save, and is haunted most by the electric chair. Having witnessed an execution by lethal injection myself, and written about it, I wonder if he worries about the unpleasantly pornographic problem of evoking the full horror. "No, I'm glad to have had the experience. Not because it isn't horrid, but when you're talking to a jury, if you've had that experience it gives you so much more power to talk about what it is like."

The bigger problem for Stafford Smith becomes clear as the interview goes on. What he really wants to talk about is something he worries will alienate people. "It's not something I really want to get into right now, because I need to preserve just a tiny bit of credibility. I will write a book about it one day – when I'm close to death, so it doesn't matter any more – about why the whole underlying concept of the justice system is just ridiculous. Total madness. But I don't want to lose people's attention and make them stop listening because they respond in an emotive way. I don't want to alienate the whole world. "

But he can't help talking about it anyway. His anxiety probably comes from spending 25 years in the US, where such a view would sound like heresy, but to me it sounds perfectly sensible.

"Let me ask you," he says, "about the most despicable thing you've ever done in your life, that you're most ashamed of, that you don't want anyone to know. My guess is we'd agree that it's not a criminal offence. It's just something really nasty that you did to someone you love. Now, let's compare the harm in that to the worst criminal offence that's ever been done to you. Which is what?" I have a think, and all I can come up with is having been burgled. "So, how much impact did that have on you?" It was really inconvenient – but emotional impact? None.

"Zero, right. And yet if I'm right, the thing you did, that you're actually ashamed of, inflicted a lot of harm on the person you did it to. Yet the thing we define as a 'crime' – for which some young black British person would get maybe four years in prison – has no impact on you at all. Why is it that we define our criminal law in terms of utter irrationality, where nasty things that you and I do have no consequences legally – and things that are really quite inconsequential, poor people end up in prison for. Why is that?"

I didn't much mind being burgled, I agree, but lots of people do enormously. "But that's because we've trained people to have these idiotic attitudes! It's crazy." And when it comes to genuinely devastating and heinous crimes, he goes on, our response is if anything even crazier. "Because the worse the crime, the more obvious the explanation. There's a reason why it happened."

Put simply, when someone does something bloodcurdlingly awful they are pretty much by definition not bad but mad. He would include Tony Blair and George Bush in that category – "Oh yes, definitely, they were psychotic" – and can't fathom how anyone could regard a sadistic killer, for instance, as remotely sane. Stafford Smith's own father suffered from mental illness – he once memorably handed his seven-year-old son £200 and told him to leave home and look after himself – "And it was a huge relief to me to realise he wasn't a bad guy. He just was not on our planet. He didn't understand the difference between right and wrong." The same goes for most of his guilty clients, he believes. "But we just have this mad approach to madness."

I'm very much looking forward to the book he writes about this, though I hope we don't really have to wait until he's on his deathbed. In the meantime, to anyone who suspects Stafford Smith must be both bad and mad for holding such views, I offer this in mitigation.

"No please don't say anything about this," he begs, the moment it's out of his mouth. "I don't want to look like a prat or anything. It will make me look horribly sanctimonious." In the current climate, I tell him, I very much doubt it.

Stafford Smith draws no salary from Reprieve. Instead, he receives a grant from the Joseph Rowntree Foundation for being a "visionary" – for which he is not liable to pay income tax. But every year he works out how much tax he would pay, were it a salary, and sends it off to the Inland Revenue.

"I can't stop you writing that," he concedes, "because I believe in free speech." But he looks mortified. "I just don't want to look like a plonker." ... m-insanity

User avatar
Wombaticus Rex
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Sun Dec 30, 2012 6:25 pm

Via: ... ature.html

A surprising upsurge in the number of scientific papers that have had to be retracted because they were wrong or even fraudulent has journal editors and ethicists wringing their hands. The retracted papers are a small fraction of the vast flood of research published each year, but they offer a revealing glimpse of the pressures driving many scientists to improper conduct.

Last year, Nature, a leading scientific journal, calculated that published retractions had increased tenfold over the past decade — to more than 300 a year — even though the number of papers published rose only 44 percent. It attributed half of the retractions to embarrassing mistakes and half to “scientific misconduct” such as plagiarism, faked data and altered images.
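The two growth figures in the Nature estimate can be combined into a per-paper rate (a quick sketch; the tenfold and 44 percent numbers are the ones quoted above):

```python
# Retractions rose ~10x over the decade; published papers rose 44%.
retraction_growth = 10.0
paper_growth = 1.44

# Implied growth in retractions per published paper.
rate_growth = retraction_growth / paper_growth
print(round(rate_growth, 1))  # 6.9: the per-paper retraction rate rose nearly sevenfold
```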

Now a new study, published in the Proceedings of the National Academy of Sciences, has concluded that the degree of misconduct was even worse than previously thought. The authors analyzed more than 2,000 retracted papers in the biomedical and life sciences and found that misconduct was the reason for three-quarters of the retractions for which they could determine the cause.

The problem is global. Retracted papers were written in more than 50 countries, with most of the fraud or suspected fraud occurring in the United States, Germany, Japan and China. The problem may even be greater than the new estimates suggest, the authors say, because many journals don’t explain why an article was retracted — a failure that calls out for uniform guidelines.

There are many theories for why retractions and fraud have increased. A benign view suggests that because journals are now published online and more accessible to a wider audience, it’s easier for experts to spot erroneous or fraudulent papers. A darker view suggests that publish-or-perish pressures in the race to be first with a finding and to place it in a prestigious journal have driven scientists to make sloppy mistakes or even falsify data. The solutions are not obvious, but clearly greater vigilance by reviewers and editors is needed.

I am reminded of a recent Charles Murray gem, "the formation of the new upper class has been driven by forces that are nobody's fault."

Via: ... apers.html

Dr. Fang and his colleagues dug through other reports from the Office of Research Integrity, as well as newspaper articles and the blog Retraction Watch. All told, they reclassified 158 papers as fraudulent based on their extra research.

“We haven’t seen this level of analysis before,” said Dr. Ivan Oransky, an author of Retraction Watch and the executive editor at Reuters Health. “It confirms what we suspected.”

Via: ... elves.html

“Unfortunately, individuals found guilty of sloppy or fraudulent research conduct seem to fall into a handful of behavioral patterns,” Dr. Fang said. Some continue to deny they did anything wrong; some admit guilt but don’t want to talk about it; some are prevented from talking because of legal proceedings; and some, Dr. Fang said, “seem to vanish from the face of the earth.”

One notable exception to this pattern emerged in 2006. A University of Vermont researcher, Eric Poehlman, was convicted of lying on federal grant applications and was sentenced to a year in jail. For the previous decade, he had fabricated data in papers he published on obesity, menopause and aging.

During his sentencing hearing, Dr. Poehlman apologized for his actions and offered an explanation.

“I had placed myself, in all honesty, in a situation, in an academic position which the amount of grants that you held basically determined one’s self-worth,” Dr. Poehlman said. “Everything flowed from that.”

Unless he could get grants, he couldn’t pay his lab workers, and to get those grants, he cut corners on his research and then began to fabricate data.

“I take full responsibility for the type of position that I had that was so grant-dependent,” he told the court. (Efforts to reach him for comment for this article were unsuccessful.) “But it created a maladaptive behavior pattern. I was on a treadmill, and I couldn’t get off.”

Onward -- paid reviews on Amazon. I recently mentioned to my Mom that I used to do this for quick money and, bless her heart, she actually asked me "were these books that you had read?"

Via: ... wanted=all

In the fall of 2010, Mr. Rutherford started a Web site. At first, he advertised that he would review a book for $99. But some clients wanted a chorus proclaiming their excellence. So, for $499, Mr. Rutherford would do 20 online reviews. A few people needed a whole orchestra. For $999, he would do 50.

There were immediate complaints in online forums that the service was violating the sacred arm’s-length relationship between reviewer and author. But there were also orders, a lot of them. Before he knew it, he was taking in $28,000 a month....

To put that in perspective: most of the gigs I was doing were $5-$15 apiece -- the goal was a conversational, deliberately rambling and folksy, "authentic" sounding review.

“The wheels of online commerce run on positive reviews,” said Bing Liu, a data-mining expert at the University of Illinois, Chicago, whose 2008 research showed that 60 percent of the millions of product reviews on Amazon are five stars and an additional 20 percent are four stars. “But almost no one wants to write five-star reviews, so many of them have to be created.”


Mr. Liu estimates that about one-third of all consumer reviews on the Internet are fake. Yet it is all but impossible to tell when reviews were written by the marketers or retailers (or by the authors themselves under pseudonyms), by customers (who might get a deal from a merchant for giving a good score) or by a hired third-party service.
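Mr. Liu's percentages already imply a striking skew. Even on the worst-case assumption (hypothetical here) that every review outside the four- and five-star buckets is one star, the average rating has a high floor:

```python
# 60% five-star and 20% four-star reviews (Liu, 2008);
# assume the remaining 20% are all one star, the worst case.
floor_average = (60 * 5 + 20 * 4 + 20 * 1) / 100
print(floor_average)  # 4.0: a lower bound on the average star rating
```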

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Sun Dec 30, 2012 6:33 pm


Via: ... wanted=all

Global Trend: More Science, More Fraud

The South Korean scandal that shook the world of science last week is just one sign of a global explosion in research that is outstripping the mechanisms meant to guard against error and fraud.

Experts say the problem is only getting worse, as research projects, and the journals that publish the findings, soar.

Science is often said to bar dishonesty and bad research with a triple safety net. The first is peer review, in which experts advise governments about what research to finance. The second is the referee system, which has journals ask reviewers to judge if manuscripts merit publication. The last is replication, whereby independent scientists see if the work holds up.

But a series of scientific scandals in the 1970's and 1980's challenged the scientific community's faith in these mechanisms to root out malfeasance. In response the United States has over the last two decades added extra protections, including new laws and government investigative bodies.

And as research around the globe has increased, most without the benefit of such safeguards, so have the cases of scientific misconduct.

Most recently, suspicions have swirled around a dazzling series of cloning advances by a South Korean scientist, Dr. Hwang Woo Suk.

Dr. Hwang's research made him a national hero. His team outdid rivals by claiming to have extracted stem cells from cloned human embryos and to have cloned a dog, an extraordinary feat. Some observers hailed the breakthroughs as worthy of a Nobel Prize.

Last month, critics charged that Dr. Hwang's published findings hid ethical lapses. And last week, collaborators accused the researcher of fabricating results in one of his landmark human cloning studies, published in Science last spring.

Dr. Hwang has insisted on his innocence but said he would retract the Science paper. Now questions are growing about his earlier work, including Snuppy, the dog he claims to have cloned. Yesterday, news agencies reported that Seoul National University officials investigating Dr. Hwang's claims locked down his laboratory, impounded his computer and interviewed his colleagues, among other actions.

"The Korean case shows us that we should be a lot more cautious," Marcel C. LaFollette, the author of "Stealing Into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing," said in an interview. "We have been unwilling to ask tough questions of people who are from other countries and whose systems are different because we were attempting to be polite."

To be sure, most scientists resist pressures to cut corners and adhere to the canons of science, honoring the truth above all else. But surveys suggest that there are powerful undercurrents of misbehavior and, in some cases, outright fakery.

In June, a survey of 3,427 scientists by the University of Minnesota and the HealthPartners Research Foundation reported that up to a third of the respondents had engaged in ethically questionable practices, from ignoring contradictory facts to falsifying data.

Scientific fraud as a public danger burst into public view in the 1970's and 1980's, when major cases of misconduct shook a number of elite publications and institutions, including Yale, Harvard and Columbia.

In 1981, Dr. Donald Fredrickson, then the director of the National Institutes of Health, defended the standard view of science as a self-correcting enterprise. "We deliberately have a very small police force because we know that poor currency will automatically be discovered and cast out," he said.

But fraud after fraud made the weaknesses of that system impossible to ignore. In the early 1980's, a young cardiology researcher, Dr. John R. Darsee, was found to have fabricated much data for more than 100 papers he wrote while working at Harvard and Emory Universities. His work appeared in The New England Journal of Medicine, The Proceedings of the National Academy of Sciences and The American Journal of Cardiology, among other top publications.

Startled, the federal government, beginning in 1985, took steps to augment the existing safeguards. For instance, Congress passed a law requiring public and private institutions to establish formal ways to investigate charges of fraud, in theory helping to assess damage, clear the air and protect the innocent. Eventually, the federal government established its own investigative body, now known by the Orwellian title of the Office of Research Integrity.

Journal editors, at the center of the storm, also took collective action to enhance their credibility. In 1997, they founded the Committee on Publication Ethics, or COPE, "to provide a sounding board for editors who are struggling with how to best deal with possible breaches in research and publication ethics," according to the group's Web site.

Consisting mostly of editors of medical journals, the committee now has more than 300 members in Europe, Asia and the United States.

Still, the frauds kept coming. In 1999, federal investigators found that a scientist at the Lawrence Berkeley Laboratory in Berkeley, Calif., faked what had been hailed as crucial evidence linking power lines to cancer. He published his research in The Annals of the New York Academy of Sciences and F.E.B.S. Letters, a journal of the Federation of European Biochemical Societies.

The year 2002 proved especially bleak. At Bell Labs, a series of extraordinary claims that seemed destined to win a Nobel Prize, including the creation of molecular-scale transistors, suddenly collapsed. Two of the world's most prestigious journals, Science and Nature, had published many of the fraudulent papers, underscoring the need for better safeguards despite two decades of attempted repairs.

Experts now say that the explosive growth of science around the globe has made the problem far worse, because most countries have yet to institute the extra measures that the United States has put in place. That imbalance is at least partly responsible for a rise in scientific scandals in other countries, they say.

Dr. Richard S. Smith, a former editor of The British Medical Journal (now BMJ) and the co-founder of the Committee on Publication Ethics, a group of journal editors, said in an interview that fraud was becoming increasingly difficult to root out because most countries' protective measures were either patchy or altogether absent. "It's hard enough to do something nationally, and to do it internationally is still harder," he said. "But that's what is needed."

Contributing to the problem is a drastic rise in the number of scientific journals published around the world: more than 54,000, according to Ulrich's Periodicals Directory. This glut can confuse researchers, overwhelm quality-control systems, encourage fraud and distort the public perception of findings.

"Foreign scientific journals have gone through the roof," said Shawn Chen, a senior associate editor at Ulrich's, nearly doubling to 29,098 in 2005 from 15,300 in 1980. "We're having a hard time keeping up."

While millions of articles are never read or cited - and some are written simply to pad résumés - others enter the pressure cooker of scientific and biomedical promotion, becoming lucrative elements of companies' business strategies.

Until now, cases of questionable research in other countries have gotten little attention in the United States. But international editors, shaken by scandal, are now publicizing them and expressing concern. This year, the July 30 issue of BMJ devoted four articles to the subject, asking on its cover: "Suspicions of fraud in medical research: Who should investigate?"

The articles discussed cases in which several publications, including BMJ, had stumbled in resolving serious doubts about the truthfulness of published studies done in Canada and India. The Canadian research claimed that a patented mix of multivitamins improved brain function in older people, and the Indian study said that low-fat, high-fiber diets cut by nearly half the risk of death from heart disease.

The BMJ said that it published its own version of the Indian research in April 1992 and that it had later investigated serious questions about the validity of the research for more than a decade before speaking out.

The difficulty, the editors said, was that journals could go only so far in fraud inquiries before needing the aid of national investigative bodies and professional associations that oversee scientific research. But in the Indian and Canadian cases, they added, such bodies either did not exist or refused to help, so "the doubts are unresolved."

The journal's editors, Dr. Fiona Godlee and Dr. Jane Smith, noted that the United States and Scandinavian countries had adopted institutional defenses and that Britain was considering such safeguards. Journals have an obligation to help the process, they concluded, by publicizing their difficulties and doubts.

Most recently, the South Korean uproar illustrates the tangle of publishing and policing issues that can arise as science becomes increasingly competitive and international.

"Now we're in a situation where we have these alliances between university researchers in countries and between institutions that really weren't working together before," said Dr. LaFollette, author of "Stealing Into Print."

The journal Science, owned by the American Association for the Advancement of Science, published the research of Dr. Hwang of Seoul National University and his colleagues in March 2004 and June 2005, hailing it as pathbreaking.

On Dec. 14, the magazine noted in a statement how fraud charges about the 2005 research had led to two investigations - one in South Korea and the other at the University of Pittsburgh, home to one of the article's 25 co-authors. "The journal itself is not an investigative body," Donald Kennedy, the magazine's editor, argued. "We await answers from the authors, as well as official conclusions, before we come to any ourselves."

On Friday in a news conference, Dr. Kennedy emphasized that the magazine had made no accusations of fraud against Dr. Hwang. "As of now we can't reach any conclusions with respect to misconduct issues," he said.

Independent scientists said it remained to be seen how thoroughly authorities in South Korea, where Dr. Hwang is a celebrity, would investigate the case and resolve knotty issues in what amounts to a highly public test of institutional maturity.

Seoul National University is leading the inquiry. Its committee, which apparently has the authority to examine Dr. Hwang's raw data and to question his colleagues, may have the best chance of discovering how much of his work remains valid.

But experts also cautioned that the committee's credibility requires the addition of outsiders, and perhaps scientists from other countries, who know the field and can help ensure that the investigation will retain its objectivity.

"Unfortunately, individual institutions have an enormous conflict of interest," said Dr. Smith, the former editor of The British Medical Journal. "It's a lot easier," he said, for such bodies when examining an allegation of fraud on their own, "to slide someone out of the organization or to suppress it altogether."

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Sun Dec 30, 2012 6:57 pm

Via: ... 22fda.html

F.D.A. Lags in Banning Researchers After Fraud

Delfina Hernandez helped to carry out one of the most audacious drug research frauds in American history, but because federal drug regulators sent a legal notice years late and to the wrong address, she can legally continue to conduct research.

Ms. Hernandez was a study coordinator at the Southern California Research Institute, a drug testing operation in Whittier, Calif., that federal agents raided in 1997. The institute, which was led by Dr. Robert Fiddes, helped conduct more than 170 drug studies for nearly every major drug maker in the world and routinely falsified data and patient records while doing so.

Ms. Hernandez pleaded guilty to fraud, and federal law required the Food and Drug Administration to ban her from participating in further drug research. The agency had five years after her conviction in which to act.

But in a report scheduled for release on Thursday, Congressional investigators say the agency pays so little attention to its responsibilities to ban investigators convicted of fraud and is so disorganized about carrying them out that its actions take an average of four years to complete.

Via: ... fraud.html

A well-known psychologist in the Netherlands whose work has been published widely in professional journals falsified data and made up entire experiments, an investigating committee has found. Experts say the case exposes deep flaws in the way science is done in a field, psychology, that has only recently earned a fragile respectability.

The psychologist, Diederik Stapel, of Tilburg University, committed academic fraud in “several dozen” published papers, many accepted in respected journals and reported in the news media, according to a report released on Monday by the three Dutch institutions where he has worked: the University of Groningen, the University of Amsterdam, and Tilburg. The journal Science, which published one of Dr. Stapel’s papers in April, posted an “editorial expression of concern” about the research online on Tuesday.

The scandal, involving about a decade of work, is the latest in a string of embarrassments in a field that critics and statisticians say badly needs to overhaul how it treats research results. In recent years, psychologists have reported a raft of findings on race biases, brain imaging and even extrasensory perception that have not stood up to scrutiny. Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.

“The big problem is that the culture is such that researchers spin their work in a way that tells a prettier story than what they really found,” said Jonathan Schooler, a psychologist at the University of California, Santa Barbara. “It’s almost like everyone is on steroids, and to compete you have to take steroids as well.”

In a prolific career, Dr. Stapel published papers on the effect of power on hypocrisy, on racial stereotyping and on how advertisements affect how people view themselves. Many of his findings appeared in newspapers around the world, including The New York Times, which reported in December on his study about advertising and identity.

In a statement posted Monday on Tilburg University’s Web site, Dr. Stapel apologized to his colleagues. “I have failed as a scientist and researcher,” it read, in part. “I feel ashamed for it and have great regret.”

The Nature of Paradigms and Paradigm Shifts in Music Educati

Postby Allegro » Wed Jan 02, 2013 10:21 am

Every time I began an introduction for this post, I slammed into rants, deleting one after another. Objectivity is beyond me, frankly. So I’ve cherry-picked documents, with paradigms defined or not, that speak for me in terms of education, philosophy and, consequently, music performance practice standards in North America.

I’ve added some links and [notes], herein.

The Nature of Paradigms and Paradigm Shifts in Music Education
by Elvira Panaiotidi

From: Philosophy of Music Education Review
Volume 13, Number 1, Spring 2005
pp. 37-75 | 10.1353/pme.2005.0024

In lieu of an abstract, here is a brief excerpt of the content:

    Philosophy of Music Education Review 13.1 (2005) 37-75
    Elvira Panaiotidi
    North Ossetian State Pedagogical Institute, Russia

    The advent of the praxial philosophy of music education in the mid-1990s and its systematic development in David Elliott’s Music Matters: A New Philosophy of Music Education [1995] created an unprecedented situation in music education in North America. Having brought to an end the monopoly of one theoretical approach, that of music education as aesthetic education (MEAE), it challenged the music education community with a choice. While Elliott tried to convince music educators of the falsity of MEAE and of the advantages of his own conception, Bennett Reimer and his proponents defended their position. The polemic between “aestheticians” and “praxialists,” represented by the exchange between Reimer and Elliott in the Bulletin of the Council for Research in Music Education [1996, No. 128], brought to light a methodological deficiency in music education theory, namely, its inability to provide tenacious theoretical footing which could help settle the dispute. How do changes in the theory and practice of music education occur? What are the rational standards by which to judge theories? What is the nature and logic of a dynamic development in music education theory and practice? These and similar questions have remained largely unapproached by music educators.

    The present paper is an attempt to initiate a discussion on these issues by exposing one possible metatheoretical strategy. As the title suggests, the strategy I am going to employ is based on the concept of paradigm. This Greek term, which was once used to describe Platonic ideas, nowadays circulates broadly in descriptions of transformative processes in nearly every domain of life: we hear about paradigm shifts in musicology, the job market and tax policy, genetics, and so on. It is usually used as a kind of unproblematic category with a precisely defined and commonly accepted meaning, so that no indication of the origin of its usage in a given context is provided. In most cases, however, it turns out that “paradigm shift” is equivalent to and stands for “change” of whatever type, scope, or depth and is preferred because it is fashionable or perhaps because it sounds more pretentious. In contrast to this tendency, both “paradigm” and “paradigm shift” are taken here most seriously and their usage is intimately related to the tradition in philosophy of science established by Thomas S. Kuhn’s The Structure of Scientific Revolutions. This may appear dubious after we have been recently told that the impact of this book and the very concept of paradigm have “been largely, but not entirely, for the worse,” namely, “to dull the critical sensibility of the academy” and “to kill the historicist impulse.” “Kuhnification,” it has been argued, is responsible for the suppression of free inquiry and bringing about “paradigmitis,” a syndrome characterized by “a collective sense of historical amnesia and political inertia.” Nonetheless, it is my contention that the paradigm approach per se is not dismissed by the criticism of Kuhn’s conception and can be modified in such a way as to be made fruitful for structuring music education discourse and the explication of theory development in music education. To show how this is possible is the task before me.

    Underlying this project is the idea that theory development in music education can be adequately grasped and appraised in terms of units more general than specific theories, which I shall call paradigms. The reason I give preference to this term, rather than choosing another from the conceptual frameworks that have been suggested throughout history—“research tradition” (Laudan), “research programme” (Lakatos), “comprehensive cosmological point of view” (Feyerabend)—is that I find it more pertinent: it is neutral enough and allows for the modifications which are necessary in view of the specific nature of music education discourse.

What Is Philosophy of Music Education and Do We Really Need It?
Elvira Panaiotidi, 2002

    ABSTRACT. The article deals with the problem of the disciplinary identification of the philosophy of music education. It explores alternative approaches to the philosophy of music education and its relation to musical pedagogy. On the basis of this analysis an account of the philosophy of music education as a philosophical discipline is suggested and its specific function identified.

    < final three of five paragraphs in the Introduction follow >

    Two preliminary remarks are to be made at the outset in order to clarify the discussion of the opening section. The first one is that by “philosophy of music education” is meant a scholarly and curriculum discipline, a special field of study as it was shaped in North America in the second half of the 20th century. The second is my suggestion that a proper understanding of the nature, constitution and specific character of the philosophy of music education may be achieved from the perspective of the historical development of general educational philosophy. Such an approach is relevant for good reason: the circumstances of the twentieth-century origin of the philosophy of music education demonstrate that it appeared under the direct influence of educational philosophy, which stood as a model for the authors of the first music education “philosophies.” To paraphrase Bennett Reimer, the fortunes of philosophy of music education are precisely parallel with the curve of philosophy of education as a whole but at an interval of an inch or two below (Reimer, 1989, p. 216).

    Philosophy of education is in a certain sense a “crisis” discipline. It was called into existence by the political and social tensions which gave impetus to the rise of interest in educational issues. Educational institutions and policies (with “progressive education” as a cause célèbre) were declared the source of all troubles and at the same time were given the responsibility of overcoming the negative tendencies at all levels and in all spheres of social life. This climate imposed the need to rethink the fundamental premises of the whole educational enterprise. So philosophy of education was put to work.

    As to the philosophy of music education, it appeared as this general movement extended into the domain of art education, and it derived from educational philosophy the very idea of the discipline.

Additional resources.

What Is Music? Aesthetic Experience versus Musical Practice
Elvira Panaiotidi, Russian Academy of Sciences
Philosophy of Music Education Review 11.1 (2003) 71-89

Music Education for Changing Times (Scribd)
Thomas A. Regelski & J. Terry Gates, Editors

Aesthetic Music Education and the Influence of Bennett Reimer

David J. Elliott Music Education as/for Artistic Citizenship Extract


^ Diane Ravitch on School Reform, Parts 1 & 2
From NYU’s Radical Film and Lecture Series, March 2010
Art will be the last bastion when all else fades away.
~ Timothy White (b 1952), American rock music journalist

Re: Notes on the Paradigm Crisis

Postby Wombaticus Rex » Mon Jan 14, 2013 9:54 pm

Via: ... ck-picking

The Observer's panel of stock-picking professionals has been undone in our 2012 investment challenge by a ginger feline called Orlando who spent time paw-ing over the FT.

The Observer portfolio challenge pitted professionals Justin Urquhart Stewart of wealth managers Seven Investment Management, Paul Kavanagh of stockbrokers Killick & Co, and Schroders fund manager Andy Brough against students from John Warner School in Hoddesdon, Hertfordshire – and Orlando.

Each team invested a notional £5,000 in five companies from the FTSE All-Share index at the start of the year. After every three months, they could exchange any stocks, replacing them with others from the index.

By the end of September the professionals had generated £497 of profit compared with £292 managed by Orlando. But an unexpected turnaround in the final quarter has resulted in the cat's portfolio increasing by an average of 4.2% to end the year at £5,542.60, compared with the professionals' £5,176.60.

While the professionals used their decades of investment knowledge and traditional stock-picking methods, the cat selected stocks by throwing his favourite toy mouse on a grid of numbers allocated to different companies.

The challenge raised the question of whether the professionals, with their decades of knowledge, could outperform novice students of finance – or whether a random selection of stocks chosen by Orlando could perform just as well as experienced investors.

The result indicates that the "random walk hypothesis", popularised in economist Burton Malkiel's book A Random Walk Down Wall Street, is perhaps truer than we thought. Malkiel's book explores the idea that share prices move completely at random, making stock markets entirely unpredictable.
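The random walk idea is easy to sanity-check with a toy Monte Carlo model of the challenge's rules (a £5,000 stake split across five stocks, rebalanced over four quarters). Everything below is an illustrative sketch of my own: the function name and the assumption of independent uniform quarterly returns of ±15% are arbitrary choices, not calibrated to the FTSE All-Share.

```python
import random

def simulate_portfolio(n_stocks=5, n_quarters=4, start=5000.0, seed=None):
    """One notional £5,000 portfolio split evenly across five random picks.

    Toy assumption: each stock's quarterly return is an independent
    uniform draw in [-15%, +15%].
    """
    rng = random.Random(seed)
    values = [start / n_stocks] * n_stocks
    for _ in range(n_quarters):
        values = [v * (1 + rng.uniform(-0.15, 0.15)) for v in values]
    return sum(values)

# Two groups of 1,000 "pickers" -- toy-mouse throwers and professionals --
# drawing from the same return distribution. Neither group has an edge:
# both averages land near the £5,000 starting stake, while individual
# portfolios scatter widely, so a cat beating the pros in any one year
# is unremarkable.
cats = [simulate_portfolio(seed=i) for i in range(1000)]
pros = [simulate_portfolio(seed=1000 + i) for i in range(1000)]
print(round(sum(cats) / len(cats), 2), round(sum(pros) / len(pros), 2))
```

If prices really do follow a random walk, Orlando's margin of victory is just sampling noise of exactly this kind.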

"It's time to crack open the Whiskas," said a good-humoured Justin Urquhart-Stewart. "The cat's got talent." To celebrate his success, Orlando's owner, former Cash editor Jill Insley, has bought him a red collar in the style of Urquhart-Stewart's omnipresent red braces.

All but one of Orlando's stocks (Morrisons) rose during the last three months of the year, including specialist plastics and foam company Filtrona, which Orlando had hastily swapped for under-performing Scottish American Investment Trust in September.

By contrast, the professionals refused to swap any stocks at the end of the third quarter and paid the price. British Gas fell by 19% and Imagination Technologies dropped by 16.8%, dragging their portfolio down by an average 7.1%.

The students may have finished last, but displayed the best performance of all the teams in the final quarter, their portfolio increasing by an average 5.4%, including a fantastic performance of 17.4% for property company Savills.

Their trading decisions were key: at the end of the final quarter they swapped Mulberry for Aviva and Betfair for Tesco. In the final quarter, Aviva's share price increased by 17% (compared with a rise of only 6.6% for Mulberry during that time) and Tesco rose by 1.2% (far superior to a fall in the Betfair share price of 5.4%).

Nigel Cook, deputy headteacher at the John Warner School, said: "The mistakes we made earlier in the year were based on selecting companies in risky areas. But while our final position was disappointing, we are happy with our progress in terms of the ground we gained at the end and how our stock-picking skills have improved."

Paradigm Crisis in the Arts | Sir Ken Robinson

Postby Allegro » Wed Jan 30, 2013 3:24 am

…but I’d say with some certainty that humans in charge of promoting ideas, or, for purposes of this thread, changing paradigms, have demonstrated and will demonstrate the ponderous hand, because the legacies from which those people are working were great training manuals. There. I’ve written it, again, almost copied word for word from another thread, somewhere! :D

Yet for a collection of The Best of Sir Ken Robinson, what better thread could there be for the videos I’ve heard? Robinson speaks for me: at moments I can be disgusted and throw my hands in the air, and eventually wipe the dust away when being entertained by it all. For a while longer.

These are notes I’ve paraphrased or transcribed from TED or from Robinson’s videos, I’m sure.

    Sir Ken Robinson makes a humorously entertaining and profoundly moving case for creating an educational system that nurtures (rather than undermines) creativity. He argues that creativity is squandered because we’ve been educated to become good workers, rather than creative thinkers. Students with restless minds and bodies—far from being cultivated for their energy and curiosity—are ignored or even stigmatized, with terrible consequences. “We are educating people out of their creativity,” Robinson says.

    “Everything in human culture has flown from this ability [that is, to conceive through the power of imagination]; the ability to conceive of alternatives; to conceive of new possibilities; to conceive of a past and of a future. Not just one past or one future, but multiple pasts and multiple futures. … This to me is a fundamental power. … And my great concern is that in education, which should be the process by which we cultivate this power, we almost systematically stifle it.”
    ~ Sir Ken Robinson


^ Changing Paradigms | Sir Ken Robinson

^ Changing Education Paradigms | Sir Ken Robinson
RSA Animate

^ Do schools kill creativity? | Sir Ken Robinson
February 2006 TED Talk

^ Bring on the learning revolution! | Sir Ken Robinson
February 2010 TED Talk | TED NOTES. In this poignant, funny follow-up to his fabled 2006 talk, Sir Ken Robinson makes the case for a radical shift from standardized schools to personalized learning—creating conditions where kids’ natural talents can flourish.

The Quasi Government | Goals, Percentages, Quotas

Postby Allegro » Sun Aug 11, 2013 1:26 am

Highlights mine.
The Quasi Government: Hybrid Organizations with Both Government and Private Sector Legal Characteristics

Kevin R. Kosar
Analyst in American National Government
June 22, 2011 [36 pp.]

    Conclusion: Paradigms in Conflict [p. 34]

    Many observers believe that the underlying attraction of the quasi government organizational option can be traced to an innate desire of organizational leadership, both governmental and private sector, to seek maximum autonomy in matters of policy and operations. 114 With respect to the governmental sector, however, this natural centrifugal thrust of organizational management has been historically held in check by a set of strong counter or centralizing forces. The constitutional paradigm (model) of management was, and remains, based on laws and accountability structures. The President is chief manager of the executive branch and manages through the appointment of officers, the administration of general management laws, and the budgetary process. The highest value in this public law model of management is political accountability for the exercise of governmental power, not efficiency or some other value. 115

    A unified executive structure, coupled with hierarchical lines of authority and accountability, was a theoretical product of the founding fathers. The President was viewed as the chief manager of the administrative system. The governmental and private sector cooperated, but were kept legally distinct in the interests of protecting citizens’ rights against a potentially arbitrary government. 116 Institutions not in the executive branch, but partaking of the attributes of governmental status were looked upon with suspicion as aberrations breaching the constitutional wall between the governmental and private sectors.

    These management values, however, were challenged in the 1960s by a new management theory (public choice theory) emanating from academia, and found expression in the election of political leaders, here and abroad, committed to market principles. The underlying premise of the entrepreneurial management paradigm is that the governmental and private sectors are essentially alike in the fundamentals, and thus subject to many of the same economically derived behavioral norms. 117 The supporters of this position promoted their values and concepts of management internationally under the rubric of New Public Management (NPM) and domestically as part of the National Performance Review (NPR).118

    Skeptics of the new entrepreneurial management paradigm say the centrality of public law is displaced by the centrality of economic axioms; the focus of management, once the citizen, is now the customer; and departmental integration as the norm is replaced by agency dispersion and managerial autonomy. They see political accountability and due process being superseded by the primacy of performance and results, however defined. Critics believe that the historic wall between the governmental and private sectors is being breached not merely as a managerial convenience, but as a matter of policy; so rather than a wall, government entrepreneurs are forging a web of public/private partnerships.

    Given the great differences between the basic premises guiding the two schools of thought, those favoring traditional public law principles versus those favoring entrepreneurial approaches, it is not surprising that their attitudes towards the quasi government are also at odds. Those advocating entrepreneurial management tend to place high value on managerial flexibility and the setting of numerical performance standards. Many are opposed in principle to hierarchical leadership structures and emphasize the desirability of change and managerial risk-taking. This set of values with respect to governmental management makes the hybrid organization within the quasi government an attractive option.

    Those favoring the public law approach to management, on the other hand, argue that the purpose of government management is to implement the laws passed by Congress, not necessarily to maximize performance or to satisfy customers. While accountability and effective performance are generally compatible objectives, in those unusual instances where these values come into conflict, they believe that the democratic value of political accountability should take precedence over the managerial value of maximizing efficiency and outcomes. Many of the public law advocates, not unexpectedly, tend to see quasi governmental entities as instruments of relatively small constituencies whose interests are promoted over the interests of the whole people as represented in their democratic institutions. Thus, they often oppose such quasi governmental hybrid entities as GSEs [Government-Sponsored Enterprises] because they believe those who benefit (shareholders and management) are separate and apart from those who stand at risk (the taxpayers).

    Supporters of performance based criteria for government management stress the need for flexibility, competition, and performance as desirable goals. The pre-eminence of these values, in their view, provides the critical elements in developing creative and successful management. In this respect, therefore, many believe that the quasi government is where much of the future lies, away from what they characterize as the stultifying impact of alleged micromanagement, both congressional and executive, general management laws (e.g., personnel regulations), and budgetary constraints. In the quasi government, some argue, management can do whatever is not forbidden to do by law, thus providing the basis for innovation and partnerships. Accountability will be for performance, however it may be defined and measured, rather than for strict conformance to law. In the new entrepreneurial management paradigm, success, proponents say, will be measured by polling the customers on their trust and satisfaction with the delivery of governmental services.

    Thus, the emergence and growth of the quasi government can be viewed as either a symptom of a decline in our democratic system of governance or as a harbinger of a new, creative management era where the principles of market behavior are harnessed for the general well-being of the nation. 119 One thing is for sure, however: debate between the competing management paradigms is over important issues, such as the legitimacy and utility of the quasi government, and is likely to continue into the foreseeable future.

    Author Contact Information
    Kevin R. Kosar
    Analyst in American National Government
    kkosar, 7-3968

