LET'S TALK ABOUT IT!

More on evaluation, measures and rankings...

Excerpt from C. Corsini (2014), La lezione dimenticata: l'INVALSI e la valutazione di scuole e insegnanti, in I. Giunta (ed.), Flessibilmente. Un modello sistemico di approccio al tema della flessibilità, Pensa, Lecce-Brescia, pp. 175-203.

 

[...] what tends to be underestimated is the extent to which, in education, choosing one measure rather than another is always a political choice: indicators refer, after all, to things we care about enough to measure (Fitz-Gibbon, Kochan 2000). If we evaluate in order to improve, the predictable slogan of every accountability system, it is because we hold an idea of what a good school is, and with respect to that idea value judgments carry a legitimate weight that cannot be passed over in silence when evaluation takes place. Failing to spell out, or spelling out only insufficiently, what a quality school is, out of deference (naive or otherwise) to an untenable idea of the neutrality of education, evaluation and scientific research, is in turn a political choice. And political, too, is the choice of an evaluative approach that treats school leaders, teachers and students as passive objects of measurement rather than as active subjects with whom to negotiate the why, the who, the what and the how of evaluation. To put it with Rom Harré and Paul Secord (1972, p. 84): for scientific purposes treat people as if they were human beings. As the genesis and development of docimology shows, if educational evaluation is to be fair and effective it requires negotiating different points of view and experiences, both outside and inside schools [...].

There is a pressing need to rethink the accountability system so that it puts schools and teachers back at the centre, on the one hand trusting them and on the other demanding that they genuinely "account for" their work, an accounting that does take on a public dimension, but certainly not the one that comes dressed in the absurd ranking form of bad science. We need to pick up a conversation that has been interrupted almost everywhere (and here the teaching profession bears very serious responsibility), a conversation that should have tied autonomy to self-evaluation and used the latter to involve in the educational process the much-invoked stakeholders (families, public opinion): interested parties who are entitled to a say in what happens inside schools, but who cannot trade that right for the supply and consumption of statistics of dubious scientific value. The irreducibly democratic dimension of an effective culture of evaluation must be revived. Accountability must abandon every authoritarian tendency to present itself as an absolutely objective measure and become instead a shared judgment. A negotiated, participatory and public foundation for educational evaluation, moreover, guarantees, better than the quantitative formulation of its judgments ever could, the only form of objectivity attainable in human affairs: intersubjectivity.

 

I worked for years on the Pisa study. I take a critical view of how the survey is used, and some of the concerns expressed below are, on the whole, well founded, even if, in general, my impression is that Pisa gets blamed for the flaws and weaknesses that each country displays in its own education policy. Let me recall that no test is valid in absolute terms: validity depends on the purposes for which we administer it and on the use we make of the data. So: are we really sure that the problem is Pisa? Pisa compares results gathered through semi-structured tests. It is not methodologically equipped to tell us which education system, let alone which school, is qualitatively better. Whoever draws such inferences is trading on the scientific illiteracy of the audience, and is also confusing measurement with evaluation.

And yet: are the authors of the letter, and Pisa's critics more generally, aware that more than half of the questions are open-ended? Aware that many items ask students to set out their own point of view critically? Or that a fundamental dimension of the "reading literacy" construct is the critical evaluation of the content and/or form of what has been read?

In short: my impression is that the Pisa survey is more contested than it is known.

Still, I agree with the gist of the criticisms: in general, the use made of the measures is aberrant, and in many contexts it undermines the possibility of using them to inform our judgment of the equity and effectiveness of education systems.

Here you will find an excellent contribution by Bruno Losito on the matter.

 

Cristiano Corsini


1 The letter from the academic community on the role played by OECD-PISA in recent years


Dear Dr Schleicher,

 

We write to you in your capacity as OECD’s (Organisation for Economic Co-operation and Development) director of the Programme for International Student Assessment (Pisa). Now in its 13th year, Pisa is known around the world as an instrument to rank OECD and non-OECD countries (60-plus at last count) according to a measure of academic achievement of 15-year-old students in mathematics, science, and reading. Administered every three years, Pisa results are anxiously awaited by governments, education ministers, and the editorial boards of newspapers, and are cited authoritatively in countless policy reports. They have begun to deeply influence educational practices in many countries. As a result of Pisa, countries are overhauling their education systems in the hopes of improving their rankings. Lack of progress on Pisa has led to declarations of crisis and “Pisa shock” in many countries, followed by calls for resignations, and far-reaching reforms according to Pisa precepts.

 

We are frankly concerned about the negative consequences of the Pisa rankings. These are some of our concerns:

 

• While standardised testing has been used in many nations for decades (despite serious reservations about its validity and reliability), Pisa has contributed to an escalation in such testing and a dramatically increased reliance on quantitative measures. For example, in the US, Pisa has been invoked as a major justification for the recent “Race to the Top” programme, which has increased the use of standardised testing for student-, teacher-, and administrator evaluations, which rank and label students, as well as teachers and administrators according to the results of tests widely known to be imperfect (see, for example, Finland’s unexplained decline from the top of the Pisa table).

 

• In education policy, Pisa, with its three-year assessment cycle, has caused a shift of attention to short-term fixes designed to help a country quickly climb the rankings, despite research showing that enduring changes in education practice take decades, not a few years, to come to fruition. For example, we know that the status of teachers and the prestige of teaching as a profession have a strong influence on the quality of instruction, but that status varies strongly across cultures and is not easily influenced by short-term policy.

 

• By emphasising a narrow range of measurable aspects of education, Pisa takes attention away from the less measurable or immeasurable educational objectives like physical, moral, civic and artistic development, thereby dangerously narrowing our collective imagination regarding what education is and ought to be about.

 

• As an organisation of economic development, OECD is naturally biased in favour of the economic role of public [state] schools. But preparing young men and women for gainful employment is not the only, and not even the main goal of public education, which has to prepare students for participation in democratic self-government, moral action and a life of personal development, growth and wellbeing.

 

• Unlike United Nations (UN) organisations such as UNESCO or UNICEF that have clear and legitimate mandates to improve education and the lives of children around the world, OECD has no such mandate. Nor are there, at present, mechanisms of effective democratic participation in its education decision-making process.

 

• To carry out Pisa and a host of follow-up services, OECD has embraced “public-private partnerships” and entered into alliances with multi-national for-profit companies, which stand to gain financially from any deficits—real or perceived—unearthed by Pisa. Some of these companies provide educational services to American schools and school districts on a massive, for-profit basis, while also pursuing plans to develop for-profit elementary education in Africa, where OECD is now planning to introduce the Pisa programme.

 

• Finally, and most importantly: the new Pisa regime, with its continuous cycle of global testing, harms our children and impoverishes our classrooms, as it inevitably involves more and longer batteries of multiple-choice testing, more scripted “vendor”-made lessons, and less autonomy for teachers. In this way Pisa has further increased the already high stress level in schools, which endangers the wellbeing of students and teachers.

 

These developments are in overt conflict with widely accepted principles of good educational and democratic practice:

 

• No reform of any consequence should be based on a single narrow measure of quality.

 

• No reform of any consequence should ignore the important role of non-educational factors, among which a nation’s socio-economic inequality is paramount. In many countries, including the US, inequality has dramatically increased over the past 15 years, explaining the widening educational gap between rich and poor which education reforms, no matter how sophisticated, are unlikely to redress.

 

• An organisation like OECD, as any organisation that deeply affects the life of our communities, should be open to democratic accountability by members of those communities.

 

We are writing not only to point out deficits and problems. We would also like to offer constructive ideas and suggestions that may help to alleviate the above mentioned concerns. While in no way complete, they illustrate how learning could be improved without the above mentioned negative effects:

 

1 Develop alternatives to league tables: explore more meaningful and less easily sensationalised ways of reporting assessment outcomes. For example, comparing developing countries, where 15-year-olds are regularly drafted into child labour, with first-world countries makes neither educational nor political sense and opens OECD up for charges of educational colonialism.

 

2 Make room for participation by the full range of relevant constituents and scholarship: to date, the groups with greatest influence on what and how international learning is assessed are psychometricians, statisticians, and economists. They certainly deserve a seat at the table, but so do many other groups: parents, educators, administrators, community leaders, students, as well as scholars from disciplines like anthropology, sociology, history, philosophy, linguistics, as well as the arts and humanities. What and how we assess the education of 15-year-old students should be subject to discussions involving all these groups at local, national, and international levels.

 

3 Include national and international organisations in the formulation of assessment methods and standards whose mission goes beyond the economic aspect of public education and which are concerned with the health, human development, wellbeing and happiness of students and teachers. This would include the above mentioned United Nations organisations, as well as teacher, parent, and administrator associations, to name a few.

 

4 Publish the direct and indirect costs of administering Pisa so that taxpayers in member countries can gauge alternative uses of the millions of dollars spent on these tests and determine if they want to continue their participation in it.

 

5 Welcome oversight by independent international monitoring teams which can observe the administration of Pisa from the conception to the execution, so that questions about test format and statistical and scoring procedures can be weighed fairly against charges of bias or unfair comparisons.

 

6 Provide detailed accounts regarding the role of private, for-profit companies in the preparation, execution, and follow-up to the tri-annual Pisa assessments to avoid the appearance or reality of conflicts of interest.

 

7 Slow down the testing juggernaut. To gain time to discuss the issues mentioned here at local, national, and international levels, consider skipping the next Pisa cycle. This would give time to incorporate the collective learning that will result from the suggested deliberations in a new and improved assessment model.

 

We assume that OECD’s Pisa experts are motivated by a sincere desire to improve education. But we fail to understand how your organisation has become the global arbiter of the means and ends of education around the world. OECD’s narrow focus on standardised testing risks turning learning into drudgery and killing the joy of learning. As Pisa has led many governments into an international competition for higher test scores, OECD has assumed the power to shape education policy around the world, with no debate about the necessity or limitations of OECD’s goals. We are deeply concerned that measuring a great diversity of educational traditions and cultures using a single, narrow, biased yardstick could, in the end, do irreparable harm to our schools and our students.

 

2 PISA's reply

 

The letter by Dr Heinz-Dieter Meyer and other academics (OECD and Pisa tests are damaging education worldwide – academics, theguardian.com, 6 May) makes a series of false claims regarding the Organisation for Economic Co-operation and Development's Pisa programme. There is nothing that suggests that Pisa, or other educational comparisons, have caused a "shift to short-term fixes" in education policy. On the contrary, by opening up a perspective to a wider range of policy options that arise from international comparisons, Pisa has provided many opportunities for more strategic policy design. It has also created important opportunities for policy-makers and other stakeholders to collaborate across borders. The annual International Summit of the Teaching Profession, where ministers meet with union leaders to discuss ways to raise the status of the teaching profession, is an example. Not least, while it is undoubtedly true that some reforms take time to bear fruit, a number of countries have in fact shown that rapid progress can be made in the short term, eg Poland, Germany and others making observable steady progress every three years.

 

Equally, there are no "public-private partnerships" or other "alliances" in Pisa of the type Dr Meyer implies. All work relating to the development, implementation and reporting of Pisa is carried out under the sole responsibility of the OECD, under the guidance of the Pisa governing board. The OECD does, of course, contract specific technical services out to individuals, institutions or companies. Where it does, these individuals, institutions or companies are appointed by the OECD following an open, transparent and public call for tender. This transparent and open process ensures that each task is carried out by those entities that demonstrate they are best-qualified and provide the best value for money. No individual academic, institution or company gains any advantage from this since the results of all Pisa-related work are placed in the public domain.

 

Furthermore, in the article by Peter Wilby (Pisa league tables killing 'joy of learning', 6 May) it is stated that Pearson is overseeing the Pisa 2015 assessment, which is not the case. Pearson was one of a number of contractors who have been appointed through a competitive tendering process to develop and implement Pisa 2015. Pearson's contract to develop the assessment framework has been completed and has now come to an end.

Andreas Schleicher

Acting director of education, OECD

