Fair governance models

Evaluation — Actors, Values, and Metrics

An examination of modes, values, methods, and metrics of evaluation, with the aim of developing more organic, bottom-up evaluation practices in arts and culture.

The Journey

This journey has been nested in a pressing moment for our world, in which overlapping humanitarian, social, and ecological crises accelerated into a global disruption caused by a pandemic of proportions unseen for generations. Life as we know it is changing before our eyes, and we are again reminded of Marx’s famous words, ‘All that is solid melts into air’ – and so melts the cultural and artistic life that we only recently knew and lived as practitioners. This most recent crisis in the stream we have witnessed over the last decade or so serves as another reminder of how urgently we need to pivot towards essential and radical changes in the way we organise life on Earth, in order to ensure the survival of Earth’s ecosystems, including the sustainability of communities, both human and beyond. An important part of that change must involve artistic and cultural production at its core – we must urgently rethink how cultural and artistic production is created, curated, mediated, and accessed. And, equally important: how is cultural and artistic production assessed and measured, based on what values, by whom, and using which metrics?

As a community of communities scattered across Europe and the Middle East and North Africa (MENA) region, RESHAPE has been a perfect playground for gathering and examining a variety of experiences of cultural actors in relation to evaluation processes and their different aspects. It has also given us the opportunity to develop a much bigger and more nuanced picture than would have been possible without RESHAPE. Beyond that, working together provided a lot of comfort and solidarity in times of isolation and uncertainty. RESHAPE has proven to be a reflection- and action-oriented process in which we could gather to critically assess the current state of the artistic and cultural field, as well as imagine and create blueprints for a different, happier common future.

Early on in the process we detected the need to shake the tree of evaluation, using this metaphorical language not only to demonstrate the need to reframe its basic notions, but also to be able to distinguish its various fruits, be they ripe or green, or sometimes already sagging and rotten. Our attempt to examine different modes, values, methods, and metrics of evaluation stems from the desire to contribute to the development of organic, bottom-up evaluation practices in arts and culture. We want our work to result in an exercise towards a potential model of evaluation that can be modified and adapted by organisations within RESHAPE but also beyond its immediate reach. Ideally, the model could be tested on RESHAPE itself once its activities have been realised.

Methodology and approach

Avoiding (but not completely eschewing) the usual survey methodologies, we reached for tools immanent to artistic research: collective work based on observation and on (many) conversations and meditations on the subject, complemented by examining a body of practical and academic research on evaluation and related fields.

One of our important goals was to re-examine, and contribute to the transformation of, the role of expertise and experts – shifting evaluation towards processes of learning and knowledge transfer in order to make them empowering for organisations and the individuals around them, strengthening their ownership over evaluation processes instead of facilitating utilitarian and mercantile logics that are more often than not imposed in a top-down manner. Equally important for us was to learn about the existing tools of evaluation that organisations across Europe, the MENA region, and globally use in their everyday work, to find out which organic practices of evaluation practitioners have developed themselves, and to gain deeper insight into how the knowledge produced in evaluation processes is used and reproduced. Finally, an important part of this endeavour was the need to gain insight into different cultural landscapes, all part of the wider cultural ecosystem within RESHAPE’s horizon, in order to develop a deeper understanding of the problems related to evaluation practices and the ways to overcome them.

Interviews & questionnaire

Through a series of interviews with practitioners active in different countries of Europe and the MENA region we wanted to accomplish two main objectives: learning about participants’ practices and attitudes with regard to evaluation processes, and learning about the contexts in which participants operate, including the nuances of how cultural ecosystems function in different geographies. Interviews were conducted as semi-structured surveys with open-ended questions. Practitioners included members of our Trajectory group Fair Governance Models as well as colleagues operating in different contexts and in different capacities. The questionnaire for the interviews was jointly developed by the team and inquired about: basic organisational structure; organisational attitude and motivation; the methodologies and metrics that organisations use; how evaluation is used and implemented; and how knowledge gained in evaluation processes is communicated. As a complementary tool to the interviews, a short questionnaire was designed that focused on a small number of key questions tackling the values that inform evaluation processes, the usage of knowledge produced by evaluation, and how participants see the future of evaluation processes. The questionnaire was designed primarily to reach the RESHAPE community and to scan the prevalent themes and undercurrents related to evaluation.

Learning resources and inspiration

Different resources have served as inspiration throughout this journey. First and foremost, there were the numerous group and personal encounters with fellow Reshapers as well as other practitioners in our field, during our physical meetings in Lublin, Tangier, Cluj, and Sofia, as well as remote meetings in Zagreb and Athens. These exchanges not only enabled us to gain insight into various interpretations of evaluation processes, existing practices, and overarching questions, but also made us aware that evaluation persists as one of the key questions in the context of cultural governance, especially with regard to the pressing need to develop fair models of governance based on solidarity and the redistribution of power and resources. One of the guiding examples that greatly inspired our team was Bhutan’s Gross National Happiness index, a philosophy structured around governance based on collective happiness/wellbeing and the preservation of life on Earth.

Another great inspiration for our work was provided by the work of Brazilian futurologist Lala Deheinzelin, who developed 4D Fluxonomy, a governance concept founded on the need to elucidate complex connections between cultural, environmental, social, and financial aspects while developing a sustainable future through the application of new economic models.

In addition, the practical application of Fluxonomy principles in an evaluation tool prepared by FARO, a research group of Ibero-American cultural organisations, gave us an incentive to start developing a version of the tool suitable for application within the framework of RESHAPE and adaptable for communities beyond RESHAPE’s scope. Working on the development of a nuanced and intricate evaluation matrix enables these actors to envisage a scenario for a transition period, notwithstanding the speed and magnitude of the changes we are facing in the contemporary world.

The connection between evaluation and governance?

Evaluation is a crucial part of the survival of cultural ecosystems: it enables the creation and systematisation of the knowledge and skills necessary to perpetuate practices that are crucial for institutional and organisational existence, and it enables us to become conscious of practices that are detrimental to organisations and to find ways to unlearn them. On the other hand, evaluations are part of a larger complex of scrutiny and control mechanisms (together with audits, quality assessments, and so on), applied externally and from above in order to justify the existence of these external actors, especially funding bodies, government agencies, private foundations, and various public and private investors in arts and culture.

These two processes are inevitably connected, albeit very different. A number of practitioners we talked to made a distinction between evaluations that organisations initiate themselves as a means to build and share knowledge, seeing these processes as often tacit and organic, leading to organisational empowerment and greater ownership by individuals over activities and crucial processes within the organisation. In contrast, many of the individuals we talked to described evaluation processes organised and led by funding bodies as a form of control: highly formalised processes based on pseudo-quantities (Habib Engqvist and Möntmann 2018).

When we talk about fair governance it is inevitable to think about the values that inform what fair governance actually is (or, rather, should be) about, and about the systems put in place to measure how exactly those values are being put into practice. No less important are the questions of who sets the rules of evaluation, who has a say, and who participates in which capacity. Evaluation is a complex endeavour, burdened with internal conflicts and contradictions: between motives; between the subjects who perform the evaluation and the objects it is performed on; between the tools, methodologies, and metrics used to perform it; and between the frameworks, narratives, and language in which evaluations are interpreted. In that sense, it is much more productive to think about multiple evaluations instead of a single all-encompassing term.

Perhaps one framework that bridges all the inner contradictions of evaluation is that of establishing sovereignty over interpretation and valuation (Zembylas, 2019). Sovereignty over interpretation, or the right to establish the narratives determining one’s work, is a core ingredient of the governance of institutions and organisations regardless of their size, mission, and formal structure.

The formal language and requirements of evaluation processes led by external actors have permeated organisational structures, often leading towards institutional isomorphism (DiMaggio & Powell, 1983), that is, increased similarity and homogeneity among actors functioning in the same field. Furthermore, by reducing evaluations to their technical aspects while avoiding the tricky questions pertaining to ownership and direction, evaluations can often become a tool for legitimising no-change. Many organisations, including some of the ones we had the opportunity to talk to, actively resist and challenge these practices.

Organisations will often challenge the official narratives of evaluation by developing alternative routes and methodologies, leaning on affective and community-oriented models of evaluation. All the colleagues we talked to, no matter where they operated, stressed human contact, conversation, and orientation towards the community as key ingredients of bottom-up evaluation. At the same time, they all stressed the need to dedicate more time and resources to developing different evaluation methodologies in order to gain deeper knowledge about the intricate connections between what we do, how we do it, and how it corresponds with the context in which we operate. We hope that our work, and this report as one of its essential parts, makes a small contribution towards greater understanding in that sense.

Context of the research

The first part of our research contextualises matters related to evaluation through a general description of the circumstances in which our interviewees operate, drawing lines of contrast and similarity between the different cultural ecosystems in which their practices are nested. In developing this research, we interviewed our close collaborators, colleagues engaged in the RESHAPE process, as well as colleagues with whom we engaged outside the project’s scope. In total, we conducted eight in-depth interviews with 14 cultural workers operating in Turkey, Croatia, Serbia, Palestine, Bosnia and Herzegovina, the UK (Scotland), Switzerland, and Belgium.

The majority of our interviewees were individuals engaged in non-institutional cultural production, that is, independent cultural actors from various countries in Europe and the MENA region. Most interviewees’ work is nested within a collective or organisation, although some are active as freelancers. These organisations vary in terms of formal frameworks, disciplines, size, and scope of activities. They are active across multidisciplinary forms of artistic expression (performing arts, dance, theatre, visual arts, media, and so on) as well as in diverse critically engaged cultural practices. The organisations we spoke with also come in various shapes and sizes, ranging from small artistic initiatives to veteran institutions, from informal collectives to long-established organisations, and from organisations focusing their work around one or several annual events or projects to organisations engaged in multi-stakeholder structures with complex programmatic dynamics. Despite the great differences between organisations, there are many similarities, mainly with regard to values and basic programmatic orientation: a strong inclination towards socially engaged arts and culture, working closely with communities, critical attitudes towards the current social and political context, and active engagement with various actors in fostering positive change.

The diversity of geographical contexts, together with the attendant social and political circumstances, has marked our endeavour and represents the great value of this research. Our interviewees came from eight different countries and territories. Besides being embedded in their local and national contexts, all of the actors we talked to are actively engaged in international collaboration. Both bigger and smaller urban as well as rural areas are relevant to our interviewees’ work. Even though national cultural policies and international funders greatly shape the organisational field, our interviewees all emphasised the importance of being embedded in a local context: the cities, towns, and various other local communities that hold the potential to bridge the gap between the complex workings of international bodies and national policy-making institutions, which often fail to recognise the needs and the importance of artists and cultural workers in social development:

Independent evaluator, Palestine: ‘I find it easier to have dialogues with city governments so I am a strong advocate for city development in the [MENA] region. Because cities tend to have more elections and less political appointments. I like to work with cities and I have considered them to be more receptive than the Ministry of culture.’

There are great and well-known disparities between the geographical and socio-political contexts in which our interviewees operate, including disparities in the institutional and financial sustainability of organisations as well as disparities determined by differences in how cultural spheres are organised and in the overall objectives of cultural policies (and of politics in a wider sense). As much as the positions of cultural operators in Western European countries differ from those in Turkey, Palestine, or Bosnia and Herzegovina (especially in terms of available resources), our interviewees have much in common in terms of the conditions in which they operate. First of all, they face similar pressures of top-down imposed priorities when it comes to the conditions of production, both in terms of content and in relation to delivery to audiences. Regardless of their specific geographical locale, all of our interviewees testified to the difficulty of constantly adjusting to the demands of decision-making and/or funding institutions, where tools for evaluation are often seen as a negative type of instrumentalisation of independent cultural actors in a larger scheme of things. In Western European, so-called ‘developed’ countries, these pressures are often expressed through demands for structural rationalisation and the adoption of a corporate organisational culture. Up to a certain level, similar tendencies are present in Eastern Europe and the MENA region, with one notable difference: while in Western Europe the pressure often comes from governmental bodies and/or agencies connected to governments, in the MENA region and Eastern Europe the pressure often stems from diversified sources (government and private funding bodies), as the determining need is to obtain the finance and other resources necessary for production, which are scarce. This often leads to a situation that can be crudely described as follows: while in Western countries the pressure falls primarily on the organisational structuring, management, and governance of organisations, in the East it is felt more through programming priorities that often determine the content of organisations’ work.

In the East, collectives rely heavily on the personal enthusiasm of the cultural actors involved, often without a formal professionalisation of roles and relations. In Western countries, meanwhile, organisations are being coerced into a hyper-structured ecosystem that lacks the potential for experimentation and for developing synergistic momentum outside of siloed views on artistic and cultural production:

Cultural operator, Serbia: ‘It’s also that the small organisations here, everything between civil society organisations, small start-ups ... Some things that function as a rock band or whatever, and also a lot of personal, almost family relations. It’s very difficult to formalise. It’s not a professional thing. It doesn’t have professional structure, even though it has the results that are high quality. And organisations like that produce lots of good cultural content in Serbia. It does not apply any kind of professional logic. It’s difficult because they’re not organised. They’re like some friends decided to do something together. And then you have everything in between your relationship.’

Cultural operator, Belgium: ‘We have top-down management being implemented more and more, and a sort of rationalisation happening in this very fragmented art field. If anything, the Flemish art scene is a very hybrid one, with super small organisations, many institutions collaborating in many diverse ways and the government doesn’t really like this uncontrollable situation – big spaghetti that we work in. So, they’re now implementing categorisation and with this categorisation come new rules. ... So we are facing a big rationalisation and categorisation of the field in very strict categories and with these categories come evaluation points which are different but it also creates a total immobility, because if you’re classified as a certain type of organisation, you should do a certain work and you cannot reinvent or redefine yourself or maybe have a different mobility inside of this, because you would not meet the evaluation point.’

Perhaps the biggest contrast between the positions of actors from Western Europe and those from Eastern countries became visible during an interview with a colleague who operates in Turkey. While colleagues in the West struggle with excessive structuring and pressure to adopt business practices, cultural actors in Turkey avoid formal registration in order to escape being pressured by the current political and administrative regime:

‘We use the benefits of being unregistered. We can do anything political, about the government. No one knows about it, and on paper we don’t exist. It’s a plus to defend these rights.’

From these insights it is clear that all the organisations we were in touch with need to navigate a set of complex, often oblique rules within their respective contexts, and these circumstances have a major impact on the development of organisational narratives and practices of evaluation. One thing all the actors included in this brief research had in common was the search for places and platforms that would allow more freedom from excessive coercion: places for experimentation, failure, and reflection. The next two parts of this research will look in more detail at existing evaluation practices as well as the new tendencies that organisations are developing in this regard.

Collecting Narratives as Data

In this text, we attempt to understand the points of view, methodologies, and motivations of arts and culture practitioners, learning about complementary tools that embrace testimonies and storytelling. We look at experimentation with the shared creation of knowledge, and at reshaping our evaluations towards processes of learning and transferring that knowledge, in order for it to be truly empowering.

We have chosen to share these narratives as raw data, as conversations around organisational or individual practices and approaches that confirm the connections between evaluation and governance and the theoretical findings that are already available in the existing literature and research.

This section follows the logic of both the interviews and the questionnaire, including the presentation of words, terminologies and quotes from the interviews and the compilation of our data. We attempt to reveal commonalities without erasing complexities, examining the language and the narratives as data that can be reflected upon. These are neither uniform nor frozen in time. They resonate with our fundamental question, which can now be brought into perspective.

From the questionnaire: mapping narratives

How do we present the data we have collected? How do we look at the answers and the material we have at hand and make sense of it? To answer these questions, we carried out an experiment in visualising qualitative and narrative materials using Graph Commons, a collaborative online platform for making and openly publishing interactive network maps, created by Burak Arikan (Turkey). Graph Commons is dedicated to investigative journalism, civic data research, archival exploration, creative research, and organisational analysis. Using a simple interface, it allows users to compile data, define and categorise relationships, and transform them into interactive network maps, discovering new patterns and sharing insights about complex issues. Maps can be publicly shared and collectively edited. The act of network mapping becomes an ongoing, shared practice among contributors and collaborators (Arikan 2015).
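To give a concrete sense of what such a mapping involves, the short Python sketch below shows one possible way of preparing interview narratives as a list of node-to-node relations and exporting them as a CSV file for import into a network-mapping tool. It is only an illustration under our own assumptions: the respondents, themes, column names, and file name are invented for the example and do not describe the Graph Commons platform itself or our actual dataset.

import csv

# Each row links a respondent (node) to a theme (node) that appeared in their
# answers; the relation type is the edge label. All entries are hypothetical.
edges = [
    {"from_type": "Respondent", "from": "Cultural worker, Serbia",
     "edge": "MENTIONS", "to_type": "Theme", "to": "internal evaluation"},
    {"from_type": "Respondent", "from": "Cultural worker, Belgium",
     "edge": "MENTIONS", "to_type": "Theme", "to": "reporting to funders"},
    {"from_type": "Respondent", "from": "Independent evaluator, Palestine",
     "edge": "MENTIONS", "to_type": "Theme", "to": "impact"},
]

# Write the relations to a CSV file that a network-mapping tool can import.
with open("evaluation_narratives.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["from_type", "from", "edge", "to_type", "to"])
    writer.writeheader()
    writer.writerows(edges)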

Cultures of evaluation

Do organisations and practitioners have a culture of evaluation? If so, what motivates the evaluation process, and what drives the organisational attitude towards evaluation? Our interviewees confirmed that, both in the literature and in practice, there are two types of evaluation, and they are opposed to one another. From cultural practitioners in the United Kingdom:

For us there are two aspects: there is the formalised evaluation and there is the informal evaluation. Often formalised evaluation is to enforce somebody else’s agenda rather than our own. The informal would be more like a critical feedback within the team or an ongoing conversation with the artists, following a set of criteria or questions depending on the artist you are working with.

What first transpires here is the association of evaluation with external factors. This ‘external evaluation process’ primarily relates to project funding or institutional support that imposes monitoring methodologies, requirements, processes, and specific agendas on the organisations that receive such support.

One cannot speak about a single form of evaluation, however, and although external evaluation seems to predominate, organisations also develop their own processes of internal evaluation. For some, the internal evaluation is a formalised process that is entirely inherent to and part of the culture of the organisation itself and its own programmatic strategy. From a cultural worker in Serbia:

We are doing evaluations on a regular basis. This is not connected to the projects. Firstly, it is important to keep talking with the people in our organisation. The director of the organisation and I have evaluation meetings with every employee. Twice a year, we speak with everyone about what they did in the previous period. ... We also have a strategic planning meeting at the end of each year. We do an evaluation of the previous year, of all the plans and the results. ... Every team makes their own smaller action plan for the upcoming year. We set some general results that we want to see at the end of the year. That is something we are looking at throughout the whole year.

Both internal and external evaluation processes are sometimes fully integrated into the culture of the organisation. The motivations for each process are clearly established, and while internal evaluation is seen as a process that serves and accompanies the development of the organisation in programmatic and structural terms, external evaluation is still described almost exclusively as a forced transposition of the organisation’s vision, values, and programmes into a very limited and limiting set of quantitative criteria. From a cultural worker in Belgium:

We work on evaluation at three levels. We have the process of re-evaluation that we have in our collective governance structure. We don’t really work with any methodology, but around questions. It is a lot about re-asking questions and evaluation protocols; there’s a constant ritual of rethinking and re-evaluation. ... We are doing evaluations with our artists on a daily basis, and this is really embedded in our practice. ... With the long-trajectory artists with whom we collaborate for years at a time, sometimes it’s a ritual visit to their house or their workspace. We all go there, we sit together, we talk, we evaluate all aspects of collaboration and we formulate new lines for the future. And then – let’s say at the residency level – we do much the same, but it’s much more on an invitational basis, not an obligation. Lastly, there’s the realm of how we need to report to the government, and what we try to do is get a lot of the narratives we have amongst us, and to translate that into the form that the government has given to us. There is another level of percentages that we need to prove, such as income, number of shows produced, audience numbers, and so on. There’s a whole set of criteria we need to evaluate very objectively, on governance, a set of rules that we need to go through. We need to say that we do this and that, and then we get a score.

In some organisations, evaluation is defined only as external and is motivated by the monitoring of funding and reporting processes. In this case, practitioners see such evaluations as distinct from other types of feedback sessions, meetings, or informal discussions. Although a specific terminology might not be formally identified, this is still part of the organisation’s culture. Informal and organic, the process still informs programmatic and strategic orientations. The cultural worker from Serbia:

We have a steering committee, because we have projects and programmes, and we have a venue. I think maybe that is also important because we constantly cooperate with a lot of people. We have a lot of people coming to the house and discussing everything. I think that that is what we’re looking for here. I hadn’t been thinking about it that way before. We have a lot of informal stuff happening. If someone were to sit down and call it differently, you know, they would give all that a proper name. We don’t, because it just happens. I think that actually, all of it can be considered to be some form of evaluation. But somehow it just goes along the way it goes.

Evaluation in practice: tools, methodologies and metrics

The methodologies, criteria, or metrics employed vary according to the different types and cultures of evaluation defined by practitioners’ own motivations.

Funding programmes, institutions and/or private foundations set clear methods and criteria that lead the external evaluation process. If the criteria or metrics used in the methodology are determined in advance, it can be especially difficult for small or medium-sized organisations, or for the evaluators themselves, to apply them. The diversity of contexts and ecosystems, the complexity of projects and the unpredictability of the creative process often make the strict implementation of these methodologies difficult. The evaluator for the Creative Europe programme, Turkey:

It is a very systematised process, with a set of criteria that are very well defined. ... Take the activities, for example: are they concrete, deliverables, outcomes, measurable or not? Here, we are also expected to score our own evaluation strategy. Does the project have an evaluation strategy, a qualitative and quantitative base? What kind of deliverable does it propose and how do we propose to measure it? I never know how to score this. Because every project is a journey, I find it very difficult to evaluate the evaluation strategy of a project on paper.

Bureaucracy, the obligation to sustain daily operations, constant auditing, reporting, deliverables, quantitative measurements and so on... All this externally imposed evaluation and its methodology forces organisations and practitioners to comply with what Jonatan Habib Engqvist and Nina Möntmann refer to as ‘corporate institutionalism’, and to normalise the ideologies, strategies and managerialism defined by ‘capitalist realism’ in their programmes and structures (Engqvist and Möntmann 2018). The cultural worker from Belgium:

We are facing a major rationalisation and categorisation of the field into very strict categories. With these categories come evaluation points, which are different, but this also creates an utter immobility, because if you are classified as a certain type of organisation, you have to do a certain kind of work, so you cannot reinvent or redefine yourself, or maybe have a different mobility inside of this, because you would not meet the criteria of the evaluation.

Even where practitioners and organisations accept these criteria, metrics, and evaluations as unavoidable, they strongly criticise them as an imposed process that, because of its uniformity, cannot be integrated into the core programmatic culture of organisations. Their limitations also lie in what serves as the basis for their formalisation: they rely on Sven-Eric Liedman’s concept of ‘pseudo-quantities’ and do not consider nuances, complex dimensions, or even the relationships between ecosystems (Engqvist and Möntmann 2018, 61–64). Short-term and essentially quantitative, they assess the immediate, tangible, and measurable value created, in compliance with unrelated agendas and priorities. The cultural and artistic projects evaluator from Palestine:

Funders who receive government money, from the EU or SIDA, for example, continuously remind me how they want to see concrete outputs, because it concerns taxpayers’ money. Which is really interesting. They say: This is taxpayers’ money, so we also need to see the short-term outputs and impact. We cannot go back to the scene five years from now to see how this experience has shaped people’s lives. They have to demonstrate or prove an impact to their own respective governments. It is not because they want to be obnoxious; it’s because they have to do their own lobbying to get the money from the governments. This is a part of funding that makes me very uncomfortable, something I cannot understand fully. ... The point is that funders vary, depending on where the money comes from, and to whom they are accountable. The less they are accountable to a government, the easier it is to work with the funder. With funders, it is not the methodology that makes the difference, but what they need to see towards the end.

Where internal or informal evaluation processes are concerned, methodologies and criteria seem to be defined differently. Usually initiated by a relational process within the organisation, the internal or informal evaluation is led more horizontally, in a non-linear way, valuing interactions, participation, transparency and needs over quantitative measurements (Engqvist and Möntmann 2018). The cultural worker from Bosnia and Herzegovina:

For me, if it involves participants, then the most important part is the feedback that I get from those participants. I don’t like anything numerical. Our work was focused on young people. We worked with a psychologist on that project. She led some great workshops. One of the things we did was ask each participant to write something, a small note, an essay about that project. For instance, what did they like? How did they feel during that process? What did they discover about themselves?

The cultural worker from Serbia:

When we do an evaluation, we do not have strict questionnaires. We like to have it open. And these evaluation meetings with employees are very important. It is very important for us to see what they are thinking about, and how they are feeling about working in our organisation and working on particular activities. What is very important is to see potential, as well as any problems that arise. It is much more important for us to speak about conditions, about why this happened, or this did not happen, and to try to find solutions that we are all satisfied with.

The methodology and its formalisation are not the only things that differ between internal and external evaluations. There are also the time frame, the attention to qualitative elements, and the degree of integration into the creative, artistic, and organisational structure and programme of the organisation. Often practised verbally, internal methodologies allow for more conscious engagement in direct dialogue, permitting the formation of informal networks, relationships, and systems of sharing within the organisation and the ecosystem in which it grows. Where external evaluation methodologies seem to serve the distribution of power, internal and informal evaluation processes delegate responsibilities and allow for movement, flexibility, redefinition, and distribution (Engqvist and Möntmann 2018).

Responsibilities

Depending on the type of evaluation, the responsibilities and roles of the actors involved in the process vary. In some cases, and at specific steps of the evaluation of a project, external evaluators can be involved in the evaluation process. The cultural worker from Bosnia and Herzegovina:

The particular difference that makes this chasm between the developed societies and economies and culture, and economies like ours is so big. That is why it is important to network with other people who are able to talk to us, and who maybe have similar issues. This is why the culture of the Cultural Capital project was important. We had a lot of support from the evaluators. They came to visit us. We had to engage in a lot of interaction with the local authorities, about what they wanted, what they were ready to do. In the end, everybody was rooting for us.

Most of the time, however, because of budget restrictions and the guidelines imposed by the methodologies of external evaluation processes, practitioners and organisation teams have to take on the realisation of the evaluation themselves. This process is usually considered a burden: it is unpaid work, with a time frame adapted neither to the project nor to the structure.

Interviewees often referred to the same difficulties where evaluation was concerned: lack of time, lack of means, and excessive bureaucracy or institutionalisation of the processes. The cultural worker from Belgium:

You need to employ at least one person on archiving, you need to set your servers in certain ways, you need to upload your digital data in specific formats. It is a hyper-institutionalisation of these companies. For a bigger institution, it sort of makes sense, but in an organisation run by three people, it’s completely silly how much attention goes into fulfilling these obligatory points.

The cultural worker from Croatia:

Lack of time. The only evaluation that makes sense is the one that organisations do because of their own needs and those of the people they work with. At the same time, it is always so difficult to find enough time to do even this.

While any project involves lots of stakeholders, external evaluations rarely take the whole ecosystem into account. The cultural worker from Croatia:

On the other hand, my organisation is always coordinating relatively large-scale, multi-stakeholder processes, where we have dozens of partner organisations, municipalities, individual civic initiatives, artists, neighbours and so on. In our campaigns and activities, there are so many grey areas and murky waters, so we are constantly evaluating this or that. This is not only about having a lot of actors involved, but also because their positions and interests are sometimes highly discrepant. So we are in some way or other constantly engaged in thinking how to push things, mend them, negotiate, persuade, and so on.

There is a disconnect between practitioners and institutions. Information exchange and network sharing exist within organisations and with their immediate environments, but the logic of power versus responsibility and hegemony (as discussed by Gramsci), and the questions of impact and values, relate to neither the ecosystems nor their interconnections. Dialogue remains internal, while the evaluation itself remains tedious and bureaucratic. The independent cultural programme evaluator from Palestine:

I think we even have to let go of the word ‘evaluation’. In Arabic, it implies a lot of value judgement. The terminology has to change. People need to stop being obsessed with impact. Impact comes in different shapes and forms. Sometimes you have these groups of ten kids who are part of a training programme. They all enjoy it, and that’s fair. And enjoyment and entertainment are very important. Sometimes the impact does not have to be powerful, long-term, or life-changing. We have to be more humble, and accept that some processes are more joyful. We should sometimes trust our instincts. (See also Fisher and Möntmann, 2014.)

Learning, unlearning, or co-creating knowledge

In our interviews, we felt a clear disconnect between the values represented by external evaluation processes and the internal or informal processes. This confirms the radical differences between these two approaches, but it also questions the relationships between organisations, practitioners and the institutions, whether they are funding bodies or institutional support systems.

When it comes to external evaluation, values are often set by the (grant) proposal applications. They reflect institutional priorities and trajectories. Even if these values are rooted in environmental or social concerns, their translation into the evaluation methodologies of funding bodies or institutions leads to criteria and metrics being over-simplified into quantitative measurements. The cultural worker from the United Kingdom:

In terms of evaluation, we also have to meet carbon emissions and the diversity that we have to meet within our programmes and the running of our organisations. On diversity and inclusion and all of these things that we’ve mapped, it all sits within the bigger framework of artistic excellence, audience access, leadership and governance and international connection – how you reach out. And across these you have to look into digital, environmental, equality and creative learning. This is how we need to report every year and write our business plans.

This explains why the question of value has been a focal point of our research. For the RESHAPE community and our interviewees, the values that support evaluation processes seem to be closer to the foundations of the ‘new institutionalism’, giving space to less hierarchical, more interactive, flexible and interdisciplinary programmes, participation and transparency, in response to the need for new ecologies of care towards more sustainable institutional processes and policies (Engqvist and Möntmann 2018, 81–87). The cultural worker from Bosnia-Herzegovina:

When I am evaluating, what I am trying to sense is the spirit of time. It always has to be linked to the audience and it’s always about whom we are trying to reach. But I do not like that question about ‘what’ or ‘how many people are going to come to the show?’ Does it have to be that way? Because, as you know, some shows are meant to bring together just two or three persons. It asks the greater question, the spirit that we are living in. So usually in the evaluation, what I disagree with are the miracle numbers that indicate success. I really hate that. And I would like to have the guidance, more emotional, more empathetic, with a more empathetic sense of the art and culture, and the kind of regional area that you are living in.

The set of values supporting an evaluation process says something about the definition of impact and success. This is of course extremely important for art and cultural organisations, and most small and medium-sized organisations depend on grant income. What does the set of values used to measure success say about the true value generated by my project or organisation? And if I do not meet or realise the expected value, how can I ensure long-term sustainability? The cultural worker from Switzerland:

Last year, I tried to extend the evaluation catalogue of our organisation – going from the numbers in an audience to a number of unpaid working hours, to a number of international partners, to a number of non-artistic or non-cultural institution partners, the number of material providers, as well as the money that’s generated, and actually the money that is re-injected into the economy. So, actually trying to prove all this, and we wanted to do this on a bigger level, so that the whole political parameter is able to communicate with the numbers: how much is invested and how much is re-invested in the local economy through culture. This is what most of the cities and districts are already working on, actually just trying to say what culture is actually producing. It always looks as if culture costs a lot of money and there’s no income, so it’s just outgoing money. But if you communicate it differently, or you put it in a different way, then you can actually prove what culture is actually producing. It is a very naive and simple method, listening to your more artistic and sensitive ways of talking through our practice and reflecting on them through those processes.

Artistic, sensitive, emotional, transparent, honest, collaborative... In practice, the strategies of art and culture organisations already include aspects of exchange and mutual support, and at a local level, they allow a more nuanced understanding of the values generated and integrated within their operation and programming. Decentralisation of values within organisations can be a response to the hegemony of the institution and its tool of evaluation (Fisher and Möntmann 2014).

The narratives and the language around evaluation should be examined in order to create a baseline of already existing practices, terminologies, values, and aspirations, through conversations, reflections, and meditations that confirm the need for a shift in evaluation practices: towards qualitative, conversation-based methodologies, collaboration, co-responsibility, and interconnection.

New Metrics and Values Evaluation Website Prototype Proposal

A subgroup of six people was set up inside the Fair Governance Trajectory. They carried out the research mentioned earlier and connected it with the work being developed by FARO, with the objective of proposing an interactive contribution to FARO’s current quest for a system that can be used to evaluate projects and institutions in the socio-cultural sector, such as those participating in the RESHAPE context.

Culture has values that go far beyond numbers: it has the ability to transform societies, improve people’s lives, and activate global and local economies. In this sense, it is necessary to develop new metrics that allow us to evaluate all of this and to value the wealth of the intangible that will be the basis of the economy of the future.

FARO is a learning community that joins forces for broader, more effective, and innovative action in the sociocultural field. It is formed by professionals from twelve organisations from Spain, Brazil, Bolivia, and Chile who have been researching and discussing new values for the past two years: BAC Biennale of Arts of the Body, Image and Movement, Madrid, Spain; Consortium of Museums Comunitat Valenciana, Valencia, Spain; Feboasoma, Buenos Aires, Argentina; Graner Artistic Residences Center, Barcelona, Spain; Invisible Pedagogies, Madrid, Spain; mARTadero project, Cochabamba, Bolivia; NAVE Artistic Residences Center, Santiago, Chile; LABEA – Art and Ecology Laboratory, Pamplona, Spain; Salmon Festival, Barcelona, Spain; Teatre L’artesá, El Prat de Llobregat, Barcelona, Spain; Teatro de la Abadia, Madrid, Spain; and Uniflux, Sao Paulo, Brazil. FARO’s challenge is to adapt their practices to an Ecosocial Transition scenario where the linearity of people, material, resources, and time can be revised into the exponentiality of networks, where a Culture of the tangible can become a Culture of the intangible, and where Egocentrism gives way to Ecocentrism. They aim to create tools, systems, and methodologies using collaboration and new technologies for converging teams, talents, partners, resources, data, and time.

The Evaluation Subgroup started by studying the theoretical basis of FARO’s actions, 4D Fluxonomy, which combines Futuring and New Economies and was created by Lala Deheinzelin (Brazil); videos by Lala Deheinzelin explaining fluxonomy are available online. A bridge was established through Eduardo Bonito (based in Spain), who participated in both projects. Fluxonomy works with four dimensions (cultural, environmental, social, and financial), based on four types of economy: creative, shared, collaborative, and multi-value. The idea of the collaboration is to create a common metric system for measuring results that highlights the intangible wealth that projects and organisations in the culture sector have and create.

The collaboration enabled the Fair Governance Trajectory to carry out an exercise: a proposal for an evaluation website based on 16 values/criteria, which have been continuously discussed within FARO over the past year and were turned into a website map for an evaluation form with the support of the Evaluation Subgroup. The attached table of 16 questions for 4D evaluation is the current version of a sustained, collective, and progressive approach to this type of metrics, in which each word and each concept has been carefully weighed. The table has been translated into English by RESHAPE and reflects the current state of the research, with the understanding that these values are in constant evolution as they are tested with different projects, discussed, and revised by the FARO members, in a process expected to take many months of trial and error, tests, and redefinitions before it can be released, as our Subgroup suggests, in a website format.

For us it has been a very important process to follow the development of the definitions of these values, to discuss them internally using our own projects as a reference, and to give feedback to the FARO members for concept review. It is worth pointing out that we have been looking at the realities of smaller, peripheral as well as more established projects within RESHAPE’s area of reach and exchanging our impressions with FARO members, thereby contributing to the development of the values. This reflection process has been very enriching for us, and the prospect that it can inform research that will generate tools for evaluation is already in itself quite satisfying for our Subgroup.

Fluxonomy values have been adapted to the socio-cultural sector, maintaining the theory’s fractal vision of reality based on ‘zooming’ in on four dimensions (cultural, environmental, social, and financial), which are each in turn divided into four, generating a chain of meaning that facilitates a more holistic approach to reality. For instance, the cultural dimension of a project in turn includes a cultural dimension of the cultural, an environmental dimension of the cultural, a social dimension of the cultural, and a financial dimension of the cultural.

By answering four questions about each dimension, we will be able to evaluate a project or organisation on four levels:

Cultural: The reason for being (transmission – relevance): How Convergent the organisation or project’s Idea is, how Revealing its Language is, how much Capacity to Affect its Interaction has, and how Reciprocal the Learning involved in it is.

Environmental: What structures it (transformation – viability): How Transforming the organisation or project’s Knowledge is, how Sufficient its Infrastructure is, how Evolutionary its Regulatory Body is, and how Interdependent its Multi-Capital financial resources are.

Social: The ability to do together (interdependence – scope): How Activating the organisation or project’s Proposal is, how Conscientious and Translocal its Organisation is, how Co-Evolutionary its Governance is, and how Influential its Credibility is.

Financial: What generates reproducibility (impact – exponentiality): How Revitalising the organisation’s or project’s Thoughts are, how Deconcentrating its Distribution is, how much of a Multiplier the Circulation it promotes is, and how much Regenerative Flow its Economy promotes.

As can be seen in the table, each of these 16 aspects is informed by five forces that support the questions formulated.
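As a rough illustration of how this 4 × 4 structure could be handled, the sketch below represents the dimensions, values, and (paraphrased) guiding questions as a simple data structure in Python. It is our own illustrative model under stated assumptions, not FARO’s tool or the attached table itself.

# The four dimensions, each holding four values with a paraphrased guiding
# question; 4 x 4 = 16 values in total, as described in the text above.
FLUXONOMY_MATRIX = {
    "cultural": [  # the reason for being (transmission - relevance)
        ("Idea", "How convergent is it?"),
        ("Language", "How revealing is it?"),
        ("Interaction", "How much capacity to affect does it have?"),
        ("Learning", "How reciprocal is it?"),
    ],
    "environmental": [  # what structures it (transformation - viability)
        ("Knowledge", "How transforming is it?"),
        ("Infrastructure", "How sufficient is it?"),
        ("Regulatory body", "How evolutionary is it?"),
        ("Multi-capital resources", "How interdependent are they?"),
    ],
    "social": [  # the ability to do together (interdependence - scope)
        ("Proposal", "How activating is it?"),
        ("Organisation", "How conscientious and translocal is it?"),
        ("Governance", "How co-evolutionary is it?"),
        ("Credibility", "How influential is it?"),
    ],
    "financial": [  # what generates reproducibility (impact - exponentiality)
        ("Thoughts", "How revitalising are they?"),
        ("Distribution", "How deconcentrating is it?"),
        ("Circulation", "How much of a multiplier is it?"),
        ("Economy", "How regenerative is the flow it promotes?"),
    ],
}

# Sanity check: four dimensions of four values each, i.e. 16 values.
assert sum(len(values) for values in FLUXONOMY_MATRIX.values()) == 16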

The Fair Governance quest was an exercise in proposing a map of a website that may support the evaluation of any kind of organisation or project, and in preparing it to be implemented and tested. It will contain a fixed picture of the definitions as they stood in September 2020, acknowledging that FARO’s definitions of values and metrics are still in constant redefinition and fine-tuning, and thus suggesting a structure that can be easily updated at any point.

The website structure presented allows users to set up parameters for new projects, collect multiple answers from a project’s users, and receive an automatic numeric evaluation analysis as well as a list of all the answers divided across the 16 values. A summary analysis is produced by the project’s evaluator with the information provided.

Each project’s or organisation’s members can register as users and answer as many of the 16 questions as they can. The website form consists of one home page, which directs to four dimension menus where users encounter four values with four questions to be answered. There they can read and listen to definitions of each value and give a numeric value for how advanced the project is on each aspect according to their perception. They also answer each question in writing.

Once users have completed their answers, their numeric perception levels are automatically updated on the home page, visible to all users who have finished answering. These users are also able to see all the answers listed in each section. Once all users have finished, the evaluator(s) can read all the answers and produce final synthetic answers to illustrate the numeric average perception produced automatically.

After all the analyses are produced, the evaluation can be made available to all users, to guests with a code, or to the general public, depending on the settings previously defined by the evaluator.
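To make the numeric side of this workflow more tangible, the following sketch shows, under our own assumptions, how the per-value and per-dimension averages of users’ perception levels could be computed. The field names, the example answers, and the numeric scale are hypothetical illustrations of ours, not a specification of the prototype website.

from collections import defaultdict
from statistics import mean

# answers[user][(dimension, value)] = (numeric perception level, written answer)
# The users, scores, and texts below are invented for the example.
answers = {
    "user_a": {
        ("cultural", "Idea"): (7, "The idea converges several local initiatives."),
        ("social", "Governance"): (5, "Decisions are shared, but slowly."),
    },
    "user_b": {
        ("cultural", "Idea"): (9, "A strong shared starting point."),
        ("social", "Governance"): (6, "Co-evolving with partners."),
    },
}

def average_per_value(answers):
    # Average the numeric perception levels given for each of the 16 values.
    scores = defaultdict(list)
    for user_answers in answers.values():
        for key, (score, _written) in user_answers.items():
            scores[key].append(score)
    return {key: mean(levels) for key, levels in scores.items()}

def average_per_dimension(per_value):
    # Roll the per-value averages up into the four dimensions.
    by_dimension = defaultdict(list)
    for (dimension, _value), avg in per_value.items():
        by_dimension[dimension].append(avg)
    return {dimension: mean(avgs) for dimension, avgs in by_dimension.items()}

per_value = average_per_value(answers)
print(per_value)                        # e.g. {('cultural', 'Idea'): 8, ...}
print(average_per_dimension(per_value))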

The Evaluation Subgroup understands that the website tool can be very useful if put in place, but it also recognises that tests and reviews should be run before it is offered as a public evaluation tool to any project or organisation. As a result of this collaboration process, many of the Subgroup members have been invited to continue reflecting together with FARO on these issues, so that they may act as invited task-force consultants in future projects, for instance at FARO’s residency in November 2020 at the Cadiz Ibero-American Theatre Festival.

Our experience of researching evaluation values and metrics has been a journey into a kaleidoscope of needs and views, giving us the certainty that the issue is very complex and diverse, and reflecting the immense possibilities of governance practices observed in RESHAPE’s area of action and beyond. The process has enriched our perspectives, and we hope that our reflections and the suggestions described in this text, the attached table, and the website map may contribute to the development of practices more connected to the reality and needs of the cultural projects being developed today.

 

Developed in the framework of the RESHAPE trajectory Fair Governance Models including Helga Baert, Eduardo Bonito, Virdžinija Đeković Miketić, Fatin Farhat, Katarina Pavić, Ilija Pujić, Martin Schick, Sam Trotman and Claire Malika Zerhouni.

This text is licensed under the Creative Commons license Attribution-NonCommercial-ShareAlike 4.0 International.

References
Arikan, Burak. 2015. “Creative and Critical Use of Complex Networks.” Medium, January 27, 2015. https://medium.com/graph-commons/creative-and-critical-use-of-complex-networks-412fe9eddecb.
 
DiMaggio, Paul J., and Walter W. Powell. 1983. “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” American Sociological Review 48 (2): 147–160. Accessed October 20, 2020. http://www.jstor.org/stable/2095101.
 
Fisher, Mark, and Nina Möntmann. 2014. “Peripheral Proposals.” In Cluster: Dialectionary, edited by Binna Choi, Maria Lind, Emily Pethick, and Nataša Petrešin-Bachelez, 176. Berlin and London: Sternberg Press.
 
Habib Engqvist, Jonatan, and Nina Möntmann. 2018. Agencies of Art: A Report on the Situation of Small and Medium-Sized Art Centers in Denmark, Norway and Sweden. Oslo: OK BOOK.
 
Zembylas, Tasos. 2019. “Why Are Evaluations in the Field of Cultural Policy (Almost Always) Contested?” In Major Problems, Frictions, and Challenges in Arts and Cultural Management: Sense and Sensibilities in the State of the Field, 151–173. New York and Abingdon: Routledge.