Deepomatic: Ethical AI Data Annotation and Improved Worker Conditions

To function, algorithms need to be “educated”: they must ingest large amounts of data labeled by human hands. Without this intervention, there is no automation. This repetitive but decisive task, sometimes called “annotation”, is usually outsourced by the companies that train artificial intelligence solutions. For an organization like Deepomatic, annotation is a key service. While there is much talk about the sustainability of supply chains and the ethics of artificial intelligence, the ecosystem still struggles to see annotation as part of its chain of responsibility and to address the issues it generates worldwide. At Deepomatic, we have taken the matter very seriously and have been working for two years to improve the working conditions of those who make our product possible. This article has a twofold purpose: to make the curious aware of this social issue and to convince the organizations concerned to tackle the problem head on.

For more than a decade, interest in the ethics of artificial intelligence has been growing: guides and labels aiming to frame the practice have multiplied, and the discourse has converged around well-defined issues: accountability, privacy, security, explainability and fairness (1)… However, surprising as it may seem, almost none of these publications deals directly with one of the main externalities of this technology: the workers who, at the very beginning of the chain, label the data.

For example, in its 2019 guide, the European Commission only briefly mentions the practice, recommending the implementation of sovereign solutions (2). This reflects a clear lack of consideration for a subject that is nevertheless crucial.

The French Devoir de vigilance law, passed in 2017, created an obligation for commissioning companies to prevent social, environmental and governance risks related to their operations, an obligation that can also extend to the activities of their subsidiaries and business partners. Yet outsourced annotation, because it is new, fast-growing and largely unsupervised, involves significant human risks. Beyond these social considerations, it should also be emphasized that the proper execution of this activity is inseparable from the performance of the algorithms: done poorly, it misleads them. In light of these elements, it is hard to understand the lack of attention paid to this unavoidable issue, which, when it is addressed at all, often focuses on the extreme (but nonetheless important!) cases of independent workers operating on largely uncontrolled platforms. However, other forms of organization of this work, as we shall see, also raise questions. In reaction to this generalized silence, and prompted by some insightful readings, Deepomatic quickly set out to understand the issue better and to get to know those who contribute to its solution. After two years of research and investigation, we are proud to share the conclusions we have drawn, the actions we have put in place, and the practices we wish to promote to everyone.

Ethical issues of annotation

Definition

In technical terms, data annotation is the task of preparing the training dataset for supervised learning. It consists of associating metadata with each item in the dataset. In plainer language, if you want your image recognition solution to distinguish apples from pears, you have to show it thousands of images of each of these fruits, labeled as such, before the magic happens. It is therefore easy to see why this repetitive practice is time-consuming: up to 80% of the time spent on an artificial intelligence project can consist of this preparation (3). It is also a lucrative and growing industry: annotation solutions generated $1.7 billion in 2019 and will likely reach $4.1 billion by 2024.
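For readers curious about what this metadata looks like in practice, here is a minimal, hypothetical sketch in Python of a labeled record for the apple-versus-pear example above; the field names and file paths are purely illustrative and not tied to any particular annotation tool.

```python
# Minimal illustration of annotation for supervised image classification.
# Each record pairs a raw data point (an image file) with the human-provided
# label, i.e. the metadata that annotators produce by hand.
from dataclasses import dataclass

@dataclass
class AnnotatedImage:
    image_path: str    # the raw data point
    label: str         # the human-assigned class ("apple" or "pear")
    annotator_id: str  # who labeled it, useful for quality control

# A training set is simply thousands of such records.
training_set = [
    AnnotatedImage("images/0001.jpg", "apple", "worker_17"),
    AnnotatedImage("images/0002.jpg", "pear", "worker_23"),
    # ... thousands more labeled examples
]

# Supervised learning then fits a model to map image -> label.
labels = {record.label for record in training_set}
print(f"{len(training_set)} labeled examples, classes: {sorted(labels)}")
```

Each of these records represents a small piece of human labor; multiply it by the tens of thousands of examples a model typically needs and the scale of the annotation effort becomes clear.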

A globalized system

Because this activity is entirely digital, it can be carried out anywhere in the world, at variable and often very low rates. Logically enough, it therefore follows a strong geographic pattern: demand is concentrated in the countries of the North, while the work itself, though somewhat more diffuse, is mainly carried out in lower-income countries of the South, such as India or the Philippines (4).

Various organizational models

To meet a growing need in the North, a large number of organizations have emerged, with varying fees, levels of oversight and degrees of commitment, each with specific impacts.

As mentioned in the introduction, discussions of these digital workers often focus on those working on open platforms, of which Amazon Mechanical Turk is the illustrative example. These platforms are characterized by a large number of independent micro-workers performing very low-paid tasks from their homes, in complete material autonomy, for multiple clients, with very weak final control over the work. Still based on a platform model, there are also deep labor platforms, where control and prices are intermediate. This model is to be distinguished from Business Process Outsourcing (BPO), in which prices are significantly higher, control over the worker is tighter, and the situation of workers is more stable. In this case, companies outsource the activity to a specialized provider, which is responsible for equipment and training. In this article, we will mainly discuss this type of organization, setting aside the particular problems of freelance platforms, even if the causes are similar in many ways. In addition to BPO companies, others have positioned themselves in the impact-sourcing segment: they aim to offer their staff better working conditions and/or better development opportunities (5). Their workers may or may not be independent, and rates and oversight are average. Among their ranks are companies such as CloudFactory, IMerit or Samasource, the latter having recently gained notoriety in a scandal involving OpenAI (6). While the will to have a positive social impact is present, there are few standards that would allow this dimension to be certified.

The economic benefits

Companies that highlight the beneficial impacts of this practice frequently focus on the economic inclusion it theoretically promotes. Its digital and decentralized nature, and the fact that it does not necessarily require prior qualifications, facilitate access to formal employment for people who would otherwise be excluded by the economic situation of the countries in which they live. This ability to bring individuals into the formal economy, which is far from the norm in low-income countries, is one of the arguments that motivate certain governments to promote this type of digital work.

Other benefits attributed to this practice include access to flexible and remote working. This benefit only holds where workers are truly independent (i.e. in theory, not in BPOs). When it does, individuals enjoy the possibility of working wherever they want and of adjusting their workload to their needs.

However shiny they may look, these impacts are sometimes counterbalanced by other phenomena that tarnish the image of a practice too often presented, in black-and-white terms, as emancipatory.

Economic disadvantages

The unprecedented (and therefore, as you will have understood, poorly understood) nature of this new form of work creates legal difficulties. While this is particularly true of open-platform models, problems also exist in BPO organizations, where “disguised employment” is sometimes the norm: employees benefit neither from the advantages of self-employed status nor from those of permanent contracts. This observation undermines the idea that these individuals are integrated into a formal and regulated economy. Even if this activity did promote integration for all, it offers, in any case, few prospects for advancement to those who take part in it: it is possible to climb the company's hierarchy, but there are not many layers to climb. This is all the more problematic given that workers are often overqualified for their tasks. Moreover, most of these organizations do not contribute to the development of local entrepreneurship, because of the composition of their management and shareholders. The social impact therefore appears decidedly mixed.

Another major problem is the distribution of the wealth generated by the outsourcing of this activity. The wages received by employees are often extremely low, especially in the South, a fact recently highlighted by a Time investigation into the workers who contributed to the much-discussed ChatGPT. To moderate this content-generating artificial intelligence and prevent it from producing violent or inappropriate content, the annotators were paid between $1.32 and $2 net per hour, while the minimum hourly wage in their country, Kenya, is around $1.52 (7). Samasource, the company contracted by OpenAI, was paid between six and seven times the amount allocated to the workers.

The strong competition between workers does not favor the emergence of demands for a collective minimum wage or for any other improvement in working conditions. The emergence of a shared consciousness, and thus of collective bargaining power (8), is neither easy nor encouraged, and the defense of essential rights is not ensured by any recognized institution. This lack of representation also prevents the emergence of an international standard that would allow companies to benchmark their social impacts and have them certified.

Thus, while this outsourcing phenomenon has undeniably positive effects, they must be weighed against its downsides. At Deepomatic, we believe these ambiguities should be known to everyone who wishes to outsource a similar service.

What companies can do

“Organizations committed to transparency and identifying best practices could do much to improve working conditions.” (9)

Despite the glaring lack of interest in the subject, there are signs that some players in the tech ecosystem are beginning to address it: for example, the Partnership on AI (PAI) has produced a whole series of tools and documents on “data enrichment workers” for AI practitioners (10). They also stress the decisive role practitioners play in shaping these workers' conditions. In particular, the Data Enrichment Sourcing Guidelines propose five concrete measures to improve working conditions (11). While their recommendations are remarkably operational, they seem aimed mainly at actors using platforms; as such, they differ from ours in some respects. Many elements nevertheless remain common and show that a shared concern is emerging. A positive dynamic, a profound change, is undeniably underway.

Deepomatic’s context

At Deepomatic, we chose a small provider comparable to the BPO model, one with the particularity of being led by a local female entrepreneur. Social impact was not initially at the heart of its model, which is why we wanted to assess the quality of working conditions and to lead, hand in hand, a process of improvement. It seemed more relevant, and ethically more impactful, to work on improving this organization than to migrate to another one right away. This support work required a great deal of reflection on our part to determine what our requirements in this area should be (requirements that we believe should be the minimum demanded by all customers). Several lessons emerged from this work, which we share here in the hope of making them key criteria for everyone.

Understanding both the global and specific issues of the activity

Every activity inevitably impacts the ecosystem in which it takes place. To limit the negative impacts and promote the positive ones, the first step is to see clearly, to grasp the ins and outs of the activity in question. First, the overall issues must be understood: the first part of this article mentions some of the main issues related to the emergence of the annotation market, but the list is far from exhaustive, which is why we encourage everyone to deepen their knowledge of these impacts by reading as widely as possible. Once these issues are well understood, it is worth looking at the specific issues that may relate to your own activity. Indeed, the types of data to be labeled are diverse in nature and do not all have the same consequences for workers. For example, labeling sensitive and violent content, as in the work done for ChatGPT, implies a more pressing need for psychological support. In such a case, it is important to ensure that the intermediary organization offers quality psychological support.

However, finding a support unit mentioned in a flyer is not always a sufficient guarantee. For this reason, we encourage any organization entering into a long-term partnership with an annotation provider to conduct a field investigation, which can take many forms.

What we did: At Deepomatic, we were fortunate to take part in an academic research project with a sociological approach. Within this framework, researchers met the team we work with and took stock of their working conditions. Knowing precisely the material conditions of work and the backgrounds of our workers allowed us to target our actions. Since such an opportunity is quite unusual, a study can more conventionally be conducted by auditors specialized in supply chain management.

Ensuring fair wages

As we have seen, workers’ salaries are often close to the basic minimum wages (where these exist) of the countries in which they are located, and intermediary organizations may capture much of the value generated without providing their workers with the material conditions necessary to perform their tasks.

It is therefore necessary to question providers about the distribution of value and to understand, very concretely, what a worker takes home at the end of the month. These answers must then be compared with the minimum wages of the localities where the activity takes place.

Simply asking about this aspect is already a virtuous act in itself: by questioning the actors we might collaborate with on this subject, we create the same dynamic as when we ask a supplier about its carbon impact, i.e. we make it clear that it is an important factor in the decision-making process. But it is possible, and recommended, to go further by working to raise wages when they are deemed too low. It must be emphasized, however, that calculating a fair wage is not straightforward and must rest on tangible elements: it can be based on the prevailing minimum wage, if there is one, or on an estimate of the necessary living wage (12). Note that these wages may vary within a country and must be updated periodically. In short, this is a challenging topic, and one on which a collective and precise methodology would be beneficial.

What we did: Deepomatic chose, for example, to increase the base salary (i.e. excluding bonuses and compensation) of all the provider's employees to ensure that it is at least twice the local minimum wage from the start of the job.
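To make this kind of check concrete, here is a minimal sketch in Python. It assumes a simple decision rule inspired by the elements above (the higher of twice the local minimum wage and an estimated living wage); the function names and figures are purely illustrative and do not describe an official methodology.

```python
# Illustrative sketch of a wage-floor check, not an official methodology.
# Assumption: the floor is the higher of (2 x local minimum wage) and an
# estimated living wage, both expressed as a monthly base salary in local currency.

def fair_wage_floor(local_minimum_wage: float, living_wage_estimate: float) -> float:
    """Return the monthly base-salary floor under the rule sketched above."""
    return max(2 * local_minimum_wage, living_wage_estimate)

def meets_floor(base_salary: float, local_minimum_wage: float,
                living_wage_estimate: float) -> bool:
    """Check a proposed base salary (excluding bonuses) against the floor."""
    return base_salary >= fair_wage_floor(local_minimum_wage, living_wage_estimate)

# Hypothetical figures, for illustration only.
print(meets_floor(base_salary=30000,
                  local_minimum_wage=14000,
                  living_wage_estimate=25000))  # True: 30000 >= max(28000, 25000)
```

The first term of the rule mirrors the floor we adopted, while the living-wage estimate reflects the alternative basis mentioned above; both inputs should be sourced locally and refreshed periodically.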

Making job stability prevail

As we have seen, a large proportion of annotation providers rely on, and tout the virtues of, self-employed status. However, this status does not always guarantee lasting integration into the formal economy, especially when the worker has no other prospects. To promote integration, we believe workers must be offered a degree of stability that, over the long term, only the equivalents of permanent contracts really provide. Besides giving access to a stable income, such contracts come with social benefits that can contribute to better health, greater peace of mind and, therefore, better integration.

Another important point is that salaried employment grants the right to employee representation, unlike self-employed status. This makes it possible to organize and to raise grievances that would otherwise be difficult to formalize and convey to the employer.

What we did: In our case, we worked with our provider to change the workers' contracts from self-employed to salaried status, so that all of them could enjoy the advantages linked to job stability, such as retirement and health coverage.

Giving perspective 

Because of the repetitive and fragmented nature of the task, it is not always easy for the person doing it to grasp the value of his or her output. Moreover, the commissioning companies rarely make an effort to contextualize the work, which would help workers understand where it fits into the value-creation chain and thus appreciate their contribution. But this is not the only perspective they seem to lack: few of the employees interviewed in the study of our provider see themselves doing this activity over the long term. Most view it as a transition, without any idea of how they might evolve afterward. We must, therefore, foster their prospects for advancement.

What we are going to do: To compensate for this severe lack of perspective, Deepomatic is considering offering (in a form yet to be defined) free access to training in the technology sector. This effort will have to go hand in hand with the possibility of arranging one's schedule to sit exams. For the moment, this remains hypothetical, and we welcome feedback and suggestions. As for actions that could give workers a more immediate perspective on the product they contribute to, we wish to organize information sessions, in the form of presentations or written documents, providing the elements needed to understand our computer vision solution and the audience it serves.

Federating the technology players

As we have seen, there are very few guidelines specific to the question of annotation, apart from those of the Partnership on AI. The reflection they have initiated is relevant, but it can be enriched by new perspectives and broader ambitions. For example, we could envisage creating a consortium to define the criteria for a future Fair Data Work label. These criteria could be shared and potentially serve as the basis for a future collective agreement for data workers.

It is an ambitious program whose contours have yet to be defined, but one likely to move things forward on a large scale. If you are concerned and wish to know more, do not hesitate to write to julie@deepomatic.com.

A few words to conclude

One of Deepomatic’s fundamental goals is to bring more visibility to our customers. This principle is not limited to our commercial activity; we gladly extend it to environmental and social issues. As such, the first objective of this article is to shed light on the issues generated by this activity.  

What is essential to remember is the paradox at the heart of how this data is produced: the activity arouses little interest even though it is decisive for the quality of algorithms. It is the great forgotten issue, overlooked by specialists in responsible digital technology, who readily discuss the impact of solutions on end users but not on the individuals upstream; by specialists in artificial intelligence ethics, who tend to focus on exclusively technical issues; and by CSR departments that are supposedly sensitive to the sustainability of supply chains. We hope this observation will inspire AI practitioners and the entire ecosystem to act. Our recommendations are only the beginning of a movement that we hope will grow beyond our organization. If you don't want to stay on the sidelines of change, join us!

Notes

(1) Hagendorff, Thilo, “Blind spots in AI ethics”, AI and Ethics 2, December 2021. The same observation appears in Tessier, Catherine, “Éthique et IA : analyse et discussion”, CNIA 2021 : Conférence Nationale en Intelligence Artificielle, June 2021. T. Hagendorff adds that the reason these considerations polarize discussions of AI ethics is that the issues can be resolved technically: “AI ethics often frames AI applications as isolated technical artefacts or entities that can be optimized by experts who apply technical solutions to technical problems that are depicted as problems of ethical AI. Contrary to this position, this paper argues that AI applications must not be conceived in isolation but within a larger network of social and ecological dependencies and relationships.”

(2) High-Level Expert Group on Artificial Intelligence, Policy and Investment Recommendations for Trustworthy AI, European Commission, 2019.

(3) Data Engineering, Preparation, and Labeling for AI 2020, Cognilytica Research, 2020.

(4) Graham, Mark, Hjorth, Isis, Lehdonvirta, Vili, “Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods”, Transfer: European Review of Labour and Research, 2017.

(5) Kaye, Kate, “These companies claim to provide ‘fair-trade’ data work. Do they?”, MIT Technology Review, 2019.

(6) Zorthian, Julia, “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic”, Time, 2023.

(7) Ibid.

(8) Graham, Mark, Hjorth, Isis, Lehdonvirta, Vili, “Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods”, op. cit.

(9) Ibid.

(10) All the resources produced on this subject are available here.

(12) For example, it is possible to access specialized studies from the IDH Recognized Living Wage Benchmarks.
