
An Evaluation of the Job Landscape and Trends in AI Ethics

RESEARCHERS
Quentin J Louis, Joshua Walker, Katherine Lai, Victoria Matthews, Kitty Atwood

PRINCIPAL INVESTIGATOR
Susanna Raja

DESIGNER
Ayomikun Bolaji

Contents.

Executive Summary.

Insights.

AI ethics frameworks and principles have been increasingly adopted since 2018.

Corporate awareness about ethical AI is on the rise.

AI ethics roles are being created at the management & C-suite level.

High-profile corporations are adopting AI ethics teams, but concerns remain about ethics-washing.

Difficulties integrating AI ethics into the workplace include soft policy, a lack of corporate responsibility, and tensions between top-down and bottom-up approaches.

Collaboration across job sectors is needed to develop ethical AI.

CROSS-SECTOR COLLABORATION IS BEING REALISED:

1)   In the policy sector;

2)   Through ethics by design.

A SNAPSHOT OF THE CURRENT JOB MARKET ILLUSTRATES THAT:

   There is no one fixed AI ethics job: definitions vary.

   Research, engineering and policy jobs dominate the market.

   Significant collaboration occurs between these 3 fields.

   Academic & experience requirements remain relatively accessible.

Top 3 Recommendations.


Efforts to acknowledge AI in the private sector must move beyond performative approaches driven by obligation.


High-profile AI ethics job roles must be established alongside integrative, ‘bottom-up’ approaches to embedding AI ethics.


Contributions to AI ethics must be diversified to address demographic and geographic biases as well as gaps between disciplines.

Although a contested and evolving term, often used loosely to describe technological revolution in society, AI can be defined as technology that independently performs complex intelligent tasks previously limited to human cognition.
(The Global Landscape of AI Ethics Guidelines, 2019;
A Unified Framework of Five Principles for AI in Society, 2019)

The explosion of interest in AI technology and discourse has been reflected in a growing hiring trend in AI-related jobs.

In Indeed’s ‘The Best Jobs in the US: 2019’ editorial, 3 of the top 15 jobs were AI roles.
(Indeed, 2019)

The highest ranked of these, Machine Learning Engineer, topped the overall list of US jobs, with 344% growth in job postings for the role between 2015 and 2018.
(Indeed, 2019).

THE PROLIFERATION OF AI ETHICS FRAMEWORKS, DEFINITIONS AND PRINCIPLES:

The growth of AI has led to considerable debate surrounding issues such as the lack of accountability, discriminatory bias and malevolent use of AI systems. (The Global Landscape of AI Ethics Guidelines, 2019)
In response to this, there has been a growth in AI ethics initiatives which aim to create a framework for developing standards for AI implementation across different sectors and industries. (A Unified Framework of Five Principles for AI in Society, 2019)

There remains considerable disagreement over what constitutes ethical AI. (The Global Landscape of AI Ethics Guidelines, 2019).
In response to this definitional confusion, meta-analyses have been carried out to characterise the common key principles of AI ethics. (AI Now 2019 Report)

‘A Unified Framework of Five Principles for AI in Society’ (2019) establishes five key principles that are common among published guidelines:

BENEFICENCE

NON-MALEFICENCE

AUTONOMY

JUSTICE

EXPLICABILITY

These principles address the key concerns associated with AI, such as

PRIVACY

DISCRIMINATION

SECURITY

TRANSPARENCY

These themes have also increasingly been taken up by the private sector, where adoption of AI principles has grown since 2015. (The Global Landscape of AI Ethics Guidelines, 2019)

2018 has been identified as a turning point for corporate AI ethics initiatives, with many high profile companies such as Google, Facebook and IBM publishing ethics guidelines at this time. (AI Index 2021 Report)

CORPORATE AWARENESS OF ETHICAL AI.

Increased adoption of ethics principles corresponds to increased ethical awareness among employees:

A survey of nearly 900 executives across different sectors in 10 countries in 2020 found that:

78% were aware of explainability in AI systems;

69% were aware of transparency; and

65% were aware of discriminatory bias;

representing respective increases of 46, 33 and 30 percentage points from 2019.
(Capgemini: AI and the Ethical Conundrum, 2020)

Deloitte’s third ‘State of AI in the Enterprise’ report surveyed nearly 3,000 executives of firms and organisations that have adopted AI approaches, finding that 95% of respondents were concerned about the ethical risks of AI (Deloitte: State of AI in the Enterprise, Third Edition, 2020).

MANAGEMENT AND C-SUITE ROLES.

Survey results also suggest that a growing number of job roles explicitly aim to address the question of AI ethics;

For example, Capgemini found that 63% of IT and AI data professionals and 49% of sales and marketing executives surveyed could point to a management role responsible and accountable for ethical AI issues.
(Capgemini: AI and the Ethical Conundrum, 2020)

New job titles such as ‘Chief AI Ethics Officer’ and ‘AI Ethicist’ are emerging, although AI ethics is often incorporated into other executive roles, such as Chief Data Officer (CDO) and Chief Analytics Officer (CAO) (Are you asking too much of your Chief Data Officer? by Davenport and Bean, 2020; Forbes: Rise of the Chief Ethics Officer, 2019).

Public Awareness and High Profile Examples of Corporate AI Ethics.

Alongside increased corporate awareness of AI ethics issues, public awareness has also grown. The 2021 AI Index report surveyed popular news topics, finding that themes such as AI ethics guidance, research and education were particularly popular, as well as ethical concerns related to facial recognition software and algorithmic bias. (2021 AI Index Report)

However, rising public awareness raises the concern that firms may act performatively to include AI ethics roles in a form of ‘ethics-washing’, while neglecting the necessary integrated, structural change.
(IBM: Everyday Ethics for Artificial Intelligence)

Many high profile roles have gained attention, such as the Bank of America’s ‘Enterprise Data Governance Executive’ and Google’s Ethical AI team.
(Schrag, 2018)

Concerns that firms are using these roles simply to comply with ethical obligations peaked when Google dismissed two members of its Ethical AI team following a report discussing AI bias. (Naughton, 2021)

The 2021 AI Index reveals that this event received particular attention in the media, naming it the second most popular news topic related to ethical AI after the release of the European Commission’s white paper on AI. (2021 AI Index Report)
These high profile cases suggest there may be a conflict between the actions driven by the increasing pressure for firms to adhere to ethical guidelines, and the change needed to ensure truly ethical practice.

Key Issues in AI Ethics.

SOFT POLICY.

[Figure: Deloitte, ‘Thriving in the era of pervasive AI’, Figure 8]

Despite increasing acknowledgement of the importance of ethics, disparities remain between risk perception and preparedness;

For example, in Deloitte’s 2020 survey, 53% of executives perceived ethical issues as a major or extreme concern, but only 38% said they were fully prepared to address these risks.

This alarming gap between risk recognition and preparedness reflects the adoption of ‘soft law’ approaches which are persuasive and non-legislative.
(The Global Landscape of AI Ethics Guidelines, 2019)

This approach has been criticised for its lack of consideration of practical implementation.
(AI Now 2019 Report).

The proliferation of soft policy approaches not only neglects practical implementation but often also overlooks geographical and demographic bias in the development of ethical guidelines (The Global Landscape of AI Ethics Guidelines, 2019).

Many guidelines and studies originate from largely white, Western institutions, leading to the exclusion of underrepresented groups (AI Now 2019 Report).

There is thus a demand for policy and guidance to be diversified and for more practical approaches within corporations to be developed simultaneously in order to translate general ethical guidelines into internal structures and practices.

CORPORATIONS’ RELUCTANCE TO TAKE RESPONSIBILITY.

Currently, adoption of ethical AI principles in the private sector has been heterogeneous across corporations, and reports suggest a reluctance to acknowledge AI ethics as a corporate issue (IBM: Everyday Ethics for Artificial Intelligence, 2020).

For example, IBM’s survey of executives of companies using AI across varying sectors found that most named external factors such as tech companies, governments and customers as having the greatest influence on AI ethics, rather than factors the firms themselves claimed responsibility for.

TOP-DOWN VERSUS BOTTOM-UP APPROACHES.

A managerial approach to introducing AI ethics is evident within the private sector, yet established roles for ethical responsibility such as AI ethicist and Chief AI Ethics Officer have not been adopted across all firms.
(Capgemini: AI and the Ethical Conundrum, 2020)

Furthermore, there is a growing awareness that AI ethics should be introduced at all levels, not just via managerial positions.
Considering this evolution of AI ethics and its continuing limitations within the private sector, the remainder of this report explores the current job market to identify the state of AI ethics roles.

Initial Market Scope.

SOCIAL AND POLITICAL SCOPE OF AI ETHICS DEMANDS A COLLABORATIVE APPROACH.

As algorithms increasingly automate everyday activities performed by the individual across industries like:

HEALTHCARE

MARKETING

TRANSPORT

FINANCE

INSURANCE

MANUFACTURING

SECURITY & DEFENCE

AGRICULTURE

PUBLIC SECTOR

it is increasingly apparent that considering AI and its ethical ramifications is no longer simply the prerogative of software and IT firms.

Decisions that are made using algorithms impact nearly every aspect of our lives.

By retrieving information about an individual, AI is used to assess the likelihood of reoffending (for example, by the COMPAS system in US courts), to determine eligibility for insurance, education and recruitment, to enact predictive policing and, more insidiously, even to surveil citizens via facial recognition.

When such algorithms are coded with bias, or use ‘black boxes’ that obscure the justifications for the decision-making to which they contribute, AI risks disenfranchising members of society.

[Image: Gender Shades]

AI’s considerable social, political and economic influence is such that jobs at the intersection of AI and ethics demand an interdisciplinary approach.

Fostering multistakeholderism and a diversity of opinion is necessary to mitigate the risk of issues facing AI, such as algorithmic bias and lack of transparency.

MULTI-STAKEHOLDER COLLABORATION ACROSS JOB SECTORS.

In the past 5 years, a substantial amount of research energy has been devoted to establishing AI ethics principles.

AlgorithmWatch’s AI Ethics Guidelines Global Inventory compiled 160 documents from various companies and organisations, all setting out AI principles and guidelines. (Thümmler, 2020)

The Berkman Klein Center’s white paper identified 36 prominent documents that set out ethical approaches to AI principles between 2016 and 2019 alone.
(Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, 2020)

The Berkman Klein Center study speaks to the significant breadth of job sectors within which research into ethical AI principles has been undertaken: governmental institutions, inter-governmental organisations, civil society, private-sector companies, advocacy groups and multi-stakeholder institutions.

The Berkman Klein Center report does not simply evidence the breadth of job sectors incorporating ethical approaches to AI. It also shows that, as early as 2016-2019, there was an awareness within these sectors regarding the need for cross-sector dialogue in order to practice AI ethics.

64% of all the approaches to AI principles emphasised the importance of multi-stakeholder collaboration, defined as: ‘encouraging or requiring that designers and users of AI systems consult relevant stakeholder groups while developing and managing the use of AI applications’ (Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, 2020).

IBM’s ‘Everyday Ethics for AI’ report (2019) recommended cross-sector dialogue between AI developers, policymakers and academics to encourage a diversity of perspectives within AI systems.

Access Now’s ‘Human Rights in the Age of AI’ (2018) advised that private-sector actors should assess AI systems by collaboration with external human rights organisations as well as human rights and AI experts.

‘FROM PRINCIPLES TO PRACTICE’:

DEVELOPING TECHNICAL SOLUTIONS FOR ETHICAL AI BY COLLABORATION.

The emphasis in these reports on the need for multistakeholderism in developing ethical AI, particularly across job sectors, is reflective of a wider trend in AI Ethics scholarship to herald a move ‘from principles to practice’ (European Parliamentary Research Service, 2020).

As the 160 ‘AI principles’ documents compiled by AlgorithmWatch and the 36 compiled by The Berkman Klein Center respectively show, the AI ethics research landscape contains a proliferation of theoretical approaches to the subject. 

Moving to apply these principles in practice, current discussion of ethical AI marks a shift towards focusing on the ways in which ethics can be technically incorporated into technology.

This rhetoric advocates for the worth of roles that align theoretical research about ethics and/or AI with practical initiatives. 

Due to time constraints, DataEthics4All has focused on two key practical sectors that tend to work collaboratively with ethical theorists and researchers across traditional occupational boundaries:


Policy and Governance


Design and Engineering

POLICY AND GOVERNANCE.

In order to move ‘from principles to practice’ in terms of advancing AI ethics, AI policy experts are adopting more collaborative approaches to integrate ethical perspectives into governance.

The European Parliamentary Research Service recommendations (2020) particularly champion the benefits of this collaboration, using seven key insights from theoretical ethics and technology research as the framework for their AI policy recommendations.

The recognition that the issues tackled in AI policy roles necessarily involve, or should involve, AI ethics was evident as early as 2019.

DataEthics4All has identified that all five of the ‘Emerging and Urgent Concerns in 2019’ listed in the AI Now 2019 Report can also be characterised as issues of ethical concern.

THE 5 CONCERNS READ AS FOLLOWS:

AI and Neighbourhood Surveillance

Smart Cities

AI at the Border

National Biometric Identity Systems

China AI Arms Race Narrative

Similarly aligning principles of ethics and practical governance, the Public Policy Programme at The Alan Turing Institute utilises the Data Ethics Framework as a practical tool to support their policy direction (Understanding Artificial Intelligence Ethics and Safety, 2019).

DESIGN AND ENGINEERING:

ETHICS BY DESIGN.

This approach to AI ethics is often alternatively characterised by phrases such as ‘embedding values in design’, ‘value-sensitive design’ and ‘ethically aligned design’.

The trend towards favouring proactive ethics in respect of AI has coincided with the emergence of an approach to AI called Ethics by Design.
Ethics by Design as defined in AI Ethics is ‘a key component of responsible innovation that aims to integrate ethics in the design and development stage of the technology’ (Artificial Intelligence, Responsibility Attribution, and a Reasonable Justification of Explainability, 2020).

Traditionally, the jobs linked to the design and development stage of AI might include AI researchers, data scientists, engineers, data analysis roles, UX/UI designers and their associated positions.

Indeed’s 2018 research into the emerging trends in artificial intelligence evidences the pervasiveness of such design and development roles in the AI field (Jobs of the Future: Emerging Trends in Artificial Intelligence, 2018).

Of all the jobs requiring AI skills identified by Indeed in 2018, 94.2% were for machine learning engineers and 75.1% for data scientists, evidencing a clear demand for such design and development roles.

To practice Ethics by Design, rather than leaving their design and development teams to work in silos, more organisations are encouraging those teams to collaborate with ethicists.

Microsoft recently created a specialised Ethics and Society team within their engineering function (Ethics by Design: An Organisational Approach to Responsible Use of Technology, 2020).

This allowed ethics experts to actively encourage engineers to embed ethical principles within AI at the early stages of development, thereby mitigating the risk of building ethical problems into the technology.

Similarly, Workday initiated a cross-functional AI Ethics Initiative task force to develop the company’s machine learning ethics, involving experts from functions including product and engineering, legal, public policy, privacy, and ethics and compliance (Ethics by Design: An Organisational Approach to Responsible Use of Technology, 2020).

STEM AND HUMANITIES: BRIDGING THE DIVIDE.

Though progress is still ongoing, the emphasis in the AI ethics field on multi-stakeholder approaches, on policy grounded in ethical principles and on Ethics by Design is likely to encourage a more collaborative AI job market.

If design and engineering functions increasingly operate alongside ethicists, this is likely also to bridge the gap between STEM subjects and their humanities counterparts.

In-depth Analysis of the Current Job Market.

METHODOLOGY.

To obtain a snapshot of the current AI ethics job market, data was downloaded from 80,000 Hours, a US-based organisation that focuses explicitly on ethical employment.

Available postings from 20/09/20 to 29/06/21 under the problem area ‘AI safety & policy’ were selected.
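As an illustration, this selection step can be reproduced in a few lines of pandas. The sketch below is a hypothetical reconstruction: the file name and the ‘problem_area’ and ‘date_posted’ column names are assumptions, not the job board’s actual schema.

```python
import pandas as pd

# Hypothetical export of the 80,000 Hours job board; the file name and
# the "problem_area" / "date_posted" columns are assumed.
jobs = pd.read_csv("80000hours_job_board.csv", parse_dates=["date_posted"])

# Keep postings tagged 'AI safety & policy' within the study window.
mask = (
    (jobs["problem_area"] == "AI safety & policy")
    & (jobs["date_posted"] >= "2020-09-20")
    & (jobs["date_posted"] <= "2021-06-29")
)
sample = jobs[mask]
print(len(sample))  # the report's sample contains 122 postings
```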

The job roles listed as AI safety and policy can be broadly characterised as roles in AI ethics, reflecting how the governance of AI is increasingly being wedded to AI ethics.

Research using an independent job board validated this link, showing that 12% of responsible technology job postings were classed as policy roles.

The independent job board used is the All Tech is Human ‘Responsible Tech Job Board’.

[visualizer id="13412" lazy="no" class=""]

Roles associated with AI Safety and Policy account for 17% of all listings on the 80,000 Hours database, the second most popular sector after global health and poverty, resulting in a total sample of 122 postings.

As 80,000 Hours is US-based, its job postings may carry an English-speaking or US bias, so the results should not be taken as representative of the global distribution of jobs in AI ethics.
Nevertheless, the US is a leading player in AI ethics, so the dataset provides useful insight into the current state of employment with an ethical focus within the AI sector.

Of all the published job postings, the US market clearly dominated, accounting for 73% of total jobs, more than all other countries combined.

The country with the second highest number of postings was the UK (10%).

[Chart: Location of AI Safety and Policy Postings between 20/09/2020 – 29/06/2021]

The dataset is limited to job postings from September 2020 to June 2021 and thus should only be treated as a snapshot of the current jobs market.

RESULTS.

FREQUENCY OF WORDS ASSOCIATED WITH AI ETHICS IN JOB DESCRIPTIONS.

In order to elucidate the various synonyms for AI Ethics, DataEthics4All conducted research into the vocabulary deployed in job descriptions that advertised roles involving AI ethics.

Previous attempts to categorise the terms used to refer to ethical and human-rights-based approaches to AI have been relatively sparse.

However, the Berkman Klein Center’s 2020 analysis of the 36 prominent documents that set out ethical approaches to AI principles between 2016 and 2019 was particularly fruitful.

The Center identified the eight key themes by which AI ethics is most consistently described.
(Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, 2020)

EIGHT KEY THEMES:

Privacy
Human Control of Technology
Transparency & Explainability
Safety & Security
Fairness and Non-discrimination
Accountability
Professional Responsibility
Promotion of Human Values

The DataEthics4All research, broadening the sample size to 122 postings, used these broad thematic categories to investigate the vocabulary deployed in job descriptions.
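To make the counting method concrete, the sketch below shows one way such a theme-frequency count could be implemented. It is a minimal illustration rather than the actual analysis script: the `postings` list and the exact keyword groupings are assumptions standing in for the 122 sampled job descriptions.

```python
from collections import Counter

# Hypothetical stand-ins for the 122 job descriptions in the sample.
postings = [
    "Research engineer working on the safety of large-scale AI systems...",
    "Policy analyst focusing on transparency and accountability in ML...",
    # ... remaining postings ...
]

# Theme keyword groups loosely following the Berkman Klein Center
# categories; multi-word themes are matched on *either* keyword.
themes = {
    "safety/security": ["safety", "security"],
    "transparency/explainability": ["transparency", "explainability"],
    "fairness/non-discrimination": ["fairness", "discrimination"],
    "privacy": ["privacy"],
    "accountability": ["accountability"],
}

counts = Counter()
for text in postings:
    lowered = text.lower()
    for theme, keywords in themes.items():
        # A posting counts once per theme, however many keywords match.
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n} of {len(postings)} postings")
```

Because multi-word themes match on either keyword, they can be expected to register more hits than single-keyword themes, which is precisely the limitation discussed below.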

The data shows that the most frequent words used to describe AI ethics jobs are ‘safety’ or ‘security’, suggesting that the AI ethicist role is in large part concerned with anticipating and mitigating possible security risks posed by AI.

However, this conclusion has various limitations, not least that a search for job descriptions containing either of two words can reasonably be expected to return more results than a search for a single word.

Despite this limitation, however, ‘safety’ or ‘security’ returned significantly more results than any counterpart search that also contained two words, such as ‘transparency’ or ‘explainability’.

Another limitation of our findings that cannot be so easily resolved is that our data, characterised by 80,000 Hours as ‘AI safety & policy’ roles, inherently tends towards jobs involving ‘safety’.

This crucial limitation of the dataset makes it impossible to conclude that jobs about ‘safety’ or ‘security’ dominate the AI Ethics job market as a whole.

Despite these limitations, however, our data does illuminate the extent to which different vocabulary is used to refer to jobs in the field of ‘AI Ethics,’  suggesting a lack of fixed scope for such jobs that reflects the relative novelty of the field.

[visualizer id="13507" lazy="no" class=""]

ROLE TYPES OF JOB LISTINGS.

Analysis of the role type associated with each AI Safety and Policy job listing shows that research, engineering and policy represent the majority of roles advertised in the study period.

This is shown by the graph below, which depicts the total number of tags across all listings.

[visualizer id="13521" lazy="no" class=""]

The chart below shows that the majority of job listings were categorised under multiple role types, suggesting that the current jobs market is dominated by roles that span disciplines.

Only 8% of jobs involve a single function, a small minority compared with the 92% that involve cross-function combinations.

These findings reinforce that the AI ethics job market is adopting an approach that prioritises multi-stakeholder collaboration over siloed functions.

[visualizer id="13528" lazy="no" class=""]

Further analysis was conducted of the specific role combinations that made up 92% of the AI Safety and Policy job listings.

Any role combination with fewer than 4 associated postings was categorised under ‘various different combinations’.
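As a rough illustration of this grouping step, the sketch below reproduces the logic on hypothetical tag data. The `listings` values and tag names are assumptions, while the threshold of 4 postings follows the description above.

```python
from collections import Counter

# Hypothetical role-type tags per listing, standing in for the
# 122 postings in the 80,000 Hours sample.
listings = [
    {"engineering", "research"},
    {"policy", "research"},
    {"engineering", "policy", "research"},
    {"operations"},
    # ... remaining listings ...
]

MIN_POSTINGS = 4  # combinations below this threshold are pooled

combo_counts = Counter(frozenset(tags) for tags in listings)

# Share of listings tagged with a single function.
single_function = sum(n for combo, n in combo_counts.items() if len(combo) == 1)
print(f"single-function listings: {single_function / len(listings):.0%}")

# Pool rare combinations under 'various different combinations'.
grouped = Counter()
for combo, n in combo_counts.items():
    if n < MIN_POSTINGS:
        grouped["various different combinations"] += n
    else:
        grouped[" + ".join(sorted(combo))] += n

for combo, n in grouped.most_common():
    print(f"{combo}: {n / len(listings):.0%}")
```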

[visualizer id="14411" lazy="no" class=""]

The analysis revealed the top three most common combinations of role types after ‘various different combinations’ to be:

1.   Engineering and research

2.   Engineering, policy and research

3.   Policy and research

Making up 28% of all combinations, the collaboration between engineering and research emerges as particularly significant in the AI Ethics field.

This collaboration reflects DataEthics4All’s findings regarding the increasing emphasis in the AI Ethics field on practicing Ethics by Design and incorporating ethical research into engineering.

While the 7% of combinations between policy and research roles is unsurprising given the traditional relationship between these disciplines, the 9% represented by engineering, policy and research together is informative.

This collaboration suggests that AI policy is being informed by experts in the mechanics of AI, and perhaps even that AI engineering is anticipating possible future regulatory issues with the technology from the outset.

Additional research would, however, be necessary to corroborate these findings.

REQUIRED EXPERIENCE AND QUALIFICATIONS.

77% of job listings specified only 0–2 years’ experience, pointing to the accessibility of the field.

However, the data collected does not account for discrimination between candidates based on experience later in the recruitment process.

Similarly, only 24% of postings required a master’s degree or higher, again highlighting the accessibility of roles within AI ethics.

[visualizer id="13995" lazy="no" class=""]
[visualizer id="13542" lazy="no" class=""]

RECOMMENDATIONS.


Ethical AI demands collaboration between disciplines and roles.
In particular, design and engineering teams should collaborate with ethicists, policy-makers and researchers to practice Ethics by Design.


By collaborating across disciplines and incorporating multi-stakeholder approaches, ethical AI initiatives are better positioned to bridge the humanities/STEM divide.
Incorporating multiple perspectives in the early stages of the AI design process will help prevent biases from being coded into technology and ensure that AI serves all of its users.


Most work in AI ethics has focused on establishing principles and guidance in what has been deemed a ‘soft policy’ approach.
AI ethics research must go beyond this, developing a greater focus on implementation and practice within the private sector where AI technology is being applied.


Efforts to acknowledge AI in the private sector must move beyond performative approaches driven by obligation.
As high profile cases such as Google’s dismissal of two of its AI ethics researchers suggest, efforts to establish ethical AI within firms are limited if they are motivated by the ‘optics’ of ethical research and management positions, driven by growing public awareness of AI issues.


Despite concerns over performativity, high-profile executive roles and committees focusing explicitly on AI ethics are important.
These roles ensure accountability and establish AI ethics at the top of the corporate hierarchy.
Ethics committees should involve co-operation between executive branches, incorporating technical, ethical, legal and subject-matter experts as well as citizen participants.


Integrated, bottom-up approaches to corporate AI ethics are needed alongside the establishment of executive roles.
Greater ethical training of engineers and developers, and increased transparency and communication with developers about how their work is used by the corporation, are required to fully transform the use of AI in the private sector.


Research in AI ethics must be diversified to include a more representative range of perspectives to address demographic and geographic biases in contributions.
The positionality of research should be stated to avoid presenting partial perspectives as objective.
This is particularly important considering that ethical analyses of algorithmic biases have illuminated their ability to particularly disadvantage minority groups.

CONCLUSIONS.

AI remains a disruptive force at the forefront of technological research with huge potential to generate positive societal change. As the ethical risks of AI are further explored, there is increasing demand to address the corresponding potential for the technology to negatively impact society.
Accounting for ethical risks via the private sector is an essential step to increasing transparency and accountability and will enable the better adoption of AI across the corporate landscape and society at large.

Although an impetus for collaboration between disciplines and across job sectors is evident within the AI ethics field, much remains to be done to ensure that the functions developing and designing AI are in constant dialogue with ethicists, policy-makers, researchers and other stakeholders.
Those that build and design the technologies that are becoming increasingly integrated into our social, political and economic interactions have a responsibility to ensure that AI reflects a diversity of perspectives.  

More than simply collaborating across functions, the future of the AI Ethics field may require entirely new roles to be developed that better align design and development perspectives with ethical ones.
For, as Palm and Hansson identify, despite cross-sector collaboration, engineers themselves are still ‘seldom trained to discuss the ethical issues in a pre-emptive perspective’ (Palm and Hansson, 2006). The skills necessary to identify and conceptualise ethical issues remain, for the most part, the expected prerogative of applied ethicists: the European Parliamentary Research Service claims it is ‘unlikely’ that current data scientists or engineers possess them (2020).

The conclusion that emerges is that, if jobs do not yet exist that recruit people with the diversity of skills and perspectives needed to combine technical design and development with ethical frameworks, entirely new roles may need to be created for such ethical engineers.

As a field that is evolving at such a rapid pace, it is reasonable to assume that AI Ethics and its associated job market will adapt to meet the challenge of technology that requires inherently collaborative, multi-disciplinary approaches.
