Automated decision-making (ADM) is a use case of applied artificial intelligence (AI) with growing demand in various business scenarios. In this context, algorithms make and execute decisions that affect individuals’ lives. This article discusses the contemporary importance of self-determination in light of the increasing application of ADM. It illustrates relevant ethical theories regarding the concept of self-determination and describes risks of discrimination, particularly in the context of automated decision-making. The paper then explores how self-determination is reflected in current legislation and assesses possible areas for legal adaptation.
The term “hyperautomation” does not merely describe a trend; it stands for the future strategic approach of many companies, aiming at the efficient automation of as many business processes as possible (Gartner, 2021). In the present era of automation, automated decision-making (ADM) is a growing use case in which decision-making processes are automated by applying Artificial Intelligence (AI). More concretely, algorithms make decisions based on data processing, and these decisions or predictions affect individuals in various ways. Whether it is the rating of a person’s creditworthiness in a credit request system, the pre-selection of applicants in an HR application process, or a health diagnosis in the medical sector (Fry, 2018), several scenarios in which decisions are made by an algorithm, thereby replacing human interaction, already exist or are under development. Once automatically processed personal data results in a decision or a recommendation for action, the technology’s functioning has a direct impact on individuals’ self-determination. This article first introduces the AI use case of ADM and subsequently discusses ethical theories and principles around self-determination, particularly in the application of ADM. It then outlines heteronomy and the risks of discrimination caused by algorithms. Lastly, the paper explores how self-determination is reflected in current legislation, with the concluding aim of identifying possible needs for adaptation or extension of the legal framework with particular regard to ADM.
In brief, automated decision-making (ADM) describes a process in which large amounts of data are processed by algorithms in order to derive data-driven decisions (Newell & Marabelli, 2015). Article 22 of the European General Data Protection Regulation (GDPR) delivers another definition of ADM by referring to “a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (GDPR, 2016).
The figure below illustrates a basic ADM scenario in which the decision is made solely by the algorithm (AI agent). The basic ADM process can be clustered into three main phases. During the input phase, the algorithmic model programmed by developers is set up, and relevant data sets are selected by developers and deployed to the AI agent. In the subsequent processing phase, the model is applied by processing the data against defined target variables for the decision’s subject. In the final output phase, the AI agent makes the decision based on the created output.
Figure 1: Completely Automated Decision-Making Process (own illustration, partly based on Landau & Landau, 2017, p.2; Jaume-Palasí & Spielkamp, 2017, p.7).
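The three phases described above can be sketched in code. The following is a minimal, hypothetical illustration of a fully automated credit decision; the function names, the toy scoring rule, and all numbers are assumptions for illustration only and are not taken from any real system discussed in this article.

```python
# Hypothetical sketch of the three ADM phases (input, processing, output).
# The feature selection and the scoring weights are invented for illustration.

def input_phase(raw_records):
    """Input phase: developers select the relevant features for the model."""
    # Here only income and existing debt are kept for a toy credit decision.
    return [(r["income"], r["debt"]) for r in raw_records]

def processing_phase(features, income_weight=1.0, debt_weight=-2.0):
    """Processing phase: the model scores each record against target variables."""
    return [income_weight * income + debt_weight * debt
            for income, debt in features]

def output_phase(scores, threshold=0.0):
    """Output phase: the AI agent derives a decision from each score."""
    return ["approve" if score > threshold else "reject" for score in scores]

applicants = [
    {"income": 4000, "debt": 500},   # score 3000
    {"income": 1500, "debt": 1200},  # score -900
]
decisions = output_phase(processing_phase(input_phase(applicants)))
print(decisions)  # ['approve', 'reject']
```

Note that in this fully automated setting no human reviews the output phase: the decision takes effect as soon as the score crosses the threshold, which is precisely the scenario Article 22 GDPR addresses.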
The ancient Greek philosophers already discussed the fundamental aspect of individual autocracy and the naturally given instinct of self-preservation (Gerhardt, 2018). Gerhardt describes autocracy, with its aim of self-sufficiency, as the basic condition for virtue (ibid.). Since the Age of Enlightenment, the ethical discussion about self-determination has been accompanied by the elementary theory of autonomy. Immanuel Kant, the theory’s founder, defined the autonomy of the will in his Groundwork of the Metaphysics of Morals as “the property of the will by which it is a law to itself” (Kant, 1785).
Kant discusses the principle of autonomy together with the principle of moral and reasonable acting (Kant, 1785). Further, when individuals act in an autonomous way, they are conscious of the reasons for and the motivation behind their specific behaviour (Rößler, 2017a). Thus, a person needs to know him- or herself in order to act and live in a self-determined way (ibid.). Acting autonomously means acting for one’s own reasons (ibid., p.35). Kant’s theories further rely on the human concepts of dignity, freedom, self-worth, and interaction (Ulgen, 2017).
Transferring these basic ethical concepts of self-determination to the use case of ADM, where a decisive output can be generated by an algorithm without any human interaction, reveals contradictions with the ethical concept. First, algorithms themselves act, or rather operate, in an autonomous way. However, they are not able to fully apply the above concepts due to their lack of human consciousness and moral self-perception (Ulgen, 2017). Second, in the case of ADM, algorithms create output that results in concrete decisions, which restricts the affected person’s self-determination and rather exemplifies heteronomy.
The importance of the concept of self-determination becomes apparent in light of the contrary concept of heteronomy. Within an ADM process, there are two respects in which the machine causes heteronomy. First, the person is impacted by, and thus subject to, the machine’s decision. Second, since the decision-making process itself works partly by processing personal data profiles, persons are additionally subject to a decision based on their data profiles. Closely linked to these heteronomous scenarios are risks of discrimination through the use of algorithms, as shown by a recent study by Orwat, compiled with a grant from the Federal Anti-Discrimination Agency of Germany (Orwat, 2020). More concretely, during the processing of group data, generated stereotypes can determine the decisive outcome provided by the algorithm (ibid.). Given that individuals cannot agree or disagree to the algorithm-based differentiation, the freedom of personal development, the right to self-expression, and the protection of human dignity are infringed (ibid.). In addition, if the applied data sets are incomplete or already biased, this can result in an over- or underrepresentation of certain groups, and statistically discriminating correlations may occur (ibid.). Furthermore, deliberate discrimination can be concealed within software-based applications (ibid.). In 2018, Amazon admitted that it would no longer use an AI software to select applicants (Goodman, 2018). Applications from women, or from persons who had graduated from colleges attended predominantly by women, were disadvantaged. The reason appears to lie in the training data of previously successfully recruited applicants: since applicants at Amazon are often men interested in technology, women were more likely to be filtered out (Goodman, 2018). Another real-life example of AI bias appeared in the US health sector in 2019.
US hospitals had applied an algorithm to predict the likelihood that patients would need extra medical care and thereby significantly favoured white patients over black patients, because the algorithmic model used historical health-care spending data as a misleading proxy for medical need (Vartan, 2019).
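The proxy problem described above can be made concrete with a small sketch. All figures below are synthetic and invented purely for illustration: one group has historically spent less per unit of medical need, so ranking patients by predicted spending systematically pushes that group below the selection cut-off even at identical need.

```python
# Illustrative, synthetic example of proxy discrimination: selecting patients
# for extra care by historical spending instead of actual medical need.
# Group B spends far less at the same level of need (all numbers invented).

# (group, true_need, historical_spending)
patients = [
    ("A", 8, 8000), ("A", 5, 5000), ("A", 3, 3000),
    ("B", 8, 2800), ("B", 5, 1750), ("B", 3, 1050),
]

# "Model": select the top half of patients for extra care, ranked by spending.
by_spending = sorted(patients, key=lambda p: p[2], reverse=True)[:3]
# Counterfactual: rank by true medical need instead.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:3]

print([g for g, _, _ in by_spending])  # ['A', 'A', 'A'] -> group B excluded
print([g for g, _, _ in by_need])      # ['A', 'B', 'A'] -> both groups selected
```

The spending proxy excludes group B entirely, even though one of its members has the highest possible need, which mirrors the structure of the bias reported in the US health-care case.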
Rößler offers a more abstract perspective on this heteronomy. She argues that Kant’s original idea of autonomy should nowadays be seen in a broader and more personal context (Rößler, 2017b). She further explains that personal autonomy is never ideal and constantly faces internal or external restrictions (ibid.): inner barriers in the form of self-deception or the inability to make decisions, outer barriers through inequality or discrimination (ibid.).
Rößler’s descriptions underline the following, differentiated view on ADM. Regardless of machines’ increasing involvement, decisions in various business scenarios, such as credit approval processes, job application management, or insurance fee calculation, have always been made by external parties rather than by the affected persons themselves, which exemplifies pre-existing limits of self-determination and thus heteronomy. A significant difference between human decision-making and automated decision-making lies in the accountability applicable to human-made decisions. From an ethical point of view, this accountability results in the (personal) application of moral and reasonable principles; from a legal perspective, it results in the application of relevant laws. Given the lack of transparency and consciousness of AI-based software, we identify limits to both the machine’s and the developer’s accountability with regard to the application of both ethical principles and legal provisions.
In conclusion, an important role of the concepts of privacy and self-determination is the protection of individuals against (group-related) discrimination and profiling. Thus, in today’s age of information and Big Data, self-determination with regard to personal data is highly relevant in order to avoid statistical or even deliberate discrimination. This leads directly to the legal term of informational self-determination, which literally includes the aspect of self-determination and opens the discussion about the existing legal framework around self-determination.
Once a new technology has entered society, we identify the need for legal review. More precisely, the examination of legal compliance is required and should be part of the technological development itself (Spindler et al., 2020). Against this background, it must be investigated whether the technology restricts human rights and whether it is compliant with existing legislation. With respect to the use case of ADM, three subordinate areas of existing legislation form relevant legal principles to be considered when exploring the legal framework in light of self-determination.
The first area relates to the aforementioned Right to Informational Self-Determination (original: Recht auf informationelle Selbstbestimmung), both in its terminology and in its content. It is a German fundamental right which was derived from the general right of personality in the context of the Census Act ruling of 1983 (BVerfGE, 1983). It describes the right of each person to decide on the provision and usage of personal data in a self-determined way (BVerfGE, 1983; Albers, 2005). It aims to protect individuals against influence on, or handling of, their personal information or data and to strengthen the individual’s freedom of decision-making and freedom of action (Albers, 2005). In addition, it is the government’s duty to actively safeguard the right’s protective dimensions (Papier, 2012). The right can further be derived from article 7 of the Charter of Fundamental Rights of the European Union: “Everyone has the right to respect for his or her private and family life, home and communications.” (CFR, 2012, art.7). In addition, article 8 of the European Convention on Human Rights contains almost identical wording (ECHR, 1950).
This leads to the second legal area, the European data protection regulations. With the GDPR (GDPR, 2016), applicable since 25 May 2018, the European Union joined forces in the field of data protection, in particular to establish regulations for the protection of personal data across all European countries. In the following, the main aspects of the GDPR that restrict, and partly conflict with, algorithmic software are clarified. Data processing forms the core of AI-based technology. In the present case, algorithms constantly learn new patterns from bulk data. They may even change their originally coded behaviour in a way that is not entirely traceable (Jaume-Palasí & Spielkamp, 2017). With regard to ADM, it is in fact not fully explainable which data set has been analysed, processed, and, most importantly, has ultimately resulted in the decision or the recommendation for action. These characteristics of algorithms conflict with some of the GDPR’s principles, namely with article 5(1), as any personal data processing needs to happen in a lawful, fair, and transparent way (GDPR, 2016). Further, accountability for any personal data processing must be ensured following article 5(2), which in the case of AI-based data processing applies to data controllers (GDPR, 2016). Of considerable importance is the principle that personal data must be processed in a way that is comprehensible to the data subject (art. 5(1) lit. a GDPR). Traceability and explicability are essential aspects when using AI systems (Kugelmann, 2019). However, the GDPR contains no “right to explanation” of specific automated decisions (Wachter et al., 2017). Recital 71 GDPR also calls for the use of appropriate mathematical or statistical methods capable of preventing discriminatory effects.
This coincides with article 22 of the GDPR, which specifically relates to automated decision-making by stating that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (GDPR, 2016, art. 22).
With regard to the right to informational self-determination, the term self-determination does not appear in the GDPR; however, the regulatory framework does contain actionable elements of informational self-determination, namely through article 7, which defines that, “where processing is based on consent, the controller shall be able to demonstrate that the data subject has consented to processing of his or her personal data” (GDPR, 2016, art.7). The article further states that “the request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language” (ibid.). In fact, article 7 of the GDPR plainly demonstrates that individuals need to actively give their consent while being informed about the processing of their data, which describes a fully self-determined action. Also, the principle of data portability in article 20 of the regulatory framework indicates content-related conformity, namely that every person has the right to receive and transmit their personal data in a self-determined way (GDPR, 2016).
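One way article 7’s requirement of demonstrable consent could be operationalised in an ADM pipeline is to gate any automated processing on a recorded consent entry. The following sketch is purely illustrative: the log structure, identifiers, and toy model are assumptions, not a prescribed implementation of the GDPR.

```python
# Hypothetical sketch: gating automated processing on demonstrable consent,
# in the spirit of article 7 GDPR. All names and record layouts are invented.

consent_log = {
    # subject_id -> (purpose consented to, timestamp of recorded consent)
    "user-1": ("credit_scoring", "2021-06-01T10:00:00Z"),
}

def process_with_consent(subject_id, purpose, data, model):
    """Run the ADM model only if consent for this exact purpose is on record."""
    entry = consent_log.get(subject_id)
    if entry is None or entry[0] != purpose:
        # No demonstrable consent for this purpose: refuse automated processing.
        return None
    return model(data)

toy_model = lambda data: "approve" if data["score"] > 50 else "reject"

print(process_with_consent("user-1", "credit_scoring", {"score": 70}, toy_model))  # approve
print(process_with_consent("user-2", "credit_scoring", {"score": 70}, toy_model))  # None
```

The design point is that the consent record, not the data itself, controls whether the algorithm runs at all, so the controller can demonstrate consent for every decision actually made.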
By releasing the AI law proposal in April 2021, the European Commission introduced an additional legal framework for the future, which specifically addresses potential risks of AI-based software, including the present case of ADM. The proposal, titled Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, is the first draft legal framework on AI worldwide and assigns a leading role in this field to Europe (European Commission, 2021). For the first time, a proposal legally defines AI systems, in article 3(1), as software that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (European Commission, 2021, art.3(1)). The AI law proposal further follows a risk-based approach, regulating AI software according to respective risk levels (ibid.). Compared to the GDPR, the focus lies not only on the protection of personal data but also on providing a guiding framework for the development of AI and on compliant outputs created by AI. The explanatory memorandum of the proposal states the EU’s objective to develop “secure, trustworthy and ethical artificial intelligence” (European Commission, 2021, p.1) by ensuring the protection of ethical principles. The fundamental rights to be protected by the proposal are all part of the Charter of Fundamental Rights of the European Union, including, among others, respect for private life and protection of personal data (articles 7 and 8), non-discrimination (article 21), and equality between women and men (article 23) (CFR, 2012). No article explicitly addresses the right to self-determination; however, the planned risk assessments of AI software with regard to their impact on individuals provide a protective quality gate.
Nonetheless, elements supporting self-determination, such as informed consent regarding AI software, are not unambiguously contained in the proposal.
Self-determination is a prerequisite for an individual’s self-development. It equally supports a responsible and autonomous way of living within society and additionally prevents discrimination caused by heteronomy, which again underlines its importance. Numerous philosophers share the opinion that, within a politically led society, individuals are confronted with external control through legislation, rights, or governmental restrictions. The previously mentioned use cases have shown that companies and corporate stakeholders in particular impact individuals’ self-determination by applying ADM, such as automated HR management at Amazon, automated credit approval in the finance and insurance sector, or face recognition software applied at Facebook.
Thus, ethically, self-determination is defined not only by the property of the will and the freedom to decide, but also by internal and external limits and boundaries caused by societal participation. With respect to heteronomous scenarios, it is specifically required to protect people from bias, (group) discrimination, or violation of their rights. In the age of technology, it is essential to set operational standards by enabling “ethics-by-design” and by providing a stable and secure legal framework that creates transparency and accountability for each impactful output made by algorithms or AI software in general, in order to avoid statistical or deliberate discrimination and to protect and strengthen self-determination. The following section evaluates existing legislation accordingly.
The Right to Informational Self-Determination does include self-determination regarding personal data and constitutes one of the first legal approaches within this discussion, albeit on a meta-level. However, the right lacks practical reference and applicability and thus does not fully meet today’s requirements in technological data use cases such as automated decision-making. Nonetheless, it describes a legal and historical milestone and contributed to the development of the GDPR.
The GDPR has laid an operationalised legal foundation for the protection of personal data. However, for the specific contradiction between self-determination and ADM, there is no concrete principle that precisely reflects, and thereby protects, individuals’ self-determination within the automated use of personal data. This reveals a need for review and extension of the GDPR towards actionable principles related to automated personal data processing that also contain transparency components.
With the proposed European AI law, a future-oriented framework for AI technology is planned. AI software is to undergo strict risk assessments and to protect individuals against, for example, discrimination and inequality, by respecting all rights of the Charter of Fundamental Rights. However, the proposed regulation lacks elements for actionable self-determination, which implies a need to concretise the law proposal towards the inclusion of self-determination.
The above findings lead to the following questions:
The answers to the above questions and the identification of concrete measures in the mentioned areas for improvement in the given legislation are subject to further research, which aims to deliver a holistic view and concrete ideas for legal adaptation that protects fundamental personal rights, strengthens self-determination, and enables “ethics-by-design” in the present and future era of hyperautomation.
Albers, M. (2005): Informationelle Selbstbestimmung. Nomos, p.155, p.609.
BVerfG (1983). Bundesverfassungsgericht, Volkszählungsurteil 1983, https://www.bfdi.bund.de/DE/Datenschutz/Themen/Melderecht_Statistiken/VolkszaehlungArtikel/151283_VolkszaehlungsUrteil.html
CFR (2012). Charter of Fundamental Rights of the European Union. https://eur-lex.europa.eu/eli/treaty/char_2012/oj
ECHR (1950). European Convention on Human Rights, art. 8 sec.1, in: European Court of Human Rights (2020): Guide on Article 8 of the Convention – Right to respect for private and family life. https://www.echr.coe.int/Documents/Convention_ENG.pdf
European Commission (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence
Fry, H. (2018). Hello World: How to Be Human in the Age of the Machine. Doubleday
Gartner (2021). Gartner Forecasts Worldwide Hyperautomation-Enabling Software Market to Reach Nearly $600 Billion by 2022. https://www.gartner.com/en/newsroom/press-releases/2021-04-28-gartner-forecasts-worldwide-hyperautomation-enabling-software-market-to-reach-nearly-600-billion-by-2022
GDPR (2016). General Data Protection Regulation (EU) 2016/679 Of the European Parliament and of the Council of 27 April 2016. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679
Gerhardt, V. (2018). Selbstbestimmung: Das Prinzip der Individualität (2nd. Ed.). Reclam, p.12 seq., pp.122 seqq.
Goodman, R. (2018). Why Amazon’s Automated Hiring Tool Discriminated Against Women, in: ACLU, 12. October 2018. https://www.aclu.org/blog/womens-rights/womens-rights-workplace/why-amazons-automated-hiring-tool-discriminated-against
Kant, I. (1785): GMS – Grundlegung zur Metaphysik der Sitten. https://korpora.zim.uni-duisburg-essen.de/kant/aa04/440.html, translated and edited in Groundwork of the Metaphysics of Morals by Mary Gregor, Cambridge University Press, 1997.
Jaume-Palasí, L., Spielkamp, M. (2017). Ethik und algorithmische Prozesse zur Entscheidungsfindung oder -vorbereitung, AlgorithmWatch Arbeitspapier Nr. 4, p.7. https://algorithmwatch.org/de/wp-content/uploads/2017/06/AlgorithmWatch_Arbeitspapier_4_Ethik_und_Algorithmen.pdf
Kearns, M., Roth, A. (2019). The Ethical Algorithm. Oxford University Press, p.9.
Kugelmann, D. (2019). Datenschutz bei Künstlicher Intelligenz, in: BvD News 2/2019, pp. 5-8.
Landau, I., Landau, V. (2017). From data driven decision making (DDDM) to automated data driven model based decision making (MBDM). hal-01527766. https://hal.archives-ouvertes.fr/hal-01527766
Newell, S., Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: a call for action on the long-term societal effects of ‘datification’. J Strateg Inf Syst 24:3–14, p.4. https://doi.org/10.1016/j.jsis.2015.02.001
Orwat, C. (2020): Risks of Discrimination through the Use of Algorithms, Karlsruhe Institute of Technology, p. 49 et seqq., p.58 et seqq. https://www.antidiskriminierungsstelle.de/SharedDocs/downloads/EN/publikationen/Studie_en_Diskriminierungsrisiken_durch_Verwendung_von_Algorithmen.pdf?__blob=publicationFile&v=2
Papier, H.-J. (2012). Verfassungsrechtliche Grundlagen des Datenschutzes, in: Schmidt, J.-H., Weichert, T. (Eds.). Datenschutz, Bundeszentrale für politische Bildung.
Rössler, B. (2017a). Autonomie – Ein Versuch über das gelungene Leben. Suhrkamp, pp. 20-25.
Rössler, B. (2017b). Was ist selbstbestimmtes Leben. Spiegel Online. https://www.spiegel.de/kultur/gesellschaft/philosophin-beate-roessler-im-interview-was-ist-selbstbestimmtes-leben-a-1155708.html
Spindler, M., Booz, S., Gieseler, H., Runschke, S., Wydra, S., and Zinsmaier, J. (2020). How to achieve integration? Methodological concepts and challenges for the integration of ethical, legal, social and economic aspects into technological development, pp.215, in: Gransche, B., Manzeschke, A. (Eds.). Das geteilte Ganze, Springer Fachmedien.
Ulgen, O. (2017). Kantian Ethics in the Age of Artificial Intelligence and Robotics, Questions of International Law, http://www.qil-qdi.org/wp-content/uploads/2017/10/04_AWS_Ulgen_FIN.pdf
Vartan, S. (2019). Racial Bias Found in a Major Health Care Risk Algorithm. Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
Wachter, S., Mittelstadt, B. and Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, in: International Data Privacy Law, 2017, Vol. 7, No. 2, pp. 76-99.