\newif\ifSHOWPROOFS
\documentclass{llncs}
\SHOWPROOFSfalse
\SHOWPROOFStrue

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\usepackage{a4,a4wide}
\usepackage{xspace}
\usepackage{url}
\usepackage{amsmath,amssymb}
% \usepackage{algorithm}
% \usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{pst-tree}
\usepackage{pst-node}
\usepackage{epstopdf}
\usepackage{longtable}
\usepackage[utf8]{inputenc}

\newcommand{\grd}{\mathit{ground}}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\naf}{\ensuremath{\mathrm{not}}\xspace}
\newcommand{\sneg}[1]{\ensuremath{\neg#1}\xspace}
\newcommand{\compl}[1]{\ensuremath{\bar{#1}}}

\newcommand{\atomp}[1]{\ensuremath{\mathit{#1}\xspace}}
\newcommand{\atom}[2]{\ensuremath{\mathtt{#1}{\rm(}\mathtt{#2}{\rm)}\xspace}}
\renewcommand{\atom}[2]{\ensuremath{\mathit{#1}{(}{#2}{)}\xspace}}




\newcommand{\gringo}{\texttt{Gringo}\xspace}
\newcommand{\aspviz}{\texttt{ASPVIZ}\xspace}
\newcommand{\idpdraw}{\texttt{IDPDraw}\xspace}
\newcommand{\clasp}{\textsc{Clasp}\xspace}
\newcommand{\dlv}{\texttt{DLV}\xspace}
\newcommand{\nomore}{\texttt{noMoRe}\xspace}
\newcommand{\inter}{\ensuremath{I}\xspace}
\newcommand{\ninter}{\ensuremath{N}\xspace}
\newcommand{\deltaR}{\ensuremath{{\mathit{R}_\Delta}}\xspace}
\newcommand{\deltaI}{\ensuremath{{\mathit{I}_\Delta}}\xspace}
\newcommand{\loopS}{\ensuremath{\Gamma}\xspace}
\newcommand{\opsym}[1]{\ensuremath{\Upsilon_{#1}}\xspace}
\newcommand{\op}[2]{\ensuremath{\opsym{#1}({#2})}\xspace}
\newcommand{\statei}[1]{\ensuremath{\mathit{I}({#1})}\xspace}
\renewcommand{\statei}[1]{\ensuremath{\mathit{Int}[{#1}]}\xspace}
\newcommand{\stater}[1]{\ensuremath{\mathit{R}_{#1}}\xspace}
\newcommand{\cset}{\ensuremath{C}\xspace}
\newcommand{\state}{\ensuremath{S}\xspace}
\newcommand{\facts}{\ensuremath{F}\xspace}
\newcommand{\computation}{\ensuremath{\mathrm{C}}\xspace}
\newcommand{\blockedLiterals}{\ensuremath{\mathfrak{B}}\xspace}
\newcommand{\applrules}{\ensuremath{R}\xspace}
\newcommand{\activatedRules}[2]{\ensuremath{\mathit{AR}_{#2}(#1)}\xspace}

\newcommand{\sealion}{\texttt{SeaLion}\xspace}
\newcommand{\kara}{\texttt{Kara}\xspace}

\newcommand{\intconstraints}[1]{\ensuremath{\mathit{Con}_{#1}}\xspace}

\newcommand{\activerules}[2]{\ensuremath{\mathit{Act}^{#1}({#2})}\xspace}

\newcommand{\forbidden}{\ensuremath{\mathit{forbidden}}\xspace}
\newcommand{\currentSupport}[1]{\ensuremath{\mathit{Sup}[{#1}]}\xspace}
\newcommand{\pool}{\ensuremath{\mathit{pool}}\xspace}
\newcommand{\hcpool}{\ensuremath{\mathit{pool^{HC}}}\xspace}
\newcommand{\constraintset}{\ensuremath{\mathit{Con}}\xspace}
\newcommand{\hclits}{\ensuremath{\mathbb{L}}\xspace}
\newcommand{\Pol}{{\rm P}\xspace}
\newcommand{\PNP}{\Pol^{\rm NP}\xspace}
\newcommand{\NP}{\mbox{\rm NP}\xspace}
\newcommand{\NPNP}{\NP^{\rm NP}\xspace}
\newcommand{\CONP}{\mbox{\rm coNP}\xspace}
\newcommand{\coNP}{\mbox{\rm coNP}\xspace}
\newcommand{\SigmaP}[1]{\ensuremath{{\Sigma}_{#1}^{P}}\xspace}
\newcommand{\PiP}[1]{\ensuremath{{\Pi}_{#1}^{P}}\xspace}
\newcommand{\DeltaP}[1]{\ensuremath{{\Delta}_{#1}^{P}}\xspace}

\newcommand{\lang}{\ensuremath{\mathcal{L}}\xspace}

\newcommand{\alphabet}{\ensuremath{\mathcal{A}}\xspace}
\newcommand{\visalphabet}{\ensuremath{\mathcal{A}_V}\xspace}

\newcommand{\vispred}{\ensuremath{\mathcal{P}_v}\xspace}
\newcommand{\abdpred}{\ensuremath{\mathcal{P}_a}\xspace}
\newcommand{\intpred}{\ensuremath{\mathcal{P}_i}\xspace}
\newcommand{\abddom}{\ensuremath{\mathcal{D}_a}\xspace}

\newcommand{\la}{\ensuremath{\leftarrow}\xspace}
\newcommand{\ra}{\ensuremath{\rightarrow}\xspace}
\newcommand{\at}[1]{\ensuremath{\mathit{At}(#1)}\xspace}

\newcommand{\subst}{\ensuremath{\vartheta}\xspace}

\newcommand{\succeeds}[1]{\ensuremath{\succ_{#1}}\xspace}
\renewcommand{\succeeds}[1]{\ensuremath{\prec_{#1}}\xspace}
\newcommand{\result}[1]{\ensuremath{\mathit{res}({#1})}\xspace}

\newcommand{\be}{\begin{compactenum}}
\newcommand{\ee}{\end{compactenum}}
\newcommand{\bi}{\begin{compactitem}}
\newcommand{\ei}{\end{compactitem}}

\newcommand{\AS}[1]{\ensuremath{\mathit{AS}(#1)}\xspace}
\newcommand{\SM}[1]{\AS{#1}}

\newcommand{\naturals}[0]{\ensuremath{\mathbb{N}}\xspace}
\newcommand{\vhack}{\vspace{-8pt}}
\newcommand{\vhackafter}{\vspace{-5.5pt}}
\renewcommand{\vhack}{}
\renewcommand{\vhackafter}{}
\newcommand{\lit}[1]{\ensuremath{\mathit{L}(#1)}\xspace}


\newcommand{\hcfuncsym}[2]{\ensuremath{f_{#1,#2}}\xspace}
\newcommand{\hcfunc}[3]{\ensuremath{\hcfuncsym{#1}{#2}({#3})}\xspace}

\newcommand{\viz}[0]{viz.\xspace}
\newcommand{\wrt}[0]{with respect to\xspace}
\newcommand{\iec}[0]{i.e.,\xspace}
\newcommand{\Iec}[0]{I.e.,\xspace}
\newcommand{\egc}[0]{e.g.,\xspace}
\newcommand{\eg}[0]{e.g.\xspace}
\newcommand{\resp}[0]{resp.,\xspace}
\newcommand{\nop}[1]{}
\newcommand{\body}[1]{\ensuremath{\mathrm{B(}{#1}\mathrm{)}}\xspace}
\newcommand{\head}[1]{\ensuremath{\mathrm{H(}{#1}\mathrm{)}}\xspace}
\newcommand{\pbody}[1]{\ensuremath{\mathrm{{B^+}(}{#1}\mathrm{)}}\xspace}
\newcommand{\nbody}[1]{\ensuremath{\mathrm{{B^-}(}{#1}\mathrm{)}}\xspace}
\newcommand{\gr}[1]{\ensuremath{\mathit{gr(}{#1}\mathit{)}}\xspace}
\newcommand{\set}[1]{\ensuremath{\{{#1}\}}\xspace}
\newcommand{\abs}[1]{\ensuremath{|{#1}|}\xspace}
\newcommand{\selfsupp}[0]{self-supporting\xspace}
\newcommand{\stable}[0]{stable\xspace}


\newcommand{\trans}[2]{\ensuremath{\lambda{\rm(}{#1},{#2}{\rm)}}}
\newcommand{\guesspart}[1]{\ensuremath{\mathit{guess}{\rm(}{#1}{\rm)}}}
\renewcommand{\guesspart}[1]{\ensuremath{\mathsf{guess}{\rm(}{#1}{\rm)}}}
\newcommand{\checkpart}[1]{\ensuremath{\mathit{check}{\rm(}{#1}{\rm)}}}
\renewcommand{\checkpart}[1]{\ensuremath{\mathsf{check}{\rm(}{#1}{\rm)}}}
\newcommand{\var}[1]{\ensuremath{\mathit{VAR}{\rm(}{#1}{\rm)}}}
\newcommand{\dompart}[2]{\ensuremath{\mathit{dom}{\rm(}{#1},{#2}{\rm)}}}
\renewcommand{\dompart}[2]{\ensuremath{\mathsf{dom}{\rm(}{#1},{#2}{\rm)}}}

\newcommand{\ruleimp}{\mathtt{{:-}}}
\renewcommand{\ruleimp}{:\!- \ }

\newcommand{\nonrecdomo}[0]{\ensuremath{\mathtt{nonRecDom}}}
\renewcommand{\nonrecdomo}[0]{\ensuremath{\mathit{nonRecDom}}}
\newcommand{\nonrecdom}[1]{\ensuremath{\nonrecdomo(#1)}}
\newcommand{\domo}[0]{\ensuremath{\mathtt{dom}}}
\renewcommand{\domo}[0]{\ensuremath{\mathit{dom}}}
\newcommand{\dom}[1]{\ensuremath{\domo(#1)}}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\title{Current Trends in Computer Science\\
Summary Report }

\author{Name Surname}

\institute{Matriculation number: 0123456 \\
\email{email@email.com}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}

\maketitle
                                              
%\begin{abstract}
%Abstract goes here...

%\end{abstract}

\section{Introduction}

The talks selected for this report come from the 2022W and 2023S semesters, in which Current Trends in Computer Science was offered.
The first of these talks, by Dines Bj\o{}rner on Domain Science \& Engineering, was held on May 4th, 2022.
The second talk, by Gordana Dodig-Crnković on teaching ethics to students, was held on May 11th, 2023.
The third, by Antonio Casilli, focused on a counter-history of artificial intelligence and was held on May 12th, 2023.

Because artificial intelligence is such a prominent theme of many of the talks offered within this course, the author of this report also proudly asserts that it was written without the help of ChatGPT.

\newpage

\section{Domain Science \& Engineering - Dines Bj\o{}rner}

Dines Bj\o{}rner has an impressive and impressively long track record of work that has shaped multiple important technologies, methodologies and concepts in Computer Science.
To mention just a few, his early work contributed to functional programming, query languages (which evolved into the widely used SQL standard) and the Vienna Development Method (VDM).
But his most significant contribution is to the field of domain science and engineering.

In a project aimed at aiding developing countries that were adopting, or planning to adopt, computer systems, the need for a deeper understanding of domains seems to have arisen almost out of necessity:
The aim of this programme was to develop \textit{software for critical infrastructure}, using formal methods based on sound, scientific foundations.
Some examples provided in the talk for critical infrastructure are railways, telecommunications, banking, fisheries, transport (sea/air/land), agriculture, manufacturing, mining, pipelines and e-government.
The criticality of these domains underlines the need for domain-specificity of such software solutions.

\subsection{Making Domains into a Science}

To understand domains in more detail, the often vaguely used concept of a domain was refined, and a calculus for tackling domains for purposes of verification and computing was constructed.
The following definition of a domain was provided in the talk:

\begin{quotation}
By a \textit{domain} we shall understand: a \textit{rationally describable} segment of; a \textit{discrete dynamics} of; a \textit{human assisted reality}, i.e., of the world; its \textit{more-or-less related solid} or \textit{fluid entities} - \textit{natural} [``God-given''] and \textit{artefactual} [``man-made''] - and its \textit{living species entities}: \textit{plants} and \textit{animals} - including \textit{humans}
\end{quotation}
	
One can already get a sense that defining domains is not an easy task.
There are endless combinations of entities, their behaviors and possibly interactions, coming together to form what we might understand as a domain.
Another complication is the question of which properties are important for a purposeful application of domain knowledge and what this application should be.
In a sense, these are the deeper problems of abstraction.

Given this definition of a domain, it is reasonable to define the \textit{description} of a domain, which is an association of \textit{abstract syntax}, \textit{semantics} and the \textit{language} (\textit{nouns} and \textit{verbs}) spoken by \textit{practitioners}.

There are multiple things to consider here, but the talk focuses on semantics.
The semantics are broken down into more discrete entities: \textit{axioms} and \textit{laws} of syntactically defined \textit{endurants}; \textit{behaviors} that describe \textit{perdurants}.
Endurants and perdurants are the driving forces of this approach to understanding domains, because their complex interplay can capture domains while still remaining formally describable.

The following definition of an endurant is provided (citing Vol. I, p. 656 of The Shorter Oxford English Dictionary on Historical Principles, Clarendon Press, Oxford, England, 1973, 1987, two vols., by W. Little, H.W. Fowler, J. Coulson, and C.T. Onions):

\begin{quotation}
Endurants are those quantities of domains that we can observe (see and touch), in space, as ``complete'' entities no matter which point in time; ``material'' entities that persists and endures.
\end{quotation}

For example, things which stay still and exist as part of something are endurants.
What happens when we model domain endurants in software?
They typically become \textit{data}.

As for perdurants, the following definition is provided (with another citation to the same source, this time Vol. II, pg. 1552):

\begin{quotation}
By a \textbf{perdurant}, we shall understand an entity: for which only a fragment exists if we look at or touch them at any given snapshot in time. Were we to freeze time we would only see or touch a fragment of the perdurant.
\end{quotation}

Clearly perdurants are in some way orthogonal to endurants.
They describe things which do not stay still but still exist as a part of something, like for example objects that are in motion as part of some process.
Thus it is concluded that domain perdurants modelled in software typically become procedures.
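This endurant-as-data and perdurant-as-procedure reading can be sketched in code. The following is a toy illustration only; the railway entity and its fields are invented for this report, not taken from the talk:

```python
from dataclasses import dataclass

# An endurant is observable as a "complete" entity at any point in time;
# in software it typically becomes data (here: a record type).
@dataclass
class Train:
    uid: str            # parts are required to have a unique identification
    position_km: float

# A perdurant exists only as a fragment at any snapshot in time;
# in software it typically becomes a procedure (a behavior over time).
def move(train: Train, distance_km: float) -> Train:
    return Train(train.uid, train.position_km + distance_km)

t = move(Train("tr-1", 0.0), 12.5)
```

The data part persists unchanged between steps, while the behavior exists only as individual applications of the procedure over time.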

Now these primitives can be used to construct a calculus that operates on a domain, or more specifically, the endurants and perdurants.
Both of these are part of a more abstract group called an entity, and we can model the types of an entity with predicates.
Qualities of an entity might also be modelled using predicates.
Richer analysis is possible using relations and functions; entities can also be grouped into aggregates or broken down into sets of entities.

This calculus suddenly opens up a fascinating potential for accurate models.
For instance, the internal qualities of an entity might be formalized.
We can also try to relate entities that exist as parts at different abstraction layers with each other, for example by comparing a part to the whole (the study of which is referred to as \textit{mereology}, first proposed by the Polish mathematician, logician and philosopher Stanis\l{}aw Leśniewski (1886--1939)).
The requirement for something to be a part is for it to have a unique identification.

Another interesting potential is for deduction on the basis of this formalism to \textit{transcend} the domain itself.
We are explicitly told in the talk that a key point of the domain science and engineering approach is to transcendentally deduce a unique behavior to every manifest part.
What is meant by \textit{transcendental} is this:

\begin{quotation}
We shall understand the philosophical notion: the a priori or intuitive basis of knowledge, independent of experience.
\end{quotation}

As for a \textit{transcendental deduction}:

\begin{quotation}
We shall understand the philosophical notion: a transcendental `conversion' of one kind of knowledge into a seemingly different kind of knowledge.
\end{quotation}

Another way to look at transcendence in this framework is the process of ``injecting endurants into perdurants''.
There exists a similar kind of relationship between endurants/perdurants and nouns/verbs, and as we know from some languages, ``nouns can often be verbed''.
The notion of transcendence seems almost an inevitability of this approach but it might still be surprising.

Another concept is that of a \textit{channel}, inspired by the communication between behaviors.
To express channels, Tony Hoare's \textit{Communicating Sequential Processes} (CSP) formalism is used.
All of these primitives combined can formalize the behavior of systems, and within the talk multiple examples are offered in which a formal domain specification is constructed out of a specification in informal language.
All of this is brought together at the end in what is called a domain initialisation, which ``amounts to the parallel composition of all manifest part behaviors.''
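In the CSP spirit, the ``parallel composition of all manifest part behaviors'' can be sketched with threads that interact only over a channel. This is a minimal illustration using Python threads and a queue as the channel; the sensor/monitor behaviors are invented for illustration, not taken from the talk:

```python
import threading
import queue

# A channel in the CSP sense: behaviors interact only by message passing.
channel = queue.Queue()

def sensor(readings):
    # One manifest part's behavior: emit readings over the channel.
    for r in readings:
        channel.put(r)
    channel.put(None)                 # end-of-stream marker

def monitor(log):
    # A second part's behavior: consume readings until the stream ends.
    while (r := channel.get()) is not None:
        log.append(r)

# "Domain initialisation": the parallel composition of all part behaviors.
log = []
behaviors = [threading.Thread(target=sensor, args=([1, 2, 3],)),
             threading.Thread(target=monitor, args=(log,))]
for b in behaviors:
    b.start()
for b in behaviors:
    b.join()
```

Each behavior runs concurrently and shares no state besides the channel, which mirrors the difficulty, noted below for VHDL, of reasoning about several processes running at the same time.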

One interesting thing to note is that this structure resembles VHDL, which is used to model digital logic for hardware.
On one hand there are entities, which exist as instantiated objects on their own, but there is also a \textit{behavioral} description of the entities, which can be supplied for example by providing \textit{processes}.
The difficulty for a programmer used to sequential thinking is to reason about the inherent parallelism of such hardware.
With multiple processes in place, there are multiple things going on at the same time.
One can see the similarity to the idea of a domain initialisation, or the ``parallel composition of all manifest part behaviors.''
There is an interesting historical note here: VHDL as a language is based on Ada.
As we will see, this domain-based approach was actually used to build an Ada compiler.

\subsection{Motivation and Aftereffects of a Science of Domains}

This kind of domain science \& engineering is widely applicable.
It has been applied in a fully validated European Ada compiler developed by the Dansk Datamatik Center (based in Denmark), the inner workings of which were explained as part of the talk.

Today, it is unthinkable not to make extensive use of domain-specific knowledge in solving a problem that involves software.
The requirement to understand a domain appears wherever safety is a concern, but it is also relevant to other metrics of system performance.
As such, it is certainly interesting to consider an approach based on formalisms about the domain itself.

But within the talk, domain science \& engineering was motivated philosophically:
we have seen a part of this while looking at transcendence, but it becomes more obvious when looking into the primer written by Dines Bj\o{}rner, titled ``Domain Science and Engineering, A Foundation for Software Development'' (a 2021 publication by Springer).
In the primer, there is an entire chapter dedicated to philosophy and it revolves around the philosophy of Kai S\o{}rlander, partly inspired by and standing in contrast to the philosophy of Immanuel Kant.
Quite naturally, the concepts and primitives that make up the domain formalism we have looked at seem to result from this philosophy and perhaps, this is the reason why Dines Bj\o{}rner's innovative approach to taming complexity is a success.

\newpage

\section{Research-based teaching ethics to engineering students: beyond compliance and on the way of digital humanism - Gordana Dodig-Crnković}

In this talk, Gordana Dodig-Crnković touched on the emerging and increasingly important topic of digital humanism, which she remarked is centered in Vienna.
Her track record with which the talk begins is impressive and shows a remarkable kind of focus on the intersection of computer science and ethics.
Especially in such a rapidly evolving area as is computer science, constructively discussing ethics is not at all easy and sometimes even completely forsaken.

\subsection{An Approach to Teaching}

What makes this introduction unique is a focus on courses that were taught and even the underlying approach to some of these.
To characterize her approach to teaching ethics to students, in particular to students of technology, she quoted Peter Bowden, who provided a description of a similar approach:

\begin{quotation}
The course was based on the assumption that \textit{identifying the major ethical issues in the discipline, and subsequently presenting and analysing them in the classroom, would provide the future professional with knowledge of the ethical problems that they were likely to face on graduation.}
The student has then to be given the skills and knowledge to combat these concerns, should he/she wish to.
These findings feed into several components of the course, such as the code of ethics, the role of a professional society or industry association and the role of ethical theory.
The sources employed to identify the issues were surveys of the literature and case studies.
\end{quotation}

What is the rationale behind offering ethics courses to students from such a variety of different fields as computer science and engineering?
The answer is simple:
tech is becoming central to culture, or indeed everything!
As such, a premise of the talk is that ``we must work to understand technologies we build.''
Almost by definition, everything is connected to ethics, because ethical considerations can be made for and about everything we do.
One of the goals of this talk was to highlight the lessons learned in a career geared towards the difficult problems of ethics and to equip students with tools to solve ethical problems they themselves are likely to face in their field.

To identify topics with ethical relevance, the speaker's students were tasked with filling out a questionnaire.
Overall, the topics are categorizable into three groups:
\textit{technology aspects}, \textit{methodology aspects} and \textit{social aspects}.
The technology aspects are further categorizable into data-related and sustainability-related topics.
It is quite instructive to consider what the students identified; very long lists of topics were offered.

To summarize the data-related technology aspects, the identified topics were the provenance, confidentiality, privacy, quality, equality, and reproducibility of data.
There were also answers about public understanding of technology, data-driven approaches and neutrality of data.
As for the sustainability-related topics, the answers pertained to fuel economy, emissions, the environmental impact of batteries (production, use and disposal), the impact of massive electronics production, the increasing demand for rare elements, a common lack of life-cycle assessment and something called the rebound effect.
The rebound effect can be explained with a simple example: if a combustion-engine car is made more efficient, this could lead to lower emissions.
But the same improvement may induce behavioral changes that offset any gains, if as a consequence the car is used more readily, driven further or driven faster.
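The offset can be made concrete with a back-of-the-envelope calculation; all numbers below are invented purely for illustration:

```python
def yearly_emissions(grams_per_km: float, km_per_year: float) -> float:
    # Total emissions scale with both efficiency and usage.
    return grams_per_km * km_per_year

before = yearly_emissions(150.0, 10_000)            # baseline usage
# A 20% efficiency gain per kilometre...
# ...but cheaper driving induces 30% more kilometres per year:
after = yearly_emissions(150.0 * 0.8, 10_000 * 1.3)
# after > before: the behavioral change more than offsets the gain.
```

With these (hypothetical) figures, total yearly emissions rise despite the per-kilometre improvement, which is exactly the rebound effect.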

There were even more topics for the methodology aspects.
These pertain in part to possible methodological problems of the scientific process itself, such as reproducibility, open science, the review process (such as a lack of a double-blind review process), presentation of results, limits of modelling and what kinds of approaches could or should really be trusted.

Finally, there were also many social aspects to consider.
Some of the cited examples include the quality of life, codes of ethics, legal issues (such as issues related to copyright infringement), cultural diversity, gender equality, trusting machines to define culturally relevant spaces, interactions of technology with the job market, informing politics to make technology-aware decisions or even just the impact of technology on society at large.

As one can see, there are countless examples to be had, examples which can and should be examined more closely.
The talk takes a hopeful stance towards this issue by providing evidence that ethics is high on the agenda of the scientific community at large.

\subsection{A Common Trope: Unintended Consequences of Technology}

A common trope of many of the societal issues we face today due to technological innovation is that these problems come about as unintended consequences.
This means that the consequences either were not thought of in advance or were ignored.
Even when the technology in question might be just one innovation, the unintended consequences can be manifold:
if we take as an example autonomous cars, they might carry consequences related to the organization of roads, the many ways in which unfortunate accidents could occur, the psychological changes they might bring about, shifts in the car industry as a whole, as well as many more imaginable possibilities.
Other technologies that raise questions were offered as examples, such as artificial general intelligence, nano-technology, biotechnology, internet of things and other autonomous software/systems.

This problem of unintended consequences supports the goal of educating students by confronting them with the quite broad field of ethics, even if their chosen field might be entirely different.
This problem is also the basis of a methodology to deal with ethical issues in technology.
Besides teaching students, the methodology offered to avoid unintended consequences is to embed values into the process of design, engineering and research.
Values serve to encode what is important to us, or as is described in the talk:

\begin{quotation}
Values serve as a guide to action and knowledge.
They are relevant to all aspects of scientific and engineering practice, including discovery, analysis and application.
\end{quotation}

\subsection{The Value of Values}

What sort of values could be involved in the decision making and reasoning processes?
These values are categorized in the talk as ethical values, aesthetic values, epistemic values and economic values, amongst others.
More sophisticated examples of values informing the reasoning process that come to mind are \textit{simplicity} and \textit{elegance} in the field of physics.
Acclaimed physicists such as Albert Einstein have used these values as a guide to judge the quality of their work, even though they are aesthetic values, which might or might not at first glance seem relevant to problems of physics.

These values can also fill what is referred to as a policy vacuum.
Policy vacuums are cited, by way of quoting James Moor on computer ethics, as a reason that technological consequences arise.
It is argued that the new choices for action that technology (in particular computer technology) presents us with are unguided, because there is a lack of policy, or there are inadequate policies, for how to conduct ourselves in these situations.
Especially in the field of computer science and engineering, technology is evolving and emerging at so rapid a pace that regulations might not be able to keep up, so it is reasonable to turn to values instead.
But one danger of being too trusting of values is that we might fall prey to our inherent cognitive biases, which is one additional reason that a continuous dialogue on the topic might be desirable.

Arguably, the role of ethics is becoming ever more important, as technology shapes more aspects of our lives.
By now technology has already shaped the world we live in and the stakes seem bound to rise higher still.
For this reason, attention is paid to the emerging field of digital humanism, which deals with ``digital technology development and policies based on human rights, democracy, inclusion and diversity.''

The talk finished with a hopeful outlook on the future of ethics in the classroom.
Andrea Verrocchio's studio in Florence, the workshop to which the young Leonardo da Vinci was taken by his father to begin his apprenticeship, was one that supported (rather than stifled) his spirit of enthusiastic experimentation and developed other young apprentices working there, who went on to become famous for their work.
Keeping with this theme, the closing words of the presentation are as follows:

\begin{quotation}
The idealized picture of the roles of teacher and students in a research-based ethics course can be compared to the work of a renaissance art studio.
It is definitely \textbf{beyond compliance} (the action of complying with a wish or command).
\end{quotation}

\newpage

\section{A Counter-History of Artificial Intelligence: Computing factories and machinery (18th - 21st century) - Antonio Casilli}

Within this talk, Antonio Casilli gives weight to a different side of the history of artificial intelligence that arguably has been neglected.
The history of artificial intelligence is by and large a history of computing:
breakthrough upon breakthrough has expanded our capacity to calculate whatever is within the realm of our programming.
This calculative ability has been the poster child of computer science, but with the rise of artificial intelligence, a different side that is an invaluable and necessary counterpart of computation presents itself.
This counterpart is data itself, which, particularly for artificial intelligence, is where much of the intelligence comes from.
Many are familiar with the history of computing and the trajectory it has taken, but it is argued in this talk that this counterpart and its history in particular have been left out.
Thus the talk concerns itself with supplying this missing link by providing a counter-history, one that is not computing-centric but data-centric.

\subsection{Breakthroughs in Computing are Preceded By a Lot of Data}

Interestingly, the era to which this counter-history is traced back begins even before the first computers were made: an era in which computation was avoided by creating pre-computed tables for calculations that are not easily resolved by hand.
There were different tables used for different purposes, but each served to provide valuable information that can be used in specific contexts.
In this case, the data that was generated to be used for the tables is equivalent to the human labor that was necessary to generate it.
As such, the advance in the production of data led to an advance in computation, for these tables reduced the need for calculation and thus extended what could feasibly be computed.
It is argued within the talk that most breakthroughs in computation were accompanied by a lot of data.
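The economics of such tables can be sketched as: pay the calculation cost once, up front, and answer later queries by cheap lookup. The toy logarithm table below is purely illustrative (its size and usage are invented, not from the talk):

```python
import math

# The "human labor" phase: compute a table of logarithms once, up front.
log_table = {n: math.log10(n) for n in range(1, 1001)}

# Later "calculations" then reduce to cheap lookups instead of computation,
# e.g. multiplication via log(a*b) = log(a) + log(b).
def log_of_product(a: int, b: int) -> float:
    return log_table[a] + log_table[b]

approx = log_of_product(20, 30)     # close to log10(600)
```

The expensive work is done once when the table is built; every subsequent query merely reads it, which is how the historical tables traded stored data for repeated computation.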

Artificial intelligence in particular has garnered much attention and there have been multiple breakthroughs that left their impressions upon the world.
One such breakthrough was in the area of chess, where an artificial intelligence system ended up victorious over venerable chess player Garry Kasparov.
Another impression was left upon the world by IBM Watson.
More recently and perhaps most significantly, the entire world has seemingly turned its attention to ChatGPT.
Each of these operates in a different space, but the proficiency these systems show in solving the tasks they are geared towards is impressive.
What enabled this success?
By now it should already be clear: this success was enabled by a lot of data.
The chess result was the culmination of very large datasets of chess games, without which there would presumably have been no victory for artificial intelligence.
The same goes for IBM Watson, which was the result of a curation of Wikipedia tables; almost 10 million such tables constituted the knowledge base of the system.
It is also no different for ChatGPT, which was trained on a staggering amount of natural language.
It is clear that artificial intelligence is not just a computation, but a combination of a computation and a pre-computation.
This pre-computation consists of its basic building block, data.

\subsection{Machines, Human Labor and the Nature of Computation}

The talk proceeds to ponder the nature of computation by going back to the original conception of a machine.
Back when they were conceived, they were regarded as tricks, wonders or illusions.
One example would be the Mechanical Turk, a machine that supposedly played chess automatically.
It was in fact a mechanical illusion, because the machine was human-operated without the operator being visible.
But the meaning of machines changed when they were researched for other useful mechanical applications.
These new machines were meant to serve as an automatic replacement of human labor.
Their semi-autonomous deterministic operation made them entities based on calculation.

An interesting trend emerged which has set the tone, one that echoes even today.
Even though the machines were intended to be largely autonomous, the need for human operators could in many instances not be eliminated.
This trend was to consider those operators as unskilled and unsophisticated workers.
Understandably, the operation of these machines was designed and refined to be very simple, which created a need for simple human labor.
This human labor was the kind of simple computation necessary in order for the machine to fulfill its purpose.
Oftentimes, people less respected for their skills (often female and young) were employed for very low wages to do this ``unskilled'' and ``unsophisticated'' work.

Another illustrative example of how this trend emerged is provided within the talk by mentioning Gaspard de Prony, who in 1791 was assigned to produce logarithmic and trigonometric tables.
His idea was to mechanize these tedious calculations by using a scheme he had devised, a kind of computation factory.
This scheme was a way to divide labor hierarchically:
in a hierarchy resembling a tree, orders are given from the top and results are propagated from the bottom up.
At the top of the hierarchy there was usually a scientist, a well-respected and sophisticated position, who directed planners, who in turn directed the workers at the lowest level.
One can naturally see by way of looking at this hierarchy how such a prejudiced view might have come about.
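Gaspard de Prony's scheme, with orders flowing down a tree and results propagating back up, can be sketched as follows. This is a toy version in which squaring stands in for the simple, repetitive calculations actually performed by the workers:

```python
def worker(task: int) -> int:
    # Lowest level: one simple, repetitive calculation per order.
    return task * task

def planner(tasks: list) -> list:
    # Middle level: pass orders down to workers, collect their results.
    return [worker(t) for t in tasks]

def scientist(all_tasks: list, n_planners: int = 2) -> list:
    # Top level: split the workload among planners, aggregate bottom-up.
    size = len(all_tasks) // n_planners
    results = []
    for i in range(n_planners):
        results.extend(planner(all_tasks[i * size:(i + 1) * size]))
    return results

table = scientist([1, 2, 3, 4])
```

The scientist never performs the elementary calculations; the structure itself concentrates the "sophisticated" work at the top and the "unskilled" work at the bottom, which is how the prejudiced view of the workers could take hold.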

\subsection{The Rise of Piece-Work}

In fact, one can see the same kind of hierarchy centuries later still, by looking at the concerted British effort to decrypt messages sent by Nazi Germany.
Many people are familiar with Alan Turing and his occupation during World War II, where he famously computerized this task of acquiring intelligence about the enemy.
The codebreakers' machine was named the Colossus computer, but its actual purpose was not true decryption; rather, it pre-processed encrypted messages into a different form that would then be decoded by human labor.
Who was tasked with this labor?
It was delegated to ``the girls''.

It is no different today; in fact, a flourishing industry of piece-work has formed over the years.
This piece-work is performed cheaply, particularly in developing countries, where poorer people have taken it up as an alternative source of income.
Such piece-work consists of relatively self-contained, not overly complex tasks such as the labeling of data, which has become especially relevant today.
Cleanly labeled data is paramount for many kinds of artificial intelligence systems, whether an autonomous driving system, a face recognition system, or any other such system.
But there is also a need for the creation, aggregation, and grouping of training data.
In general, whenever there is bookkeeping to be done that cannot be entirely (or reliably) automated, we fall back on human labor, which is subsequently delegated.

Besides the labeling of data, there is the area of content moderation, which today's very large online platforms cannot do without.
Fully automating content moderation has proven controversial and ineffective, so we fall back on human labor here as well.
Because content moderation requires a human perspective, it seems that such labor becomes necessary whenever a computation must be guided in a human direction.
The same can be argued for the labeling of data, where the intention is to imbue the artificial intelligence system with human values.

In the talk, Gaspard de Prony's tree-like hierarchy for the division of labor is revisited and amended to reflect the introduction of general-purpose computers.
The machines are inserted as a new layer, but where does this layer sit?
Where before a scientist directed planners, who directed workers, there is now still a scientist directing planners, who direct general-purpose computers, which in turn direct piece-workers.
This view holds because the piece-workers are performing a computation that the general-purpose computer needs but cannot do itself.

Where is this piece-work orchestrated?
The most prominent example mentioned in the talk is Amazon Mechanical Turk (MTurk), just one (large) crowdsourcing website for the (cheap) remote hire of on-demand tasks.
Will we ever be able to do without such platforms?
Perhaps as a society we should be more cognizant of them, their impacts, and whether we truly want them.
One sign that we may not want to perpetuate this piece-work industry is its reminiscence of colonial times, as argued in the talk:
the countries that provide the most labor for any particular Western country are those with which it has historically had colonial relations.
This may be due to economic dependencies or simply a kind of historical inertia.
Another interesting question posed is whether the work is considered unskilled because it is poorly paid, or poorly paid because it is considered unskilled.
But this is a complex area best left to the social sciences.

\subsection{Will All Human Labor Eventually Be Automated?}

An interesting question posed by the talk is whether we will ever be able to do without such platforms.
Ultimately, it boils down to whether this bedrock of human labor, which always seems to become necessary to operate our machines, is automatable (or automatically computable) or not.
Historically, Ludwig Wittgenstein and Alan Turing debated this very question.
According to the talk, Turing was a proponent of the view that such automation is possible, while Wittgenstein took the opposing view.
We will just have to see.

\newpage

\section{Conclusion}

We have examined several very interesting talks in considerable detail.
All of these talks pertain to topics that our society struggles with, whether it be designing systems that fulfill our wishes and cause no harm, deciding how to regard computation, or simply finding the right place for the gadgets we receive after opening yet another Pandora's box.

Even though it might not seem so at first glance, each of these talks is arguably connected, not just by the common theme of computers and computation, but also through our intentions as a society.
In the end, it is important to keep asking these questions, and to try to answer them by creating precise and formal formulations of our problems, so that we can design ever better systems and a better society.

\end{document}
