4(1) 2018: Rethinking AI
- Article: On the Media-political Dimension of Artificial Intelligence: Deep Learning as a Black Box and OpenAI
  Sudmann, Andreas (2018), S. 181-200
  The essay critically investigates the media-political dimension of modern AI technology. Rather than examining the political aspects of certain AI-driven applications, the paper focuses on the political implications of AI’s technological infrastructure, especially the machine learning approach that since around 2006 has been called Deep Learning (also known as the simulation of Artificial Neural Networks). Firstly, the paper discusses to what extent Deep Learning is a fundamentally opaque black-box technology, only partially accessible to human understanding. Secondly, and in relation to the first question, the essay takes a critical look at the agenda and activities of the research company OpenAI, which supposedly aims to promote the democratization of AI and tries to make technologies like Deep Learning more accessible and transparent.
- Article: Pervasive Intelligence: The Tempo-Spatiality of Drone Swarms
  Vehlken, Sebastian (2018), S. 107-131
  This article situates collective or swarm robotics (SR) on a conceptual plane that, on the one hand, sheds light on the peculiar form of AI at play in such systems and, on the other, considers possible consequences of a widespread use of SR, with a focus on swarms of Unmanned Aerial Systems (Swarm UAS). The leading hypothesis of the article is that swarm robotics creates a multifold “spatial intelligence”, ranging from the dynamic morphologies of such collectives via their robust self-organization in changing environments to representations of these environments as distributed 4D sensor systems. As is shown on the basis of several generative examples from the field of UAS, robot swarms are imagined to literally penetrate space and control it. In contrast to classical forms of surveillance or even “sousveillance”, this procedure could be called perveillance.
- Article: Voices from the Uncanny Valley: How Robots and Artificial Intelligences Talk Back to Us
  Männistö-Funk, Tiina; Sihvonen, Tanja (2018), S. 45-64
  Voice is a powerful tool of agency – for humans and non-humans alike. In this article, we trace the long history from talking heads and statues to publicly displayed robots and fortune-tellers, as well as consumer-oriented products such as the late 19th-century talking dolls of Thomas Edison. We also analyse the attempts made on various occasions to render speaking machines commercially successful. Finally, we investigate how speech-producing devices such as the digital assistants that operate our current technological systems fit into this historical context. Our focus is on the gender aspects of the artificial, posthuman voice. On the basis of our study, we conclude that the female voice and other feminine characteristics, as well as the figures of exoticized and racialized ‘Others’, have been applied to draw attention away from the uncanniness and other negative effects of these artificial humans and the machinic speech they produce. Technical problems associated with the commercialization of technologically produced speech have been considerable, but cultural issues have played an equally important role.
- Article: Educational AI: A Critical Exploration of Layers of Production and Productivity
  Krämer, Franz (2018), S. 67-85
  With regard to possible implications for teaching and learning, the article explores the production and productive effects of educational AI from sociology-of-knowledge and sociology-of-technology perspectives, from three sides: Firstly, the role of knowledge (re-)construction in the creation of educational AI is investigated. In this context, in contrast to engineering-oriented approaches, educational AI systems are conceptualised as agentic entities infused with tacit and explicit knowledge about sociality and education, and as potentially reshaping both educational practices and scientific concepts. Looking at promotional and engineering-oriented AI discourses, the article secondly examines how education and AI are linked and how the knowledge pervasion of educational AI is addressed. Findings indicate that the discursive production of educational AI rests on the interwoven assumptions that education, and specifically lifelong learning, is obliged and able to remedy large-scale societal challenges, and that educational AI can leverage this potential. They also indicate that an educational AI system’s knowledge is deemed a reflection of explicit (expert) knowledge that, in the form of rationales, can in turn be reflected back to the systems’ users. Thirdly, regarding the challenges arising for the sensitive area of education, educational AI’s role in knowledge-gathering practices, both in educational research and in big educational data analysis, is addressed.
- Article: The Coming Political: Challenges of Artificial Intelligence
  Gregg, Benjamin (2018), S. 157-180
  Intelligence is the human being’s most striking feature, yet there is no consensually held scientific understanding of it. The term is no less indeterminate in the sphere of artificial intelligence; definitions are fluid in both cases. But technical applications and biotechnical developments do not wait for scientific clarity and definitional precision. The near future will bring significant advances in technical and biotechnical areas, including the genetic enhancement of human intelligence (HI) as well as artificial intelligence (AI). I show how developments in both areas will challenge human communities in various ways and that the danger of AI is distinctly political. The argument develops in six steps. (1) I compare and contrast artificial with human intelligence in general and (2) AI with genetically modified HI. Then I correlate and differentiate (3) emergent properties and distributed intelligence, both natural and artificial, as well as (4) neural function, both natural and artificial. (5) Finally, I identify the specifically political capabilities I see in HI and (6) the political dangers that AI poses to them.
- Article: Visual Tactics Toward an Ethical Debugging
  Griffiths, Catherine (2018), S. 217-226
  To advance design research into a critical study of artificially intelligent algorithms, strategies from the fields of critical code studies and data visualisation are combined to propose a methodology of computational visualisation. By opening the algorithmic black box to think through the meaning created by structure and process, computational visualisation seeks to elucidate the complexity and obfuscation at the heart of artificial intelligence systems. Rising ethical dilemmas are a consequence of the use of machine learning algorithms in socially sensitive spaces, such as in determining criminal sentencing, job performance, or access to welfare. This is in part due to the lack of a theoretical framework for understanding how and why decisions are made at the algorithmic level. The ethical implications are becoming more severe as such algorithmic decision-making is given higher authority while a blind spot remains about where and how biases arise. Computational visualisation, as a method, explores how contemporary visual design tactics, including generative design and interaction design, can intersect with a critical exegesis of algorithms to challenge the black box and obfuscation of machine learning and work toward an ethical debugging of biases in such systems.
- Article: Where the Sun never Shines: Emerging Paradigms of Post-enlightened Cognition
  Bruder, Johannes (2018), S. 133-153
  In this paper, I elaborate on deliberations about “post-enlightened cognition” between cognitive neuroscience, psychology and artificial intelligence research. I show how the design of machine learning algorithms is entangled with research on creativity and pathology in cognitive neuroscience and psychology through an interest in “episodic memory” and various forms of “spontaneous thought”. The most prominent forms of spontaneous thought – mind wandering and daydreaming – appear when the demands of the environment abate, and have long been stigmatized as signs of distraction or regarded as potentially pathological. Recent research in cognitive neuroscience, however, conceptualizes spontaneous thought as serving purposes such as creative problem solving and hence invokes older discussions about the links between creativity and pathology. I discuss how attendant attempts at differentiating creative cognition from its pathological forms in contemporary psychology, cognitive neuroscience, and AI put traditional understandings of rationality into question.
- Article: Can We Think Without Categories?
  Manovich, Lev (2018), S. 17-27
  In this article, methods developed for what I call “Media Analytics” are contextualized, put into a historical framework, and discussed with regard to their relevance for “Cultural Analytics”. Large-scale analysis of media and interactions enables NGOs, small and big businesses, scientific research and civic media to create insight and information about various cultural phenomena. These methods provide quantitative analytical data about aspects of digital culture and are instrumental in designing procedural components of digital applications such as search, recommendations, and contextual advertising. A survey of key texts and propositions from 1830 to the present sketches the development of the “Data Society’s Mind”. I propose that even though Cultural Analytics research uses dozens of algorithms, behind them lies a small number of fundamental paradigms. We can think of them as types of data society’s and AI society’s cognition. The three most general paradigmatic approaches are data visualization, unsupervised machine learning, and supervised machine learning. I also discuss important challenges for Cultural Analytics research: now that we have very large cultural data available, and our computers can do complex analysis quite quickly, how shall we look at culture? Do we only use computational methods to provide better answers to questions already established in the 19th- and 20th-century humanities paradigms, or do these methods allow fundamentally different new concepts?
- Article: Automated State of Play: Rethinking Anthropocentric Rules of the Game
  Fizek, Sonia (2018), S. 201-214
  Automation of play has become an ever more noticeable phenomenon in the domain of video games, expressed in self-playing game worlds, self-acting characters, and non-human agents traversing multiplayer spaces. This article proposes to look at AI-driven non-human play and, consequently, to rethink digital games in light of their cybernetic nature, thus departing from the anthropocentric perspectives dominating the field of Game Studies. A decentralised posthumanist reading, the author argues, not only allows us to rethink digital games and play, but is a necessary condition for critically reflecting on AI, which, due to the fictional character of video games, often plays by very different rules than so-called “true” AI.
- Article: Introduction: Rethinking AI. Neural Networks, Biometrics and the New Artificial Intelligence
  Fuchs, Mathias; Reichert, Ramón (2018), S. 5-13
- Article: Unconventional Classifiers and Anti-social Machine Intelligences: Artists Creating Spaces of Contestation and Sensibilities of Difference Across Human-Machine Networks
  Monin, Monica (2018), S. 227-237
  Artificial intelligence technologies, and the data structures required for training them, have become more accessible in recent years, enabling artists to incorporate these technologies into their works to various ends. This paper is concerned with the ways in which present-day artists are engaging with artificial intelligence, specifically material practices that endeavour to use these technologies and their potential non-human agencies as collaborators, with objectives that differ from those of commercial fields. The intentions behind artists’ use of artificial intelligence are varied. With the accelerating assimilation of artificial intelligence technologies into everyday life, many works follow a critical path, such as attempting to unveil how artificial intelligence materially works and is embodied, or critically working through the potential future adoptions of artificial intelligence technologies into everyday life. However, I diverge from unpacking the criticality of these works and instead follow Bruno Latour’s suggestion to consider their composition. For Latour, critique implies the capacity to discover a ‘truer’ understanding of reality, whereas composition addresses immanence: how things come together and the emergence of experience. Central to this paper are works that seek to collaborate with artificial intelligence, and to use it to drift out of, rather than to affirm or mimic, human agency. This goes beyond techniques such as ‘style transfer’, which is seen to support and encode existing human biases or patterns in data. Collaborating with artificial intelligence signifies a recognition of a wider field of what constitutes the activity of artistic composition beyond a singularly human, or AI, act, one in which composition can be situated in a system. The paper looks at how this approach allows an artist to consider the emerging materiality of a system they are composing, its resistances and potentials, and the possibilities afforded by the exchange between human and machine intentions in co-composition.
- Article: Competing Visions for AI: Turing, Licklider and Generative Literature
  Schwartz, Oscar (2018), S. 87-105
  In this paper, I investigate how two competing visions of machine intelligence put forward by Alan Turing and J. C. R. Licklider – one emphasizing automation, the other augmentation – have informed experiments in computational creativity, from early attempts at computer-generated art and poetry in the 1960s up to recent experiments that utilise machine learning to generate paintings and music. I argue that while our technological capacities have changed, the foundational conflict between Turing’s vision and Licklider’s plays itself out in generations of programmers and artists who explore the computer’s creative potential. Moreover, I demonstrate that this conflict not only informs technical and artistic practice, but speaks to a deeper philosophical and ideological divide concerning the narrative of a post-human future. While Turing’s conception of human-equivalent AI informs a transhumanist imaginary of super-intelligent, conscious, anthropomorphic machines, Licklider’s vision of symbiosis underpins formulations of the cyborg as human-machine hybrid, aligning more closely with a critical post-human imaginary in which boundaries between the human and the technological become mutable and open to re-negotiation. I explore how one function of computational creativity is to highlight, emphasise and sometimes thematise these conflicting post-human imaginaries.
- Article: Secret Agents: A Psychoanalytic Critique of Artificial Intelligence and Machine Learning
  Apprich, Clemens (2018), S. 29-44
  “Good Old-Fashioned Artificial Intelligence” (GOFAI), which was based on a symbolic information-processing model of the mind, has been superseded by neural-network models for describing and creating intelligence. Rather than building a symbolic representation of the world, the idea is to mimic the structure of the brain in electronic form, whereby artificial neurons draw their own connections during a self-learning process. Critiquing such a brain-physiological model, the following article takes up the idea of a “psychoanalysis of things” and applies it to artificial intelligence and machine learning. This approach may help to reveal some of the hidden layers within the current AI debate and hints at a central mechanism in the psycho-economy of our socio-technological world: the question of “Who speaks?”, central to the analysis of paranoia, becomes paramount at a time when algorithms, in the form of artificial neural networks, operate more and more as secret agents.