Author: Pereira, Gabriel
Date available: 2024-03-01
Date issued: 2021
Publisher URL: https://www.degruyter.com/document/doi/10.14361/dcs-2021-070210/html
Repository URL: https://mediarep.org/handle/doc/23223

Abstract: In 2016, Apple introduced for the first time what it called "advanced computer vision" to organise and curate users' images. The key selling point for Apple Memories was that all computation would happen inside the user's device, relying on the privacy afforded by Apple's widely used smartphone, the iPhone. This article offers a case study of Apple Memories and its automated memory-making, focusing on three dimensions: the vision of Apple Memories; how this vision gets infrastructured through the A11 Bionic chip; and how Apple Memories engages users in automated memory-making. This analysis raises important questions regarding privacy and surveillance capitalism as, even if operating on-device, Apple Memories still relies on the datafication of the personal archive via the automation of image analysis (computer vision) and personalisation. Building upon Mackenzie and Munster's (2019) notion of "platform seeing", I argue that control over the networked image today goes beyond data brokering for behavioural analysis and advertising. Apple Memories' framing of computer vision as an intimate, always-on and personal way of remembering is part of a wider goal of exploiting personal data to bolster user engagement, generate even more data, and ultimately accumulate infrastructural power across Apple's "walled garden" digital ecosystem.

Language: eng
License: Creative Commons Attribution Non Commercial No Derivatives 4.0 Generic
Keywords: Surveillance Capitalism; Infrastructure; Computer Vision; Networked Image; Platform Vision
DDC: 700; 300
Title: Apple Memories and Automated Memory-Making: The Networked Image Inside the iPhone Chip
DOI: 10.14361/dcs-2021-070210
DOI (mediarep): 10.25969/mediarep/21864
ISSN: 2364-2114