Find an automated process to create infographics that capture each Art Section. Following the same direction as the code that transforms PDFs into images, it would be great to develop something that automatically creates images containing general and specific information about each section, as a kind of simplified summary of what is presented in Seccion Arte. For example, something that includes the root and folder structures with file counts, specific texts from the presented project, and a selection of images; also the creation dates, sizes and formats of each file. I don’t know, this kind of data. All in one graphic that could exist in PDF and image formats. I attach some visual examples of my idea. It would be like transforming a folder structure into a two-dimensional chart.
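As a sketch of how the automated part could start, here is a minimal Python pass that walks a section’s folder tree and collects the raw data such an infographic would need: file counts per folder, weights, and a tally of formats. The folder names in the demo are invented, and rendering the actual chart (e.g. with matplotlib into PDF/PNG) would be a separate step not shown here.

```python
import json
import os
from collections import Counter
from pathlib import Path

def summarize_section(root: Path) -> dict:
    """Walk one Art Section folder tree and gather the data an
    infographic would need: per-folder file counts and weight (bytes),
    plus a tally of file formats across the whole section."""
    summary = {"root": root.name, "folders": [], "formats": Counter(), "total_bytes": 0}
    for dirpath, _dirnames, filenames in os.walk(root):
        folder = {"path": os.path.relpath(dirpath, root), "files": len(filenames), "bytes": 0}
        for name in filenames:
            size = (Path(dirpath) / name).stat().st_size
            # st_mtime / st_ctime from the same stat() call could feed
            # the "dates of creation" field of the chart.
            folder["bytes"] += size
            summary["total_bytes"] += size
            summary["formats"][Path(name).suffix.lower() or "(none)"] += 1
        summary["folders"].append(folder)
    summary["formats"] = dict(summary["formats"])
    return summary

if __name__ == "__main__":
    # Demo with a hypothetical section layout (names invented for illustration).
    import tempfile
    base = Path(tempfile.mkdtemp()) / "Seccion Arte"
    (base / "01 Proyecto" / "imagenes").mkdir(parents=True)
    (base / "01 Proyecto" / "texto.pdf").write_bytes(b"%PDF-1.4 demo")
    (base / "01 Proyecto" / "imagenes" / "obra.jpg").write_bytes(b"\xff\xd8 demo")
    print(json.dumps(summarize_section(base), indent=2))
```

The returned dictionary is exactly the “folder structure as data” idea: it could be serialized alongside the section or handed to a plotting step that lays it out as a two-dimensional chart.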
The second idea would be to find an ethically correct way to upload content that is native to offline ecosystems to the web. In this direction I have been doing some research, and I love the idea of the Virtual Remote Desktop. It is something used a lot in these pandemic times to let workers access their office PCs, but I also found that this system has endless configuration options. The ideal here would be to set up a PC in Cuba with the Art Sections and give any user access to the information, with the administrative limitation that clients cannot delete, copy or move the files; they would only be able to view the contents of the folders. At this point I think it would be very much in line with all our ideas, but there is a serious problem with the internet connection in Cuba, so this idea would be practically impossible to develop from an online PC in Havana. The other option would be to use a server where this virtual operating system could be hosted, but that creates another problem: which server? I don’t think we should use commercial servers for this; besides, Seccion Arte shares pirated materials such as books and documentaries, so a commercial server would eventually give us problems on that front. This is what we should analyse. I understand that, for example, The Pirate Bay has its servers with the Pirate Party in Europe, which allows it to continue to exist online; maybe it would be very interesting to investigate this a bit and see if it is possible to present projects for those servers. Something like this would be a great conceptual and practical solution for the project.
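On the restriction that viewers can browse but not delete or modify: part of it could be enforced at the filesystem level underneath whatever remote-desktop system is chosen. A minimal Python sketch, assuming a Unix host; the function name is ours, and blocking copying would still have to happen in the remote-desktop configuration itself, since filesystem permissions alone cannot prevent it.

```python
import os
import stat
from pathlib import Path

# Write-permission bits for owner, group and others.
WRITE_BITS = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH

def make_view_only(root: Path) -> None:
    """Clear the write bits on every file and folder under root, so a
    remote viewer can browse and open the content but not delete,
    rename or modify it. (Copying is not prevented here; that belongs
    to the remote-desktop layer.)"""
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames + dirnames:
            p = Path(dirpath) / name
            p.chmod(p.stat().st_mode & ~WRITE_BITS)
    root.chmod(root.stat().st_mode & ~WRITE_BITS)
```

This is only one layer of the setup the paragraph imagines; user accounts and session policy on the virtual desktop would do the rest.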
Question: how do we balance producing something useful for the Paquete with keeping in line with our research objective (the tracing)?
*What are we tracing?
– the users (example: the work of Nestor with Julia)
– the networked image (i.e. the pdf and its various elements, the deconstruction of a page); the composition of the networked image
– the Paquete as a complex object with its own curatorial intelligence (see Nestor’s idea #1)
Trace through time and not only through space. Unpack the Groys quote, as it implies an assumption about a specific infrastructure: ‘every act of seeing an image or reading a text on the Paquete is not registered and becomes untraceable.’ Or at least not traceable with the same methods. Example of the work with Julia: use “pre/non social media” methods to do surveys.
De-linking: a web page is cut off from its networked attachments to become a pdf or a page that can be read offline. Every active link to other pages becomes “image”. Notion of caching. But there is also a re-linking when the cached pages enter the Paquete and circulate through offline networks in Cuba. What kind of network enters the Paquete? Is it useful to talk about a process of re-mediation?
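The de-linking of hyperlinks can be illustrated mechanically. A toy sketch (stdlib only, not the project’s actual PDF pipeline) that flattens a page’s anchors so each link survives only as inert text, its networked attachment cut off:

```python
from html.parser import HTMLParser

class DeLinker(HTMLParser):
    """Rewrite HTML so <a href=...> disappears: the link's label
    survives as plain text, the href (the networked attachment) does not."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            return  # drop the anchor; only its text content survives
        rendered = "".join(f' {k}="{v}"' for k, v in attrs)
        self.out.append(f"<{tag}{rendered}>")

    def handle_endtag(self, tag):
        if tag != "a":
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def delink(html: str) -> str:
    parser = DeLinker()
    parser.feed(html)
    return "".join(parser.out)
```

For example, `delink('<p>See <a href="http://x">this page</a>.</p>')` keeps the words “this page” but no longer points anywhere: the active link has become mere surface, which is the de-linking move in miniature.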
*Notion of ‘points of access’. Cuba vs our side of the world: unstable connection vs content that is not legally accessible. How to create a point of access to the Paquete rather than an archived copy?
Can thinking about error help us? Error: build something from this problem (this image is not available in your country)
Projects reviewed:
– Ben Grosser, Tracing You > https://bengrosser.com/projects/tracing-you/ (interesting but mono-dimensional: you = your IP)
– 0xDB / Padma: https://0xdb.org
2 approaches as ways forward:
1- solutionist approach
2- meta commentary on the project / meta-level / translation of the project *remediation*
#layers structure of the project – potential map: file / Paquete / network
*importance of meta-data. When a file is cut off from its networked attachments, it becomes hard to “preview” it, to get a sense of what it contains. In the Paquete, folder names and file structure are used to compensate for this lack. This is a problem common to file sharing in general, and 0xdb is an example of trying to generate metadata for files that travel through pirate networks without it. Thinking about the resurfacing of the Paquete in online networks: what matters is not so much the files themselves, since they are already online; it is much more the selection, the relational structure of the Paquete, that may be interesting for online users.
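As a concrete version of this metadata problem, here is a minimal sketch of regenerating a basic record for a file travelling without context. The fields chosen are our assumption of a useful minimum (identity, weight, guessed format); 0xdb itself goes much further, matching files against external databases.

```python
import hashlib
import mimetypes
from pathlib import Path

def regenerate_metadata(path: Path) -> dict:
    """Rebuild a minimal metadata record for a file that has been cut
    off from its networked context: a stable identity (content hash),
    its weight in bytes, and a format guessed from the filename."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    mime, _encoding = mimetypes.guess_type(path.name)
    return {
        "name": path.name,
        "sha256": digest,
        "bytes": path.stat().st_size,
        "mime": mime or "application/octet-stream",
    }
```

The content hash is the interesting part for the Paquete: two copies with different folder names still resolve to the same identity, which is what would let the relational structure of the Paquete be mapped back onto files that already exist online.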
Where are we and what have we been doing until now?
Tracing and understanding better how files that live on the Internet are integrated into the Paquete: what this entails, and how the move from online to offline transforms the documents. This is what we have called de-linking: cutting the internet document off from its attachments and giving it a portable form, the pdf.
This is not just a technical operation; it is a move from one network economy to another. Users’ behaviour cannot be traced in the same way. The juridical status of the documents changes as they enter the Paquete. The commercial structure that supports the sharing of these documents is not that of the web economy but the matrices and paqueteros.
Now the question is how can the Seccion Arte resurface online? What is the relevant form for this?