Cosmic Strings

cosmicstrings.online

tl;dr: A collage is algorithmically assembled from random public domain images, then interpreted and discussed by AI. I built it while contemplating questions of tradition vs. novelty and convention vs. individuality, the need for clarity in these hectic times, and the role of AI in my life (including its tendency to give an answer at all costs).

I used p5.js to iterate towards the simplest algorithm that produces aesthetically pleasing results. The front-end is vibe-coded on Lovable, the back-end is a mix of Supabase and Google Cloud Functions and Buckets, and the app is hosted on Vercel.

Statement

Public domain imagery, randomised, algorithmically composed and AI-interpreted to reveal personal insights.

Unlike traditional divination systems with fixed visuals, each collage functions as an emotional fingerprint—unrepeatable and intimately personal. AI serves not as an oracle delivering absolute truths, but as an intuitive mirror helping decode meanings that resonate within randomness.

The project explores our evolving relationship with artificial intelligence as we move beyond algorithmic certainty. Like finding shapes in clouds or constellations in stars, it suggests that authentic insights arise from the interplay between shared knowledge and personal context—but only when embracing uncertainty.

Can AI help us trust our intuitive wisdom by reflecting, rather than directing, our inner truth?

Do we inherently trust tradition's refined imagery more than novel perspectives that personalisation offers?

This experience is where technology, randomness and intuition meet—where digital chaos transforms into personal meaning, inviting dialogue with the deeper self through a letter from the cosmic subconscious.

Choices Made

I understood from the beginning that my idea had many moving parts of varying complexity (AI tarot-like readings and chat: easy; front-end: medium; collage-building: heaviest), so my goal was to equalise the complexity of the parts and aim for an end-to-end user experience. To achieve that, I aimed to simplify the collage-building part while keeping the aesthetic quality of the outputs at a certain level. Below are the choices I made along the way.

Prepared randomness - images are pre-downloaded, cleaned up and tightly cropped in batches locally on my machine using quick Python scripts, so that this step doesn't have to live in the application code. Images are sorted by type and placed in respective folders, which is a simplistic take on labelling (see Aspirations below).
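For illustration, a local batch step along these lines could look roughly like this Pillow-based sketch (the folder names, trim heuristic and filename convention are placeholders, not my actual script):

```python
# Illustrative batch step: trim the border around each image and sort by type folder.
# Assumes Pillow is installed; paths and naming convention are placeholders.
from pathlib import Path
from PIL import Image, ImageChops

SRC = Path("raw_downloads")   # unsorted public domain images
DST = Path("library")         # output, one folder per element type

def tight_crop(img: Image.Image) -> Image.Image:
    """Crop away the uniform border by diffing against the corner colour."""
    bg = Image.new(img.mode, img.size, img.getpixel((0, 0)))
    bbox = ImageChops.difference(img, bg).getbbox()
    return img.crop(bbox) if bbox else img

for path in SRC.glob("*.png"):
    img = Image.open(path).convert("RGBA")
    cropped = tight_crop(img)
    # "Labelling" is just the destination folder, derived here from a filename
    # prefix such as "flower__0012.png" (an assumed convention).
    element_type = path.stem.split("__")[0]
    out_dir = DST / element_type
    out_dir.mkdir(parents=True, exist_ok=True)
    cropped.save(out_dir / path.name)
```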

(random placement, forces of attraction, rule-based & labelling, final version)

Algorithmic aesthetic - I explored pure random placement, then went extremely heavy with physics/forces libraries to get repeatable results I liked, yet ended up somewhere in the middle: the rule of thirds and the relative placement of elements to one another are applied first, then elements are sorted by their desired layer before being drawn.
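The generator itself lives in p5.js, but the gist of the rule-based placement can be sketched in Python (anchor points, jitter amounts and layer names below are illustrative, not the production values):

```python
# Sketch of the rule-based placement: anchor elements near rule-of-thirds
# points with some jitter, then sort by layer before drawing.
import random

W, H = 1200, 1600
THIRDS = [(W * x, H * y) for x in (1/3, 2/3) for y in (1/3, 2/3)]
LAYER_ORDER = {"background": 0, "medallion": 1, "cutout": 2, "accent": 3}

def place(elements):
    placed = []
    for el in elements:
        ax, ay = random.choice(THIRDS)
        placed.append({
            **el,
            "x": ax + random.uniform(-W * 0.08, W * 0.08),  # jitter around the anchor
            "y": ay + random.uniform(-H * 0.08, H * 0.08),
            "rotation": random.uniform(-15, 15),
        })
    # Draw order: lowest layer first, so accents end up on top.
    return sorted(placed, key=lambda e: LAYER_ORDER[e["type"]])

elements = [{"type": "background"}, {"type": "medallion"}, {"type": "cutout"}, {"type": "accent"}]
for e in place(elements):
    print(e["type"], round(e["x"]), round(e["y"]))
```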

Headless logic - collage generation, conversion to base64, AI prompting and AI chat all happen on the back-end, actually on two different back-ends (AI chat runs on Supabase), which should be an easy fix (move everything to Supabase) once my Google Cloud credits expire. As this was my first time touching front-end work, I found integrating p5.js quite cumbersome and opted for the known path of cloud functions and storing assets in buckets.
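Roughly, the headless part boils down to something like the sketch below. The google-cloud-storage calls are the standard client API, but the bucket name is a placeholder and the reading call is left as a stub, since the actual prompt lives in the cloud function:

```python
# Sketch of the headless flow: store the rendered collage in a bucket and
# hand a base64 payload to the reading prompt. Bucket name and the
# request_reading() helper are hypothetical placeholders.
import base64
from google.cloud import storage  # standard Google Cloud Storage client

BUCKET = "cosmic-strings-collages"  # placeholder name

def publish_collage(png_bytes: bytes, collage_id: str) -> str:
    """Upload the PNG to a bucket and return its base64 form for the AI prompt."""
    client = storage.Client()
    blob = client.bucket(BUCKET).blob(f"{collage_id}.png")
    blob.upload_from_string(png_bytes, content_type="image/png")
    return base64.b64encode(png_bytes).decode("ascii")

def request_reading(collage_b64: str) -> str:
    # Placeholder for the actual AI call; in the project this happens in a
    # Google Cloud Function, while the follow-up chat runs on Supabase.
    raise NotImplementedError
```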

What I’ve learned

(images used for cut-outs & medallions)

Knowledge + AI >> Just AI - while I used AI to generate some parts of my code (my usual approach is to supply pseudo-code, or ask AI for pseudo-code outlining the project structure first), I found it harder to collaborate while building the front-end, mainly because I initially struggled to read the code and assess whether I was getting what I wanted. It was a breeze in the areas where I had prior experience. My favourite part was co-writing a Python script to cut round images out of an old book and getting 100+ images in less than 5 minutes.
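For context, a round cut-out pass in that spirit could be sketched with OpenCV along these lines (this is not the co-written script; the Hough parameters are rough guesses that would need tuning per scan):

```python
# Illustrative round cut-out pass: detect circles on a scanned page and save
# each one as a transparent PNG. Parameters are guesses, not tuned values.
import cv2
import numpy as np

page = cv2.imread("old_book_page.jpg")
gray = cv2.medianBlur(cv2.cvtColor(page, cv2.COLOR_BGR2GRAY), 5)

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=150,
    param1=100, param2=60, minRadius=60, maxRadius=300,
)

if circles is not None:
    for i, (x, y, r) in enumerate(np.round(circles[0]).astype(int)):
        # Build an alpha mask so everything outside the circle is transparent.
        mask = np.zeros(page.shape[:2], dtype=np.uint8)
        cv2.circle(mask, (x, y), r, 255, thickness=-1)
        cut = cv2.cvtColor(page, cv2.COLOR_BGR2BGRA)
        cut[:, :, 3] = mask
        crop = cut[max(y - r, 0):y + r, max(x - r, 0):x + r]
        cv2.imwrite(f"medallion_{i:03d}.png", crop)
```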

Critical path >> Perfection - I built the bare bones of the logic quickly in Processing: converting an existing collage photo to base64 and sending it to the AI to get a reading. I separately experimented with chat-bot functionality on Lovable. Prototyping the collage logic was chaotic, as I leapt from random placement into the complex world of physics-like methods in Processing, then backtracked quite a bit to land on a pure rule-based approach and folder-based labelling, since I didn't want to invest in non-deterministic logic yet.

Delight >> Precision - surprisingly, it took me some time to refine the AI prompt, first to balance the delivery style (referencing one of the tarot schools was the key here) and later to handle integration with the front-end (apparently JSON is always the answer).
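The "JSON is always the answer" lesson, as a rough sketch (the field names, prompt text and parsing fallback are illustrative, not the production prompt):

```python
# Sketch of the pattern: ask the model for a strict JSON object and parse it
# defensively so the front-end always gets something it can render.
import json

READING_PROMPT = """You are a collage reader inspired by a traditional tarot school.
Look at the attached collage and respond ONLY with a JSON object of the form:
{"title": str, "reading": str, "question_for_reflection": str}"""

def parse_reading(raw_response: str) -> dict:
    """Tolerate models that wrap JSON in markdown fences or add extra prose."""
    text = raw_response.strip().removeprefix("```json").removeprefix("```").removesuffix("```")
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to a plain-text reading so the UI never breaks on bad output.
        return {"title": "Untitled", "reading": raw_response, "question_for_reflection": ""}
```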

Googly eyes >> Critical path - thankfully it's difficult to judge how many precious Lovable messages I spent on building the (absolutely optional, heavy as hell) background animation, especially the stars with googly eyes (an homage to ), yet it distracted me from my goal for quite some time. From now on I'll be prototyping all animations in Processing and then giving Lovable very precise instructions instead of winging it.

Aspirations

Non-deterministic aesthetic - I'd like to explore both LoRA fine-tuning and training my own model, on my own collages as well as on public domain imagery and classical paintings, curious about achieving more compositional diversity.

Scalability & cost - while not contributing to the idea directly, I'm curious to learn what's available to decrease the cost of AI (especially for the chat-bot part) and how to handle the back-end more efficiently.