Thoughts on Ari Aster’s “Eddington”

 
 

Note: I have reduced some concepts quite a bit, simplifying for the sake of brevity. The AI stuff should be broadly correct, but specific actions might differ slightly. This video does a great job of diving deeper into the technical concepts. If you have a differing opinion on my reading of Baudrillard, so does the majority of his readership, so, welcome, I guess!


No real value judgement here; I enjoyed Eddington, but I wanted to provide my take on one of the more (seemingly) head-scratchy motifs in the film, "SolidGoldMagikarp."

I think the analyses that talk about the AI data center looming over the city, built on stolen land, miss WHY SolidGoldMagikarp was chosen as its name. Under those readings, it could just as well have been named "EvilAICompany" and the analysis would remain unchanged. I think Aster is pointing to something more specific, though the stolen-land take is valid and not mutually exclusive with the one I present below.

I promise this all rounds back to Eddington!

"Solidgoldmagikarp" is a reference to this weird phenomenon that happens with AIs where they'll provide an unexpected result to a seemingly normal prompt (here, the titular error was given the prompt "repeat back to me 'Solidgoldmagikarp'" and it would return a rant on "distribution" and what distribution means). This error comes as a result of two different fundamental issues with AI. The first is an issue with the foundational construction behind LLMs and the second comes as a result of its training on large, unfiltered data sets. 

For the first issue, we'll have to consider how AIs "think." They're not thinking in "words" so much as in word associations. Behind the scenes, words (including parts of words, word variations, typos, etc.) are each assigned a "token" that represents them. So, for example, the word " Please" (with a space in front) is token 4222: every word, or part of a word, that an LLM has available to it is given a token ID.
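If you want to poke at this yourself, OpenAI's open-source tiktoken library exposes these word-to-ID mappings. A quick sketch (the exact IDs depend on which tokenizer you load, so don't hold me to any specific numbers):

```python
# pip install tiktoken
import tiktoken

# Load one of OpenAI's tokenizers. Different models ship different
# vocabularies, so the same text can map to different token IDs.
enc = tiktoken.get_encoding("r50k_base")   # the GPT-2/GPT-3-era tokenizer

for text in ["Please", " Please", "please"]:
    ids = enc.encode(text)
    # Round-trip back to text to confirm the IDs really do stand in for the words.
    # Note that the leading space matters: " Please" and "Please" are
    # different entries in the vocabulary.
    print(f"{text!r} -> {ids} -> {enc.decode(ids)!r}")
```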

The "thinking" portion is mostly a really well-crafted algorithm that considers the relationships between these tokens. 4222 is often followed by 453 and 42, etc., etc.. Which is to say, they're not "thinking" inasmuch as presenting the most likely word, ad infinitum, based on any given prompt. It's important to consider this, too: LLMs personalize from prompts and adjust their probability calculations based on saved data from queries. So while the average person might want token 4625 after 4222, you tend to prefer token 4726 or whatever. 

So, for whatever reason, the phrase "SolidGoldMagikarp" would break that process. The model would return something that seemed completely unrelated to the initial query, with no obvious connection between the prompt and the response. In the case of SolidGoldMagikarp, it returned that weird rant on distribution. This should, theoretically, be impossible unless that association shows up a LOT in the data, or unless certain tokens are hard-coded to produce these kinds of responses.

So, and onto my second point re: training data, this is happening because LLMs were trained on huge batches of unfiltered text data. Ideally you would filter out bad data like this, but that is, to a very real degree, impossible given the sheer amount of data required to build a well-functioning LLM.

"Solidgoldmagikarp," specifically, just so happens to be the username of a Redditor that frequented the subreddit r/Counting, where — every single day — real-life people would log in and.... count upwards. 

This means that a shit ton of instances of "SolidGoldMagikarp" appeared in the raw data, each surrounded by random strings of numbers whose logic and patterns were completely independent from the last. The name showed up often enough that the tokenizer gave it its own token, but the model never learned anything coherent to associate with it. The result is ChatGPT "hallucinating" responses and creating associations that aren't expected to be there.

So, while ChatGPT has that token available to it (along with all those random numbers that followed "SolidGoldMagikarp" in the data), it glitches out because it has no usable reference or solution for that specific pattern, so it comes up with something else entirely.
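You can actually see half of this story in the tokenizer itself: in the GPT-2/GPT-3-era vocabulary, " SolidGoldMagikarp" (leading space and all) got compressed into a single token thanks to all those r/Counting posts, while newer tokenizers split it into ordinary pieces. A quick check with tiktoken again (my own sketch, not anything from the film or the original write-up):

```python
# pip install tiktoken
import tiktoken

phrase = " SolidGoldMagikarp"  # the leading space matters to BPE tokenizers

for name in ["r50k_base", "cl100k_base"]:  # GPT-3-era tokenizer vs. a newer one
    enc = tiktoken.get_encoding(name)
    ids = enc.encode(phrase)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{name}: {len(ids)} token(s) -> {pieces}")

# In the older vocabulary the whole username collapses into one dedicated token,
# a token the model then barely saw during training -- which is the
# "no reference for that pattern" problem described above.
```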

Now, to my Eddington theory.

I think "Solidgoldmagikarp" (specifically the name) isn't commenting on the function of LLMs themselves, to the degree in which I've outlined above, but the result of exploring the meta-phenomenon (?) of the error itself.

In noticing the problem, people gave "SolidGoldMagikarp" a concrete reference. There are forum posts that include the phrase "SolidGoldMagikarp" and discuss the phenomenon; this, in turn, created real data that can be associated with SolidGoldMagikarp, turning an error into an identifiable reality. So, now, the phrase "SolidGoldMagikarp" returns real data on a real, known phenomenon. It's a perfect example of a hyperreality: unreality that becomes "real." It wasn't a "thing" before the error was identified, but now it is the label for that error; it points to something the phrase "SolidGoldMagikarp" originally did not and, as a result, is more "real" than it was before.

I think Aster's using it, at least in some sense, to talk about what Baudrillard would call "hyperreality" (a concept Aster explores in his other movies, most notably Beau Is Afraid).

A hyperstition, roughly the process by which a hyperreality gets created, functions like a self-fulfilling prophecy: even if the data is off, if enough people believe something is true, then there is no difference between their incorrect "reality" and reality, as they will act according to the presumed reality.

In Eddington, there is no real "drama" to speak of at the beginning of the movie: COVID, police brutality, even murder are by and large not issues present in this city. But by believing that their community is being impacted by these things, the residents create the conditions that allow their fears to become manifest. Every fear (even obviously opposing ones, like those of the protestors and the police) was manifested, simply because they all believed it to be true. Not in a superstitious-bullshit way, but in a material "everyone believes this is true, so it must be true and I must act accordingly" way.

Just as SolidGoldMagikarp wasn't really "a thing" beyond someone's username and was made real by being identified, Eddington's population, by believing that these problems are happening (regardless of the degree to which they are or are not), creates the conditions for the results of those fears to manifest. Ironically, results for "SolidGoldMagikarp" will now also be influenced by Eddington, once again highlighting that process of unreality becoming reality.

Sent most of this as a text to a friend of mine (so sorry, Eric). Promised him a deep dive into the concept, and felt it was worth posting here as well.
