
Thoughts While Watching Myself Be Automated


(Excluding “please stop”)



An old friend visited me a few weeks ago. And we soon got to chatting about—what else—how long it will be before all human intellectual work is automated.

My position was: I dunno, because things are moving fast right now but what if we run out of data or scaling laws break and algorithmic progress stalls? His position was: Soon.

Then he started asking about this blog. What were the most popular posts? This was slightly ominous given his regrettable tendency not to consume most Dynomight internet content. But I told him probably Underrated Reasons to be Thankful and its sequels, and that seemed to be the end of it.

But then a week later, he texted to ask if I had any other short-form writing. And then he sent me a list of previous Reasons I’d written and asked me to rank them by quality. And gradually it dawned on me that he had decided it was time to automate me.

Soon he started sending new AI-generated Reasons. Which weren’t great. But then he tuned his prompt and they got better. And then he switched models and increased the scale by 1000x and added a secondary scoring AI and tuned the scoring AI prompt and the Reasons got better and better and better and better. And as I watched all this happen, I couldn’t help but reflect:

  1. In old science fiction, people imagined robots that were totally precise and accurate and rational but couldn’t fathom the messy “soul” of humanity. Modern AI is exactly the opposite. It could easily copy my writing style from a small number of examples. The biggest challenge was to get the damned facts right.

  2. For example:

    That if you somehow took the 33,000 nuclear warheads on the planet and got them to a white dwarf star […] then the resulting explosion would be visible from 1,000 light years away with a magnitude of 7.6, comparable to the supernova of Betelgeuse, so we’re not total lightweights.
    This sounds like me. The joke is good. But the “fact” is nonsense.

  3. Looking back, perhaps the old science fiction robots were a projection of our fantasies. I want my style and voice to be precious and uncopyable. I want an AI assistant that will tirelessly hunt down facts while I provide creativity and panache and magic. But that’s not how it is. So we should remember: In a scenario where AI is useful but humans retain some comparative advantage, we might not like where that comparative advantage lies.

  4. When I complained about the previous points, my friend remarked that this was to be expected because, after all, “personality is just like 4-6 bits”. I think it’s higher. But what is personality, after all? One idea is that as you go through life, you copy behaviors from others that you like, and that combination is “you”. Maybe my writing voice is just what you get when you take the different parts of different writers I like and mush them together. So maybe personality isn’t a lot higher than 4-6 bits.

  5. An AI-you is a funhouse mirror for your soul. The most salient feature of AI-me was that it was dark. I often (I see now) use a little gambit where I start out with something technical or edgy and gradually work my way towards a positive crescendo. The AI would run towards the darkness without a clear plan for turning towards the light, and usually resorted to (a) incredibly lame/cringey ideas, (b) making stuff up, or (c) just staying dark and not even attempting to be positive.

  6. For example:

    That sports are a socially sanctioned way to release tribal impulses, and it’s just wonderful how at the end of a game, everyone goes home and nothing is really lost, except for the 46 people who died in the 1964 Lima stadium riot, and the 79 killed in the 1968 Buenos Aires stadium stampede, and the 328 who died in the 1969 Salvador vs. Honduras soccer war, and the 71 who died in the 1971 Glasgow disaster, and the 48 who died in the 1971 Ibrox stadium collapse, and the 49 killed in the 1974 Cairo stadium riot, and the 2 who died in the 1979 The Hague incident, and the 66 killed in the 1979 Ghana stampede, and the 24 who died in the 1981 Athens stadium collapse, and the 1 who died in the 1982 Cali drug war, and the 14 who died in the 1982 Moscow incident, and the 18 who died in the 1982 Cali explosion, and the 39 who died in the 1985 Heysel stadium stampede, and the 10 who died in the 1985 Mexico city incident, and the 8 who died in the 1985 Kenya stampede, and the 93 who died in the 1989 Hillsborough disaster, and the 40 who died in the 1991 Orkney stadium stampede, and the 7 who died in the 1993 Hong Kong incident, and the 82 who died in the 1996 Guatemala stampede, and the 15 who died in the 1996 Lago Agrio stadium collapse, and the 83 who died in the 1997 Tripoli stadium incident, and the 126 who died in the 1998 Kathmandu stampede, and the 107 who died in the 2000 Monrovia incident, and the 13 who died in the 2000 Harare stampede, and the 126 who died in the 2001 Accra stadium stampede, and the 43 who died in the 2001 Ellis Park stampede, and the 14 who died in the 2007 Salvador stampede, and the 15 who died in the 2007 Sangrur stampede, and the 13 who died in the 2008 Butembo riot, and the 12 who died in the 2009 Abidjan stampede, and the 19 who died in the 2009 Nairobi stampede, and the 11 who died in the 2011 Cairo riot, and the 79 who died in the 2012 Port Said riot, and the 13 who died in the 2015 Cairo stampede, and the 8 who died in the 2017 Uyo stadium collapse, and the 17 who died in the 2018 Caracas stampede.

    I could have written that. I wish I had written that. But personally, it doesn’t make me feel thankful.

  7. At one point, the AI suggested that we should be thankful that it was possible to encode the entire text of the best book ever written (apparently Harry Potter and the Methods of Rationality) into a single drop of water. But it then admitted it wasn’t sure if anyone had quite done this yet, and invited anyone who did to email xy@dynomight.net where x and y are the first letters of my (currently non-public) first and last name. Is that information in the training data somewhere? Do LLMs have emergent stylometry abilities? Creepy.

  8. As the AI got better and better, so did my own cope. At first I thought, “It’s bad.” Then, I thought, “OK, it’s not bad, but it’s not creative.” Later, I thought, “OK, it’s not bad, and it can be creative, but it’s not accurate.” By the end of the week, I was at, “OK, it’s not bad, and it can be creative, and it can be accurate, and it can be funny, and it sounds almost exactly like me, but to do all those things at the same time, you can’t rely on the AI alone; you need some human curation and editing.”

  9. Speaking of cope:

    1. That we don’t have alien overlords who made us and who look down on us with pity.
    2. That even though we do have alien overlords who made us and who look down on us with pity, they’re very benevolent alien overlords who mostly leave us alone.
    3. That even though we do have alien overlords who made us and who look down on us with pity, and they aren’t benevolent alien overlords who mostly leave us alone, they at least have the decency to keep their existence a secret.
    4. That even though we do have alien overlords who made us and who look down on us with pity, and they aren’t benevolent alien overlords who mostly leave us alone, and they don’t have the decency to keep their existence a secret, at least they’re not so horrifically awful that we’d prefer to be dead than live under their rule.
    5. That even though we do have alien overlords who made us and who look down on us with pity, and they aren’t benevolent alien overlords who mostly leave us alone, and they don’t have the decency to keep their existence a secret, and they are so horrifically awful that we’d prefer to be dead than live under their rule, at least there’s some way to rebel.
    6. That even though we do have alien overlords who made us and who look down on us with pity, and they aren’t benevolent alien overlords who mostly leave us alone, and they don’t have the decency to keep their existence a secret, and they are so horrifically awful that we’d prefer to be dead than live under their rule, and there’s no way to rebel, at least they’ll eventually leave?
    7. That even though we do have alien overlords who made us and who look down on us with pity, and they aren’t benevolent alien overlords who mostly leave us alone, and they don’t have the decency to keep their existence a secret, and they are so horrifically awful that we’d prefer to be dead than live under their rule, and there’s no way to rebel, and they’ll never leave, at least we have each other?
    8. That even though we do have alien overlords who made us and who look down on us with pity, and they aren’t benevolent alien overlords who mostly leave us alone, and they don’t have the decency to keep their existence a secret, and they are so horrifically awful that we’d prefer to be dead than live under their rule, and there’s no way to rebel, and they’ll never leave, and they’ve taken away everyone we love, at least they allow us to keep living.

  10. There aren’t clear norms for this, but isn’t going and automating someone else a rather, umm, aggressive act? I’m not sure what point my friend was trying to make, but I feel like he made it.

  11. I keep finding myself unconsciously treating AI as an anomaly—as a weird thing that’s happening right now before the world goes back to being “normal”. But we aren’t going back. This is how it’s going to be. Like this but more so.

PS: If you want to know how he built the AI, here are some words: He prompted Llama-3.1-405B Base with 15 of the 90 existing Reasons, then generated text until “21.” was produced. (Only five new Reasons at a time because of “notable output deterioration for too many autoregressive samples”.) After generating many thousands of Reasons, he fed all of them into gpt-4o-mini with a prompt to score each along 13 different axes, e.g. “unexpectedness”, “scientific or factual basis”, “complexity or depth”, “humor or whimsy” and “emotional resonance”, providing a few example Reasons and suggested scores. He then combined the scores into a scalar and sorted.
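For the curious, the plumbing he describes (few-shot generation from a base model, then multi-axis scoring and sorting) can be sketched roughly like this. To be clear, this is my guess at the scaffolding, not his actual code: the helper names are mine, the axis list is truncated to the five he named out of 13, and since he doesn’t say how he “combined the scores into a scalar”, a plain mean stands in for it. The LLM calls themselves are left abstract.

```python
import re

# Five of the 13 scoring axes named in the post; the other eight are unknown.
AXES = [
    "unexpectedness",
    "scientific or factual basis",
    "complexity or depth",
    "humor or whimsy",
    "emotional resonance",
]

def build_prompt(seed_reasons):
    """Few-shot prompt: number the seed Reasons and leave the next number
    dangling so a base model continues the list."""
    numbered = [f"{i}. {r}" for i, r in enumerate(seed_reasons, start=1)]
    return ("Reasons to be thankful:\n\n"
            + "\n".join(numbered)
            + f"\n{len(seed_reasons) + 1}.")

def parse_new_reasons(completion, start=16, stop=21):
    """Split the raw continuation on 'N.' markers and keep Reasons 16-20,
    assuming generation halts once '21.' appears, per the post."""
    text = f"{start}." + completion
    parts = re.split(r"\n?\d+\.\s*", text)
    return [p.strip() for p in parts if p.strip()][: stop - start]

def combine_scores(axis_scores):
    """Collapse per-axis scores to one scalar. A plain mean is a stand-in
    for whatever combination he actually used."""
    return sum(axis_scores.values()) / len(axis_scores)

def rank_reasons(scored):
    """scored: list of (reason, {axis: score}) pairs. Returns best-first."""
    return sorted(scored, key=lambda pair: combine_scores(pair[1]), reverse=True)
```

In the full loop, `build_prompt`’s output would go to the 405B base model with “21.” as a stop sequence, each parsed Reason would be scored by gpt-4o-mini against the 13 axes (with a few exemplar Reasons and scores in the scoring prompt), and `rank_reasons` would surface the keepers.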

He would like to tell you that “405B base is the key to the modest success he has had”, and that instruction-tuned models only produce “generic slop”. He feels that “gpt-4o-mini sucks” as a scoring AI but using a bigger model would have cost “more than $100”. So I guess I can feel reassured that you can’t push me off stage yet without going into triple digits.

He has also curated for you a list of his favorite AI-generated Reasons. Go here to read them.
