In the Dark Room: deep ganguli & Alan Michael Parker, Part Two

In the Dark Room with Alan Michael Parker and deep ganguli

In this craft series, a poet and a scientist read each other’s work, then sit down to talk about ideas, processes of invention, inspiration, and common meeting grounds for the sciences and the arts. What follows is part two of a conversation with Dr. deep ganguli, a neuroscientist who specializes in analyzing the social impact of Artificial Intelligence, and Professor Alan Michael Parker, a poet, novelist, and cartoonist often featured on Identity Theory. You can read part one here.


deep ganguli: Unlike most writers, I do not want to be read. I do not want to be public. I want to live a quiet life, you know. But we, scientists…we often have to tweet our work. I'm not on Twitter. I can't stand it. But I have to do it for my job. So I decided to take my dilemma to Anthropic’s language model. Here’s what I wrote:

What would David Foster Wallace have to say about Twitter, if he had lived to see it?

The model replied:

I imagine David Foster Wallace would have been highly critical of Twitter and social media in general. He was a fierce critic of contemporary culture’s obsession with entertainment and triviality.

deep: This all seems true to me based on my reads.

the language model, continued: He worried about the decline of deeper thinking and contemplation.

deep: Again, this checks out to me.

the language model, continued: He likely would have seen Twitter as exemplifying some of the more shallow and vapid aspects of our culture, with its focus on brief, disposable updates and the constant pressure to amuse and be amused. He likely would have been critical of the platform and what it encourages and rewards.

The reason I asked the model this is because I didn’t know who to turn to. I was having trouble with this requirement in my professional life, and I thought—I have this model. I wonder what it thinks. I figured that the model would know about David Foster Wallace. His works are likely in the training set. By bringing Wallace up, I’ve primed the model to be in the same head space as me. And I asked it this:

I'm a scientist, and I'm a fan of David Foster Wallace. I fear I must tweet about some science. Here’s my first tweet. I feel like a farce for doing this. I quote the Tweet. And here's what the model says:

deep’s model: It sounds like you're caught in a difficult position between the demands of promoting your work on social media and your own intellectual integrity.

deep: …Yeah.

Alan Michael Parker: Yeah. And that'll be $175.

Sara Sams: Right??

deep: Exactly. Then it says:

deep’s model: Twitter rewards brevity, provocation, and self-promotion, which is at odds with the values of serious scientific integrity. You shouldn't think of yourself as a farce or lose your self-respect.

But perhaps you can find a middle ground by tweeting primarily links to more substantive writings. Keep your interactions professionally focused rather than chasing re-tweets.

Sure, this is all pretty standard stuff. But it was the first time I felt like, oh, this model helped me. I was stuck in my own head. I clearly needed to pop out and get a different perspective. Typically, what you do is you turn to someone who knows you. And here I was, whining to this model instead. I thought, well, this is not a replacement for a friend. But it's exactly what anyone who knows me would say about what I was going through, and I just found that fascinating that…these are sort of the first inklings that maybe these technologies that are coming down the pike can exhibit human elements.

Because again, they're doing what a poet does, which is putting down words. They're not necessarily trying to do it in a way to affect you emotionally, or to get an emotional response out of you. But you can tease that kind of response out from the language model. Through a weird relationship.

AMP: What the language model offered you is the fabricated understanding of the future. I am skeptical of the way the model is bullshitting through possible future scenarios for you—in consideration of your individuation, your life.

deep: Yeah.

AMP: Reading through your stuff, a big question arose for me about the difference between cobbling together an answer based on billions of data points versus being able to say: Okay, so we have climate change. But how am I going to feel tomorrow?

deep: Yeah.

AMP: It can't tell you how you're going to feel tomorrow. And it was kind of pretending to be able to do that. That’s the aspect that, I think, is not intelligent, and, in fact, in some ways it's the same kind of bullshit answer you got earlier when you asked it, Who invented the atom bomb?

deep: Right? And it makes you ask yourself: does it matter if what the language model assumes about me/how I will respond is accurate? Does it matter if it knows about the history of the atomic bomb and can report that history accurately in every conversation?

AMP: I think it matters to people who believe in reality.

But it’s not interesting to me whether or not X is true—or rather, it’s not as interesting to me as whether or not I'm learning something…whether or not the conversation is fascinating…whether or not my brain is growing, or whether or not I'm having a good time.

It's so bizarre to me that AI would be blowing up from ChatGPT. I mean, the work you're doing on the language model at Anthropic is like ten generations beyond what they released in October (2022), right? It's way way past, not even close to the commercial model that we all have access to, right?

People think they know AI because they can make a fake photograph. Having read about your work, it seems like ChatGPT stuff is a typewriter compared to what you're doing.

deep: Yeah, yeah, yeah.

AMP: And what fascinates me is that this is happening in this kind of insane moment, where we’ve been taught lies by the government. We were. We elected lies. You know, however many millions of people believe in lies. Believe the media is not telling you the truth. Then there’s suddenly this profound anxiety about whether machines are lying to us. About how machines are going to manipulate us.

deep: Yeah.

AMP: And I'm amazed by that. In a way, I mean, I know it's because we’re so deep in the Hollywood culture that every time you get a robot in a film the robot race takes over. Okay, I get it. And there is a weird kind of almost genetic concern about that. That if you teach something, it'll kill you.

But I'm amazed by this social panic.

deep: Can you elaborate on why you find it surprising? I do think I agree with you.

AMP: I think the anxiety speaks to the fact that people think that their lives are true. They believe that there's something at risk, here, because the machine isn't always going to tell you the truth, and it might, in fact, change reality as you know it, or disagree with you about reality, or fabricate something that isn't “real.”

deep: Yeah.

AMP: And I think all of those things are challenges to a kind of self-righteous phenomenology: I exist. I'm a whole person. I am real.

deep: I am free of all influence.

AMP: As if nothing touches this inner being. Who is “I”?

deep: Yeah, it’s interesting. There’s a big community—the community behind these models, a community I’m around a lot—and the anxiety in this group is really high. I don’t necessarily feel a part of this group. I often wonder: What exactly are you anxious about these new technologies doing? And the response is typically something like…I’m worried about deception. As if the models are going to try and just lie. I’m worried about them power-seeking, trying to survive and spread and accumulate resources. And then there are a few more leaps, and the worry becomes: This could be existentially catastrophic for the human race.

And my reaction to this is usually: Well, do you know who else can deceive and seek power? Human beings? Corporations? Collections of human beings?

AMP: Your five-year-olds?

deep: Oh, absolutely. The question becomes: How/why is the anxiety you feel about the systems that you live in now just being displaced by the technology?

AMP: Because you feel like you can somehow control the technology. But you can't control people or collections of people. You mentioned the government as well.

deep: Right. I think it’s also coupled with a declining trust in our institutions, especially amongst younger people, especially out here in Silicon Valley, which has a very distinct culture. I don’t know, I’m confused by it. I think I have a similar reaction as you: You know, these things are trained on a bunch of human data. Humans have created stories of sentient AIs destroying the world. Humans have also deceived and sought power. So of course, it should be able to mimic that—to mirror all of this back. And so…

I don't necessarily share in those anxieties. But I do think that this is a technology. And the thing that worries me the most is that we don't necessarily understand how these language models work.

I mean, I don't know how a car works or a plane works, or how the Internet works, at least not well enough to describe it to you. But there are people out there who do.

AMP: And they can explain it to you.

deep: Yeah. But these language models are technologies that we use all the time. And I don't think there's anyone who can explain to you how one of these language models works.

AMP: Yet here we are.

deep: Yeah. Right. And there are economic incentives to sell these models, despite our lack of understanding.


AMP: Hey, Sara: for all that both you and I are professional poets, expert poets who know poetry…I don’t think I can explain how a poem works. Can you?

Sara Sams: I struggle every time someone asks me. I was hanging onto every word you said about the poetic universe earlier. So yeah, for sure, poems are mysterious in the same ineffable way.

AMP: But that's a feature of art and a bug for technology.

deep: I like that. Because I don't get how you make poems. Yet poems affect me so deeply.

AMP: I have students, for example, who come to a poetry class and say, I hate poetry, and I say, Why? And they say, I don't get it.

And I say, Okay, so you're projecting that poetry sucks because you don't understand it. Let's start there and get that out of the way. It's not how this works. And now let's look at difficulty. What difficult things do you love?

And let's think through that. Let's just make a list. What are the difficult things you love? What is something you don’t understand that you love?

deep: Hmm!

AMP: And now let's think about what's accessible. You know, what are the entry points for this particular poem, for instance? What are the points where it resists you? And then why does it resist you? There? What is it about the poem? And what is it about you? I mean, let's really take this apart. But at the end of that conversation, I still can't tell them how the poem works.

deep: Maybe the anxiety of the unknown is the same thing…that seems kind of generic to me. But maybe it’s related to this fear of the unknown that’s transferable across any number of human anxieties.

AMP: The persistent belief that the self is a thing, and it's whole, and it's impervious.

deep: All things except death—

AMP: —No, including death. If you are religious.

deep: Umhm.

AMP: Yeah.

Sara Sams: I’m teaching Sarah Vap’s The End of the Sentimental Journey next semester, and she says: To some degree or another, we believe that a poem could put us on the knowing edge of someone's heart or mind. To some degree or another, we believe that our poem could put someone on the edge of ours.

AMP: She’s making a preemptive move with her to some degree or another…which could be anywhere from 0 degrees to 100. Let’s make sure that no one really commits here.

Sara Sams: Yeah, you’re right! Because it's hard to prescribe what a poem really is or is trying to do.

AMP: But people, I mean, you know, our students come in believing that.

Sara Sams: Yeah.

AMP: And this is one reason why I no longer make what I find a pedagogically useful but a philosophically bereft differentiation between speaker and poet.

If the speaker of the poem has biographical markers that distinguish it from the poet, then I talk about ventriloquizing.

deep: Hmm.

AMP: But I don’t talk about the speaker versus the poet anymore. Because I don’t think our students—or most readers—read that way. We read for the same reasons that Vap is talking about: To some degree or another, we believe that a poem is a main line. It’s injecting. It’s the soul of the poet speaking to the soul of the reader.

deep: This conversation is reminding me of Italo Calvino, If On A Winter’s Night a Traveler. I read this book right when my premature twins were born, so my memory is a little hazy, but I remember this one chapter where Calvino writes something like, I know how to read a book. This is what I’m going to do. I’m gonna find all of the unique words in that book. Count the number of times they occur. And I’m just going to study that chart, and then—then I will understand what this book is about, and that’ll save me so much time. And actually, that’s what language models are doing.

AMP: Right.


deep: In some sense, they're just kind of making a compilation: these are all words or bits of words, and this is how often they occur, and co-occur, and based on my knowledge of them, now I can…speak. So to speak.

I’m drawing the connection here, for further discussion, between the reader, the speaker, and the author/poet…and then whatever these new systems are doing, which is closer to Calvino, I think.
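The tally deep describes (unique words, their counts, and their co-occurrences) can be sketched in a few lines of Python; the sample text here is a made-up stand-in, not Calvino’s:

```python
from collections import Counter

# Hypothetical sample text standing in for Calvino's novel.
text = "the traveler reads the book and the book reads the traveler"
words = text.split()

# Calvino's chart: each unique word and how often it occurs.
word_counts = Counter(words)

# The extra step a language model takes: how often words co-occur
# (here, simply counted as adjacent pairs).
pair_counts = Counter(zip(words, words[1:]))

print(word_counts.most_common(3))
print(pair_counts.most_common(2))
```

Of course, actual language models go far beyond adjacent-pair counts, but the spirit is the same: a statistical compilation of what occurs with what.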

AMP: Have you asked language systems and models to produce metaphors?

deep: I have not tried this. So what would you suggest I try? And why?

AMP: I would be curious to know what that operation would reveal. I would be curious to know what kind of metaphors or similes we would get if you were to ask the machine to think or to represent reality and language in other ways…And whether or not we can call that imagination?

Sara Sams: Could you ask your language model my question, How do you conceptualize your process of invention or creation? Or maybe, What metaphor would you use to describe the way you find new ideas?

deep: Okay, I asked it just now, and it's saying:

deep’s language model: I do not actually invent or create anything. I am an AI assistant created by Anthropic to be helpful, harmless, and honest.

deep: So we've trained that out of the model for some reason.

AMP: How about, Who are you?

deep: Let's try it. Okay, it says:

Claude:  I’m Claude, an AI assistant created by Anthropic.

AMP: Have you heard “Claude” before?

deep: Yeah, his name, or its name, is Claude. Claude Shannon was a famous scientist who in the 1940s developed information theory, which is the backbone of the modern Information Age. He was one of the first people to make the following claim: If I wanted to generate language, the first thing I would do is just randomly put letters together. Then I would note that vowels are more prevalent than consonants (for instance). So I would just do more vowels. And then he worked his way up. And when I take a step back, I think: That’s kind of what we’re doing these days. We named the assistant Claude as a nod to him. It’s a very nerdy in-joke.
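Shannon’s ladder of approximations, as deep sketches it, is easy to toy with: draw letters by their overall frequency, then condition each letter on the one before it. This is a minimal illustration in Python; the corpus is a made-up stand-in for the printed English text Shannon actually worked from.

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility

# Hypothetical corpus standing in for Shannon's English text samples.
corpus = "information theory is the backbone of the modern information age"

# First rung: draw each character according to its overall frequency.
freqs = Counter(corpus)
chars, weights = zip(*freqs.items())
zero_order = "".join(random.choices(chars, weights=weights, k=40))

# Next rung up: condition each character on the character before it.
follow = {}
for a, b in zip(corpus, corpus[1:]):
    follow.setdefault(a, []).append(b)

out = [random.choice(sorted(follow))]
for _ in range(39):
    out.append(random.choice(follow.get(out[-1], list(corpus))))
first_order = "".join(out)

print(zero_order)   # frequency-weighted gibberish
print(first_order)  # gibberish that starts to look more like English
```

Each rung up the ladder (pairs, triples, words, and so on) produces output that looks a little more like the source language, which is the sense in which today’s models are working their way up from Shannon’s idea.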

AMP: Well, I think there are a couple of ways to approach asking Claude about metaphor. Try, Make a metaphor, or Make a metaphor for love.

deep: I’ll do Make ten metaphors for love. Let's see. Okay, I think these are similar:

Claude: Love is a journey, not a destination. Love is a battlefield. Love is a delicate flower. Love is a wildfire, love is the sea, deep and complex. Love is the sun creating work in life…

deep: Can your students do better?

AMP: Yes. What if you said, Make ten metaphors for love and use no clichés.

deep: Okay:

Claude: Love is a delicate spider web, spun with care and intricacy.

Love is a prism, taking light and scattering it into our rainbow colors.

deep: I think my five-year-old could have said that—

Claude: Love is a nest soft and warm, sheltering new life. Love is a winding path through a forest mysterious and adventure, love is a lock picking, slowly undoing defenses to open the heart.

deep: Are any of these good?

Claude: Love is a perfectly sharp pencil, ready to create or illustrate beauty.

AMP: That last one is a little better. Not right, but a little better.

deep: It gets worse from there:

Claude: Love is a familiar blanket, worn soft with age and comforting. And it's imperfect.

deep: That's definitely a cliché.

Sara Sams: Yeah. I think I remember reading some of the poems that were generated and being like, Oh, ok, ChatGPT is not a threat to poets or poetry, because whatever it makes, it always feels somehow like the soul is missing.

AMP: There's been a lot of talk, of course, about it on campus, and it's sort of dovetailed with some talk about academic dishonesty. We have an honor code that matters a lot to students and in some disciplines—in the social sciences, in Econ, the professors are getting it right and left…just because it's, you know, you get a take-home exam, for instance. One professor might report that five of his nine seniors all had the same wrong answer, and it seemed pretty clear it was an AI answer, because they got the same question wrong in the same way, and they probably didn't sit next to each other.

But in our field [creative writing], I see no threat whatsoever. I mean, it can’t seem to hold a thought past two paragraphs. That, to me, is like: ok, great. You gave me a thesis and some evidence for two paragraphs. And after that…ChatGPT can’t keep going; it can’t make new knowledge for you.

deep: Yeah, one of the big limitations of these models is that they only have a limited sort of short-term memory, so to speak.

What we're playing with, now, is the question: What happens if you make these memories longer? Then, will it be able to keep a memory longer than 1-2 pages as it continues to generate? So it might get worse before it gets better in some sense… for those in education.


AMP: I want to return to an earlier question because this is one of the questions I had coming in. It just fascinates me: What is an advanced language model’s perception of the future or of time?

deep: What does Claude think time is? Yeah, it’s really interesting. I remember when the conflict in Ukraine broke out. You know, the models have their training data, and it’s data from up until a certain point in time, certainly before the conflict in Ukraine. So we were wondering, what happens if we ask Claude about this conflict? When I did, Claude responded, Oh, yeah, you’re talking about the invasion of Crimea. Yes, this happened. Blah blah blah! I said, no, I’m not talking about the invasion of Crimea, I’m talking about the Russians attacking Kyiv. And then Claude said, more or less: That didn’t happen. This exchange made it clear to me that the model has limited temporal knowledge. It can’t look to the future necessarily.

So it's stuck in the past. I mean, I can ask the model, what is time? I'm not even sure I know the answer to that.

AMP: It doesn’t have an anthropomorphized sense of temporality, a human’s system of time.

Sara Sams: The idea that time is relative blows my mind. It’s always something I try to work through in my poems. Relative to what? How can I get outside of my own way in an attempt to understand this?

deep:  Yeah, time is odd, too: Time can shrink or expand. What I tend to say is, I don't know, we're on a planet that's doing this while it's also doing this. And so I just measure the amount of…this.

It's not great.

Sara Sams: Since we’re on the topic of future, and time, can we turn back to Parker’s “The Future of Love”? That’s the poem in which each stanza has sort of a different metaphor of what abstract or non-digital thing you might want to send via email, what you might hope email could expand into, in sort of an abstract way. But though the images are abstract, they’re also describing clear, distinct feelings in a way. Here it is:

The Future of Love

"Soon we'll be sending everything by email, even love."
—The Rev. Christopher Marks

Dear friend, attached you will find

a wild, blue cloud
and a whiff of local wind that stirs and spins,
and I am sending too
the rutted far field I walked this morning,

counting eleven different yellows on parade,
before the afternoon set to gray.


Dear friend, attached:

a box of a local wine,
a round of cheese rolled in ash,
and a small machine
(delighted upon in a general store)
that clicks and hums when done,
whatever it does.

I cannot recommend the wine.

Dear friend, attached
I wished to send the impression
the pillow minds
in the gutter a dream furrows.

Challenges arose:
sending a thing in the negative.

I'll try again tomorrow.
Dear friend, attached
you will find a window
smeared by childhood.
The locks unlock to the right.
Dear friend, attached
you will find the sense of waking,
the room in shadow,

where the bureau and the dresser and the table
share their heavy, deep beliefs.
Dear friend, attached
I am sending one whole evening,
itemized. See # 17.
Dear friend, attached
you will find a wax envelope of leaves
the waning fall cherished:
yellow, singed, and tumbled.


Dear Readiness, dear Maybe,
the season is brisk in me.

Dear friend, attached
you will find a draught of vintage.
I love the thought of you
holding the past up to the light.


Dear friend, attached
you will find so many clouds
I could not believe them all.

Write to me, and let me know
what to think of a sky so full.
It brims as I do,

sending you everything.


deep: The poem makes me curious about the creative process of a poet versus the creative process of a scientist. I don't know much about that.

AMP: So, I have a friend who teaches at Sarah Lawrence, a poet named Suzanne Gardener, and I've stolen a phrase from her over the years that I've used, and it’s how she describes the moment when she starts banging a couple of words together. To her, and to me, too, it's the invention of fire. Writing poetry really is that: I have the sense that if I take these two words and I rub them together, I will see what will happen, and then I’ll keep going. And learning how to build the fire is achieved by studying fires.

Sometimes you're lucky—maybe you don't know shit about fires, and you can still make one.…But you probably can't make two or three. I tell this to my students. Anyone can write a really good poem. Learning to write the second and the third one, though—I’m not so sure you can do that without studying the art form.

deep: But it's in all of us.

AMP: Yes. And, within this metaphor is also the notion of getting hit by lightning: Sometimes, lightning starts fires. Sometimes you’re inspired. Something comes out of you or through you. And you write something great, and it takes no time. So all of that is the composition process, the inspiration process. The work that we do as poets is 99% rewriting.


deep: Umhm.

AMP: Sometimes poems arrive as a gift. And you say, thank you. To whom or what? Or to no one; you say, Yeah, thanks for the gift. And you know, I’m sure, Sara: when a poem arrives and comes out kinda whole, it feels less sticky.

Sara Sams: Absolutely. I would also like to add that the best books of poems, the best poems…are poems that make me want to write poems myself.

AMP: Yeah, yeah…I know that my encounter with any art form has been profound if it makes me want to make something when I'm engaged in the other art form. That to me is also relevant to my own inspirational process.

And so that’s one of the reasons that I start my studio days (which these days includes cartoons and fiction, a little less on the poetry side) by looking at works by other people first. But I also do that to get my head into the space of a meaning-making environment… to tune myself to how a text works outside of everything else around me. It’s focus; it’s meditative.

The younger you are, the more I think you believe in inspiration as something externalized. And the older you are, the more you believe you will get by working harder yourself. You understand? You come to believe that inspiration is actually internalized. I’m always trying to convince my students that they don't have to wait to get hit by the bus. You are the bus.

deep: There's this quote—I don’t know if it’s real or made up, but it goes something like this: A famous physicist was asked to describe how he worked (I think it was Richard Feynman, someone who was so prolific and good at science). And supposedly, his process was in three parts: First, you write down the problem. Then you think really hard. And then you write down the solution.

And I think it was a joke, but it struck me when I heard this: Yes, this is totally what I do, and the solution is never really the hard part. What’s harder is that first step, when you have to decide what problem you should work on. And then how should I think really hard? But after I finish thinking really hard…then I'm in the real realm, this same realm as you. Now, I have to put the words together to describe what I’ve learned to everybody else.

But what strikes me as different, maybe, from the poet’s process is that…well, for poets, is there a problem? Is there a solution? I don't think so, right? I think you just have to think really hard…and maybe add to it, feel really hard. I’m not sure, but I’m curious.

AMP: I don’t see it as a…yeah, I don’t think there’s a problem. I think the problem is…how can I make it better? The problem is: You can’t revise nothing, can you? You have to have something to begin to play with, even if it’s garbage. So then you make it smell better…or worse. Or both.

deep: It's funny when you said make it better, I sort of immediately assumed by it you meant the human condition.

AMP: I mean the work of art. Yeah, but if you want to consider it the human condition more generally, you could.

There's a professor of religion who just retired this year from Davidson—Karl Plank, a guy I admire very much and who also writes poems. And in his retirement testimonial, he was quoted as having said throughout his career…he meant it pedagogically, but also, I think spiritually, as he’s a religious guy: If this is the answer, what is the question?

deep: Yeah.

AMP: And I thought, this philosophical query has nothing to do with what I do. It's actually not relevant to art-making. As a working artist, I don't go near there.

deep: Yeah, yeah. Yeah. And that's the life I live in: I’m always thinking—what’s the question? What is the problem I want to work on?

AMP:  Yeah. And I'm all questions. I also think it's a reason to bring up this quality of anxiety and intimidation that students bring—or that people who don't make art bring to art or bring to abstraction. Oh, my God, you know, I don't understand that, which means, I can't reduce it to a problem. I can't make the art an answer for something. I don't know what it answers. I'm not happy with just seeing it as an existential question, or an existential question that's been cross-trained with color theory.

deep: Yeah, yeah. It’s funny to hear you link those thoughts—abstractions are sort of fundamental to software engineering. Engineers are trained to think, well, what is the right abstraction? And layer by layer, they sort of solve problems to build this technical thing. And I think, with my experience doing neuroscience, and more recently at Anthropic, I am being a reductionist. In my current role, I'm trying to study some phenomenon, and I need to study it in a quantitative manner, which means I need to make so many assumptions and simplifications to get that one set of numbers out. It sounds to me like neither of you have these constraints. If I'm hearing you correctly, it’s so fascinating to me.

AMP: Well…A poem is also the least amount of words. Because everywhere it has to be right.

deep: Yeah, in that sense, you do have constraints.

AMP: Language rules that are analogous to coding rules.

Sara Sams: And also, we put artificial constraints on ourselves…I guess I don't know what I mean by artificial, there, maybe the word is just floating in my noggin. But to me, the idea of writing a poem in a specific form (like a sonnet or in a specific meter or something) is this: If I force myself into this structure, then I might surprise myself.

deep: Yeah.

Sara Sams: So maybe we work with constraints in a different way.

deep: It’s fascinating. I also get what you mean about the spark. Sometimes I'll be in the thinking-really-hard part of my work, and I'm just so stuck. And I'll go do something totally different…whether it's surfing or playing my bass. I'm not thinking about anything, or at least I don't think I'm thinking about anything, and then I finish, and I'm like, Okay, I think I know the answer now. I couldn't tell you how I did it.

But hey, now I know I'm unstuck, and I know what to do. And this has happened enough times that it makes me wonder if other people are like this, or if this is an artistic thing, or—

I don't know what it is, but maybe the language you're giving me is helping me think about the process: It's not a gift poem, but it's a gift solution or something.

AMP: But it gives a solution that maybe deep from 20 years ago would have believed in. Even if you're not religious…Before, that gift probably seemed external. You saw something, and then you went, Oh, that's the connection. Now I get it. Now I can answer the problem.

deep: Yeah.

AMP: And maybe I would have said that 20 years ago. But now I would say, Oh, I saw something, and that led me to fix the problem at some subconscious level to the point where, when I went back, I had the answer. I no longer externalize. I mean…the gift to me is that I got the right stimuli to answer this myself.

deep: Yeah, that's true. Sometimes I tell people that are younger than me, who say something like, deep, you're smarter than me. How did you know xyz? And I think, Well, I'm not smarter than you. I'm just older than you. I've seen a lot of things that look like the thing you're seeing. And so I've been able to extrapolate from there…is that kind of where you're going?

AMP: Yes, and reading is the context. If you haven't committed yourself to reading as much as you can, then when you write you are reinventing the wheel.

I have reading years on my students. I can solve the problem in their dialogue because I can say, Alice Munro, Elmore Leonard, Toni Morrison, Murakami. Let’s look at those scenes. I don’t have recall memory that lets me go right away to exactly the page where a problem might be solved, but I’m close enough. It’s a knowledge base.

deep: Yeah. And also, you need to read a lot to know what it is that you don’t know, so that you can figure out the next thing. As a concrete example—I recently came across the plays of August Wilson, Fences, and I’m just reading this stuff in awe. I’d never read a play before. Can you imagine that? Outside of maybe one or two required readings in high school. So I needed to find Wilson’s work to realize my lack of knowledge.

I find that, with kids, in the thirty minutes to an hour I get a day to just indulge in stuff that’s not absolutely critical to my work, I might pick up something that’s as new to me as Fences. But honestly, I don’t think that’s common for most of us technical types. And I think that’s how we might get into the situations we’re currently in with technology and all the anxiety around it. And educating the next generation. Exposure to work like Wilson’s is sorely needed, I think. And I’m seeing all these numbers of declining interest in the humanities, in computer science. I can tell you how that ends…

AMP: Can you? So between computer science and engineering, how do you keep up in your field? What are you reading? What knowledge do you have to have to make sure you're not reinventing your neighbor's wheel in Silicon Valley?

deep: That's a great question. So we have to read papers like the ones you read for our conversation today. I don't know if you noticed, but that paper cites about one hundred other papers…So I had to read all of those. And it’s really hard to keep up. I think that the field of language models is in a crisis mode, where we're post-paradigm shift and existing in a weird realm where we all feel like there’s just so much to do, and it’s all happening in a bunch of incremental advances. So every day there's n papers released. It's probably on the order of tens to hundreds. And now the hard part is knowing what is the signal—and what's the noise?

To be honest with you, I rely on a community to surface things for me. There are certain people I know and that I trust to tell me that I should read xyz or other people's work, and there are certain people in my field that, if they're writing something, I want to read it. Yeah, it's tiring. There's a lot.

AMP: What was the most remarkable non-science thing that influenced your science recently? An, Oh, I’m just playing my bass moment when you encountered something that made you go, Oh, wait!? And then you went and fixed something in your code or…

deep: I'm trying to think…When it happens, it’s so fleeting that I often don't take the time to remember what exactly it was. So at the moment, I can only give you the gist of things.

I think you saw maybe in that paper that you read that there was an experiment on recidivism prediction, and the rules are, very broadly, more racist against Black defendants than white defendants, and some of that work came from a visit I made to the de Young Museum. There was a massive painting. It took up a wall. I remember it was angry. It was a Black man screaming, and it was grounded by red. It was textured; it sort of jumped out from the wall, and I remember reading what the museum description said. He was an artist who lived, I think, in the Tenderloin in the seventies—a Black artist, and he was harassed by the police, and then he came home and he made that piece.

I thought, wow. I just found it incredibly striking, and that painting in some sense did influence that experiment you saw on that paper. I know there's some kind of connection in my mind between those two things (Mike Henderson’s “The Scream” and part # of this paper). But it’s hard to tease that out.

AMP: Did you mention that to any of your collaborators on that project?

deep: You're the first people I’ve ever told about that. I didn't really think about it again until you forced me to, in some sense. I mean, I think about the painting all the time. And one thing I'm mulling over is: did I just coincidentally happen to see that painting while I was also thinking about the racist algorithms? And now, post-hoc, I've sort of constructed a link that's not there? I'm not sure.

AMP: I'm not sure it matters either way. Your brain is making the connection, right?

deep: That definitely applies to the way I think or reason about models that can generate synthetic media. Right? I think about (Borges’s) Tlön, Uqbar all the time…as it connects to my work. Also (Calvino’s) If On A Winter’s Night A Traveler. Oh, oh, here's a big one. Radiohead’s OK Computer, the album.

AMP: I've been listening to that the last couple of months over and over again. That's really funny.

deep: So precious.

AMP: Yeah, yeah.

deep: That album speaks to so much of my work in a way that I cannot articulate…But I think about it. I can't tell you exactly how it’s informed my work. But when I'm sitting there working, thinking, Radiohead has been through this already—they've thought about these problems before me.

AMP: Today I was listening to it on my bike ride, and thinking about electioneering and how the US election is coming up. And I was wondering how this era of synthetic media generation might sway elections. And then my mind wandered over to “Electioneering” from OK Computer—how? It's all guitars…

deep: To me that song is all guitars, and the guitars are all abstractions. Again, I'm in some weird layer of abstraction. When something happens, I just don't ever really talk about it…AMP, are there any scientific discoveries or technological things that you find just aesthetically pleasing?

AMP: Well, increasingly, my work has become digital because I'm cartooning, and I'm moving from my drawing board to working in media…scanning Photoshop outputs, thinking of the work as digital output. So the cartooning part of my life has become increasingly digital. And that's put me on Instagram. And it’s put me deep in social media in ways that mean now I’m thinking very differently about social media culture and dissemination…in how work is received and digested and encountered and written about. I wrote about it, actually, for Identity Theory. My opening article was in part about the seven seconds that I have to get you to process a cartoon. What happens in those seven seconds?

Also, I was a physics head in high school—it was dear to me—and I have an Einstein fetish. So I've read a lot about him. I even have the action figure in its original box in my office.

I have always been interested in differential equations as metaphors for the poetic process. For the ways in which rules unwrite themselves in relativity, and that unwriting as an idea related to the poetic process. Relativity in general is a really important idea, and one that somehow describes the experience I want the reader to have—how words can occupy different positions in a text.

For instance, when you hit a rhyme in a poem, and you hit bell, and then hell: hell sends you back into the word bell, which then helps you reconsider the word from a different angle. So you have just changed your angle of incidence, going back into bell in its own line, to look up at that word that already happened. Which means you also, just now, changed time.

deep: Yeah.

AMP: This is a question I really have thought a lot about and can really answer only when I think about form and physics.

deep: Yeah, it's fascinating. I had one more thought. Something about that has just really infected my thinking scientifically. It's sort of walking around, being in motion, thinking hard about difficult things: it is the Sebaldian thing that I'm trying, in some sense, and maybe that's where the surfing comes in. My version of taking those long walks, similar to the narrator in The Rings of Saturn. I think I remember him going to a Noh drama, and he has all this found photography. And there's…the image of the big eyes, searching for knowledge in the dark. That is what I'm trying to do. Yeah, that's me.

AMP: I have a book to recommend. It’s my favorite book this week, and it was recommended to me by a friend, the novelist Karen Russell. Because she knows about the cartoons that I'm doing, she recommended that I read Richard McGuire's book, called Here. It’s a graphic novel that takes place in a room, apparently somewhere in Jersey, and McGuire does multiple drawings of what happened in that room, from 5,005,000,000 BCE to 300 million years in the future. The disparate times are in the same spread. There are varying narratives, going in different directions. Karen says that Here gives her hope for the future of art.

Richard McGuire's Here

deep: Cool. We didn't get to talk about this, but I love comics. Maybe not surprising. I'm currently working through The Arab of the Future. I looked at yours as well. They made me feel things, which is, I think, the whole point. Here looks awesome, flat out.

AMP: It’s mind-blowingly brilliant. He's the bassist, by the way, for the punk band Liquid Liquid.

deep: Oh, you know, I've seen Liquid Liquid play.

Sara Sams: It’s all coming together, and I knew it would.

deep: AMP, what do you find so incredible about Here?

AMP: Well, there are aesthetic qualities. The fact that he's working in computer-generated work with watercolor—this mix of media is dear to my heart and very much in my ambitions, in terms of what I'm trying with my cartooning. But also, just how much he's able to achieve with these overlapping narratives that are both in and out of time, as a meditation on time. It is true to my understanding of how I experience time.

deep: Yeah.

AMP: It avoids the linearity of straight-up fiction. I haven't seen time represented like that, you know. I can say, Oh, I've seen it in a Jackson Pollock, or I've felt it here…When I read the theory of relativity, I think, Oh, that helps a lot. But it’s very rare to see it. Sometimes I get it with sculpture. Maybe that’s because of the 3D element: because you can walk around the sculpture, time changes. That's immersive; the art is around you. Richard Serra can make me feel that way sometimes. But it's very rare for me to feel that when I’m reading a book—that the book in some ways isn’t even a book. It’s beyond bookness.

deep: Beyond bookness. Ok, I feel a bit like an algorithm, People Who Like This May Also Like Daytripper, but I’m curious: Have you read the comic by Gabriel Bá and Fábio Moon?

AMP: Yeah, I see the connection. I, you know…I lean this way. I'm gonna just order it now.
