Look Mom, Generative AI is on Television!
Here's your daily briefing:
Outspoken Tesla/Elon Musk bear and critic E. W. Niedermeyer wrote an interesting thread discussing the disconnect between the reality of autonomous-vehicle technology and the venture-capital hype around it:
VCs approached this like gamblers, not investors. They were locked into the pot of gold, not reasoning through the actual possibilities and limitations of the technology.
In their defense, the pot of gold looked so alluring because cars are effectively a monopoly.
— E.W. Niedermeyer (@Tweetermeyer)
2:41 PM • Oct 27, 2022
He also wrote a popular thread yesterday discussing the shutdown of Argo AI, where shots were definitely fired:
So, if you want to work up some righteous anger, if you want a villain to personify the "self-driving car" bubble, look at the only guy who is actively selling the self-driving car.
More on this in (multiple) longer forms soon. Watch this space.
— E.W. Niedermeyer (@Tweetermeyer)
7:32 PM • Oct 26, 2022
Consider us old-fashioned, but we're not aware of many occasions in history when the people actively looking for "a villain" to "work up some righteous anger" toward turned out to be The Good People. Elon Musk is obviously a flawed character in this game of life, and certainly a once-in-a-generation Twitter troll, but we still think he's kind of funny (and human):
Entering Twitter HQ – let that sink in!
— Elon Musk (@elonmusk)
6:45 PM • Oct 26, 2022
As yet more proof that we harbor an unconscious compulsion toward self-destruction, we share with you this simple but comprehensive thread on AI writing tools, organized by use case:
AI Writing 101 👇
For every kind of writer...
— Daniel Eckler ✦ (@daniel_eckler)
12:08 PM • Oct 26, 2022
Check out this cool and concise 7-minute tutorial on inpainting via Stable Diffusion:
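If you'd rather tinker than watch, here's a minimal sketch of the same idea using Hugging Face's diffusers library. (This is our own illustration, not code from the tutorial; the checkpoint name, file paths, and prompt below are just placeholder assumptions.)

```python
# A minimal Stable Diffusion inpainting sketch using Hugging Face diffusers.
# Assumptions: the runwayml/stable-diffusion-inpainting checkpoint, a CUDA GPU,
# and local files "photo.png" / "mask.png" (white = region to repaint).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

# The model repaints only the masked region, guided by the text prompt,
# while keeping the rest of the original image intact.
result = pipe(
    prompt="a vase of sunflowers on the table",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```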
This presentation about LLMs (large language models) and why they're a big deal was only slightly over our heads:


This Trevor Noah interview with OpenAI's chief technology officer Mira Murati...
reminds us of the time Bill Gates tried to explain to David Letterman what the internet was:
Not because Noah's questions are as funny (or endearingly ignorant) as Letterman's, but because it seems like a safe-enough heuristic to assume that once something is talked about on late night television, it's reached a critical mass and is here to stay.
AI isn't going away anytime soon and, as Ms. Murati mentions in the interview, these are "big societal questions that shouldn't even be in the hands of technologists alone," so let's keep learning, discussing, and building the future together!
If you don't have the time to watch (with all those annoyingly human pauses and ums) but have a few minutes to read a transcript, we've prepared one for you! And if you're really in a rush, we've taken the liberty of bolding the main thrust of each question and answer. Enjoy : )

Trevor Noah: **How does an AI create an image?** Because it's not copying the image. It's not taking from something else. It is creating an image from nothing. How is it doing this?
Murati: Exactly. It's an original image never seen before. And we have been making images since the beginning of time, and we simply took a great deal of these images and we fed them into this AI system. **And it learned this relationship between the description of the image and the image itself.** It learned these patterns. And eventually, it was generating images that were original. They were not copies of what it had seen before. And basically, the way that it learns, the magic is just understanding the patterns and analyzing the patterns between a lot of information, a lot of training data that we have fed into this system.
Noah: **In creating AI, are you constantly grappling with how it will affect people's jobs and what people even consider a job?**
Murati: That's a great question. **The technology that we're building has such a huge effect on society, but also the society can and should shape it.** And there are a ton of questions we're wrestling with every day.
With the technologies that we have today, like GPT-3 and DALL-E, we see them as tools, an extension of our creativity or our writing abilities. It's a tool. And there isn't anything particularly new about having a human helper. Even the ancient Greeks had this concept of human helpers: when you give something infinite powers of knowledge or strength, maybe you have to be wary of the vulnerabilities.
And so these concepts of extending human abilities while also being aware of the vulnerabilities are timeless. And in a sense, we are continuing this conversation by building AI technologies today.
Noah: Well, it might be frightening, because some people go, oh, the world is going to end because of this technology. But in the meantime, it's very fun, I'm not going to lie. Because, you know, DALL-E for instance doesn't just create an image from text... you've also gotten it to the point now where, as a company, you've designed it so that it can imagine what an image would be. So for instance, there's that famous image, Girl with a Pearl Earring. It's a famous image, right, but what DALL-E can do is take the famous image and expand it. Everything you're seeing beyond the original never existed. So DALL-E's like, well, this is what I think it would look like if there was more to this image. **It can assume, it can create, it can inspire.**
Murati: Yes, it can inspire. And it makes these beautiful, sometimes touching, sometimes funny images, and **it's really just an extension of your imagination**. There isn't even a canvas; the boundaries of paper are not there anymore. You just extend it.
Noah: So how do you safeguard them? Someone might look at this technology and go, oh, then you could type in a politician was caught doing something, and here, now I've got the image. And now all the politicians say, oh, that's not me, that was made by that fake program. We can very quickly find ourselves in a world where nothing is real and everything that's real isn't, and we question it. **How do you prevent, or can you even prevent, that completely?**
Murati: You know, misinformation and the societal impact of our technologies, these are very important and difficult questions. And I think it's very important to be able to bring the public along, bring these technologies into the public consciousness, but in a way that's responsible and safe. And that's why **we have chosen to make DALL-E available, but with certain guardrails and with certain constraints**, because we do want people to understand what AI is capable of. And we want people in various fields to think about what it means. But right now, we don't feel very comfortable around the mitigations on misinformation. And so we do have some guardrails. For example, we do not allow generation of public figures. We will go into the data set and eliminate certain data, so if you type something in, it can't create a politician for you. It won't be a picture of that person. That's the first step, in the training of the model itself: looking at the data and auditing it, making interventions in the data sets to avoid certain outcomes. And then later, in the deployment stage, we will look at applying filters, so that when you put in a prompt, it won't generate things that contain violence or hate, making it more in line with our content policy.
Noah: So let me ask you this then. Obviously, part of your team has to think about the ethical ramifications of the technology that you're creating. Does your team also then think about the greater meaning of work or life or the purpose that humans have? Because most of us define ourselves by what we do, i.e. our jobs. As AI slowly takes away what people's jobs are, we'll find a growing class of people who don't have that same purpose anymore. Do you then also have to think about that and wonder, **what does it mean to be human if it's not my job? And can you tell me what that is?**
Murati: We have philosophers and ethicists at OpenAI. But I really think **these are big societal questions that shouldn't even be in the hands of technologists alone**. We're certainly thinking about them. And the tools that we see today, they're not tools that are automating certain aspects of our jobs. They're really tools extending our capabilities, our inherent abilities, and making them far better. But it could be that in the future we have systems that can automate a lot of different jobs. I do think that, as with other revolutions we've gone through, some jobs will be lost, some jobs will be new, and there will be some retraining required as well. But I'm optimistic.
--
We're optimistic too. Are you?

"an optimistic audience at a late night television show"


