Saturday Briefing
Here's your daily briefing:
There's a Stable Diffusion app for macOS. We haven't played with it yet, but it looks like it supports all the important features, including text-to-image, image-to-image, inpainting, outpainting, and more.
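If you'd rather poke at the same text-to-image workflow from code, here's a minimal sketch using Hugging Face's diffusers library (our own illustration, not the app; the checkpoint name and prompt are just placeholders):

```python
# Minimal text-to-image sketch with diffusers (illustrative only, not the macOS app).
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint; any Stable Diffusion weights on the Hub should work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use "mps" on Apple Silicon, or "cpu" otherwise

image = pipe("a cozy cabin in a snowy forest, oil painting").images[0]
image.save("cabin.png")
```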
Here's another cool Two Minute Papers video about NVIDIA's AI assistant project that can learn to speak in your own voice:
If you're looking to go a bit deeper into the relationship between interpretability and model editing, this explainer thread by Kevin Meng is for you:
How & where do large language models (LLMs) like GPT store knowledge? Can we surgically write *new* facts into them, just like we write records into databases?
Explainer 🧵 on how interpretability & model editing go hand-in-hand, and why these emerging areas are so important 👇 https://
— Kevin Meng (@mengk20)
5:17 PM • Nov 4, 2022
And here's a fairly heady (get it?) thread about language models and their relationship to cognitive theories of language processing:
Why are language models so great at encoding brain responses to natural language?
In our new paper (bit.ly/3Ua8MLK), we explore two new correlates of encoding performance and their implications on cognitive theories of language processing. (1/12)
— Richard Antonello (@RichardAntone13)
9:10 PM • Nov 4, 2022
This is cool:
Portal1 #aiart #nft
— Midjourney (@midjourney_ai)
8:41 AM • Nov 3, 2022
And so is this:
Van Gogh Diffusion: Dreambooth model trained on screenshots from the film Loving Vincent by huggingface.co/dallinmackay
@huggingface model: huggingface.co/dallinmackay/V…
— AK (@_akhaliq)
11:34 AM • Nov 5, 2022

Here's a Wired piece related to yesterday's discussion about AI and artists.
I asked GPT-3, an AI text generator, to write me “a Short Talk on trout in the style of Anne Carson.” It replied: “Trout are most active in the early morning and late evening, so those are the best times to go fishing.” I went back to the original. Of the trout found in haiku, Carson writes: “Worn out, completely exhausted, they are going down to the sea.” I think we can agree that the Canadian brain wins this one. But we do not have to choose between, on the one hand, an unthinking digital pseudobrain and, on the other, the artifacts of a single human mind. The miracle of the age is that we can learn from both, whenever we like. Anything to avoid boredom.
This Redditor made a wedding album for their friends using Stable Diffusion and DreamBooth:
A cool demonstration of generative animation with a step-by-step how-to:

Looks like Andrej Karpathy made a YouTube course about neural networks. What a world, where you can learn directly from the world's experts, right in your living room:
This new course by @karpathy is 🔥!
I like that it goes deep into how to implement and use neural networks. The content complements other courses where NN details are typically skipped.
youtube.com/playlist?list=…
— elvis (@omarsar0)
1:17 PM • Nov 5, 2022
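If "implementing neural networks from scratch" sounds abstract, here's roughly the level the course operates at. This is our own tiny sketch (not from the course materials): one neuron's forward pass and its gradients, written out by hand with the chain rule.

```python
# One neuron, by hand: forward pass and gradients via the chain rule.
# (Illustrative sketch only; the numbers are arbitrary.)
import math

x1, x2 = 2.0, 0.0        # inputs
w1, w2 = -3.0, 1.0       # weights
b = 6.88                 # bias

# Forward pass: n = w1*x1 + w2*x2 + b, out = tanh(n)
n = w1 * x1 + w2 * x2 + b
out = math.tanh(n)

# Backward pass: d(out)/dn = 1 - tanh(n)^2, then chain rule for each parameter
dn = 1 - out ** 2
dw1, dw2, db = dn * x1, dn * x2, dn * 1.0

print(f"out={out:.4f}  dw1={dw1:.4f}  dw2={dw2:.4f}  db={db:.4f}")
```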
Bit of a slow Saturday in AI news... we'll see you tomorrow!

"minions going down madison ave in nyc during the thanksgiving day parade"

