Saturday Briefing

Here's your daily briefing:

  • There's a Stable Diffusion app for macOS. We haven't played with it yet, but it looks like it supports all the important features, including text-to-image, image-to-image, inpainting, outpainting, and more.

  • Here's another cool Two Minute Papers video about NVIDIA's AI-assistant project that can learn to speak in your own voice:

  • If you're looking to go a bit deeper into the relationship between interpretability and model editing, this explainer thread by Kevin Meng is for you:

  • And here's a fairly heady (get it?) thread about language models and their correlation with cognitive theories of language processing:

  • This is cool:

  • And so is this:

  • Here's a Wired piece related to yesterday's discussion about AI and artists:

I asked GPT-3, an AI text generator, to write me “a Short Talk on trout in the style of Anne Carson.” It replied: “Trout are most active in the early morning and late evening, so those are the best times to go fishing.” I went back to the original. Of the trout found in haiku, Carson writes: “Worn out, completely exhausted, they are going down to the sea.” I think we can agree that the Canadian brain wins this one. But we do not have to choose between, on the one hand, an unthinking digital pseudobrain and, on the other, the artifacts of a single human mind. The miracle of the age is that we can learn from both, whenever we like. Anything to avoid boredom.

  • This Redditor made a wedding album for their friends using Stable Diffusion and DreamBooth:

  • Cool demonstration of generative animation with a step-by-step how-to:

  • Looks like Andrej Karpathy made a YouTube course about neural networks. What a world, where you can learn directly from the world's experts, right in your living room:

Bit of a slow Saturday in AI news... we'll see you tomorrow!

"minions going down madison ave in nyc during the thanksgiving day parade"