AI, Twitter, and Misinformation

Moderating the town square

Here's your daily briefing:

  • Here's an interesting coupla tweets (and a blog post) about Elon Musk's decision to fire the META (Machine Learning Ethics, Transparency, and Accountability) team at Twitter and how this might affect the bird app's ability to fight AI-generated misinformation:

Generating misinformation is easier, cheaper, and faster than ever. Anyone can exploit AI systems: from conventional deepfakes to language models like GPT-3 and text-to-image diffusion models like Stable Diffusion. These AIs can’t tell right from wrong and often make up realities that have no grounding in the real world. At the same time, because these systems evolve so fast, we’re lagging behind in detection and recognition countermeasures.

And I’d add another problem on top of this. Recommender systems, which literally decide what we see on Twitter (or any other social media app), are also driven by opaque AI algorithms. This wouldn’t be much of a problem if it weren’t for Musk’s recent decisions: first, he decided to remove identity verification, and then he fired the team in charge of algorithmic responsibility.

  • Researchers at MIT created an algorithm that tackles the issue of AI agents being too “curious” and getting distracted from the task at hand. Their algorithm automatically dials up curiosity as needed, and dampens it when the agent gets enough supervision from the environment to get the "job" done.

Just like the dilemma faced by humans in selecting a restaurant, these [AI] agents also struggle with balancing the time spent discovering better actions (exploration) and the time spent taking actions that led to high rewards in the past (exploitation). Too much curiosity can distract the agent from making good decisions and too little means the agent will never discover good decisions.
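The exploration-exploitation tradeoff described above is classically illustrated with a multi-armed bandit. The sketch below is a minimal epsilon-greedy agent, not the MIT algorithm (which adapts the amount of exploration automatically, whereas `epsilon` here is a fixed knob); the "restaurant" payoffs are made-up numbers for illustration:

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """With probability `epsilon`, explore a random arm (curiosity);
    otherwise exploit the arm with the best estimated payoff so far."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)      # visits per arm
    estimates = [0.0] * len(true_means) # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))            # explore
        else:
            arm = max(range(len(true_means)),
                      key=lambda a: estimates[a])            # exploit
        reward = rng.gauss(true_means[arm], 1.0)             # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

# Three hypothetical "restaurants" with different average quality;
# the agent must balance trying all of them (exploration) against
# revisiting the best one found so far (exploitation).
estimates, total = epsilon_greedy_bandit([0.2, 0.5, 0.9])
```

Too large an `epsilon` wastes steps on known-bad arms; too small an `epsilon` risks never finding the best arm, which is exactly the dilemma the MIT work tries to navigate automatically.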

  • In this Stanford HAI (Human-Centered Artificial Intelligence) article, Nikki Goth Itoi discusses the question of data privacy in relation to increasingly intelligent (and nosy) smart assistants:

If we expect digital assistants to facilitate personal tasks that involve a mix of public and private data, we’ll need the technology to provide “perfect secrecy,” or the highest possible level of privacy, in certain situations. Until now, prior methods either have ignored the privacy question or provided weaker privacy guarantees.

Nikki Goth Itoi
  • Check out this blog post and YouTube video by Gateway about how he used Stable Diffusion Infinity outpainting to create a very large mural:

  • Gotta catch 'em all:

Especially this one 😝 🤣:

  • Check out this video demo from Alexandr Wang of Scale AI showing their new Spellbook, a "platform for LLM apps."

  • Nice, informative thread from Assembly AI on 10 AI terms you should know:

Take us away, GPT-3: