Contextualizing Noise with Blackbird's Compass
AI Inside · February 28, 2024
57:20 · 53.32 MB


This week Jason Howell and Jeff Jarvis talk with Dan Patterson of Blackbird AI about the company's context-providing service Compass, before diving into the week's top AI stories, including Google's "biased" Gemini model, data-sharing deals for model training such as Reddit's with Google, and the risks and benefits of open-sourcing AI systems.


  • Overview of Blackbird AI's mission to track narrative threats and attacks like misinformation and disinformation
  • Introduction of Blackbird's new product Compass for providing context around claims using AI analysis
  • Explanation of how Compass works to check claims and provide contextual information from authoritative sources
  • Discussion around Compass being built on Blackbird's Raven Risk large language model (LLM) and related APIs
  • Examples provided of using Compass for real-world claims like "Is the earth flat?"
  • Intention for Compass to help provide clarity and essential context to media content
  • Discussion around target users for Compass: social media companies, comms agencies, journalists
  • Explanation that Compass determines authority based on how authoritative sites reference each other
  • Discussion around Compass having a framework for integrating with fact-checking databases


  • Discussion around the challenges and nuances of implementing guardrails for AI
  • News segment on Google's Gemini model controversy over biased image generation
  • Deals emerging between tech companies to sell data for AI model training, including Reddit-Google and a rumored Tumblr-OpenAI arrangement
  • Tyler Perry putting his studio expansion plans on hold due to the emergence of AI like OpenAI's Sora
  • Analysis of benefits and risks of open-sourcing AI models

Hosted on Acast.