ThursdAI - The top AI news from the past week

📆 ThursdAI - Dec 4, 2025 - DeepSeek V3.2 Goes Gold Medal, Mistral Returns to Apache 2.0, OpenAI Hits Code Red, and US-Trained MOEs Are Back!

Dec 5, 2025
Lucas Atkins, CTO of Arcee AI and a leading voice in U.S.-trained MOE models, dives into the launch of the Trinity models and their enterprise implications. He highlights the importance of training and compliance in model development, and explains why MOE inference is efficient and where scaling gets hard. The conversation then turns to DeepSeek V3.2's competitive benchmark results, which showcase exceptional performance. Insights on the latest AI integrations wrap up the discussion, emphasizing real-world applications and the rapid evolution of AI technology.
AI Snips
ADVICE

Integration Beats Slightly Better Models

  • Build product adoption around integration and accessibility, not just raw model quality.
  • Prioritize seamless device and account integrations because users favor convenience over marginal model gains.
ADVICE

Run Quick Benchmarks With W&B

  • Use Weights & Biases LLM evaluation jobs to run standard benchmarks against any OpenAI-compatible API quickly.
  • Provide the base URL and API key, then select which evaluations to run; W&B generates tracked leaderboards automatically. A minimal sketch of the same pattern follows this list.
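The bullets above describe W&B's hosted evaluation jobs; the sketch below is not that product's API but a minimal hand-rolled version of the same pattern, assuming a placeholder project name, environment variables for the endpoint, and a toy two-question dataset standing in for a real benchmark suite:

```python
import os

import wandb
from openai import OpenAI

# Any OpenAI-compatible endpoint works: just swap the base URL and key.
# EVAL_BASE_URL / EVAL_API_KEY are assumed environment variables for this sketch.
client = OpenAI(
    base_url=os.environ["EVAL_BASE_URL"],
    api_key=os.environ["EVAL_API_KEY"],
)

# Toy stand-in for a standard benchmark (a real job would load MMLU, GSM8K, etc.).
dataset = [
    {"question": "What is 2 + 2?", "answer": "4"},
    {"question": "What is the capital of France?", "answer": "Paris"},
]

run = wandb.init(project="llm-evals", config={"model": "my-model"})
table = wandb.Table(columns=["question", "prediction", "correct"])

correct = 0
for example in dataset:
    response = client.chat.completions.create(
        model=run.config["model"],
        messages=[{"role": "user", "content": example["question"]}],
    )
    prediction = response.choices[0].message.content or ""
    hit = example["answer"].lower() in prediction.lower()
    correct += hit
    table.add_data(example["question"], prediction, hit)

# Log a tracked accuracy metric plus a per-example results table.
wandb.log({"accuracy": correct / len(dataset), "results": table})
run.finish()
```

Because the client only needs a base URL and key, the same loop runs unchanged against vLLM, DeepSeek, or any other OpenAI-compatible server; the hosted W&B jobs automate the dataset loading and leaderboard step on top of this.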
ADVICE

Choose MOEs For RL And Inference Efficiency

  • Prefer MOE architectures when you need inference and RL efficiency: activating only a few experts per token cuts inference cost.
  • Spend that savings on post-training: cheaper rollouts mean a fixed compute budget buys many more of them during RL fine-tuning. A toy illustration follows this list.
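To make the cost argument concrete, here is a toy top-k routed MOE layer in PyTorch; the dimensions, expert count, and TinyMoE name are illustrative, not any shipped architecture. Per token, only k of num_experts expert MLPs execute, so inference FLOPs track the active parameters rather than the total parameter count:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoE(nn.Module):
    """Toy sparsely-activated MOE layer: each token runs through only k experts."""

    def __init__(self, dim: int = 64, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        # Keep only the top-k experts per token; the rest never run.
        weights, idx = self.router(x).topk(self.k, dim=-1)  # both: (tokens, k)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            # Batch all tokens routed to the same expert into one forward pass.
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out


tokens = torch.randn(16, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([16, 64])
```

With k=2 of 8 experts, each token touches roughly a quarter of the expert parameters, which is exactly what the RL argument rests on: each rollout is cheaper to generate, so the same hardware produces many more rollouts per hour during RL fine-tuning.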