Large Language Models: GPT-4, LLaMA & Co.
Disclaimer: Both the title and the description were generated by GPT-4. The context was the list of links from the shownotes.
Manuel, Johannes, Dominik & Jochen discuss large language models (LLMs) such as GPT-4 and LLaMA. They cover fascinating applications in projects like GitHub Copilot and BlenderGPT, as well as the role of word embeddings and Reinforcement Learning from Human Feedback (RLHF) in model development. ChatGPT is highlighted as the example that made the usefulness of LLMs clear to a broader public. The discussion also touches on ethical concerns around LLMs and closes with recommendations for resources to dig deeper.
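The word embeddings mentioned above map words to vectors so that semantically related words end up close together. A minimal toy sketch of that idea, using made-up 3-dimensional vectors (real models use hundreds to thousands of dimensions):

```python
import math

# Hypothetical toy embeddings, invented for illustration only
embeddings = {
    "king":  [0.8, 0.3, 0.1],
    "queen": [0.7, 0.4, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words ("king"/"queen") score higher than unrelated ones ("king"/"apple")
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```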
Shownotes
Our email for questions, suggestions & comments: hallo@python-podcast.de
News / General Chatter
- Pause Giant AI Experiments: An Open Letter
- Thoughts on a Crazy Week in AI News
- GitHub Copilot
- JetBrains Fleet
- GPT-3 (generative pre-trained transformer) / Few-shot learning / Chain-of-thought
- GPT-4
- Eight Things to Know about Large Language Models | A very interesting summary of what is known so far
- BlenderGPT - This addon allows you to use Blender with natural language commands using OpenAI's GPT-3.5/GPT-4
- Introducing LLaMA: A foundational, 65-billion-parameter large language model / Alpaca.cpp / Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality
- GPUs in the cloud: beam.cloud / pipeline.ai / cerebrium.ai / banana.dev
- Hugging Face / Natural Language Processing with Transformers (Book)
- Inference of LLaMA model in pure C/C++
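The few-shot learning linked above can be sketched as plain prompt construction: a handful of solved examples are prepended to the actual task, so the model can pick up the pattern and complete it. The examples and task below are invented for illustration:

```python
# Solved examples the model should generalize from (made up for this sketch)
few_shot_examples = [
    ("Translate to French: cheese", "fromage"),
    ("Translate to French: bread", "pain"),
]

def build_prompt(examples, task):
    """Join solved examples and the new task into one prompt string."""
    blocks = [f"{question}\n{answer}" for question, answer in examples]
    blocks.append(task)  # the model is expected to complete this last line
    return "\n\n".join(blocks)

prompt = build_prompt(few_shot_examples, "Translate to French: wine")
print(prompt)
```

Chain-of-thought prompting works the same way, except the example answers also spell out intermediate reasoning steps.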
Large Language Models
- Let's build GPT: from scratch, in code, spelled out
- Attention Is All You Need | The original Transformer paper
- The Waluigi Effect (mega-post)
- LangChain | Building applications with LLMs through composability
- ChatGPT plugins
- Zero-shot learning
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
- Understanding models understanding language | What models learn about colors from text
- Global workspace theory
- Bouba/kiki effect
- ControlNet
- Word embeddings
- llm command line tool
- Geppetto - go LLM and GPT3 specific prompting framework | includes the CLI tool pinocchio
- kitty - the fast, feature-rich, cross-platform, GPU based terminal
- pyupgrade - A tool (and pre-commit hook) to automatically upgrade syntax for newer versions of the language
- Semi-supervised learning
- Illustrating Reinforcement Learning from Human Feedback (RLHF)
- What Is ChatGPT Doing … and Why Does It Work?
- The Illustrated Transformer
- Eight Things to Know about Large Language Models
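At the core of the Transformer paper linked above sits scaled dot-product attention: softmax(QKᵀ/√d)·V, i.e. each query produces a weighted average of the values, weighted by how well it matches each key. A minimal pure-Python sketch with made-up 2-dimensional toy vectors (not real model weights):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: for each query, a weighted average
    of the value vectors, weighted by query-key similarity."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Three "token" positions with toy key/value vectors
keys    = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values  = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
queries = [[1.0, 0.0]]  # this query matches the first and third keys best

print(attention(queries, keys, values))
```

In a real Transformer this runs over learned projections of every token, with many heads in parallel; the Karpathy video above builds exactly that up from this primitive.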
Media to Dig Deeper
- Yoshua Bengio: large language models, higher cognition, causality, working memory, responsible AI (The Robot Brains Podcast) | Very good!
- Episode 88: ChatGPT (Hotel Bar Sessions) | Meh, but interesting how far off the mark the liberal arts folks are
- A.I. Is About to Get Much Weirder. Here's What to Watch For. (The Ezra Klein Show) | Quite solid for a journalistic publication
- ChatGPT, GPT4 hype, and Building LLM-native products – with Logan Kilpatrick of OpenAI (Latent Space Podcast)
- Prompt Engineering and AI Constitutions with Stephen Wolfram
Picks
- streamlit.io - A faster way to build and share data apps
- ruff - An extremely fast Python linter, written in Rust
- Scrapeghost is an experimental library for scraping websites using OpenAI's GPT
- BlenderGPT - This addon allows you to use Blender with natural language commands using OpenAI's GPT-3.5/GPT-4
- Descript is the simple, powerful, and fun way to edit





