AI Breakdown

agibreakdown
Dec 19, 2023 • 6min

arxiv preprint - WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia

In this episode we discuss WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia by Sina J. Semnani, Violet Z. Yao, Heidi C. Zhang, Monica S. Lam. The paper introduces WikiChat, a chatbot that grounds a few-shot large language model (LLM) in Wikipedia to provide accurate, engaging responses with minimal hallucination. WikiChat was distilled from GPT-4 into a smaller 7-billion-parameter model, improving response times and reducing costs with little loss in quality. In a hybrid human-and-LLM evaluation, it outperformed other chatbots in factual accuracy and knowledge coverage, achieving highly accurate responses and favorable user feedback.
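The retrieve-then-ground loop at the heart of this approach can be sketched in a few lines. Everything below is a toy stand-in: the corpus, the keyword matcher, and the response logic are invented, whereas the real system uses an LLM to generate search queries, fact-check drafted claims against retrieved passages, and refine the final reply.

```python
# Toy sketch of Wikipedia-grounded response generation (hypothetical
# corpus and helpers; WikiChat's pipeline is LLM-driven end to end).

CORPUS = {
    "Mount Everest": "Mount Everest is Earth's highest mountain above sea level.",
    "Python (programming language)": "Python is a high-level programming language.",
}

def retrieve(query: str) -> list[str]:
    """Toy keyword match standing in for Wikipedia passage retrieval."""
    words = {w.strip("?.,!").lower() for w in query.split()}
    return [text for title, text in CORPUS.items()
            if words & set((title + " " + text).lower().split())]

def grounded_response(query: str) -> str:
    """Answer only with claims supported by retrieved evidence."""
    evidence = retrieve(query)
    if not evidence:
        return "I could not find supporting evidence for that."
    # A full system would draft a reply with the LLM, then keep only the
    # claims the evidence supports; here we simply quote the passage.
    return "According to Wikipedia: " + evidence[0]

print(grounded_response("How tall is Mount Everest?"))
```

The key design point survives even in this caricature: the generator is never allowed to state something that did not come back from retrieval, which is what suppresses hallucination.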
Dec 18, 2023 • 4min

arxiv preprint - DemoFusion: Democratising High-Resolution Image Generation With No $$$

In this episode we discuss DemoFusion: Democratising High-Resolution Image Generation With No $$$ by Ruoyi Du, Dongliang Chang, Timothy Hospedales, Yi-Zhe Song, Zhanyu Ma. The paper introduces DemoFusion, a framework designed to enhance open-source Latent Diffusion Models (LDMs) for higher-resolution image generation. It incorporates Progressive Upscaling, Skip Residual, and Dilated Sampling to improve image quality while ensuring the process remains accessible to a broader audience. Additionally, DemoFusion's progressive approach allows for intermediate "previews" that support quick iterations of image prompts.
Dec 15, 2023 • 5min

arxiv preprint - Recommender Systems with Generative Retrieval

In this episode we discuss Recommender Systems with Generative Retrieval by Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan H. Keshavan, Trung Vu, Lukasz Heldt, Lichan Hong, Yi Tay, Vinh Q. Tran, Jonah Samost, Maciej Kula, Ed H. Chi, Maheswaran Sathiamoorthy. The paper presents a novel generative approach for large-scale retrieval in recommender systems, where a model autoregressively decodes the identifiers (Semantic IDs) of target items. It introduces Semantic IDs, composed of semantically meaningful tuples, to represent items, and uses a Transformer-based sequence-to-sequence model to predict the next item a user will interact with based on their session history. The approach outperforms current state-of-the-art models on multiple datasets and demonstrates improved generalization, effectively retrieving items without prior interactions.
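The decoding idea can be illustrated with a toy example. The catalog, the code tuples, and the per-prefix scores below are all invented; in the paper, a Transformer seq2seq model produces these scores conditioned on the user's session history, and the Semantic IDs come from quantized item embeddings.

```python
# Toy sketch of autoregressive Semantic-ID decoding with a fixed,
# hypothetical score table in place of the trained Transformer.

SEMANTIC_ID = {
    "item_A": (3, 1, 4),
    "item_B": (3, 1, 5),   # shares prefix (3, 1) with item_A: similar items
    "item_C": (9, 2, 6),
}

# Hypothetical next-code scores for each decoded prefix.
SCORES = {
    (): {3: 0.7, 9: 0.3},
    (3,): {1: 1.0},
    (3, 1): {4: 0.2, 5: 0.8},
}

def decode_item() -> str:
    """Greedily emit one code at a time, then map the completed
    Semantic ID back to a catalog item."""
    prefix = ()
    for _ in range(3):  # Semantic IDs here have 3 codes
        choices = SCORES[prefix]
        prefix += (max(choices, key=choices.get),)
    return {v: k for k, v in SEMANTIC_ID.items()}[prefix]
```

Because the codes are semantically meaningful, items sharing a prefix are close in meaning, which is what lets the model generalize to items it has never seen interactions for.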
Dec 14, 2023 • 4min

arxiv preprint - Mamba: Linear-Time Sequence Modeling with Selective State Spaces

In this episode we discuss Mamba: Linear-Time Sequence Modeling with Selective State Spaces by Albert Gu, Tri Dao. The paper presents Mamba, an innovative neural network architecture that outperforms traditional Transformer models, especially in handling very long sequences. Mamba's design incorporates selective structured state space models (SSMs) whose parameters depend on the input tokens, enabling content-based reasoning: the model can selectively propagate or forget information along the sequence. The result is a model with fast inference, linear scaling with sequence length, and state-of-the-art performance in various modalities, including language, audio, and genomics, even surpassing Transformers twice its size.
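The input-dependent ("selective") recurrence can be sketched with a scalar state. The gating form below is invented purely for illustration; Mamba parameterizes a structured SSM and computes the scan with a hardware-aware parallel algorithm.

```python
import math

def selective_scan(xs):
    """h_t = a(x_t) * h_{t-1} + (1 - a(x_t)) * x_t, with an
    input-dependent retention gate a: each token decides, by its own
    content, whether to overwrite the state or leave it alone.
    One pass over the sequence => linear time."""
    h, ys = 0.0, []
    for x in xs:
        a = math.exp(-abs(x))        # toy gate: salient tokens overwrite,
        h = a * h + (1.0 - a) * x    # near-zero tokens preserve the state
        ys.append(h)
    return ys
```

Contrast with a fixed (non-selective) SSM, whose transition is the same at every step: there, the state decays uniformly and cannot hold onto one token while ignoring the next, which is exactly the content-based behavior selectivity restores.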
Dec 13, 2023 • 5min

arxiv preprint - Block-State Transformers

In this episode we discuss Block-State Transformers by Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin. The paper introduces the Block-State Transformer (BST) architecture that merges state space models and block-wise attention to effectively capture long-range dependencies and improve performance on language modeling tasks. The BST incorporates an SSM sublayer for long-range context and a Block Transformer sublayer for local sequence processing, enhancing parallelization and combining the strengths of both model types. Experiments demonstrate the BST's superior performance over traditional Transformers in terms of perplexity, generalization to longer sequences, and a significant acceleration in processing speed due to model parallelization.
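The division of labor between the two sublayers can be caricatured in a few lines. Simple averaging stands in for the Block Transformer's local attention, a scalar recurrence stands in for the SSM sublayer, and all constants are illustrative.

```python
# Toy sketch of the block-state idea: a cheap recurrent summary carries
# long-range context across blocks, while each block is processed locally.

def block_state(xs, block=4, decay=0.9):
    """Each block receives a recurrent summary of everything before it
    (long range) fused with local processing of its own tokens (short
    range). Given the summaries, blocks are independent of each other,
    which is what makes the architecture easy to parallelize."""
    h, out = 0.0, []
    for start in range(0, len(xs), block):
        chunk = xs[start:start + block]
        local = sum(chunk) / len(chunk)   # stand-in for block-wise attention
        out.append(h + local)             # fuse long-range context + local
        for x in chunk:                   # SSM-style state update
            h = decay * h + (1 - decay) * x
    return out
```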
Dec 12, 2023 • 5min

arxiv preprint - Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns

In this episode we discuss Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns by Brian DuSell, David Chiang. The paper introduces stack attention, a novel attention mechanism that incorporates the concept of stacks to help recognize hierarchical and nested syntactic structures, which traditional scaled dot-product attention fails to handle effectively. Two versions of stack attention are presented, one deterministic and one nondeterministic, both aiming to enhance transformers' ability to parse context-free languages (CFLs) without requiring explicit syntactic training data. Experimental results reveal that transformers equipped with stack attention outperform standard transformers on CFLs with complex parsing requirements and also show improvements in natural language modeling and machine translation within a limited parameter setting.
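The kind of computation a stack adds is easiest to see on nested brackets, a pattern flat scaled dot-product attention handles poorly. The function below is a deterministic toy, not the paper's differentiable stack, but it shows the "where am I in the hierarchy" signal a stack makes available.

```python
# Toy demonstration of stack-based hierarchy tracking on bracket strings.

def innermost_open(s: str) -> list[int]:
    """For each position in s, the index of the innermost unmatched '('
    (-1 if none): the nesting context a stack exposes at every step."""
    stack, tops = [], []
    for i, ch in enumerate(s):
        if ch == "(":
            stack.append(i)
        elif ch == ")" and stack:
            stack.pop()
        tops.append(stack[-1] if stack else -1)
    return tops
```

A standard attention head must learn to recover this matching implicitly from pairwise token scores, which breaks down as nesting depth grows; stack attention builds the push/pop structure into the mechanism itself (with a nondeterministic variant for ambiguous, arbitrary CFLs).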
Dec 11, 2023 • 4min

arxiv preprint - LooseControl: Lifting ControlNet for Generalized Depth Conditioning

In this episode we discuss LooseControl: Lifting ControlNet for Generalized Depth Conditioning by Shariq Farooq Bhat, Niloy J. Mitra, Peter Wonka. LOOSECONTROL is introduced as a novel method for depth-conditioned image generation that, unlike the state-of-the-art ControlNet, does not rely on detailed depth maps. It allows content creation by specifying only scene boundaries or 3D box layouts for objects, which can then be refined using 3D box editing or attribute editing techniques. LOOSECONTROL outperforms baseline methods, and, highlighting its potential as a design tool for creating complex scenes, the authors make their code and additional information available online.
Dec 8, 2023 • 2min

Announcement: AI Breakdown Youtube Channel

Welcome back to AI Breakdown! In this special announcement, your hosts Megan and Ray share exciting news - we're expanding to YouTube! This new platform will add a visual dimension to our discussions, bringing AI papers to life with figures, tables, and results. While the podcast will continue as usual, the YouTube channel will offer a more immersive experience, perfect for those who prefer a visual approach to understanding AI. Stay tuned for this new chapter in AI Breakdown, and check out AI Breakdown YouTube Channel!
Dec 8, 2023 • 4min

arxiv preprint - OneLLM: One Framework to Align All Modalities with Language

In this episode we discuss OneLLM: One Framework to Align All Modalities with Language by Jiaming Han, Kaixiong Gong, Yiyuan Zhang, Jiaqi Wang, Kaipeng Zhang, Dahua Lin, Yu Qiao, Peng Gao, Xiangyu Yue. The paper introduces OneLLM, a multimodal large language model that unifies the encoding of eight different modalities to language within a single framework. It uses an image projection module and a universal projection module for multimodal alignment, progressively extending the model's capability to align more modalities. OneLLM excels across 25 multimodal benchmarks and is accompanied by a specially curated multimodal instruction dataset of 2 million items, with code and resources available online.
Dec 8, 2023 • 4min

arxiv preprint - The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

In this episode we discuss The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning by Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi. The paper discusses the effectiveness of traditional alignment tuning methods for large language models (LLMs) and introduces a new, simple tuning-free method named URIAL (Untuned LLMs with Restyled In-context ALignment). Analysis reveals that alignment tuning primarily adjusts the language style without significant transformation of the knowledge base, with the majority of decoding remaining identical to the base LLM. The proposed URIAL method, which utilizes strategic prompting and in-context learning with just a few stylistic examples, achieves comparable or superior performance to models aligned through traditional methods, questioning the necessity of complex alignment tuning and emphasizing the need for deeper understanding of LLM alignment.
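Since the method is tuning-free, it amounts to careful prompt construction: a fixed system-style preamble plus a few restyled in-context exemplars, after which the base (untuned) LLM simply continues in the demonstrated style. The preamble and exemplars below are invented placeholders for URIAL's curated ones.

```python
# Sketch of URIAL-style tuning-free alignment via prompt construction.
# Preamble and exemplar text are hypothetical stand-ins.

PREAMBLE = ("You are a helpful and honest assistant. Answer clearly, "
            "acknowledge uncertainty, and decline unsafe requests.")

EXEMPLARS = [
    ("What is the capital of France?",
     "The capital of France is Paris. Happy to share more detail if useful."),
    ("How do I choose a strong password?",
     "Use a long random passphrase and store it in a password manager."),
]

def urial_prompt(user_query: str) -> str:
    """Prepend the preamble and stylistic exemplars; a base LLM then
    continues the final 'Answer:' in the demonstrated aligned style."""
    blocks = [PREAMBLE]
    for q, a in EXEMPLARS:
        blocks.append(f"Query:\n{q}\n\nAnswer:\n{a}")
    blocks.append(f"Query:\n{user_query}\n\nAnswer:\n")
    return "\n\n".join(blocks)
```

The paper's point is that this handful of stylistic examples recovers most of what expensive alignment tuning provides, consistent with its finding that tuning mostly reshapes style rather than knowledge.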
