AI Breakdown

agibreakdown
Feb 2, 2024 • 4min

arxiv preprint - Tree Prompting: Efficient Task Adaptation without Fine-Tuning

In this episode, we discuss Tree Prompting: Efficient Task Adaptation without Fine-Tuning by John X. Morris, Chandan Singh, Alexander M. Rush, Jianfeng Gao, Yuntian Deng. Tree Prompting is a novel method for interacting with smaller language models (LMs) that creates a decision tree of prompts to guide the model's responses. This technique significantly enhances accuracy on tasks compared to traditional prompting methods and rivals the performance of gradient-based fine-tuning. Additionally, some versions of Tree Prompting provide insights into the LM's decision-making process.
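The core mechanism can be sketched as a small decision tree whose internal nodes are prompts: the LM's answer at each node routes the input left or right until a leaf label is reached. This is a purely illustrative sketch, not the paper's implementation; `fake_lm`, the prompts, and the labels are all invented stand-ins for real model calls.

```python
def fake_lm(prompt: str) -> str:
    # Toy stand-in for an LM call: answers "yes" if "great" appears in the prompt.
    return "yes" if "great" in prompt.lower() else "no"

def tree_prompt_classify(text: str, node, lm=fake_lm) -> str:
    # Internal nodes are dicts holding a prompt and yes/no branches;
    # leaves are plain label strings.
    while isinstance(node, dict):
        answer = lm(node["prompt"].format(text=text))
        node = node["yes"] if answer == "yes" else node["no"]
    return node

# A depth-2 decision tree of prompts for toy sentiment routing.
tree = {
    "prompt": "Is this review positive? Review: {text}",
    "yes": "positive",
    "no": {
        "prompt": "Is this review strongly negative? Review: {text}",
        "yes": "negative",
        "no": "neutral",
    },
}

print(tree_prompt_classify("A great film!", tree))  # routed to "positive" at the root
```

Because every prediction is a path of answered prompts, the route itself doubles as an explanation of the decision, which is the interpretability angle the summary mentions.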
Feb 1, 2024 • 3min

arxiv preprint - Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens

In this episode, we discuss Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens by Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi. The paper introduces an improved n-gram language model named "Infini-gram," which scales to 1.4 trillion tokens and has the capacity to use n-grams of arbitrary length. The authors develop a suffix array-powered engine called infini-gram that calculates probabilities for these extended n-grams quickly, without the need for pre-computing count tables. This new framework demonstrated its utility by enhancing the performance of neural large language models and revealing limitations in machine-generated text, and the authors have made the engine available as an open-source tool for further research.
Jan 31, 2024 • 4min

arxiv preprint - Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning

In this episode, we discuss Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning by Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang. This paper introduces LRV-Instruction, a diverse dataset designed for visual instruction tuning with a focus on mitigating hallucination in large multi-modal models (LMMs). The dataset contains 400k visual instructions generated by GPT-4 and includes both positive and negative instructions, structured at different semantic levels of complexity, to increase robustness. The authors propose GAVIE, an evaluation method that mimics human expert assessment without needing annotated ground truth, and demonstrate that training on the LRV-Instruction dataset, with an appropriate mix of positive and negative samples, reduces LMM hallucinations and improves performance across several tasks.
Jan 30, 2024 • 4min

arxiv preprint - RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture

In this episode, we discuss RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture by Angels Balaguer, Vinamra Benara, Renato Luiz de Freitas Cunha, Roberto de M. Estevão Filho, Todd Hendry, Daniel Holstein, Jennifer Marsman, Nick Mecklenburg, Sara Malvar, Leonardo O. Nunes, Rafael Padilha, Morris Sharp, Bruno Silva, Swati Sharma, Vijay Aski, Ranveer Chandra. The paper explores two methods of integrating specialized data into Large Language Models (LLMs): Retrieval-Augmented Generation (RAG), which adds external data to the input, and Fine-Tuning, which embeds the data into the model itself. A multi-stage pipeline for these methods is tested on an agricultural dataset to evaluate their effectiveness in providing geographically tailored insights to farmers. Results indicate substantial improvements in accuracy (over 6 percentage points with Fine-Tuning and an additional 5 with RAG), with fine-tuned models effectively using cross-regional information, showcasing the potential for LLMs to be customized for industry-specific applications.
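The RAG half of the comparison can be sketched in a few lines: score each document against the question, prepend the best match to the prompt, and hand the augmented prompt to the model. This is a minimal sketch with invented example documents; a real pipeline like the paper's would use dense embeddings and a vector index rather than word overlap, and the final LLM call is omitted here.

```python
# Toy document store; in practice these would be chunks of domain documents.
docs = [
    "Winter wheat in this region is typically planted in October.",
    "Corn yields improve with nitrogen applied before tasseling.",
]

def retrieve(question, documents, k=1):
    # Rank documents by word overlap with the question (toy retriever).
    def overlap(doc):
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def build_prompt(question, documents):
    # Prepend the retrieved context so the model can ground its answer in it.
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("When is winter wheat planted?", docs)
print(prompt)
```

Fine-tuning, by contrast, bakes the same domain knowledge into the weights, which is why the paper finds the two approaches complementary.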
Jan 29, 2024 • 3min

arxiv preprint - SliceGPT: Compress Large Language Models by Deleting Rows and Columns

In this episode, we discuss SliceGPT: Compress Large Language Models by Deleting Rows and Columns by Saleh Ashkboos, Maximilian L. Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, James Hensman. The paper introduces SliceGPT, a new method for post-training sparsification of large language models that reduces their size and computational requirements by replacing weight matrices with smaller ones, thus cutting down the embedding dimension. This approach can eliminate up to 25% of parameters in certain models with minimal loss in task performance. The authors highlight computational invariance in transformer networks, which SliceGPT exploits, and demonstrate that sliced models run faster and on fewer GPUs without additional optimization; code for the method is released in a GitHub repository.
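The shape reduction at the heart of the method can be illustrated in a few lines: rotate a layer's weights with an orthogonal matrix derived from sample activations (here, plain PCA), then delete the trailing rows and columns so the matrix genuinely shrinks. This toy sketch shows only the rotate-then-slice step on random data, not the paper's full procedure or its invariance bookkeeping across layers.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_small = 8, 6                       # embedding dim, sliced dim (25% cut)
W = rng.normal(size=(d, d))             # a layer's weight matrix
X = rng.normal(size=(100, d))           # sample activations entering the layer

# Orthogonal basis from PCA of the activations (principal directions first).
_, _, Q = np.linalg.svd(X - X.mean(0), full_matrices=False)

# Rotate, then slice: keep only the top d_small principal directions,
# so the stored matrix shrinks from (8, 8) to (6, 6).
W_sliced = Q[:d_small] @ W @ Q[:d_small].T

print(W.shape, "->", W_sliced.shape)
```

Because the rotation is orthogonal, inserting it changes nothing about the network's function (the "computational invariance" the summary mentions); the approximation error comes only from the deleted directions.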
Jan 26, 2024 • 4min

arxiv preprint - Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video

In this episode, we discuss Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video by Shashanka Venkataramanan, Mamshad Nayeem Rizve, João Carreira, Yuki M. Asano, Yannis Avrithis. The paper presents two innovations in self-supervised learning: a new dataset called "Walking Tours," featuring high-resolution, long-duration, first-person videos well suited to self-supervision, and a novel pretraining method called DoRA, which uses transformer cross-attention to discover and track objects across video frames. Rather than adapting image-based pretraining to videos, this method learns by tracking objects over time. The researchers found that their approach, combining the Walking Tours dataset with DoRA, performed comparably to ImageNet pretraining on various image and video recognition tasks, showcasing the efficiency of their method.
Jan 25, 2024 • 4min

arxiv preprint - MambaByte: Token-free Selective State Space Model

In this episode, we discuss MambaByte: Token-free Selective State Space Model by Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, Alexander M Rush. MambaByte is a token-free language model that removes the bias associated with subword tokenization by learning directly from raw bytes. It capitalizes on the Mamba state space model's suitability for long byte sequences, offering computational efficiency and often outperforming traditional subword Transformers despite the increased sequence length. Thanks to its linear scaling in sequence length, MambaByte also achieves faster inference, demonstrating the potential of efficient token-free language modeling.
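The input side of token-free modeling is simple enough to show directly: the model consumes raw bytes, so "tokenization" is just encoding the text as UTF-8 with no subword vocabulary or merge rules. The example below illustrates why sequences grow relative to subword tokenizers, especially for non-ASCII text.

```python
text = "naïve"
byte_ids = list(text.encode("utf-8"))   # model inputs: integers in 0..255

# The non-ASCII "ï" encodes to two bytes, so 5 characters become 6 byte tokens.
decoded = bytes(byte_ids).decode("utf-8")
print(byte_ids, decoded)
```

The fixed 256-symbol "vocabulary" is what eliminates tokenizer bias, at the cost of the longer sequences that Mamba's state-space architecture handles efficiently.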
Jan 24, 2024 • 4min

arxiv preprint - Lumiere: A Space-Time Diffusion Model for Video Generation

In this episode, we discuss Lumiere: A Space-Time Diffusion Model for Video Generation by Omer Bar-Tal, Hila Chefer, Omer Tov, Charles Herrmann, Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Yuanzhen Li, Tomer Michaeli, Oliver Wang, Deqing Sun, Tali Dekel, Inbar Mosseri. The paper presents Lumiere, a novel text-to-video diffusion model capable of generating realistic and coherently moving videos by producing the full temporal sequence in a single pass, using a Space-Time U-Net architecture. Unlike other methods that create videos by interpolating between keyframes, Lumiere ensures global temporal consistency by using spatial and temporal down- and up-sampling. The model shows superior performance in text-to-video generation and is versatile, allowing for content creation tasks such as image-to-video conversion, video inpainting, and stylized video generation.
Jan 23, 2024 • 3min

arxiv preprint - Self-Rewarding Language Models

In this episode, we discuss Self-Rewarding Language Models by Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, Jason Weston. The paper introduces self-rewarding language models (SR-LMs) which generate their own rewards for self-improvement beyond human performance levels. Using a method called Iterative Direct Preference Optimization, SR-LMs can enhance their ability to follow instructions and improve the quality of self-generated rewards through iteration. The authors demonstrate that their approach, when applied to Llama 2 70B, exceeds the performance of other systems on the AlpacaEval 2.0 leaderboard, suggesting potential for models to self-improve continuously.
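The iterative loop can be sketched schematically: the model generates candidate responses, scores them itself (LLM-as-a-judge), turns the best and worst into a preference pair, and is updated on those pairs (DPO in the paper). The `generate`, `self_judge`, and `dpo_update` functions below are hypothetical placeholder stubs that only trace the control flow, not the paper's implementation.

```python
def generate(model, prompt, n=4):
    # Stub: a real model would sample n candidate responses.
    return [f"{model}:{prompt}:v{i}" for i in range(n)]

def self_judge(model, prompt, response):
    # Stub reward: the model scoring its own output on a 0-5 scale.
    return len(response) % 6

def dpo_update(model, pairs):
    # Stub for a DPO training step; here it just versions the model name.
    return model + "+"

def self_rewarding(model, prompts, iterations=2):
    for _ in range(iterations):
        pairs = []
        for p in prompts:
            candidates = generate(model, p)
            ranked = sorted(candidates, key=lambda r: self_judge(model, p, r))
            pairs.append((ranked[-1], ranked[0]))  # (chosen, rejected)
        model = dpo_update(model, pairs)  # next iteration uses the updated model
    return model
```

The key property is that the judge improves along with the generator, since both are the same model, which is what lets the loop keep yielding better preference data each round.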
Jan 22, 2024 • 4min

arxiv preprint - Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data

In this episode, we discuss Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. "Depth Anything" is an approach to improve monocular depth estimation by exploiting a massive collection of about 62 million unlabeled images, aiming to extend dataset size and reduce generalization error without the need for novel technical developments. The model's performance is improved through strategic data augmentation and the incorporation of semantic knowledge from pre-trained encoders, leading to exceptional zero-shot generalization demonstrated on various public datasets and random images. By additionally fine-tuning with metric depth data, the model sets new benchmarks on the NYUv2 and KITTI datasets and enhances the efficacy of a depth-conditioned ControlNet, with all models released for public use.
