AI Breakdown

agibreakdown
Jan 2, 2024 • 4min

arxiv preprint - The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction

In this episode, we discuss The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction by Pratyusha Sharma, Jordan T. Ash, Dipendra Misra. The paper presents Layer-Selective Rank Reduction (LASER), an innovative method that enhances Transformer-based Large Language Models (LLMs) by reducing higher-order features in their weight matrices post-training, without adding parameters or data. Extensive experiments show that LASER significantly boosts the performance of various LLMs on multiple datasets. The authors also delve into the theoretical understanding of LASER, examining the conditions under which it is most beneficial and the principles of how it works.
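The core operation behind LASER can be sketched as a truncated SVD of a single weight matrix. This is a minimal illustration, not the paper's implementation, which selects specific layers and weight matrices to reduce; the names below are our own:

```python
import numpy as np

def rank_reduce(weight, rank):
    # Best rank-`rank` approximation of `weight` (in the Frobenius sense)
    # via truncated singular value decomposition.
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))       # stand-in for one MLP weight matrix
w_low = rank_reduce(w, rank=2)    # drop the higher-order components
print(w_low.shape, np.linalg.matrix_rank(w_low))
```

Because the truncation keeps only the largest singular directions, the matrix shape (and so the model's parameter count) is unchanged while the "higher-order" components are zeroed out.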
Dec 29, 2023 • 5min

arxiv preprint - DreaMoving: A Human Video Generation Framework based on Diffusion Models

In this episode we discuss DreaMoving: A Human Video Generation Framework based on Diffusion Models by Mengyang Feng, Jinlin Liu, Kai Yu, Yuan Yao, Zheng Hui, Xiefan Guo, Xianhui Lin, Haolan Xue, Chen Shi, Xiaowen Li, Aojie Li, Xiaoyang Kang, Biwen Lei, Miaomiao Cui, Peiran Ren, Xuansong Xie. DreaMoving is a framework that uses diffusion models to create customized human dance videos, where a target person can be seen performing specific dance moves. It consists of two main components: the Video ControlNet, which oversees motion control, and the Content Guider, which ensures the target individual's identity is maintained throughout the video. The framework is designed to be user-friendly and flexible, allowing for a wide range of video styles and is further detailed on its project page.
Dec 28, 2023 • 4min

arxiv preprint - Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution

In this episode we discuss Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution by Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek, Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert Geirhos, Ibrahim Alabdulmohsin, Avital Oliver, Piotr Padlewski, Alexey Gritsenko, Mario Lučić, Neil Houlsby. The paper introduces NaViT (Native Resolution Vision Transformer), which unlike traditional computer vision models does not require resizing images to a fixed resolution, instead handling arbitrary resolutions and aspect ratios through sequence packing. NaViT demonstrates better training efficiency and can be applied to various standard computer vision tasks, where it also achieves improved robustness and fairness results. This approach allows for flexible input handling at test time, optimizing performance-cost trade-offs, and represents a significant shift from conventional CNN-based computer vision pipelines.
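Sequence packing, the mechanism that lets NaViT skip resizing, can be illustrated with a toy first-fit bin packer over per-image patch counts. This is a sketch only; the real NaViT additionally masks attention between packed examples and supports token dropping, and the function names here are illustrative:

```python
def pack_sequences(lengths, capacity):
    # Greedy first-fit packing: group variable-length patch sequences so
    # each packed training example holds at most `capacity` tokens.
    bins = []  # each bin: [remaining capacity, [sequence indices]]
    for i, n in enumerate(lengths):
        for b in bins:
            if b[0] >= n:
                b[0] -= n
                b[1].append(i)
                break
        else:
            bins.append([capacity - n, [i]])
    return [b[1] for b in bins]

# Images of different resolutions/aspect ratios yield different patch counts.
print(pack_sequences([196, 49, 144, 64, 25], capacity=256))  # [[0, 1], [2, 3, 4]]
```

Packing several short sequences into one fixed-capacity example is what recovers training efficiency despite the variable input sizes.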
Dec 28, 2023 • 5min

arxiv preprint - UniRef++: Segment Every Reference Object in Spatial and Temporal Spaces

In this episode, we discuss UniRef++: Segment Every Reference Object in Spatial and Temporal Spaces by Jiannan Wu, Yi Jiang, Bin Yan, Huchuan Lu, Zehuan Yuan, Ping Luo. The paper introduces UniRef++, a unified architecture designed to address four reference-based object segmentation tasks: referring image segmentation (RIS), few-shot image segmentation (FSS), referring video object segmentation (RVOS), and video object segmentation (VOS). At the core of UniRef++ is the UniFusion module, which enables multiway fusion adjusted to task-specific references, along with a unified Transformer architecture for instance-level segmentation. UniRef++ demonstrates state-of-the-art performance on RIS and RVOS benchmarks, competitive results on FSS and VOS, and can be integrated with existing models, like SAM, for parameter-efficient finetuning.
Dec 27, 2023 • 4min

arxiv preprint - LongNet: Scaling Transformers to 1,000,000,000 Tokens

In this episode we discuss LongNet: Scaling Transformers to 1,000,000,000 Tokens by Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei. LongNet is a new Transformer variant that allows for efficient processing of sequences over 1 billion tokens long using a novel dilated attention mechanism. This mechanism provides linear computational complexity and facilitates scaling, while maintaining performance on shorter sequences. The model is compatible with existing Transformer setups and has shown strong performance in tasks requiring long-sequence modeling and general language tasks, offering the potential to process vast text datasets as a single sequence.
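The dilated attention pattern can be sketched at the index level: split the sequence into segments and, within each segment, let a query attend only to every r-th position, mixing several rates r to cover both near and far context. A toy causal sketch under these assumptions (the actual LongNet also varies segment sizes per rate and computes attention separately over each pattern):

```python
def dilated_indices(seq_len, segment, dilations):
    # For each query position, the set of key positions it may attend to:
    # within each segment, keep every r-th token for each dilation rate r.
    attend = [set() for _ in range(seq_len)]
    for r in dilations:
        for start in range(0, seq_len, segment):
            kept = list(range(start, min(start + segment, seq_len), r))
            for q in kept:
                attend[q].update(k for k in kept if k <= q)  # causal mask
    return attend

print(sorted(dilated_indices(8, 4, [2])[6]))  # [4, 6]
```

Since each segment of width w contributes only (w/r)^2 query-key pairs per rate instead of w^2, the total cost grows linearly in sequence length for fixed segment size.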
Dec 27, 2023 • 4min

arxiv preprint - MotionCtrl: A Unified and Flexible Motion Controller for Video Generation

In this episode, we discuss MotionCtrl: A Unified and Flexible Motion Controller for Video Generation by Zhouxia Wang, Ziyang Yuan, Xintao Wang, Tianshui Chen, Menghan Xia, Ping Luo, Ying Shan. The study introduces MotionCtrl, a novel approach for video generation that can separately regulate camera and object motions, addressing limitations in previous methodologies that lacked precise control over these two motion types. MotionCtrl's design and training strategy reflect the distinct nature of camera and object movements and are less influenced by object appearance, enabling a more nuanced manipulation of motion within generated videos. Experimental results show that MotionCtrl outperforms existing models in its ability to produce diverse and controlled motion dynamics, while also maintaining the capability of adapting to various camera positions and trajectories.
Dec 26, 2023 • 5min

arxiv preprint - Model-tuning Via Prompts Makes NLP Models Adversarially Robust

In this episode we discuss Model-tuning Via Prompts Makes NLP Models Adversarially Robust by Mrigank Raman, Pratyush Maini, J. Zico Kolter, Zachary C. Lipton, Danish Pruthi. The discussed paper presents a new method called Model-tuning Via Prompts (MVP) that significantly improves the adversarial robustness of pretrained language models over the standard multilayer perceptron fine-tuning (MLP-FT) approach. MVP appends a prompt to the input instead of an MLP head, leading to an average 8% performance increase against adversarial attacks across various datasets and models, and even surpassing state-of-the-art defenses by 3.5%. The research suggests that MVP's robustness gains stem from better alignment with pre-training tasks and avoidance of the vulnerabilities introduced by the random initialization of MLP parameters.
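The contrast between the two heads can be sketched in miniature: MLP-FT scores classes with a freshly initialized linear layer over encoder features, while an MVP-style head reads the language model's own logits for label words after appending a prompt. All names and numbers below are illustrative stand-ins, not the paper's code:

```python
def mlp_ft_predict(features, weights, biases):
    # Standard fine-tuning head: randomly initialized linear layer
    # (one weight column and bias per class) over encoder features.
    scores = [sum(f * w for f, w in zip(features, col)) + b
              for col, b in zip(weights, biases)]
    return scores.index(max(scores))

def mvp_predict(vocab_logits, verbalizers):
    # MVP-style head: append a prompt to the input, then compare the
    # model's vocabulary logits for label words ("verbalizers"),
    # reusing pre-training machinery instead of a new random head.
    scores = [vocab_logits[tok] for tok in verbalizers]
    return scores.index(max(scores))

fake_logits = {"terrible": -1.2, "great": 2.3}   # illustrative values
print(mvp_predict(fake_logits, ["terrible", "great"]))  # 1 (positive)
```

The paper's robustness argument maps onto this sketch: the MVP path introduces no randomly initialized parameters, which is one of the vulnerability sources it identifies in MLP-FT.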
Dec 22, 2023 • 5min

arxiv preprint - Training Chain-of-Thought via Latent-Variable Inference

In this episode we discuss Training Chain-of-Thought via Latent-Variable Inference by Du Phan, Matthew D. Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, Rif A. Saurous. The paper introduces a fine-tuning strategy for large language models that improves their problem-solving accuracy by focusing on maximizing the probability of correct answers using chain-of-thought (CoT) prompts without requiring detailed rationale supervision. It tackles the challenge of sampling from the posterior distribution of possible rationales with a novel Markov-chain Monte Carlo (MCMC) expectation-maximization (EM) algorithm, which also incorporates a control-variate technique to reduce variance in gradient estimates. The method outperforms existing fine-tuning methods, including the self-taught reasoner (STaR) and prompt-tuning with CoT, in generating more accurate answers on various complex reasoning tasks.
Dec 21, 2023 • 4min

arxiv preprint - Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation

In this episode we discuss Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation by Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler. The paper presents Marigold, a new method for monocular depth estimation that utilizes the learned priors from generative diffusion models, specifically derived from Stable Diffusion. Marigold is affine-invariant and can be fine-tuned efficiently on synthetic data with a single GPU, offering significant performance improvements, including over 20% gains in certain datasets. The project demonstrates the potential of leveraging the capabilities of generative models for enhancing depth estimation tasks, with a focus on better generalization and state-of-the-art results.
Dec 20, 2023 • 4min

arxiv preprint - Instruction-tuning Aligns LLMs to the Human Brain

In this episode we discuss Instruction-tuning Aligns LLMs to the Human Brain by Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, Antoine Bosselut. The paper examines whether instruction-tuning, a method for fine-tuning large language models (LLMs), makes their processing more human-like through two metrics: brain alignment and behavioral alignment. Results indicate instruction-tuning increases brain alignment with human neural activity by 6% on average but does not significantly impact behavioral alignment. A strong correlation is found between brain alignment and both the size of the model and its performance on tasks requiring world knowledge, suggesting that as LLMs better encode world knowledge, their internal representations align more closely with human brain activity.
