AI Breakdown

agibreakdown
Jan 19, 2024 • 4min

arxiv preprint - MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding

In this episode, we discuss MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding by Hongjie Zhang, Yi Liu, Lu Dong, Yifei Huang, Zhen-Hua Ling, Yali Wang, Limin Wang, Yu Qiao. The newly introduced dataset MoVQA aims to enhance the evaluation of AI systems' understanding of long-form video content, such as movies, addressing the limitations of previous datasets that did not fully capture the complexity and lengthy nature of such content. It challenges AI models with a more realistic range of temporal lengths and multimodal questions to mimic human-level comprehension from a moviegoer's perspective. Initial experiments with MoVQA show that current methods struggle as video and clue lengths increase, indicating substantial room for improvement in long-form video understanding AI research.
Jan 18, 2024 • 4min

arxiv preprint - Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model

In this episode, we discuss Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model by Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang. The paper introduces a new vision backbone called Vim, which leverages bidirectional Mamba blocks for efficient and effective visual representation learning, sidestepping the need for self-attention mechanisms. Vim incorporates position embeddings for handling the position-sensitivity of visual data and uses state space models to handle global context, leading to better performance on various tasks such as ImageNet classification and COCO object detection, while being more computationally and memory efficient than existing models like DeiT. Tests show that Vim is significantly faster and more memory-efficient, making it a promising candidate for advanced vision backbone algorithms, especially for high-resolution image processing.
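The bidirectional scanning idea can be illustrated in miniature. The sketch below is not Vim's actual selective state-space block; it is a toy 1-D linear recurrence run over the token sequence in both directions and summed, just to show how a recurrent scan can give every position access to global context without self-attention. All parameters and shapes are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, a=0.9, b=0.5):
    """Minimal 1-D linear state-space recurrence: h_t = a*h_{t-1} + b*x_t."""
    h, out = 0.0, []
    for xt in x:
        h = a * h + b * xt
        out.append(h)
    return np.array(out)

def bidirectional_block(x):
    """Vim-style idea in miniature: scan the patch sequence forward and
    backward, then sum, so every position conditions on both directions."""
    forward = ssm_scan(x)
    backward = ssm_scan(x[::-1])[::-1]   # reverse, scan, reverse back
    return forward + backward

tokens = np.array([1.0, 2.0, 3.0, 3.0, 2.0, 1.0])
mixed = bidirectional_block(tokens)
```

Because each scan is linear in sequence length, this kind of block scales better than quadratic self-attention, which is the efficiency argument the paper makes for high-resolution inputs.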
Jan 17, 2024 • 4min

arxiv preprint - Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models

In this episode, we discuss Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models by Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, Mor Geva. The paper presents a novel framework named Patchscopes designed to improve understanding of the hidden representations in large language models (LLMs) by using the models themselves to articulate these representations in natural language. Patchscopes integrates and extends existing interpretability techniques, overcoming limitations like the inability to inspect early layers and enhancing expressivity. Beyond reconciling former methods, Patchscopes also enables innovative applications, including having more advanced LLMs explain the workings of simpler ones and facilitating self-correction in complex reasoning tasks.
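The core mechanic of patching can be shown with a toy model. The sketch below uses a small stack of random linear layers rather than a real LLM: a hidden state is recorded from one forward pass and "patched" into a second pass at a chosen layer, which is the basic operation Patchscopes builds on. The layer stack, shapes, and patch position are all illustrative assumptions.

```python
import numpy as np

def forward(x, layers, patch=None):
    """Run a toy layer stack, recording hidden states; optionally overwrite
    ('patch') the input to one layer with a representation from another run."""
    h, hiddens = x, []
    for i, W in enumerate(layers):
        if patch is not None and patch[0] == i:
            h = patch[1]                  # inject the foreign hidden state
        h = np.tanh(W @ h)
        hiddens.append(h)
    return h, hiddens

rng = np.random.default_rng(2)
layers = [rng.standard_normal((4, 4)) for _ in range(3)]
x_src, x_tgt = rng.standard_normal(4), rng.standard_normal(4)

_, src_hiddens = forward(x_src, layers)
# Patch the source run's layer-1 hidden state into the target run at layer 2:
patched_out, _ = forward(x_tgt, layers, patch=(2, src_hiddens[1]))
```

In the real framework, the target pass is a prompt that asks the model to describe the patched representation in natural language; here the patch simply redirects the remaining computation.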
Jan 16, 2024 • 4min

arxiv preprint - Time Travel in LLMs: Tracing Data Contamination in Large Language Models

In this episode, we discuss Time Travel in LLMs: Tracing Data Contamination in Large Language Models by Shahriar Golchin, Mihai Surdeanu. The paper presents a method to detect test data contamination in large language models by checking if the model's output closely matches specific segments of reference data. This process involves guided instructions using dataset names and partition types, comparing the model's output to reference instances, and assessing partitions based on statistical overlap measures or classification by GPT-4's few-shot in-context learning. The results show high accuracy in identifying contamination, revealing that GPT-4 has been contaminated with certain datasets such as AG News, WNLI, and XSum.
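The overlap-based check at the heart of the method can be sketched simply. The snippet below uses a crude word-overlap score as a stand-in for the statistical overlap metrics the paper employs; the threshold and scoring function are illustrative assumptions, not the authors' exact procedure.

```python
def overlap_score(candidate, reference):
    """Crude word-overlap between a model completion and a reference
    instance; a stand-in for the paper's statistical overlap measures."""
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    return len(cand & ref) / max(len(ref), 1)

def looks_contaminated(completion, reference, threshold=0.8):
    """Flag an instance as likely memorized when the guided completion
    reproduces most of the reference text."""
    return overlap_score(completion, reference) >= threshold

reference = "the quick brown fox jumps over the lazy dog"
verbatim = "The quick brown fox jumps over the lazy dog"
unrelated = "an entirely different sentence about nothing similar"
```

A verbatim reproduction scores near 1.0 and is flagged, while unrelated text scores near 0; the paper's guided instructions are what elicit such completions in the first place.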
Jan 12, 2024 • 4min

arxiv preprint - InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes

In this episode, we discuss InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes by Mohamad Shahbazi, Liesbeth Claessens, Michael Niemeyer, Edo Collins, Alessio Tonioni, Luc Van Gool, Federico Tombari. InseRF is a new approach for inserting generated objects into 3D scene reconstructions using NeRF, based on textual descriptions and 2D reference images. This method overcomes the limitations of existing scene editing techniques, which struggle with the generation of new objects, by performing a 2D insertion in a reference view and extrapolating it to 3D with the help of single-view reconstruction and monocular depth estimation priors. Extensive evaluations show that InseRF achieves controllable and 3D-consistent object insertions, outperforming current methods, and it does so without needing explicit 3D models as input.
Jan 11, 2024 • 4min

arxiv preprint - A Simple LLM Framework for Long-Range Video Question-Answering

In this episode, we discuss A Simple LLM Framework for Long-Range Video Question-Answering by Ce Zhang, Taixi Lu, Md Mohaiminul Islam, Ziyang Wang, Shoubin Yu, Mohit Bansal, Gedas Bertasius. The LLoVi framework innovates in long-range video question-answering (LVQA) by combining visual captioners with Large Language Models (LLMs) such as GPT-3.5 or GPT-4, foregoing complex long-range video modeling structures. Short video clips from a long video are captioned and these captions are then synthesized by an LLM to answer questions over the entire video length, proving more effective at LVQA than previous methods. In benchmarks, LLoVi notably outperformed previous best-performing approaches on several datasets, such as EgoSchema, NeXT-QA, IntentQA, and NeXT-GQA, and the code for LLoVi will be made publicly available.
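The pipeline is simple enough to sketch end to end. The snippet below wires a captioner and an LLM together the way LLoVi does; the stub functions stand in for real models (e.g., a video captioner and GPT-4), and the prompt format is an illustrative assumption rather than the paper's exact template.

```python
def answer_long_video(clips, caption_fn, llm_fn, question):
    """LLoVi-style pipeline: caption each short clip independently, then
    give the concatenated captions plus the question to a text-only LLM."""
    captions = [f"[clip {i}] {caption_fn(c)}" for i, c in enumerate(clips)]
    prompt = "\n".join(captions) + f"\nQuestion: {question}\nAnswer:"
    return llm_fn(prompt)

# Stub captioner and LLM stand in for real models (purely illustrative).
prompt_log = []
def stub_llm(prompt):
    prompt_log.append(prompt)
    return "stub answer"

answer = answer_long_video(["clip-a", "clip-b"],
                           lambda c: f"a caption of {c}",
                           stub_llm,
                           "What happens across the video?")
```

The design choice is that all long-range reasoning happens in the LLM over text, so no long-range video architecture is needed.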
Jan 9, 2024 • 4min

arxiv preprint - Mixtral of Experts

In this episode, we discuss Mixtral of Experts by Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. Mixtral 8x7B is a Sparse Mixture of Experts (SMoE) language model, building on Mistral 7B's architecture with 8 experts per layer, among which two experts are selected per token for processing, allowing access to 47B parameters but using only 13B actively. It excels in benchmarks, surpassing Llama 2 70B and GPT-3.5, especially in areas like math, code generation, and multilingual tasks. A special instruction-following version called Mixtral 8x7B – Instruct also outperforms leading models, with both models being open-sourced under the Apache 2.0 license.
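The top-2 routing that makes only a fraction of parameters active per token can be sketched in a few lines. This is a minimal NumPy illustration of sparse expert selection, not Mixtral's implementation: the gating network, expert shapes, and dimensions here are toy assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def moe_layer(token, gate_w, experts, k=2):
    """Sparse MoE: score all experts, run only the top-k on this token,
    and combine their outputs weighted by renormalized gate values."""
    logits = gate_w @ token                  # one score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = softmax(logits[top])           # renormalize over the selected k
    return sum(w * experts[i](token) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d = 4
# Eight toy experts, each a fixed random linear map (illustrative only).
experts = [lambda x, W=rng.standard_normal((d, d)): W @ x for _ in range(8)]
gate_w = rng.standard_normal((8, d))
out = moe_layer(rng.standard_normal(d), gate_w, experts)
```

With 8 experts and k=2, each token touches only a quarter of the expert parameters, which is how the model holds 47B parameters while activating about 13B.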
Jan 8, 2024 • 4min

arxiv preprint - Weight subcloning: direct initialization of transformers using larger pretrained ones

In this episode, we discuss Weight subcloning: direct initialization of transformers using larger pretrained ones by Mohammad Samragh, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Fartash Faghri, Devang Naik, Oncel Tuzel, Mohammad Rastegari. The paper introduces a new method called weight subcloning to expedite the training of small transformer models by initializing them with weights from larger pretrained models. This method ranks neurons by importance to reduce dimensions and removes blocks to align with the smaller model's layer count, resulting in significantly faster training times. Weight subcloning allows the transfer of knowledge from larger to smaller models, improving speed and potentially accuracy without the need for a pretrained model of the exact desired size.
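The width-reduction step can be sketched for a single linear layer. The snippet below keeps the rows and columns of a larger weight matrix whose weights have the largest L2 norm; the norm-based importance criterion here is a simple illustrative assumption, as the paper's exact ranking may differ.

```python
import numpy as np

def subclone_linear(W_big, n_out_small, n_in_small):
    """Initialize a smaller linear layer from a larger pretrained one by
    keeping the output rows and input columns with the largest weight
    magnitudes (L2 norm used as a stand-in importance score)."""
    out_scores = np.linalg.norm(W_big, axis=1)   # importance per output neuron
    in_scores = np.linalg.norm(W_big, axis=0)    # importance per input neuron
    keep_out = np.sort(np.argsort(out_scores)[-n_out_small:])
    keep_in = np.sort(np.argsort(in_scores)[-n_in_small:])
    return W_big[np.ix_(keep_out, keep_in)]      # sliced initialization

W_big = np.random.default_rng(1).standard_normal((16, 16))
W_small = subclone_linear(W_big, 8, 8)
```

Depth is reduced separately by dropping whole transformer blocks until the layer count matches the target model, and the sliced weights then serve as the initialization rather than random values.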
Jan 5, 2024 • 5min

arxiv preprint - Compositional Abilities Emerge Multiplicatively: Exploring Diffusion Models on a Synthetic Task

In this episode, we discuss Compositional Abilities Emerge Multiplicatively: Exploring Diffusion Models on a Synthetic Task by Maya Okawa, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka. The paper investigates how conditional diffusion models generalize compositionally by studying their ability to generate novel data combinations within a controlled synthetic environment. Key discoveries include that compositional ability hinges on the structure of the data-generating process, and that compositional performance emerges suddenly once proficiency on the individual constituent tasks is reached. The findings also show that concepts seen rarely during training are harder to compose into novel outputs, shedding light on generative models' capabilities from the perspective of data availability and structure.
Jan 5, 2024 • 4min

arxiv preprint - LLM in a flash: Efficient Large Language Model Inference with Limited Memory

In this episode, we discuss LLM in a flash: Efficient Large Language Model Inference with Limited Memory by Keivan Alizadeh, Iman Mirzadeh, Dmitry Belenko, Karen Khatamifard, Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, Mehrdad Farajtabar. The paper introduces an approach to operate large language models (LLMs) efficiently on devices with limited DRAM by using flash memory to store and selectively load model parameters. It proposes an inference cost model specific to flash memory to optimize data transfers and introduces "windowing" and "row-column bundling" techniques to improve data read efficiency. By implementing these strategies, the paper demonstrates that LLMs up to twice the size of the DRAM can be run 4-5 times faster on CPU and 20-25 times faster on GPU compared to standard loading methods, while also incorporating sparsity and context-awareness for enhanced performance.
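The "windowing" idea of keeping recently used parameters in fast memory can be sketched as a cache in front of a slow store. The snippet below uses a plain LRU policy as a crude stand-in for the paper's sliding-window scheme; the capacity, the per-neuron granularity, and the simulated flash read are all illustrative assumptions.

```python
from collections import OrderedDict

class WindowedWeightCache:
    """Sketch of windowing: keep recently activated neuron weights resident
    in fast memory (DRAM) and read from the slow store (flash) only on a
    miss. A simple LRU approximates the sliding-window reuse policy."""
    def __init__(self, load_from_flash, capacity=4):
        self.load_from_flash = load_from_flash  # simulated flash read
        self.capacity = capacity
        self.cache = OrderedDict()
        self.flash_reads = 0

    def get(self, neuron_id):
        if neuron_id not in self.cache:
            self.flash_reads += 1               # miss: pay the flash cost
            self.cache[neuron_id] = self.load_from_flash(neuron_id)
        self.cache.move_to_end(neuron_id)       # mark as most recently used
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return self.cache[neuron_id]

cache = WindowedWeightCache(lambda nid: float(nid), capacity=4)
for nid in [1, 2, 3, 1, 2, 3]:   # repeated activations hit the cache
    cache.get(nid)
```

Because consecutive tokens tend to activate overlapping sets of neurons, most lookups hit the cache, which is what lets flash-resident models approach DRAM-resident throughput.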
