Visual Haystacks Benchmark: The First “Visual-Centric” Needle-In-A-Haystack (NIAH) Benchmark to Assess LMMs’ Capability in Long-Context Visual Retrieval and Reasoning

A significant challenge in the field of visual question answering (VQA) is the task of Multi-Image Visual Question Answering (MIQA). This involves generating relevant and grounded responses to natural language queries based on a large set of images. Existing Large Multimodal Models (LMMs) excel in single-image visual question answering but face substantial difficulties when queries…

From Diagrams to Solutions: MAVIS’s Three-Stage Framework for Mathematical AI

Large Language Models (LLMs) and their multi-modal counterparts (MLLMs) have made significant strides in advancing artificial general intelligence (AGI) across various domains. However, these models face a significant challenge in the realm of visual mathematical problem-solving. While MLLMs have demonstrated impressive capabilities in diverse tasks, they struggle to fully utilize their potential when confronted with…

A Decade of Transformation: How Deep Learning Redefined Stereo Matching in the Twenties

A fundamental topic in computer vision for nearly half a century, stereo matching involves computing dense disparity maps from a pair of rectified images. It plays a critical role in many applications, including autonomous driving, robotics, and augmented reality, among many others. Existing surveys categorize end-to-end architectures by their cost-volume computation and optimization methodologies into 2D…
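To make "dense disparity maps from a pair of rectified images" concrete, here is a minimal sketch of the classic local baseline that predates deep stereo networks: for each left-image pixel, slide a small block along the same row of the right image and keep the horizontal shift with the lowest sum of squared differences. This is a generic illustration of the task, not any of the end-to-end architectures the survey covers.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=16, block=5):
    """Local block-matching stereo on rectified grayscale images.

    For each pixel (y, x) in the left image, compare a (block x block)
    patch against right-image patches shifted left by d = 0..max_disp
    along the same row, and record the d with the lowest SSD cost.
    Border pixels (where the block does not fit) are left at 0.
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            # Only disparities whose shifted window stays inside the image.
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

End-to-end networks replace the hand-crafted SSD cost and winner-take-all selection with learned features, a cost volume, and learned aggregation, but the input/output contract (two rectified views in, a dense disparity map out) is the same.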

Enhancing Vision-Language Models: Addressing Multi-Object Hallucination and Cultural Inclusivity for Improved Visual Assistance in Diverse Contexts

The research on vision-language models (VLMs) has gained significant momentum, driven by their potential to revolutionize various applications, including visual assistance for visually impaired individuals. However, current evaluations of these models often pay too little attention to the complexities introduced by multi-object scenarios and diverse cultural contexts. Two notable studies shed light on these…

MG-LLaVA: An Advanced Multi-Modal Model Adept at Processing Visual Inputs of Multiple Granularities, Including Object-Level Features, Original-Resolution Images, and High-Resolution Data

Multi-modal Large Language Models (MLLMs) have various applications in visual tasks. MLLMs rely on the visual features extracted from an image to understand its content. When a low-resolution image with fewer pixels is provided as input, it gives these models less information to work with. Due to this limitation, these models often need to…

CMU Researchers Propose In-Context Abstraction Learning (ICAL): An AI Method that Builds a Memory of Multimodal Experience Insights from Sub-Optimal Demonstrations and Human Feedback

Humans are versatile; they can quickly apply what they’ve learned from a few examples to larger contexts by combining new and old information. Not only can they foresee possible setbacks and determine what is important for success, but they also swiftly learn to adjust to different situations by practicing and receiving feedback on what works. This process…

Convolutional Kolmogorov-Arnold Networks (Convolutional KANs): An Innovative Alternative to the Standard Convolutional Neural Networks (CNNs)

Computer vision, one of the major areas of artificial intelligence, focuses on enabling machines to interpret and understand visual data. This field encompasses image recognition, object detection, and scene understanding. Researchers continuously strive to improve the accuracy and efficiency of neural networks to tackle these complex tasks effectively. Advanced architectures, particularly Convolutional Neural Networks (CNNs),…

Apple Releases 4M-21: A Very Effective Multimodal AI Model that Solves Tens of Tasks and Modalities

Large language models (LLMs) have made significant strides in handling multiple modalities and tasks, but they still fall short in processing diverse inputs and performing a wide range of tasks effectively. The primary challenge lies in developing a single neural network capable of handling a broad spectrum of tasks and modalities while…

TiTok: An Innovative AI Method for Tokenizing Images into 1D Latent Sequences

In recent years, image generation has made significant progress due to advancements in both transformers and diffusion models. Similar to trends in generative language models, many modern image generation models now use standard image tokenizers and de-tokenizers. Despite showing great success in image generation, image tokenizers encounter fundamental limitations due to the way they are…
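To illustrate what a "standard image tokenizer" does, here is a minimal sketch of conventional 2D patch tokenization: the image is cut into a grid of patches and the grid is flattened row by row, so the number of tokens is tied to the spatial resolution ((H/p) × (W/p)). This grid coupling is the kind of constraint that a 1D tokenizer such as TiTok relaxes by compressing the image into a fixed-length latent sequence; the code below is a generic illustration, not TiTok's actual architecture.

```python
import numpy as np

def patchify(img, patch=4):
    """Conventional 2D grid tokenization of an (H, W, C) image.

    Splits the image into non-overlapping (patch x patch) blocks and
    flattens each block into one token vector, yielding an array of
    shape ((H//patch) * (W//patch), patch * patch * C). Token count
    grows with image resolution, unlike a fixed-length 1D latent.
    """
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    # Reorder axes so each (patch x patch x C) block becomes one row.
    blocks = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, c)
    return blocks.transpose(0, 2, 1, 3, 4).reshape(gh * gw, patch * patch * c)
```

A learned tokenizer would additionally map each patch (or, in the 1D case, the whole image) through an encoder into discrete codes, but the resolution-coupled token count of the grid approach is already visible here.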

NVIDIA’s Autoguidance: Improving Image Quality and Variation in Diffusion Models

Improving image quality and variation in diffusion models without compromising alignment with given conditions, such as class labels or text prompts, is a significant challenge. Current methods often enhance image quality at the expense of diversity, limiting their applicability in various real-world scenarios such as medical diagnosis and autonomous driving, where both high quality and…
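The quality-versus-diversity tension comes from how guidance is applied at each sampling step: the main model's prediction is extrapolated away from a weaker reference prediction. In classifier-free guidance the reference is the unconditional model; in autoguidance, the reference is a smaller or less-trained version of the main model itself. A minimal sketch of the shared combination rule (not the paper's full sampler):

```python
import numpy as np

def guided_prediction(d_main, d_ref, w):
    """Combine two denoiser outputs at one sampling step.

    d_main: prediction of the main (conditional / fully trained) model.
    d_ref:  prediction of the weaker reference model (unconditional in
            classifier-free guidance; a degraded copy of the main model
            in autoguidance).
    w:      guidance weight. w = 1 reduces to the main model alone;
            w > 1 pushes samples further in the direction the main
            model improves on the reference.
    """
    return d_ref + w * (d_main - d_ref)
```

Because autoguidance's reference shares the main model's conditioning, the extrapolation targets the main model's errors rather than suppressing conditional diversity, which is how the method aims to improve quality without the usual loss of variation.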
