The evolution of Vision Language Models (VLMs) towards general-purpose models relies on their ability to understand images and perform tasks via natural language instructions. However, it remains unclear whether current VLMs truly grasp detailed object information in images. The analysis shows that their image understanding is strongly correlated with their zero-shot performance on vision-language tasks.…
Multimodal Large Language Models (MLLMs) have driven remarkable progress in AI, yet they struggle to accurately process and respond to misleading information, which can lead to incorrect or hallucinated responses. This vulnerability raises concerns about the reliability of MLLMs in applications where accurate interpretation of textual and visual data is crucial.
Recent research has explored visual instruction…
In 3D reconstruction and generation, techniques that balance visual richness with computational efficiency are paramount. Even effective methods such as Gaussian Splatting have significant limitations, particularly in handling high-frequency signals and sharp edges, owing to the inherent low-pass characteristics of Gaussian kernels. This limitation degrades the quality of rendered scenes and imposes a substantial memory footprint,…
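As a side note grounded in standard signal processing rather than taken from the excerpt itself, the low-pass behavior follows from the fact that the Fourier transform of a Gaussian kernel is itself a Gaussian:

$$
\mathcal{F}\left\{ e^{-x^2/(2\sigma^2)} \right\}(\omega) \;\propto\; e^{-\sigma^2 \omega^2 / 2}.
$$

Frequencies above roughly $1/\sigma$ are attenuated exponentially, so reproducing a sharp edge requires many small-$\sigma$ Gaussians, which is consistent with the memory cost the excerpt mentions.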
The advent of Augmented Reality (AR) and wearable Artificial Intelligence (AI) devices marks a significant advance in human-computer interaction. With these devices facilitating data collection, there are new opportunities to develop highly contextualized and personalized AI assistants that function as an extension of the wearer's cognitive processes.
Existing multimodal AI assistants, like…
Text-to-image (T2I) and text-to-video (T2V) generation have made significant strides in generative modeling. While T2I models can control subject identity well, extending this capability to T2V remains challenging. Existing T2V methods lack precise control over generated content, particularly for identity-specific generation in human-centric scenarios. Efforts to leverage T2I advances for video generation struggle to maintain…
There has been a recent uptick in the development of general-purpose multimodal AI assistants capable of following visual and textual instructions, thanks to the remarkable success of Large Language Models (LLMs). By combining the impressive reasoning capabilities of LLMs with information found in vast alignment corpora (such as image-text pairs), these assistants demonstrate the immense potential…
The intersection of artificial intelligence and creativity has witnessed an exceptional breakthrough in the form of text-to-image (T2I) diffusion models. These models, which convert textual descriptions into visually compelling images, have broadened the horizons of digital art, content creation, and more. Yet the rapidly evolving field of personalized T2I generation grapples with several core…
Deep learning has revolutionized view synthesis in computer vision, yielding diverse approaches such as NeRF and end-to-end architectures. Traditionally, explicit 3D representations like voxels, point clouds, or meshes were employed. NeRF-based techniques instead represent 3D scenes implicitly with MLPs. Recent advances focus on image-to-image approaches that generate novel views directly from collections of scene images. These methods often…
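For background, this is the standard NeRF formulation rather than a detail from the excerpt: an MLP maps a 3D position and viewing direction $\mathbf{d}$ to a volume density $\sigma$ and color $\mathbf{c}$, and a pixel's color is composited along the camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ via the volume-rendering integral

$$
\hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt,
\qquad
T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds \right).
$$

In practice the integral is approximated by quadrature over samples along the ray, and the MLP weights are optimized by minimizing the photometric error between rendered and observed pixels.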
MoD-SLAM is a state-of-the-art Simultaneous Localization and Mapping (SLAM) method. Achieving real-time, accurate, and scalable dense mapping is a long-standing challenge for SLAM systems. To address it, the researchers introduce a novel method targeting unbounded scenes using only RGB images. Existing neural SLAM methods often rely on RGB-D input, which leads…
The task of view synthesis is essential in both computer vision and graphics, enabling the re-rendering of scenes from various viewpoints, much as the human eye does. This capability is vital for everyday tasks and fosters creativity by enabling the envisioning and crafting of immersive objects with depth and perspective. Researchers at Dyson Robotics Lab aim…