The Segment Anything Model (SAM) is an AI model that segments objects in images, supporting tasks such as object detection and recognition. It is an effective solution for a wide range of computer vision tasks. However, SAM is not optimized for edge devices, which can lead to degraded performance and high resource consumption. Researchers from S-Lab Nanyang Technological University and Shanghai Artificial Intelligence…
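For context, here is a minimal sketch of prompt-based segmentation with the official `segment-anything` package; the checkpoint filename, image path, and point coordinates are assumptions for illustration.

```python
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM backbone; the checkpoint path is an assumption for this sketch.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Read an image (BGR -> RGB, since SamPredictor expects RGB arrays).
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground point (hypothetical coordinates).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),  # 1 marks a foreground point
    multimask_output=True,       # return several candidate masks
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring mask
```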
Researchers from Carnegie Mellon University and Google DeepMind have collaborated to develop RoboTool, a system that leverages Large Language Models (LLMs) to give robots the ability to use tools creatively in tasks involving implicit physical constraints and long-term planning. The system comprises four key components (a pipeline sketch follows the list):
Analyzer for interpreting natural language
Planner for generating strategies
Calculator…
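A minimal sketch of how such a multi-stage LLM pipeline could be wired together, assuming a generic `query_llm` helper; all prompts, function names, and the component wiring here are hypothetical, not RoboTool's actual implementation.

```python
# Hypothetical sketch of a multi-stage LLM pipeline in the spirit of RoboTool.
def query_llm(system_prompt: str, user_input: str) -> str:
    # Placeholder for any chat-completion API call.
    raise NotImplementedError("plug in your LLM client here")

def robotool_pipeline(task_description: str) -> str:
    # Analyzer: extract the key concepts and physical constraints from the task text.
    analysis = query_llm(
        "Identify the physical constraints relevant to this task.", task_description
    )
    # Planner: propose a high-level strategy, possibly with creative tool use.
    plan = query_llm(
        "Propose a step-by-step strategy, reusing objects as tools.", analysis
    )
    # Calculator: fill in concrete numeric parameters (positions, trajectories).
    parameters = query_llm(
        "Compute concrete numeric parameters for each step.", plan
    )
    # A further stage would turn the parameterized plan into executable robot commands.
    return parameters
```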
Generative foundation models are a class of artificial intelligence models designed to generate new data that resembles the data they were trained on. These models are employed in many fields, including natural language processing, computer vision, and music generation. They learn the underlying patterns and structures from the training data…
Many branches of biology, including ecology, evolutionary biology, and biodiversity research, are increasingly turning to digital imagery and computer vision as research tools. Modern technology has greatly improved researchers' capacity to analyze large volumes of images from museums, camera traps, and citizen science platforms. These data can then be used for species delineation, understanding adaptation mechanisms,…
Large Vision-Language Models (LVLMs) combine computer vision and natural language processing to generate text descriptions of visual content. These models have shown remarkable progress in various applications, including image captioning, visual question answering, and image retrieval. However, despite their impressive performance, LVLMs still face some challenges, particularly when it comes to specialized tasks that require…
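As a concrete illustration of the image-captioning use case, here is a minimal sketch using the BLIP model from Hugging Face `transformers` (one vision-language model among many; the checkpoint name and image path are assumptions):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load a publicly available captioning model (checkpoint assumed for this sketch).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image

# The processor turns the image into model inputs; generate() decodes a caption.
inputs = processor(image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```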
Diffusion models have proven highly successful at producing high-quality images from text prompts. This text-to-image (T2I) paradigm has been successfully applied to several downstream applications, including depth-driven image generation and subject/segmentation identification. Text-conditioned Latent Diffusion Models (LDMs), often called Stable Diffusion, which typically rely on CLIP text encoders, are essential…
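To make the T2I workflow concrete, here is a minimal sketch using the `diffusers` library with a Stable Diffusion checkpoint; the model ID, prompt, and the assumption of a CUDA GPU are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (model ID assumed for this sketch).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU is available

# A single text prompt drives the whole generation.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```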
It isn’t easy to generate detailed and realistic 3D models from a single RGB image. To address this, researchers from Shanghai AI Laboratory, The Chinese University of Hong Kong, Shanghai Jiao Tong University, and S-Lab NTU have presented HyperDreamer, a framework that enables the creation of 3D content that is viewable,…
Volumetric recording and realistic rendering of 4D (spacetime) human performance dissolve the barriers between spectators and performers, offering a variety of immersive VR/AR experiences such as telepresence and tele-education. Some early systems explicitly use nonrigid registration to reconstruct textured models from recorded footage. However, they remain susceptible to occlusions and texture deficiencies, which…
In a groundbreaking move, researchers at Meta AI have tackled the longstanding challenge of achieving high-fidelity relighting for dynamic 3D head avatars. Traditional methods have often fallen short in capturing the intricate details of facial expressions, especially in real-time applications where efficiency is paramount. Meta AI’s research team has responded to this challenge…
Transformers were first introduced in natural language processing, where they quickly rose to prominence as the primary architecture. More recently, they have gained immense popularity in computer vision as well. Dosovitskiy et al. demonstrated how to create effective image classifiers that beat CNN-based architectures at large model and data scales by dividing images into sequences of…
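To illustrate the patch-based input scheme this excerpt refers to, here is a minimal patch-embedding sketch in PyTorch; the dimensions and class name are illustrative, not the exact ViT reference code.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches and project each to an embedding."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is the standard trick: one kernel application per patch.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, 768): a sequence of patch tokens

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```

The resulting token sequence is what a standard Transformer encoder consumes, which is how image classification is recast as sequence modeling.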