
Combining Large and Small LLMs to Boost Inference Time and Quality | by Richa Gadgil | Dec, 2024

Implementing Speculative and Contrastive Decoding

Large Language Models comprise billions of parameters (weights). For each word it generates, the model has to perform computationally expensive calculations across all of these parameters. An LLM accepts a sentence, or sequence of tokens, and produces a probability distribution over the next most likely token. Thus,…
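Below is a minimal sketch of the speculative decoding idea the teaser describes: a small draft model cheaply proposes a few tokens, and the large target model verifies them. The toy bigram "models" and the helper names (draft_probs, target_probs, speculative_step) are invented for illustration and are not from the article.

```python
import numpy as np

VOCAB, K = 50, 4  # toy vocabulary size, number of drafted tokens per round
rng = np.random.default_rng(0)
W_small = rng.standard_normal((VOCAB, VOCAB))  # stand-in for the small draft model
W_large = rng.standard_normal((VOCAB, VOCAB))  # stand-in for the large target model

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def draft_probs(tokens):
    """Next-token distribution from the cheap draft model (toy bigram)."""
    return softmax(W_small[tokens[-1]])

def target_probs(tokens):
    """Next-token distribution from the expensive target model (toy bigram)."""
    return softmax(W_large[tokens[-1]])

def speculative_step(tokens):
    """One round: draft K tokens with the small model, verify with the large one."""
    drafted, q_probs, ctx = [], [], list(tokens)
    for _ in range(K):                       # cheap autoregressive drafting
        q = draft_probs(ctx)
        t = int(rng.choice(VOCAB, p=q))
        drafted.append(t)
        q_probs.append(q)
        ctx.append(t)

    accepted = list(tokens)
    for t, q in zip(drafted, q_probs):
        p = target_probs(accepted)           # target model scores the same prefix
        if rng.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)               # draft token accepted
        else:
            residual = np.maximum(p - q, 0)  # resample from the corrected distribution
            residual /= residual.sum()
            accepted.append(int(rng.choice(VOCAB, p=residual)))
            break                            # stop at the first rejection
    # (The full algorithm samples one extra target token when all K drafts are
    # accepted; omitted here for brevity.)
    return accepted

tokens = [0]
for _ in range(5):
    tokens = speculative_step(tokens)
print(tokens)
```

In a real implementation the target model scores all K drafted positions in a single batched forward pass, which is where the latency saving comes from; the toy version scores one prefix at a time only for readability.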

Read More

ShowUI: A Vision-Language-Action Model for GUI Visual Agents that Addresses Key Challenges in UI Visual and Action Modeling

Large Language Models (LLMs) have demonstrated remarkable potential for performing complex tasks when used to build intelligent agents. As individuals increasingly engage with the digital world, these models serve as virtual embodied interfaces for a wide range of daily activities. The emerging field of GUI automation aims to develop intelligent agents that can significantly streamline human workflows…

Read More

Smaller is smarter. Do you really need the power of top… | by Alexandre Allouin | Dec, 2024

Concerns about the environmental impact of Large Language Models (LLMs) are growing. Although detailed information about the actual costs of LLMs can be difficult to find, let's attempt to gather some facts to understand the scale. Since comprehensive data on ChatGPT-4 is not readily available, we can consider Llama 3.1 405B as an…

Read More

Researchers from NVIDIA and MIT Present SANA: An Efficient High-Resolution Image Synthesis Pipeline that Could Generate 4K Images from a Laptop

Diffusion models have pulled ahead of other approaches in text-to-image generation. With continuous research in this field over the past year, we can now generate high-resolution, realistic images that are indistinguishable from authentic ones. However, as the quality of these hyperrealistic images increases, model parameter counts are also escalating, and this trend results in high training and…

Read More

A new era of discovery

AI is revolutionizing the landscape of scientific research, enabling advancements at a pace that was once unimaginable — from accelerating drug discovery to designing new materials for clean energy technologies. The AI for Science Forum — co-hosted by Google DeepMind and the Royal Society — brought together the scientific community, policymakers, and industry leaders to…

Read More

Addressing Missing Data. Understand missing data patterns (MCAR… | by Gizem Kaya | Nov, 2024

Understand missing data patterns (MCAR, MNAR, MAR) for better model performance with Missingno

In an ideal world, we would like to work with datasets that are clean, complete, and accurate. However, real-world data rarely meets our expectations. We often encounter datasets with noise, inconsistencies, outliers, and missingness, which require careful handling to get effective results.…
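As a rough illustration of the kind of analysis the article points to, here is a minimal sketch using the missingno library on a toy pandas DataFrame. The column names and missingness rates are invented assumptions made up for this example; the missingno calls shown (matrix, bar, heatmap) are standard entry points of the library.

```python
import numpy as np
import pandas as pd
import missingno as msno
import matplotlib.pyplot as plt

# Toy dataset with three kinds of missingness, simulated for illustration.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 80, n).astype(float),
    "income": rng.normal(50_000, 15_000, n),
    "satisfaction": rng.integers(1, 6, n).astype(float),
})

# MCAR: age values dropped completely at random.
df.loc[rng.random(n) < 0.10, "age"] = np.nan
# MAR: income more likely missing for older respondents (depends on an observed column).
df.loc[(df["age"] > 60) & (rng.random(n) < 0.30), "income"] = np.nan
# MNAR: low satisfaction scores more likely to go unreported (depends on the missing value itself).
df.loc[(df["satisfaction"] <= 2) & (rng.random(n) < 0.40), "satisfaction"] = np.nan

msno.matrix(df)   # per-row nullity pattern
msno.bar(df)      # count of observed values per column
msno.heatmap(df)  # pairwise nullity correlations, useful for spotting MAR-like structure
plt.show()
```

A strong nullity correlation between two columns in the heatmap hints that the missingness is not completely at random, which changes which imputation strategies are safe to use.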

Read More

Microsoft Research Introduces Reducio-DiT: Enhancing Video Generation Efficiency with Advanced Compression

Recent advancements in video generation models have enabled the production of high-quality, realistic video clips. However, these models face challenges in scaling to large-scale, real-world applications due to the computational demands of training and inference. Current commercial models like Sora, Runway Gen-3, and Movie Gen demand extensive resources, including thousands of GPUs and millions…

Read More