How to Find the Best Multilingual Embedding Model for Your RAG | by Iulia Brezeanu | Jan, 2024

Optimize the embedding space to improve RAG. [Image by author, AI generated.] Embeddings are vector representations that capture the semantic meaning of words or sentences. Besides having quality data, choosing a good embedding model is the most important and underrated step in optimizing your RAG application. Multilingual models are especially challenging, as most are pre-trained on…
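The idea that embeddings "capture semantic meaning" can be made concrete with cosine similarity: semantically related texts map to nearby vectors. A minimal sketch with toy, hand-picked 3-dimensional vectors (real embedding models produce hundreds of dimensions, and the values below are purely illustrative):

```python
import numpy as np

# Hypothetical toy "embeddings" — real models output vectors of
# hundreds of dimensions; these values are illustrative only.
emb = {
    "the cat sat": np.array([0.9, 0.1, 0.0]),
    "a cat was sitting": np.array([0.8, 0.2, 0.1]),
    "stock prices fell": np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: dot product of the vectors,
    # normalized by their lengths. Ranges from -1 to 1.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related sentences score high; unrelated ones score low.
sim_related = cosine(emb["the cat sat"], emb["a cat was sitting"])
sim_unrelated = cosine(emb["the cat sat"], emb["stock prices fell"])
```

A retriever in a RAG pipeline uses exactly this kind of similarity score to rank candidate passages against a query, which is why the quality of the embedding model directly bounds retrieval quality.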

Read More

Large Language Models, GPT-1 — Generative Pre-Trained Transformer | by Vyacheslav Efimov | Jan, 2024

A deep dive into the inner workings of the first version of the GPT family of models. 2017 was a historic year in machine learning: researchers from the Google Brain team introduced the Transformer, which rapidly outperformed most existing approaches in deep learning. Its famous attention mechanism became the key component of the models derived from…
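The attention mechanism the excerpt refers to is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. A minimal NumPy sketch (illustrative, not the original implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores: similarity of each query to each key,
    # scaled by sqrt(d_k) to keep gradients stable.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys (shifted by the row max for numerical stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: weighted average of the value vectors.
    return weights @ V, weights

# One query attending over two key/value pairs.
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0, 2.0], [3.0, 4.0]])
out, w = scaled_dot_product_attention(Q, K, V)
```

Because the query aligns with the first key, the first value vector receives the larger attention weight, and the output is pulled toward it.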

Read More

Google AI Presents Lumiere: A Space-Time Diffusion Model for Video Generation

Recent advancements in generative models for text-to-image (T2I) tasks have led to impressive results in producing high-resolution, realistic images from textual prompts. However, extending this capability to text-to-video (T2V) models poses challenges due to the complexities introduced by motion. Current T2V models face limitations in video duration, visual quality, and realistic motion generation, primarily due…

Read More

Some Thoughts on Operationalizing LLM Applications | by Matthew Harris | Jan, 2024

A few personal lessons learned from developing LLM applications. [Source: DALL·E 3, prompted with "Operationalizing LLMs, watercolor."] It's been fun posting articles exploring new Large Language Model (LLM) techniques and libraries as they emerge, but most of the time has been spent behind the scenes working on the operationalization of LLM solutions. Many organizations are working…

Read More