Let’s dive into the most important libraries in R and Python for visualising data and creating different charts, and what their pros and cons are. Being a pro in certain programming languages is the goal of every aspiring data professional. Reaching a certain level in one of the countless languages is a critical milestone for…
If you have been a data scientist for a while, sooner or later you’ll notice that you have shifted from a VSCode-loving, research-paper-reading, git-version-committing data scientist to a collaboration-driving, project-scoping, stakeholder-managing, and strategy-setting individual. This shift will be gradual and almost unnoticeable, but it is one that will require you to put on different hats…
Implementing Speculative and Contrastive Decoding. Large language models comprise billions of parameters (weights). For each word it generates, the model has to perform computationally expensive calculations across all of these parameters. Large language models accept a sentence, or sequence of tokens, and generate a probability distribution over the next most likely token. Thus,…
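To make the next-token mechanics concrete, here is a minimal sketch of obtaining that probability distribution with Hugging Face transformers. The small GPT-2 checkpoint stands in for a genuinely large model, and the prompt and variable names are illustrative assumptions, not the article's code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in model; a real LLM would have billions of parameters.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Softmax over the vocabulary turns the last position's logits
# into a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(i))!r}: {p:.3f}")
```

Speculative decoding exploits exactly this setup: a cheap draft model proposes tokens, and the expensive model only verifies them against its own distribution.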
Concerns about the environmental impacts of Large Language Models (LLMs) are growing. Although detailed information about the actual costs of LLMs can be difficult to find, let’s attempt to gather some facts to understand the scale. (Image generated with ChatGPT-4o.) Since comprehensive data on ChatGPT-4 is not readily available, we can consider Llama 3.1 405B as an…
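As a rough, hedged illustration of the scale involved, a back-of-envelope calculation like the one below can help. The byte widths, the ~6·N·D training-FLOPs rule of thumb, and the ~15T-token figure reported for Llama 3.1 are assumptions for illustration, not numbers taken from the article:

```python
# Back-of-envelope scale estimates for a 405B-parameter model.
params = 405e9  # Llama 3.1 405B parameter count

# Memory just to hold the weights, at common numeric precisions.
for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    print(f"{name}: {params * bytes_per_param / 1e9:,.0f} GB")

# Rough training compute via the ~6 * N * D rule of thumb,
# with D ~ 15 trillion training tokens (as reported for Llama 3.1).
tokens = 15e12
flops = 6 * params * tokens
print(f"Training compute: ~{flops:.1e} FLOPs")
```

Even before any energy figures enter the picture, the weights alone need hundreds of gigabytes, which hints at why serving such models is so costly.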
Understand missing data patterns (MCAR, MNAR, MAR) for better model performance with Missingno. In an ideal world, we would like to work with datasets that are clean, complete, and accurate. However, real-world data rarely meets our expectations. We often encounter datasets with noise, inconsistencies, outliers, and missingness, all of which require careful handling to get effective results…
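For readers who want to try this right away, a minimal sketch of inspecting missingness patterns with missingno might look like the following. The DataFrame, column names, and missingness rules are made up for illustration:

```python
import numpy as np
import pandas as pd
import missingno as msno

# Toy DataFrame with artificial missingness (illustrative only).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(40, 10, 200),
    "income": rng.normal(50_000, 15_000, 200),
    "city": rng.choice(["A", "B", "C"], 200),
})
df.loc[rng.random(200) < 0.2, "income"] = np.nan  # missing completely at random
df.loc[df["age"] > 55, "city"] = np.nan           # missingness depends on age

msno.matrix(df)   # visualize where values are missing
msno.heatmap(df)  # pairwise correlation of missingness between columns
```

A strong correlation between a column's missingness and another observed column (as with `city` and `age` here) is the visual signature of MAR rather than MCAR.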
This November 30 marks the second anniversary of ChatGPT’s launch, an event that sent shockwaves through technology, society, and the economy. The space opened by this milestone has not always made it easy — or perhaps even possible — to separate reality from expectations. For example, this year Nvidia became the most valuable public company…
|LLM|INTERPRETABILITY|SPARSE AUTOENCODERS|XAI| A deep dive into LLM visualization and interpretation using sparse autoencoders. (Image created by the author using DALL-E.) “All things are subject to interpretation. Whichever interpretation prevails at a given time is a function of power and not truth.” — Friedrich Nietzsche. As AI systems grow in scale, it is increasingly difficult and pressing…
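As a rough sketch of the core idea, a sparse autoencoder for interpreting model activations can be as small as one encoder and one decoder layer trained with an L1 sparsity penalty. The dimensions, penalty weight, and random activations below are illustrative assumptions, not the article's implementation:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Maps d_model activations to an overcomplete sparse code and back."""
    def __init__(self, d_model: int = 768, d_hidden: int = 4 * 768):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(z), z

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_weight = 1e-3  # strength of the sparsity penalty (assumed value)

# Stand-in for a batch of residual-stream activations from an LLM.
acts = torch.randn(64, 768)

opt.zero_grad()
recon, z = sae(acts)
loss = ((recon - acts) ** 2).mean() + l1_weight * z.abs().mean()
loss.backward()
opt.step()
```

The overcomplete hidden layer plus the L1 term is what pushes each learned feature to fire on only a few inputs, which is what makes the features individually interpretable.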
Decoding One-Hot Encoding: A Beginner’s Guide to Categorical Data.
Learning to transform categorical data into a format that a machine learning model can understand. When studying machine learning, it is essential to understand the inner workings of the most basic algorithms. Doing so helps you understand how algorithms operate in popular libraries and frameworks, debug them, choose better hyperparameters more easily, and…
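To ground the idea, here is a minimal sketch of one-hot encoding with scikit-learn. The toy column and category values are made up for illustration:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Toy categorical column (illustrative data).
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

encoder = OneHotEncoder(sparse_output=False)
encoded = encoder.fit_transform(df[["color"]])

# Each category becomes its own binary column.
print(pd.DataFrame(encoded,
                   columns=encoder.get_feature_names_out(["color"])))
```

Each row now carries a single 1 in the column of its category and 0 elsewhere, which is exactly the numeric format most models expect.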
Building a 28% more accurate multimodal image search engine with VLMs. Until recently, AI models were narrow in scope, limited to understanding either language or specific images, but rarely both. In this respect, general language models like GPTs were a HUGE leap, since we went from specialized models to general yet much more powerful…
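As a minimal sketch of the shared-embedding idea behind multimodal search (not the article's improved pipeline), text and images can be embedded into one space and ranked by cosine similarity. The model choice and file paths below are illustrative assumptions:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps both images and text into the same embedding space.
model = SentenceTransformer("clip-ViT-B-32")

image_paths = ["cat.jpg", "beach.jpg", "city.jpg"]  # hypothetical files
image_embs = model.encode([Image.open(p) for p in image_paths])

query_emb = model.encode("a sunny day at the seaside")

# Rank images by cosine similarity to the text query.
scores = util.cos_sim(query_emb, image_embs)[0]
ranked = sorted(zip(image_paths, scores), key=lambda t: float(t[1]),
                reverse=True)
for path, score in ranked:
    print(f"{path}: {float(score):.3f}")
```

Because queries and images live in one vector space, search reduces to a nearest-neighbor lookup, which is what makes these engines fast to serve at scale.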
What working as a data scientist across various companies and industries over the past 6+ years has taught me about the future of data science and AI engineering. GenAI and Large Language Models (LLMs) continue to change how we work and what work will mean in the future, especially in the data science domain, where in…