
3 Business Skills You Need to Progress Your Data Science Career in 2025 | by Dr. Varshita Sher | Dec, 2024

If you have been a data scientist for a while, sooner or later you’ll notice that your day-to-day has shifted from that of a VSCode-loving, research-paper-reading, git-version-committing data scientist to that of a collaboration-driving, project-scoping, stakeholder-managing, and strategy-setting individual. This shift will be gradual and almost unnoticeable, but it is one that will require you to put on different hats…

Read More

Combining Large and Small LLMs to Boost Inference Time and Quality | by Richa Gadgil | Dec, 2024

Implementing Speculative and Contrastive Decoding. Large language models comprise billions of parameters (weights). For each word it generates, the model has to perform computationally expensive calculations across all of these parameters. Large language models accept a sentence, or sequence of tokens, and generate a probability distribution over the next most likely token. Thus,…
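As a rough illustration of that last point (a sketch, not the article’s implementation), the snippet below asks a small causal LM for its probability distribution over the next token; the model name “gpt2” is only an example choice:

```python
# Minimal sketch: a causal LM maps a token sequence to a probability
# distribution over the next token. "gpt2" is an illustrative choice of model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Large language models generate text one", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (batch, seq_len, vocab_size)

next_token_probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the vocabulary
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {prob.item():.3f}")
```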

Read More

Smaller is smarter. Do you really need the power of top… | by Alexandre Allouin | Dec, 2024

Concerns about the environmental impacts of Large Language Models (LLMs) are growing. Although detailed information about the actual costs of LLMs can be difficult to find, let’s attempt to gather some facts to understand the scale. [Image generated with ChatGPT-4o] Since comprehensive data on ChatGPT-4 is not readily available, we can consider Llama 3.1 405B as an…

Read More

Addressing Missing Data. Understand missing data patterns (MCAR… | by Gizem Kaya | Nov, 2024

Understand missing data patterns (MCAR, MNAR, MAR) for better model performance with Missingno. In an ideal world, we would like to work with datasets that are clean, complete, and accurate. However, real-world data rarely meets our expectations. We often encounter datasets with noise, inconsistencies, outliers, and missingness, which require careful handling to get effective results…
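A minimal Missingno sketch of the workflow hinted at above; the toy DataFrame and column names are made up for illustration, not taken from the article:

```python
# Toy example: visualize missingness patterns with missingno.
import numpy as np
import pandas as pd
import missingno as msno
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "age":    [25, np.nan, 47, 31, np.nan, 52],
    "income": [40_000, 52_000, np.nan, np.nan, 61_000, 75_000],
    "city":   ["Berlin", "Paris", None, "Madrid", "Rome", None],
})

msno.matrix(df)    # where values are missing, row by row
msno.heatmap(df)   # nullity correlations can hint at MAR vs. MCAR patterns
plt.show()
```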

Read More

ChatGPT: Two Years Later. Tracing the impact of the generative AI… | by Julián Peller | Nov, 2024

This November 30 marks the second anniversary of ChatGPT’s launch, an event that sent shockwaves through technology, society, and the economy. The space opened by this milestone has not always made it easy — or perhaps even possible — to separate reality from expectations. For example, this year Nvidia became the most valuable public company…

Read More

Open the Artificial Brain: Sparse Autoencoders for LLM Inspection | by Salvatore Raieli | Nov, 2024

|LLM|INTERPRETABILITY|SPARSE AUTOENCODERS|XAI| A deep dive into LLM visualization and interpretation using sparse autoencoders. [Image created by the author using DALL-E] “All things are subject to interpretation; whichever interpretation prevails at a given time is a function of power and not truth.” — Friedrich Nietzsche. As AI systems grow in scale, it is increasingly difficult and pressing…
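As a rough, hypothetical sketch of the technique in the title (not the article’s setup), a toy sparse autoencoder reconstructs stand-in LLM activations through a wider, L1-penalized code; all dimensions and data below are placeholders:

```python
# Toy sparse autoencoder for inspecting hidden activations (placeholder sizes/data).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        code = torch.relu(self.encoder(x))     # sparse feature activations
        return self.decoder(code), code

sae = SparseAutoencoder(d_model=768, d_hidden=4096)
acts = torch.randn(32, 768)                    # stand-in for real LLM hidden states
recon, code = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * code.abs().mean()   # reconstruction + L1 sparsity
loss.backward()
```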

Read More

Decoding One-Hot Encoding: A Beginner’s Guide to Categorical Data | by Vyacheslav Efimov | Nov, 2024

Learning to transform categorical data into a format that a machine learning model can understand. When studying machine learning, it is essential to understand the inner workings of the most basic algorithms. Doing so helps in understanding how algorithms operate in popular libraries and frameworks, how to debug them, how to choose better hyperparameters more easily, and…
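A minimal sketch of the transformation described above, using scikit-learn’s OneHotEncoder on a made-up column (the data is illustrative only):

```python
# Map one categorical column to one binary column per category.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# sparse_output requires scikit-learn >= 1.2 (older versions use sparse=False).
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
encoded = encoder.fit_transform(df[["color"]])

print(encoder.get_feature_names_out())   # ['color_blue' 'color_green' 'color_red']
print(encoded)
```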

Read More