Enhanced Large Language Models as Reasoning Engines | by Anthony Alcaraz | Dec, 2023

The recent exponential advances in natural language processing capabilities from large language models (LLMs) have stirred tremendous excitement about their potential to achieve human-level intelligence. Their ability to produce remarkably coherent text and engage in dialogue after exposure to vast datasets seems to point towards flexible, general-purpose reasoning skills. However, a growing chorus of…

Understanding LoRA — Low Rank Adaptation For Finetuning Large Models | by Bhavin Jawade | Dec, 2023

Math behind this parameter-efficient fine-tuning method

Fine-tuning large pre-trained models is computationally challenging, often involving adjustment of millions of parameters. This traditional fine-tuning approach, while effective, demands substantial computational resources and time, posing a bottleneck for adapting these models to specific tasks. LoRA presented an effective solution to this problem by decomposing the update…
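
The blurb stops mid-sentence, but the decomposition it alludes to is LoRA's central idea: instead of training the full weight matrix W, the update ΔW is factored into two low-rank matrices, ΔW = BA with rank r much smaller than the layer dimensions, so only A and B are trained. Below is a minimal PyTorch-style sketch of that idea under those assumptions; the `LoRALinear` class name and the `r`/`alpha` defaults are illustrative, not taken from the article.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: a frozen pretrained weight W plus a
    trainable low-rank update (alpha / r) * B @ A."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Frozen pretrained weight (stands in for the original model weight).
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features), requires_grad=False
        )
        # Low-rank factors: only r * (in_features + out_features) parameters train,
        # versus in_features * out_features for full fine-tuning.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        # B starts at zero so ΔW = BA = 0 and training begins from the pretrained W.
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen full-rank path plus the scaled low-rank correction.
        return x @ self.weight.T + self.scaling * (x @ self.A.T @ self.B.T)
```

With r = 8 on a 4096 x 4096 layer, for example, this trains roughly 65K parameters instead of about 16.8M, which is the source of the computational savings the teaser describes.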
