
674: Parameter-Efficient Fine-Tuning of LLMs using LoRA (Low-Rank Adaptation)
Super Data Science: ML & AI Podcast with Jon Krohn
00:00
Parameter-Efficient Fine-Tuning using LoRA and AdaLoRA
Exploring how LoRA and AdaLoRA optimize fine-tuning of large language models by reducing trainable parameters and memory usage, with AdaLoRA adaptively allocating the fine-tuning budget across specific sections of the model for greater efficiency.
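The idea described above — replacing a full weight update with a low-rank one — can be illustrated with a minimal NumPy sketch. This is not code from the episode; the dimensions, rank, and scaling value are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of LoRA's core idea (illustrative, not the episode's code):
# instead of updating a full d_out x d_in weight matrix W, learn a
# low-rank update B @ A with rank r << min(d_out, d_in).

d_in, d_out, r = 1024, 1024, 8          # assumed dimensions and rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, shape (r, d_in)
B = np.zeros((d_out, r))                 # trainable, zero-initialized
alpha = 16                               # assumed scaling hyperparameter

def lora_forward(x):
    # Base (frozen) path plus scaled low-rank correction
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # trainable count under full fine-tuning
lora_params = A.size + B.size   # trainable count under LoRA
print(full_params, lora_params, lora_params / full_params)
```

At rank 8 this trains 16,384 parameters instead of 1,048,576 — under 2% of the full matrix — which is the parameter and memory saving the episode discusses.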
Transcript


