LLM Architecture

GitHub - therealoliver/Deepdive-llama3-from-scratch: Achieve the llama3 inference step-by-step, grasp the core concepts, master the process derivation, implement the code.

A comprehensive guide detailing the implementation of Llama3 from scratch, covering model architecture, attention mechanisms, and optimization techniques like KV-Cache, with detailed code explanations and mathematical derivations.
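The KV-Cache optimization the guide covers can be sketched in a few lines: during autoregressive decoding, the keys and values of past tokens never change, so caching them lets each new token compute only its own query instead of re-running attention over the whole prefix. The sketch below is a minimal single-head illustration with NumPy; the class and variable names are illustrative, not taken from the repo.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class KVCacheAttention:
    """Single-head attention that caches past keys/values, so each
    decoding step processes only the newest token's projections."""

    def __init__(self, d_model, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(d_model)
        self.wq = rng.standard_normal((d_model, d_model)) * scale
        self.wk = rng.standard_normal((d_model, d_model)) * scale
        self.wv = rng.standard_normal((d_model, d_model)) * scale
        self.k_cache = []  # one (d_model,) key per past token
        self.v_cache = []  # one (d_model,) value per past token

    def step(self, x):
        """x: (d_model,) embedding of the newest token."""
        q, k, v = x @ self.wq, x @ self.wk, x @ self.wv
        self.k_cache.append(k)          # grow the cache instead of
        self.v_cache.append(v)          # recomputing K, V for the prefix
        K = np.stack(self.k_cache)      # (t, d_model)
        V = np.stack(self.v_cache)      # (t, d_model)
        weights = softmax(q @ K.T / np.sqrt(len(q)))
        return weights @ V              # (d_model,) attention output
```

Per step this costs O(t) attention work against the cached keys, versus O(t²) if the full prefix were re-encoded each time, which is the trade the guide's derivation makes explicit.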

Deep dive into LLMs like ChatGPT by Andrej Karpathy (TL;DR)

Andrej Karpathy's deep dive into LLMs covers the complete lifecycle from pretraining to post-training, explaining tokenization, neural network architectures, and fine-tuning processes. The talk also explores how LLMs process information, why they hallucinate, and how reinforcement learning improves their performance and reasoning capabilities.
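The tokenization step Karpathy explains is typically byte-pair encoding (BPE): start from raw bytes and repeatedly merge the most frequent adjacent pair into a new token. A minimal sketch of that merge loop, with illustrative names and a toy three-round merge budget:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent token pair."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge(tokens, pair, new_token):
    """Replace every occurrence of `pair` with `new_token`."""
    out, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Encode text as bytes, then run three merge rounds, assigning
# new token ids above the 256 base byte values.
text = "low low lower lowest"
tokens = list(text.encode("utf-8"))
for new_id in range(256, 259):
    pair = most_frequent_pair(tokens)
    tokens = merge(tokens, pair, new_id)
```

Each round shortens the sequence while growing the vocabulary; production tokenizers learn thousands of such merges from a large corpus rather than three from one string.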

Pulse AI Blog - Why LLMs Suck at OCR

Large Language Models (LLMs) face significant limitations in OCR tasks due to their probabilistic nature and inability to preserve precise visual information, struggling in particular with complex layouts and tables. Their vision-processing architecture produces critical errors in data extraction, corrupting financial and medical data, and also leaves them susceptible to prompt injection attacks.