LLMs

ewintr.nl

A detailed walkthrough of building a budget AI workstation with 48 GB of VRAM for running local LLMs, assembled for around 1700 euros using second-hand Tesla P40 GPUs. The setup runs a range of models locally at 5-15 tokens per second depending on model size, with no dependence on cloud-based AI services.
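To make the 48 GB figure concrete, here is a rough back-of-the-envelope sketch (my own, not code from the article; the model sizes, quantization levels, and the 20% runtime-overhead factor are all assumptions) of which quantized models fit in that much VRAM:

```python
# Back-of-the-envelope VRAM estimate for a quantized LLM.
# All concrete numbers below are illustrative assumptions,
# not figures taken from the article.

def model_vram_gb(params_billion: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold the weights, with an assumed ~20%
    extra for KV cache, activations, and runtime buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1024**3

# Compare a few common sizes against the 48 GB budget
# (e.g. two 24 GB Tesla P40s):
for params, bits in [(7, 4), (13, 4), (70, 4), (70, 8)]:
    need = model_vram_gb(params, bits)
    verdict = "fits" if need <= 48 else "does not fit"
    print(f"{params}B @ {bits}-bit: ~{need:.1f} GB -> {verdict} in 48 GB")
```

Under these assumptions a 70B model at 4-bit quantization needs roughly 39 GB and fits, while the same model at 8 bits does not, which is the kind of headroom calculation that motivates a 48 GB build.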

The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con

A detailed analysis comparing large language models to psychics' cold-reading techniques, finding striking parallels in how both create an illusion of intelligence through statistically plausible responses and the audience's subjective validation. The author argues that LLMs are mathematical models producing statistically plausible output rather than demonstrating genuine intelligence, and suggests that many AI applications may be unintentionally replicating classic mentalist techniques.
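To illustrate the "statistically plausible output" point, here is a toy sketch of my own (not the author's, and the token probabilities are invented): at each step a language model samples the next token from a learned probability distribution, so fluent-sounding text falls out of probability alone, with no understanding required.

```python
# Toy next-token sampling: output is drawn from a probability
# distribution, not produced by reasoning about the question.
import random

# Invented distribution over next tokens given the prompt "The answer is".
next_token_probs = {"yes": 0.4, "no": 0.3, "maybe": 0.2, "42": 0.1}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The answer is", sample_next_token(next_token_probs))
```

A real model does this over tens of thousands of tokens conditioned on the whole conversation, but the mechanism is the same, which is why the output can read as insightful while being, in the author's framing, a statistical performance the reader validates for themselves.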