In a surprising benchmark result that could shake up the competitive landscape for AI inference, startup chip company Groq appears to have confirmed, through a series of retweets, that its system is serving Meta’s newly released LLaMA 3 large language model at over 800 tokens per second. “We’ve been …
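For readers unfamiliar with the metric, a tokens-per-second figure is simply the number of tokens a system generates divided by the wall-clock time taken. The following is a minimal sketch of how such a measurement works; the `stream_tokens` client here is entirely hypothetical and invented for illustration, not Groq’s or Meta’s actual API.

```python
import time
from typing import Iterable, Iterator


def stream_tokens(prompt: str) -> Iterator[str]:
    """Hypothetical stand-in for a streaming inference client.

    A real client would yield tokens as the model generates them;
    here we simulate a short stream purely for illustration.
    """
    for token in ("Hello", ",", " world", "!"):
        time.sleep(0.001)  # simulated per-token latency
        yield token


def measure_throughput(tokens: Iterable[str]) -> float:
    """Return generation throughput in tokens per second."""
    start = time.perf_counter()
    count = sum(1 for _ in tokens)  # consume the stream, counting tokens
    elapsed = time.perf_counter() - start
    return count / elapsed


if __name__ == "__main__":
    tps = measure_throughput(stream_tokens("Explain LPUs in one sentence."))
    print(f"{tps:.0f} tokens/sec")
```

Note that this naive timing folds the time-to-first-token into the total; careful benchmarks usually report that latency separately from the steady-state decode rate, which is what headline figures like “800 tokens per second” typically describe.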