It’s frighteningly easy to poison a large language model (LLM), and what’s even more unnerving is that you don’t need direct access to the model to do it. A new study from NYU scientists dives deep into how much medical misinformation an AI’s training data can absorb before the model starts giving wrong answers. Read the full story at martech.org.