With just days to go before my summer vacation, I find myself once again with a backlog of links that I didn't squeeze into the blog, and no hope of clearing them before I disappear into a hammock for two weeks, so it's time for my 21st linkdump - here's the other 20:
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
Nowhere is that more true than in AI, where hundreds of billions are poured into bids to attain permanent dominance through scale. Writing for their excellent AI Snake Oil blog, @randomwalker and @sayashk inject some realism into AI scale hype:
Narayanan and Kapoor challenge the idea that throwing more data at large language models will make them better: "With LLMs, we may have a couple of orders of magnitude of scaling left, or we may already be done."
Among the best debezzlers of AI are the Princeton Center for Information Technology Policy's @randomwalker and @sayashk, who edit the "AI Snake Oil" blog. Now, they've sold a book with the same title:
Obviously, books move a lot more slowly than blogs, and so Narayanan and Kapoor say their book will focus on the timeless elements of identifying and understanding AI snake oil: