Kind of - it's a standalone system with the hardware integrated, kinda like a Google Coral paired with a Raspberry Pi.
I’ve recently been looking into locally hosting some LLMs for various purposes, but I haven’t specced out hardware yet. Any good resources you can recommend?
Not really, sorry - I haven't gone too deep into LLMs beyond simple use cases, and I've only really used llama.cpp myself.