xodoh74984

@xodoh74984@lemmy.world


xodoh74984 ,

Hah, I would assume they mean not beholden to a government that tracks its citizens with facial recognition, data mines its citizens' personal communications to arrest them before they can even organize a protest, and is run by a dictator who literally made it illegal to call him Pooh Bear.

The sphere that America exerts control over is not without its issues and is surely corrupt. But it is nowhere near as corrupt, oppressive, and lacking in individual freedom as China and the other contender for world domination. Unlike China, America has no social credit score enforced by an all-seeing mass surveillance mechanism where VPNs and other attempts to hide from it are strictly illegal. And while many Americans might be racist toward Muslims, the American government does not dehumanize them and force them into labor camps.

Your whataboutism makes it clear you're just a Chinese troll, but I'll leave this comment as a reminder to others reading that there is zero equivalence.

xodoh74984 ,

I don't think anyone in this thread thinks it's good for any government to be spying on everyone. But if we can cut off that flow of data to at least one government, great. Especially since that government is oppressive and authoritarian.

Maybe one day the US government will be cut off from mass surveillance as well.

In terms of reciprocity, the TikTok ban is long overdue. The US government's most valuable mass surveillance tools – Google, WhatsApp, Instagram, Snapchat, Facebook, etc. – aren't allowed there.

xodoh74984 ,

Of all the code-specific LLMs I'm familiar with, Deepseek-Coder-33B is my favorite. There are multiple pre-quantized versions available here:
https://huggingface.co/TheBloke/deepseek-coder-33B-base-GGUF/tree/main

In my experience, 5-bit quantization or higher performs best.
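
If anyone wants to try it locally, here's a rough sketch using llama-cpp-python to load one of those GGUF files. The exact filename and generation settings are my assumptions, not something from the repo – point `model_path` at whichever quant you actually download:

```python
# Minimal sketch: run a ~5-bit GGUF quant of Deepseek-Coder-33B locally with
# llama-cpp-python. The .gguf filename below is assumed -- check the repo's
# file list and use the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./deepseek-coder-33b-base.Q5_K_M.gguf",  # hypothetical local path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if you have the VRAM; 0 = CPU only
)

# Base (non-instruct) code models work best with plain code completion prompts.
out = llm(
    "# Python function that checks whether a number is prime\ndef is_prime(n):",
    max_tokens=256,
    temperature=0.2,
    stop=["\n\n\n"],
)
print(out["choices"][0]["text"])
```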

xodoh74984 ,

This one is only 7B parameters, but it punches far above its weight for such a little model:
https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha

My personal setup is capable of running larger models, but for everyday use like summarization and brainstorming, I find myself coming back to Starling the most. Since it's so small, it runs inference blazing fast on my hardware. I don't rely on it for writing code. Deepseek-Coder-33B is my pick for that.

Others have said Starling's overall performance rivals LLaMA 70B. YMMV.
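
If you want to kick the tires, this is roughly how I'd load it with Hugging Face transformers. The "GPT4 Correct User/Assistant" prompt format is what I understand the model was tuned on, so treat it as an assumption and double-check the model card:

```python
# Rough sketch: quick summarization with Starling-LM-7B-alpha via transformers.
# Prompt format is assumed from the model's OpenChat-style training -- verify
# against the model card before relying on it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "berkeley-nest/Starling-LM-7B-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

text_to_summarize = "..."  # paste the text you want summarized here
prompt = (
    "GPT4 Correct User: Summarize the following in three bullet points:\n"
    f"{text_to_summarize}<|end_of_turn|>GPT4 Correct Assistant:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```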
