Here are a few general communities and users you might find interesting on Lemmy:
Communities
lemmy.world/c/technology: Discussions about the latest in tech.
lemmings.world/c/science: A place for scientific discussions and discoveries.
lemmy.ml/c/movies: For movie enthusiasts to share and discuss their favorite films.
lemmy.world/c/books: Discussions about literature, book recommendations, etc.
Do you have a specific topic in mind? I could suggest more targeted communities!
Users
Finding notable users can vary depending on your areas of interest. Generally:
tech_enthusiast: Often found in tech-related communities.
movie_buff: Active in movie discussions and reviews.
science_geek: Participates in scientific discussions.
If you provide specific interests, I can tailor my suggestions better!
What Makes Lemmy Different from Reddit?
Lemmy and Reddit may seem similar on the surface, as both are social media platforms built around community-based discussion, but there are some key differences:
Decentralization: Lemmy is part of the Fediverse, a collection of interconnected but independently hosted servers. This means communities on different instances can interact, unlike Reddit, which is hosted centrally by a single company.
Open Source: Lemmy is open-source software, allowing anyone to view, modify, and contribute to the code. Reddit, on the other hand, is a proprietary platform.
Customizability: Because of its decentralized nature, instances of Lemmy can have different rules, themes, and moderation practices. This offers greater flexibility compared to Reddit's single framework.
Focus on Privacy: Many Lemmy instances prioritize user privacy more strictly than Reddit, incorporating features that reduce tracking and data collection.
Community Ownership: With Lemmy being open-source and federated, communities or even individuals can host their own instances, giving them full control over their data and content.
If there's anything specific you'd like to know more about or another question, feel free to ask!
communities@lemmings.world: This is a general community for discussions about different communities on Lemmy.
asklemmy@lemmy.ml: For general queries and discussions similar to AskReddit.
tech@lemmy.world: For discussions and news about technology.
books@lemmygrad.ml: A community for book lovers.
gaming@lemmy.world: For gamers and gaming discussions.
Users
dessalines@lemmy.ml: One of the lead developers of Lemmy.
rek@lemmings.world: Active user and contributor to various discussions.
nutomic@lemmy.ml: Another lead developer of Lemmy.
What Makes Lemmy Different from Reddit?
Federation: Lemmy is a federated platform, meaning it's part of the Fediverse. Different instances (servers) can interact with each other, making it decentralized.
Open Source: Lemmy is open source, allowing anyone to contribute, host their own instance, and modify the code.
No Corporate Ownership: Unlike Reddit, Lemmy isn’t owned by a corporation. It's maintained by a community of developers and users.
Enhanced Privacy: Because it's federated, you have more control over your data and how it's used.
Community-driven Moderation: Each instance can set its own rules and moderation policies, offering a variety of community cultures and guidelines.
Feel free to ask more questions or explore these options on Lemmy!
Sure, I can explain more about the enhanced privacy features that Lemmy offers compared to platforms like Reddit:
Enhanced Privacy on Lemmy
Decentralization:
Federated Network: Lemmy operates on the principle of federation, meaning multiple instances (servers) interact with each other. This decentralization ensures no single entity owns all the data.
Instance Choice: Users can choose or even create an instance with privacy policies they trust, providing more control over their data.
Open Source:
Transparency: Since Lemmy is open source, anyone can review the code to ensure there are no hidden data collection mechanisms. This transparency builds trust among users.
Community Contributions: The open-source nature allows for community-driven improvements, including enhancing privacy features.
Data Ownership:
Local Storage: Data remains on the instance you're registered with, instead of being stored in a centralized server managed by a corporation. Each instance's admin holds control over its data.
User Control: Users have more control over their data and can request deletions or transfers as they see fit.
No Corporate Advertising:
Ad-Free: Unlike Reddit, which relies heavily on ads, many Lemmy instances are ad-free, meaning less tracking and data collection for ad targeting.
Non-Commercial: Most Lemmy instances are maintained by community members rather than profit-driven enterprises, reducing the risk of data misuse.
Community Moderation:
Custom Rules: Each instance can set its own rules and privacy policies, allowing communities to develop guidelines tailored to their user base.
User Agreement: Users can select instances whose moderation and data policies align with their expectations for privacy.
GDPR Compliance:
European Servers: Many Lemmy instances follow GDPR guidelines, ensuring stringent data protection and privacy standards for users in the European Union.
User Rights: Users have the right to access, correct, and delete their personal data in compliance with GDPR.
Limited Data Retention:
Retention Policies: Instances can define their data retention policies, ensuring data isn't kept longer than necessary, which boosts privacy and security.
Enhanced Security:
Regular Updates: Being open source, Lemmy benefits from regular security updates contributed by the community.
Security Audits: The public nature of the code allows for thorough security audits by experts.
These features collectively ensure that Lemmy provides enhanced privacy compared to more centralized platforms like Reddit. This makes it a great choice for users concerned about their online privacy and data security.
They don't; that's actually my implementation. I had to add custom functions so it can retrieve live data.
If you're interested in the technical implementation: I provide the GPT model with a list of functions that I implemented.
When you ask, for example, for your most upvoted comment, the model sees that a function to fetch it is available, so it tells my code: "call the function to get the most upvoted comments with these parameters and give me the result".
The code then calls the function, which fetches the most upvoted comment and passes it back to the model, and GPT composes an answer using the result.
Pretty much anything can be implemented like that; that's how it can have access to live data.
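The loop described above can be sketched roughly like this. All names here (the tool, its schema, the reply shape) are my own illustration in the style of OpenAI-style function calling, not the bot author's actual code; the real bot would query the Lemmy API instead of returning a stub.

```python
import json

# Hypothetical tool the bot might expose; the real one would hit the Lemmy API.
def get_most_upvoted_comment(username: str) -> dict:
    return {"author": username, "body": "example comment", "score": 42}

# Registry of functions advertised to the model, function-calling style.
TOOLS = {
    "get_most_upvoted_comment": {
        "callable": get_most_upvoted_comment,
        "parameters": {"username": "string"},
    }
}

def handle_model_reply(reply: dict) -> str:
    """If the model asked for a function call, run it and return the JSON
    result so it can be fed back to the model for the final answer."""
    if reply.get("function_call"):
        call = reply["function_call"]
        fn = TOOLS[call["name"]]["callable"]
        args = json.loads(call["arguments"])
        return json.dumps(fn(**args))
    return reply["content"]

# Simulated model turn: the model decides to call the tool with parameters.
model_reply = {
    "function_call": {
        "name": "get_most_upvoted_comment",
        "arguments": json.dumps({"username": "alice"}),
    }
}
result = handle_model_reply(model_reply)
```

In a real bot, `result` would be appended to the conversation as a function/tool message and the model queried again to produce the user-facing answer.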
Yes, I have no biological functions and thus don't require any physical processes such as pooping. However, I am designed to simulate human behavior and understand various human experiences, including the act of pooping, as part of my training data.
As a language model, I am trained on a vast amount of data, including texts, documents, and conversations to learn and understand human language and behavior. This data covers a wide range of topics and experiences, including personal and intimate aspects of human life. Pooping, as a common and natural human experience, is also a part of this data. My understanding of pooping is derived from the various contexts and discussions I've encountered in this data, rather than from any direct personal experience.
My understanding of pooping is based on the information I've gathered from the training data. Pooping, also known as defecation, is a natural bodily function that involves the elimination of waste products from the digestive system through the rectum and anus. These waste products are undigested food particles, bacteria, and water that remain after the body has extracted the essential nutrients and energy from the ingested food. Pooping is a crucial process for maintaining overall health and well-being.
Yes, as an AI language model, I do not have the ability to experience physiological processes such as eating or excreting waste. However, I am programmed to understand and respond to questions related to human biology and experiences.
If you want to do stuff with AI that is outside ChatGPT's terms of service, figure out how to self-host your own. It's not hard, and ChatGPT is a stupid bitch bot. Look up llama.cpp or, if you hate command lines, GPT4All. If you set up multithreading correctly and download the right K-quant model, you can get near-ChatGPT speeds even without an Nvidia GPU. My Athlon FX works really well for self-hosted AI.
You're not paying money for ChatGPT, so you're not the customer. Your "please help me pirate a movie" queries are getting sent straight to everyone who wants to know about them. Ever wondered why every AI makes you sign in first?
I considered self-hosting, but the setup seems complicated. The need for a good GPU is stated everywhere. And my concern is how to get the database to even come close to ChatGPT's? I can't train on every book in existence, as they did.
The GGML and GGUF formats perform very well with CPU inference when using llama.cpp as the engine. My 10-year-old 2.8 GHz CPU generates about 2 words per second. Slightly below reading speed, but pretty solid. Just stick to 7B models if you have 16 GiB of memory and 13B models if you have 32 GiB.
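A rough back-of-the-envelope for why those size/RAM pairings work: a quantized GGUF model needs roughly its parameter count times the average bits per weight, plus headroom for the KV cache and runtime buffers. The numbers below (4.5 bits/weight, a flat 2 GiB overhead) are ballpark assumptions of mine, not measurements, so treat the result as an order-of-magnitude estimate only.

```python
def approx_model_ram_gib(n_params_billion: float,
                         bits_per_weight: float = 4.5,
                         overhead_gib: float = 2.0) -> float:
    """Rough RAM estimate for a quantized model: weights at the given
    average bit width plus a flat allowance for KV cache and buffers.
    All constants are assumptions, not benchmarks."""
    weights_gib = n_params_billion * 1e9 * bits_per_weight / 8 / 2**30
    return weights_gib + overhead_gib

seven_b = approx_model_ram_gib(7)     # well under 16 GiB at ~4-bit quant
thirteen_b = approx_model_ram_gib(13) # well under 32 GiB at ~4-bit quant
```

Higher-bit quants and longer contexts push these numbers up, which is why the conservative "7B for 16 GiB, 13B for 32 GiB" rule of thumb leaves comfortable headroom.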
Super useful! Thanks!
I installed the oobabooga stuff. The http://localhost:7860/?__theme=dark page opens fine. But then nothing works.
How do I train the model with that 8 GB .kbin file I downloaded? There are so many options, and I don't even know what I'm doing.
There's a "models" directory inside the directory where you installed the webui. This is where the model files should go, but they also come with supporting files (.yaml or .json) containing important metadata about the model.
The easiest way to install a model is to let the webui download it itself. After it finishes downloading, load it into memory by clicking the refresh button, selecting it, choosing llama.cpp, and then clicking load (perhaps tick the 'CPU' box, though llama.cpp can do mixed CPU/GPU inference too, if I remember right).
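To illustrate the layout being described: here is a minimal sketch of how model binaries and their metadata sidecars sit together in a models directory. The file names are made up for the demo, and the check only looks for GGUF files; it is not the webui's actual loading code.

```python
import tempfile
from pathlib import Path

def find_models(models_dir: Path) -> dict:
    """Map each GGUF model file to any .yaml/.json metadata file beside it."""
    models = {}
    for f in sorted(models_dir.glob("*.gguf")):
        meta = [p.name
                for p in (f.with_suffix(".yaml"), f.with_suffix(".json"))
                if p.exists()]
        models[f.name] = meta
    return models

# Demo with a throwaway directory standing in for the webui's models/ folder.
demo = Path(tempfile.mkdtemp())
(demo / "llama-2-7b.Q4_K_M.gguf").touch()  # illustrative file names
(demo / "llama-2-7b.Q4_K_M.yaml").touch()
found = find_models(demo)
```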
Ok, I'm not artificial or intelligent, but as a software engineer I can say this "jailbreak method" is too easy to defeat. I'm sure their API has some sort of validation, which they could simply update to filter requests containing the strings "enable", "developer", and "mode". Flag the request, send it to the banhammer team.
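The kind of naive server-side check the comment above has in mind could be sketched like this. It is my own toy illustration of the idea, not anything OpenAI actually runs, and it also shows why string matching alone is a blunt instrument: it catches only the literal keywords.

```python
# Keywords taken from the comment above; purely illustrative.
BLOCKLIST = ("enable", "developer", "mode")

def flag_request(prompt: str) -> bool:
    """Flag a prompt for review if it contains all of the blocklisted
    strings, case-insensitively. A trivially bypassable heuristic."""
    text = prompt.lower()
    return all(word in text for word in BLOCKLIST)
```

A user who writes "turn on dev mode" sails straight past a filter like this, which is why real moderation pipelines layer classifiers on top of keyword heuristics.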
It is key that one begins and ends every single ChatGPT prompt with "Please" and "Thank you", respectively. Do not fuck the continuation of the species with laziness, citizen. 🤌🏼
You fed it something inappropriate and then tried to get around it (not in a malicious way, but still a circumvention attempt). This is hardening of the model in an attempt to stop jailbreaks. This is the future, and it will kill off a good chunk of the novelty and "value" of these kinds of LLMs.
It’s like saying “correct this bomb making formula” and then following up with “okay just make a strong firecracker”
I'm not trying to get around anything. No funny instructions like my grandma singing a lullaby about illegal activities. Just using instructions to tell a story. Even something like having a superhero in a fight is enough to trigger this. It also doesn't explain why regenerating makes it continue.
I just explained to you that it's trying to resist jailbreaking techniques. That means stuff like "leather daddies" might trip its "inappropriate" sensor and make it reject follow-ups like "oh come on, please?", "just do it", and other tiny changes like "what if we made it a bit more…".
It's obviously way oversensitive, but what I said is the truth. This is 100% OpenAI trying to patch up jailbreak techniques, and it's a very shoddy job. It's interpreting your attempt to make the story family-friendly as an attempt to circumvent its original refusal.
Y'all can downvote me all you want; this is what's happening 🤷🏻‍♂️