
Ferk

@Ferk@kbin.social

This profile is from a federated server and may be incomplete. For a complete list of posts, browse on the original instance.

Ferk , (edited )

"First evidence in a billion years of two lifeforms merging into one"

It's slightly shorter and more accurate.. it does not state absolutely that it happened for the first time, but rather that it's the first evidence we've found in the last billion years.

Ferk , (edited )

While the result of generating an image through AI is not meant to be "factually" accurate, it is seeking to be as accurate as possible when it comes to matching the prompt that is provided. And a prompt like "1943 German Soldier", "US Senator from the 1800s" or "Emperor of China" has some implications about what kinds of images would be expected and which kinds wouldn't. Just like how you wouldn't expect a lightsaber when asking for "medieval swords".

I'm not convinced that attempting to "balance a biased training dataset" in the way that this is apparently being done is really attainable or worthwhile.

An AI can only work based on biases, and it's impossible to correct/balance the dataset without just introducing a different bias, because the model is just a collection of biases that discriminate how different descriptions relate to pictures. If there were no bias for the AI to rely on, it would not be able to pick anything to show.

For example, the AI does not know whether the word "Soldier" really corresponds to someone dressed like in the picture; it's just biased to expect that. It can't tell whether an actual soldier might just be wearing pajamas or whether someone dressed in those uniforms might not be an actual soldier.

Describing a picture is, in itself, an exercise in assumptions, biases and appearances based on pre-conceived notions of what our expectations are when comparing the picture to our own reality. So the AI needs to show whatever corresponds to those biases in order to match, as accurately as possible, our biased expectations of what those descriptions mean.

If the dataset is complete enough, and yet it's biased to show predominantly a particular gender or ethnicity when asking for "1943 German Soldier" because that happens to be the most common image of what a "1943 German Soldier" is, but you want a different ethnicity or gender, then add that ethnicity/gender to the prompt (like you said in the first point), instead of supporting the idea of having the developers force diversity into the results in a direction that contradicts the dataset just because the results aren't politically correct. It would be more honest to add a disclaimer and still show the result as it is, instead of manipulating it in a direction that actively pushes the AI to hallucinate.

Alternatively: expand the dataset with more valuable data in a direction that does not contradict reality (e.g. introduce more pictures of soldiers of different ethnicities from situations that are actually found in our reality). You'd be altering the data, but you'd be doing it without distorting the bias unrealistically, since they would be examples grounded in reality.

Ferk , (edited )

The word "Nazi" wasn't part of the prompt though.

The prompt was "1943 German Soldier"... so if, like you said, the images are "Dressed as a German style soldier", I'd say it's not too bad.

Ferk ,

From the actual regulation text:

the concept of ‘illegal content’ should broadly reflect the existing rules in the offline environment. In particular, the concept of ‘illegal content’ should be defined broadly to cover information relating to illegal content, products, services and activities. In particular, that concept should be understood to refer to information, irrespective of its form, that under the applicable law is either itself illegal, such as illegal hate speech or terrorist content and unlawful discriminatory content, or that the applicable rules render illegal in view of the fact that it relates to illegal activities. Illustrative examples include the sharing of images depicting child sexual abuse, the unlawful non-consensual sharing of private images, online stalking, the sale of non-compliant or counterfeit products, the sale of products or the provision of services in infringement of consumer protection law, the non-authorised use of copyright protected material, the illegal offer of accommodation services or the illegal sale of live animals. In contrast, an eyewitness video of a potential crime should not be considered to constitute illegal content, merely because it depicts an illegal act, where recording or disseminating such a video to the public is not illegal under national or Union law. In this regard, it is immaterial whether the illegality of the information or activity results from Union law or from national law that is in compliance with Union law and what the precise nature or subject matter is of the law in question.

So, both.

Ferk ,

Yes.. honestly, I don't see this approach being worthwhile...

It's better to look for fully open-source alternatives, frontend and backend... like Lemmy/kbin for Reddit, PeerTube/LBRY for YouTube, etc.

Ferk , (edited )

I feel it's a balance. Each operation has a purpose.

Rebasing makes sense when you are working on a feature branch together with other people: you rebase your own commits to keep the feature branch lean before you finally merge it into the main branch, instead of polluting the history with a hard-to-follow mess of sub-branches for each person. Or when you ended up needing to rewrite (or squash) some commits to clean up / reorganize related changes for the same feature. Or when you already committed something locally without realizing you were not in sync with the latest version of the remote branch you are working on, and you don't want to end up with a single-commit branch that has to be merged.

Squashing with git merge --squash is also very situational.. ideally you wouldn't need it, unless your commits are so messy/tiny/redundant that combining them together makes things better.
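For what it's worth, a minimal sketch of both workflows (the branch names main and feature-x, and the remote origin, are just placeholders for whatever your repo actually uses):

    # keep a feature branch lean: replay your own commits on top of the latest main
    git fetch origin
    git rebase origin/main        # or: git rebase -i origin/main to squash/reword along the way

    # committed locally before noticing the remote branch moved ahead? rebase instead of merging
    git pull --rebase origin feature-x

    # fold a messy feature branch into main as one single commit
    git checkout main
    git merge --squash feature-x
    git commit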

Ferk , (edited )

"Capitalism" just means that the industry (or specifically, "means of production") can be privately owned.

The whole idea of Lemmy is allowing smaller groups / individuals to own smaller instances, so we don't depend on big corporations.

So the way I understand it, it's more of a big vs small thing, not really a "private" vs "governmental/social" ownership thing.

Sure, Lemmy gives people the freedom so that even governments can make their own public instances.. but this all still relies on capitalism, since individual instances can still be owned by (smaller?) private groups that compete amongst each other for users, so you basically are competing as if you were just another company in a capitalist system controlled by supply/demand and reliant on what the average consumer goes after.

This would be the equivalent of asking people to purchase ethically sourced goods and drive the market with their purchase decisions (which is actually what a capitalist system expects) as opposed to actually making laws that forbid companies from selling unethical products. That means we are not ignoring capitalism, but rather participating in it, and just asking consumers to choose ethically when they go buy a product. That's just an attempt at ethical/educated capitalism, but still capitalism.

Ferk , (edited )

Boycotting is an expected/intended tool in capitalism. It's part of the "free market" philosophy, the regulatory "invisible hand". The reason you can boycott a company is because the economy is based on a capitalist free market.

If boycotts were actually a good and successful method for society to regulate the wealthy, then there would be no issue with capitalism. So that's not how you "end" capitalism, that's just how you make it work.

The issue is, precisely, that boycotts do not work (and thus, capitalism does not really work). Particularly when entire industries are controlled by private de-facto monopolies. If they worked you would not need social-democratic laws to force companies into compliance in many ethical aspects.

What you are advocating is not an alternative to capitalism (like communism or socialism), but a more ethical/educated capitalism that works at controlling the wealthy, just like many proponents of capitalism expected it would.

Ferk ,

That's even harder. Especially if we aspire to have a community that protects privacy & anonymity.

Keep in mind "rich" does not necessarily mean "famous".
For all anyone knows, you and I could be part of the wealthy, yet nobody here would know and no online service would deny us service. Being forced to live an anonymous and private life is not really much of a punishment, at least it wouldn't be for me... if I were part of that wealthy I'd just lie low.. I'd get a reasonably humble but comfortable house in a reasonable neighborhood where people mind their own business, dressing modestly and living life without having to "really" work a day in my life, while my companies / assets / investments keep making money so I can go on modest trips and have some nice hobbies that are not necessarily that expensive anyway. Anyone who figures it out, I set them up. It'd still be worth it to live that life.

Ferk ,

Developing a crippled port that is limited/restricted by design due to Apple policies would not really help Mozilla’s/Firefox’s reputation anyway. Apple fanbois will complain either way.

If those fanbois want a Firefox app on Apple systems, it's Apple they should complain to.

Ferk ,

I always felt the fediverse is designed in a very awkward way... the way all the content needs to be mirrored not only makes it hard to update / modify / delete content, but it also means other instances have to host content from all the other instances they want their users to access...

Not only is that redundant and demanding a lot more resources from the instances, but it also means that if an instance you federate with is hosting content you don't want (let's say... ch*ld pr0n) then your instance might end up HOSTING (ie. actively propagating) that content... if I hosted my own instance I wouldn't want to federate at all out of fear of the legal implications, and I'd be constantly paranoid about possibly facilitating illegal stuff like that without even noticing...

Imho, a decentralized system in which content providers are separate from the user account providers would make more sense. Then the content providers would have full control over what they are hosting, and also control over which user accounts (or whole account providers) are banned from posting / allowed to post. And it would still give users the freedom to navigate across different content providers seamlessly with the same account and interact with multiple content providers, sort of like with the fediverse, without having to log in to each content provider.
