frog

@frog@beehaw.org


frog ,

Yeah, let's face it, AI will be used to make sure the super-wealthy pay even less tax than they already do. Neither AI nor its carbon emissions will ever be taxed.

frog ,

The other thing that needs to die is hoovering up all data to train AIs without the consent of, and compensation to, the owners of the data. Most of the more frivolous uses of AI would disappear at that point, because they would be non-viable financially.

frog ,

I remember reading that a little while back. I definitely agree that the solution isn't extending copyright, but extending labour laws on a sector-wide basis. Because this is the ultimate problem with AI: the economic benefits are only going to a small handful, while everybody else loses out because of increased financial and employment insecurity.

So the question that comes to mind is exactly how, on a practical level, it would work to make sure that when a company scrapes data, trains an AI, and then makes billions of dollars, the thousands or millions of people who created the data all get a cut after the fact. Because particularly in the creative sector, a lot of people are freelancers who don't have a specific employer they can go after. From a purely practical perspective, paying artists before the data is used makes sure all those freelancers get paid. Waiting until the company makes a profit, taxing it out of them, and then distributing it to artists doesn't seem practical to me.

frog , (edited )

Creating same-y pieces with AI will not improve the material conditions of artists' lives, either. All that does is drag everyone down in a race to the bottom on who can churn out the most dreck the most quickly. "If we advance the technology enough, everybody can have it on their device and make as much AI-generated crap as they like" does not secure stable futures for artists.

frog ,

I did actually specify that I think the solution is extending labour laws to cover the entire sector, although it seems that you accidentally missed that in your enthusiasm to insist that the solution is having AI on more devices. However, so far I haven't seen any practical solutions as to how to extend labour laws to protect freelancers who will lose business to AI but don't have a specific employer that the labour laws will apply to. Retroactively assigning profits from AI to freelancers who have lost out during the process doesn't seem practical.

frog , (edited )

Labour law alone, in terms of the terms under which people are employed and how they are paid, does not protect freelancers from the scenario that you, and so many others, advocate for: a multitude of individuals all training their own AIs. No AI advocate has ever proposed a viable and practical solution to the large number of artists who aren't directly employed by a company but are still exposed to all the downsides of unregulated AI.

The reality is that artists need to be paid for their work. That needs to happen at some point in the process. If AI companies (or individuals setting up their own customised AIs) don't want to pay in advance to obtain the training data, then they're going to have to pay from the profits generated by the AI. Continuing the status quo, where AIs can use artists' labour without paying them at all is not an acceptable or viable long-term plan.

frog ,

Destroying the rights of artists to the benefit of AI owners doesn't achieve that goal. Outside of the extremely wealthy who can produce art for art's sake, art is a form of skilled labour that is a livelihood for a great many people, particularly the forms of art that are most at risk from AI - graphic design, illustration, concept art, etc. Most of the people in these roles are freelancers who aren't in salaried jobs that can be regulated with labour laws. They are typically commissioned to produce specific pieces of art. I really don't think AI enthusiasts have any idea how rare stable, long-term jobs in art actually are. The vast majority of artists are freelancers: it's essentially a gig-economy.

Changes to labour laws protect artists who are employees - which we absolutely should do, so that companies can't simply employ artists, train AI on their work, then fire them all. That absolutely needs to happen. But that doesn't protect freelancers from companies that say "we'll buy a few pieces from that artist, then train an AI on their work so we never have to commission them again". It is incredibly complex to redefine commissions as waged employment in such a way that the company can both use the work for AI training while the artist is ensured future employment. And then there's the issue of the companies that say "we'll just download their portfolio, then train an AI on the portfolio so we never have to pay them anything". All of the AI companies in existence fall into this category at present - they are making billions on the backs of labour they have never paid for, and have no intention of ever paying for. There seems to be no rush to say that they were actually employing those millions of artists, who are now owed back-pay for years worth of labour and all the other rights that workers protected by labour laws should have.

frog ,

Taking artists' work without consent or compensation goes against the spirit of open source, though, doesn't it? The concept of open source relies upon the fact that everyone involved is knowingly and voluntarily contributing towards a project that is open for all to use. It has never, ever been the case that if someone doesn't volunteer their contributions, their work should simply be appropriated for the project without their consent. Just look at open source software: that is created and maintained by volunteers, and others contribute to it voluntarily. It has never, ever been okay for an open source dev to simply grab whatever they want to use if the creator hasn't explicitly released it under an applicable licence.

If the open source AI movement wants to be seen as anything but an enemy to artists, then it cannot just stomp on artists' rights in exactly the same way the corporate AIs have. Open source AIs need to have a conversation about consent and informed participation in the project. If an artist chooses to release all their work under an open source licence, then of course open source AIs should be free to use it. But simply taking art without consent or compensation with the claim that it's fine because the corporate AIs are doing it too is not a good look and goes against the spirit of what open source is. Destroying artists' livelihoods while claiming they are saving them from someone else destroying their livelihoods will never inspire the kind of enthusiasm from artists that open source AI proponents weirdly feel entitled to.

This is ultimately my problem with the proponents of AI. The open source community is, largely, an amazing group of people whose work I really respect and admire. But genuine proponents of open source aren't so entitled that they think anyone who doesn't voluntarily agree to participate in their project should be compelled to do so, an attitude that sits at the centre of the open source AI community. Open source AI proponents want to have all the data for free, just like the corporate AIs and their tech bro CEOs do, cloaking it in the words of open source while undermining everything that is amazing about open source. I really can't understand why you don't see that forcing artists to work for open source projects for free is just as unethical as corporations doing it, and the more AI proponents argue that it's fine because it's not evil when they do it, the more artists will see them as being just as evil as the corporations. You cannot force someone to volunteer.

frog ,

When the purpose of gathering the data is to create a tool that destroys someone's livelihood, the act of training an AI is not merely "observation". The AIs cannot exist without using content created by other people, and the spirit of open source doesn't include appropriating content without consent - especially when it is not for research or educational purposes, but to create a tool that will be used commercially, which open source ones inevitably will be, given the stated purpose is to compete with corporate models.

No argument you can make will convince me that what open source AI proponents are doing is any less unethical or exploitative than what the corporate ones are. Both feel entitled to artists' labour in exchange for no compensation, and have absolutely no regard for the negative impacts of their projects. The only difference between CEO AI tech bros and open source AI tech bros is the level of wealth. The arrogant entitlement is just the same in both.

frog ,

The problem is that undermining artists by dispersing open source AI to everyone, without a fundamental change in copyright law that removes power from the corporations as well as individual artists, and a fundamental change in labour law, wealth distribution, and literally everything else, just screws artists over. Proceeding with open source AI, without any other plans or even a realistic path to a complete change in our social and economic structure, is basically just saying "yeah, we'll sort out the problems later, but right now we're entitled to do whatever we want, and fuck everybody else". And that is the mindset of the tech bros, and the fossil fuel industry, and so, so many others.

AI should be regulated into oblivion until such a time as our social and economic structures can handle it, i.e., when all the power and wealth has been redistributed away from the 1% and evenly into the hands of everyone. Open source AI will not change the power that corporations hold. We know this because open source software hasn't meaningfully changed the power they hold.

I'm also sick of the excuse that AI helps people express themselves, like artistic expression has always been behind some impenetrable wall, with some gatekeeper only allowing a chosen few access. Every single artist had to work incredibly hard to learn the skill. It's not some innate talent that is gifted to a lucky few. It takes hard work and dedication, just like any other skill. Nothing has ever stopped anyone learning that except the willingness to put the effort in. I don't think people who tried one doodle and gave up because it was hard are a justifiable reason to destroy workers' livelihoods.

frog ,

I'm feeling the need to do a social media detox, including Beehaw. Pro-AI techbros are getting me down.

Shockingly, keeping Instagram active. My feed there is nothing but frogs, greyhounds, and art from local artists, and detoxing from stuff that is improving my mood rather than making it worse seems unnecessary.

frog ,

Kind of depressing that the answer to not being replaced by AI is "learn to use it and spend your day fixing its fuckups", like that's somehow a meaningful way to live for someone who previously had an actual creative job.

frog ,

AI is also going to run into a wall because it needs continual updates with more human-made data, but the supply of all that is going to dry up once the humans who create new content have been driven out of business.

It's almost like AIs have been developed and promoted by people who have no ability to think about anything but their profits for the next 12 months.

frog ,

Yep. Life does just seem... permanently enshittified now. I honestly don't see it ever getting better, either. AI will just ensure it carries on.

frog ,

Yep. I used to be an accountant, and that's how trainees learn in that field too. The company I worked at had a fairly even split between clients with manual and computerised records, and trainees always spent the first year or so almost exclusively working on manual records because that was how you learned to recognise when something had gone wrong in the computerised records, which would always look "right" on a first glance.

frog ,

But this is the point: the AIs will always need input from some source or another. Consider using AI to generate search results. Those will need to be updated with new information and knowledge, because an AI that can only answer questions related to things known before 2023 will very quickly become obsolete. So it must be updated. But AIs do not know what is going on in the world. They have no sensory capacity of their own, and so their inputs require data that is ultimately, at some point in the process, created by a human who does have the sensory capacity to observe what is happening in the world and write it down. And if the AI simply takes that writing without compensating the human, then the human will stop writing, because they will have had to get a different job to pay for food, rent, and so on.

No amount of "we can train AIs on AI-generated content" is going to fix the fundamental problem that the world is not static and AIs don't have the capacity to observe what is changing. They will always be reliant on humans. Taking human input without paying for it disincentivises humans from producing content, and this will eventually create problems for the AI.

frog ,

The scales of the two are nowhere near comparable. A human can't steal and regurgitate so much content that they put millions of other humans out of work.

frog ,

I did not know the exact wording of this guidance, but this is basically the strategy I use. I've always figured that because I prepare for my journeys, I am never in such a rush that I need to put someone else's life at risk in order to pass them quicker - it's not like it's going to make a difference to my day if I arrive at my destination 2 minutes later, but it'll make a huge difference to someone else's day if I rush past a cyclist when it's not safe.

frog ,

I honestly don't get why so many people are so reckless and impatient on the roads. I've seen some people being really fucking stupid around cyclists and motorcyclists. One incident haunts me, because I know someone would have been severely injured, maybe killed, if I hadn't been quick enough to get out of the way of an impatient person overtaking in a stupid place.

And it's just like... why? Just leave home a few minutes earlier!

frog ,

There may not have been any intentional design, but humans are still meant to eat food, drink water, and breathe oxygen, and going against that won't lead to a good end.

frog ,

Just gonna say that I agree with you on this. Humans have evolved over millions of years to emotionally respond to their environment. There's certainly evidence that many of the mental health problems we see today, particularly at the scale we see, are in part due to the fact that we evolved to live in a very different way to our present lifestyles. And that's not about living in cities rather than caves, but more to do with the amount of work we do each day, the availability and accessibility of essential resources, the sense of community and connectedness with small social groups, and so on.

We know that death has been a constant of our existence for as long as life has existed, so it logically follows that dealing with death and grief is something we've evolved to do. Namely, we evolved to grieve for a member of our "tribe", and then move on. We can't let go immediately, because we need to be able to maintain relationships across brief separations, but holding on forever to a relationship that can never be continued would make any creature unable to focus on the needs of the present and future.

AI simulacrums of the deceased give the illusion of maintaining the relationship with the deceased. It is entirely possible that this will prolong the grieving process artificially, when the natural cycle of grieving is to eventually reach a point of acceptance. I don't know for sure that's what would happen... but I would want to be absolutely sure it's not going to cause harm before unleashing this AI on the general public, particularly vulnerable people (which grieving people are).

Although I say that about all AI, so maybe I'm biased by the ridiculous ideology that new technologies should be tested and regulated before vulnerable people are experimented on.

frog ,

Sure, you should be free to make one. But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!), there are valid questions about whether that will cause them harm rather than help - and grieving people do not always make the most rational decisions. They can very easily be convinced that interacting with AI-you would be good for them, but it actually prolongs their grief and makes them feel worse. Grieving people are vulnerable, and I don't think AI companies should be free to prey on the vulnerable, which is a very, very realistic outcome of this technology. Because that is what companies do.

So I think you need to ask yourself not whether you should have the right to make an AI version of yourself for those who survive your death... but whether you're comfortable with the very likely outcome that an abusive company will use their memories of you to exploit their grief and prolong their suffering. Do you want to do that to people you care about?

frog ,

Think of how many family recipes could be preserved. Think of the stories that you can be retold in 10 years. Think of the little things that you’d easily forget as time passes.

An AI isn't going to magically know these things, because these aren't AIs based on brain scans preserving the person's entire mind and memories. They can only learn from the data they're given. And fortunately, there's a much cheaper way for someone to preserve family recipes and other memories that their loved ones would like to hold onto: they could write it down, or record a video. No AI needed.

frog ,

I also suspect, based on the accuracy of AIs we have seen so far, that their interpretation of the deceased's personality would not be very accurate, and would likely hallucinate memories or facts about the person, or make them "say" things they never would have said when they were alive. At best it would be very Uncanny Valley, and at worst would be very, very upsetting for the bereaved person.

frog ,

Given the husband is likely going to die in a few weeks, and the wife is likely already grieving for the man she is shortly going to lose, I think that still places both of them into the "vulnerable" category, and the owner of this technology approached them while they were in this vulnerable state. So yes, I have concerns, and the fact that the owner is allegedly a friend of the family (which just means they were the first vulnerable couple he had easy access to, in order to experiment on) doesn't change the fact that there are valid concerns about the exploitation of grief.

With the way AI techbros have been behaving so far, I'm not willing to give any of them the benefit of the doubt about claims of wanting to help rather than make money - such as using a vulnerable couple to experiment on while making a "proof of concept" that can be used to sell this to other vulnerable people.

frog ,

I absolutely, 100% agree with you. Everything I have seen about the development of AI so far has suggested that the vast majority of its uses are grotesque. The few edge cases where it is useful and helpful don't outweigh the massive harm it's doing.

frog ,

Nope, I'm just not giving the benefit of the doubt to the techbro who responded to a dying man's farewell posts online with "hey, come use my untested AI tool!"

frog ,

Yeah, I think you could be right there, actually. My instinct on this from the start is that it would prevent the grieving process from completing properly. There's a thing called the gestalt cycle of experience, which describes a normal, natural mechanism for a person going through a new experience, whether it's good or bad. A lot of unhealthy behaviour patterns stem from a part of that cycle being interrupted: you need to go through the cycle for everything that happens in your life, reaching closure so that you're ready for the next experience to begin (most basic explanation), and when that doesn't happen properly, it creates unhealthy patterns that influence everything that happens after that.

Now I suppose, theoretically, there's a possibility that being able to talk to an AI replication of a loved one might give someone a chance to say things they couldn't say before the person died, which could aid in gaining closure... but we already have methods for doing that, like talking to a photo of them or to their grave, or writing them a letter, etc. Because the AI still creates the sense of the person still being "there", it seems more likely to prevent closure - because that concrete ending is blurred.

Also, your username seems really fitting for this conversation. :)

frog ,

Will admit I have not read the article, so this is my comment based solely on reading the headline:

Well that's not creepy at all.

frog ,

As many a person has said before when an outlandish and unproven claim is made: pics or it didn't happen.

frog ,

Oh, that is such a beautiful moth - and a really lovely photo too! <3

frog ,

Finished with university until September now, and have most of my grades for this year back (still waiting on one module). Having mixed feelings about the grades, because I know objectively that they're excellent, yet I still feel like I could have done better. I still got better grades than everyone else. I will acknowledge the two may be connected: when you constantly feel like you could be doing better, you push yourself harder. Even so, I did learn a ridiculous amount this year, and produced some work I'm really proud of.

The end outcome of this is, of course, that I'm exhausted, yet simultaneously having trouble slowing down. Having been pushing at full speed ahead for many months, I'm now feeling weirded out by not having any assignments to do or deadlines to meet. If I had to summarise what my brain is doing right now, it would be:

??????

There is also tangible relief to be away from... that guy. I can't remember if I posted about it at the time but basically he got caught lying about his part of the group project, namely that he had finished it when he had not even started it. So with 24 hours before the deadline, we essentially kicked him off the team and I did his section of the project. A week's worth of work packed into a single evening. Because he's using his neurodiversity as an excuse for not doing anything for half the year, they're probably going to be reluctant to kick him out... but that's a problem for next September. For now, I'm just going to enjoy not having to deal with the useless, arrogant prick for a few months.

frog ,

For sure, it's definitely been valuable experience! I would like to think in a working environment, things would be a bit... easier, I guess, since a big part of the problem was this project didn't have any effective leadership that could challenge the asshole on his lack of contributions. Whereas I would hope that in an actual studio, department heads wouldn't let someone produce absolutely no work for months while blindly believing every excuse, which is, sadly, what the leader of this project did. The lecturer knew what was up, because despite taking a hands-off approach, he was watching far more closely than most of the class realised - but he let it play out this way precisely because it's a good learning experience. Suffice to say, I got an extra few points on my grade because I stepped in at the last minute.

frog ,

Wonderful advice, thank you! I still haven't really worked out what actually helped to destress me and what didn't (aside from venting - just feeling heard makes a difference!), because I don't think I ever really destressed until the deadlines had passed.

frog ,

I think it's very much an artefact of religious attitudes at the time science started advancing during the Industrial Revolution, which held up humans as being superior to animals (and also held that people before the Industrial Revolution were ignorant and unenlightened). We have legal records from the centuries before that where animals were held to have legal/moral equivalency to humans. This includes instances of animals being punished for crimes, of course, but there's also a case of a court ruling in favour of weevils having rights over a particular field, so the farmer had to let them have it - the record of whether the weevils abided by this agreement was... eaten by weevils. So I suspect that back then people were a lot more open to the idea that animals had many of the same capabilities as us. Christianity, especially the "humans have dominion over everything else" strains of it that we've had for the last 150 years or so, likely does not reflect the attitude of all humans for the entirety of history - although of course in the past, people didn't have the scientific knowledge needed to prove it conclusively.

frog ,

I have to agree. I've used a great many software packages over the years, but having been given an Adobe Creative Cloud subscription by my university, as several of Adobe's programs are required for the degree I'm doing, I've been very annoyed to discover that the alternatives really aren't on the same level. They work, sure. You can get the job done with them. But I am genuinely finding Photoshop to be significantly more powerful than everything else I've used. And it's really annoying because I've never liked Adobe as a company.

frog ,

The one thing I've been dissatisfied with Photoshop for, in comparison to another app, is that its traditional media analogues come nowhere close to Painter's, and I've not been able to get any brushes set up in a way that replicates them. There are professionals who use Painter in addition to Photoshop because of that, and I expect I will as well - but when I'm in Painter, I really notice the absence of features I use a lot in Photoshop.

frog ,

Been a while since I used Krita, so it's hard to compare Krita from 3 or 4 years ago with Photoshop 2023, but it was okay. Better than GIMP, but unless there have been some major changes, it doesn't have anywhere near the versatility in tools and filters that Photoshop has.

This feels like the key difference between Photoshop and the others. There's an awful lot of stuff that previously I would have to do manually, sometimes over several hours, that Photoshop can do in seconds, either because there's a tool or filter for it, or sometimes just because Photoshop is so much more responsive. This is really hard to quantify in an objective way, far more so than pointing out whether a feature is present or absent, but... I use an art tablet and Photoshop just responds to the pen better.

So like it's not really that it's impossible to do amazing work with the free apps, it'll just take a lot longer. I liked your analogy in your other comment, about the e-bike vs pickup truck: you definitely can move that half a ton of crushed stone with an e-bike, but it'll be quicker and less work with a pickup truck.

frog ,

That would probably work for hobbyists, but I have my doubts that professionals, who rely on Adobe products for their livelihood, could use unsuitable software for years in the hopes that volunteer devs will eventually add the features they need. In the other post about this topic, someone commented that GIMP's devs are refusing to fix problems that are repelling new users, which is not going to encourage Adobe users to make the switch. GIMP still doesn't have fully functioning, reliable non-destructive editing, which is 100% essential for anyone beholden to a boss or client who is going to change their minds a couple of times between now and next month.

Adobe is big because of their userbase, but their userbase is big because they make genuinely powerful software that fits the needs of professionals. The free options (and the cheap proprietary options) are not there yet, and probably never will be. Professionals aren't going to switch until the features they need are there (because seriously, why would anyone use a tool for their job that doesn't actually allow them to do their job properly?), but the features aren't going to be added until the professionals switch over. Catch-22.

frog ,

I don't particularly want to jump between a dozen different apps to have access to every single tool and filter I use, especially when, even using a single file format (PSD), not every app treats layers in the same way. In a detailed digital painting, you can very easily have hundreds of layers, so it's absolutely a deal-breaker if your layer groupings or group masks are destroyed when switching between apps.

frog ,

When AI can sit in a large chair and make money off the backs of others all day

Arguably this is the only thing AI can do. Would AI even exist if not for the huge datasets derived from other people's hard work? All the money AI will generate is based exclusively off the backs of others.

frog ,

There's also the risk that the AI might decide that the best way of testing the company's new product is to unleash it on the general public without any safety testing or thought of the consequences. That would be an absolute disaster.

frog ,

The "Willy Wonka Experience" event comes to mind. The images on the website were so obviously AI-generated, but people still coughed up £35 a ticket to take their kids to it, and were then angry that the "event" was an empty warehouse with a couple of plastic props and three actors trying to improvise because the script they'd been given was AI-generated gibberish. Straight up scam.

frog ,

Having flicked through a few spots in the video, and being British, my conclusion is this:

Britain has got some major problems, many of which there is a lack of political will to fix, to the point that I could identify the general subject of many sections of this video just by the title on the timestamps. But the video is still pretty rubbish and overly sensationalised, and some of the opinions presented (smoking bans being bad, switching to an American-style insurance-based healthcare system being a good idea) are just straight up idiotic.

I would still rather live here than America though. Although Britain has been badly mismanaged by the Conservatives over the last 14 years, it is less polarised than the US, and the electorate as a whole are broadly tolerant and compassionate people who have very little tolerance of or respect for culture wars. The Conservatives insisting on trying to make this election about culture wars is a contributing factor into why their poll ratings are getting worse.

frog ,

UK citizens can also opt out, as the Data Protection Act 2018 is the UK's implementation of GDPR and confers all of the same rights.

In my opt out, I have also reminded them of their obligation to delete data when I do not consent to its use, so since I have denied consent, any of my data that has been used must be scrubbed from the training sets and resulting AI outputs derived from the unauthorised use of my data.

Sadly, having an Instagram account is unavoidable for me. Networking is an important part of many creatives' careers, and if the bulk of your connections are on Instagram, you have to be there too.

frog ,

AI programs are already dominated by bad actors, and always will be. OpenAI and the other corporations are every bit the bad actors as Russia and China. The difference between Putin and most techbros is as narrow as a sheet of paper. Both put themselves before the planet and everyone else living on it. Both are sociopathic narcissists who take, take, take, and rely on the exploitation of those poorer and weaker than themselves in order to hoard wealth and power they don't deserve.

frog ,

Had OpenAI not released ChatGPT, making it available to everyone (including Russia), there are no indications that Russia would have developed their own ChatGPT. Literally nobody has made any suggestion that Russia was within a hair's breadth of inventing AI and so OpenAI had better do it first. But there have been plenty of people making the entirely valid point that OpenAI rushed to release this thing before it was ready and before the consequences had been considered.

So effectively, what OpenAI have done is start handing out guns to everyone, and is now saying "look, all these bad people have guns! The only solution is everyone who doesn't already have a gun should get one right now, preferably from us!"

frog ,

Well, let's see about the evidence, shall we? OpenAI scraped a vast quantity of content from the internet without consent or compensation to the people that created the content, and leaving aside any conversations about whether copyright should exist or not, if your company cannot make a profit without relying on labour you haven't paid for, that's exploitation.

And then, even though it was obvious from the very beginning that AI could very easily be used for nefarious purposes, they released it to the general public with guardrails that were incredibly flimsy and easily circumvented.

This is a technology that required being handled with care. Instead, its lead proponents are of the "move fast and break things" mentality, when the list of things that can be broken is vast and includes millions of very real human beings.

You know who else thinks humans are basically disposable as long as he gets what he wants? Putin.

So yeah, the people running OpenAI and all the other AI companies are no better than Putin. None of them care who gets hurt as long as they get what they want.
