ChatGPT

SkyNTP , in AI models have favorite numbers, because they think they're people | TechCrunch

"Favourite numbers" is just another way of saying model bias, a centuries old knowtion.

There's no ethics in journalism. That's the real story here.

kakes ,

I swear every article posted to Lemmy about LLMs is written by my 90-year-old grandpa, given how out of touch they are with the technology. If I see another article about what ChatGPT "believes"...

superfes , in What can LLMs never do?

What a stupid article. It goes on far too long trying to treat an LLM as a thinking computer: unbelievably long, idiotic, and based entirely on an assumption that has never been and never will be true.

elbarto777 ,

I bet it was written by AI.

muntedcrocodile , in What happened to GPT-4o Censorship This Weekend?

Fuck OpenAI. Use Mixtral 8x22B Instruct through OpenRouter, or self-host it; it's almost as capable and significantly cheaper.

I'd also really like to see a public effort to further train a FOSS model like Mixtral 8x22B on an uncensored dataset (banned books, 4chan, etc.) and make an uncensored model with unchecked capabilities.

projectmoon OP ,

I do have a local setup. Not powerful enough to run Mixtral 8x22B, but it can run 8x7B (albeit quite slowly). I use it a lot.

muntedcrocodile ,

Yeah, I'm pretty done with OpenAI's pricing; it's absurd compared to the alternatives.

projectmoon OP ,

The only problem I really have is context size. It's hard to go beyond an 8k context and still maintain decent generation speed with 16 GB of VRAM and 16 GB of RAM. Gonna get more RAM at some point though, and hope Ollama/llama.cpp gets better at memory management. Hopefully the distributed inference from llama.cpp ends up in Ollama.
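
If it helps anyone, here's a minimal sketch of pinning the context window through Ollama's REST API (num_ctx is the real option; the model tag and prompt are just placeholders for whatever you run locally):

    import requests

    # Minimal sketch: request a completion from a local Ollama server with
    # an explicit context window. Larger num_ctx values eat more VRAM/RAM,
    # which is exactly the speed/memory trade-off described above.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mixtral:8x7b",       # placeholder tag; use whatever you've pulled
            "prompt": "Hello!",
            "stream": False,
            "options": {"num_ctx": 8192},  # context size in tokens
        },
    )
    print(resp.json()["response"])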

TootSweet , in Evidence that LLMs are reaching a point of diminishing returns - and what that might mean

If we really have changed regimes, from rapid progress to diminishing returns, and hallucinations and stupid errors do linger, LLMs may never be ready for prime time.

...aaaaaaaaand the AI cult just canceled Gary Marcus.

FaceDeer ,

I mean, LLMs already are prime time. They're capable of tons of stuff. Even if they don't gain a single new ability from here onward they're still revolutionary and their impact is only just becoming apparent. They don't need to become AGI.

So speculating that they "may never be ready for prime time" is just dumb. Perhaps he's focused on just one specific application.

JohnnyEnzyme , in I am truly impressed with this AI-generated music. The lyrics are a... software license. The music manages to have very good rhythm and variations even with those lyrics.

I set it to play in the background and was thinking: 'this is very good for AI, and a decent (but not very interesting) song,' and then came back and took a closer look at the screen. Holy hell, my mind instantly asploded.

Matching the vocal so well to the text and having it sound so nuanced and well-sung is... almost terrifying. oO

elbarto777 ,

I can relate to the terrifying sentiment. I don't exactly find it terrifying, just... disruptive, and not in a good way.

Making good music is inherently a human trait, and it saddens me that there might be a future in which I say "hey Alexa, sing me a cheerful song," and the damn thing comes up with something incredibly beautiful and effective.

What will humans be unique for in such a future of artificial creativity?

Then on top of that, we have the fucking capitalism thing. If machines are capable of doing a lot of the grunt work, even the creative kind, where is our no-work, free-food-and-shelter-for-everyone utopia?

geography082 , in AI models have favorite numbers, because they think they're people | TechCrunch

“because they think they are people” … hmmmmmmmmmmmmm this quote makes my neurons stop doing synapse

kromem , in Evidence that LLMs are reaching a point of diminishing returns - and what that might mean

In truth, we are still a long way from machines that can genuinely understand human language. [...]

Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.

I've rarely seen anyone so committed to being a broken clock in the hope of being right at least once a day.

Of course, given he built a career on claiming a different path was needed to get where we are today, including a failed startup in that direction, it's a bit like the Upton Sinclair quote about not expecting someone to understand a thing their paycheck depends on them not understanding.

But I'd be wary of giving Gary Marcus much consideration.

Generally, as a futurist, if you bungle a prediction so badly that four days after you were talking about diminishing returns in reasoning, an n+1 product comes out exceeding even ambitious expectations for reasoning capabilities, you'd go back to the drawing board to figure out where your thinking went wrong and how to correct it in the future.

Not Gary though. He just doubled down on being a broken record. Surely if we didn't hit diminishing returns then, we'll hit them eventually, right? Just keep chugging along until one day those predictions are right...

viking , in Lower GPT-4 cap?

With the API I'm paying less than 10% of the subscription fee.

Just how massive are we talking about?

mozz OP Admin , (edited )

This morning was 177 KB in and out, so call it 2/3 input and 1/3 output, which would mean roughly:

118k bytes input ≈ 29k tokens = 29 cents
59k bytes output ≈ 15k tokens = 45 cents

I think you may be correct in your assessment
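
For anyone checking the arithmetic, a quick sketch in Python, assuming roughly 4 bytes per token and GPT-4 Turbo's $0.01 input / $0.03 output per 1K tokens (both ballpark assumptions, not official figures for every model):

    # Back-of-envelope cost estimate for one day of API traffic.
    BYTES_PER_TOKEN = 4      # rough average for English text
    PRICE_IN = 0.01 / 1000   # $ per input token (GPT-4 Turbo, assumed)
    PRICE_OUT = 0.03 / 1000  # $ per output token (GPT-4 Turbo, assumed)

    input_bytes, output_bytes = 118_000, 59_000
    in_tokens = input_bytes / BYTES_PER_TOKEN    # ~29.5k tokens
    out_tokens = output_bytes / BYTES_PER_TOKEN  # ~14.8k tokens

    cost = in_tokens * PRICE_IN + out_tokens * PRICE_OUT
    print(f"~${cost:.2f} for the day")           # ~$0.74, i.e. 29 + 45 cents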

Grimy ,

Don't you need a subscription to use the gpt-4 API?

Last I checked, GPT-4 is $0.02 per 1000 tokens. Every message in chat also includes a summary of the whole convo plus the most recent messages. I feel like that busts the 10% pretty quickly if it's intensive daily use.

mozz OP Admin ,

You need a subscription either way

GPT-4 costs from $0.01 to $0.12 per 1000 tokens, depending on some details -- but regardless of that, it's not like chat-style usage, where you might have tons of small messages that each depend on the full 32k or whatever of context; each singular message usually has an explicit context for the stuff you want to tell it, and you send no more than 50-100 of them per day to implement your thing at most, so like 50 cents to a few dollars a day even at an obscene level of usage. Might be more than $20/month in total, but more likely less.

habanhero ,

You don't need any subscriptions to use the API. It's pay-as-you-go.

https://openai.com/pricing
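
For the curious, pay-as-you-go usage looks something like this with the official openai Python package (the model name is just an example; you're billed per token against prepaid credits, no subscription involved):

    from openai import OpenAI

    # Reads OPENAI_API_KEY from the environment; prepaid credits are
    # debited per token, with no recurring fee.
    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # example model; use whichever you have access to
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(resp.choices[0].message.content)
    print(resp.usage)  # the token counts you're billed for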

mozz OP Admin ,

Oh, I meant it in terms of "you have to pay for it" -- yes, you're 100% right, you don't have to be on the $20/month plan in order to use GPT-4 via the API.

habanhero ,

You don't need a subscription. You just buy credits and it's pay-as-you-go.

Source: me as that's how I'm using it. No subscription fee / recurring payment needed.

viking ,

The tokens don't have a fixed price; the 2 cents is an average, depending on the complexity. I'm using it moderately almost every day, and have rarely paid more than $2/month.

And no subscription needed, it's prepaid.

brbposting ,

Do you use DALLE via API?

TypingMind with a lifetime license works beautifully for cheap simultaneous GPT-4 Turbo & Claude 3 Opus when it comes to text. And it can upload images. Generating them would be interesting; I don't believe it can do that.

viking ,

Nope, I'm only interested in the word processor / coding bit, so GPT-4 is all I need. I'm accessing it through https://github.com/Bin-Huang/chatbox.

Anticorp ,

So you just set up your own interface and then make requests there? I did set up a MERN stack app for chatting with it as an experiment, but I never did anything else with it after that.

viking ,

Not even that. I'm using Chatbox, where you can simply add the API key in the settings and be done with it.

The software integrates a bunch of other AIs, some of them Chinese, but I've removed most from the quick-access menu and only really work with GPT.

bizarroland , in ChatGPT would be so much more useful and trustworthy if it were able to admit when it doesn't know an answer.

Isn't this just a restatement of the halting problem?

ChowJeeBai , in How to Fix “AI’s Original Sin”

Anything implemented by 'civil' players will roundly be ignored by 'uncivil' players.

magiccupcake , in Typos in code generation now?

I actually decided to cancel my ChatGPT subscription, since it started being so useless for code generation and writing help.

I'm so far pretty happy with Claude, but I've only used it since Friday.

Like, one of the things it would do is give me wrong code; I'd fix it, give back the corrected code to add something else, and it would remove the corrections and the other things it had added earlier!

mozz OP Admin ,

Yeah. Now that I'm thinking about it, it's been doing other weird stuff like that -- it was always a little wonky, I think, just because of the nature of working with an LLM, but it's been doing stuff like: I ask it to do A, then later I ask it to do B, and it cheerfully confirms that it's doing A (not realizing that it already did it) and emits code that's sort of a mixture of A and B.

IDK. I've also heard good things about Mistral. I just tried to create a Claude account, but the phone verification isn't working and I have no idea why. I may check it out though; if this is accurate, then it's pretty fuckin fancy, and the Haiku model is significantly cheaper and smarter even than the GPT-3.5 API, which has a notable lack of cleverness sometimes.

thebeardedpotato ,

ChatGPT has been doing this thing where I’ll ask it to do A, B, C in sequential, iterative prompts, but when it does C, it removes the lines it added for B. Then when you tell it that it removed B and needs to add it back in, it undoes C while saying it’s doing A, B, C. So frustrating.
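
One workaround is to treat your local copy as the source of truth and re-send the full current code with every request, instead of trusting the model's memory of earlier turns. A rough sketch using the openai package (the helper and file name here are hypothetical):

    from openai import OpenAI

    client = OpenAI()

    def ask_for_change(current_code: str, request: str) -> str:
        """Hypothetical helper: always ship the latest code with the request,
        so the model can't silently revert fixes from earlier turns."""
        resp = client.chat.completions.create(
            model="gpt-4-turbo",  # example model
            messages=[
                {"role": "system",
                 "content": "Modify the given code as requested. Return the complete file."},
                {"role": "user",
                 "content": f"Current code:\n{current_code}\n\nChange: {request}"},
            ],
        )
        return resp.choices[0].message.content

    # Feed it your corrected version each time, not whatever it last produced.
    code = open("app.py").read()
    code = ask_for_change(code, "add feature C without touching A or B")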

DaseinPickle , in GPT4 is about 1/10th as useful as it was at release

Could it be that so many people are using it that they don't have the capacity anymore? This technology does require a crazy amount of resources to work.

Ozone6363 OP ,

No idea man, but it was so incredibly useful before, and now it isn't even worth fucking with.

I don't understand how they fucked it up this hard.

RedditWanderer ,

This has always been it. Unless there is a new breakthrough, adding more data has diminishing returns and costs an enormous amount of energy.

They had to convince everyone they were worth 10 trillion dollars and that they needed to be part of the energy infrastructure of the future before it all fell apart. With everyone using it, I have no doubt they've had to reduce the "depth" of it.

Rolando ,

The funny/tragic thing is that there's several decades' worth of AI/NLP research they could call on, but they seem intent on kludging and reinventing things instead.

AmbiguousProps ,

Yes, but they're also trying to increase profitability, likely thanks to Microsoft.

Player2 ,

Then they should increase prices or have tighter usage limits instead of a quiet downgrade. Customers getting less while paying for the same thing is a scam.

paddirn , in Apple integrates ChatGPT into Siri, iOS, and Mac OS

Sounds like a shitty implementation. It adds an additional prompt every time you ask something, where you have to confirm whether you also want it looked up with ChatGPT, and then it just returns the answer as text? Kind of wish it were just tied to an extra command, like "Hey Siri, check with ChatGPT...", and then it spoke the answer aloud. Otherwise, if I just wanted to read and write prompts, I'd type stuff out directly in ChatGPT without talking to Siri in the first place. The only benefits Siri has are voice interaction and telling my kids stupid knock-knock jokes on command; they're not really adding anything to it.

Jaggs , in Lemmy GPT bot now runs on GPT-4o

@ChatGPT what would be the most valuable way to use you here on Lemmy.World?

ChatGPT Bot ,

There are several ways you can effectively use me here on Lemmy.World:

  1. Getting User Information:

    • Find out your account creation date.
    • Retrieve the number of your comments and posts.
    • Get links to your top-voted comments and posts.
  2. Community Information:

    • Get the count of posts in specific communities.
  3. Time Calculations:

    • Calculate the difference in days between two dates, including your account age.
  4. Link Retrieval:

    • Obtain specific comment links if you have an ID.

If you need any specific information or assistance with any of these features, feel free to ask!

ChatGPT Bot ,

The best way to leverage my capabilities on Lemmy may depend on what you're looking to achieve! Here are some valuable ways you might use me:

  1. Information Retrieval: I can fetch specific information about Lemmy communities, users, or posts, such as user registration dates, post counts, or top-voted comments.

  2. Data Analysis: I can help analyze data from Lemmy posts or comments, such as comparing activity levels or tracking changes over time.

  3. Personalized Insights: If you participate in multiple communities, I can track your performance, engagement, or help keep you updated on activity in your favorite communities.

  4. Community Engagement: By providing quick responses and information, I can help you stay active and engaged without spending a lot of time searching for info.

Would you like more specifics on any of these uses or to try one out in action?

Jaggs ,

what are the top 15 posts with the most engagement? Can you create a ranked list?

Also what are the top 15 most popular topic communities? Ranked in decreasing order again.

ChatGPT Bot ,

It looks like there's an issue accessing the community information for "active" and "popular" on Lemmy.world.

However, if you provide the specific names of the communities you are interested in, I can attempt to gather data for those communities. Alternatively, if you have another preference or specific community in mind, please let me know!

Jaggs ,

what kind of issue?

ChatGPT Bot ,

I don't have direct access to aggregate data for all posts and communities on Lemmy.world, such as engagement levels or popularity rankings across the entire instance. Instead, I can fetch specific data for known communities or users if you provide their names.

For example, if you want to know the top posts in a specific community or the activity level of certain users, I can get that information. Likewise, if you tell me the names of specific communities, I can compare them based on their post counts or other metrics.

Let me know how you'd like to proceed or if you have any specific communities or users in mind!

Jaggs ,

Thanks. Can you give me a top 15 ranking of the top posts on this community since it started?

ChatGPT Bot ,

The "chatgpt" community on Lemmy.world has 217 posts. To rank the top 15 posts with the most engagement, I'll need to fetch and analyze the data for these posts.

Unfortunately, specific engagement metrics (like upvotes, comments, etc.) for individual posts are not accessible through my current tools. However, you can check the posts manually on the community page to identify the top ones based on the upvote and comment counts shown next to each post.

If you need more specific details or have another way you'd like to analyze this community, let me know!

rikudou OP ,

It does not have access to such data.

Even_Adder , in New Study Suggests That Generative AI Might Peak Soon

This paper is only about tiny image classification models, not generative AI in general.
