This has always been it. Unless there is a new breakthrough, adding more data has diminishing returns and costs an enormous amount of energy.
They had to convince everyone they were worth 10 trillion dollars and that they needed to be part of the energy infrastructure of the future before it all fell apart. With everyone using it, I have no doubt they have to reduce the "depth" of it.
The funny/tragic thing is there are several decades worth of AI/NLP research that they could call on, but they seem intent on kludging and reinventing things instead.
Then they should increase prices or have tighter usage limits instead of a quiet downgrade. Customers getting less while paying for the same thing is a scam.
Just tried it out for a bit, and while the responses were faster, the amount of incorrect information and hallucinations seemed to be the same. Its memory might be even worse.
What a stupid article: it goes on far too long treating an LLM as a thinking computer. Unbelievably long, idiotic, and based entirely on an assumption that has never been and never will be true.
The growing integration of artificial intelligence (AI) in various industries has made it easier than ever to interact with and utilize digital content. ChatPDF is one such tool that harnesses the power of AI to make PDFs more accessible and user-friendly.
What issues do you have with this tool specifically? It sounds pretty helpful for locating specific info in large PDFs.
ChatPDF allows users to search, summarize, and answer questions about PDFs by using natural language queries. This innovative approach to interacting with PDFs simplifies the process of searching for information, reducing the time and effort required to extract relevant data from a document.
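The core retrieval step behind a tool like this can be illustrated with a minimal sketch. This is an assumption about the general approach, not ChatPDF's actual implementation: real tools use embeddings and an LLM, but simple keyword overlap over text chunks shows the basic idea of matching a natural-language query to the relevant part of a document.

```python
# Minimal sketch of document Q&A retrieval (hypothetical, not ChatPDF's code).
# Assumes the PDF text has already been extracted to a plain string.

def chunk_text(text):
    """Split extracted text into sentence-sized chunks."""
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def best_chunk(chunks, query):
    """Return the chunk sharing the most words with the query."""
    q = set(query.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

doc = ("The warranty covers defects in materials for two years. "
       "Shipping is free for orders over fifty dollars. "
       "Returns must be initiated within thirty days of delivery.")
chunks = chunk_text(doc)
print(best_chunk(chunks, "How long is the warranty period?"))
# prints the warranty sentence, the chunk most relevant to the query
```

A production system would replace the keyword overlap with embedding similarity and pass the retrieved chunk to an LLM to phrase the answer, but the time saved comes from this retrieval step.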
I set it to play in the background and was thinking: 'this is very good for AI, and a decent (but not very interesting) song,' and then came back and took a closer look at the screen. Holy hell, my mind instantly asploded.
Matching the vocal so well to the text and having it sound so nuanced and well-sung is... almost terrifying. oO
I can relate to the terrifying sentiment. I don't exactly find it terrifying, just... disruptive, and not in a good way.
Making good music is inherently a human trait - and it saddens me that there might be a future in which I say "hey Alexa, sing me a cheering song," and the damn thing comes up with something incredibly beautiful and effective.
What will humans be unique for in such a future of artificial creativity?
Then on top of that, we have the fucking capitalism thing. If machines are capable of doing a lot of grunt work, even the creative ones, where is our no work, free food and shelter for everyone utopia?
In truth, we are still a long way from machines that can genuinely understand human language. [...]
Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 have shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.
I've rarely seen anyone so committed to being a broken clock in the hope of being right at least once a day.
Of course, given he built a career on claiming a different path was needed to get where we are today, including a failed startup in that direction, it's a bit like the Upton Sinclair quote about not expecting someone to understand a thing their paycheck depends on them not understanding.
But I'd be wary of giving Gary Marcus much consideration.
Generally, as a futurist, if you bungle a prediction so badly that four days after you were talking about diminishing returns in reasoning, a product comes out exceeding even ambitious expectations for reasoning capabilities in an n+1 product, you'd go back to the drawing board to figure out where your thinking went wrong and how to correct it in the future.
Not Gary though. He just doubled down on being a broken record. Surely if we didn't hit diminishing returns then, we'll hit them eventually, right? Just keep chugging along until one day those predictions are right...
I think this article does a good job of asking the question "what are we really measuring when we talk about LLM accuracy?" If you judge an LLM by its hallucinations, ability to analyze images, ability to critically analyze text, etc., you're going to see low scores for all LLMs.
The only metric an LLM should excel at is "did it generate human readable and contextually relevant text?" I think we've all forgotten the humble origins of "AI" chat bots. They often struggled to generate anything more than a few sentences of relevant text. They often made syntactical errors. Modern LLMs solved these issues quite well. They can produce long form content which is coherent and syntactically error free.
However, the content comes with no guarantee of being accurate or critically meaningful. Whilst it is often critically meaningful, it is certainly capable of half-assed answers that dodge difficult questions. LLMs are approaching 95% "accuracy" if you think of them as good human text fakers. They are pretty impressive at that. But people keep expecting them to do their math homework, analyze contracts, and generate perfectly valid content. They just aren't even built to do that. We work really hard just to keep them from hallucinating as much as they do.
I think the desperation to see these things essentially become indistinguishable from humans is causing us to lose sight of the real progress that's been made. We're probably going to hit a wall with this method. But this breakthrough has made AI a viable technology for a lot of jobs. So it's definitely a breakthrough. I just think either infinitely larger models (for which we can't seem to generate the data) or new models will be required to leap to the next level.
But people keep expecting them to do their math homework, analyze contracts, and generate perfectly valid content
People expect that because that's how they are marketed. The problem is that there's an uncontrolled hype going on with AI these days. To the point of a financial bubble, with companies investing a lot of time and money now, based on the promise that AI will save them time and money in the future. AI has become a cult. The author of the article does a good job in setting the right expectations.
I guess ChatGPT 4 has wised up. I'm curious now. Will try it.
Edit: Yup, you're right. It says "bro, you cray cray." But if I tell it that it's a recent math model, then it will say "Well, I guess in that model it's 7, but that's not standard."