canihasaccount

@canihasaccount@lemmy.world


canihasaccount ,

Seven paragraphs is too much? I read the full thing before seeing your comment. It's well written and easy to read.

canihasaccount ,

Claude Opus disagrees, lol (edit to add: all of what follows is Claude; not me):

I respectfully disagree with the claim that current language models are logically incapable of mapping language to actual concepts. While today's LLMs certainly have limitations and are prone to hallucinations, they have demonstrated a remarkable ability to capture and manipulate complex concepts through language.

A few key points:

LLMs are trained on vast amounts of human-generated text which is grounded in real-world concepts, allowing the models to build rich representations that go beyond surface-level word associations. Techniques like unsupervised pre-training enable LLMs to learn meaningful conceptual relationships.

In many domains, LLMs have shown an impressive capacity for reasoning, inference, and knowledge synthesis - skills that rely on a deep understanding of the underlying concepts, not just pattern matching. Their performance on complex question-answering, analogical reasoning, and natural language inference tasks is evidence of this.

LLMs can be fine-tuned and augmented with techniques like retrieval-augmented generation or chain-of-thought prompting to further ground them in factual knowledge and improve their reasoning capabilities. This allows more robust conceptual mappings.

Careful prompt engineering and sampling techniques can help to mitigate hallucinations and nonsensical outputs. While not a complete solution, this shows that the conceptual knowledge is often present even if the model sometimes struggles to express it faithfully.

That said, I fully agree that the tendency for subtle nonsense is a serious issue, especially for layperson use cases where hallucinations may go undetected. Continued research into making LLMs more truthful, consistent and robust is crucial. Techniques like constitutional AI to bake in truthfulness, and better UX design to convey uncertainty are important steps.

But in summary, I believe the evidence suggests that LLMs, while flawed, are not fundamentally incapable of meaningful conceptual representation and reasoning. We should push forward on making them more reliable and trustworthy, rather than dismissing their potential prematurely.

canihasaccount ,

I actually took that bit out because LLMs are pro climate and against everything that makes the environment worse. That's a result of being trained on a lot of scientific literature. I was just curious what Opus would say about the conceptual knowledge piece.

canihasaccount , (edited )

Not true:

https://www.sciencedirect.com/science/article/pii/S0165032714003620

https://www.cghjournal.org/article/S1542-3565%2806%2900800-7/fulltext

I found more, too.

Edit: I have no skin in this game. I don't take turmeric and won't ever because of the risk of lead. I'm just pointing out that the meme is inaccurate. The person who replied to me pointed out some flaws in the first study (not the second), but none of the flaws mentioned makes the meme accurate. Even the shitty first study I linked found a significant between-condition difference on its primary endpoint at 8 weeks. Yeah, it has flaws (which the second doesn't), but a successful trial with heavy limitations and conflicts of interest is nonetheless a successful trial, making this meme inaccurate. The second study I linked is stronger.

Also, the limitations in the first trial are standard for many clinical trials. For example:

https://onlinelibrary.wiley.com/doi/abs/10.1111/jsr.12201

https://www.sciencedirect.com/science/article/pii/S0924977X14001266

I could list 100 more with the same limitations as the first study I linked above. High dropout, small sample sizes, funding by an industry with a conflict of interest, etc., are standard for clinical trials.

canihasaccount ,

I'm not saying the study is good, just that the meme isn't true.

Also, you can level almost every single one of those criticisms against many studies for SSRIs and they'd hit just as hard. The exception being sample size.

canihasaccount ,

Why are you completely ignoring the second paper I linked, which doesn't suffer from any of the limitations you mentioned?

The meme says no trial was successful. Any trial that finds a significant difference, however small, is a successful trial.

canihasaccount ,

Sorry, but this makes clear that you aren't in science. You should avoid trying to shit on studies if you don't know how to interpret them. Both of the things you mentioned actually support the existence of a true effect.

First, if the treatment has an effect, you would expect a greater rate of relapse after the treatment is removed, provided that it treats a downstream pathway rather than the root cause: People in the placebo group have already been relapsing at the typical rate, and people receiving treatment--whose disease has been ramping up behind the dam of a medication preventing it from showing--are then expected to relapse at a higher rate after treatment is removed. The second six-month period came after cessation of the curcumin or placebo; it was a follow-up under treatment as usual.

Second, people drop out of a study nonrandomly for two main reasons: side effects and perceived lack of treatment efficacy. The placebo doesn't have side effects, so when you have a greater rate of dropout in your placebo group, that implies the perceived treatment efficacy was lower. In other words, the worst placebo participants are likely the extra dropouts in that group, and including them would not only provide more degrees of freedom, it would theoretically strengthen the effect.
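The dropout point can be made concrete with a toy simulation (my own illustration, not from either study; all numbers are made up): if participants who perceive no benefit preferentially drop out, the placebo arm loses more people, and a completer-only comparison understates the true effect rather than inflating it.

```python
import random

random.seed(0)

N = 10_000          # participants per arm; large so the illustration is stable
TRUE_EFFECT = 5.0   # points of symptom improvement attributable to the drug

# Symptom improvement scores (higher = better).
placebo = [random.gauss(2.0, 4.0) for _ in range(N)]
drug = [random.gauss(2.0 + TRUE_EFFECT, 4.0) for _ in range(N)]

def completers(scores):
    # Nonrandom dropout: anyone who got worse (score < 0) has a
    # 50% chance of quitting the trial out of perceived inefficacy.
    return [s for s in scores if s >= 0 or random.random() > 0.5]

placebo_completers = completers(placebo)
drug_completers = completers(drug)

mean = lambda xs: sum(xs) / len(xs)

full_diff = mean(drug) - mean(placebo)                      # true-ish effect
completer_diff = mean(drug_completers) - mean(placebo_completers)

# More dropouts in the placebo arm (it has more non-responders) ...
print(len(placebo) - len(placebo_completers),
      len(drug) - len(drug_completers))
# ... and because the worst placebo scores are the ones removed, the
# completer-only difference understates the true effect.
print(round(full_diff, 2), round(completer_diff, 2))
```

With these made-up parameters the placebo arm loses several times more participants than the drug arm, and the completer-only difference comes out smaller than the full-sample difference, which is the direction of bias described above.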

This is basic clinical trials research knowledge.

Again, I have no skin in the game here. I don't take curcumin, nor would I ever. I do care about accurate depictions of research. I'm a STEM professor at an R1 with three active federal grants funding my research. The meme is inaccurate.

canihasaccount ,

Ich lebe in Amerika. Ich lerne Deutsch, aber das kostet Geld. Vielleicht wollen die Migranten Deutsch lernen, haben aber nicht das Geld dafür? [I live in America. I'm learning German, but that costs money. Maybe the migrants want to learn German but don't have the money for it?]

Sorry if the above is poorly worded; I'm still new to the language. My point is that there are lots of reasons that someone might not know a language well, including a lack of money, or a lack of time from needing to work full time to support one's migrant family on a low wage.

Mexican immigrants to the US are wonderful, but their culture is very different from non-Hispanic US culture. I don't expect them to learn English. They work something like 60 hours per week to support their families. As the person you're replying to said, though, their children learn English and integrate into, but also uniquely contribute to, US culture. Rather than expecting the first-generation immigrants to learn English, I've learned Spanish specifically to speak with them. It's not as though Germany has many more immigrants than the US does--even discounting the fact that the US has always been a country of immigrants, Hispanic and Latino/a/e Americans (the majority of whom are Mexican Americans) are expected to exceed 50% of all Americans within a couple of decades. In some states, they are already the majority.

Diversity is a good thing, and we shouldn't require immigrants to become like us culturally or linguistically before accepting them.

canihasaccount ,

That's not actually the abstract; it's a piece from the discussion that someone pasted nicely with the first page in order to name and shame the authors. I looked at it in depth when I saw this circulate a little while ago.

canihasaccount ,

It can't write much of substance. The only people using it in science for anything more than fluff are people who don't speak English well or who have no business writing papers. I sympathize with the former, but I don't understand why those folks wouldn't just publish in a language they speak or get an English-speaking coauthor to help write in English. I wouldn't ever use it to write an article. Even when editing, it tends to butcher scientific nuance.

It is good at writing fluff though, which is helpful for things like letters of recommendation for undergraduates.

canihasaccount ,

I'm in science. It isn't difficult to get an English-speaking coauthor. Going to an LLM is easier and faster, sure, but if someone can't understand the output, then they have no idea whether their text is being translated correctly.

canihasaccount ,

Ah, I would consider that fluff, which is okay in my book. I don't use it for writing, personally, but what I tell my students is that if it'd be fine for a friend to do the thing and not get coauthorship, it's fine to use AI for that (provided you acknowledge it, as you would a friend who provides some helpful comments on a draft). Proofing and suggesting minor stylistic things fall under that umbrella IMO.

A lot of Redditors hate the Reddit IPO | Reddit warned us that its users were a risk factor, and boy do they sound excited about shorting its stock. ( www.theverge.com )

Reddit seems like a likely candidate for a meme stock. But the actual reaction suggests that r/WallStreetBets isn’t going to send the stock to the moon.

canihasaccount ,

I'm thinking of shorting it. My friend is definitely shorting it.

canihasaccount ,

To a degree. The large subreddits, like AskReddit, get far fewer upvotes on the top posts of the week than they used to get. I think there's a good chunk of folks who left for a replacement, then left their replacement without going back to Reddit.

canihasaccount ,

Pharmacists don't get PhDs; they get degrees for practice, like MDs do. A PharmD doesn't require being able to understand or conduct original research like a PhD does. Basically, a PharmD requires a really good memory, not necessarily critical thinking.

canihasaccount ,

Those are extremely few and far between, and they aren't evolutionary biologists. Behe, the most famous of them, doesn't have a PhD in biology, but a PhD in biochemistry. Those are vastly different fields, and understanding the evidence for evolution wouldn't have been relevant to Behe's PhD. MDs more commonly don't believe in evolution because MDs are essentially average folks who can memorize stuff really well. MDs don't receive training in research or how to conduct it, so they're pretty poor at understanding primary research most of the time.

Someone with a PhD from a reputable university (essentially, one that funds their PhD programs rather than making students pay, and one that doesn't incentivize publications directly with bonuses) will be an expert in their subject area. Behe would be able to tell you about the biochemistry of sickle cell anemia. Someone with a PhD speaking on an area outside of their expertise is perhaps more likely than the average person to be correct because they could have read and understood most primary sources even outside of their area, but I wouldn't say it's all that much more likely. Basically, PhDs speaking on the topic of their expertise are experts, but they're not experts in everything.

Personally, my PhD turned me into the trope of someone who can tell you everything you want to know about some esoteric subject but doesn't know how to make a meal.

Getting a PhD produces highly specialized knowledge, not general knowledge.

canihasaccount ,

Unless you're in university administration, academia is not well paid. University administrators who are well paid are usually EdDs (essentially, university-focused MBAs) who didn't take the normal academic route of research first.

canihasaccount ,

No, there is no coursework past a master's thesis. For the last typically ~3-4 years of graduate training, everything that you're doing is original research. If your research isn't good enough or done correctly, you will never get a PhD. You also have to defend your dissertation. Getting a PhD from a reputable university does mean that what you say, specifically related to your research area, is correct.

canihasaccount ,

I go out of my way not to do so. Whenever I search for some specific items and see "Sponsored," I'll scroll down until I get the same listing without the ad link.

canihasaccount ,

Would you, after devoting years of your adult life to the unpaid work of learning the advanced math and computer science needed to develop such a model, want to spend years more of your life developing a generative AI model without compensation? Within the US, it is legal to use public-domain text for commercial purposes without obtaining permission. Developers of such models deserve to be paid, just like any other workers, and that doesn't happen unless either we make AI a utility (or something similar) and funnel tax dollars into it, or the company charges for the product so it can pay its employees.

I wholeheartedly agree that AI shouldn't be trained on copyrighted, private, or any other works outside of the public domain. I think that OpenAI's use of nonpublic material was illegal and unethical, and that they should be legally obligated to scrap their entire model and train another one from legal material. But developers deserve to be paid for their labor and time, and that requires the company that employs them to make money somehow.
