LChoshen , to random
@LChoshen@sigmoid.social avatar

🚨 Model Merging competition @NeurIPSConf! 🚀

Can you revolutionize model selection and merging?

Let's create the best LLMs! 🤖+🤖=🧠✨

💻 Come for science
💰 Stay for $8K

🔗 Sign up: https://llm-merging.github.io/
💬 Discord: https://discord.gg/ufycruJx

Sponsors: @huggingface sakana.ai arcee.ai
#ml #machinelearning #nlp #merging #nlproc #llm #llms #OpenScience #huggingFace #DataScience
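For anyone new to the topic: the simplest baseline for model merging is uniform weight averaging across checkpoints. A minimal sketch (toy "state dicts" of plain float lists stand in for real model weights; the competition explores far more than this baseline):

```python
# Merge models by (weighted) averaging their parameters.
# Toy illustration: state dicts map parameter names to lists of floats.

def merge_state_dicts(models, weights=None):
    """Average each parameter across models, optionally with per-model weights."""
    n = len(models)
    weights = weights or [1.0 / n] * n  # default: uniform average
    merged = {}
    for key in models[0]:
        merged[key] = [
            sum(w * m[key][i] for m, w in zip(models, weights))
            for i in range(len(models[0][key]))
        ]
    return merged

a = {"layer.w": [1.0, 2.0]}
b = {"layer.w": [3.0, 4.0]}
print(merge_state_dicts([a, b]))  # {'layer.w': [2.0, 3.0]}
```

With real checkpoints the same idea applies tensor-by-tensor; more sophisticated entries weight or align parameters before averaging.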

metin , to random
@metin@graphics.social avatar

ChatGPT consumes 25 times more energy than Google

https://www.brusselstimes.com/1042696/chatgpt-consumes-25-times-more-energy-than-google

#AI #ArtificialIntelligence #ML #MachineLearning #DeepLearning #LLM #LLMs #ChatGPT #OpenAI #energy #climate #ClimateChange #environment #nature

ami , to random
@ami@mastodon.world avatar

I keep seeing posts about #AI, #ML and #LLMs... and the majority are vastly incorrect and full of fear.

There's a lack of understanding of what they are, how they work, and what they are capable of now and in the future.

Governments are looking to legislate on AI and they lack understanding as well.

AI is a tool that will improve other tools from traffic lights to your oven and climate control.

Banning AI is like banning pens because people write nasty things that make you scared.

impermanen_ , to random
@impermanen_@zirk.us avatar

This may be the worst case of corporate machine learning abuse I’ve come across yet:

UnitedHealth uses an AI model with a 90% error rate to deny doctor-approved care to elderly patients, according to a lawsuit by the estates of two patients who died after losing critical care. (UnitedHealth is the insurer that AARP promotes to seniors.) https://arstechnica.com/health/2023/11/ai-with-90-error-rate-forces-elderly-out-of-rehab-nursing-homes-suit-claims/

#ai #ml

nixCraft , to random
@nixCraft@mastodon.social avatar

AI does not understand ASCII art; that's how we win 😅 #AI #ML #chatgpt #openai

darkcisum , to random
@darkcisum@swiss.social avatar

I finally understand how Machine Learning works!

https://xkcd.com/1838/

#machinelearning #ml #xkcd

aral , to random
@aral@mastodon.ar.al avatar

Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don’t fully understand to create approximations of human behaviour where the safeguards that moderate the real thing provided by family, community, culture, personal responsibility, reputation, and ethics are replaced by norms that satisfy the profit motive of corporate entities.

#fakeIntelligence #artificialIntelligence #ai #machineLearning #ml #largeLanguageModels #llm

estelle , to random
@estelle@techhub.social avatar

The terrible human toll in Gaza has many causes.
A chilling investigation by +972 highlights efficiency:

  1. An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”

  2. An AI outputs "100 targets a day", like a factory with murder as the deliverable:

"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"

  3. "The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."

🧶

18+ estelle OP ,
@estelle@techhub.social avatar

The sources said that the approval to automatically adopt #Lavender’s kill lists, which had previously been used only as an auxiliary tool, was granted about two weeks into the war, after intelligence personnel “manually” checked the accuracy of a random sample of several hundred targets selected by the #AI system. When that sample found that Lavender’s results had reached 90 percent accuracy in identifying an individual’s affiliation with Hamas, the army authorized the sweeping use of the system. From that moment, if Lavender decided an individual was a militant in Hamas, the sources were essentially asked to treat that as an order.

“Still, I found them more ethical than the targets that we bombed just for ‘deterrence’ — highrises that are evacuated and toppled just to cause destruction.”

Yuval Abraham: https://www.972mag.com/lavender-ai-israeli-army-gaza/ @israel

#usability #ModelCalibration #MachineLearning #ML #OutputAudit #FoundationalModels
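Even taking the reported "90 percent accuracy" at face value, an estimate from a random sample of "several hundred" targets carries statistical uncertainty. A minimal sketch using a Wilson score interval (the sample size of 300 is an assumption for illustration; the article only says "several hundred"):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion estimated from n samples."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 90% observed accuracy on a hypothetical sample of 300 checked targets:
lo, hi = wilson_interval(270, 300)
print(f"95% CI: {lo:.3f} - {hi:.3f}")  # roughly 0.861 - 0.929
```

And an interval on accuracy says nothing about the cost of the remaining errors, which here were treated as acceptable by fiat.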

