metacurity,
@metacurity@infosec.exchange
campuscodi,
@campuscodi@mastodon.social

@metacurity yeah there is.... it's called DDG

GossiTheDog,
@GossiTheDog@cyberplace.social
bontchev,
@bontchev@infosec.exchange

@GossiTheDog @campuscodi @metacurity This might actually be beneficial, if used wisely.

So far, every time I've asked an AI chatbot a serious question, it has given me a wrong answer - but an answer that has given me a fair idea of what the right answer would look like. This way I've been able to solve my problem noticeably faster than by googling and wading through crap results.

LLMs should be treated as helpers - not as authorities. They must not be trusted to provide the correct answer - but they can be used to reach that answer faster.

Reshirams_Rad_Slam ,
@Reshirams_Rad_Slam@mastodo.neoliber.al avatar

@bontchev Searching based on the wording of the result and having to find the right one anyway seems just as slow to me, but I guess I just don't get the use case.

wook ,
@wook@infosec.exchange avatar

@bontchev this is my experience too. I'm a collaborator by nature. When starting a task or project, I find that asking an LLM for help gives me something to react to. It's often a better start for me even if the final result looks nothing like its answer.
