mnl (edited)
@mnl@hachyderm.io

The more I work with LLMs, and the more powerful they become, the less I think they "understand" anything, yet the more impressed I am by how much "language translation" and clever pattern matching they can do.

There is at this point zero doubt in my mind that the field of software engineering is about to be profoundly changed. I'm just a bit more conservative about the timeline, having seen how slowly change actually propagates.

1/

mnl OP
@mnl@hachyderm.io

What I think will surprise a lot of programmers who aren't paying attention: we will have products where a non-developer can say, reasonably productively, things like "build me a website where people can sign up for my book and preorder it, with a questionnaire to fill out after signing up."

2b/

mnl OP
@mnl@hachyderm.io

Maybe it will be a bit janky, maybe you'll have to click through some weird plugin magic. Think of it as an "AGI WordPress": something general enough that you can throw most app ideas at it and it will work. Maybe you pay a teenager to glue things together.

And once this tool is mature enough, a lot of teams and freelancers will become legitimately obsolete from one moment to the next.

3b/

mnl OP
@mnl@hachyderm.io

From my personal experience, from what is starting to ship, and from newer models like Sonnet 3.5, I think this is going to happen sooner rather than later, even as a prediction-averse person. I'm much more cautious about dates, but I'd say two years at most. 4b/4b

kellogh
@kellogh@hachyderm.io

@mnl to a large extent, i think the bottleneck is training — letting people know what situations LLMs will do great (vs not). the tech is there

cayleyh
@cayleyh@hachyderm.io

@mnl 100% agree. It's already starting to happen with proprietary, fully hosted products like Wix/Squarespace/etc. Because the core functionality lives inside their first-party system, it's pretty straightforward to use an LLM to translate requests into discrete config and page-content changes. Contrast that with "open" systems like Wordpress/Drupal, where it's a mess of incompatible 3rd-party plugins and you need much more training and "intelligence" to get the same effect.

mnl OP (edited)
@mnl@hachyderm.io

In fact I think the real change is going to happen once the kids who grew up with LLMs hit the workforce (soon!). There is no need for bigger models; in fact, local inference with smaller models is very promising for a lot of the "language translation" I find effective in my job as a programmer.

The biggest change will be in how we architect our software. LLMs do poorly at programming because we are trying to make them program the way we do.

#llms #chatgpt

2/

mnl OP
@mnl@hachyderm.io

Here's an example of "good LLM attack surface" code I wrote at 5:30am, jetlagged, while making tea in my hotel, in about 30 minutes:

https://github.com/go-go-golems/go-go-labs/tree/main/cmd/apps/bee

I took the Python test program and asked Claude to "log the requests". Then I took the request logs and asked it to "make an OpenAPI spec". I made minor fixes to the spec, then said "make a Go client". Then I took the Go client and said "make a CLI app for everything", then "write documentation for both".

#llms #chatgpt

3/

mnl OP
@mnl@hachyderm.io

This is probably the most extensive and best-documented SDK/CLI tool for an API I've ever built.

Now you might say "well that's not very complicated, and not what I do day to day."

But not only is that about 80% of what I do day to day, it also means you now have a quality SDK to build apps on top of bee, and a solid prompting document for having an LLM do that. Which means you can build something with bee without losing two days getting the SDK up.

And that's... transformative.

4/

mnl OP
@mnl@hachyderm.io

Also, this took about... one minute of inference, tops. It's not like this is extremely power-hungry.

leifdavisson
@leifdavisson@ioc.exchange

@mnl
Would you be willing to share your prompts? I want to see what I'm doing wrong. Are you using a previous code base for RAG?

I often find I don't understand the code AI writes. So I look it up; sometimes it's really clever, other parts are just bad mistakes. Can I trust the code, docs, client, CLI?

I have noticed that the more detailed I am with my instructions, the better the code I get back. Are you very specific with your prompts, or are you more open?

mnl OP
@mnl@hachyderm.io

@leifdavisson I am very open with the prompts, but I got pretty good at figuring out which "additional info" to give it. I will share some screenshots or a recording of building the bee SDK.

Here are handouts from a workshop I gave recently that may shed some light on, for example, how I create prompting context fragments.

https://github.com/go-go-golems/go-go-workshop/blob/main/2024-06-24%20-%20Workshop%20AI%20Programmer%20Handout.pdf
