Norobiik , to random
@Norobiik@noc.social avatar

The AI-powered "Search Generative Experience (SGE)" that the company had been trialing for months is rolling out to everyone in the US. The top of many results (especially questions) is now dominated by an AI-generated answer that scrapes the web and gives you a sometimes-correct summary without needing to click on a single result.

Google adds a “web” filter, because it is no longer focused on the web
https://arstechnica.com/gadgets/2024/05/google-search-adds-web-filter-as-it-pivots-to-ai-focused-search-results/

svetlyak40wt , to random
@svetlyak40wt@fosstodon.org avatar

Great news, everyone!

I've published a first version of the static site builder StatiCL.

As you might assume from its name, it is written in Common Lisp.

Now I'm replacing all my sites that used Coleslaw with this new builder, because it is more flexible and suitable not only for blogs.

Read more in the docs: https://40ants.com/staticl/

I need first testers, so feel free to share your feelings and issues. Also, I'd appreciate it if you'd boost this post.

homlett , to random
@homlett@mamot.fr avatar

Humans now share the web equally with bots, report warns amid fears of the ‘dead internet’
https://www.independent.co.uk/tech/dead-internet-web-bots-humans-b2530324.html
“Nearly half, 49.6 per cent, of all internet traffic came from bots last year”

molly0xfff , to random
@molly0xfff@hachyderm.io avatar

Many yearn for the "good old days" of the web. We could have those good old days back — or something even better — and if anything, it would be easier now than it ever was.

https://www.citationneeded.news/we-can-have-a-different-web/

molly0xfff , to random
@molly0xfff@hachyderm.io avatar

If you've ever found yourself missing the "good old days" of the web, what is it that you miss? (Interpret "it" broadly: specific websites? types of activities? feelings? etc.) And approximately when were those good old days?

No wrong answers — I'm working on an article and wanted to get some outside thoughts.

claudius ,
@claudius@darmstadt.social avatar

@molly0xfff Vast choice of millions of quirky small tiny websites, including, but not limited to, blogs, "check out my hobby", movie websites. All that personal expression that was not funneled into the same three websites' allowed formats.

molly0xfff , to random
@molly0xfff@hachyderm.io avatar

"While more of the web is becoming accessible to people with low-end connections, more of the web is becoming inaccessible to people with low-end devices even if they have high-end connections."

@danluu on web bloat: https://danluu.com/slow-device/

https://www.mollywhite.net/micro/entry/202404231317

weirdwriter , to random

Had to install extensions that prevent sites from disabling paste in edit fields online. I wish browsers would make this possible natively, because I never, ever want paste disabled on any edit field, ever. Here's an extension for FF https://addons.mozilla.org/en-US/firefox/addon/don-t-fuck-with-paste/
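The general trick such extensions use can be sketched as follows. This is a hypothetical simulation, not the linked add-on's actual code, and it uses a bare EventTarget in place of a real DOM element so it runs anywhere: the extension's listener runs ahead of the page's handler and swallows the event before the page can cancel it (in a real content script this is done with a capture-phase listener on `document`).

```javascript
// Stand-in for a DOM edit field (in a browser this would be `document`
// with { capture: true } so the extension's listener fires first).
const field = new EventTarget();

// Extension's listener: registered first, so it runs first and stops
// the event from ever reaching later (page-installed) listeners.
field.addEventListener("paste", (event) => {
  event.stopImmediatePropagation();
});

// The page's paste-blocking handler, which would call preventDefault().
// It never runs now.
let pageHandlerRan = false;
field.addEventListener("paste", () => {
  pageHandlerRan = true;
});

field.dispatchEvent(new Event("paste", { cancelable: true }));
console.log(pageHandlerRan); // false: the blocking handler was skipped
```

Because the browser's default paste behavior is unaffected by `stopImmediatePropagation()`, the paste still goes through; only the site's attempt to veto it is silenced.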

aral , to random
@aral@mastodon.ar.al avatar

I’ve been looking for an ngrok alternative for a while now that’s (a) affordable (b) easy to use and (c) works with Kitten¹. Today, after testing a bunch of them again and getting fed up, I found LocalXpose that checks all the boxes.

I signed Small Technology Foundation up as an affiliate so if you use this link to check it out, we’ll get 40% of your $6/mo pro account fee should you subscribe:

https://localxpose.io/?via=kitten

¹ https://codeberg.org/kitten/app

rene_mobile , to random
@rene_mobile@infosec.exchange avatar

My current take on the situation, not having read the actual source backdoor commits yet (thanks a lot for hiding the evidence at this point...) besides reading what others have written about it (cf. https://boehs.org/node/everything-i-know-about-the-xz-backdoor for a good timeline):

  1. This is going to be an excellent teaching example for advanced supply chain attacks that I will definitely be using in the future - after much more in-depth analysis.

  2. It seems to have been a long game, executed with an impressive sequence of steps and preparation, including e.g. disabling OSSFuzz checks for the particular code path and pressuring the original maintainer into accepting the (malicious) contributions.

  3. The potential impact could have been massive, and we got incredibly lucky that it was caught and reported (https://www.openwall.com/lists/oss-security/2024/03/29/4) early. Don't count on such luck in the future.

  4. Given the luck involved in this case, we need to assume a number of other, currently unknown supply chain backdoors that were successfully deployed with comparable sophistication and are probably active in the field.

  5. Safe(r) languages like Rust for such central library dependencies would maybe (really big maybe) have made it a bit harder to push a backdoor like this, because - if and only if the safety features are used idiomatically in an open source project - reasonable-looking code is (a bit?) more limited in the sneaky behavior it could include. We should still very much use those languages over C/C++ for infrastructure code because the much larger class of unintentional bugs is significantly mitigated, but I believe (without data to back it up) that even such "bugdoor" type changes will be harder to execute. However, given the sophistication in this case, it may not have helped at all. The attacker(s) have shown themselves to be clever enough.

  6. Sandboxing library code may have helped - as the attacker(s) explicitly disabled e.g. landlock, that might already have had some impact. We should create better tooling to make it much easier to link to infrastructure libraries in a sandboxed way (although that will have performance implications in many cases).

  7. Automatic reproducible builds verification would have mitigated this particular vector of backdoor distribution, and the Debian team seems to be using the reproducibility advances of the last decade to verify/rebuild the build servers. We should build library and infrastructure code in a fully reproducible manner and automatically verify it, e.g. with added transparency logs for both source and binary artefacts. In general, however, it does not prevent this kind of supply chain attack that directly targets source code at the "leaf" projects in Git commits.

  8. Verifying the real-life identity of contributors to open source projects is hard and a difficult trade-off. Something similar to the web-of-trust would potentially have mitigated this style of attack somewhat, but with a different trade-off. We might have to think much harder about trust in individual accounts, and for some projects requiring a link to a real-world country-issued ID document may be the right balance (for others it wouldn't work). That is neither an easy nor a quick path, though. Also note that sophisticated nation state attackers will probably not have a problem procuring "good" fake IDs. It might still raise the bar, though.

  9. What happened here seems clearly criminal - at least under my IANAL naive understanding of EU criminal law. There was clear intent to cause harm, and that makes the specific method less important. The legal system should also be able to help in mitigating supply chain attacks; not in preventing them, but in making them more costly if attackers can be tracked down (this is difficult in itself, see point 8) and face risk of punishment after the fact.

H/T @GossiTheDog @AndresFreundTec @danderson @briankrebs @eloy

aral , to random
@aral@mastodon.ar.al avatar

So given it’s Saturday night, I thought I’d have a little fun with Kitten and make a tiny collaborative drawing toy.

You have a 20×20 grid, only black and white to draw with, and everyone shares the same canvas.

https://draw-together.small-web.org

Have fun + looking forward to seeing what we all, umm, draw together.

:kitten:💕

PS. It took about 60 lines of code.

View source: https://codeberg.org/aral/draw-together

aral OP ,
@aral@mastodon.ar.al avatar

Hah, is this a request?

(If you want to play with it locally and add more colours, just add them to the colours array and you can click through as many colours as you like. I wanted to keep it simple and hence it’s black and white.)
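The colour cycling described above can be sketched in a few lines (names here are assumptions for illustration, not the actual draw-together source, which is at the Codeberg link):

```javascript
// The palette: add more entries here to get more colours.
const colours = ["black", "white"];

// Clicking a cell advances it to the next colour in the array,
// wrapping back to the start after the last one.
function nextColour(current) {
  const index = colours.indexOf(current);
  return colours[(index + 1) % colours.length];
}

console.log(nextColour("black")); // "white"
console.log(nextColour("white")); // "black"
```

With only two entries this is a simple toggle; a longer array turns the same click handler into a palette cycle.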

aral , to random
@aral@mastodon.ar.al avatar

Want to really learn JavaScript? (Not whatever is the bloated framework of the week?) Attend Modern JavaScript for Beginners—a project-based workshop for beginners and aspiring developers by the wonderful @cferdinandi

Early bird discount (40% off) ends Sunday.

“I struggled with JavaScript for a decade so I really would recommend it for anyone who needs a big friendly confidence-booster.” – @laura

https://gomakethings.com/courses/modern-js-for-beginners/

andycarolan , to random
@andycarolan@social.lol avatar

Show visitors to your site that your content is human made and doesn't use AI!

Grab my badge pack for FREE (or pay as much as you want to help fund future stuff)

The pack contains 64 badges (88×31 px, PNG and SVG) in 8 colors, with the phrases “made by a human”, “drawn by a human”, “human content”, “written by a human”, “I am not a robot”, “never by ai”, “human content”, and “there's no ai here!”

Finnish version upon request!

https://ko-fi.com/s/4662b19f61

ajsadauskas , (edited ) to DeGoogle Yourself
@ajsadauskas@aus.social avatar

In an age of LLMs, is it time to reconsider human-edited web directories?

Back in the early-to-mid '90s, one of the main ways of finding anything on the web was to browse through a web directory.

These directories generally had a list of categories on their front page. News/Sport/Entertainment/Arts/Technology/Fashion/etc.

Each of those categories had subcategories, and sub-subcategories that you clicked through until you got to a list of websites. These lists were maintained by actual humans.

Typically, these directories also had a limited web search that would crawl through the pages of websites listed in the directory.

Lycos, Excite, and of course Yahoo all offered web directories of this sort.

(EDIT: I initially also mentioned AltaVista. It did offer a web directory by the late '90s, but this was something it tacked on much later.)

By the late '90s, the standard narrative goes, the web got too big to index websites manually.

Google promised the world its algorithms would weed out the spam automatically.

And for a time, it worked.

But then SEO and SEM became a multi-billion-dollar industry. The spambots proliferated. Google itself began promoting its own content and advertisers above search results.

And now with LLMs, the industrial-scale spamming of the web is likely to grow exponentially.

My question is, if a lot of the web is turning to crap, do we even want to search the entire web anymore?

Do we really want to search every single website on the web?

Or just those that aren't filled with LLM-generated SEO spam?

Or just those that don't feature 200 tracking scripts, and passive-aggressive privacy warnings, and paywalls, and popovers, and newsletters, and increasingly obnoxious banner ads, and dark patterns to prevent you cancelling your "free trial" subscription?

At some point, does it become more desirable to go back to search engines that only crawl pages on human-curated lists of trustworthy, quality websites?

And is it time to begin considering what a modern version of those early web directories might look like?
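The directory model described in the post is simple enough to sketch: a human-curated category tree, and a "limited web search" that only ever matches against the hand-picked sites, never the open web. Everything below is illustrative data, not a real index:

```javascript
// A tiny human-curated directory: categories nest until they
// bottom out in lists of sites maintained by actual humans.
const directory = {
  Technology: {
    Programming: [
      { url: "https://example.org/lisp", title: "A Common Lisp blog" },
    ],
  },
  Arts: {
    Film: [
      { url: "https://example.org/movies", title: "Independent film reviews" },
    ],
  },
};

// Flatten the category tree into the curated list of sites.
function allSites(node) {
  if (Array.isArray(node)) return node;
  return Object.values(node).flatMap(allSites);
}

// The "limited web search": query only the curated list,
// so SEO spam outside it simply cannot appear in results.
function search(query) {
  const q = query.toLowerCase();
  return allSites(directory).filter((site) =>
    site.title.toLowerCase().includes(q)
  );
}

console.log(search("lisp").map((s) => s.url));
```

The trade-off is exactly the one the post raises: coverage is limited to what curators have reviewed, which is the feature, not the bug.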

@degoogle

aral , to random
@aral@mastodon.ar.al avatar

Hey folks,

@laura is looking for a new gig after three years at Stately. She’s a designer and front-end developer who writes, gives talks, makes videos, and has been doing a fair bit of dev/design advocacy recently.

Her approach is best summed up in her book, Accessibility for Everyone (https://abookapart.com/products/accessibility-for-everyone) and her talk on building tech that respects our rights https://m.youtube.com/watch?v=F5CvwioUy40

https://mastodon.laurakalbag.com/@laura/111998200065274945

nadiaalbelushi , to random
@nadiaalbelushi@mastodon.social avatar

Honestly, I had no idea DuckDuckGo had its own web browser lol. This article reminded me to try out DuckDuckGo's search engine again, and compare its search results with those of Google Search. I was actually surprised to find out that DuckDuckGo churned out way better search results. I'm definitely gonna use it instead of Google from now on.

https://techcrunch.com/2024/02/14/duckduckgo-adds-cross-device-password-and-bookmark-syncing/

aral , to random
@aral@mastodon.ar.al avatar

Ball’s in your court, @EU_Commission.

Apple has very publicly told you to go fuck yourselves with its malicious compliance. What you do next will decide whether malicious compliance is acceptable in the EU or not.

https://mastodon.social/@owa/111941009402592589

puresick , to random
@puresick@social.hnnng.space avatar

Yeah “thanks” Apple for breaking PWAs in Europe.

Ridiculous.

molly0xfff , to random
@molly0xfff@hachyderm.io avatar

When people say you should "own your own data", or that the future of the web is "ownership", what does that mean?

We need to talk about digital ownership.

https://www.citationneeded.news/we-need-to-talk-about-digital-ownership/

noellemitchell , to random
@noellemitchell@mstdn.social avatar

So the results from yesterday's poll made it clear: people on the Fediverse do not want to try Bluesky! 😆

82% of people said they will not be using Bluesky now that it's open to the public for registration. I have to say I approve of this high percentage. 😄

https://mstdn.social/@noellemitchell/111887232944577112

preslavrachev , to random
@preslavrachev@mastodon.social avatar

“But here's the thing: being able to say, "wherever you get your podcasts" is a radical statement. Because what it represents is the triumph of exactly the kind of technology that's supposed to be impossible: open, empowering tech that's not owned by any one company, that can't be controlled by any one company, and that allows people to have ownership over their work and their relationship with their audience.”

https://www.anildash.com//2024/02/06/wherever-you-get-podcasts/ by @anildash (found via @topstories)

