
kevincox

@kevincox@lemmy.ml

This profile is from a federated server and may be incomplete. For a complete list of posts, browse on the original instance.

kevincox ,

I just added the search engine to my browser. I don't see the need for an app when all of the results are going to open in the browser anyways.

kevincox ,

I am currently subscribed and it is definitely a step up from other engines I have tried. The main feature is just that it seems to somewhat cut back the general blogspam and SEO fluff. It isn't perfect, but whenever I do compare it to Google, Brave or DuckDuckGo it seems to be ahead, or in rare cases similar.

The ability to lower/block sites is also quite nice. I also have a few raised sites, but that is really a minor improvement compared to blocking crap like Quora and Pinterest.

That being said, the small plan includes a pretty small number of searches, so I need to pay for the unlimited plan, which is quite expensive. I currently think it is worth it, but it is definitely borderline value, not a slam-dunk decision.

I also have concerns about them focusing on things I don't care about: lots of AI features and a browser. I don't want any of that. Just focus on search; there is still lots of room for improvement, even if they are currently leading the pack.

kevincox ,

Same here. I tried the starter plan but had to upgrade. According to my account I have made 802 searches since January 4th, which works out to 17.4 searches a day on average. This means that for a 31-day month I am looking at roughly 540 searches.
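
Spelled out (the ~46-day window just follows from those two numbers):

    802 searches / ~46 days ≈ 17.4 searches per day
    17.4 per day × 31 days  ≈ 540 searches per month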

I am also a heavy user of bookmarks and browser history, so I don't rely on search to open specific sites (like searching for "facebook", which is one of Google's most popular queries). Someone who is in the habit of using search for direct navigation is probably going to end up a good chunk higher.

That being said, I work on the computer and do a fair number of searches for my job. So I can believe that a light user is pretty comfortable at 300 searches a month. But moderate searchers, or people who use the search engine for navigation, will need the unlimited plan.

kevincox ,

If your only copy of critical data is on a portable storage device you are doing so many things wrong.

kevincox ,

The downside of doing encryption in software is that you can't limit attempts. If you are using a high-entropy key this is fine, but getting users to use high-entropy keys has problems. If there is an HSM integrated into the device you can limit the number of guesses before the key is wiped, which is critical without high-entropy keys.

A blog I follow recently had a good post about this: https://words.filippo.io/dispatches/secure-elements/

Of course you are still better off with a high-entropy key and software. But if you trade off too much usability in the name of security you will likely find that your users/employees just work around the security.
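
As a toy illustration of why the attempt limit matters (this is not a real HSM API, just a sketch of the behaviour a secure element enforces in hardware):

    class ToySecureElement {
      constructor(pin, key, maxAttempts = 10) {
        this.pin = pin;                  // verified inside the element, never exported
        this.key = key;                  // only released after a correct PIN
        this.attemptsLeft = maxAttempts;
      }
      unwrap(guess) {
        if (this.key === null) throw new Error("key has been wiped");
        if (guess !== this.pin) {
          if (--this.attemptsLeft === 0) this.key = null; // irreversible wipe
          throw new Error("wrong PIN");
        }
        return this.key; // even a low-entropy PIN holds up, because guesses are capped
      }
    }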

Are there any immutable distros meant for NAS systems or home servers?

Edit2: OK, per feedback I am going to have a dedicated external NAS and a separate home server. The NAS will probably run TrueNAS. The home server will use an immutable OS like Fedora Silverblue. I am doing a dedicated NAS because it can be good at doing one thing - serving files and making backups. Then my home server can be...

kevincox ,

I use NixOS for this. It works wonderfully.

Immutable means different things to different people, but to me:

  1. Different programs don't conflict with each other.
  2. My entire server config is stored in a versioned Git repo.
  3. I can roll back OS updates trivially and pick which base OS version I want to use.
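
For what it's worth, day to day that looks roughly like this (the host name and repo path are made up; this is a sketch rather than my actual setup):

    git -C ~/nixos-config log --oneline                    # the whole OS config is versioned here
    sudo nixos-rebuild switch --flake ~/nixos-config#nas   # build and activate a new generation
    sudo nixos-rebuild switch --rollback                   # drop back to the previous generation
    # The base OS version is whatever nixpkgs release the config pins,
    # e.g. a flake input pointing at github:NixOS/nixpkgs/nixos-24.05.
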
kevincox ,

bisect stands for “binary search commit”

Haha, that is a funny misunderstanding. "bisect" stands for bisect. It is a word. It means to cut in half. Because the command cuts the range of suspicious commits into two, then tests which half the problem started in.

to divide into two usually equal parts

https://www.merriam-webster.com/dictionary/bisect

But I guess it can be misread as BInary SEarch CommiT.
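
For anyone who hasn't used it, a typical session looks something like this (the "good" ref and the testing step are placeholders):

    git bisect start
    git bisect bad HEAD        # the bug is present here
    git bisect good v1.2.0     # ...and was absent back here
    # git checks out the midpoint of the range; test it, then report the result:
    git bisect good            # or: git bisect bad
    # repeat until git names the first bad commit, then clean up:
    git bisect reset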

kevincox ,

All of this can only work with a clean git history containing only working commits.

This isn't really true.

  1. You can use git bisect skip to skip commits that can't be evaluated. So if you are tracking down the failure of test foo and the commit being tested fails to build, you can skip it.
  2. If all merged commits are green then you can use --first-parent to avoid testing inside a development branch. This way you can identify which merge caused the issue, even if other merges had broken commits.

So it is easier in general if you have all working commits, but it isn't necessary. Really as long as you have green history on your main branch you will be able to get good results without much effort. I would highly suggest using some sort of merge-queue based workflow to ensure that the master branch is always green.

I would generally prefer using --first-parent rather than forcing squashing, as smaller commits can be much easier to understand, and the fact that commit IDs don't change when being merged makes it much easier to manage stacked PRs and hotfix backporting.
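
As a concrete sketch of both points (the build and test commands are placeholders): git bisect run automates the testing, and exiting with code 125 marks a commit as untestable, which has the same effect as git bisect skip.

    git bisect start --first-parent    # only consider commits on the main line
    git bisect bad HEAD
    git bisect good v1.2.0
    git bisect run sh -c './build.sh || exit 125; ./run-test.sh foo'
    git bisect reset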

kevincox ,

Yeah, "physical" in that the bits live under my control. Not like a separate disc per movie.

kevincox ,

For low-cost hosting I have been using RamNode. They are a pretty established company and provide HDD options, which are great if you want lots of storage at a reasonable price:

https://ramnode.com/products/vps-hosting/#massive-kvm

They also have reasonably priced SSD options, but those are obviously much more expensive than HDD.

kevincox ,

But it can't! (Maybe)

Calling map(obj.func) will pass func but won't set the this parameter correctly. If the called method uses this you will have a bad time.

The code you actually want would be retval.map(v => self.cleanupMetadata(v)) or the old-skool retval.map(self.cleanupMetadata.bind(self)).

Also the first version reuses the Array which may be important, but even if not will likely result in better performance. (Although this is likely mitigated by making the array polymorphic depending on the return type of cleanupMetadata and the overhead of forEach vs map.)
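
A tiny self-contained illustration (the class and data here are made up, not the actual code being discussed):

    class Cleaner {
      constructor() { this.prefix = "clean:"; }
      cleanupMetadata(v) { return this.prefix + v; }
    }
    const self = new Cleaner();
    const retval = ["a", "b"];

    // retval.map(self.cleanupMetadata);         // throws: `this` is undefined inside the method
    retval.map(v => self.cleanupMetadata(v));    // ["clean:a", "clean:b"]
    retval.map(self.cleanupMetadata.bind(self)); // ["clean:a", "clean:b"]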

Wow, isn't JS a great language.

Is it possible to have an on-screen keyboard that provides word suggestions without compromising privacy?

I've been playing with both the Thumb and the Unexpected keyboards. I like 'em both but, man, I have to admit I'd like them more if they had that top bar that predicts what you might be typing. Is that just a no-go from a privacy perspective? Can that functionality be local?...

kevincox ,

While Google isn't generally good for privacy, Gboard actually does this. IIRC they completely removed the sync service, and your typing history is only kept on-device and in Android backups.

However it is a bit of a privacy nightmare otherwise as many of the other features phone home. But last I checked (~4 years ago, worth checking again) the core typing functionality is actually fully offline and private.

So yes, it is possible.

kevincox ,

I think you hugely overestimate what it takes to complete and correct a few words. Maybe you would want some sort of accelerator for fine-tuning, but 1) you probably don't even need fine-tuning, and 2) you can probably just run it on the CPU while your device is charging. For inference, modern CPUs are by far powerful enough.

kevincox ,

Well it does say n >= 150. But the phrasing makes it sound like it is trying to imply that this is a small number.

kevincox , (edited )

FWIW I don't think this is a real issue. It is right now because Lemmy is fairly new and small, but over time it will become obvious which communities are popular and people will go there. There is a small issue in that local communities are sort of given priority, since /communities defaults to "Local", but that seems to be about the extent of it.

Just like it isn't an issue that people can create "Cats" and "CuteCats" on Reddit I don't think it is an issue that you can create cats@a.example and cats@b.example. Over time people will find and participate in whichever popular community matches their preferences.

I don't like the idea of global "Multi-communities", as then there are more instance admins that have control over a community. I think that in general mods should have the most control, with instance admins only being necessary due to an implementation detail (communities are bound to servers) and only needing to step in for extreme cases (like violating server rules).

I don't mind "Communities following communities" as much, but I fail to see the point. If you think that another community is a good place to have a discussion, why not just tell your members that you recommend moving there? I can see this working as a "Public Playlist" style idea where you can subscribe to follow recommended communities. I think having the option to post to either a followed community or the community that is doing the following is unnecessarily confusing. Basically I would make this more of a discovery feature than a way to merge communities together.

kevincox , (edited )

Yes, I agree with this. I wrote a blog post about this a while ago: post, lemmy discussion.

TL;DR communities on Lemmy are federated and highly dependent on the instance that they live on. If the source instance gets banned or goes offline the community will effectively go offline too.

This can be compared to Matrix rooms which don't really live on any specific instance and continue even if the source instance goes offline. Defederation will prevent users from seeing posts from users on the blocked instance, but the room itself isn't affected.

However I feel that trying to solve this by supporting some form of community merging would likely just be papering over the problem. The only way to really solve this is by properly decentralizing communities.

kevincox ,

It may still be nice to have a reference implementation. For example maybe they can see if there are extra hardening options that they can enable or adopt the more seamless update flow.

How to fool a laptop into thinking a monitor is connected?

Hello! I converted an old laptop with a broken screen into a home server, and it all works well except for one thing: when I reboot it (via ssh), if no screen is connected, it will get stuck and refuse to boot. As soon as I connect an HDMI monitor, the fans will start spinning and it will start booting as usual. Then I can...

kevincox ,

I had this issue as well where my mobo wouldn't boot without a GPU. In my case a BIOS update resolved the issue (it just beeps angrily a few times but continues booting).

kevincox ,

This seems unlikely since it boots with a monitor attached. From past experience most laptops that refuse to boot while closed don't boot even if an HDMI display is connected.

Upgrade vs Reinstall

I'm a generalist SysAdmin. I use Linux when necessary or convenient. I find that when I need to upgrade a specific solution it's often easier to just spin up an entirely new instance and start from scratch. Is this normal or am I doing it wrong? For instance, this morning I'm looking at a Linux VM whose only task is to run...

kevincox ,

I think yes. In general if you have good setup instructions (preferably automated) then it will be easier to start from scratch. This is because when starting from scratch you only need to worry about the new setup, but when upgrading you need to worry about the new setup as well as any cruft that has been carried over from the previous setup. Basically, starting clean has some advantages.

However it is important to make sure that you can go back to the old working state if required, either via backups or by leaving the old machine running until the new one has proven to be operational.

I also really like NixOS for this reason. It means that you can upgrade your system with very little cruft carrying over. Basically it behaves like a clean install on every update, but it is easier to roll back if you need to.

kevincox ,

Back in the day X was a great protocol that reflected the needs of the time.

  1. Applications asked it to draw some lines and text.
  2. It sent input events to applications.

People also wanted to customize how their windows were laid out more flexibly. So the window manager appeared. This would move all of your windows around for you and provide some global shortcuts for things.

Then graphics got more complicated. All of a sudden the simple drawing primitives of X weren't sufficient. Beyond lines, text and rectangles, applications wanted gradients, rounded corners and rich graphics. So now instead of using all of these fancy drawing APIs they were just uploading big bitmaps to the X server. At this point 1/3 of what the X server was previously doing became obsolete.

Next people wanted fancy effects and transparency (like drop shadows). So window managers started compositing the display. This is great, but now they need more control than just moving windows around on the display, since windows may be warped, rendered somewhere slightly different, or shown on a different workspace. So now all input events go first from X to the window manager, then back to X, then to the application. Output also needs to be processed by the window manager, so it is sent from the client to X, then to the window manager, and then the composited output is sent back to X. So another 1/3 of what X was doing became obsolete.

So now what is the X server doing:

  1. Outputting the composited image to the display.
  2. Receiving input from input devices.
  3. Shuffling messages and graphics between the window manager and applications.

It turns out that 1 and 2 have gotten vastly simpler over the years, and can now basically be solved by a few libraries. 3 is just overhead (especially if you are trying to use X over a network, because input and output each need to make multiple round trips).

So 1 and 2 turned into libraries and 3 was just removed. Basically this made the X server disappear. Now the window manager just directly reads input and displays output, usually using some common libraries.

Now removing the X server is a breaking change, so it was a great time to rethink a lot of decisions. Some of the highlights are:

  1. Accessing other applications' information (output and input capture) requires explicit permission. This is a key piece of sandboxing applications.
  2. Organize the system around frames to avoid tearing except for when desired (X doesn't really have the concept of a frame).
  3. Remove lots of basically unused APIs like fonts, drawing and many others.

So the future is great. Simpler, faster, more secure and more extensible. However getting there takes time.

This was also slowed down by some people resisting features that X had (such as applications being able to position themselves), and with a few missing features like that it can be impossible to make a nice port of an application to Wayland. However, over time these features are being added, and these days most applications have good Wayland support.

kevincox ,

Why I’ll need something like that?

IIUC it is mostly to avoid placing huge load on the original package host when people download the same package hundreds of times a day in their CI workflow. It also means that Google can take control over the user experience rather than huge issues coming up every time some smaller host goes down or someone deletes an existing package version.

Overall I doubt that this proxy was added as a source of tracking. And the privacy policy on the service is pretty strict: https://proxy.golang.org/privacy. So even though I am pretty wary of Google overall, I think this is actually a fairly reasonable thing for them to have enabled by default.
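
If you would rather not go through Google's proxy, or you have private modules, the standard Go environment variables control this; for example (the private path pattern is a placeholder):

    go env -w GOPROXY=direct             # fetch modules straight from the origin VCS
    go env -w GOPRIVATE=*.corp.example   # bypass the proxy and checksum DB for these paths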

kevincox ,

I don't know what you mean by "the source of this concept".

kevincox ,

I don't really have a source. It is just me thinking logically about the system and many offhand comments I have read over time, other than the privacy policy which I have linked.

kevincox ,

People are getting all upset at Facebook/Meta here, but they were served a valid warrant. I don't think there is much to get mad at them about here. The takeaway I get is this:

Avoid giving data to others. No matter how trustworthy they are (not that Meta is) they can be legally compelled to release it. Trust only in cryptography.

There is of course the other question of whether abortion being illegal is a policy that most people agree with... but that is a whole different kettle of fish that I won't get into here.

kevincox ,

This is controversial because they are "big bad" companies. But in some cases I think that is a plus because they have some responsibility to do as they say.

  1. Use a resolver that is a part of Mozilla's Trusted Recursive Resolver Program. Mozilla makes them agree to a solid privacy policy: https://wiki.mozilla.org/Security/DOH-resolver-policy#Conforming_Resolvers
  2. Google DNS. Obviously controversial but their privacy policy is very good. They keep "full" logs for at most 48 hours and only for debugging purposes.

The major concern for all of these is that they are allowed to keep anonymized logs forever. This means that if the hostname itself is sensitive then it can be recorded forever. (For example if you have "secret" subdomains.)

The other option is running your own recursive resolver. This mostly nullifies the private-subdomain issue, as only the authoritative server will see it (other than network snoopers); however, this has very real downsides:

  1. It exposes your IP address to many authoritative servers with no guarantees about the logs they keep.
  2. It can be slow as there is no shared cache.
  3. Requests from your resolver to the internet are not encrypted.
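
If you do go the self-hosted route, the setup itself is small; a minimal unbound sketch (the options shown are just a typical local configuration, not a recommendation):

    server:
      interface: 127.0.0.1
      access-control: 127.0.0.0/8 allow
      qname-minimisation: yes   # limit what each authoritative server gets to see
      prefetch: yes             # soften the cold-cache slowness a little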

Disclaimer: I used to work at Google (but not on Google Public DNS) and have no affiliation with other named or referenced companies.

kevincox ,

Just because it is not the advice that is expected does not make it bad advice. Obviously these names have some questionable behaviours, but in this case they often have separate privacy policies for their DNS services (or for the Mozilla endpoint of their DNS services), which makes them much better than the other Google products that are lumped behind a single privacy policy which isn't very privacy friendly.

Unfortunately it is impossible to know for sure that they are complying with the privacy policy, but this applies to all providers, no matter how large or what businesses they have other than providing DNS. So while you shouldn't blindly follow some random post on the internet, you may want to give these providers a second look and consider that these large companies have some privacy benefits if their privacy policy is accurate.
