  • I really want to like Nix. The idea of declaratively defining my entire system sounds great. I can manage it with Git and even have multiple machines that all look the same. I can define my partitioning once and magically get a btrfs disk working. Wow!

    But I find the language confusing no matter how many times people say it’s easy. I have a lot of experience with other programming languages, so maybe it just doesn’t mesh. The error messages are also terrible and hard for me to understand.

    And it’s unpredictable which version I’m going to get from Nixpkgs. One of the services I installed turned out to be a release candidate, which was a surprise. What if I don’t want the latest version of Docker? How do I pin it? Do I have to duplicate part of Nixpkgs? It just feels like a monorepo where everybody has to be on the same versions.

    Why on earth do the Nix language docs start by introducing math expressions instead of a simple, self-contained example: here’s how you install one program, here’s how you configure it, here’s how you expand from there. And why does the dependency graph seem to pull in so many unnecessary dependencies? For example, I tried to build a minimal Docker image (which Nix looks like a very good fit for), but I couldn’t figure out how to strip out dependencies that were likely only needed at build time by one of my dependencies.

    I still like the idea and have managed to get my server defined entirely with NixOS, which is very cool, but I can’t recommend it to my tech friends, because if I’m confused they will be even more so.

  • Fascinating. Just based on your comment and nothing else, it sounds like it could be something like a CPU enclave such as Intel SGX. Basically, a remote client can validate that an application is running in a secure part of a remote cloud computer. The stated goal of SGX is that you only have to trust Intel: if you trust Intel and, say, run program X in the enclave, then only that part of the CPU can access the data, not the applications running outside the enclave.

    Now that glosses over some things: you still need to trust the client, and IIRC in a WhatsApp-style situation you don’t really know what the enclave does, but the communication between the enclave and the host OS is heavily restricted. LLMs also need a lot of compute and are usually run on GPUs, so I’m not sure how that works yet.
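
    Roughly, the client-side check works like this toy sketch (this is not the real SGX/DCAP protocol; make_quote, verify_quote and the in-memory “vendor key” are all made up for illustration, and in real SGX the signing key lives in the CPU and chains back to Intel’s certificates):

    ```python
    # Toy illustration of the attestation idea, NOT real SGX.
    # The in-memory vendor key stands in for hardware-held keys that chain back to Intel.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    vendor_key = ec.generate_private_key(ec.SECP256R1())  # the key everyone agrees to trust

    def make_quote(enclave_code: bytes) -> tuple[bytes, bytes]:
        # "Enclave side": measure (hash) the loaded code and sign the measurement.
        measurement = hashlib.sha256(enclave_code).digest()
        signature = vendor_key.sign(measurement, ec.ECDSA(hashes.SHA256()))
        return measurement, signature

    def verify_quote(measurement: bytes, signature: bytes, expected_code: bytes) -> bool:
        # Remote client: was the quote signed by hardware we trust, and is the
        # measured code exactly the program X we asked to run?
        try:
            vendor_key.public_key().verify(signature, measurement, ec.ECDSA(hashes.SHA256()))
        except InvalidSignature:
            return False
        return measurement == hashlib.sha256(expected_code).digest()

    program_x = b"the exact binary the client expects"
    print(verify_quote(*make_quote(program_x), program_x))           # True
    print(verify_quote(*make_quote(b"tampered binary"), program_x))  # False
    ```

    If the signature or the measurement doesn’t match what the client expects, it simply refuses to send its data.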


  • I’ve been experimenting with it for different use cases:

    • Standard chat-style interface with open-webui. I use it to ask the kinds of things people would normally ask ChatGPT: researching things, vacation plans, etc. I take it all with a grain of salt and still use search engines as well.
    • Pieces of different software projects of mine, using ollama-python. For example, I tried using it to auto-summarize transaction data (rough sketch after this list).
    • Home Assistant voice assistants for my own voice-activated smart home.
    • Trying out code completion using TabbyML.
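
    The transaction-summary experiment looks roughly like this (a hand-wavy sketch, not my actual code; the model name and the summarize_transactions helper are just placeholders):

    ```python
    # Rough sketch of summarizing transactions with ollama-python.
    # "llama3" and summarize_transactions are placeholders, not my real setup.
    import ollama

    def summarize_transactions(transactions: list[str]) -> str:
        prompt = (
            "Summarize these bank transactions in a short paragraph, "
            "grouping them by category:\n" + "\n".join(transactions)
        )
        # Talks to the local Ollama server, so nothing leaves the machine.
        response = ollama.chat(
            model="llama3",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["message"]["content"]

    print(summarize_transactions([
        "2024-05-01  -42.17  GROCERY STORE",
        "2024-05-02  -9.99   STREAMING SERVICE",
        "2024-05-03  -120.00 ELECTRIC UTILITY",
    ]))
    ```

    Swapping models is just a matter of changing the model string, which makes it easy to see what actually fits on the card.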

    I only have a GeForce 1080 Ti in it, so some projects are a bit slow and I can’t run the biggest models, but what really matters is the self-satisfaction I get from not using somebody else’s model. Or that’s what I try to tell myself while I’m waiting for responses.