• SacredPony@sh.itjust.works · 1 day ago

      All things cost your money, your data, or your soul. And those at the top love nothing more than to trick us into paying all three at once

  • I Cast Fist@programming.dev · 3 days ago

    Come on, OP, Altman is still a billionaire. If he got out of the game right now, with OpenAI still unprofitable, he’d still have enough wealth for a dozen generations.

  • uberstar@lemmy.ml · 2 days ago

    I tried DeepSeek, and immediately fell in love… My only nitpick is that images have to have text on them, otherwise it complains, but for the price of free, I’m basically just asking for too much. Contemporaries be damned.

  • glimmer_twin [he/him]@hexbear.net · 3 days ago

    Altman didn’t really make his money from tech. He’s basically a magic bean seller. He’ll be fine no matter what happens to AI. He’ll find a new grift and new suckers (famously one born every minute after all)

        • trevor@lemmy.blahaj.zone · 2 days ago

          Is it actually open source, or are we using the fake definition of “open source AI” that the OSI has massaged into being so corpo-friendly that the training data itself can be kept a secret?

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 2 days ago

            The code is open, weights are published, and so is the paper describing the algorithm. At the end of the day anybody can train their own model from scratch using open data if they don’t want to use the official one.

              • trevor@lemmy.blahaj.zone · 2 days ago

              The training data is the important piece, and if that’s not open, then it’s not open source.

                I don’t want the data just to avoid using the official model. I want the data so that I can reproduce the model. Without the training data, you can’t reproduce the model, and if you can’t do that, it’s not open source.

              The idea that a normal person can scrape the same amount and quality of data that any company or government can, and tune the weights enough to recreate the model is absurd.

                • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 2 days ago

                What ultimately matters is the algorithm that makes DeepSeek efficient. Models come and go very quickly, and that part isn’t all that valuable. If people are serious about wanting to have a fully open model then they can build it. You can use stuff like Petals to distribute the work of training too.

                  • trevor@lemmy.blahaj.zone · 2 days ago

                  That’s fine if you think the algorithm is the most important thing. I think the training data is equally important, and I’m so frustrated by the bastardization of the meaning of “open source” as it’s applied to LLMs.

                    It’s like a normal software project that provides a thin wrapper over a proprietary library you must link against, and then calls itself open source. The wrapper is open, but the actual substance that provides the functionality isn’t.

                  It’d be fine if we could just use more honest language like “open weight”, but “open source” means something different.

              • trevor@lemmy.blahaj.zone · 2 days ago

              I’m not seeing the training data here… so it looks like the answer is yes, it’s not actually open source.

        • MajorSauce@sh.itjust.works · 3 days ago

          So far, they are training models extremely efficiently while the US gatekeeps their GPUs and does everything it can to slow their progress. Any innovation that makes models more efficient to operate and train is great for the accessibility of the technology and for reducing the environmental impact of this (so far) very wasteful tech.

  • Sabre363@sh.itjust.works · 3 days ago

    We doing paid promotions or something on Lemmy now? You sure seem to be pushing this DeepSeek thing pretty hard, OP.

      • Sabre363@sh.itjust.works · 3 days ago

        None of this has anything to do with the model being open source or not, plenty of other people have already disputed that claim.

        • Grapho@lemmy.ml · 2 days ago

          It’s a model that outperforms the other ones in a bunch of areas with a smaller footprint and which was trained for less than a twentieth of the price, and then it was released as open source.

          If it were European or US made, nobody would deem it suspicious if somebody talked about it all month, but it’s a Chinese breakthrough and god forbid you talk about it for three days.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 3 days ago

          It has everything to do with the tech being open. You can dispute it all you like, but the fact is that all the code and research behind it is open. Anybody could build a new model from scratch using open data if they wanted to. That’s what matters.

          • Sabre363@sh.itjust.works · 2 days ago

            I’m commenting on the odd nature of the post and your behavior in the comments, pointing out that it comes across as more a shallow advertisement than a sincere endorsement, that is all. I don’t know enough about DeepSeek to discuss it meaningfully, nor do I have enough evidence to decide upon its open source status.

              • Sabre363@sh.itjust.works · 2 days ago

                You might have a far more positive interaction with the community if you learned to listen first before jumping on the defensive

                  • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 2 days ago

                  Pretty much all my interactions with the community here have been positive, aside from a few toxic trolls such as yourself. Maybe take your own advice there champ.

  • Sem@lemmy.ml · 3 days ago

    DeepSeek collects and processes all the data you send to their LLM, even from API calls. That’s a no-go for most business applications. For example, OpenAI and Anthropic do not collect or process data sent via the API, and there is an opt-out button in their settings that lets you avoid processing of data sent via the UI.

    • fl42v@lemmy.ml · 3 days ago

      You can run ’em locally, tho, if their gh page is to be believed. And that way you can make sure nothing even gets sent to their servers, rather than just trusting that nothing is processed.
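
      A rough sketch of what a fully local call can look like, assuming you’re serving one of the distilled models through Ollama; the model tag, port, and endpoint below are Ollama defaults and my own assumptions, not anything from DeepSeek’s repo:

      ```python
      import json
      import urllib.request

      # Assumes a local Ollama server with a DeepSeek model already pulled,
      # e.g. after something like `ollama pull deepseek-r1:7b`.
      # Everything here stays on your own machine.
      payload = {
          "model": "deepseek-r1:7b",
          "prompt": "Explain in one sentence what a mixture-of-experts model is.",
          "stream": False,
      }
      req = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])
      ```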

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 3 days ago

      DeepSeek is an open source project that anybody can run, and it’s performant enough that even running the full model is affordable for any company.

      • shawn1122@lemm.ee · 2 days ago

        Since it’s open source is there a way for companies to adjust so it doesn’t intentionally avoid saying anything bad about China?

        • HappyTimeHarry@lemm.ee · 2 days ago

          If it was actually programmed that way then yes, you could go in and adjust that, but the model itself is not censored that way and has no problem describing all sorts of Chinese taboo subjects.

          • Ajen@sh.itjust.works · 2 days ago

            That doesn’t mean it’s straightforward, or even possible, to entirely remove the censorship that’s baked into the model.

            • Grapho@lemmy.ml · 2 days ago

              People saying truisms that confirm their biases about shit they clearly know nothing about? I thought I’d left reddit.

            • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 2 days ago

              It doesn’t mean it’s easy, but it is certainly possible if somebody was dedicated enough. At the end of the day you could even use the open source code DeepSeek published and your own training data to train a whole new model with whatever biases you like.
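
              To make the “train it further yourself” point concrete, here is a minimal sketch of continued training with HuggingFace transformers, assuming one of the published distilled checkpoints. The checkpoint name, data, and hyperparameters are illustrative assumptions, and this is generic causal-LM fine-tuning, not DeepSeek’s own training pipeline:

              ```python
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              # Assumed checkpoint name; any open-weight causal LM works the same way.
              name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
              tok = AutoTokenizer.from_pretrained(name)
              model = AutoModelForCausalLM.from_pretrained(name)
              opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

              # One step of ordinary next-token-prediction training on text you pick yourself.
              batch = tok(["Whatever text you want the model to learn from."], return_tensors="pt")
              loss = model(**batch, labels=batch["input_ids"]).loss  # transformers computes the LM loss
              loss.backward()
              opt.step()
              opt.zero_grad()
              ```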

              • Ajen@sh.itjust.works · 2 days ago

                “It’s possible, you just have to train your own model.”

                Which is almost as much work as you would have to do if you were to start from scratch.

                  • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 2 days ago

                  It’s obviously not since the whole reason DeepSeek is interesting is the new mixture of experts algorithm that it introduces. If you don’t understand the subject then maybe spend a bit of time learning about it instead of adding noise to the discussion?

      • blarth · 3 days ago

        It should be repeated: no American corporation is going to let their employees put data into DeepSeek.

        Accept this truth. The LLM you can download and run locally is not the same as what you’re getting on their site. If it is, it’s shit, because I’ve been testing r1 in ollama and it’s trash.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 3 days ago

          It should be repeated: anybody can run DeepSeek themselves on premise. You have absolutely no clue what you’re talking about. Keep on coping there though, it’s pretty adorable.

    • hungrybread [comrade/them]@hexbear.net · 3 days ago

      I’m too lazy to look for any of their documentation about this, but it would be pretty bold to believe privacy or processing claims from OpenAI or similar AI orgs, given their history of flouting copyright.

      Silicon valley more generally just breaks laws and regulations to “disrupt”. Why wouldn’t an org like OpenAI at least leave a backdoor for themselves to process API requests down the road as a policy change? Not that they would need to, but it’s not uncommon for a co to leave an escape hatch in their policies.

      • The Octonaut@mander.xyz · 3 days ago

        I don’t think you or that Medium writer understand what “open source” means. Being able to run a local stripped down version for free puts it on par with Llama, a Meta product. Privacy-first indeed. Unless you can train your own from scratch, it’s not open source.

        Here’s the OSI’s helpful definition for your reference https://opensource.org/ai/open-source-ai-definition

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 3 days ago

          You can run the full version if you have the hardware, the weights are published, and importantly the research behind it is published as well. Go troll somewhere else.

          • The Octonaut@mander.xyz · 3 days ago

            All that is true of Meta’s products too. It doesn’t make them open source.

            Do you disagree with the OSI?

            • Grapho@lemmy.ml · 2 days ago

              What makes it open source is that the source code is open.

              My grandma is as old as my great aunts, that doesn’t transitively make her my great aunt.

              • The Octonaut@mander.xyz · 2 days ago

                A model isn’t an application. It doesn’t have source code. Any more than an image or a movie has source code to be “open”. That’s why OSI’s definition of an “open source” model is controversial in itself.

                • Grapho@lemmy.ml · 2 days ago

                  It’s clear you’re being disingenuous. A model is its dataset and its weights too, but the weights are also open, and if the source code were as irrelevant as you say, DeepSeek wouldn’t be this much more performant and “Open” AI would have published theirs instead of closing the whole release.

              • The Octonaut@mander.xyz · 3 days ago

                The data part, i.e. the very first part of the OSI’s definition.

                It’s not available from their articles https://arxiv.org/html/2501.12948v1 https://arxiv.org/html/2401.02954v1

                Nor on their github https://github.com/deepseek-ai/DeepSeek-LLM

                Note that the OSI only asks for transparency about what the dataset was - a name and the fee paid will do - not that full access to it be free and Free.

                It’s worth mentioning too that they’ve used the MIT license for the “code” included with the model (a few YAML files to feed it to software), but they have created their own unrecognised non-free license for the model itself. Why they put this misleading label on their GitHub page would only be speculation.

                Without making the dataset available, nobody can accurately recreate, modify or learn from the model they’ve released. This is the only sane definition of open source available for an LLM model, since it is not in itself code with a “source”.

      • Hnery@feddit.org · 3 days ago

        So… as far as I understand from this thread, it’s basically a finished model (Llama or Qwen) which is then fine-tuned using an unknown dataset? That’d explain the claimed $6M training cost, hiding the fact that the heavy lifting has been done by others (the US of A’s Meta in this case). Nothing revolutionary to see here, I guess. Small improvements are nice to have, though. I wonder how their smallest models perform, are they any better than llama3.2:8b?

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 3 days ago

           What’s revolutionary here is the use of a mixture-of-experts approach to get far better performance. While it has 671 billion parameters overall, it only activates 37 billion at a time, making it very efficient. For comparison, Meta’s Llama 3.1 uses all 405 billion of its parameters at once. It does as well as GPT-4o in the benchmarks, and it excels at advanced mathematics and code generation. It also has a 128K token context window, which means it can process and understand very long documents, and it generates text at 60 tokens per second, twice as fast as GPT-4o.
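
           For a sense of how “only 37 billion of 671 billion parameters active” works, here is a toy top-k routing layer in PyTorch. It is a generic mixture-of-experts sketch with made-up sizes, meant only to show the idea of sparse routing, not DeepSeek’s actual architecture:

           ```python
           import torch
           import torch.nn as nn
           import torch.nn.functional as F

           class TopKMoE(nn.Module):
               """Toy mixture-of-experts layer: a small router picks k experts per
               token, so only a fraction of the layer's parameters runs per input."""
               def __init__(self, dim=64, n_experts=8, k=2):
                   super().__init__()
                   self.k = k
                   self.router = nn.Linear(dim, n_experts)  # gating network
                   self.experts = nn.ModuleList(
                       [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                      nn.Linear(4 * dim, dim)) for _ in range(n_experts)]
                   )

               def forward(self, x):                                  # x: (tokens, dim)
                   scores = self.router(x)                            # (tokens, n_experts)
                   weights, idx = torch.topk(scores, self.k, dim=-1)  # keep k best experts
                   weights = F.softmax(weights, dim=-1)
                   out = torch.zeros_like(x)
                   for slot in range(self.k):                         # run only the chosen experts
                       for e, expert in enumerate(self.experts):
                           mask = idx[:, slot] == e
                           if mask.any():
                               out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
                   return out

           tokens = torch.randn(16, 64)
           print(TopKMoE()(tokens).shape)  # torch.Size([16, 64])
           ```

           The top-k routing is what keeps per-token compute roughly proportional to the active parameters (about 37B here) rather than the full 671B.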