• zebidiah@lemmy.ca
    link
    fedilink
    arrow-up
    1
    ·
    18 minutes ago

    i use AI every day in my daily work, it writes my emails, performance reviews, project updates etc.

    …and yeah, that checks out!

  • TheObviousSolution@lemm.ee
    link
    fedilink
    arrow-up
    5
    ·
    1 hour ago

    Using AI is telling people they shouldn’t care about your IP because you clearly don’t care about theirs when it passes through the AI lens.

    • ameancow@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      5 minutes ago

      I’ve been using a cartoonish profile picture for my work emails, Teams portrait, and other communications for many years. There’s almost no way to tell that kind of icon apart from AI-generated icons at that size anyway.

      And even if it were, that’s not the point of the conversation. Fixating on that is such bad faith that it betrays a defensiveness about AI-generated content, so it’s particularly important that someone like you get this message. Let me reiterate clearly:

      I have a role of responsibility: I hire people and use company budget to make decisions about the companies and products we’ll be paying for. When making these decisions, I don’t look at people’s email signatures or the icons they use. I look at their presentation materials, and if that shit is AI generated, I know immediately it’s just a couple of people pretending to be an agency or company, or some company that doesn’t quality-control its slides and presentation decks. It shows laziness. I would rather go with a company that has data and specs than one leaning on graphics anyway, so if those graphics are also lazy AF, that’s a hard pass. Not my first rodeo; I’ve learned to listen to experience.

  • zerofk@lemm.ee
    link
    fedilink
    arrow-up
    2
    arrow-down
    10
    ·
    48 minutes ago

    Ironically, an LLM could’ve made his post grammatically correct and understandable.

    • ameancow@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      3 minutes ago

      If you had a hard time understanding the point being made in that post, you could probably be replaced by AI and we wouldn’t notice the difference.

  • romanticremedy@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    20
    ·
    14 hours ago

    I think this problem will get worse, because many of the websites used for “your own research” will lose the human traffic that watches their ads while more bots scrape their data, reducing the motivation to keep those websites running. Most people just take the path of least resistance, so I think AI search will be the default soon.

    Yes, I hate this timeline

    • Rin@lemm.ee
      link
      fedilink
      arrow-up
      20
      ·
      edit-2
      13 hours ago

      It’s so annoying because you’re correct. I’m finding it harder and harder to use a search engine for things. Hell, the web in general is becoming unusable. It’s all shit.

      rant

      Here’s my personal gripe. Imagine looking for a solution to a problem and finding a reddit thread in your search engine of choice. The wording in the description seems to match your exact issue, and it’s one of the first results. You click on it and…

      What I’d do to that smug fuck if I ever got my hands on him aside, me behind a VPN with a deleted reddit account can’t find the answer. But I can go to chatgpt without logging in and it answers my fucking queries more efficiently than digging through random sites, which most of the time are just sloppy, shitty mirrors of Stack Overflow or Quora that BLATANTLY copy content from those websites and just use slightly better SEO…

      Also, reddit did this because of the API shit they pulled. AND THEN THEY SOLD FUCKING ACCESS TO GOOGLE. So me, a fucking person (as far as I know anyway), isn’t privileged enough to view the fucking crumbs of information that google just gobbles up on a daily basis. Fuck that. This is a move that makes me feel less important than my fucking roomba.

      I just feel so mad. I want the old web back. I just want my duckduckgo to work well.

      Sorry for the rant.

  • roude@lemmynsfw.com
    link
    fedilink
    English
    arrow-up
    47
    arrow-down
    20
    ·
    edit-2
    18 hours ago

    Alright, I don’t like the direction of AI same as the next person, but this is a pretty fucking wild stance. There are multiple valid applications of AI that I’ve implemented myself: LTV estimation, document summary / search / categorization, fraud detection, clustering and scoring, video and audio recommendations… “Using AI” is not the problem; “AI charlatan-ing” is. Or in this guy’s case, “wholesale anti-AI stanning”. Shoehorning AI into everything is admittedly a waste, but writing off the entirety of a very broad category (AI) is just silly.

    • Mustakrakish@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      13 minutes ago

      I have ADHD and I have to ask A LOT of questions to get my brain around concepts sometimes, often because I need to understand fringe cases before it “clicks”. AI has been so fucking helpful: being able to just copy a line from a textbook and say “I’m not sure what they mean by this, can you clarify?” or “it says this, but also this, aren’t these two conflicting?” and have it explain has been a game changer for me. I still have to keep my bullshit radar on, but that’s solved by actually reading to understand and not just taking the answer as-is. In fact, scrutinizing the answer against what I’ve learned and asking further questions has made me feel more engaged with the material.

      Most issues with AI are issues with capitalism.

    • jjjalljs@ttrpg.network
      link
      fedilink
      arrow-up
      41
      ·
      17 hours ago

      I don’t think AI is actually that good at summarizing. It doesn’t understand the text and is prone to hallucinate. I wouldn’t trust an AI summary for anything important.

      Also, search just seems like overkill. If I type in “population of london”, I just want to be taken to a reputable site like wikipedia. I don’t want a guessing machine to tell me.

      Other use cases maybe. But there are so many poor uses of AI, it’s hard to take any of it seriously.

      • I don’t think AI is actually that good at summarizing. It doesn’t understand the text and is prone to hallucinate. I wouldn’t trust an AI summary for anything important.

        This right here. Whenever I’ve tried using an LLM to summarize, I spent more time fact-checking it (and finding the inevitable misunderstandings and outright hallucinations—they’re always there for anything of substance!) than I’d spend writing my own damned summary.

        There is, however, one use case I’ve found where LLMs work better than the alternatives … provided you do due diligence. To put it bluntly, Google Translate and the similar slop from Bing, Baidu, etc. suck. They are god-awful at translating anything but straightforward technical writing or the most tediously dull prose. LLMs are far better translators (and can be instructed to highlight cultural artifacts, possible transcription errors, etc.) …

        … as long as you back-translate in a separate session to check for hallucination.

        Oh, and Google Translate-style translators really suck at Classical Chinese. LLMs do much better (provided you do the back-translation check for hallucination).
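
        A minimal sketch of that back-translation workflow, assuming the openai Python client (the model name and prompts are placeholders, not a recommendation):

            # Back-translate in a *separate*, stateless call so the model can't
            # just echo its own chat history. "gpt-4o-mini" is a placeholder.
            from openai import OpenAI

            client = OpenAI()

            def translate(text: str, source: str, target: str) -> str:
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=[{"role": "user",
                               "content": f"Translate from {source} to {target}:\n\n{text}"}],
                )
                return resp.choices[0].message.content

            original = "..."  # your source passage
            forward = translate(original, "Classical Chinese", "English")
            back = translate(forward, "English", "Classical Chinese")
            # Compare `back` against `original` yourself; big divergences are
            # the red flag for hallucinated content in the forward pass.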

        • brognak@lemm.ee
          link
          fedilink
          arrow-up
          2
          arrow-down
          1
          ·
          1 hour ago

          If I understand how AI works (predictive models), it kinda seems perfectly suited for translating text. That’s also exactly how I’ve been using it with Gemini: translating all the memes in ich_iel 🤣. Unironically, it works really well, and the only ones that aren’t understandable are cultural, not linguistic.

      • roude@lemmynsfw.com
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        4
        ·
        edit-2
        14 hours ago

        I guess this really depends on the solution you’re working with.

        I’ve built a voting system that relays the same query to multiple online and offline LLMs and uses a consensus to complete a task. I chunk a task into smaller more manageable components, and pass those through the system. So one abstract, complex single query becomes a series of simpler asks with a higher chance of success. Is this system perfect? No, but I am not relying on a single LLM to complete it. Deficiencies in one LLM are usually made up for in at least one other LLM, so the system works pretty well. I’ve also reduced the possible kinds of queries down to a much more limited subset, so testing and evaluation of results is easier / possible. This system needs to evaluate the topic and sensitivity of millions of websites. This isn’t something I can do manually, in any reasonable amount of time. A human will be reviewing websites we flag under very specific conditions, but this cuts down on a lot of manual review work.
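
        In rough Python terms, the voting layer looks something like this (just a sketch; the backend functions are placeholders for whatever online/offline models you wire up):

            # Majority-vote relay across several LLM backends. Each `ask`
            # callable is a hypothetical wrapper around one model.
            from collections import Counter

            def classify_with_consensus(chunk, backends):
                votes = []
                for ask in backends:
                    try:
                        votes.append(ask("Classify the topic/sensitivity of:\n" + chunk).strip().lower())
                    except Exception:
                        continue  # one flaky backend shouldn't sink the task
                if not votes:
                    return None  # flag for human review
                label, count = Counter(votes).most_common(1)[0]
                # Require a real majority, not just a plurality of one.
                return label if count > len(votes) / 2 else None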

        When I said search, I meant offline document search. Like “find all software patents related to fly-by-wire aircraft embedded control systems” from a folder of patents. Something like elastic search would usually work well here too, but then I can dive further and get the LLM to reason about the results surfaced by the first query. I absolutely agree that AI-powered web search is a shitshow.
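
        Sketched naively, that two-stage shape looks like this (a real setup would use elastic search for stage 1; `llm` is a placeholder completion function):

            def search_patents(folder, query_terms, llm):
                # Stage 1: cheap lexical filter over {filename: text}.
                hits = {name: text for name, text in folder.items()
                        if all(t.lower() in text.lower() for t in query_terms)}
                # Stage 2: the LLM reasons about each surfaced document.
                return {name: llm("Is this patent about fly-by-wire embedded "
                                  "control systems? Justify briefly.\n\n" + text[:4000])
                        for name, text in hits.items()}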

      • ArchRecord@lemm.ee
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        7
        ·
        edit-2
        14 hours ago

        I don’t think AI is actually that good at summarizing.

        It really depends on the type and size of text you want it to summarize.

        For instance, it’ll only give you a very, very simplistic overview of a large research paper that uses technical terms, but if you want it to compress down a bullet point list, or take one paragraph and turn it into some bullet points, it’ll usually do that without any issues.

        Edit: I truly don’t understand why I’m getting downvoted for this. LLMs are actually relatively good at summarizing small pieces of information that don’t need much context into bullet points. They’re quite literally built as code that predicts the likelihood of text given an input. Giving one a small amount of text to reword or recontextualize plays to one of its genuine strengths. That’s why it was originally mostly deployed as a tool to reword small, isolated sections in articles, emails, and papers, before the technology was improved.

        It’s when they get to larger pieces of information, like meetings, books, wikipedia articles, etc, that they begin to break down, due to the nature of the technology itself. (context windows, lack of external resources that humans are able to integrate into their writing, but LLMs can’t fully incorporate on the same level)

        • jjjalljs@ttrpg.network
          link
          fedilink
          arrow-up
          5
          ·
          11 hours ago

          But if the text you’re working on is small, you could just do it yourself. You don’t need an expensive guessing machine.

          Like, if I built a rube-goldberg machine using twenty rubber ducks, a diesel engine, and a blender to tie my shoes, and it gets it right most of the time, that’s impressive. but also kind of a stupid waste, because I could’ve just tied them with my hands.

          • ArchRecord@lemm.ee
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            10 hours ago

            you could just do it yourself.

            Personally, I think that wholly depends on the context.

            For example, if someone’s having part of their email rewritten because they feel the tone was a bit off, they’re usually doing that because their own attempts to do so weren’t working for them, and they wanted a secondary… not exactly opinion, since it’s a machine obviously, but at least an attempt that’s outside whatever their brain might currently be locked into trying to do.

            I know I’ve gotten stuck for way too long wondering why my writing felt so off, only to have someone give me a quick suggestion that cleared it all up, so I can see how this would be helpful, while also not always being something they can easily or quickly do themselves.

            Also, there are legitimately just many use cases for applications using LLMs to parse small pieces of data on behalf of an application better than simple regex equations, for instance.

            For example, Linkwarden, a popular open source link management software, (on an opt-in basis) uses LLMs to just automatically tag your links based on the contents of the page. When I’m importing thousands of bookmarks for the first time, even though each individual task is short to do, in terms of just looking at the link and assigning the proper tags, and is not something that takes significant mental effort on its own, I don’t want to do that thousands of times if the LLM will get it done much faster with accuracy that’s good enough for my use case.
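
            The gist of that kind of auto-tagging, sketched (this is not Linkwarden’s actual code; `llm` stands in for whatever model call is configured):

                ALLOWED_TAGS = ["news", "programming", "recipes", "music", "reference"]

                def tag_link(page_text, llm):
                    answer = llm(
                        "Pick up to three tags for this page, only from this list: "
                        + ", ".join(ALLOWED_TAGS)
                        + ".\nAnswer with a comma-separated list.\n\n"
                        + page_text[:2000]
                    )
                    # Keep only allowed tags; "good enough" accuracy is the goal.
                    return [t.strip() for t in answer.split(",") if t.strip() in ALLOWED_TAGS]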

            I can definitely agree with you in a broader sense though, since at this point I’ve seen people write 2 sentence emails and short comments using AI before, using prompts even longer than the output, and that I can 100% agree is entirely pointless.

          • ArchRecord@lemm.ee
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            4
            ·
            14 hours ago

            It can, but I don’t see that happen often in the places I see it used, at least by the average person. I will say I’ve deliberately insulated myself a bit from the very AI-bro type of people who use it regularly throughout their day; I mostly interact with people who use it occasionally while researching an assignment, rewriting part of an email, etc., so I recognize that my opinion here might just be shaped by the kinds of uses I personally see.

            In my experience, when it’s used to summarize, say, 4-6 sentences of text, in a general-audience readable text (i.e. not a research paper in a journal) that doesn’t explicitly rely on a high level of context from the rest of the text (e.g. a news article relies on information it doesn’t currently have, so a paragraph out of context would be bad, vs instructions on how to use a tool, which are general knowledge) then it seems to do pretty well, especially within the confines of an existing conversation about the topic where the intent and context has been established already.

            For example, a couple months back, I was having a hard time understanding subnetting, but I decided to give it a shot, and by giving it a bit of context on what was tripping me up, it was successfully able to reword and re-explain the topic in such a way that I was able to better understand it, and could then continue researching it.

            Broad topic that’s definitely in the training data + doesn’t rely on lots of extra context for the specific example = reasonably good output.

            But again, I also don’t frequently interact with the kind of people that like having AI in everything, and am mostly just around very casual users that don’t use it for anything very high stakes or complex, and I’m quite sure that anything more than extremely simple summaries of basic information or very well-known topics would probably have a lot of hallucinations.

              • ArchRecord@lemm.ee
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                14 hours ago

                See, when I have 4-6 sentences to summarize, I don’t see the value-add of a machine doing the summarizing for me.

                Oh I completely understand, I don’t often see it as useful either. I’m just saying that a lot of people I see using LLMs occasionally are usually just shortening their own replies to things, converting a text based list of steps to a numbered list for readability, or just rewording a concept because the original writer didn’t word it in a way their brain could process well, etc.

                Things that don’t necessarily require a huge amount of effort on their part, but still save them a little bit of time, which in my conversations with them, seems to prove valuable to them, even if it’s in a small way.

                • jjjalljs@ttrpg.network
                  link
                  fedilink
                  arrow-up
                  4
                  ·
                  11 hours ago

                  I feel like letting your skills in reading and communicating in writing atrophy is a poor choice. And skills do atrophy without use. I used to be able to read a book and write an essay critically analyzing it. If I tried to do that now, it would be a rough start.

                  I don’t think people are going to just up and forget how to write, but I do think they’ll get even worse at it if they don’t do it.

        • Thisiswritteningerman@midwest.social
          link
          fedilink
          English
          arrow-up
          5
          ·
          15 hours ago

          Our plant manager likes to use it (Copilot) to summarize meetings. It does not, in fact, summarize to a bullet point list in any useful way. It breaks the notes into a header for each topic, then bullet points. The header is a brief summary. The bullet points? The exact same summary, but now split by sentence into individual points. Truly stunning work. Even better with a “Please review the meeting transcript yourself as AI might not be 100% accurate” disclaimer.

          Truly worthless.

          That being said, I’ve got a few vision systems using an “AI” to recognize product that doesn’t meet the pre-taught pattern. It’s very good at this.

          • ArchRecord@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            ·
            14 hours ago

            This is precisely why I don’t think anybody should be using it for meeting summaries. I know someone who does at his job, and even he only uses it for the boring, never acted upon meetings that everyone thinks is unnecessary but the managers think should be done anyways, because it just doesn’t work well enough to justify use on anything even remotely important.

            Even just from a purely technical standpoint, the context windows of LLMs are so small relative to the scale of meetings, that they will almost never be able to summarize it in its entirety without repeating points, over-explaining some topics and under-explaining others because it doesn’t have enough external context to judge importance, etc.

            But if you give it a single small paragraph from an article, it will probably summarize that small piece of information relatively well, and if you give it something already formatted like bullet points, it can usually combine points without losing much context, because it’s inherently summarizing a small, contextually isolated piece of information.
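
            You can even check the scale problem directly: counting tokens with tiktoken shows how fast a transcript outgrows a context window (the 128k figure below is an assumption about the model, not a universal constant):

                import tiktoken

                enc = tiktoken.get_encoding("cl100k_base")

                def fits_context(transcript, window_tokens=128_000):
                    # A multi-hour meeting transcript can blow past this, at which
                    # point the model is "summarizing" a truncated view of it.
                    return len(enc.encode(transcript)) <= window_tokens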

          • Lemminary@lemmy.world
            link
            fedilink
            arrow-up
            2
            arrow-down
            2
            ·
            14 hours ago

            I think your manager has a skill issue if his output is being badly formatted like that. I’d tell him to include a formatting guideline in his prompt. It won’t solve his issues but I’ll gain some favor. Just gotta make it clear I’m no damn prompt engineer. lol

            • Thisiswritteningerman@midwest.social
              link
              fedilink
              English
              arrow-up
              4
              ·
              13 hours ago

              I didn’t think we should be using it at all, from a security standpoint. Let’s run potentially business-critical information through the plagiarism machine that Microsoft has unrestricted access to. So I’m not going to attempt to help make its use better at all. Hopefully, if it’s trash enough, it’ll blow over once no one reasonable uses it. Besides, the man’s derided by production operators and the non-kool-aid-drinking salaried folk. He can keep it up. Lol

              • Lemminary@lemmy.world
                link
                fedilink
                arrow-up
                1
                ·
                13 hours ago

                Right, I just don’t want him to think that, or he’d have me tailor the prompts for him and give him an opportunity to micromanage me.

    • unexposedhazard@discuss.tchncs.de
      link
      fedilink
      arrow-up
      18
      ·
      edit-2
      18 hours ago

      It’s just a statistics game. When 99% of the stuff that uses or advertises the use of “AI” is garbage, having a mental heuristic that filters it out is very effective. Yes, you will miss the 1% of useful things, but that’s not really an issue for most people. If you need it, you can still look for it.

    • Detun3d@lemm.ee
      link
      fedilink
      arrow-up
      1
      arrow-down
      2
      ·
      13 hours ago

      Yep. AI research has advanced for decades. It’s essentially math. Don’t be mad at math. Be mad at the salesmen lying about their unfinished, unregulated, and unsupervised services, which should be products but are being sold as early-access subscriptions to take advantage of the hype and ignorance while they last (and because it’s easier to avoid refunds that way).

    • PresidentCamacho@lemm.ee
      link
      fedilink
      arrow-up
      4
      arrow-down
      5
      ·
      17 hours ago

      But what about me and my overly simplistic world views where there is no room for nuance? Have you thought about that?

  • apfelwoiSchoppen@lemmy.world
    link
    fedilink
    arrow-up
    87
    arrow-down
    1
    ·
    edit-2
    21 hours ago

    I ordered some well rated concert ear protection from the maker’s website. The order waited weeks to ship after a label was printed and likely forgotten. I went to find a place to call or contact a human there, all they had was a self-described AI chat robot that just talked down to me condescendingly. It simply would not believe my experience.

    I eventually got the ear protection but I won’t be buying from them again. Can’t even staff some folks to check email. I eventually found their PR email address but even that was outsourced to a PR firm that never got back to me. Utter shit, AI.

    • zod000@lemmy.ml
      link
      fedilink
      arrow-up
      33
      ·
      19 hours ago

      I’m glad you mentioned the company directly as I also want to steer clear of companies like this.

    • crank0271@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      1
      ·
      21 hours ago

      That’s really good to know about these things. They’ve been on sale through Woot. I guess there’s a good reason for that.

    • lapping6596@lemmy.world
      link
      fedilink
      arrow-up
      2
      arrow-down
      1
      ·
      14 hours ago

      Oh man, sad that’s the customer service cause I deeply love my loops. I was already carrying them with me everywhere I went so I grabbed a pill keychain thing and attached them to my keys so I’d never forget to grab them.

      • apfelwoiSchoppen@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        13 hours ago

        Yeah this happened back earlier this year. I had lost a pair from a purchase years ago and replaced them. Guessing they are laying off people/support contracts like so many stupid business owners. I was sure that my order would be stuck in limbo forever after the experience, but they eventually showed up. Never again.

    • Catoblepas@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      4
      arrow-down
      1
      ·
      21 hours ago

      Wow, that’s extremely disappointing. I had a really positive experience with them a few years ago when I wanted to exchange what I got (it was too quiet for me), and they just sent me a free pair after I talked to an actual person on their chat thing. It’s good to know that’s not how they are anymore if I ever need to replace them.

    • Dozzi92@lemmy.world
      link
      fedilink
      arrow-up
      26
      arrow-down
      1
      ·
      20 hours ago

      That would’ve been such an easy disputed charge; then get the plugs somewhere else. I wouldn’t waste a second on something like that: just tell my credit card company they didn’t uphold their end of the deal, and that’s that. I would lose hearing out of spite if this happened to me, because I’m an idiot.

  • anachrohack@lemmy.world
    link
    fedilink
    arrow-up
    18
    arrow-down
    3
    ·
    21 hours ago

    I use claude to ask it coding questions. I don’t use it to generate my code; I mostly use it to do a kind of automated code review to look for obvious pitfalls. It’s pretty neat for that

    I don’t use any other AI-powered products. I don’t let it generate emails, I don’t let it analyze data. If your site comes with a built in LLM powered feature, I assume

    1. It sucks
    2. You are a con artist

    AI is the new Crypto. If you are vaguely associated with it, I assume there’s something criminal going on

    • ameancow@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 minutes ago

      I use AI to script code.

      For my minecraft server.

      I rely on expert humans to do tech work for my team and their tools.

      I am not anti-AI per se; I just know what works best and what leads to the best results.

    • Lemminary@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      14 hours ago

      I mostly use it to do a kind of automated code review

      Same here, especially when I’m working with plain JS. Just yesterday I was doing some benchmarking, and when I asked it about something else, it fixed a variable reference in my code unprompted, including the small fix as a comment in its answer. I copy-pasted it and it worked perfectly. It’s great for small-scope stuff like that.

      But then again, I had to turn off Codeium that same day when writing documentation because it kept giving me useless and distracting, paragraph-long suggestions restating the obvious. I know it’s not meant for that, but jeez, it reminded me so much of Bing’s awfully distracting autocomplete.

      I’ve never felt a technology like this before: when it works, it feels like you’re gliding on ice, and when it doesn’t, it feels like ice skating on a dirt road.

  • IrateAnteater@sh.itjust.works
    link
    fedilink
    arrow-up
    11
    arrow-down
    3
    ·
    21 hours ago

    The only time I disagree with this is when the business is substituting “AI” in for “machine learning”. I’ve personally seen that work in applications where traditional methods don’t work very well (vision guided industrial robot movement in this case).

    • Hotzilla@sopuli.xyz
      link
      fedilink
      arrow-up
      3
      arrow-down
      4
      ·
      edit-2
      17 hours ago

      These new LLMs and vision models have their place in the software stack. They enable some solutions that were nearly impossible in the past (mandatory xkcd ref: https://xkcd.com/1425/ ; this is now a trivial task).
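
      For the xkcd task, an off-the-shelf pretrained vision model via Hugging Face really is a few lines now. Note the labels are ImageNet species names (“goldfinch”, “ostrich”, …), so a proper bird check would map labels to a bird class list:

          from transformers import pipeline

          # Pretrained ViT image classifier; returns (label, score) guesses.
          classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
          print(classifier("photo.jpg")[:3])  # top-3 predictions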

      ML works very well on large data sets and numbers, but it is poor at handling text data. LLMs, conversely, are shit with large data and numbers, but they are good at handling small text data. It is a tool, and properly used, a very powerful one. It is not a magic bullet.

      One easy example from a real-world requirement: you have five paragraphs of human-written text, and you need to automatically summarize them into a header. Five years ago, if a project owner had requested this feature, I would have said string.substring(100), live with it. Now it’s pretty much one line of code.
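
      Roughly this, as a sketch with the openai client (the model name is a placeholder, and the prompt is whatever the project owner actually wants):

          from openai import OpenAI

          client = OpenAI()
          body_text = "..."  # the five human-written paragraphs
          # The whole feature is essentially this one call.
          header = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user",
                         "content": "Write one short headline for this text:\n\n" + body_text}],
          ).choices[0].message.content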

      • TheTechnician27@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        15 hours ago

        Even though I understand your sentiment that different types of AI tools have their place, I’m going to try clarifying some points here. LLMs are machine learning models; the ‘P’ in ‘GPT’ – “pretrained” – refers to how it’s already done some learning. Transformer models (GPTs, BERTs, etc.) are a type of deep learning is a branch of machine learning is a field of artificial intelligence. (edit: so for a specific example of how this looks nested: AI > ML > DL > Transformer architecture > GPT > ChatGPT > ChatGPT 4.0.) The kind of “vision guided industrial robot movement” the original commenter mentions is a type of deep learning (so they’re correct it’s machine learning, but incorrect that it’s not AI). At this point, it’s downright plausible that the tool they’re describing uses a transformer model instead of traditional deep learning like a CNN or RNN.

        I don’t entirely understand your assertion that “LLMs are shit with large data and numbers”, because LLMs work with the largest data in human history. If you mean you can’t feed a large, structured dataset into ChatGPT and expect it to be able to categorize new information from that dataset, then sure, because: 1) it’s pretrained, not a blank slate that specializes on the new data you give it, and 2) it’s taking it in as plaintext rather than a structured format. If you took a transformer model and trained it on the “large data and numbers”, it would work better than traditional ML. Non-transformer machine learning models do work with text data; LSTMs (a type of RNN) do exactly this. The problem is that they’re just way too inefficient computationally to scale well to training on gargantuan datasets (and consequently don’t generate text well if you want to use it for generation and not just categorization). In general, transformer models do literally everything better than traditional machine learning models (unless you’re doing binary classification on data which is always cleanly bisected, in which case the perceptron reigns supreme /s). Generally, though, yes, if you’re using LLMs to do things like image recognition, taking in large datasets for classification, etc., what you probably have isn’t just an LLM; it’s a series of transformer models working in unison, one of which will be an LLM.


        Edit: When I mentioned LSTMs, I should clarify this isn’t just text data: RNNs (which LSTMs are a type of) are designed to work on pieces of data which don’t have a definite length, e.g. a text article, an audio clip, and so forth. The description of the transformer architecture in 2017 catalyzed generative AI so rapidly because it could train so efficiently on data not of a fixed size and then spit out data not of a fixed size. That is: like an RNN, the input data is not of a fixed size, and the transformed output data is not of a fixed size. Unlike an RNN, the data processing is vastly more efficient in a transformer because it can make great use of parallelization. RNNs were our main tool for taking in variable-length, unstructured data and categorizing it (or generating something new from it; these processes are more similar than you’d think), and since that describes most data, suddenly all data was trivially up for grabs.

    • TheTechnician27@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      8
      ·
      edit-2
      20 hours ago

      Huh? Deep learning is a subset of machine learning is a subset of AI. This is like saying a gardening center is substituting “flowers” in for “chrysanthemums”.

      • IrateAnteater@sh.itjust.works
        link
        fedilink
        arrow-up
        12
        ·
        20 hours ago

        I don’t control what the vendor marketing guys say.

        If you’re expecting “technically correct” from them, you’ll be doomed to disappointment.

        • TheTechnician27@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          10
          ·
          edit-2
          16 hours ago

          My point is that the scenario you just described is technically correct (edit: whereas you seem to be saying it isn’t; it’s also colloquially correct). Referring to “machine learning” as “AI” is correct in the same way referring to “a rectangle” as “a quadrilateral” is correct.


          EDIT: I think some people are interpreting my comment as “b-but it’s technically correct, the best kind of correct!” pedantry. My point is that the comment I’m responding to seems to think they got it technically incorrect, but they didn’t. Not only is it “technically correct”, but it’s completely, unambiguously correct in every way. They’re the ones who said “If you’re expecting “technically correct” from them, you’ll be doomed to disappointment.”, so I pointed out that I’m not doomed to disappointment because they literally are correct colloquially and correct technically. Please see my comment below where I talk about why what they said about distinguishing AI from machine learning makes literally zero sense.

          • subignition@fedia.io
            link
            fedilink
            arrow-up
            9
            arrow-down
            1
            ·
            18 hours ago

            Language is descriptive, not prescriptive. “AI” has come to be a specific colloquialism, and if you refuse to accept that, you’re going to cause yourself pain when communicating with people who aren’t equally pedantic as you.

            • TheTechnician27@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              6
              ·
              edit-2
              17 hours ago

              Okay, at this point, I’m convinced no one in here has even a bare minimum understanding of machine learning. This isn’t a pedantic prescriptivism thing:

              1. “Machine learning” is a major branch of AI. That’s just what it is. Literally every paper and every book ever published on the subject will tell you that. Go to the Wikipedia page right now: “Machine learning (ML) is a field of study in artificial intelligence”. The other type of AI of course means that the machine can’t learn and thus a human has to explicitly program everything; for example, video game AI usually doesn’t learn. Being uninformed is fine; being wrong is fine. There’s calling out pedantry (“reee you called this non-Hemiptera insect a bug”) and then there’s rendering your words immune to criticism under a flimsy excuse that language has changed to be exactly what you want it to be.

              2. Transformers, used in things like GPTs, are a type of machine learning. So even if you say that “AI is just generative AI like LLMs”, then, uh… Those are still machine learning. The ‘P’ in GPT literally stands for “pretrained”, indicating it’s already done the learning part of machine learning. OP’s statement literally self-contradicts.

              3. Meanwhile, deep learning (DNNs, CNNs, RNNs, transformers, etc.) is a branch of machine learning (likewise with every paper, every book, Wikipedia (“Deep learning is a subset of machine learning that focuses on […]”), etc.) wherein the model identifies its own features instead of the human needing to supply them. Notably, the kind of vision detection the original commenter is talking about is deep learning like a transformer model is. So “AI when they mean machine learning” by their own standard that we need to be specific should be “AI when they mean deep learning”.

              The reason “AI” is used all the time to refer to things like LLMs etc. is because generative AI is a type of AI. Just like “cars” are used all the time to refer to “sedans”. To be productive about this: for anyone who wants to delve (heh) further into it, Goodfellow et al. have a great 2016 textbook on deep learning*. In a bit of extremely unfortunate timing, transformer models were described in a 2017 paper, so they aren’t included (generative AI still is), but it gives you the framework you need to understand transformers (GPTs, BERTs). After Goodfellow et al., just reading Google’s original 2017 paper gives you sufficient context for transformer models.

              *Goodfellow et al.'s first five chapters cover traditional ML models so you’re not 100% lost, and Sci-Kit Learn in Python can help you use these traditional ML techniques to see what they’re like.
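
              For a quick taste of those traditional-ML chapters in scikit-learn (toy data; the perceptron nod is to the /s above):

                  from sklearn.datasets import make_classification
                  from sklearn.linear_model import Perceptron

                  # A small synthetic classification problem.
                  X, y = make_classification(n_samples=200, n_features=4,
                                             n_informative=4, n_redundant=0,
                                             random_state=0)
                  clf = Perceptron(random_state=0).fit(X, y)
                  print(clf.score(X, y))  # training accuracy; no deep learning required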


              Edit: TL;DR: You can’t just weasel your way into a position where “AI is all the bad stuff and machine learning is all the good stuff” under the guise of linguistic relativism.

              • petrol_sniff_king@lemmy.blahaj.zone
                link
                fedilink
                arrow-up
                4
                ·
                15 hours ago

                Edit: TL;DR: You can’t just weasel your way into a position where “AI is all the bad stuff and machine learning is all the good stuff” under the guise of linguistic relativism.

                You can, actually, because the inverse is exactly what marketers are vying for: AI, a term with immense baggage, is easier for layman to recognize, and implies a hell of a lot more than it actually does. It is intentionally leaning on the very cool futurism of AI to sell itself as the next evolutionary stage of human society—and so, has consumed all conversation about AI entirely. It is Hannibal Lecter wearing the skin of decades of sci-fi movies.

                “Machine learning” is not a term used by sycophants (as often), and so it implies different things about the person saying it. For one, they may have actually seen a college with their eyes.

                So: you seem to be implying there isn’t a difference, but there is. People who suck say one; people who don’t say the other. No amount of academic rigor can sidestep this problem.

                • TheTechnician27@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  arrow-down
                  3
                  ·
                  edit-2
                  11 hours ago

                  Quite the opposite: I recognize there’s a difference, and it horrifies me that corporations spin AI as something you – “you” meaning the general public who don’t understand how to use it – should put your trust in. It similarly horrifies me that in an attempt to push back on this, people will jump straight to vibes-based, unresearched, and fundamentally nonsensical talking points. I want the general public to be informed, because like the old joke comparing tech enthusiasts to software engineers, learning these things 1) equips you with the tools to know and explain why this is bad, and 2) reveals that it’s worse than you think it is. I would actually prefer specificity when we’re talking about AI models; that’s why instead of “AI slop”, I use “LLM slop” for text and, well, unfortunately, literally nobody in casual conversation knows what other foundation models or their acronyms are, so sometimes I just have to call it “AI slop” (e.g. for imagegen). I would love it if more people knew what a transformer model is so we could talk about transformer models instead of the blanket “AI”.

                  By trying to incorrectly differentiate “AI” from “machine learning”, we’re giving dishonest corporations more power by implying that only now do we truly have “artificial intelligence” and that everything that came before is merely “machine learning”. By muddling what’s actually a very straightforward hierarchy of terms (opposed to a murky, nonsensical dichotomy of “AI is anything that I don’t like, and ML is anything I do”), we’re misinforming the public and making the problem worse. By showing that “AI” is just a very general field that GPTs live inside, we reduce the power of “AI” as a marketing buzzword word.

                • TheTechnician27@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  12 hours ago

                  “Expert in machine learning”, “has read the literal first sentence of the Wikipedia entry for ‘machine learning’” – same thing. Tomayto, tomahto.

                  Everything else I’m talking about in detail is just gravy; literally just read the first sentence of the Wikipedia article to know that machine learning is a field of AI. That’s the part that got me to say “no one in this thread knows what they’re talking about”: it’s the literal first sentence in the most prominent reference work in the world that everyone reading this can access in two seconds.

                  You can say most people don’t know the atomic weight of oxygen is 16-ish. That’s fine. I didn’t either; I looked it up for this example. What you can’t do is say “the atomic weight of oxygen is 42”, then when someone contradicts you that it’s 16, refuse to concede that you’re wrong and then – when they clarify why the atomic weight is 16 – stand there arms crossed and with a smarmy grin say: “wow, expert blindness much? geez guys check out this bozo”

                  We get it; you read xkcd. The point of this story is that you need to know fuck-all about atomic physics to just go on Wikipedia before you confidently claim the atomic weight is 42. Or, when someone calls you out on it, go on Wikipedia to verify that it’s 16. And if you want to dig in your heels and keep saying it’s 42, then you get the technical explanation. Then you get the talk about why it has that weight, because you decided to confidently challenge it instead of just acknowledging this isn’t your area of expertise.

  • Lukas Murch
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    19
    ·
    13 hours ago

    Sounds like Brian can’t figure out AI.

  • felykiosa@sh.itjust.works
    link
    fedilink
    arrow-up
    3
    arrow-down
    9
    ·
    17 hours ago

    I have to disagree with that one, though not completely. It really depends on what type of company I’m interacting with: is it an independent small company or a big corp? Also on what type of AI (generating pictures, generating summaries, etc.), and on whether the result actually fits the application. E.g., if a small business generates a logo or a picture: is the style of the picture right or is it the same as everyone else’s, did they check whether the image was correct, etc.? But big corps, yeah, they can go fuck themselves; they have the budget to pay artists.

  • gmtom@lemmy.world
    link
    fedilink
    arrow-up
    10
    arrow-down
    18
    ·
    edit-2
    18 hours ago

    Cool. My work at my company, using AI on medical scans, has detected thousands upon thousands of tumors and respiratory diseases, long before even the most well-trained doctor could have spotted them, and as a result has saved many of those people’s lives. But it’s good to know we’re all just lazy pieces of shit because we use AI.

    • JandroDelSol@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      13 minutes ago

      When people talk about “AI” nowadays, they’re usually talking about LLMs and other generative AI, especially if it’s used to replace workers or human effort. Analytical AI is perfectly valid and is a wonderful tool!

    • jjjalljs@ttrpg.network
      link
      fedilink
      arrow-up
      14
      arrow-down
      1
      ·
      17 hours ago

      Assuming what you’re describing works (and I have no particular reason to doubt it, beyond the generally poor reputation of AI), that’s a different beast than “lol I fired all the copywriters, artists, and support staff so I, the owner, could keep more profits for myself!”. Or “I didn’t pay attention in English 101 and don’t know how to write, so I’ll have expensive auto-suggest do it for me.”

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        1 hour ago

        Yeah, that’s my point. AI has a lot of problems that need to be addressed, but people are getting so mad about AI that the conversation around it is getting more and more extreme, to the point where people are calling all AI bad.

      • ArchRecord@lemm.ee
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        3
        ·
        17 hours ago

        that’s a different beast

        I think what was being implied though was that the original poster was saying that any use or talk of AI by a company immediately invalidates it, regardless of there being any specific traits like firing workers present. (e.g. “Using AI” was the only prerequisite they mentioned)

        So it seems like, based on the original wording, if they saw a hospital going “we use top of the line AI to identify tumors and respiratory diseases early” they would just disregard that hospital entirely, without actually caring how the AI works, is implemented, or affects the employment of the other people working there, even though it’s wholly beneficial.

        At least, that’s just my reading of it though.

      • gmtom@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        4 hours ago

        No they are the same thing.

        The core algorithm we built upon is practically the same one used by AI image generators, the main difference is that we have deeper convolutional layers and more of them and we don’t do any GAN stuff that the newer image generators use.
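
        As a hedged sketch of that kind of stacked-convolution classifier in PyTorch (layer sizes are illustrative, not our actual model):

            import torch
            import torch.nn as nn

            # Same convolutional building blocks as image generators,
            # just stacked deeper and ending in a diagnosis head.
            model = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
                nn.LazyLinear(2),  # e.g. finding / no finding
            )
            logits = model(torch.randn(1, 1, 256, 256))  # one grayscale scan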

      • ArchRecord@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        14 hours ago

        The problem is that anything even remotely related to AI is just being called “AI,” whether it’s by the average person or marketing people.

        So when you go to a company’s website and you see “powered by AI,” they could be talking about LLMs, or an ML model to detect cancer, and the average person won’t know the difference between the technologies.

        So if someone universally rejects anything that says it “uses AI” just because what’s usually called “AI” is just badly implemented LLMs that make the experience worse, they’re going to inevitably catch nearly every ML model in the crossfire too, since most companies are calling their ML use cases “AI powered,” and that means rejecting companies that develop models like those that detect tumors, predict protein folding patterns, identify anomalies in other health characteristics, optimize traffic routes in cities, etc, even if those use cases aren’t even related to LLMs and all the flaws they often bring.

  • Kowowow@lemmy.ca
    link
    fedilink
    arrow-up
    1
    arrow-down
    5
    ·
    20 hours ago

    If I ran a business, I think the only thing I’d have “AI” do would be basic social media stuff, because I don’t ever want to get into that kind of stuff myself, though I think I could make it in Windows Basic:

    Reply to questions with the company phone number/email, post company pictures from a certain folder every 5 hours or so, like a post from a random local person, like a random post from a random person anywhere (extra funny if it’s porn or a horrible opinion)