• OsrsNeedsF2P@lemmy.ml · 18 hours ago

    Oof, let’s see, what am I an expert in? Probably system design - I work at (insert big tech) and run a system design club there every Friday. I use ChatGPT to bounce ideas around and find holes in my design planning before each session.

    Does it make mistakes? Not really. It has a hard time getting creative with nuanced examples (e.g. if you ask it to “give practical examples where the time/accuracy tradeoff in Flink is important,” it can’t come up with more than one or two truly distinct examples), but it’s almost never wrong.
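    For anyone wondering what that tradeoff looks like in practice: in Flink’s event-time processing, the further you let the watermark trail behind the newest event (and the more allowed lateness you grant a window), the more out-of-order events you capture and the more accurate your aggregates, at the cost of latency. A minimal sketch in Java - the SensorReading type, the sample data, and the 5s/30s bounds are made-up placeholders, not anything from a real system:

        import java.time.Duration;
        import java.util.List;

        import org.apache.flink.api.common.eventtime.WatermarkStrategy;
        import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
        import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
        import org.apache.flink.streaming.api.windowing.time.Time;

        public class LatencyAccuracySketch {

            // Hypothetical event type; the name and fields are illustrative.
            public static class SensorReading {
                public String sensorId;
                public long timestampMillis;
                public double value;

                public SensorReading() {} // Flink POJO rules want a no-arg ctor

                public SensorReading(String id, long ts, double v) {
                    this.sensorId = id;
                    this.timestampMillis = ts;
                    this.value = v;
                }
            }

            public static void main(String[] args) throws Exception {
                StreamExecutionEnvironment env =
                        StreamExecutionEnvironment.getExecutionEnvironment();

                env.fromCollection(List.of(
                        new SensorReading("a", 1_000L, 1.0),
                        new SensorReading("a", 31_000L, 2.0)))
                    // Knob #1: how far the watermark trails the newest event.
                    // A larger bound means more latency, but fewer out-of-order
                    // events get misclassified as late, so windows are more accurate.
                    .assignTimestampsAndWatermarks(
                        WatermarkStrategy
                            .<SensorReading>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                            .withTimestampAssigner((r, ts) -> r.timestampMillis))
                    .keyBy(r -> r.sensorId)
                    .window(TumblingEventTimeWindows.of(Time.seconds(30)))
                    // Knob #2: keep fired windows open for stragglers; each late
                    // arrival re-emits a corrected (later, more accurate) sum.
                    .allowedLateness(Time.seconds(5))
                    .sum("value")
                    .print();

                env.execute("latency-vs-accuracy-sketch");
            }
        }

    Shrink both knobs toward zero and you get answers sooner but silently drop stragglers; grow them and results get more complete but slower - that’s the whole tradeoff.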

    The only times it’s blatantly wrong are when it hallucinates due to missing context (or an oversaturated context). But you can kind of tell when something doesn’t make sense and prod it with follow-ups.

    Tl;dr: funny meme, would be funnier if true

    • RagingRobot@lemmy.world · 18 hours ago

      That’s not been my experience with it. I’m a software engineer, and when I ask it things it usually gives plausible answers, but there is always something wrong. For example, it will recommend outdated libraries or patterns that look like they would work, but when you try them out you find they’re set up differently now or never existed at all.

      I have been using Windsurf to code recently and I’m liking it, but it makes some weird choices sometimes, and it’s way too eager to code, so it spits out a ton of code you have to review. Out of the box, it would be easy to get it to generate a bunch of spaghetti code that mostly works but isn’t maintainable by a person.

    • spooky2092@lemmy.blahaj.zone · 17 hours ago

      I ask AI shitbots technical questions and get wrong answers daily. I said this in another comment, but I regularly have to ask whether what it gave me was actually real.

      Like, asking Copilot about PowerShell commands and modules that are by no means obscure will cause it to hallucinate flags that don’t exist. I give it plenty of context on what I’m using and what I’m trying to do, and it still makes up shit based on what it thinks I want to hear.