…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before, but gave up because it didn’t work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple: some 3D math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or bring back old ones.
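To give a flavor of what “subtly broken” means here (this is a toy I made up, not the actual task): a single flipped sign in a rotation produces code that looks right, runs fine, and is quietly wrong.

```python
import math

def rotate_z(v, angle_rad):
    """Rotate a 3D vector about the z axis (correct version)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def rotate_z_buggy(v, angle_rad):
    """Nearly identical, but the signs on s are flipped, so it
    silently applies the inverse rotation instead."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x, y, z = v
    return (c * x + s * y, -s * x + c * y, z)

# Rotating +x by 90 degrees about z should give +y:
good = rotate_z((1.0, 0.0, 0.0), math.pi / 2)       # ~(0, 1, 0)
bad = rotate_z_buggy((1.0, 0.0, 0.0), math.pi / 2)  # ~(0, -1, 0)
```

Every individual line is plausible; the bug only shows up when you check the output against a case you already know the answer to.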

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit, because the fix didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • rosco385@lemmy.wtf · +16 · 16 hours ago

    The solutions it generated were almost write every time

    Did you vibe code this post? 😂

  • CCMan1701A@startrek.website · +2 · 12 hours ago

    I use AI to research what existing software or projects exist to help me build up the system that I then suffer through making.

  • Flames5123@sh.itjust.works · +13 · 1 day ago

    I have a full pro model for Kiro at work. It does actually work, but we have custom MCP servers for all the internal tools, context on how to use these tools, style guidelines, etc. and then on top of that we have a lot of AI context files in the code base to help the AI understand the code base and make the correct changes.

    I’ve been using it on a side project, and it works if you know how to constrain it. It still gets things wrong a lot. But the big thing is spec-driven development: you give it a write-up, and it produces a requirements doc and a design doc with a lot of correctness properties in them to follow when generating and working through the tasks.

    I don’t believe people can vibe code unless they can actually code. It’s a whole different way of coding. I still manually edit what it does a lot.

    A lot of people explain it like it’s a brand new junior developer. You need to give it as much context as possible, tell it exactly what you want, tell it what you don’t want, tell it why, etc., and it still may not listen exactly.

  • Thirsty Hyena@lemmy.world · +7 · 23 hours ago

    I recently started using Pro to debug a problem I couldn’t solve. The one thing I need from it is extra insight, a second opinion (because I’m the only developer). Letting it read the whole folder helps: it identified a problem I hadn’t considered, because it was in a file outside of where I was looking.

  • zbyte64@awful.systems · +14 · edited · 1 day ago

    In my experience there are three ways to be successful with this tool:

    • write something that already exists so it doesn’t need to think
    • do all the thinking for it upfront (hello waterfall development)
    • work in very small iterations that don’t require any leaps of logic. Don’t reprompt when it gets something wrong; instead reshape the code so it can only get it right

    The issue with debugging is that it doesn’t actually think. LLMs pattern match to a chain of thought based on signals, not reasoning. For it to debug, you need good signals in your code that explicitly show what the code is doing, and LLMs do not write code with that level of observability by default.
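    As a sketch of what I mean by signals (a toy example of my own, not from any particular project): make the code assert its own invariants, so a failure points at itself.

```python
def check_rotation_matrix(m, tol=1e-6):
    """Raise a specific, self-describing error when a 3x3 rotation
    matrix drifts from orthonormal, instead of letting bad math
    propagate silently into downstream transforms."""
    for i in range(3):
        norm = sum(x * x for x in m[i]) ** 0.5
        if abs(norm - 1.0) > tol:
            raise ValueError(f"row {i} has norm {norm:.6f}, expected 1.0")
    for i in range(3):
        for j in range(i + 1, 3):
            dot = sum(a * b for a, b in zip(m[i], m[j]))
            if abs(dot) > tol:
                raise ValueError(f"rows {i} and {j} are not orthogonal (dot={dot:.6f})")

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
check_rotation_matrix(identity)  # passes silently

skewed = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# check_rotation_matrix(skewed) raises, naming the exact row and by how much.
# That error message is the kind of signal an LLM (or a human) can act on.
```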

    Edit: one of my workflows that I had success with is as follows:

    • write a gherkin feature file describing the desired functionality; maybe have the LLM create multiple scenarios after I’ve defined one to copy from
    • tell the LLM to write tests using those feature files; it does an okay job but needs help making the tests run in parallel
    • if the feature is simple, ask the LLM to make a plan and review it
    • if the feature is complex, stub out the implementation in code and add TODOs, then direct the LLM to plan. Giving explicit goals in the code itself reduces token consumption and yields better plans
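    For instance, the second step (“tell the LLM to write tests using those feature files”) tends to come back as something roughly like this for a scenario such as “rotation preserves vector length”. The names here are illustrative, not any particular tool’s output:

```python
import math

def rotate_z(v, angle_rad):
    # stand-in for the implementation under test
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1], v[2])

def test_rotation_preserves_length():
    """Scenario: rotation preserves vector length
         Given a unit vector along x
         When it is rotated 90 degrees about z
         Then its length is still 1"""
    rotated = rotate_z((1.0, 0.0, 0.0), math.pi / 2)
    length = math.sqrt(sum(c * c for c in rotated))
    assert abs(length - 1.0) < 1e-9

test_rotation_preserves_length()
```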
    • spartanatreyu@programming.dev · +4/−1 · 22 hours ago

      write something that already exists so it doesn’t need to think

      If something already exists, it shouldn’t need to be rewritten.

      Doing otherwise is a sign that something has gone wrong.

      That was the case before LLMs and it is still the case today.

  • Dr. Moose@lemmy.world · +7/−2 · 22 hours ago

    It’s a tool that you need to learn. Try some of the CLAUDE.md files people share online for your programming area as a starter. You still need to review what it does, but just asking it to create tests as it creates code does a lot to improve the output.

  • tristynalxander@mander.xyz · +4 · edited · 8 hours ago

    I’m also working on some 3D maths.

    I’ve used the free versions a bit, but not really to the extent that I’d call it vibe coding. The chatbots often know where to find libraries or pre-existing functions that I don’t. They’re also okay at algorithms for well-defined problems, but they often warn me not to do something I absolutely need to do, or vice versa.

    Debugging is very hit and miss. They’ll reliably point out obvious stuff (typos) and can usually handle iteration, but they tend not to pick up on anything else. Once in a rare while one will impress me by suggesting I look at a particular thing (it manages this better in new chats), but most complex issues defeat it. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you’re doing, and test that individual steps do what they need to do. The bots can’t really plan or break a problem into sub-problems, and they really suck at thinking about 3D stuff.

  • ozymandias@sh.itjust.works · +8 · 1 day ago

    you need to fully be able to program to work with these things, in my experience.
    you have to explain what you want very specifically, in precise programming terms.

    i tried a preview of chatgpt codex and it’s working better than my free version of claude. codex creates a whole virtual programming environment: you connect it to a github repository, it spins up an instance with the tools you include, and it actually tests the code and fixes bugs before sending it back to you.
    but you still need to be able to find the bugs and fix them yourself.

    oh and i think they work best with python, but i’ve also used ruby and dart and it’s decent.
    it’s kinda like a power tool, it’ll definitely help you a lot to fix a car but if you can’t do it with wrenches it won’t help very much.

    • quixote84@midwest.social · +2/−2 · 1 day ago

      I’ve never been able to program in anything more complex than BASIC and command line batch files, but I’m able to get useful output from Claude.

      I’m an IT Infrastructure Manager by trade, and I got there through 20 years of supporting everything from desktop to datacenter including weird use cases like controlling systems in a research lab. On top of that, I’ve gotten under the hood of software in the form of running game servers in my spare time.

      What you need to get good programs out of AI boils down to 3 things:

      1. The ability to teach an entity whose mistakes resemble those of a gifted child where it went wrong a step or ten back from where it’s currently looking.
      2. The ability to provide useful beta test / debug output regarding programs which aren’t behaving as expected. This does include looking at an error log and having some idea what that error means.
      3. Comfort using (either executing or compiling depending on the language) source code associated with the language you’re doing things in. This might be as simple as “How do I run a Powershell script or verify that I meet the version and module requirements for the script in question?”, or it might be as complicated as building an executable in Visual Studio. Either way whatever the pipeline is from source to execution, it must be a pipeline you’re comfortable working with. If you’re doing things anywhere outside the IT administration space, it’s reasonable to be looking at Python as the best first path rather than Powershell. Personally, I must go where supported first party modules exist for the types of work I’m developing around. In IT Administration, that’s Powershell.
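      For the Python path, that “verify the requirements” step can be as small as a guard at the top of the script. The module list below is a placeholder for whatever your script actually needs:

```python
import importlib.util
import sys

REQUIRED_PYTHON = (3, 9)
REQUIRED_MODULES = ["json", "csv"]  # placeholder: your script's real imports

def check_environment():
    """Fail fast with a readable message instead of a mid-run traceback."""
    if sys.version_info < REQUIRED_PYTHON:
        raise SystemExit(
            f"Python {REQUIRED_PYTHON[0]}.{REQUIRED_PYTHON[1]}+ required, "
            f"found {sys.version.split()[0]}"
        )
    # find_spec reports whether a module is importable without importing it
    missing = [m for m in REQUIRED_MODULES if importlib.util.find_spec(m) is None]
    if missing:
        raise SystemExit("missing modules: " + ", ".join(missing))

check_environment()
```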

      I’ve made tools which automate and improve my entire department’s approach to user data, device data, application inventory, patch management, vulnerability management, and these are changes I started making with a free product three months ago, and two months back I switched to the paid version.

      Programming is sort of like conversation in an alien language. For that reason, if you can give precise instructions sometimes you really can pull something new into existence using LLM coding. It’s the same reason that you could say words which have never been said in that specific order before, and have an LLM translate them to Portuguese.

      I always used to talk about how everything in a computer was math, and that what interested me more than quantum computing would be a machine which starts performing the same sorts of operations on words or concepts that computers of that day ('90s and '00s when “quantum” was being slapped on everything to mean “fast” or “powerful”) were doing on math. I said that the best indicator when linguistic computing arrives would be that without ever learning to program, I’d start being able to program. I was looking at “Dragon Naturally Speaking” when I had this idea. It was one of the earliest effective speech to text programs. I stopped learning to program immediately and focused exclusively on learning operations from that point forward.

      I’ve been testing the code generation abilities of LLMs for about three years. Within the last six months I feel like I’m starting to see evidence that the associations being made internally by LLMs are complex enough to begin considering them the fulfillment of my childhood dream of a “word computer”.

      All the shitty stuff about environment and theft of art is all there too, which sucks, but more because our economic model sucks than because LLMs either do or do not suck. If we had a framework for meeting everybody’s basic needs, this software in its current state has the potential to turn everyone with a passion for grammatical and technical precision into a concept based developer practically overnight.

      • eneff@discuss.tchncs.de · +2 · 14 hours ago

        I have no qualifications to judge the quality of the generated results, yet the generated results are always of great quality.

        Do you seriously not realize how out of touch this sounds?

        • quixote84@midwest.social · +1 · 3 hours ago

          Of course it sounds out of touch. I didn’t say it, or anything like it. Just like the other commenter, you seem to have stopped after the first sentence.

          20 years of IT experience from a support perspective does qualify me to put anybody in the programming space on notice. The tools might not be as good as a talented and well trained dev, but they’re already better than a lazy dev. The output I get from Claude Code takes effort to get running. It just takes less of it than the output from my outsourced offshore MSP.

      • Feyd@programming.dev · +8 · 1 day ago

        I’ve never been able to program in anything more complex than BASIC and command line batch files, but I’m able to get useful output from Claude.

        Chatbots being deemed useful in tasks by people unqualified to make those judgments is a running problem.

  • silver@das-eck.haus · +6 · 1 day ago

    I think it’s pretty heavily dependent on what you’re trying to do. I’ve gotten a lot of push from higher ups at my company to use copilot wherever possible. So, I’ve spent a lot of time lately having copilot + opus write code for me. Most of what I’m doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it’s a pretty good experience.

    However, if I ask it to do something totally new, it does okay; more like what you’ve experienced. It takes a lot of hand-holding, but it usually gets the job done as long as you’re very descriptive in your prompt. Probably not faster than an experienced developer at the moment, though.

  • x00z@lemmy.world · +11 · 1 day ago

    The trick about vibe coding is that you confidently release the messed up code as something amazing by generating a professional looking readme to accompany it.

  • Blackmist@feddit.uk · +10 · 1 day ago

    I think it’s mostly going to be useful for boilerplate generation, and effectiveness is going to vary wildly based on what language you’re using. JS or Python? It’ll probably do OK. Plenty of open source for it to “learn” from. Delphi? Forget it.

    Brief experimentation showed it liked to bullshit if it was wrong, rather than fix things.

  • ZoteTheMighty@lemmy.zip · +2 · 1 day ago

    That’s been my experience. It’s always subtly wrong, its solutions are hard to maintain, and if you spend too much time with it, it starts forgetting what you said earlier. Managers don’t understand the distinction: they already can’t code well, and they only test it on small problems where it’s not context-limited, so they’re amazed.

  • Feyd@programming.dev · +175/−4 · 2 days ago

    producing subtly broken junk

    The difference between you and people that say it’s amazing is that you are capable of discerning this reality.

    • OwOarchist@pawb.social · +58/−2 · 2 days ago

      What I don’t get, though, is how the vibe code bros can’t discern this reality.

      How can they sit there and not see that their vibe-coded app just doesn’t do what they wanted it to do? Eventually, you’ve got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn’t work?

      • Lumelore (She/her)@lemmy.blahaj.zone · +38/−2 · 2 days ago

        Vibe code bros aren’t real programmers. They’re business people, not computer people. Even if they have a CS degree, they only got that because they think it’ll get them more money. They lack passion and they don’t care about understanding anything. They probably don’t even care about what they’re generating beyond its potential to be used in a grift.

        I graduated college not that long ago, and my CS classes had quite a few former business majors. They switched because they thought it would be more lucrative, but since they only care about money, they didn’t bother to actually learn the material, especially since they could just vibe code through everything.

        • b_n@sh.itjust.works · +13 · 2 days ago

          So much this.

          After working in tech companies for the last 10 years I’ve noticed the difference between people that “generate code” and those that engineer code.

          My worry for the industry is that vibe coding gives the code generators the ability to generate even more code. The engineers (even those who use vibe tools) aren’t producing as much code by volume as “the generators”.

          My hope is that this is one of those “short term gain, long term pain” things that might self correct in a couple of years 🤞.

          • sobchak@programming.dev · +1 · 16 hours ago

            It’s insane that companies are going back to metrics like LOC (or tokens generated), when the industry figured out decades ago that these are horrible, counterproductive metrics.

            “The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.” - No Silver Bullet (1986)

      • favoredponcho@lemmy.zip · +1/−1 · 20 hours ago

        You do try running the app, and then you see what is broken, and then you have Claude fix it. The process is still iterative, just like regular coding. I haven’t met a software engineer who wrote a perfect app on the first try; it’s always broken, even in subtle ways. Why does everyone think vibe coding needs to be perfect on the first shot?

      • Feyd@programming.dev · +37 · 2 days ago

        They’re the same people who copied code from Stack Overflow, whose every PR you had to tell them how to actually fix. The difference is that the C-suite types are backing them this time.

      • tleb@lemmy.ca · +11/−1 · 2 days ago

        Eventually, you’ve got to try actually running the app, right?

        At least at my company, no, they just start selling it.

      • Oisteink@lemmy.world · +9/−5 · 2 days ago

        I do apps that work, I do patches that are production quality. Half the cs world does… I do full stack AI debugging of ESP32 projects.

        It’s a powerful tool; you just need to learn its strong and weak points, just like any other tool you use.

        • Kissaki@programming.dev · +3/−2 · 2 days ago

          Half the cs world does…

          What’s the basis for this claim? I’m doubtful, but don’t have wide data for this.

          • Oisteink@lemmy.world · +6 · 2 days ago

            Rough estimate from my personal connections only. Some work places where AI is not possible, but all that have made an effort report good code. You need to work with what it is: a word generator that sometimes gives correct results. Make it research rather than trust its training. Never let it do things on its own; require a plan and reasoning. Make it evaluate its own work/plan.

            Most issues I have stem from models being too eager. Restrain them and remove the “I can do this next…” behaviour.

            Context is king, so use proper MCP and documentation that is agent-facing. I use Serena so I can get LSP for YAML and markup, and I keep these docs like that.

            • Oisteink@lemmy.world · +1 · 1 day ago

                No sry. I only use idf, and use their create vscode files for lsp to work

                And tmux + skills for idf.py work including debug. Also repl on console/uart - agents love cli - including this.

                Imo mcp > pure skills for tmux

    • JustEnoughDucks@feddit.nl · +2/−1 · 2 days ago

      I wonder if it was even able to compile. I am a shitty hobby coder who just does it to make my embedded hardware projects function.

      I have yet to get compilable code out of any of the AI bots I have tried: Gemini, Mistral, and ChatGPT. I am not making an account lol.

      I have gotten some compilable python and VBA code for data analysis stuff at work, so I wonder if it is because embedded stuff uses specific SDKs that it can’t handle.

      Either way, I have given up on it for anything besides bouncing ideas off of, or debugging where electromagnetics issues could lie (though it has been completely wrong about that too; even when it uses the wrong concepts, it reminds me of concepts I might have overlooked).

  • Katherine 🪴@piefed.social · +10 · 2 days ago

    Don’t just use it as a drop-in replacement for a programmer; use it to automate menial tasks while employing trust but verify with every output it produces.

    A well-written CLAUDE.md and prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification before doing anything will keep everything in your control, while also helping with menial maintenance tasks like repetitive sections or user tests.

    • Feyd@programming.dev · +7 · 2 days ago

      verify with every output it produces.

      I agree that you can get quality output using these tools, but if you actually take the time to validate and fix everything they’ve output then you spend more time than if you’d just written it, rob yourself of experience, and melt glaciers for no reason in the process.

      prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification

      Anything in the prompt is a suggestion, not a restriction. You are correct you should restrict those actions, but it must be done outside of the chatbot layer. This is part of the problem with this stuff. People using it don’t understand what it is or how it works at all and are being ridiculously irresponsible.

      repetitive sections

      Repetitive sections that are logic can and should be factored down for maintainability. Those that can’t be can still be generated deterministically: a list of words can be expanded into whatever repetitive boilerplate you need with sed, awk, a python script, etc., and you’ll know nothing was hallucinated, because the process was deterministic in the first place.
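      For example (a deliberately trivial sketch), a word list expanded into accessor boilerplate with a few lines of Python:

```python
# Expand a list of field names into repetitive accessor methods.
# Deterministic: the same input always yields the same output, and
# nothing can be hallucinated along the way.
fields = ["position", "velocity", "acceleration"]

template = (
    "def get_{name}(self):\n"
    "    return self._{name}\n"
)

boilerplate = "\n".join(template.format(name=f) for f in fields)
print(boilerplate)
```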

      user tests.

      Tests are just as important as the rest of the code and should be given the same amount of attention instead of being treated as fine as long as you check the box.

      • Katherine 🪴@piefed.social · +1 · 1 day ago

        I agree it’s not perfect; I still only use it very sparingly. I was just suggesting it as an alternative to trusting everything it does out of the box.