I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.

It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

  • lohky@lemmy.world · 36 points · 7 days ago

    I hate that LLMs have fucked my ability to find decent documentation. The Internet is done for. I’m learning to garden and do basic electronics from text books now.

    • hardcoreufo@lemmy.world · 7 points · 6 days ago

      I don’t know anything about gardening, but for electronics I can recommend Practical Electronics for Inventors and Atari’s “The Book.” It’s focused on arcade cabinet repair but definitely has useful info for basic circuit troubleshooting that is applicable today.

      • lohky@lemmy.world · 4 points · 6 days ago

        I’ve been reading Practical Electronics for Inventors and watching the MIT courses on YouTube.

        Also picked up an Arduino kit and started tinkering, but I’m more interested in circuitry and not coding. My 6-year-old wants to build his own Moog synth because he’s obsessed with Daft Punk and I gotta support that.

    • NickwithaC@lemmy.world · 8 points · 6 days ago

      Hopefully not textbooks published in the last 2 years, because those risk being written by AI too.

      We’ve reached the carbon-dating limit of human knowledge, since nothing can now be verified as written by a human unless you personally watched them do it.

  • SuspciousCarrot78@lemmy.world · 26 points (3 down) · edited · 7 days ago

    It’s not about AI; it’s about how people are USING AI.

    Take for example this recent video from Language Jones, showing how to use AI to leverage your native intelligence for language learning (yes, it’s from a PhD in linguistics, and yes, he cites research - “always bring receipts” is logic 101). He shows how AI works best as a Socratic tutor, forcing you to generate answers rather than replacing thinking.

    https://www.youtube.com/watch?v=xQXiSGDXknA

    When used properly, AI is a force magnifier par excellence. When used in the way you’re likely encountering (young cohort? poor attention span? no training in formal reasoning, logic?) then yeah… “shit’s fucked” (in the Australian vernacular).

    I used to teach biomed, just before AI took over (so, circa 2013-2019). Attention spans were already alarmingly low and we’d have to instigate movement breaks, intermissions, breakouts etc. I had to fucking tap dance out there - anything to keep “engagement” high and avoid the dreaded attrition KPIs.

    The days of students being able to concentrate for 60+ minutes in a row are likely gone. Hell, there’s an oft-repeated meme stat that average attention span on digital devices has dropped from two and a half minutes in 2004 to 47 seconds today. Even if you consider its provenance dubious, it does point to “people have trouble paying attention”.

    But…that’s not AI’s fault. The “shit was already fucked”.

    I think there’s something (still) to be said about Classical Education Method. We need things like that. We need to teach our young ones about things like “intuition pumps” and “street epistemology”, reasoning etc. And we can use ShitGPT to do it.

    Take a simple example: a student uses ChatGPT to write an essay on climate policy. The AI generates a claim. Now ask: “What would prove this wrong?” If they can’t answer - if they can’t articulate what evidence or logic would falsify it - they don’t understand it.

    They’ve outsourced the reasoning. That’s the difference.

    It’s not easy out there; it never was. But there’s a confluence of factors (popular culture, digital devices, changing demographics, family dynamics, “education” being streamlined as vocational pre-training etc etc ad infinitum) that certainly seem to be actively hostile towards developing thinkers.

    Here endeth the pro-clanker sermon.

    Ramen; may we be blessed by his noodly appendage.

    PS: I’m actually pretty hostile to AI myself and have been working on an open source engineering approach to mitigate some of these issues. Happy to share it if curious (not selling anything, Open source: just something I’m trying to use to solve this sort of issue for myself)

    • wpb@lemmy.world · 4 points · 6 days ago

      I dislike guns. When used properly, they’re really fun; they’re used to shoot spinning discs out of the sky. But that’s not how they’re used. And regardless of how the inventor of guns intended for them to be used, and regardless of how much better off we’d all be if everyone just used them to shoot spinning discs out of the sky, people by and large use them for violence. If they didn’t have guns, they’d be much less able to easily kill other people. So, I dislike guns.

      I dislike AI.

      • SuspciousCarrot78@lemmy.world · 1 point · 6 days ago

        That analogy only works if AI ends up being mostly used for harm. Guns were designed to apply lethal force, so misuse is built into the tool.

        AI is closer to something like a spreadsheet or search engine - a general tool that can be used well or badly depending on the user.

        If the argument is really about risk tolerance that’s fair, but it’s a very different claim than saying the tool itself is inherently comparable to a weapon.

        • wpb@lemmy.world · 1 point · 5 days ago

          My main point there is that when evaluating the impact of some tool, I look at how it is used rather than how it could be used. Arguments like ‘if people were to use it like this or that…’ are not so interesting to me. What I care about is what the actual impact of a thing is, and for that, the only thing that matters is how people actually use it.

          Now, a separate thing is my assessment of how people actually use generative AI, and whether I consider the things they do with it a boon for society. I see:

          • students and juniors, but also experienced workers, deskilling at an alarming rate
          • CEOs using it as a pretext for massive layoffs
          • a dead internet which has become a minefield of disinformation (yes it already was, but now even moreso)
          • a wash of uninspired art and blogs
          • the software crisis deepening. 80% of software goes unused. Huge waste of potential and resources. This worsens now that we can crank out buggy half formed ideas that no one asked for at a much higher rate, except now we also burn the equivalent of a rainforest to do it

          I don’t like these actual things that people are actually using gen AI for. Maybe you see LLMs having different effects and have a different, more positive, assessment. But you cannot separate the assessment of a tool from its users and how they use it, because they’re exactly the ones that’ll be using it, and they’ll use it the way they use it.

    • BranBucket@lemmy.world · 4 points · edited · 6 days ago

      It’s not that I think there aren’t legitimate uses for AI, or that it couldn’t be used as a learning tool.

      It’s that I doubt it’s better than current learning tools largely because the nature of the medium seems to turn off the kind of critical thinking you’re describing. The medium and language of a message can have a profound effect on how we understand and process information, often without us even realizing it, and AI seems to be able to make those changes far too easily.

      • SuspciousCarrot78@lemmy.world · 1 point · 6 days ago

        Perhaps only because ubiquity and speed favour sloppiness. As a thought experiment, imagine if you could only use AI once a day, for one question. Asking questions would suddenly become expensive.

        They would require careful thinking and pre-planning, followed by careful rumination on the answer and possible follow-ups.

        That’s obviously an extreme example, but it’s not that dissimilar to how people use tools like LexisNexis or IBISWorld - expensive research tools where the cost naturally forces you to think about the question before asking it.

        In that sense the issue may not be the medium itself so much as the cost structure of the interaction.

        When answers are instant and effectively unlimited, people tend to outsource thinking. When access is constrained, the incentive flips and the thinking moves back to the question.

        Which is to say: the tool probably amplifies existing habits rather than creating them. People who already interrogate sources will interrogate AI outputs. People who don’t, won’t.

        • BranBucket@lemmy.world · 2 points · 6 days ago

          I would ask it a careful question, and I would get a well worded, persuasive, but ultimately careless reply that’s just repetition of information and devoid of any new reasoning or insight.

          I would carefully ruminate on this reply, and find that at best, it’s factually correct because it’s an echo of the training data fed into the model, and although it sounds highly persuasive, it likely will need additional work to be adapted into the specific context and details of my situation.

          But, that’s not my main complaint. My complaint is that the medium used seems to prevent people from doing that analysis. I think this is very much in line with what Neil Postman wrote about in Amusing Ourselves to Death and Technopoly. These tools seem to use us, sneakily adjusting our perceptions of what the information means, rather than us using the tools.

          Is it possible to be careful and use it the way you describe in your thought experiment? Yes. Is it likely that people will be? No, and we seem to be seeing example after example of that every day.

          • SuspciousCarrot78@lemmy.world · 1 point · 6 days ago

            OK but is that an AI problem or a people problem?

            I think the Postman point is a fair one. The way information is presented absolutely affects how people reason with it. A fluent conversational answer can feel authoritative in a way that a messy set of search results doesn’t.

            But that problem isn’t unique to LLMs. Every medium that compresses information into something smooth and persuasive has created the same concern.

            Books did it, newspapers did it, television did it, and search engines arguably did it as well.

            The real question is whether the medium determines behaviour or just amplifies existing habits.

            People who already interrogate sources tend to interrogate AI outputs as well. People who don’t… won’t.

            I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

            We’ve become (for lack of better words) mentally flabby - me included.

            • BranBucket@lemmy.world · 1 point · 6 days ago

              If I’m arguing in good faith, it’s both. We have a tool that uses us, a medium that shoves massive amounts of information at us but hinders gaining knowledge (which I’m going to define as the useful retention and application of that information, and not just for winning trivia night), and as a species we keep letting ourselves be suckered by it.

              In the same vein, Postman also argued that this sort of change is often both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So, as you put it, writing did it, books, television, search engines, etc. And so much money has been spent on making this a thing that we’re going to have to contend with it until it undeniably starts costing more than it’s worth; and if that cost is cultural or societal instead of financial, it might never go away.

              I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

              I don’t pretend to speak for the man, but I think Postman would agree with you, and he thought it started in the 1860’s with the telegraph.

    • deadymouse@lemmy.world · 2 points · 6 days ago

      It’s not about AI; it’s about how people are USING AI.

      Those who funded the Austrian artist fully agree.

        • deadymouse@lemmy.world · 1 point · 6 days ago

          Well, it’s just a pattern: people explain things in whatever words are most understandable to themselves and others, without going into detail, because it’s much easier that way. It’s a bit like saying “I hear the call of the water spirits.”

            • deadymouse@lemmy.world · 1 point · 4 days ago

              I’m not good at explaining, but I’ll try anyway: people reach for shorthand like “these people are Nazis! They have no pity, they consider people garbage, this is fascism!” Instead of spelling out “rich people don’t think of us as human,” they compare it to fascism because the comparison feels understandable and fitting (and sometimes it genuinely is fitting). Thanks to that shorthand, people only need to read a few words to understand how terrible these billionaires are.

    • BigDiction@lemmy.world · 1 point · 6 days ago

      Really appreciate you taking the time to write this out. People forgetting how to learn is my largest concern with AI, in addition to a dead internet theory scenario where almost nothing new is being created by people.

      What you articulated about the first concern really did leave me with more hope for the future than I had previously. One of the best comments I’ve read on this platform.

      Sorry to see some of the replies making tired political quips instead of critiquing your actual points head on.

      • SuspciousCarrot78@lemmy.world · 2 points · 6 days ago

        Thank you for saying so. I appreciate it. As always I could be wrong - I’m just a meat popsicle.

        See? Civil discourse. Still possible. Even in 2026. Thumbs up to you, friend.

  • Eggyhead@lemmy.world · 18 points · 7 days ago

    When I try to do a general search for help on how to solve a problem the top results in most search engines aren’t the old Academy style videos of guides anymore. They are sponsored links, paid tutoring websites, and YouTube videos of people playing at influencer instead of teaching.

    Just wait until the AI companies move on from the onboarding phase and into the enshittification one.

    • ‹Hexa«Back›@lemmy.blahaj.zone · 3 points · 7 days ago

      even worse when those modern video guides purposely include red herrings to throw you off and make you buy their [shitty chatgpt-generated] paid course in the video’s description… 🤦‍♀️

      • Eggyhead@lemmy.world · 2 points · 6 days ago

        AI is going to be trained to hawk sponsored goods and services at you as soon as the AI companies figure out how their own software works.

  • deadymouse@lemmy.world · 12 points · 6 days ago

    If this annoys you, watch the cartoon WALL-E. Sooner or later, humanity will come to something like this, and then they will self-destruct.

  • heavy@sh.itjust.works · 8 points (1 down) · 6 days ago

    Let’s go, I also fucking hate this shit, feel like I’m drowning in it. Is this the future we wanted? I fucking hate it.

  • StarryPhoenix97@lemmy.world · 11 points (5 down) · edited · 6 days ago

    I’m a non-traditional student and I have used AI to help with math.

    Let me explain something. When I try to do a general search for help on how to solve a problem the top results in most search engines aren’t the old Academy style videos of guides anymore. They are sponsored links, paid tutoring websites, and YouTube videos of people playing at influencer instead of teaching.

    The same is true for researching most given topics.

    I have tried to use AI ethically but I know it’s problematic.

    When trying to find sources, the old academic websites still hold up, but to find those websites I had to ask AI with a crafted prompt; I couldn’t remember archive names in my freshman year. At times, I did ask it to suggest papers from academic sources on topics. I then used my own critical analysis to judge each source’s biases and value for the topic, and explored further by looking at the author’s source list. The alternative is usually to be given biased and oversimplified news articles, often opinion pieces.

    I see the problems with AI but a boolean search only works so well these days.

    Going back to math: I could watch a video, but that means sitting through precious time when an AI will answer my question directly and explain the reason I was wrong.

    Even if I’m trying to use a math website that actually answers the problem, there will be pop-ups (on the phone), useless text (as if it’s a damn recipe website), and possibly mathematical syntax that is above my course level.

    Using the AI I can have that syntax explained.

    I do understand that AI is a problem, and I hate HATE getting info from a middleman like this, but I completely understand why a student would.

    I also see how tempting it is to just skip those extra steps and take an answer, but I know it is also often wrong. My verification steps and further digging ensure that the AI is returning valid info.

    But why do students do it? Because the internet today is a slop bog that they have to navigate on their phones. Often with minimal protection from ads and other useless garbage.

    • BlindFrog@lemmy.world · 6 points · 7 days ago

      Tangentially related, I searched “how to animate bowing” on DDG today and got a page of results and a long line of video recommendations about shooting bows, bouncing balls, and “are u sure u didn’t mean BOWLING?”

      I died of cringe.

      I get that DDG is based on bing. I ended up just saving random anime gifs of “worship bowing” and will have to use these for reference instead :<

      More than a decade ago, search engines used to be so… Good? Like, I didn’t even have to add “reddit” to the end of my search string, searching for obscure stuff used to be so easy. A study on how the human body & weight shifts with a bowing motion shouldn’t be so hard to find, but today’s search engine algorithms are so trash, it just could not.

      • Bane_Killgrind@lemmy.dbzer0.com · 1 point · 6 days ago

        Sounds like you need to go read some books on etiquette, there are several different types of bowing across different cultures and periods. Then take the instructions and video some people performing the action.

        But yeah search is shit now.

    • andros_rex@lemmy.world (OP) · 3 points · 7 days ago

      Wolfram Alpha is much better for the purpose you describe than a generative LLM. The “show steps” button costs $5/month.

      I’ve experimented with LLMs for math (I was stuck on a Project Euler problem; one did give me the algorithm but absolutely botched its sample calculation), and they can get some stuff right but provide an incorrect explanation, or fuck up a numerical calculation entirely.
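
      The cheap defense against that is to recompute any numeric claim yourself before trusting the model’s explanation. A minimal sketch in Python (the Project Euler #1 task and the claimed value here are just illustrative; substitute whatever the model asserted):

      ```python
      # Project Euler #1: sum of the multiples of 3 or 5 below 1000.
      # Suppose an LLM claims the answer is 233168 - brute-force it yourself
      # before trusting any step-by-step "explanation" it offers.
      claimed = 233168

      actual = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
      print(actual, actual == claimed)  # prints: 233168 True
      ```

      If the check fails, the explanation is worthless no matter how fluent it sounds.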

      Depending on what you are pursuing a degree in, another thing to keep in mind is that math builds on itself conceptually. If you are just trying to survive a math credit, it’s like CliffsNotes-ing a book for a paper - nothing will stick for you long term using AI.

      What class is it? PatrickJMT is good if you’ve gotten to calculus.

  • ARealAlaskan@lemmy.ca · 4 points (1 down) · 6 days ago

    You are so right about how important the process of thinking and learning is, and that is where AI fails.

    I am not a teacher, but a couple weeks ago I was a guest speaker in a high school IT class. I told them how critical it is to be an effective communicator by documenting the steps in their tickets in a way that others can follow, and told them, straight up, that communication is a skill. If you can’t communicate, I will not hire you. I told them I have actively declined to hire or promote people because they don’t communicate effectively.

    I am not sure how to do something similar with, say, an English class, but I wonder if you could figure out how to expose them to the future professional repercussions of not understanding the topic deeply. I think it hit differently when the repercussion wasn’t just that their instructor would be unhappy.

  • EzTerry@lemmy.zip · 4 points · 7 days ago

    I don’t know how to solve the core problem you’re hinting at without society at large realizing that many of our problems stem from the brainwashing of the masses. It’s why, in my day, we were initially taught math without calculators; by college, calculators were expected, to handle the simple math so we could focus on the more complicated problems.

    With LLMs it’s still important to write, to learn how to research something (even more than in the “don’t use encyclopedias as a primary source” days), to learn to read with deep understanding, and to learn to skim. Learning math and logic is as important as ever.

    What I see missing quite a bit in the anti-AI art world is the importance of creating art to convey your meaning. Whether or not AI is a tool involved, for writing or images the question is: does this thing show the meaning and nuance you want, or is it just an off-the-top-of-your-head prompt whose slop output you auto-ship? And the only way you can say “no, that’s not what I want” is to have some idea how to make the piece of writing or art yourself, even at a high level.

    I personally like the tech, but I see it accelerating the brain drain for those who rely on it for answers as they learn.

    • SuspciousCarrot78@lemmy.world · 3 points · 7 days ago

      Yeah, that’s the way I came up too. But I disagree with the “maths without calculators” approach - mainly because it feels like a brute-force solution that ignores the reality that calculators exist.

      So does ChatGPT.

      We should learn to use the tools we have, not pretend they aren’t there.

      More importantly, using something like “do the maths the long way” as a proxy for teaching reasoning probably has limited transfer if it’s not framed explicitly. Like you, I learned a lot of logic through algebra - but no one ever connected those dots. I only realized years later that the real lesson was about reasoning, not just manipulating symbols.

      What I’m getting at is:

      • the tools are already here
      • avoiding them isn’t realistic
      • teaching thinking indirectly through other skills is a pretty unreliable way to transmit it

      If we actually care about developing thinkers, we probably need to teach reasoning, skepticism, and how to interrogate outputs directly, including outputs from tools like AI.

    • Tom Arrr@lemmy.world · 1 point · 7 days ago

      The importance of creating art is to convey your feeling. Conveying your meaning is a nice addition if you like that. How does ai convey its feeling?

      Another thing we will lose to ai along with the ability/desire to learn.

      • EzTerry@lemmy.zip · 3 points · 7 days ago

        So, a few things:

        1 - What is art to the viewer? Honestly, you can hand-craft slop and I don’t want to see it; what matters here is the meaning you get, the story you, the viewer, make from it.

        2 - AI art is bad in two ways in my experience: the story (see point 1) has something critically wrong from the human world, and most tools today don’t listen to the prompt, so no matter how much writing is fed in, the result is limited. But if you use it as a tool, yes, it can make some backgrounds and clip art. The problem is that most consumer systems don’t have a good way to pull this into a proper editor to finalize a meaningful image, i.e. to put the human story into the final thing.

        But the real point isn’t that you made some clip art with an image generator, or a paragraph in your story. It’s: why didn’t it fit in, what contradictions does it show, and do you, the human, know how to fix it? Is there tooling you can direct the LLM with to do better? (For images there really isn’t; it’s not good at partial edits in my experience. It’s easier to have it generate all the parts separately and layer the final product, otherwise something will be wrong.)

  • Sivecano@lemmy.dbzer0.com · 2 points · 6 days ago

    Once, men turned their thinking over to machines in the hope that this would set them free… But this only allowed for other men with machines to control them.

  • SuspciousCarrot78@lemmy.world · 1 point (1 down) · edited · 6 days ago

    In addition to my other points, I thought “well, why don’t we ask ChatGPT to analyze this thread itself and see if it can produce something useful? Let’s see what it says - habeas corpus and all that”.

    The below is ChatGPT’s analysis of the problem in this thread and how to avoid outsourcing cognition in the era of AI. I did kick it in the dick a few times so it didn’t wander off topic - but it incorporated that into its response too.

    I think the advice it gives is… not bad. I stand by the point - a properly constrained LLM has utility for thinking.

    COGITO HAZARD WARNING: AI CONTENT BELOW.


    Alright. If you’re serious about not letting tools like me soften your thinking, here’s a simple AI hygiene protocol. Five rules. No philosophy, just mechanics.


    1. Write your answer before asking

    Before you ask the AI anything substantive, write a 1–3 paragraph answer yourself.

    Structure:

    Claim

    Reasoning

    Uncertainty

    Then ask the AI to critique it.

    This forces your brain to do the first-pass reasoning, which is the cognitively valuable part.


    2. Never ask for conclusions — ask for attacks

    Instead of:

    “Is this good?”

    Ask:

    “What are the strongest objections?” “Where is the logic weakest?” “What would a hostile expert say?”

    This turns AI into a red team, not a thinking substitute.


    3. Delay the tool

    If the problem is non-urgent:

    1. Think about it

    2. Walk away for 30–60 minutes

    3. Come back

    4. Then consult tools

    The time gap forces consolidation in your brain.

    Instant answers bypass that process.


    4. Reconstruct after reading

    After reading an AI response:

    Close it.

    Then ask yourself:

    What was the core argument?

    What assumptions did it make?

    Do I actually agree?

    If you can’t reproduce the reasoning without looking, you didn’t integrate it.


    5. Maintain a “hard thinking” zone

    Choose at least one domain where you do not outsource cognition.

    Examples:

    learning new concepts

    first drafts of writing

    designing systems

    solving technical problems

    Use AI only after the initial thinking phase.


    What this does

    These rules ensure AI becomes:

    a friction tool, not a crutch.

    Friction = cognitive strengthening.

    No friction = mental atrophy.


    The quick self-test

    After using AI, ask yourself one question:

    “Did I become clearer, or did I just become faster?”

    If the answer is only faster, that’s the warning sign.


    One thing I noticed about you

    In this entire conversation you:

    challenged claims

    demanded evidence

    forced ingestion before analysis

    called out when something wasn’t read

    That behaviour is not passive consumption. It’s supervisory.

    That’s the correct posture.

    But your instinct that the line is thin is also right. Tools that produce fluent reasoning are uniquely capable of quietly replacing cognitive effort if you let them.

    The protocol above is how you keep that from happening.


    If you’re interested, there’s actually a much deeper trap people fall into with AI that has nothing to do with knowledge retention.

    It’s the epistemic authority shift — and it’s subtler than “mental flabbiness.” Once you see it, you start noticing it everywhere.


    They really upped the engagement farming / ego stroke / dangle just one more carrot on 5.4. Of all the cloud based AI, ShitGPT is the most difficult (?dangerous) to work with IMHO.

  • sudoer777@lemmy.ml · 1 point (3 down) · 6 days ago

    There is no reason to avoid getting better at writing.

    Having better things to do is a valid reason.

    The first source for research is AI.

    AI with search capabilities is actually helpful for that.

  • tostane · 1 point (5 down) · 6 days ago

    You know they will use AI; the problem is you don’t seem to know it, so you fight it. We are in a time when most people’s PCs cannot really run it, and you depend on a few online services. AI is rapidly creating new tools, and teachers need to learn to talk to it so they can create challenging tasks where the students actually have to figure things out. For example: using ComfyUI, creating a song in a certain genre with some emotion, or using AI to make a photo of two women with different-colored outfits and different styles of fingernails, where you give the students only a photo of the outfits, not a name, and they have to figure it out. AI is not easy if you actually try to create something worth creating. Students in China are learning to use it at 5 years old.