• protonslive@lemm.ee · ↑3 · 3 days ago

    I find this very offensive. Wait until my ChatGPT hears about this! It will have a witty comeback for you, just you watch!

  • Hiro8811@lemmy.world · ↑25 · 7 days ago

    Also your ability to search for information on the web. Most people I’ve seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.

    • bromosapiens@lemm.ee · ↑12 · 7 days ago

      Gen Zs are TERRIBLE at searching for things online, in my experience. I’m a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.

    • shortrounddev@lemmy.world · ↑10 · 7 days ago

      To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I’ve started using Kagi and I’m trying to be more intentional about it, but after a bit of searching it’s often easier to just ask Claude.

  • arotrios@lemmy.world · ↑11 ↓1 · 7 days ago

    Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.

  • kratoz29@lemm.ee · ↑10 ↓1 · 7 days ago

    Is that it?

    One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

    Would some people not give a fuck about what it says and just copy & paste unknowingly? Sure, but that happened in my teenage days too, when all the info was spread across many blogs and wikis…

    As usual, it is not the AI tool that could fuck up our critical thinking but we ourselves.

        • pulsewidth@lemmy.world · ↑5 ↓2 · 7 days ago

          A hallucination is a false perception of sensory experiences (sights, sounds, etc).

          LLMs don’t have any senses; they have input, algorithms, and output. They also have desired output and undesired output.

          So, no, ‘hallucination’ fits far worse than failure or error or bad output. However, assigning the term ‘hallucination’ does serve the billionaires in marketing their LLMs as actual sentience.

    • Petter1@lemm.ee · ↑2 · 7 days ago

      I see it exactly the same way; I bet you can find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.

      Society will handle it sooner or later

  • underwire212@lemm.ee · ↑8 ↓1 · 7 days ago

    It’s going to remove all individuality and turn us into a homogeneous, jelly-like society. We’ll all think exactly the same, since AI “smooths out” the edges of extreme thinking.

  • sumguyonline@lemmy.world · ↑6 · 7 days ago

    Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google’s AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you’re likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it’s up to you, the operator, to make sure you’ve got the edges squared (so to speak).

    • Jarix@lemmy.world · ↑1 · 7 days ago

      Well, there are people who followed Apple Maps into lakes and other things, so the precedent is already there (I have no doubt it also existed before that).

      You would need to regulate it heavily, and that’s not happening anytime soon, if ever.

    • Petter1@lemm.ee · ↑1 ↓1 · 7 days ago

      I think this is only an issue in the beginning; people will sooner or later realise that they can’t blindly trust LLM output, and they’ll learn how to craft prompts that verify other prompts (or, better said, prove that not enough relevant data was analysed and that the output is hallucination).

  • j4yt33@feddit.org · ↑2 · 6 days ago

    I’ve only used it to write cover letters for me. I also tried to use it to write some code, but it would just cycle through the same 5 wrong solutions it could think of, telling me “I’ve fixed the problem now”.

  • Dil@is.hardlywork.ing · ↑3 · 7 days ago

    I felt it happen in real time every time. I still use it for questions, but I know I’m about to not be able to think critically for the rest of the day; it’s a last resort if I can’t find any info online or any response from Discords/forums.

    It’s still useful for coding IMO. I still have to think critically; it just fills in some tedious stuff.

    • Dil@is.hardlywork.ing · ↑2 · 7 days ago

      It was hella useful for research in college, and it made me think more because it kept giving me useful sources and telling me the context and where to find them. I still did the work, and it actually took longer because I wouldn’t commit to topics and kept adding more information. Just don’t have it spit out your essay, it sucks at that; have it spit out topics and info on those topics with sources, then use that to build your work.

      • Dil@is.hardlywork.ing · ↑1 · 7 days ago

        Google used to be good, but this is far superior. I used Bing’s ChatGPT when I was in school; IDK what’s good now (it only gave a paragraph max and included sources for each sentence).

          • Dil@is.hardlywork.ing · ↑1 · 7 days ago

            It worked well for school stuff. I always added “prioritize factual sources with .edu” or something like that. Specify that it’s for a research paper and tell it to look for stuff the way you would.

            • RisingSwell@lemmy.dbzer0.com · ↑1 · 7 days ago

              The only time I told it to be factual was when I was looking at 4K laptops. It gave me 5 laptops, 4 of them marked as 4K; 0 of the 5 were actually 4K.

              That was last year though, so maybe it’s improved by now.

              • Dil@is.hardlywork.ing · ↑1 · 7 days ago

                I wouldn’t use it on current info like that, only on scraped data. Using it for history classes, it’ll be useful; using it for sales right now, definitely not.

                • RisingSwell@lemmy.dbzer0.com · ↑2 · 7 days ago

                  I’ve also tried using it for old games, but at the time it said Wailord was the heaviest Pokémon (the blimp whale in fact does not weigh more than the skyscraper).

  • LovableSidekick@lemmy.world · ↑3 · edited · 7 days ago

    Their reasoning seems valid - common sense says the less you do something the more your skill atrophies - but this study doesn’t seem to have measured people’s critical thinking skills. It measured how the subjects felt about their skills. People who feel like they’re good at a job might not feel as adequate when their job changes to evaluating someone else’s work. The study said the subjects felt that they used their analytical skills less when they had confidence in the AI. The same thing happens when you get a human assistant - as your confidence in their work grows you scrutinize it less. But that doesn’t mean you yourself become less skillful. The title saying use of AI “kills” critical thinking skill isn’t justified, and is very clickbaity IMO.

  • Guidy@lemmy.world · ↑6 ↓4 · 7 days ago

    I use it to write code for me sometimes, saving me from remembering the different syntax and syntactic sugar when I hop between languages. And I use it to answer questions about things I wonder about - it always provides references. So far it’s been quite useful. And for all that people bitch and piss and cry giant crocodile tears while gnashing their teeth, I quite enjoy Apple AI. Its summaries have been amazing and even scarily accurate. No, it doesn’t mean Siri’s good now, but the rest of it is pretty amazing.

  • Mouette@jlai.lu · ↑3 ↓1 · 7 days ago

    The definition of critical thinking is not relying on only one source. Next up: rain will make you wet. Stay tuned.

  • ᕙ(⇀‸↼‶)ᕗ@lemm.ee · ↑1 ↓7 · 7 days ago

    So no real Chinese LLMs… who would have thought… not the Chinese, apparently… and yet they think their “culture” of oppression and stone-like thinking will get them anywhere. The honey badger Xi calls himself an anti-intellectual. This is how I perceive most students from China I get to know. I pity the Chinese kids for the regime they live in.

    • Petter1@lemm.ee · ↑2 · 7 days ago

      Improving your critical thinking skills is a process that involves learning new techniques, practicing them regularly, and reflecting on your thought processes. Here’s a comprehensive approach:

      1. Build a Foundation in Logic and Reasoning

      • Study basic logic: Familiarize yourself with formal and informal logic (e.g., learning about common fallacies, syllogisms, and deductive vs. inductive reasoning). This forms the groundwork for assessing arguments objectively.

      • Learn structured methods: Books and online courses on critical thinking (such as Lewis Vaughn’s texts) provide a systematic introduction to these concepts.

      2. Practice Socratic Questioning

      • Ask open-ended questions: Challenge assumptions by repeatedly asking “why” and “how” to uncover underlying beliefs and evidence.

      • Reflect on responses: This method helps you clarify your own reasoning and discover alternative viewpoints.

      3. Engage in Reflective Practice

      • Keep a journal: Write about decisions, problems, or debates you’ve had. Reflect on what went well, where you might have been biased, and what could be improved.

      • Use structured reflection models: Approaches like Gibbs’ reflective cycle guide you through describing an experience, analyzing it, and planning improvements.

      4. Use Structured Frameworks

      • Follow multi-step processes: For example, the Asana article “How to build your critical thinking skills in 7 steps” suggests: identify the problem, gather information, analyze data, consider alternatives, draw conclusions, communicate solutions, and then reflect on the process.

      • Experiment with frameworks like Six Thinking Hats: This method helps you view issues from different angles (facts, emotions, positives, negatives, creativity, and process control) by “wearing” a different metaphorical hat for each perspective.

      5. Read Widely and Critically

      • Expose yourself to diverse perspectives: Reading quality journalism (e.g., The Economist, FT) or academic articles forces you to analyze arguments, recognize biases, and evaluate evidence.

      • Practice lateral reading: Verify information by consulting multiple sources and questioning the credibility of each.

      6. Participate in Discussions and Debates

      • Engage with peers: Whether through formal debates, classroom discussions, or online forums, articulating your views and defending them against criticism deepens your reasoning.

      • Embrace feedback: Learn to view criticism as an opportunity to refine your thought process rather than a personal attack.

      7. Apply Critical Thinking to Real-World Problems

      • Experiment in everyday scenarios: Use critical thinking when making decisions—such as planning your day, solving work problems, or evaluating news stories.

      • Practice with “what-if” scenarios: This helps build your ability to foresee consequences and assess risks (as noted by Harvard Business’s discussion on avoiding the urgency trap).

      8. Develop a Habit of Continuous Learning

      • Set aside regular “mental workout” time: Like scheduled exercise, devote time to tackling complex questions without distractions.

      • Reflect on your biases and update your beliefs: Over time, becoming aware of and adjusting for your cognitive biases will improve your judgment.

      By integrating these strategies into your daily routine, you can gradually sharpen your critical thinking abilities. Remember, the key is consistency and the willingness to challenge your own assumptions continually.

      Happy thinking!

  • Sibbo@sopuli.xyz · ↑126 ↓5 · 8 days ago

    Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever AI says.

    • UnderpantsWeevil@lemmy.world · ↑72 ↓2 · 8 days ago

      This isn’t a profound extrapolation. It’s akin to saying “Kids who cheat on the exam do worse on practical skills tests than those who read the material and did the homework” or “Kids who watch TV lack the reading skills of kids who read books.”

      Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.

      This isn’t predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.

      It’s just not something particular to AI. If you use any kind of third-party analysis in lieu of personal interrogation, you’re going to suffer in your capacity for future inquiry.

      • Fushuan [he/him]@lemm.ee · ↑8 · 8 days ago

        All tools can be abused, tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids; the “copy the first answer and hope for the best” memes.

        After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API thing or a simple implementation example, so you can extrapolate it into your complex code and confirm what it does by reading the docs, both enriches the mind and teaches you new techniques for the future.

        Good programmers do what I described; bad programmers copy and run without reading. It’s just like the SO kids.

    • ODuffer@lemmy.world · ↑12 · edited · 8 days ago

      Seriously, ask AI about anything you have expert knowledge in. It’s laughable sometimes… However, you need to know in order to know it’s wrong. At face value, if you have no expertise, it sounds entirely plausible; however, the details can be shockingly incorrect. Do not trust it implicitly about anything.