• taiyang@lemmy.world · 110 points · 2 days ago

    I’m the type to be in favor of new tech but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT as their own, but the quality of answers has really dropped in the past year.

    Just this last week, I was grading a quiz on persuasion and for fun, I have students pick an advertisement to analyze. You know, to personalize the experience, this was after the super bowl so we’re swimming in examples. Can even be audio, like a podcast ad, or a fucking bus bench or literally anything else.

    60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough: Nike, Just Do It.

    Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It’d be wrong, because you have to reference the book, but at least it wouldn’t be as blatant.

    I didn’t unilaterally give them 0s, but they usually got it wrong anyway, so I didn’t really have to. I did warn them that using GPT this way on the midterm will likely get them in trouble, though, as it is against the rules. I don’t even care that much because, again, it’s usually worse quality anyway; but I have to grade this stuff, and I don’t want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.

    • faythofdragons@slrpnk.net · 2 points · 17 hours ago

      Why even cheat on that? The universe has a billion ad examples.

      I’m not one of your students, but I do remember how I thought in high school. Both of my parents worked, so I was the one that had to cook dinner and help my little brothers with their homework, then I had multiple hours of my own homework to do.

      While I do enjoy analyzing media, the homework I struggled with would get priority. I was the oldest, so I didn’t have anybody to ask for help with questions, and often had to spend a larger amount of time than intended on topics I struggle with. So, I’d waste the whole night struggling with algebra and chemistry, then do the remaining ‘easy’ assignments as quickly and carelessly as possible so I could get to bed before midnight. Getting points knocked off for shoddy work is far preferable to getting a zero for not doing it at all, and if I could get to bed at a reasonable time, I wouldn’t lose points in the morning class for falling asleep.

      It just… makes sense to cheat sometimes.

    • RunawayFixer@lemmy.world · 4 points · 21 hours ago

      Students and cheating are always going to be a thing; only the technology evolves. It’s always been an interesting cat-and-mouse game imo, as long as you’re not too personally affected (sorry).

      I was a student when the internet started to spread: some students had internet at home, while most teachers were still oblivious. There was a French book report due, and 4 kids had picked the same book because they had found a good summary online. 3 of the kids handwrote a summary of the summary; 1 kid printed out the original summary and handed that in. The 3 kids received a 0, and the 4th got a warning not to let others copy his work :D

      • taiyang@lemmy.world · 3 points · 20 hours ago

        Lol, well, it sounds like a bad assignment if you can get away with just a summary, although I guess since it’s a language class(?) that’s more reasonable. I’m not really shaken up over this type of thing, though. I’m not pro-cheating, but it’s not about justice or morality; it’s because education is for the students’ benefit, and they’re missing out on growth. We really need more critical thinkers in this world. Like, desperately need them. Lol

        • RunawayFixer@lemmy.world · 3 points · 14 hours ago

          Yep, French language class in a too-large high school class. If the class had been smaller, the teacher would have definitely gone for more presentations by the students.

          Keep up the good fight, I’m certain that many of your students appreciate what they learn from you.

    • Shou@lemmy.world · 30 points · 2 days ago

      As someone who has been a teenager: cheating is easy, and class wasn’t as fun as video games. Plus, what teenager understands the importance of an assignment, or of the skill it is supposed to make them practice?

      That said, I unlearned copying summaries when I heard I had to talk about the books I “read” as part of the final exams in high school. The examiner would ask very specific plot questions, often not included in the online summaries people posted… unless those summaries were too long to read. We had no other option but to take it seriously.

      As long as there isn’t some part of the work that GPT can’t do for them, they won’t learn how to write or do the assignment.

      Perhaps use GPT to fail assignments? If GPT comes up with the same subject and writing style/quality, subtract points/give 0s.

      • I Cast Fist@programming.dev · 10 points · 1 day ago

        Last November, I gave some volunteer drawing classes at a school. Since I had limited space, I had to pick a small number of 9-10yo kids, so I asked the interested students to do a drawing and answer “Why would you like to participate in the drawing classes?”

        One of the kids used chatgpt or some other AI. One of the parts that gave it away was that, while everyone else wrote something like “I want because”, he went on with “By participating, you can learn new things and make friends”. I called him out in private and he tried to bullshit me, but it wasn’t hard to make him contradict himself or admit to “using help”. I then told him that it was blatantly obvious that he used AI to answer for him and what really annoyed me wasn’t so much the fact he used it, but that he managed to write all of that without reading, and thought that I would be too dumb or lazy to bother reading or to notice any problems.

          • I Cast Fist@programming.dev · 3 points · 21 hours ago

            That call-out was after the first class; I didn’t tell him he was out, and said “See you next week”. Still, he didn’t show up to the other 3 classes, though those were also very rainy days, so I can’t say for sure why he never came back.

            • sem@lemmy.blahaj.zone · 4 points · 17 hours ago

              It is so weird seeing these stories and trying to make sense of what it means for the future of humans using written communication.

              I’ve heard stories from some of the youth that they see no reason not to use genAI to save time and effort.

              But it’s not like using a spell check, it’s like asking someone else to do the thinking for you.

              And the only reason we have genAI is because it ingested oodles of real people’s creative output made before genAI was created.

              “Why do you want to take the class?” --If you can’t be honest in how you answer, why should you get to take the class? On the other hand, if it’s not important to be able to write about it, why ask them to spend time on that assignment?

              I get that step 1 is to stop assigning homework that is drudgery. But not every assignment can be replaced by an oral report or an in-class writing exercise, can it? Are teachers going to start asking for assignments to be handwritten, so it’s at least not so easy to copy and paste?

              If education is for teaching kids how to think, and (I suppose) interact with emerging technologies, how do you teach people to think and write for themselves?

              Or is the answer that with LLMs being so good at generating plausible text, people of the future won’t need to be good at the writing process, and the skill of writing will decline?

              I mean, once we had texting and the internet, people wrote a lot fewer letters. It’s something of a disappearing art.

              Maybe good writing just goes away?

      • taiyang@lemmy.world · 16 points · 1 day ago

        I have a similar background, and, no surprise, it’s mostly a problem in my asynchronous class. The ones who attend my in-person lectures are much more engaged, since it is a fun topic and I don’t enjoy teaching unless I’m also making them laugh. No dice with asynchronous.

        And yeah, I’m also kinda doing that with my essay questions, requiring stuff you sorta can’t just summarize. Critical thinking is what matters, even if you’re not just trying to detect GPT.

        I remember reading that GPT detection isn’t really foolproof, and I’m not willing to fail anyone over it unless I have to. False positives and all that. Hell, I just used GPT as a sounding board for a few new questions I’m writing, and its advice wasn’t bad. There are good ways to use it, just… you know, not so stupidly.

    • Vespair@lemm.ee · 3 up / 1 down · 1 day ago

      The reason chatgpt would recommend Nike though is because of its human-based training data. This means that for most humans the Nike ad campaign would also be the first suggestion to come to mind.

      I’m not saying LLMs aren’t having an impact, or denying that said impact is negative, but the way people talk about them is infuriating because it just displays a lack of understanding or forethought on how these systems work.

      People always talk about how they can tell something “sounds like chatgpt” or, as is the case here, is the default chatgpt answer, while ignoring the only reason it would be so is because of the real human patterns which it is mimicking.

      Brief caveat: of course chatgpt is wildly fallible, and when producing purely generative content it pulls from nowhere, since it’s just remixing unrelated sources; but for things within the normal course of discussion, chatgpt’s output is vastly more human-like than we want to pretend.

      I would almost guarantee that Nike’s “Just Do It” was the single most popular answer to this kind of assignment before chatgpt existed, too.

      • taiyang@lemmy.world · 6 points · 23 hours ago

        Except I’ve given this quiz prior to GPT, and no, it wasn’t used once, because it’s not even a current advertising campaign. My average 19-year-old usually uses examples from influencers, for instance, so I get stuff like Hello Fresh or Better Help, usually specific to an ad read on a stream in the past couple weeks. After all, the question asks for ads they’ve seen and remembered.

        Also, you neglect how these models get their data. It’s likely pulled not because it’s a favorite, but because GPT scrapes textbooks, blogs, etc., and those are the sources that would use it as a go-to example (especially if the author leans on 90s examples). Never mind that your average Joe Schmo internet user isn’t the same as the group I’m teaching; most of them weren’t even alive when the Just Do It campaign started, lol.

        It really undermines the point of coming up with your own examples and applying theory to something from their own life. I’m not inherently anti-GPT, but this is a very bad use case.

    • Bamboodpanda@lemmy.world · 1 up / 1 down · 22 hours ago

      I recall Ethan Mollick discussing a professor who required students to use LLMs for their assignments. However, the trade-off was that accuracy and grammar had to be flawless, or their grades would suffer. This approach makes me think we need to reshape our academic standards to align with the capabilities of LLMs, ensuring that we’re assessing skills that truly matter in an AI-enhanced world.

      • taiyang@lemmy.world · 1 point · 20 hours ago

        That’s actually something that was discussed like, two years ago within the institutions I’m connected to. I don’t think it was ever fully resolved, but I get the sense that the inaccurate results made it too troublesome.

        My mentality, coming out of an education degree: if your assessment can be done by AI, you’re relying too much on memorization and not enough on critical thinking. I complain in my reply, but the honest truth is these students mostly lost points because they didn’t apply theory to the example (although that’s because the example wasn’t fully understood, since it wasn’t their own). K-12 generally fails at this, which is why freshmen have the hardest time with these things, GPT or otherwise.