• RedSnt 👓♂️🖥️@feddit.dk
    3 hours ago

    I believe there’s a study showing that cursing when you get hurt helps alleviate the pain[1][2] (by about 33%, apparently). I wonder if that’s related: maybe swearing, as an extension of language, helps with reading and understanding the code.

    For example, sed's lack of Unicode support is the reason I prefer perl -pe. More available symbols is more good.
    flatpak list --app | perl -pe "s/\t/🐧/g" | cut -d🐧 -f2
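
    One caveat, though: GNU cut only accepts a single-byte delimiter, so on many systems the 🐧 pipeline dies with "the delimiter must be a single character". The split can stay in perl instead; here a printf of sample tab-separated data stands in for the `flatpak list --app` output.

    ```shell
    # GNU cut rejects multi-byte delimiters like 🐧, so let perl do the
    # field split itself. -F'\t' autosplits each line on tabs into @F,
    # and $F[1] is the second field (the application ID column).
    printf 'Firefox\torg.mozilla.firefox\tstable\n' |
      perl -F'\t' -lane 'print $F[1]'
    # prints: org.mozilla.firefox
    ```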
    
  • NuraShiny [any]@hexbear.net
    5 hours ago

    I don’t believe the clean curve on the left and I don’t believe there is an objective standard of code quality.

  • Thorry84@feddit.nl
    10 hours ago

    Well, that’s probably because when the code is just run-of-the-mill stuff, you don’t really think about it and just put out normal, average code. So the code quality follows the normal distribution.

    However, when the problem was particularly hard, involved some weird thing, or the dev just happened to get stuck for some reason, they get worked up about it. They invest time to dig into the issue, figure out what’s going on, and really engage their skill set. The code produced then is of higher quality, because the level of investment was higher. To release that stress, swears are used and can make their way into the code (hopefully only in the comments).

    This is a typical case of “correlation does not imply causation”. Yes, the code with swears is of higher quality, but simply putting in swears does not improve the code. Instead, both the swears and the quality are influenced by a third thing not accounted for in the data. If one were to plot code difficulty (or something like it) against quality and swears, you’d probably see more swears as the difficulty rises, along with better quality.

    Also this is an internet meme and probably made up, but still.

    • stormeuh@lemmy.world
      3 hours ago

      Also, hard problems may produce some eclectic code, which could be bug-prone in a way that isn’t detected by automated tools.

    • RusAD@lemm.ee
      8 hours ago

      I thought along the lines of “Programmers with more knowledge and experience give fewer fucks about civility in the code comments.”

    • JimmyMcGill@lemmy.world
      6 hours ago

      There’s still some causation, just the other way around

      Good-quality code causes swear words, for the reasons you mentioned. Just not the other way around.

  • Evil_Shrubbery@lemm.ee
    9 hours ago

    My variable names (and the comments describing what they do) are the kinkiest, most depraved shit ever.

    Nobody reading my code shall ever be normal again.

  • collapse_already@lemmy.ml
    14 hours ago

    I am curious how code quality is measured. Coverity metrics? Spelling errors? Bug reports? Sounds like bullshit.

    • paris@lemmy.blahaj.zone
      9 hours ago

      I don’t care enough to read through the whole thing, but some cursory searching turned up a Reddit thread where a commenter found the original thesis:

      Strehmel, J. (2022). Is there a Correlation between the Use of Swearwords and Code Quality in Open Source Code? [Bachelor’s Thesis, Institute of Theoretical Informatics]. https://cme.h-its.org/exelixis/pubs/JanThesis.pdf

      • ltxrtquq@lemmy.ml
        5 hours ago

        SoftWipe [30] is an open source tool and benchmark to assess, rate, and review scientific software written in C or C++ with respect to coding standard adherence. The coding standard adherence is assessed using a set of static and dynamic code analysers such as Lizard (https://github.com/terryyin/lizard) or the Clang address sanitiser (https://clang.llvm.org/). It returns a score between 0 (low adherence) and 10 (good adherence). In order to simplify our experimental setup, we excluded the compilation warnings, which require a difficult to automate compilation of the assessed software, from the analysis using the --exclude-compilation option.

        If that means anything to you.

    • errer@lemmy.world
      12 hours ago

      The distribution on the right looks all sorts of fucked up. Don’t even tell us the median value of this “quality” measure.