almostgotcaught 3 months ago

Academia is about figuring out progressively fancier and fancier sounding names for the same mundane things.

  • bubblyworld 3 months ago

    Expertise in any domain involves picking apart the mundane in progressively more complex ways! I think academia ends up with wild terminology because there's also the constraint of having to write everything down.

    • almostgotcaught 3 months ago

      I'm an expert in this domain. People make up words for the sake of sexy paper titles. You can recognize it when there's no formal definition in the body of the paper;

      > paramorphisms, a class of programs that includes the most common recursive programming patterns on algebraic data types

      "most common ... patterns". ok.

      they're alluding to math[1] but they're not doing math.

      [1] https://link.springer.com/article/10.1007/BF01211391

      • bradrn 3 months ago

        They define paramorphisms more rigorously on pages 4–5.

        • almostgotcaught 3 months ago

          That's not at all what they define in that section. They define their chosen combinator that is called "para" which is as close to what they're alluding to as a bicycle is to a semi truck.

          • trealira 3 months ago

            They didn't coin the term paramorphism. Lambert Meertens did in 1992 [0]. It's an extension of the concept of catamorphisms. These terms were invented by people knowledgeable about category theory [1].

            I also disagree with the notion that they aren't really doing math. It's very straightforward to derive programs by proving other properties of these recursive combinators. I would recommend reading A tutorial on the universality and expressiveness of fold [1] as an example of this; folds are catamorphisms, which is why I bring it up.

            [0]: https://www.researchgate.net/publication/256269748_Paramorph...

            [1]: https://www.cambridge.org/core/journals/journal-of-functiona...

            • almostgotcaught 3 months ago

              > They didn't coin the term paramorphism. Lambert Meertens did in 1992

              I literally linked that paper.

              > It's very straightforward to derive programs by proving other properties of these recursive combinators.

              Cool they don't do that. There's a word for being obliquely related: allusion.

              It's amazing how much benefit of the doubt CS people get. In math you either prove a theorem/lemma by writing it out in full or you... have nothing. In CS people tell nice stories, draw cool diagrams, write down random formulae and everyone is just hmm yes brilliant.

              • Twisol 3 months ago

                > they're alluding to math[1] but they're not doing math.

                So they're applying the math somebody else already put together? It isn't clear what your opinion is. Are you saying it's not valid if it's not math?

                Let's judge the paper by what they say they did in the abstract.

                > We show that synthesizing recursive functional programs using a class of primitive recursive combinators is both simpler and solves more benchmarks from the literature than previously proposed approaches.

                It sounds like their proposed metrics are subjective ("simpler") or data-oriented ("solves more ... than previously").

                > Our method synthesizes paramorphisms, a class of programs that includes the most common recursive programming patterns on algebraic data types.

                They're applying a pre-existing concept ("paramorphisms") to a problem domain where (it is implied) it either has not been applied before, or with a new spin.

                > The crux of our approach is to split the synthesis problem into two parts: a multi-hole template that fixes the recursive structure, and a search for non-recursive program fragments to fill the template holes.

                They give a novel (it is implied) structural model for solving their problem using their chosen concept as their key contribution to the study of this problem. It is indirectly suggested that factoring the recursive and non-recursive parts apart allows them to be handled more effectively individually.

                ---

                Regarding your specific complaints: I do not see any immediate need for new math. Nor are they inventing new words, at least in the title or abstract. You may have legitimate concerns about general academic practice, but if you're going to level those criticisms at this paper, you could do your arguments better justice than with a drive-by.

              • trealira 3 months ago

                > I literally linked that paper.

                You did; I overlooked that. I don't know why you're implying the authors invented the term and made up the combinator just now, then.

                > Cool they don't do that.

                Their program is basically doing an automated version of what Graham Hutton did in that paper, except they rewrite functions using para instead of fold.

                Edit:

                > It's amazing how much benefit of the doubt CS people get. In math you either prove a theorem/lemma by writing it out in full or you... have nothing. In CS people tell nice stories, draw cool diagrams, write down random formulae and everyone is just hmm yes brilliant.

                You seem to have a chip on your shoulder. I don't know why you're telling me this.

          • gergo_barany 3 months ago

            What they're "alluding to" is a standard combinator that has existed under the name "paramorphism" for a long time. Here is Jeremy Gibbons's "Origami programming" https://www.cs.ox.ac.uk/jeremy.gibbons/publications/origami.... from 2003:

                [...] capture this modified recursion pattern explicitly as a higher-order operator; in this case, the operator is known as a paramorphism [92]. In the case of lists, we could define
                
                paraL :: (α → (List α, β) → β) → β → List α → β
                paraL f e Nil = e
                paraL f e (Cons x xs) = f x (xs, paraL f e xs)
            
            I don't know of an older source for this particular syntactic presentation, though apparently if you put in the work to translate Squiggol to Haskell, you can read this concrete syntax out of Meijer et al.'s Bananas paper from 1991.
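            To see it in action (my own toy example, not Gibbons's): transcribed for Haskell's built-in lists, paraL gives each step both the unprocessed tail and the recursive result, so something like the list of all suffixes falls out directly:

```haskell
-- Gibbons's paraL, transcribed from List α to Haskell's built-in lists.
paraL :: (a -> ([a], b) -> b) -> b -> [a] -> b
paraL f e []     = e
paraL f e (x:xs) = f x (xs, paraL f e xs)

-- All suffixes of a list: the step prepends the current suffix (x:xs),
-- which it can only do because paraL hands it the untouched tail xs.
suffixes :: [a] -> [[a]]
suffixes = paraL (\x (xs, rest) -> (x:xs) : rest) [[]]
```

            For example, suffixes [1,2,3] gives [[1,2,3],[2,3],[3],[]], the same as Data.List.tails.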

            I think what you're criticising is that the name "paramorphism" was adopted from category-theory-on-a-blackboard to functional-programming-that-actually-runs-on-a-computer. That ship sailed 30 years ago; this paper from 2024 is not what caused it to sail. I also think that you're incorrect to criticize this. Analogies are often very helpful.

            You are right that the people who pointed you to Meertens's paper were not helpful in clearing up the confusion, since your confusion is exactly that Meertens's paper talks about the other meaning of the term "paramorphism".

          • Twisol 3 months ago

            You're correct that they aren't defining the concept of paramorphisms in that section -- but that's because paramorphisms have been present in the literature for a long time:

            > for a detailed review of paramorphisms from a formal perspective, see [Meertens 1992]

            To your original point, the title of this paper is only using terms ("paramorphisms", "recursive", and "program synthesis") that are decidedly not novel.

            For the benefit of other readers:

            As to why "paramorphisms" are singled out as a concept worthy of having their own name, I recommend the 1991 paper "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire" by Meijer, Fokkinga, and Paterson, which is much more readable than the referenced Meertens paper.

            We all know what an "infinite loop" is; a natural question is whether there are any looping constructs that cannot get caught in an infinite loop. The "for-each" loop has become popular in imperative languages; it works because it does a kind of structural iteration over finite data which fits a regular pattern.

            A "catamorphism" precisely captures this notion of structural iteration (really, structural recursion), for a vast family of data types (the "inductively defined" ones). But for any algorithm where you'd use an unbounded loop (or unbounded recursion), it's not always easy (or possible!) to implement that algorithm using catamorphic recursion instead -- and they're not always the most performant, either (e.g. you can't short-circuit the recursion).

            So we've gone from something that is too powerful/expressive (unbounded recursion/iteration) to something that is not powerful/expressive enough (catamorphic recursion/iteration), with the dividing line of "allows infinite loops" somewhere in the middle. The study of general recursion schemes, which includes paramorphisms, is essentially the study of alternatives to unbounded recursion/iteration that are better than catamorphisms on one or more desiderata.
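            As a small sketch of the difference (my illustration, nothing from the paper): on lists, a paramorphism is a fold whose step function also receives the unprocessed tail, which lets you reuse that tail wholesale instead of reconstructing it:

```haskell
-- para: foldr plus access to the unprocessed tail at each step.
para :: (a -> ([a], b) -> b) -> b -> [a] -> b
para f e []     = e
para f e (x:xs) = f x (xs, para f e xs)

-- Insertion into a sorted list: once the slot is found, the original
-- tail xs is reused as-is, which is awkward to express with plain foldr.
insert :: Ord a => a -> [a] -> [a]
insert y = para step [y]
  where
    step x (xs, rest)
      | y <= x    = y : x : xs   -- stop: keep the rest of the list untouched
      | otherwise = x : rest
```

            Here insert 3 [1,2,4,5] yields [1,2,3,4,5] without rebuilding the [4,5] suffix.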

            • almostgotcaught 3 months ago

              > present in the literature for a long time:

              You're the second person to link the exact same paper I did in my first response.

              > To your original point, the title of this paper is only using terms ("paramorphisms", "recursive", and "program synthesis") that are decidedly not novel.

              They italicize paramorphism in the abstract. We all very well know that italics means novel usage. So again: let's drop the charade that everything is above board here.

              • trealira 3 months ago

                > They italicize paramorphism in the abstract. We all very well know that italics means novel usage.

                Or, they're writing for an audience that isn't necessarily familiar with paramorphisms and other morphisms, but still is familiar with functional programming.

              • gergo_barany 3 months ago

                > let's drop the charade that everything is above board here.

                OK. Since you're opposed to merely alluding to things (your italics, which I find funny), could you please state explicitly the misconduct you are alleging?

              • Twisol 3 months ago

                > You're the second person to link the exact same paper I did in my first response.

                Yes; it sounds like you're suggesting they should be defining these concepts themselves instead of saying somebody else did them. But that would be academic dishonesty. So you must be alluding to something else.

                > We all very well know that italics means novel usage.

                You say that as though expecting everyone to agree. I do not. I personally use italics because several people I have spoken with, including my advisor, feel that boldface is shouty. Underline is rarely used for a similar reason. Italic font is the only available form of typographic \emph{emph}asis; it is used (if sparingly) for key terms and ideas that the reader's attention should be drawn to, which include novel ideas, but can be much more than that.

      • bubblyworld 3 months ago

        Mmm, yeah on reading the paper more closely it seems like they don't make essential use of the concept really. It's more of a descriptor of the mechanics of their combinator language.

        Personally I'm fine with that - I have no issue with CS people borrowing terminology from category theory to describe stuff, so long as the analogy is reasonable. But I get where you're coming from - having learned some CT in (grad) school, I used to find Haskell wikis confusing as heck because they use CT terminology but nobody clarifies what category they're actually working in!

        • tome 3 months ago

          > I used to find Haskell wikis confusing as heck because they use CT terminology but nobody clarifies what category they're actually working in!

          "Hask", the (almost) category where the objects are types and the morphisms from A to B is the type A -> B.

          • bubblyworld 3 months ago

            Thanks, I know that now, but it's the "almost" part that trips up people like me! And it's not helped by the fact that the same sources will say something like "a monad is a monoid in the category of endofunctors", yet if you take Hask seriously there should be a discussion of endofunctors on Hask etc. When details are missing my brain gets stuck on figuring them out (whereas for Haskell the right approach is to take it with a pinch of salt and move on I think).

            Edit: to be more specific, you need to specify what constitutes a type - all possible ADTs under all possible names? Are equivalent functions considered the same morphism? What about seq? Is there a privileged "IO" type, and if not how is it represented in the set of types? What about other monads hiding different side effects? What about bottom? Many of these choices will of course be equivalent in some sense, but technically speaking you don't have a category until you've nailed this stuff down.
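            One concrete wrinkle (my own sketch, assuming GHC semantics): seq can distinguish a function that is itself bottom from a lambda that bottoms when applied, even though the two agree on every argument, which is one reason the identity and eta laws need caveats in "Hask":

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Extensionally equal: both diverge on every argument...
f, g :: Int -> Int
f = undefined        -- bottom at the function type itself
g = \_ -> undefined  -- an honest lambda, already in WHNF

-- ...but seq forces its first argument to WHNF, so it tells them apart:
-- f `seq` True diverges, while g `seq` True is just True.
observable :: (Int -> Int) -> IO Bool
observable h = do
  r <- try (evaluate (h `seq` True)) :: IO (Either SomeException Bool)
  pure (either (const False) id r)
```

            Running these, observable f is False while observable g is True: the two are observably different despite being pointwise equal, so a naive Hask fails the laws a category must satisfy.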