Is Functional Programming really slow?
Alright, so I write scientific computing software professionally and, candidly, I find it easier to write efficient software in functional languages than in object-oriented ones because of what the languages encourage. For example, say we're writing a PDE solver. Although we can abstract things lots of different ways, at the end of the day we probably have a really big struct with a lot of different matrices and vectors in it, which constitutes the algorithm's state, and the code passes this state from routine to routine.

Now, often we want to modify just one element of this state and keep a copy around for this or that. In a language like C++, the most straightforward thing to do is use copy construction, but that copies every single item in the structure, which is expensive and unnecessary; only one thing changed. Certainly, we could make everything in the structure a shared pointer, copy the structure, and then allocate new memory for the one item, which would fix the problem. In fact, this is what most people do. However, now we're doing a lot of memory management by hand, which is error prone, and, at the end of the day, I should be working on better algorithms, not fixing memory problems.

In a functional language, we typically have immutable data structures and a garbage collector that handle this for us. In the same scenario, if we use a language like OCaml and modify a single field of a record, the language does not deep-copy the record. Rather, it creates a new record whose fields point to the same memory as the old one everywhere except for the one field we changed. It's fast. It's efficient. And we don't have to do the memory management ourselves.
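To make the OCaml point concrete, here's a minimal sketch. The field names (`stiffness`, `residual`, `step`) are invented for illustration; the relevant feature is OCaml's functional record update, `{ s with ... }`, which allocates a new record but keeps the untouched fields as pointers into the old one, so the big matrix is never copied.

```ocaml
(* A hypothetical solver state; field names are illustrative. *)
type state = {
  stiffness : float array array;  (* big matrix *)
  residual  : float array;        (* big vector *)
  step      : int;
}

let advance s r =
  (* Functional record update: only [residual] and [step] are new.
     [stiffness] in the new record is the same pointer as in [s];
     the matrix itself is not copied. *)
  { s with residual = r; step = s.step + 1 }

let () =
  let s0 = { stiffness = Array.make_matrix 1000 1000 0.0;
             residual  = Array.make 1000 0.0;
             step      = 0 } in
  let s1 = advance s0 (Array.make 1000 1.0) in
  (* [==] is physical equality: the old and new states share the
     same underlying matrix, so [s0] is still a cheap snapshot. *)
  assert (s1.stiffness == s0.stiffness);
  assert (s1.step = 1)
```

The update itself is O(number of fields in the record), not O(size of the matrices), which is why keeping old states around stays cheap.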
Technically, I can write efficient code in either OO or functional languages, but I've found that for scientific software, meaning PDE solvers or optimization solvers, it's easier to do that in a functional language because the language encourages it. And, no, I'm not ignorant of the issues with writing low-level dense linear algebra solvers. Those solvers I do write in C, and then link in using a foreign function interface. Basically, the right tool for the right job.
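For what that FFI boundary looks like in OCaml, here's a sketch of a binding to a hand-written C kernel. Everything here is hypothetical: the names `dense_dot` / `caml_dense_dot` are invented, and the C stub in the comment would live in a separate file compiled and linked alongside the OCaml code.

```ocaml
(* Hypothetical binding to a hand-written C kernel via OCaml's C FFI.
   The matching C stub, compiled and linked separately, would look
   roughly like:

     #include <caml/mlvalues.h>
     #include <caml/alloc.h>

     CAMLprim value caml_dense_dot(value vx, value vy) {
       double acc = 0.0;
       mlsize_t n = Wosize_val(vx) / Double_wosize;
       for (mlsize_t i = 0; i < n; i++)
         acc += Double_field(vx, i) * Double_field(vy, i);
       return caml_copy_double(acc);
     }
*)
external dense_dot : float array -> float array -> float = "caml_dense_dot"

let () =
  (* This links only once the C stub is provided; shown just to
     illustrate the shape of the call from the OCaml side. *)
  Printf.printf "%f\n" (dense_dot [| 1.; 2. |] [| 3.; 4. |])
```

The high-level solver logic stays in OCaml; only the hot dense kernels cross into C.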
Funny, the choice of laziness as an example of functional superiority.
First, functional != lazy evaluation. We can once again thank the Haskell converts for further trying to redefine what "functional" means to match Haskell's feature set (which began with purity... I'm waiting for Hindley-Milner to gradually become a requirement).
In fact, laziness in the context of OOP is a very natural fit, as the paradigm encourages encapsulation and state hiding. Build the right interface (basically one that looks like consuming items from a stream) and you can swap out implementations largely at will.
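In OCaml terms (the language used earlier in the thread), the standard `Seq.t` type is exactly such a "consume items from a stream" interface: eager and lazy producers plug into the same consumer without it knowing which it got. A minimal sketch, with invented function names (`Seq.take` needs OCaml ≥ 4.14):

```ocaml
(* An eager list and a lazy, infinite sequence exposed through the
   same stream interface, [Seq.t]. *)
let eager_squares n =
  (* Fully computed up front. *)
  List.to_seq (List.init n (fun i -> i * i))

let lazy_squares () =
  (* Infinite; nothing is computed until the consumer pulls. *)
  Seq.unfold (fun i -> Some (i * i, i + 1)) 0

(* The consumer neither knows nor cares which producer it was given. *)
let sum_first k s = Seq.fold_left ( + ) 0 (Seq.take k s)

let () =
  assert (sum_first 4 (eager_squares 10) = 14);  (* 0 + 1 + 4 + 9 *)
  assert (sum_first 4 (lazy_squares ()) = 14)    (* same consumer, infinite stream *)
```

Whether laziness is a "functional" feature or just good interface design, the swap is mechanical once the stream interface is in place.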
Second, laziness is the source of a huge number of performance issues in Haskell. Show me a high-performance Haskell program and I'll show you one whose developer probably banged their head against a space leak and found themselves littering strictness annotations (`!`) throughout their code.
I stopped reading after this:

> After all, several factors key to the concept have indubitable costs – garbage collection is a must

I'm trying to avoid being snarky, but all I really want to point out is that if one doesn't understand the benefits and drawbacks of GCs, one's opinion on the performance of FP might not be very accurate.
> garbage collection is a must
Has anyone done FP in Rust? What was memory management like?