I thought a consensus had emerged that function overloading was a bad idea for a while now? Even in strongly typed languages, it pushes that extra bit of cognitive load onto the human reader. It also complicates things for tooling. In loosely typed languages it's hard to see the need. As somebody else mentioned here, variable args and kwargs are the more pythonic way to address such concerns. If you want different behaviour for different args, you can do that explicitly.
I guess this article is a fun discussion and a nice comparison of language features, maybe I'm taking it too seriously.
> I thought a consensus had emerged that function overloading was a bad idea for a while now?
Hm, really? I didn't get the memo :-) After all, there is no semantic difference between function overloading and varying behavior based on args and kwargs.
You can trivially compile any set of overloaded functions into a single function with a bunch of if/else statements at the top level, without any change to the caller side. Basically, you need either variable arguments or function overloading to support a dynamic API. Nearly all languages support either one, many support both. Why is that a bad idea?
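That "compile into one function" point is easy to sketch in Python; the `greet` function and its two pretend overloads are made up for illustration:

```python
def greet(who):
    # greet(name: str) and greet(user_id: int), merged into a single
    # function with the dispatch written out as if/else at the top.
    if isinstance(who, str):
        return f"Hello, {who}!"
    elif isinstance(who, int):
        return f"Hello, user #{who}!"
    raise TypeError(f"unsupported argument type: {type(who).__name__}")

print(greet("Ada"))  # Hello, Ada!
print(greet(42))     # Hello, user #42!
```

The caller side looks exactly as it would with two overloads; only the dispatch has moved inside the function.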
Languages like Elixir encourage heavy function overloading (to the point that some people replace nearly every potential if/case by a bunch of overloaded private functions), and I don't think its ecosystem is hurting for it.
Sidenote: I also really like how TypeScript does function overloading: you can only define one function implementation, but you can give it multiple overloaded typed signatures. This lets you say stuff like "if the first parameter is a string, then the second one must be a number". But you still have to implement that top-level if/else by hand, like you would in JavaScript. It's nice and pragmatic!
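Python has since grown the same pragmatic design in `typing.overload`: several typed signatures, one hand-written implementation. A small sketch (the `repeat` function here is made up):

```python
from typing import overload

@overload
def repeat(x: str, times: int) -> str: ...
@overload
def repeat(x: int, times: str) -> str: ...

def repeat(x, times):
    # The single real implementation. The overloaded signatures above
    # are only seen by the type checker, so the branching on the first
    # parameter's type still has to be written by hand, TypeScript-style.
    if isinstance(x, str):
        return x * times
    return times * x

print(repeat("ab", 3))  # ababab
print(repeat(2, "xy"))  # xyxy
```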
I think it depends entirely on whether the overloads perform the same conceptual action. If I have a "send value" function that transmits over an existing network connection, I can have overloads for bool, int, string, etc. Each does the same action, but for a different type.
On the other hand, overloads can do entirely arbitrary actions. If there is no consistency between the different overloads, then there is no reason for them to share a name.
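For the consistent "send value" case, Python's `functools.singledispatchmethod` expresses the overload set directly; this `Connection` class and its framing scheme are invented for illustration:

```python
from functools import singledispatchmethod

class Connection:
    """Pretend network connection that frames each value with a type tag."""
    def __init__(self):
        self.sent = []  # stand-in for a real socket

    @singledispatchmethod
    def send_value(self, value):
        raise TypeError(f"can't send a {type(value).__name__}")

    @send_value.register
    def _(self, value: bool):
        self.sent.append(("bool", b"\x01" if value else b"\x00"))

    @send_value.register
    def _(self, value: int):
        self.sent.append(("int", value.to_bytes(8, "little", signed=True)))

    @send_value.register
    def _(self, value: str):
        self.sent.append(("str", value.encode("utf-8")))

conn = Connection()
conn.send_value(True)
conn.send_value(42)
conn.send_value("hi")
print([tag for tag, _ in conn.sent])  # ['bool', 'int', 'str']
```

Every "overload" performs the same conceptual action, just specialized per type. Note that `bool` must be registered even though it subclasses `int`, or booleans would be sent as integers.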
On one hand I agree, overloading isn't great. On the other hand, having a `def func(*args, **kwargs)` is a pain in the ass for everyone involved (people & IDEs): you have no idea what the args or kwargs could be without reading the source.
If you can get away with just a bunch of named kwargs after the arguments that is fine, but I'd take overloading over the `args, kwargs` garbage any day, even if that is the more "pythonic" way.
I think the kwargs approach is fine when you're reading your code. When you're writing, I think you'll always have to consult the docs or headers. In a strongly typed language the IDE can pick up the hint, but in a more dynamic language like Python it can get confused.
I fully agree that named kwargs is perfectly acceptable, but this is the version I think is horrible:
    def func(*args, **kwargs)
is valid and completely ambiguous: you have no idea what it's doing unless you read the source and track down how it branches. IDEs are stopped dead in their tracks, as they aren't going to parse the logic to figure it out.
And yes it's not the most common approach, but I have seen it enough times to despise it. This overloading approach removes the need for it so I'm all for that.
    from functools import singledispatch

    @singledispatch
    def myfunc(arg):
        print("Called with something else")

    @myfunc.register
    def _(arg: int):
        print("Called the one with int")

    @myfunc.register
    def _(arg: str):
        print("Called the one with str")

    myfunc(123)
    myfunc("123")

Everything works as you'd expect, and the IDE's type hints are accurate as long as you actually put the correct type hints on the registered implementations. (Note the fallback takes any argument; annotating it with Union[int, str] would be misleading, since the int and str cases go to the registered versions.)
Like most things, function overloading can be used for good or bad effect. You could for example implement something like multiple dispatch, which seems to be trending somewhat currently (in e.g. Julia and modern C++).
It adds cognitive load when done badly, but reduces it when done well and consistently. The knee-jerk dismissal of different approaches is probably the main reason why programming as an art has been by and large stagnant for decades.
Historically in C++ it was considered more performant to dispatch function calls on the base type of their arguments, not the concrete types.
Consider for simplicity member functions A::f(x...) in C++ as free functions where the first argument is fixed to be of type A, as in f(A, x...). If you want to emulate multiple dispatch for just 2 arguments f(A, B), you already find yourself in visitor pattern-land.
And the other issue with C++ is that it does not allow open functions. In the visitor pattern you necessarily have to implement f(A, B) for _all_ subtypes of A and B.
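For two arguments you can also hand-roll an open dispatch table keyed on the concrete types of both arguments; here is a Python sketch with hypothetical types, which avoids the closed-world visitor at the cost of doing the lookup yourself:

```python
class Animal: pass
class Dog(Animal): pass
class Cat(Animal): pass

class Color: pass
class Brown(Color): pass
class White(Color): pass

# Open double-dispatch table: anyone can register new (Color, Animal)
# pairs later, unlike a visitor, which fixes the set of cases up front.
_handlers = {}

def register(color_type, animal_type):
    def decorator(fn):
        _handlers[(color_type, animal_type)] = fn
        return fn
    return decorator

@register(White, Cat)
def _(color, animal):
    return "Hello white cat"

@register(Brown, Dog)
def _(color, animal):
    return "Hello brown dog"

def f(color, animal):
    # Dispatch on the pair of concrete runtime types.
    return _handlers[(type(color), type(animal))](color, animal)

print(f(White(), Cat()))   # Hello white cat
print(f(Brown(), Dog()))   # Hello brown dog
```

This only does exact-type lookup; a real multimethod library would also walk the base classes to find the most specific registered pair.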
In Julia open functions + multiple dispatch make life quite a bit easier than in C++:
    julia> abstract type Animal end

    julia> abstract type Color end

    julia> struct Dog <: Animal end

    julia> struct Cat <: Animal end

    julia> struct Brown <: Color end

    julia> struct White <: Color end

    julia> f(::White, ::Cat) = println("Hello white cat")

    julia> f(::Brown, ::Dog) = println("Hello brown dog")

    julia> for (color, animal) in [(White(), Cat()), (Brown(), Dog())]
               f(color, animal)
           end
    Hello white cat
    Hello brown dog
Note that the array type here is Vector{Tuple{Color,Animal}}, which would correspond to type erasure in C++.
Lastly, the modern C++ version with std::variant + std::visit is probably not really multiple dispatch, since it has a union storage type under the hood.
I tend to think of C++ function templates as having (static) multiple dispatch. With runtime polymorphism you do end up with visitors or some other hackery.
In Julia, function overloading is used instead of object-oriented programming, and it's actually quite a powerful abstraction for mathematical manipulation.
Just as an example, automatic differentiation libraries are written using function overloading (in which the arguments of the overloaded functions are a tuple of a value and its derivative).
Another example is CUDA array support. And the best thing is that you can combine these two libraries (and many others that take advantage of operator overloading).
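The dual-number trick mentioned above is small enough to sketch directly; here is a toy forward-mode AD type via operator overloading (not how any particular library spells it):

```python
class Dual:
    """A value paired with its derivative: a toy forward-mode AD type."""
    def __init__(self, value, deriv):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        # sum rule: (u + v)' = u' + v'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        # product rule: (u * v)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def poly(x):
    return x * x + x  # works unchanged on floats or Duals

x = Dual(3.0, 1.0)  # seed with dx/dx = 1
y = poly(x)
print(y.value, y.deriv)  # 12.0 7.0
```

Because `poly` only uses `+` and `*`, overloading those operators is enough to differentiate it without touching its source; that composability is the same mechanism that lets the Julia libraries combine.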
Concepts/traits/type classes, or whatever they are called in your language of choice, are all forms of overloading (more specifically, ad-hoc polymorphism) and are generally understood to be A Good Thing.
At least in C++, most forms of overloading I see are there to enable generic programming.
Of course nobody stops you from giving functions that do different things the same name, but then again nobody stops you from naming functions badly in the first place. As usual, with more power comes more responsibility.