I don't understand his emphasis on sessions in TCP and UDP. A session requires nothing more than a "unique" number to identify it. You can certainly have sessions in UDP: just attach a session number to every message belonging to the session. And you can certainly be sessionless in TCP: just close the connection after each message. An HTTP request has no session by definition, and it sits on top of TCP.
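A minimal sketch of that point: a UDP "session" is nothing more than a unique number the sender attaches to every datagram. The names here (`make_msg`, `parse_msg`) are hypothetical, not from any real protocol.

```python
import socket
import uuid

def make_msg(session_id: bytes, payload: bytes) -> bytes:
    # 16-byte session id, then the payload
    return session_id + payload

def parse_msg(data: bytes):
    return data[:16], data[16:]

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
session = uuid.uuid4().bytes              # the "unique number"
for payload in (b"cmd1", b"cmd2"):
    send_sock.sendto(make_msg(session, payload), ("127.0.0.1", port))

# Both datagrams carry the same session id, so the receiver can group
# them into one logical session.
for expected in (b"cmd1", b"cmd2"):
    sid, payload = parse_msg(recv_sock.recvfrom(4096)[0])
    assert sid == session and payload == expected

recv_sock.close(); send_sock.close()
```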
UDP is certainly faster than TCP, no question about it, because of the different levels of quality of service. However, coding for both is similarly simple given the same requirement of short, receive-only messages. The TCP session has nothing to do with the complexity; the complexity comes in how to run process() in parallel.
A UDP loop has similar steps, minus the accept and close. The complexity of using threads or async to run process() is the same for UDP and TCP.
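A sketch of the comparison, assuming the server reads one short message per client (the handler name is hypothetical):

```python
#   TCP: accept() -> recv() -> process() -> close(), repeated
#   UDP: recvfrom() -> process(), repeated (no accept, no close)
import socket

def udp_serve_n(sock: socket.socket, process, n: int) -> None:
    # The entire UDP loop: receive a datagram, hand it to process().
    for _ in range(n):
        data, addr = sock.recvfrom(4096)
        process(data)

handled = []
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"cmd", ("127.0.0.1", port))
udp_serve_n(sock, handled.append, 1)
assert handled == [b"cmd"]
sock.close(); client.close()
```

Whether process() runs inline, on a thread, or on an event loop is an independent choice, and the same choice exists for the TCP version.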
For a remote-command protocol with normal data volume, the simple TCP loop is more than adequate. Just set a reasonable MAXCONN to queue up client connection requests; the server can drop connections when there are too many requests, just as UDP drops packets.
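A sketch of that simple TCP loop, with MAXCONN as the listen() backlog; pending connects queue in the kernel, and beyond the backlog they are refused, much like UDP dropping packets.

```python
import socket

MAXCONN = 16

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(MAXCONN)            # queue up to MAXCONN pending connects
port = srv.getsockname()[1]

# A client connects and sends one short command; the handshake completes
# in the kernel even before accept() is called, thanks to the backlog.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"play 60")
client.shutdown(socket.SHUT_WR)

# One iteration of the server loop: accept, read a short message,
# process, close.
conn, _ = srv.accept()
command = conn.recv(4096)
conn.close()
assert command == b"play 60"
client.close(); srv.close()
```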
Edit: I don't believe locking can be avoided in the Sonic Pi server when handling concurrent incoming commands, whether it's written in Erlang or not. Sonic Pi, I believe, has a single audio device, which makes it a shared resource. Concurrent access to a shared resource has to be managed with a lock somewhere along the call path. In that case a single-threaded TCP server is perfectly fine, serving as the lock as well.
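The "single thread as lock" idea can be sketched like this: all commands funnel through one consumer, so only that thread ever touches the shared resource and no explicit lock appears in the handler. The names (audio_log, cmds) are illustrative, not Sonic Pi's actual internals.

```python
import queue
import threading

audio_log = []          # stands in for the single audio device
cmds = queue.Queue()    # thread-safe; many producers, one consumer

def worker() -> None:
    # Only this thread ever touches audio_log, so no explicit lock
    # is needed around the shared resource.
    while True:
        cmd = cmds.get()
        if cmd is None:             # sentinel: shut down
            break
        audio_log.append(cmd)

t = threading.Thread(target=worker)
t.start()

# Any number of client handlers could put() concurrently; the queue
# serializes them, just as a single-threaded accept loop would.
for c in ("play 60", "play 64", "stop"):
    cmds.put(c)
cmds.put(None)
t.join()
assert audio_log == ["play 60", "play 64", "stop"]
```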
A fragmented IP datagram is re-assembled at the IP layer before it is handed up to the UDP or TCP layer. If it can't be re-assembled, the datagram is considered lost.
UDP is unreliable and has a small packet size. The TCP code is emulating that simple requirement. Why expand the requirement? Looping to read fully would block on one connection; one rogue client could hold up the whole server.
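A sketch of that point: read one bounded chunk per connection instead of looping until some length arrives, and add a timeout so a rogue client that sends nothing can't stall the server. The function name is hypothetical.

```python
import socket

def read_short_message(conn: socket.socket, limit: int = 512,
                       timeout: float = 1.0) -> bytes:
    conn.settimeout(timeout)
    try:
        return conn.recv(limit)  # one read, whatever has arrived so far
    except socket.timeout:
        return b""               # rogue/slow client: give up, move on

# Demo with a connected socket pair (no network needed).
a, b = socket.socketpair()
b.sendall(b"short command")
assert read_short_message(a) == b"short command"

nothing = read_short_message(a, timeout=0.1)  # client sent nothing more
assert nothing == b""
a.close(); b.close()
```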
That's correct. A send call could be made with a 1 GB buffer, and the stack would have to break it up into many packets on the wire. That's the nature of TCP. Again, sending large messages expands the requirement beyond what UDP is capable of, while we were striving to emulate UDP's short, unreliable messages.
He didn't talk about performance at all. If he had said TCP is slower than UDP, fine; that's a perfectly valid reason to use UDP.
But he was claiming that TCP imposes the notion of a session on the application while UDP doesn't, which is false. If he meant the TCP connection itself as the session, why did he lead the discussion to managing sessions with locks and threads in the application? Then he talked about the ease of handling sessions in Erlang while other languages have a hard time, which was an exaggeration. Session management is a solved problem, in many languages, by many people.