As someone who had to maintain two applications written entirely using the FRP paradigm (Rx in Kotlin/Swift with a heavy focus on FRP principles), I am fascinated by the idea, but I absolutely hated the experience.
Writing behaviour flows can result in beautiful blocks of easy-to-understand operations. However, as these get more complex and you need to combine multiple data streams, the logic ends up scattered all over a module.
Refactoring data-flows that go through multiple modules is a huge hassle. Sometimes we would spend hours just refactoring data passing, wrapping and unwrapping, and the tests surrounding modules, because we needed to pass some additional values.
It doesn't help that you have to set up all behaviours at setup time, which means the behaviour code is mixed with one-time setup code. This regularly confused people working on the projects as to what runs at startup and what runs per event.
Debugging itself was mostly hampered by the fact that the libraries we used didn't provide adequate tools for the job, but even if they had, it would still have been a lot more difficult to reason about compared to something like async/await based code or callback chains.
I can imagine FRP works better in purely functional languages, but implementing FRP paradigms in general-purpose languages - especially when interfacing with non-functional code, which is often necessary - has led to nothing but trouble for me.
Huh, my experience with RxJava is (partially) the opposite of yours. Partially because I've seen it used poorly in one project, and it was quite horrible, but that time it was more due to misuse of Rx than the fault of Rx itself.
The second time I used it, in a big commercial project, it was used well, and that remains the best codebase I've ever worked with. Super easy to reason about, extremely performant, easily the cleanest asynchronous codebase I've ever seen.
I do believe surrounding tooling is important though. Part of why that codebase was so clean was that it used a framework that integrates very well with RxJava (Micronaut).
That said, I've never worked with await/async in any larger project, so I can't really compare the two fairly.
I am a big fan of FRP with Java and RxJava/Reactor. When I learned about it three years ago, I practically got drunk on it.
I think the issue is that the more power you get, the more vigilant you have to be and the more effort you have to put into ensuring that power is used wisely. The first project I joined that used RxJava learned this lesson very painfully.
In my projects I enforce standards on how APIs are constructed and expected to behave, so that composing large reactive systems doesn't quickly get out of hand.
Another problem is the steep learning curve for people who learned mostly to copy/paste code from Stack*. You can very quickly create what would normally be a very complex application with just a few lines of Reactor, but it is not free -- you still have to understand what it all does and how it all works, or you will face the consequences at some point.
I agree that the surrounding framework (and, I'd like to add, additional libraries) has a huge impact on this experience.
The codebase I have worked on might be a bit of an edge-case too, as it was used in realtime audio/video communication, so ms-timing, order of async operations and keeping a consistent state were absolutely necessary.
Using Rx for such complex, long-lived business logic is probably a far cry from using it for a cleanly structured SPA, for example. I've used a lot of reactive concepts in applications that simply fetched and displayed data, and in those cases I really enjoyed it.
FRP is like OOP. For some problems, there is a level of granularity where it's an absolute killer, and nothing gets even nearly as effective. But if you write it with too much granularity, it will completely destroy your code. (And with too little granularity, it will be useless.)
This phrase alone: "data-flows that go through multiple modules" is a very good indication that you broke things down too much, and should have ditched the FRP abstraction on a higher level.
I absolutely agree. We took over the codebase from another team, and they really took clean-code abstraction to the next level while using every last detail the FRP libraries provided.
It's better when you use a reasonable level of abstraction and think hard about maintainability of FRP code beforehand.
> As someone who had to maintain two applications written entirely using the FRP paradigm (Rx in Kotlin/Swift with a heavy focus on FRP principles), I am fascinated by the idea, but I absolutely hated the experience.
I had a similar experience maintaining and reviewing an Angular + TypeScript application at work. I was fascinated by Rx and was looking forward to seeing it in action.
In practice Angular exposes everything from its internal framework and interfaces as Observables, making RX everywhere the default.
The simplest things became mindbogglingly complex and asking “what does this code actually do” became the most frequent code-review question.
When we were unable to get the application stable, I made a renegade effort to eliminate all Observables and replace them with standard, well-understood Promises instead.
The result was fewer lines of code, clearer code, fewer bugs and more tests as bootstrapping/mocking a Promise-based API is significantly less effort than doing the same for its Observable-based counterpart.
It was a terrible experience all around, and realistically (given the choice) I'll never touch an Angular or RxJS based app ever again. My team will just use React for any new development from now on.
Unfortunately I didn’t have the time or the management buy-in for doing that (a full rewrite), so instead I went for the second-best option, which was making the code we had at least manageable.
I moved everything from RxSwift to Combine back with iOS (13?) and was happy with the experience.
As for the “complexity”, I largely solved this problem by limiting the dimensionality of stream handling to 1, chaining streams via explicit function calls so that I was never map/flatMapping stream to stream to stream, etc.
This made everything much easier to understand in my projects, and also test etc.
But you’re right: now that async/await is in Swift proper, I’m not sure it makes much sense anymore. There’s still some functionality I need that seems to require external packages (looks like AsyncSequence might help once I can use iOS 16 as a min spec?). Beyond the basic stuff, it’s essential that I can:
1) Define explicit timeouts that throw typed errors for every step of the async process
2) Have the ability to queue up an array of async operations (each with their own timeouts), dispatch them in a serial queue, then buffer their results to emit a single array, itself with a global timeout
3) While doing all of the above, I need a way to terminate async actions “in flight” upon receipt of some signal, so that a long running async queue can be aborted if necessary
Once all of the above is possible I’ll probably go back and rip out Rx (Combine) in a few years for a pure async-await implementation, but that will take forever
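For what it's worth, those three requirements map fairly directly onto structured-concurrency primitives in other languages. Here's a minimal sketch in Python's asyncio, purely for illustration (the names `run_step`, `run_serial`, etc. are made up, not any Swift or Combine API):

```python
import asyncio

class StepTimeout(Exception):
    """Typed error raised when a single step misses its deadline (req 1)."""

async def run_step(make_coro, timeout):
    # (1) explicit per-step timeout that throws a typed error
    try:
        return await asyncio.wait_for(make_coro(), timeout)
    except asyncio.TimeoutError as exc:
        raise StepTimeout() from exc

async def run_serial(steps, global_timeout):
    # (2) dispatch a queue of async operations serially, each with its own
    # timeout, buffer the results into one list, under a global deadline
    async def drain():
        return [await run_step(make, t) for make, t in steps]
    return await asyncio.wait_for(drain(), global_timeout)

async def main():
    async def work(n):
        await asyncio.sleep(0.01)
        return n * 2

    results = await run_serial([(lambda: work(1), 1.0),
                                (lambda: work(2), 1.0)], 5.0)

    # (3) terminating "in flight" work on a signal: wrap the whole
    # queue in a Task and cancel it when the abort signal arrives
    task = asyncio.ensure_future(run_serial([(lambda: work(3), 1.0)], 5.0))
    await asyncio.sleep(0)   # let the queue start running
    task.cancel()            # the abort signal
    try:
        await task
        cancelled = False
    except asyncio.CancelledError:
        cancelled = True
    return results, cancelled

results, cancelled = asyncio.run(main())
```

The key design point is that each step is passed as a factory (`lambda: work(1)`) rather than a live coroutine, so a cancelled queue never leaves half-constructed work behind.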
Requires Xcode 14, which is still in beta and cannot push to the App Store.
Also, Apple fucked their back port badly. It’s supposedly fixed now, but if you built an app that used async/await anywhere in Xcode 13.x, users running iOS 12/13/up to 14.5 would crash on launch.
So I personally wouldn’t trust it, and would instead just push to raise your iOS minimum. I’ve had no problems requiring iOS 15 in my projects, over 1M installs.
swift-async-algorithms can be used in earlier versions of Xcode if you check out an earlier tag. They just moved forward more quickly.
Async / await's backport no longer causes crashes in Xcode 13.4, but it does have the downside of being the Swift 5.5 runtime, which has several unfixed issues, unfortunately.
Can't find them at the moment (check the Swift forums), but the two big issues I've seen reported are both in TaskGroup. One is a runtime crash under certain conditions, the other a major performance bottleneck when adding thousands of child tasks to a group. Both have workarounds, but they can hit you when using back deployment or any OS that has the 5.5 runtime (iOS 15 - 15.3 for instance). Wouldn't be showstoppers for me, just something to be aware of.
I have the same experience. I remember when I was first introduced to Rx (JS in my case), it was really fun to compose async flows. I kept having to explain it to others and they would sort of scratch their heads. Eventually we stopped using it. A year later I found some code still using it, and to my own amazement, I was having trouble understanding it. Lightbulb. It's like Perl - it's write-only.
> Refactoring data-flows that go through multiple modules is a huge hassle. Sometimes, we would spend hours just refactoring data-passing, wrapping and unwrapping and tests surrounding modules, because we needed to pass some additional values.
Same experience. Functional/declarative code is more elegant, but a small change in desired behavior might require completely different structure of the code (or an ugly hack). Meanwhile imperative code changes much less even if it's less elegant.
That's basically my Akka (akka-streams/Alpakka specifically) experience. When it works it's great and terse. If it occasionally hangs in the middle of things good luck finding that one timeout parameter you need to adjust. Also, not all people who actually use Actors bother to draw a large FSM diagram.
I’ve been writing FRP code in JS for years now. Ease of refactoring and total clarity about what is happening (assuming you are familiar with the FP methods, etc.) are the biggest advantages.
I agree the code can balloon in size, and gets a bit hard to organize at some point, but the self-documenting aspect of it is amazing.
I combine an FRP framework (bacon.js) with Ramda.
I’ll admit I’ve had trouble working with things like React because I’m so used to wiring everything up explicitly and there being no “magic” behavior.
The explicitness is probably what makes it seem so unwieldy as the code/team scales
Out of curiosity, a lot of people are praising the "self-documenting aspect of [Reactive Code]", and I definitely see why.
A single block of statements that transform and handle data in a specific way is indeed almost self documenting, which is really cool.
However, how did this work for you with more complex combinations and transformations?
For example, our codebase had one module that took inputs from multiple different sources (low-level network handling based on a library) and generated a consistent state out of them. To keep the code performant, we introduced some intermediary values so that certain transformations had to be run less frequently (this was necessary).
In the end, we ended up with a ~500LoC module. I wrote the same later with async/await and split everything into ~20 functions, which worked really well.
The reactive version, however, was just a bunch (~15) of blocks of transformations, which were somewhat self-explanatory in themselves, but it was almost impossible to trace the flow of data through the whole module without drawing it up.
Even just naming a block (a function name) helped a lot. Sure, you can document blocks of reactive code, which is what we ended up doing, but I felt like the self-documenting aspect fell apart when I had to write a sentence above every block explaining why an intermediary transformation was necessary.
For me, I think it comes from the ability to easily split a stream at any point and create intermediate variables. I get pretty obsessive about the variable naming (as I'm sure many do here) but a good variable name is crucial to the self-documenting concept. Also, creating more intermediate values than strictly necessary, creates more opportunity to flesh out the explanatory breadcrumb trail.
Because of the enforced formality of how data flows around, the ability to split these streams up is trivial in a way that was a revelation to me. In the past, I'd be far less confident that I wasn't going to mess something up unintentionally, to the point where I would maybe not bother.
And yeah, I think it's totally valid to take larger blocks of streams/transformations and wrap in a named function that takes a stream and returns a stream.
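That "named function that takes a stream and returns a stream" idea is language-agnostic; here's a minimal sketch using Python generators as stand-ins for streams (all names are made up for illustration):

```python
# A "stream" here is just an iterable of values. Each named function
# takes a stream and returns a stream, so a pipeline reads like prose
# and the function names do the self-documenting.

def without_blank_lines(lines):
    return (line for line in lines if line.strip())

def parsed_as_ints(lines):
    return (int(line) for line in lines)

def running_totals(numbers):
    total = 0
    for n in numbers:
        total += n
        yield total

raw = ["1", "", "2", "3", "  "]
totals = list(running_totals(parsed_as_ints(without_blank_lines(raw))))
```

Each intermediate name (`without_blank_lines`, `parsed_as_ints`) is exactly the kind of explanatory breadcrumb the parent comment describes.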
Over the years I've gotten out of the habit of always using FRP for every problem. There are many cases where async/await is totally fine and simple. I still always use Ramda, which gives me the same self-documenting qualities (which is really a function of the strictness of FP, not so much the R in FRP, now that I think about it).
But whenever I have to coordinate multiple inputs with different time delays (i.e. multiple AJAX requests or UI input) or process a bunch of data where each item creates its own stream of fetching and processing, the "reactive" bit is extremely handy.
i think an issue is that most "apps" are just CRUD and the "reactive" parts end up just being used to propagate the READ part to the ui, but this can be handled by promises most cases (ime) just fine
the other part is the UPDATE from the ui to the model, again this can be handled by simple callbacks and refresh the ui again with promises
there are some nice features of these libraries but most of it goes unused because most apps don't actually need all that power/complexity (sadly i suppose)
lastly, (ime) many devs also don't seem to know how to program data flow in a single direction which gets really hairy because now everything is reacting to everything and fixing one thing breaks another....
Could you elaborate? I don't see why, aside from the "refactoring dataflows" part but without static typing we would have just missed some of them and introduced runtime errors, which would have been even worse.
So, I'll start with a disclaimer - I've only futzed around with FRP in Haskell, and never for money. Somewhere on an old computer I've got some code to simulate a pool table. But it has been a few years.
The pool table is pretty fun to model. You've got events, which are collisions, and you've got an evolving state that you can sort of fast-forward through thanks to elementary physics. If you want to sample at a given time, you can. You can calculate out the time to the next event (a collision with another ball or a bumper).
For me, when I got to multiple objects interacting at different times, it got really tough to store the state of the world (like after the break). I didn't try to support angular momentum - but it would have been fairly straightforward to add.
What I've been thinking about lately is algebra-driven design (https://algebradriven.design). That'd really help with nailing down the data structures and operations on them. That book has a couple of really gnarly sequenced-state-with-constraints examples that came from real production code.
There's no closed-form solution to some problems (a lot! most problems!). So I think really nailing down a data structure, and providing operations on that data structure, is essential. You, well, I need to create an environment where I can create tactics for solving special cases, without the code for a tactic getting smeared all over the code base.
I'm a dilettante, I've got very little experience on the JS side of things. But from pure Haskell, I'm a reasonably sophisticated amateur. So take my opinion in that context.
I think the automated tooling for finding laws presented by Sandy Maguire really has a lot of potential for locking down those interactions - and opening up new ways of decomposing them, to make more sophisticated descriptions of interactions. I'm not clever enough to find a closed form, but given some state and a time delta, I can work out a bunch of special cases - the interactions. Keeping that tidy is tricky. That algebra approach seems really promising.
Again. I'm a dilettante. Take my opinion in that context. There are probably horrible corners where all this falls down. Anyway, the day is starting and it's almost time to go wrestle yaml.
Having a state algebra can also help with speculative execution, as when doing FRP-like complex event processing on human input device streams, while preserving responsiveness despite diverse latencies.
Consider doubleclick. After a click, rather than pausing for 30 frames to see if a second click eventually shows up, you can act immediately, and if there's later a second click, retract and rerun with a doubleclick event.
With incremental speech recognition, interpretation of past words can change as a sentence progresses, so "Launch the missiles!" can replay as "Lunch is mussels! In butter!".
Keyboard and ML-based visual tracking have very different latencies. So when combining them (e.g., which finger pressed the key, where on the keycap, and what were any other vision-derived key modifiers), you can pursue a guessed-at transition immediately, and then correct if needed when tracking finally coughs up the hand pose.
Edit: Hmm. Well... having "retractable state transitions". "Algebra" has other implications. Edit edit: s/reversible/retractable/ - sorry.
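To make the doubleclick example concrete, here's a toy model of speculate-then-retract: act on a click immediately, and if a second click arrives inside the window, retract the speculative action and replay the pair as a doubleclick. (Entirely illustrative - a real system would retract downstream effects, not just list entries.)

```python
def interpret_clicks(events, window=3):
    """Toy speculative interpreter. `events` is a list of (time, kind)
    tuples; returns a list of ("emit"|"retract", action) pairs."""
    out = []
    last_click = None
    for t, kind in events:
        if kind != "click":
            continue
        if last_click is not None and t - last_click <= window:
            out.append(("retract", "click"))      # undo the speculation
            out.append(("emit", "doubleclick"))   # rerun with the revised event
            last_click = None
        else:
            out.append(("emit", "click"))  # act now, no 30-frame pause
            last_click = t
    return out
```

A lone click just emits immediately; two clicks within the window produce an emit, a retract, and the replacement doubleclick - the "retractable state transition" in miniature.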
Yeah. Java Swing early on actually paused the event loop waiting for a second mouse click before returning a double click. If I recall, some platforms would drop those events while paused, others would queue them. I think X dropped them, Windows didn't. This caused some workarounds smeared across the codebase.
It's never going to be easy, and somebody has to pick the "right" answer. With more structure, you can get more consistency. I may not like the answers, but they're consistent answers and so I can reason about them.
Reversible state transitions is a whole other bag of mixed metaphors. But I'd still say, more structure makes it consistent, and then you can reason about it, and then you can maybe solve some of those cases.
From my experience working on the projects mentioned above as well as lots of Elixir code:
Things you can do:
* Use static typing. If not possible, use type annotations or proper pattern matching with a consistent structure (e.g. {:ok, val} | {:error, :reason} in Elixir).
* Keep abstraction at a reasonable level, do not abstract things simply because they might become useful.
* Wrap all values that flow in streams in custom data structures that can easily be expanded. Never (unless you're absolutely sure it never changes) use primitive types.
* Use an FSM or some kind of modelled sequence to document what a module does (we ended up having Mermaid diagram syntax as comments in some of the most complex modules)
* Make sure the language/library you use has good support for testing and especially debugging. Lots of FRP libraries don't or their implementation works against the design of the language which leads to all kinds of odd behaviours.
Things to be aware of:
* FRP enthusiasts often tout that you can use the same flow of logic across languages that provide FRP libraries. You kind of can, but all the functions are named differently and have different interfaces, and it's been a huge mess for us.
* If you interface with libraries that rely on strict threading (e.g. C libraries using pthreads), strap in for some pain. FRP libraries often do what Apple does with GCD, where they kind of abstract threading away from you into queues. That works fine as long as you stay in the FRP world, but falls apart when interfacing with thread-sensitive code.
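The "consistent result structure" and "wrap stream values in custom data structures" advice above can be sketched concretely. Here's a Python stand-in for the Elixir `{:ok, val} | {:error, reason}` convention combined with a wrapper type (all names illustrative):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass(frozen=True)
class Ok:
    value: Any

@dataclass(frozen=True)
class Err:
    reason: str

@dataclass(frozen=True)
class Envelope:
    """Custom wrapper for values flowing through a stream. Adding a field
    later (say, a trace id) gets a default here instead of rippling
    through every consumer, as switching from a bare primitive would."""
    payload: Any
    source: str = "unknown"

def handle(result) -> Optional[Any]:
    # every consumer matches the same two shapes, no surprises
    if isinstance(result, Ok):
        return result.value.payload
    return None  # an Err; the reason stays available as result.reason

ok_result = handle(Ok(Envelope(42, "network")))
err_result = handle(Err("timeout"))
```

Because `Envelope` owns the payload, extending what flows through the stream is a one-line change to the dataclass rather than a refactor of every operator in the chain.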
I'll be adding this to Inflex (https://inflex.io/), I have the design worked out on paper. (But I'll be open sourcing it first and releasing as a desktop app, and then get back to dev.)
It's also helpful to think of FRP in terms of "push" and "pull" (for which there's a related paper by the same chap). This refers to control flow. Behaviours are "pull" i.e. your program has to pull from them. Events are "push" i.e. your program gets pushed to, by some other active agent.
The trick is how one "makes things happen". If you want to pull the latest tweets every 5 seconds, in reverse order, you might have code that looks like:
timer(5).joinWith(latestTweets).map(reverse)
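A rough pull-based analogue of that pipeline, with generators standing in for behaviours/events (the tweet feed and all names here are hypothetical):

```python
# Pull-based sketch: each "tick" the program pulls from the timer,
# joins with the behaviour's latest value, and maps reverse over it.

def timer(ticks):
    yield from range(ticks)        # stand-in for a 5-second timer

def join_with(source, latest):
    for _ in source:
        yield latest()             # pull the behaviour's current value

def mapped(source, fn):
    for item in source:
        yield fn(item)

snapshots = iter([["a"], ["a", "b"], ["a", "b", "c"]])
latest_tweets = lambda: next(snapshots)   # hypothetical feed

pipeline = mapped(join_with(timer(3), latest_tweets),
                  lambda ts: list(reversed(ts)))
feed = list(pipeline)
```

Note that nothing happens until `list(pipeline)` pulls: like the declarative values described below, building the pipeline and running it are separate steps.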
Similar to promises you're always building up more declarative values. It's just that FRP has a clean semantic description. There are laws, and no assumptions about time/imperative escape hatches. It can be quite hard to use it practically for some types of apps; space leaks and cycles are a challenge, and some code can be messy. Research continues.
But for my simpler use-case, it's a very good fit. Spreadsheets use "volatile" cells to side-step the whole issue of interacting with time and the outside world, but it leaves a bad taste in the mouth because it's a hack that makes state implicit. FRP brings a strong mental framework to address this properly, rather than as an afterthought.
The UI is implemented using Halogen, a front end library for PureScript. That handles redisplay when recalculation happens. That’s more like React or Elm.
Rather, I mean exposing an FRP API to people using the Inflex language.
Angular uses RxJS, which is a reactive programming library, but I think it was an error to do so.
The Angular project I am working on is now 5 years old, and the parts of the application that are the least understood are the ones with the most RxJS in them. We even have custom RxJS operators that nobody understands anymore...
The way we do things now is to transform everything we can into promises, because they're easier to work with.
With promises you have a few functions with which you can do everything. With RxJS you have dozens of functions with specific use cases, and most of them look alike. It's too easy to pick the wrong one, and new people on the project need to learn a lot of things to understand the codebase.
I was interviewing some Angular devs and asked the question: what is the difference between a promise and an observable? 80% of the time the answer was "for an observable you need to subscribe to get the result". That shows a clear lack of understanding of RxJS.
Has anybody had a better experience with Angular and RxJS?
Yeah, totally the opposite. I really like RxJS and used it alongside my senior dev to good effect at my last perm for several years. Our junior did find it a bit tricky sometimes though (edit: I don’t mean to imply you weren’t “senior” enough, just an observation that it was a new set of concepts to learn).
We would literally never use promises because we’d become so comfortable with how we (and Angular) managed the lifecycle of observables. And we never ever used async/await after some nasty bugs in previous projects. We never started making our own operators, and I think we never really got too complex with it. RxJS marbles and learnrxjs were often consulted. But the code seemed nice and clean and reliable. Testing was more difficult and harder to understand, but we got there.
We switched to using NgRx for our state quite early in the project after a brief flirtation with observable services, so that probably pushed us further down the observable route. We kept NgRx up to date and found the helper functions really nice so there wasn’t too much boilerplate. Making new selectors and integrating them into components with the async pipe is just so damn easy with NgRx and RxJs. Effects would get mildly stupid in terms of complexity and I did have a habit of hilariously writing “these were your father’s parentheses” at the end of any particularly long set of closing parens... but yeah it seemed to just work and give us relatively few bugs and none that I recall were hard to track down. It was all very smooth.
The only thing I’d ever really complain about on that project other than our build times was MSAL, which I hated with a passion.
We found that rx.js is bad for coordination. I think coordination is a big part of services in Angular UIs: you need to fetch some data, wait for it, ask for a different thing, maybe change some state. Promises and await are great for this, and the await syntax especially is readable (Go channels could be even better). In rx.js you had to nest multiple switchMaps for dependent queries, and for state you either produce state as a stream result or you put some `tap` or `subscribe` with `takeUntil`.
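The "dependent queries" coordination described above does read flat with async/await, one line per step instead of one nested switchMap per step. A minimal sketch with stubbed fetches (shapes and names are invented for illustration):

```python
import asyncio

# Stubbed network calls standing in for real HTTP requests.
async def fetch_user(uid):
    return {"id": uid, "name": "ada"}

async def fetch_orders(user):
    # depends on the result of fetch_user
    return [f"order-for-{user['id']}"]

async def load_dashboard(uid):
    # fetch some data, wait for it, then ask for a dependent thing,
    # maybe change some state -- each step is just the next line
    user = await fetch_user(uid)
    orders = await fetch_orders(user)
    return user["name"], orders

name, orders = asyncio.run(load_dashboard(7))
```

The Rx equivalent would pipe `fetch_user` through a `switchMap` into `fetch_orders`; the logic is the same, but the sequential reading order is what makes the await version easy to review.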
But rx.js shines with more complex user interactions - drag and drops, brushes, interactive forms (Angular has a nice reactive form API). We even put some state machines inside streams, so input signals produced events, and we had a `scan` operator that manipulated the machine.
My personal issue with rx.js is that BehaviourSubject is leaky - you can do anything to it and break its property of 'always has a value'. It is nice if your service is a reactive value that you can inject and transform or render using the async pipe, but you need to be careful about which operators you use on it.
> My personal issue with rx.js is that BehaviourSubject is leaky
RxJS best practice is that BehaviorSubject should be used rarely, if ever, mostly as "internal plumbing" for things like building your own mini-Operators, and that they should never escape an API boundary. If you need to pass a BehaviorSubject to a consumer you always .asObservable() it and the consumer must treat it like a regular Observable (your API boundary is always Observable<T>, never Subject<T>). (BehaviorSubjects are an imperative "back door" that breaks building things the reactive way.) One of my biggest personal problems with the Angular core libraries is how many BehaviorSubjects leak out everywhere in that API design. EventEmitter is a big giant BehaviorSubject. The Routing APIs leak BehaviorSubjects. Angular's "Reactive" Forms leak imperative BehaviorSubjects all over the place. The bad use of BehaviorSubjects by the core libraries leaks to the rest of the Angular ecosystem, and there are so many "Angular best practices" that are "RxJS worst practices" purely from these early API design decisions.
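The `asObservable()` boundary described above can be sketched minimally: the subject stays private to the service, and the API hands out a subscribe-only view. This is a toy Python model of the pattern, not the RxJS implementation:

```python
class Observable:
    """Read-only view: consumers can subscribe but cannot push values."""
    def __init__(self, subject):
        self._subject = subject

    def subscribe(self, fn):
        self._subject._subs.append(fn)
        fn(self._subject._value)  # behavior-subject semantics: replay current value

class BehaviorSubject:
    def __init__(self, value):
        self._value = value
        self._subs = []

    def next(self, value):        # the imperative "back door", kept internal
        self._value = value
        for fn in self._subs:
            fn(value)

    def as_observable(self):
        return Observable(self)

# The service keeps the subject private; the API boundary is Observable only.
_state = BehaviorSubject(0)
state = _state.as_observable()

seen = []
state.subscribe(seen.append)  # receives the current value immediately
_state.next(1)                # only the owning service can push
```

Consumers holding `state` have no `next` to call, so the 'always has a value' property can only be maintained (or broken) in one place.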
> We found that rx.js is bad for coordination.
RxJS is great at "coordination". It requires a different mindset. Angular is awful at helping you get into that mindset. Angular's HttpClient is especially "bad, heavy promises masquerading as Observables" which makes it hard to think of queries/API fetches as events that can return refreshed data over time (streams of fetch results), more similar to those user interactions you saw good results from. There's a lot of useful coordination operators in RxJS beyond `switchMap()` like `mergeAll()` and `concatAll()`. If you think of a stream of request events flowing into a stream of response events flowing into a state machine (possibly with a very simple, similar `scan` to your user interaction model to reduce your state over time, a little like the "redux" pattern [1]), RxJS can be brilliant for "coordination" as data arrives.
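The `scan()` reduction mentioned here is just a left fold exposed over time. A minimal stand-in using `itertools.accumulate`, with an invented event shape for illustration:

```python
from itertools import accumulate

# Redux-ish reducer: fold a stream of request/response events into
# state snapshots over time, one snapshot per event (like scan()).
def reducer(state, event):
    kind, payload = event
    if kind == "refresh":
        return {**state, "loading": True}
    if kind == "loaded":
        return {**state, "items": payload, "loading": False}
    return state  # unknown events leave state untouched

events = [("refresh", None), ("loaded", ["a", "b"]), ("refresh", None)]
states = list(accumulate(events, reducer,
                         initial={"items": [], "loading": False}))
# states[0] is the initial state; each later entry is one scan step
```

In the Observable version the event list would be a live stream and `states` would be emitted incrementally, but the reducer itself is identical, which is what makes this style easy to unit test.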
(Angular kind of sets it up to fail. With how HttpClient works. With how Async Pipe works and isn't the default.)
> or you put some `tap` or `subscribe` with `takeUntil`
This is also where Angular "best practices" and I diverge, and I think this also stems from RxJS "worst practices". RxJS best practice is to use `tap` as infrequently as possible; it's a last-resort escape hatch at best. RxJS best practice is also to `subscribe` as "late" as possible and as infrequently as possible, and to never have a `subscribe` without an `unsubscribe` to clean up resources, including memory. (Which can be very important if you are trying to do everything the reactive way. You can move all your setup into Observables, including the setup and teardown of vanilla JS components.) The overuse of `subscribe` in Angular components seems one of the biggest obvious reasons why so much of the usage of Observables in the Angular ecosystem looks like bloated Promises. (Which is directly a bad example set by the core library's own HttpClient.) (I also think some of Angular's "best practices" uses of `takeUntil` aren't great either. I was taught that Observable completion should "mean something" as it's a key event in the stream, and shutting down Observables early means you miss later events.)
If Angular's template language took Observables directly, without needing an "async pipe", most of those manual subscribes would just vanish in an instant. Most of the needs for `ngOnInit` and `ngOnDestroy` "lifecycle events" disappear. (As Observables have lifecycle events already in subscribe/unsubscribe setup/teardown.) Angular could have not needed Zone.js at all nor its complex "Change Detector" apparatus. Angular could have done smart things in coordinating Observable observations by templates. (A lot of what React's last several major releases have been about in building its Concurrency, Suspense, and related systems out have been about among other things throttling "non-important" DOM updates together to things like requestAnimationFrame and doing very complicated work under the hood to set all that up from VDOM changes. In an Observable world you can pipe a `mergeAll()` through `debounce(0, requestAnimationFrameScheduler)` and get things like that "really easy".)
I wound up trying to encode all of my personal best practices for writing powerful, reactive components in Angular into an opinionated reusable "component framework": https://www.npmjs.com/package/angular-pharkas
I haven't yet found a good way to encode my mindset into "reactive service classes" in Angular, in part because "it feels obvious" to me and isn't really a pattern so much as a mindset - which I know is exactly not obvious, or lots of people would be doing it and Angular's ecosystem would be less full of bad examples. Probably a key place to start, based on the above conversation, is that I almost always wrap any calls to Angular's HttpClient in a "forever Observable". Whatever the input that triggers the HTTP call - a refresh signal or some other input event (Observable) - hide the `switchMap()` inside an Observable of results over time. Most "dependent" data streams are coordinated sometimes as simply as a `combineLatest()`, and others are `scan()` reductions (even full "state machines" in some of those reducers). Everything is returned as Observable<T> and never any Subject<T>. Everything flows into the components, and `subscribe`s are as late as possible (and these days hidden entirely away in "Pharkas" binds). Learn when to `share()` observables (or more often `shareReplay(x)`) and reuse existing Observables rather than build new ones.
I don't know how much that helps. I've been able to wring a lot of good reactive programming out of a massive Angular frontend, but I've been fighting Angular itself (and the terrible debugging/performance experience of the gross, unnecessary Zone.js), and the greater ecosystem of "Angular best practices" the whole way. Every Junior Developer with Angular experience that looks at the code for the first time generally thinks it is readable but that "it looks nothing like Angular I am used to". It's definitely not "Angular best practices" as people are learning them today.
Also, a useful related tip for removing most uses of `tap`: install `rxjs-spy`. It's great. It provides a very simple `tag('some-observable-name')` operator that is a no-op in production builds, and in development builds gives you a `window.spy` toolkit that lets you log events from tagged Observables (or a regex matching tagged Observable names) or even set debugger breakpoints at tags.
[1] Though I tend towards "lots of little observable streams" over combined ball of single state refiltered back into little observables like in the "proper" "redux" pattern or how tools like NgRx try to implement that in the Angular world.
Yes - totally the opposite. In fact, I think the issue with Angular is that they've not finished the work to make Angular RX-everywhere (for example, reactive components need to be manually plugged into Subjects at the moment).
With RxJS you should use it everywhere (observing state, observing component inputs, side effects etc). It's when you use it half-heartedly that you get problems with merging different programming paradigms.
The biggest issue with RxJS that we've found is that some devs have a super hard time getting to grips with the paradigm, and if your project is mostly those types of people, it will end up a disaster.
> With RxJS you should use it everywhere (observing state, observing component inputs, side effects etc). It's when you use it half-heartedly that you get problems with merging different programming paradigms
Yes, yes, yes!
I have been working with Angular professionally for five years and fell in love with RxJS.
If you manage to use it for everything, it really shines.
Your entire codebase becomes declarative and it works beautifully.
The only downside is that it takes some time to get started building from the ground up, but once you do, making changes and adding features becomes trivial.
Try making smallish pipes and comment their purpose. Break them into modular pieces.
On the one hand, I absolutely love working with RxJS on a fairly large Angular app as a side project. It affords me the chance to be clever with code, to reason about coordinating multiple asynchronous streams, and so on. Firestore gives me observables of the queries I run, where the data updates itself, and the user generates events that turn into data mutations. It is all just super great.
On the other hand, I recognize that doing this well and correctly (understanding how to pipe together operators instead of creating lots of intermediate subscriptions manually, which I see a lot in example / stack overflow code) requires a high level of understanding.
It is a whole change in how you think about event pipelines and code structure. I don't think I would want to migrate my day job systems to it, because then you need everyone on the team to develop that understanding. I'm sure they _could_, but working with promises and trad event handlers is a lot simpler, as long as you keep the rest of the data / eventing pipelines simple.
I understand this paradigm within the context of stream processing, but it seems like a weird way of modeling most REST api-based web applications.
I feel like a promise-based model makes way more sense for most simple web applications. Where you have an HTTP request and response, a promise model seems to suffice for most applications communicating with an API layer over HTTP. Modeling it as a stream doesn't make sense to me, and seems like it would over-complicate an otherwise simple mental model.
In most of the simple web applications I have encountered, there are one or more requests made to unique endpoints for data after the document response has completed. No need for handling multiple events from the same endpoint, debouncing, multi-casting, unsubscribing, back-pressure, or whatever else. Those operations make way more sense in the context of stream processing.
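For that kind of app, one Promise per request really is the whole mental model. A sketch (the endpoints and the `fetchJson` helper are invented for illustration; a real app would call `fetch(url).then(r => r.json())` instead of the stub):

```typescript
// Hypothetical helper, stubbed so the sketch runs without a network.
async function fetchJson(url: string): Promise<{ url: string }> {
  return { url };
}

// One Promise per request/response: the whole mental model for a simple
// page load, with no subscriptions, multicasting, or back-pressure.
async function loadPage() {
  const [user, posts] = await Promise.all([
    fetchJson('/api/user'),
    fetchJson('/api/posts'),
  ]);
  return { user, posts };
}
```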
As a developer of real life messy code (as opposed to unreal life?), I have never found ignoring CS to be beneficial; and on the other hand, deciding in advance that CS is not helpful is hubristic: you never know when theoretical insight will become actionable.
On the other hand, who wouldn’t be curious? Like someone who purports to enjoy fast cars but has no interest in understanding internal combustion or electric engines?
"It is sometimes called “functional reactive programming” but this is a misnomer. ReactiveX may be functional, and it may be reactive, but “functional reactive programming” is a different animal. One main point of difference is that functional reactive programming operates on values that change continuously over time, while ReactiveX operates on discrete values that are emitted over time."
At the same time, I see lots of references to FRP on https://reactivex.io/tutorials.html so I am not sure what the actual position of the project is, or the actual status of the technologies (maybe things have changed?).
Reactive programming came from C#. You can look up Erik Meijer and the duality of Observable and IEnumerable. It is discrete, push- and event-based, and has some history with flow-based programming.
FRP comes from Conal Elliott and has a continuous, pull-based aspect, with maybe an additional reactive event-based component on the side.
Then the JavaScript horde discovered Rx, and since functional programming is cool, RP is now FRP.
Indeed. That is why you have:
at : Behavior α → Time → α
which is a pull-based continuous data type. This is a very elegant concept, but it comes with its own set of difficulties if implemented naively (space and time leaks).
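A toy rendering of that signature: in classic FRP, a Behavior is just a function you sample at a point in continuous time (this sketch deliberately ignores the space/time-leak concerns just mentioned; all names are illustrative):

```typescript
// Toy model of classic FRP's continuous, pull-based Behavior.
type Time = number;
type Behavior<A> = (t: Time) => A;

// at : Behavior α → Time → α, sampling a behavior at a point in time.
const at = <A>(b: Behavior<A>, t: Time): A => b(t);

// Example: position under a constant velocity of 5 units per second.
const position: Behavior<number> = (t) => 5 * t;

// Behaviors combine pointwise (an applicative-style lift2).
const lift2 =
  <A, B, C>(f: (a: A, b: B) => C, ba: Behavior<A>, bb: Behavior<B>): Behavior<C> =>
  (t) => f(ba(t), bb(t));
```

Nothing here is a discrete event stream; the value exists at every time `t`, which is exactly the continuous aspect Rx does not model.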
IMO the main problem in the end is/was that not many problems are that conveniently described as continuous. Discrete and push-based (Rx) can do most things, and even that is overused. It makes problems that are otherwise very hard much easier to describe and solve, using the many powerful combinators: https://www.learnrxjs.io/learn-rxjs/operators
On the other hand, you make all those easy problems quite a bit harder, and if you are not disciplined you end up with complex Rx "queries" whose semantics you don't understand.
Elm is not dead. It just prefers a slow release schedule but is still actively worked on in the background.
That said, you might want to check out OCaml for general purpose programming. Super fast compiler, great performance, can target both native and JS.
It is easier to use than Haskell due to defaulting to an eager evaluation strategy (like most languages) instead of laziness, and being generally more pragmatic, offering more escape hatches into the imperative world if need be. Plus a great upward trajectory, with lots of cool stuff like an effects system and multi-core support coming.
> you might want to check out OCaml for general purpose programming
Any tips on backend frameworks to look at? I need to write a small websocket service for a side-project and have always wanted to try OCaml. I came across https://github.com/aantron/dream.
Dream is great for small http services -- I've not used the websocket support so I can't say much about it.
I do highly recommend starting with one of the Dream example projects just to lower the barrier to entry on the tooling side.
You can also use Rescript (compiles to very readable Javascript), which is the OCaml type system with more familiar syntax and some of the complexity shaved off.
You'll have a much easier time doing business stuff with F#: it's OCaml-inspired but has access to the broader .NET ecosystem. We're using it on the front end as well with the Fable compiler. It's pretty awesome.
It was pretty obvious that at some point, someone would fork Elm, given how Elm has always preferred purity over practicality... but it's not clear how this fork improves things even after reading the (tiny) documentation... can you expand on that?
It seems to address the important problems with Elm, around its overly restrictive project management, by 1. introducing more flexible package management (e.g. you can depend on a fork of a core package, or a private git repo etc.) 2. being open to adding various missing web APIs (e.g. while I'm not sure it's there yet, I expect websockets to make a return, which were dropped with Elm 0.19) 3. generally going for an open development model (even just not having to wait years to fix trivial crippling bugs in the compiler is a step forward...).
The other part is that I have a good impression of the person/people behind it and could see it sticking, for whatever that's worth.
Lead developer of Gren here. It's great to see people being interested in the project, thanks for the mention!
Just wanted to add that both websockets and indexed-db are being worked on by members of the community. I'm hopeful we'll have them ready for the 0.2.0 release in December, alongside preliminary Node.js support.
Since the 0.1.0 release, we've added support for local- and session-storage, which was another often-requested feature for Elm.
I believe it's quite possible to support websockets with the current language features, though, and I hope that websockets will be ready for Gren along with the 0.2.0 release in December.
I would recommend starting with Elixir. Elixir is a dynamic functional language, unlike Haskell/OCaml etc., which are statically typed, compiled languages.
This means that you can learn the basic concepts of operating on data with pure functions, pattern matching, etc. Plus, it's a nice language that is great for building a lot of things (Phoenix is a great web framework).
Once you've got some of that background in how you build functional programs rather than OO, moving to something like Haskell or OCaml is a bit easier, because it becomes more about understanding how to declare types to keep the compiler happy and less about the fundamental program design stuff.
So sad to hear that Elm appears to be dead. It really excited me at first but I guess that like many I never made the time to give Elm a serious attempt.
For me, learning Racket was not too hard. The official learning materials are excellent, although they are also meant for programmers starting from zero. For me, that meant I occasionally skipped over parts I considered "too easy", only to be confronted by my hubris later.
For me, what makes lisps easier than Haskells is that lisps are multi-paradigm. So you can write a more imperative implementation of whatever you're doing right next to the "proper" functional one to get things to click, and also to identify those elusive merits of functional programming.
I wouldn't describe the Racket community as "vibrantly alive", but it's definitely still moving along. And the knowledge is very transferrable to other lisps and schemes. Next on my list is Carp, for example. And if you (choose to) use Emacs you'll reap even more rewards from the knowledge transfer.
"Elm is dead" means different things to different people. I think what most people mean is they are not adding the features that they want from Elm. Elm has a strong opinionated nature, as well as being very well scoped to the front end and using "TEA" system for rendering, so it may appear that it is dead because there isn't much to update anymore.
People are still working on Elm ecosystem packages, so I think that is evidence that it isn't dead. But it isn't as "alive" (or perhaps... hectic) as the JS ecosystem of course.
Elixir paved the way into FP for me. I did Lisp, Prolog and a bit of Java & Javascript done the FP way (I know) at University though, but only with Elixir I've had the "now I get it" Eureka moment.
I have been using rxjs with typescript for some stuff. Functional reactive has proven to be a good paradigm for some stuff in the UI and thornier parts of shared state across multiple applications.
Is it viable to build a traditional web product/company on top of Haskell?
That sounds like a silly question - but I’m serious. I’m enamored by the beauty of FP, but I’m not sure if there’s enough tooling or libraries to get to market.
For example - GraphQL. There are two packages (mu and morpheus), but neither clearly document their feature parity in relation to other packages for other languages - and things like dataloaders.
The subreddit almost encourages people to use a different language, too.
I have never tried, but my impression is basically "You could, but why would you?" Haskell had a big jump in popularity around a decade ago, but it didn't gain traction with the "I'll grind 17 hours a day to become a rockstar ninja whatever" demographic that every language seems to bootstrap itself off of. I won't speculate as to why, but the end result is that Haskell is missing a lot of "adapter"-style libraries and instead is just a bunch of building blocks. Yeah, you can write great stuff in Haskell, but you also have no choice: no one else has written what you need, so you'll be writing it yourself.
The benefits of Haskell don't even _really_ seem that useful when you're just writing yet another big ass crud app where the main goal is to convert json into some other json and then maybe render it. I could see it making more sense in specific domains where the main challenge is wrangling a complex mess of business logic into something you can be confident about. Not like "is my shopping cart correct" but "how can I optimize this PCB design to reduce trace length and total area?"
> The benefits of Haskell don't even _really_ seem that useful when you're just writing yet another big ass crud app where the main goal is to convert json into some other json and then maybe render it.
The benefits have been massive for me, and I'm often doing basically this.
Yes, it's possible to build a traditional web company with Haskell. We've made IHP exactly for that :) It's like Rails/Django but for Haskell. https://ihp.digitallyinduced.com/ We specifically try to be batteries-included (like Rails), so you don't have to think too much about which libraries to use; the core of IHP can get you very far without needing to manually decide between libraries.
You absolutely can. It's a great platform for this.
There may not be as wide a variety of libraries as you would hope, but that's true of any smaller language.
The main thing you need is the wisdom to avoid the footguns. They're quite different from the footguns in other languages. In general, using Haskell well requires a good bit of experience with it, and knowledge of who in the community is selling false solutions.
FRP is a fascinating paradigm, but I find I really have to turn my brain inside-out to "get" it. But it's really cool to have UIs that are completely consistent.
I disagree. The moment my controller needs to "set" or "update" something in the model, the whole thing becomes a mess, in my experience. FRP requires that you change the way you code, but it's 100% worth it IMO.
Actual MVC has the model notifying the UI that it has changed and then the UI updating itself from the model, by pulling data. That's always consistent.
We then did all sorts of things that we call MVC, but that do not follow the MVC-defined interaction patterns at all, particularly either the model or other views incrementally poking data into the view.
That doesn't work, and it is nigh impossible to keep consistent. It also isn't MVC.
So, what if you have a background process (running in the controller I assume) that updates data in the model periodically from somewhere. And at the same time you have the user making changes to the model. How is consistency guaranteed here by having "UI updating itself from the model, by pulling data"?
Maybe I have a different understanding of "consistency" but this might very well lead to undesired results if the logic of data updates isn't well-controlled. The developer needs to decide if the updates can happen arbitrarily or if some kind of transaction-model needs to be used, forcing the background process to wait during user interactions or the other way around, etc.
This is not MVC's problem to solve. It also doesn't solve global warming.
If you have unprotected multithreaded imperative updates of global data, nothing is consistent. Has nothing to do with MVC or no MVC.
Actually having a consistent state to present to the UI is the model's problem.
Oh, and for goodness sake, don't have any kind of async update process running in the controller. All this stuff belongs in the model.
If you're doing that sort of stuff in the controller, I can see why you're having problems with concurrency. You're also almost certainly not doing MVC.
> If you have unprotected multithreaded imperative updates of global data, nothing is consistent.
Yeah, but that is exactly what FRP solves (or strives to solve) and MVC does not give you on its own (as you said). Of course MVC can be used in combination with other techniques to gain consistency, but it doesn't provide it on its own, which is what I believe was implied by your original post.
If you were just talking about the UI in isolation, then yeah, maybe MVC gives you that, but it misses the point of what FRP gives you.
You can build perfectly fine SPAs using an MVC architecture, even with lots of concurrent requests and data fetching. This is why JS has an event loop and why we invented data-binding.
I’m not saying FRP makes anything easy, just that there’s a set of things that are desirable that MVC makes difficult. Right now I’m making a graphical node editor with a lot of drag and drop that displays live data streams. I can go into more detail about why a typical MVC sucks for this but it definitely does.
I apologize if it came across as if I don't agree. I asked out of ignorance; I truly find FRP interesting and wonder when it is the right tool.
It'll be very interesting to see this sort of concrete comparison.
Hum, I'll complain about calling it "being picky". We are up to 3 completely different theories about what the GP meant, and no way to tell them apart.
In the context of games it is in fact "real time", at least in "real time strategy" (and as opposed to "turn based") but obviously that conflicts with the engineering definition.