joaorico's comments

Kafka [1] on which types of book to read:

"I believe one should only read those books which bite and sting. If the book we are reading does not wake us up with a blow to the head, then why read the book? To make us happy, as you write? My God, we would be just as happy if we had no books, and those books that make us happy, we could write ourselves if necessary. But we need the books that affect us like a disaster, that hurts us deeply, like the death of someone we loved more than ourselves, like if we were being driven into forests, away from all people, like a suicide, a book must be the axe for the frozen sea inside us." [2]

[1] Letter to Oskar Pollak (Brief an Oskar Pollak), 27 January 1904, https://homepage.univie.ac.at/werner.haas/1904/br04-003.htm

[2] Literal translation by ChatGPT. Original:

"Ich glaube, man sollte überhaupt nur solche Bücher lesen, die einen beißen und stechen. Wenn das Buch, das wir lesen, uns nicht mit einem Faustschlag auf den Schädel weckt, wozu lesen wir dann das Buch? Damit es uns glücklich macht, wie Du schreibst? Mein Gott, glücklich wären wir eben auch, wenn wir keine Bücher hätten, und solche Bücher, die uns glücklich machen, könnten wir zur Not selber schreiben. Wir brauchen aber die Bücher, die auf uns wirken wie ein Unglück, das uns sehr schmerzt, wie der Tod eines, den wir lieber hatten als uns, wie wenn wir in Wälder vorstoßen würden, von allen Menschen weg, wie ein Selbstmord, ein Buch muß die Axt sein für das gefrorene Meer in uns."


I don’t know if I’m taking Kafka too literally here, but the books that I read that bite and sting probably fall into two categories. Things that are cynically written in bad faith and things that are hopeless and callous. Torture porn bites and stings, reading hacky partisan politics bites and stings. Anything that makes me feel stupider after reading it bites and stings.

The things that I think he wants to say, the inconvenient truths, the things that make me see the world in a whole new way, that challenge everything I believe in. Those things fill me with joy and wonder they are just so few and far between.

Maybe the thing he’s getting at is the existential dread? The truth that nothing you do is meaningful? The staring into the abyss? In which case maybe in moderation, but I fundamentally disagree.

In a sense I wonder, if this is what he means, what a weird way to view life: that those things that challenge you are negative.


I think he is ultimately saying that you should be emotionally vulnerable, and you should read things that break through that inner barrier that we all put up. Reading a good book is like making a connection, and becoming emotionally invested in that world and the ideas within it. Turning that last page and knowing that it's all over can be a heart-wrenching experience as well. I know I've experienced this indescribable feeling of loss, or almost grief, after reading a good book.

That could also mean reading biographies of others' lives, love stories, things that challenge your world view, and things that are a little above your skill level. There is value in being willing to challenge your own beliefs (if they can't be challenged by new understanding or new knowledge, then they aren't so much beliefs as a doctrine to be followed) and in being willing to be emotionally vulnerable.


For as long as I can remember, ideas have "struck me", and the more I read the more "intellectually uneasy" I become. I realise faulty assumptions about what a thing represents can lead me down a dead branch, whether in formal systems like math, tech architecture, or social matters.

Sometimes, as you say, "cynically written" books like 1984 can have that bite, and that's true, but some books have bite because they make me go "Whoa!", or cause a slight panic when my world-view gets changed.

Gödel, Escher, Bach was one of the first books that did that for me. It struck me on the head and I could not put the book down. Concepts of infinity and strange loops dominated my underlying intellectual uneasiness for some time afterwards.

Blood Meridian was also a book that shook my understanding of pre-1800 life. How close to savagery humanity still was only 200 years ago fundamentally shook my understanding of where I stand in relation to my ancestors.

"The Quants" showed me how shaky our financial infrastructure really is.

The Rose Of Paracelsus: On Secrets & Sacraments blew my mind. Spending 20 years to create a masterpiece that would certainly fall into both of your categories at once... a brilliant, cynical book, hopeless and callous in the eyes of a population with the attention span of a TikTok.


Kafka'd want you to get tougher so some hack can't hurt you.

"Those things fill me with joy and wonder they are just so few and far between."

Yes, but that's what you should be looking for.


“Yes, but that’s what you should be looking for.”

… and you aren’t going to find them in Silicon Valley.


The one book I recall that 'bit and stung' as I think Kafka meant to say was 1984. How would you categorize that work? Torture porn?


Oh god no, that’s definitely in the cynically written in bad faith hacky partisan politics category. Maybe it just hasn’t aged well, but I couldn’t get through it.


This is (to me) a strange comment. I assume it refers to the fact that some right-wingers have latched onto Orwell. But that happened long after his death.

Orwell was a British Socialist, and the people he's attacking in the books are totalitarians, whether fascist or Stalinist. So it's neither bad faith nor partisan unless you count anti-totalitarian as a party, though I guess hacky is in the eye of the beholder.


"1984" is a quintessential example of literature that challenges and provokes, embodying Kafka’s idea of a book that serves as "the axe for the frozen sea inside us."


A lot of positive change can come from works of philosophy.

That's the kind of thing that knocks your world view around for a brief moment, in an almost confused-joyous understanding. It makes you question your intuitions for a little bit.


I think he's referring to works that provoke profound thought and emotional engagement


"If the book we are reading does not wake us up with a blow to the head, then why read the book?" --

That's the authorial feeling of self-importance making itself visible. Why read the book? Because it might be enjoyable, a pastime, something that makes us dream, reflect, cry, or connect some dots in our lives through a parallel representation of feelings or ideas. There are many reasons, and the "blow to the head" will not and should not be the main reason, especially for older people who have seen some water flowing under the bridge and see the shock factor as artfully constructed and therefore much less provocative than the author intended it to be.


Made me check, and Google translated it about the same. No unexpected comma or awkward "driven into the woods" (by whom?), and a punch is a punch ("blow" is more generic), but there's an awkward duplication of "one".

> I believe that one should only read books that bite and sting one. If the book we are reading does not wake us up with a punch to the head, why do we read the book? So that it makes us happy, as you write? My God, we would be happy even if we had no books, and if necessary we could write the kind of books that make us happy ourselves. But we need books that affect us like a misfortune that hurts us greatly, like the death of someone we loved more than ourselves, like if we were to venture into the woods, away from everyone, like a suicide; a book must be the axe for the frozen sea within us.


I liked the ChatGPT version better. The repeated "one" is particularly jarring and not a stylistic feature of the original (which has man...einen, two different words reasonably translated as "one"). "Misfortune" is a more literal translation than "disaster", but in the context that it greatly hurts us, I prefer the latter. And I'm pretty sure "wir in Wälder vorstoßen würden" is closer to being cast out or driven away than just going for a nice sylvan walk. The passive voice there is faithful to the original. The comma after "disaster" is the only part I don't like in modern English.


Cast out vs going out into the woods can tip it over I suppose. Don't know German well enough myself to say but checking word by word it didn't have the being driven out vibe. But it's interesting that the difference is almost nonexistent.


DeepL:

I think you should only read books that bite and sting you. If the book we're reading doesn't wake us up with a punch to the skull, why are we reading it? So that it makes us happy, as you write? My God, we would be happy even if we didn't have books, and we could write the books that make us happy ourselves if need be. But we need books that have an effect on us like a misfortune that hurts us very much, like the death of someone we preferred to us, like pushing us into the woods, away from all people, like a suicide, a book must be the axe for the frozen sea within us.


Truly impactful books should provoke deep emotional and intellectual responses


Terrible advice. I can’t imagine anyone following it. Not even Kafka.


I don't know anything about the man; what kind of life did Kafka have that happiness was easily had IRL and he needed books to experience misery?


Kafka's life abounded with misery. Great writer, though


> Kafka's life abounded with misery

Which is to say that his life was objectively mostly very comfortable (for the time) but still miserable in his personal experience.


I think it's important to understand Franz Kafka's life in context when discussing his experiences


I suppose it's quite off-topic, but some weeks ago I read a small book by Mary Gaitskill, the writer of the piece.

It's called "Lost Cat".

I highly recommend it. Ironically, it might be an approximate opposite of Pale Fire. It's very short, with simple yet beautiful prose, filled with intense, raw emotions.


On-topic enough. Gaitskill writing previously on Pale Fire (in another context):

https://unherd.com/2022/06/the-death-of-literature/

The link contains audio of her reading from a favorite passage of the novel.


For anyone diving into Ulysses, I highly recommend checking out The Joyce Project [1].

It's filled with interactive notes that are very useful for understanding the linguistic and cultural references.

Here's my reading method that I found effective:

  1. Read a section on paper.
  2. Go through the same section on the site.
  3. (Re-)read on paper.
I toggled between 1-2-3, 1-2, or 2-3 depending on my mood, and it worked really well.

[1] https://www.joyceproject.com/


I'll say this, with the caveat that I never did finish the book (for unrelated reasons): this is an excellent method.

One may think "It's fine, I'll simply read the text and then, if I have questions, absorb some scholarly articles on it." Trust me, you will enjoy it so much more when you understand Joyce's intent and clever writing as it happens. You simply can't take it all in post-factum, too much would be missed.


There is also https://www.ulyssesguide.com. It has episode guides, which explain what actually happens in each chapter (this can sometimes be difficult to decipher), cross-references to other chapters, and sometimes possible interpretations. I found it extremely helpful and would have missed a ton without even noticing.


Thanks for the link, this does make it more approachable!


Something like this but for The Iliad would be awesome


The Iliad is much more approachable with a background in ancient history. Whereas I feel only a background in Ulysses really helps with Ulysses.


Is the effort worth it?


If you're a fan of modernist literature or of literature as an art form, undoubtedly yes. If you're just interested in reading it because it is (justifiably, in my opinion) famous, then possibly.

It's a bit like reading and studying the Bible if you're not religious. Will you come out having read and studied one of the foundational texts in English literature, able to approach later texts with fresh eyes to the unending allusions it spawned? Yep. Will it be 'worth it', though, in a revelatory sense? That's up to you in the end.


It's certainly worth it, especially if you have some appreciation for the craft of writing, a love for words and the English language, and the patience to take things slowly and put in the effort to really understand what you're reading. After the clouds clear and you can see what he is doing, a monument reveals itself, and there is this feeling of astonishment that a human was able to create such a thing.

The second half of the book (chapters 10 to 18 although page-wise it's more like two thirds) is especially satisfying. Each chapter is written in a vastly different style: imitation of music, a romantic novel, a play, the historical development of style in the English language, how a bad writer writes, a technical text, and a couple of others.

It's challenging and might not be satisfying if you're looking for plot (there is none). I suggest reading a chapter and then the accompanying text at https://www.ulyssesguide.com


Incidentally, if you’re looking to start reading in French, there is hardly a better book in terms of (impact on literature) times (simple, accessible writing) [2]. It’s also a short book.

Regarding the literary merit of Camus, Nabokov had this to say [1]:

”I happen to find second-rate and ephemeral the works of a number of puffed-up writers—such as Camus, Lorca, Kazantzakis, D. H. Lawrence, Thomas Mann, Thomas Wolfe, and literally hundreds of other “great” second-raters.”

“Brecht, Faulkner, Camus, many others, mean absolutely nothing to me, and I must fight a suspicion of conspiracy against my brain when I see blandly accepted as “great literature” by critics and fellow authors Lady Chatterley’s copulations or the pretentious nonsense of Mr. Pound, that total fake.”

“Incidentally, I frequently hear the distant whining of people who complain in print that I dislike the writers whom they venerate such as Faulkner, Mann, Camus, Dreiser, and of course Dostoevski.”

“It is a shame that he [Franz Hellens] is read less than that awful Monsieur Camus and even more awful Monsieur Sartre.”

[1] Strong Opinions

[2] Although Le Petit Prince beats it in all three (impact, even simpler language, shorter).


Opportunity costs. The real debate has been whether it makes sense for string theory (whatever the prevailing definition is) to dominate funding for theoretical research of the "bridge". There are alternatives besides strings for the bridge, and there should be even more, in theory...


I don't know enough about the history of funding theoretical physics research to comment on that one way or another. However, neither did the comment I was replying to reference any actual facts about the distribution of funding that might suggest any of them had been wasted.

The fact is, we have no falsifiable theories that can unite GR and QM. Should every theory be abandoned that doesn't quickly lead to a resolution? No, clearly not. So the question is what kind of criteria we could use to determine that string theory is a dead end or is otherwise stifling true progress.

And that's pretty much what I was trying to ask previously... is ST actually sucking all the air out of the room? I'm a layperson and not just going to assume that hundreds of experts have blown their careers doing pointless calculations on a theory that "obviously" isn't worth the resources put into it. But the comment I replied to seemed to be making that assumption.


Simpson’s paradox should be taken into account.

If you group the population into only two groups, all of the vaccinated and all of the unvaccinated, regardless of age, then the vaccinated had a higher death rate.

But age is a hidden factor. The older have more risks and are more vaccinated.

If you group by vaccination AND by age bracket, the opposite happens. For example, the 60 to 65 vaccinated have a lower death rate than the 60 to 65 unvaccinated.
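A minimal sketch with made-up numbers (the brackets, cohort sizes, and death counts are purely illustrative, not real population data) showing how this reversal can occur:

```python
# Hypothetical cohorts: most vaccinated people are in the older,
# higher-risk bracket, and most unvaccinated in the younger one.
groups = {
    # bracket: (vacc_n, vacc_deaths, unvacc_n, unvacc_deaths)
    "under 50": (1_000, 1, 9_000, 30),
    "over 50": (9_000, 90, 1_000, 20),
}

def rate(deaths, n):
    return deaths / n

# Within EVERY age bracket the vaccinated have the lower death rate...
for bracket, (vn, vd, un, ud) in groups.items():
    print(bracket, rate(vd, vn), "<", rate(ud, un))

# ...yet pooled over all ages, the comparison reverses (Simpson's paradox),
# because vaccination status is correlated with the high-risk bracket.
vn, vd, un, ud = (sum(g[i] for g in groups.values()) for i in range(4))
print("pooled:", rate(vd, vn), ">", rate(ud, un))
```

With these numbers the pooled vaccinated rate (91/10,000) exceeds the pooled unvaccinated rate (50/10,000), even though the vaccinated fare better within each bracket.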


Just so I understand what you're saying, it sounds to me like covid is killing old people and something else is killing everyone else at a higher rate, correlating with vaccination status?

So only old people and high risk individuals should've gotten the vaccine?


I think you jumped to a conclusion. COVID still kills vaccinated individuals, just at a lower rate than unvaccinated.


That much was obvious. Young people weren't asked to take the vaccine to protect themselves but to protect the herd.


And by „asked“ you mean (in many cases) forced.

Not to mention, the emergency authorisation clearly stated (in Europe at least) that it was unknown whether the serum prevented spread. Look up the documents; it's right there.

So the reality is that young people were forced to take untested gene therapy for no medical reason at all.


Despite the fact that the clinical trials didn't test for ability to stop transmission and infection. So the CDC, FDA, and Pfizer outright lied to the public. And when that lie fell apart, it was so 'you don't overrun the hospitals' despite the fact that young people weren't being hospitalized at any rate more than the seasonal flu.


And for anyone skilled enough in mathematics to check the assumptions behind the formulas for "herd immunity": it was clear from the beginning that the term was based on totally unrealistic assumptions. These are seasonal viruses, and outbreaks were following the strict mathematics of a seasonal outbreak; this was clearly visible from the beginning (people posted formulas for this in early 2020). The term "herd immunity" doesn't make any sense for a seasonal virus like this. There is no such thing as a constant R-value: that model is based on stochastically independent infections, which is very unrealistic.
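For context on the formula under discussion: the classic herd-immunity threshold is 1 - 1/R0, and its derivation does assume a well-mixed population with a constant reproduction number. A minimal sketch (the R0 value and seasonal amplitude here are illustrative, not fitted to any real virus):

```python
import math

def herd_immunity_threshold(r0):
    # Classic threshold 1 - 1/R0; the derivation assumes a well-mixed
    # population and a constant basic reproduction number R0.
    return 1 - 1 / r0

def seasonal_r(r0_mean, amplitude, day):
    # Toy seasonal forcing: R oscillates around its mean over a 365-day
    # year, so no single constant R0 (or threshold) describes the year.
    return r0_mean * (1 + amplitude * math.cos(2 * math.pi * day / 365))

print(herd_immunity_threshold(3.0))  # about 0.67 for a constant R0 of 3
print(seasonal_r(3.0, 0.4, 0))       # winter peak, about 4.2
print(seasonal_r(3.0, 0.4, 182))     # summer trough, about 1.8
```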


Pfizer didn't. Their press release distinguishes between SARS-CoV-2 and COVID-19, only claims they prevented symptoms in people not previously infected by the virus, and didn't make any claims about infection/transmission. It was the conflation of the two terms in the media, referring to the virus as "COVID-19", that resulted in this mistake.

And even so, for several months articles kept coming out questioning whether it stopped infection/transmission at all. It was only a few months into 2021 that this got buried under the assumption that it did.


The study is about the 25-44 age group, where you should really not expect heart attacks (they are very rare).


Alain Connes, Fields medalist, talks about going on walks while reading math books in a particular way (and on how a mathematician works and should read a book) [0]:

"To understand any subject, above all, a mathematician SHOULD NOT pick up a book and read it.

It is the worst error!

No, a mathematician needs to look in a book, and to read it backwards. Then, he sees the statement of a theorem. And, well, he goes for a walk. And, above all, he does not look at the book.

He says, "How the hell could I prove this?"

He goes for his walk, he takes two hours ... He comes back and he has thought about how he would have proved it. He looks at the book. The proof is 10 pages long. 99% of the proof, pff, doesn't matter.

Tak!, here's the idea!

But this idea, on paper, it looks the same as everything else that is written. But there is a place, where this little thing is written, that will immediately translate in his brain through a complete change of mental image that will make the proof.

So, this is how we operate. Well, at least some of us. Math is not learned in a book, it cannot be read from a book. There is something active about it, tremendously active.

[...]

It's a personal, individual work."

[0] https://www.youtube.com/watch?v=9qlqVEUgdgo


In case you're interested in learning about graph deep learning, and are familiar with standard DL, I strongly recommend these two very good, recent books (freely available):

[1] Graph Representation Learning - William L. Hamilton https://www.cs.mcgill.ca/~wlh/grl_book/

[2] Deep Learning on Graphs - Yao Ma, Jilian Tang https://cse.msu.edu/~mayao4/dlg_book/


The new edition has been split in two parts. The pdf draft (921 pages) and python code [1] of the first part are now available. The table of contents of the second part is here [2].

From the preface:

"By Spring 2020, my draft of the second edition had swollen to about 1600 pages, and I was still not done. At this point, 3 major events happened. First, the COVID-19 pandemic struck, so I decided to “pivot” so I could spend most of my time on COVID-19 modeling. Second, MIT Press told me they could not publish a 1600 page book, and that I would need to split it into two volumes. Third, I decided to recruit several colleagues to help me finish the last ∼ 15% of “missing content”. (See acknowledgements below.)

The result is two new books, “Probabilistic Machine Learning: An Introduction”, which you are currently reading, and “Probabilistic Machine Learning: Advanced Topics”, which is the sequel to this book [Mur22].

Together these two books attempt to present a fairly broad coverage of the field of ML c. 2020, using the same unifying lens of probabilistic modeling and Bayesian decision theory that I used in the first book. Most of the content from the first book has been reused, but it is now split fairly evenly between the two new books. In addition, each book has lots of new material, covering some topics from deep learning, but also advances in other parts of the field, such as generative models, variational inference and reinforcement learning. To make the book more self-contained and useful for students, I have also added some more background content, on topics such as optimization and linear algebra, that was omitted from the first book due to lack of space.

Another major change is that nearly all of the software now uses Python instead of Matlab."

[1] https://github.com/probml/pyprobml

[2] https://probml.github.io/pml-book/book2.html


It's very encouraging to see Matlab losing ground in the educational space. I don't know why so many engineers let their foundational skills be locked behind a proprietary ecosystem like that.


>I don't know why so many engineers let their foundational skills to be locked behind a proprietary ecosystem like that.

Because no open source toolkit can do what Matlab can do.

The same is true of a lot of high end software: Photoshop, pretty much any serious parametric CAD modeling system (say, SolidWorks), DaVinci Resolve, Ableton Live, etc. When a professional costs $100K+ to employ, paying a few grand to make them vastly more productive is a no brainer. If open source truly offered a replacement, then these costly programs would die. But there just isn't anything close for most work.

Matlab is used for massive amounts of precise numerical engineering design, modeling, and running systems. So while Python is good for some tasks, in the places where Matlab shines Python is nowhere near as usable. And before Python catches up in this space, I'd expect Julia to get there faster.


As someone who helped migrate a university course from Matlab to Python, I must say the proprietary features of Matlab had nothing to do with why it lasted so long.

Basically, it was mainly inertia: older professors who liked it and rarely used anything else, and the fact that generally no one gets rewarded for rewriting parts of an existing, functioning course.

As an instructor, you basically create more work for yourself the first time you migrate a course's programming language. (And you also annoy some senior staff when forcing them to learn new things.)


I work at a government r&d/systems engineering center, and it's the same case here. The engineers who went through college with Matlab use that as their default (i.e., when the project doesn't call for something else from the start), while newer engineers don't. As that generation ages out, it'll be more and more sidelined. It's their inertia keeping it around at all.

Proprietary features don't matter here the way they do there. We get MathWorks employees here at least a couple times a year hawking their latest (paid) libraries, but at this point they're always something 5+ years too late, something that already exists in preferred languages--often for free.

Since our clients never deploy Matlab, it doesn't matter if its libraries are fractionally faster in any case besides mockups and experimentation in R&D, and even there I've never met anyone who chooses it for speed. Plus, in this day when even laptops are fast and cloud instances spin up in a few seconds, there's no point. It's also easier for a dev to complain about not having enough RAM and get a better machine than to take the time to learn a new language for a specific use case. Likewise, the project manager will prefer the quicker solution: buying.

The one item close to a "tie" with Python here is probably migration. Matlab always, and Python most of the time, gets rewritten into something else - Java in my department.


I guess I'll rephrase - if you can't understand a transfer function or a probability distribution without opening Matlab, then you've allowed your own expertise to be held hostage. Unfortunately, I know a large number of professionals for whom this is true.

If you're more productive in Matlab, that's fine. But if you're at a loss without it, that's not.

It doesn't belong in the education system or in educational books.


Conversely, if at every step of learning, you're hindered by inferior tools, you'll learn less, and be at a permanent disadvantage to those using superior tools.

If your job will use tool X, learning it well has value. Those not learning it will be at a disadvantage.

Again, no open source software can do what Matlab can. Why ignore this?


> Again, no open source software can do what Matlab can. Why ignore this?

Can you list (or point to a list of) some of MatLab's features that are absent from other software?


One big feature is the massive amount of built-in functionality [1]. You don't have to find various packages, install them, spend a day fighting version issues, discover that some author hasn't upgraded to a recent language version or used a non-standard logging facility, or deal with a zillion other time-sinks you face daily when gluing open source packages together. As soon as a professional has been paid to fight open source integration for 1-2 days, it would have been better if the employer had simply bought Matlab.

And here is by far the biggest issue with open source: the numerical accuracy of lots of it is crap. Matlab (and Mathematica, etc.) have employed professional numerical analysts to create numerically stable, robust algorithms, and have had decades of refinement to weed out bugs (Matlab started in the late 1970s, under academic numerical analyst Cleve Moler). It's the difference between using BLAS and writing your own linear algebra package - one is likely far more robust.

Sure, some numerical open source packages are decent, and a few are excellent (BLAS and related). But when you need to glue some together, you end up far too often with stuff that's just flakey for production work.

If you've ever coded the quadratic formula as written in high school textbooks and not known all the mess you just made, then you are what most open source developers are. Taking almost any formula from a paper and just typing it in is surely the wrong way to do it numerically, but this is what open source does. A robust engineering platform should have every such formula analyzed for the proper form(s) for implementation to maintain numerical robustness, and it should also avoid allowing users easy ways to do stuff that is not robust. This is the biggest difference between tools like Matlab and Mathematica versus open source projects.
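The quadratic-formula point is concrete enough to demonstrate in a few lines. A minimal Python sketch of the cancellation issue (an illustration only, not anyone's production code):

```python
import math

def roots_naive(a, b, c):
    # Textbook formula: loses the small root to catastrophic cancellation
    # when b*b is much larger than 4*a*c, because -b + d subtracts two
    # nearly equal numbers.
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    # Compute the larger-magnitude root first (no cancellation), then
    # recover the other from the product of the roots: x1 * x2 = c / a.
    d = math.sqrt(b * b - 4 * a * c)
    x1 = (-b - d) / (2 * a) if b >= 0 else (-b + d) / (2 * a)
    return c / (a * x1), x1

# For x^2 + 1e9 x + 1, the small root is about -1e-9, but in double
# precision the naive formula returns it as exactly 0.0.
print(roots_naive(1.0, 1e9, 1.0))   # small root comes out as 0.0
print(roots_stable(1.0, 1e9, 1.0))  # small root comes out as -1e-9
```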

And, like the time spent fiddling with getting open source to work, as soon as you have one engineering task or design fail due to numerical problems, it would have been vastly cheaper to simply use the better tool - Matlab.

Sure, most people don't use it very much and rarely run into such problems. But people using it for serious work in engineering toolchains or production systems cannot afford the instability of open source.

And those reasons are why things like Matlab still exist, have incredible revenue, and are growing in use.

For example, want to do some work in Python? Well, soon you need numpy. Then you might want pytorch - but crap, it's numpy-ish, but not numpy. So you learn some more nuances to get the two to play nicely and to get consistent error messages... Then you need some visualization - again, another package (with a host of dependencies), with different conventions, syntax, and uses, and god forbid these packages get a little out of sync between releases - then you get to spend a day chasing that down. Now you want some optimization stuff - pull in scikit, but it's not quite consistent with the other libs... so you spend more time writing glue functions between the pieces you want to build on. Next you need some finite element analysis stuff - oops, pretty much dead compared to the massive amount of toolkits already in Matlab.

Take a moment and look through the list(s) of functions and toolkits standard in matlab [1]. For an incredible amount of engineering work, what you need is there - you spend less time trying to build enough pieces to start to work and you instead get working on the parts you want.

There's a reason matplotlib stole a lot of its ideas from Matlab - it's quite useful.

[1] https://www.mathworks.com/help/referencelist.html?type=funct...


> As soon as a professional has been paid to fight open source integration for 1-2 days, it would have been better if the employer had simply bought matlab.

I'm a licensed professional, and in my experience it takes 1-2 hours to set up a conda virtualenv with all the packages I need. Whereas if I want Matlab, it takes about a week to talk through the budgeting and licensing options with my employer, find the right number of seats to purchase (other departments might decide to get in on the purchase, so we need to consult broadly), choose which toolboxes we'll pay for, go back and forth on the quotes and POs, and make sure all the licensing really works.

But your mileage may vary.


>I'm a licensed professional, and in my experience it takes 1-2 hours to set up a conda virtualenv with all the packages I need.

Yes, there are problems where Python is an easy solution. And many where it is not. And some where it cannot solve the problem without extreme effort.

Having been in dev a long time, this is the simplest, naive, best-case path. If this were how setting up Python worked for everyone, there would not be the incredible number of forum posts, GitHub issues, and setup problems easily found on the internet. If you've never had to change underlying code in some Python package, or even worse recompile underlying C libraries, then you have not faced the kinds of problems many (me included) have.

Ever solve a problem like the one I listed? That is not a simple conda install (and I use conda vastly more than Matlab/Mathematica, so I'm pretty aware of its use and features). Many problems I can solve in Mathematica (my preferred tool for certain work) cannot be approached in Python at all (or in any open source tools I am aware of, and I have tried pretty much all of the suggested MMA replacements).

>find the right number of seats to purchase (other departments might decide to get in on the purchase

So you're no longer making an apples-to-apples comparison - you just solved a bigger problem on the Matlab side.


But isn't Octave supposed to include the same built-in functionality as MatLab?


No. Octave claims to support Matlab syntax, and they largely do, but not completely. And they most certainly don't provide all the packages Matlab has, which is where a lot of the use is.

Octave is also unstable, and I doubt any company needing heavy use of a tool like this in production would trust Octave not to puke. It's simply cheaper to use the polished and vastly more feature-rich tool. Download Octave, go find some decently complex Matlab code on GitHub, and try to run it. Do that a bit and see how much works as it should.

Octave lists the places where it sees itself as different [1], some of which are core pieces that don't work the same. So if you want to replace some engineering tasks with Octave, it's going to be a mess, in the same way OpenOffice is close to MS Office, right up until the day you send a proposal with a deadline and it pukes because the other end used MS Word instead of an almost-clone.

I've used Octave - it's decent. If you cannot afford Matlab, or your school doesn't have it, or you want to learn "matlab" to get marketable skills, then one can learn on Octave. Most serious engineering will not be done on Octave though.

[1] https://wiki.octave.org/Differences_between_Octave_and_Matla...


In the specific case of ML courses, many of which I have TA'd or attended, this reason does not ring true at all. Libraries for most standard algorithms are available in some form with a Python interface (or, for the more statistical material, R). It's almost always inertia from the initial design of the course.

It is also not true today that not knowing Matlab harms your industry productivity in ML. That might have been true a decade ago, but most teams outside academia have also moved to non-Matlab resources. If anything, this has been further reinforced by deep learning libraries, the current crop of MLOps tools, and cloud-based frameworks.

Matlab might be good for specific areas, but ML has not been one of its strongholds for a while. It is also important to remember that in the context of numerical accuracy or computation speed, Python is almost always just the user-facing layer. You might (correctly) argue that the Python language itself is slower than X, but this is not a useful metric for comparing libraries and frameworks, where the compute-heavy code lives in C/C++: numpy, tensorflow, and pytorch are good examples of this.
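To make the "Python is just the user-facing layer" point concrete, here's a minimal sketch comparing a pure-Python reduction with the equivalent call that numpy dispatches to compiled code. The sizes and timings are illustrative only:

```python
import time
import numpy as np

n = 1_000_000
a = list(range(n))
b = np.arange(n, dtype=np.float64)

# Pure Python: every multiply-add goes through the interpreter.
t0 = time.perf_counter()
s1 = 0.0
for x in a:
    s1 += x * x
t_py = time.perf_counter() - t0

# NumPy: the same reduction runs in compiled C under the hood.
t0 = time.perf_counter()
s2 = float(np.dot(b, b))
t_np = time.perf_counter() - t0

print(f"pure Python: {t_py:.4f}s, numpy: {t_np:.4f}s")
```

Both compute the same sum of squares; only the layer doing the arithmetic differs, which is why "Python is slow" says little about numpy, tensorflow, or pytorch.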


A professional costs $100k+ to employ partly because only those able to afford those tools for training get into the field.


Those fields require work to get done, so they use tools that make people as productive as possible. There are simply no open-source packages with the wide range of numerical capability that Matlab has.


It's a regression as far as code readability goes, for a fairly straightforward reason: almost everything in Matlab is a matrix. Matrices are not first-class citizens in Python, and it matters. I use Python a hell of a lot more than Matlab, but for examining how an algorithm works (say, for implementing it in another language or modifying it to do tricks), Matlab wins. Go look at these PRML collections in Python and Matlab and see if you disagree:
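One small illustration of "matrices are not first-class in Python": in Matlab, `*` on two matrices is the linear-algebra product, while in numpy `*` is elementwise and matrix multiplication needs a separate operator. A minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
I = np.eye(2)

# In Matlab, A * I is matrix multiplication by default.
# In NumPy, * is elementwise; the linear-algebra product needs @.
elementwise = A * I   # zeroes the off-diagonal entries of A
matmul = A @ I        # A times the identity, i.e. A itself

print(elementwise)
print(matmul)
```

This is exactly the kind of mismatch that makes Matlab pseudocode read more like the math on the page.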

https://github.com/ctgk/PRML

https://github.com/PRML/PRMLT


I used to feel the same, but three years after making the switch, I've changed my mind. Matlab code has brevity, but sometimes at the expense of clarity. For example, sum(x, axis=1) is clearer than sum(x, 1), especially since Matlab has functions like diff() where the second argument is not the dimension.

Broadcasting in Python is a lot cleaner than the "bsxfun(@plus, ...)" abomination in Matlab. If you think all the "np." is too wordy, then just do "from numpy import *". For matrix multiplication you can use "@". Numpy code can be dense, but most people choose clarity over brevity.
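As a minimal sketch of both points, here is a common centering step written with numpy broadcasting and the `@` operator; in older Matlab the subtraction would have needed bsxfun(@minus, ...):

```python
import numpy as np

M = np.arange(12, dtype=float).reshape(3, 4)
col_means = M.mean(axis=0)        # shape (4,)

# NumPy broadcasts the (4,) vector across the (3, 4) matrix;
# no bsxfun incantation needed.
centered = M - col_means

# Matrix multiplication via the @ operator (Python 3.5+).
gram = centered.T @ centered

print(centered.mean(axis=0))      # ~zero in every column
```

The broadcasting rules do the shape-matching implicitly, which is most of what bsxfun existed to spell out.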


I'd rather write Python than Matlab any day (I made this choice, literally, in '98): it's a statement about reading. Matlab is closer to math notation, and Python is a clunky programming language. I'd never in a million years write new code in Matlab, but I prefer it for didactics.


The only thing I find nice in what Mathworks offers nowadays is their caps & T-shirts at conferences. MATLAB is on Medicare in deep learning times.


Right now, Andy is perhaps the most sophisticated thinker sharing insights and prototypes in this space (meta-knowledge work, backlinked evergreen notes, spaced repetition, new UX/UI for these systems, etc.). Here are some additional pointers:

- Andy livestreamed a demo of a typical work session: https://www.youtube.com/watch?v=DGcs4tyey18

- this Patreon post explains his OS-level spaced-repetition approach at greater length: https://www.patreon.com/posts/bringing-ideas-36925173

- Andy is working on a prototype of that system, called Orbit, which might be available soon: https://twitter.com/withorbit

- regarding his specific writing/thinking system, here are a couple more clarifications: https://notes.andymatuschak.org/z4AX7pHAu5uUfmrq4K4zig9x8jmm... https://notes.andymatuschak.org/z6f6xgGG4NKjkA5NA1kDd46whJh2...

- Obsidian has a plug-in which replicates the sliding panes of Andy's notes: https://forum.obsidian.md/t/andy-matuschak-mode-v2-7-updated...

I think the space of graph/backlinked personal notes and knowledge systems is taking off [1], with many solutions free and open-source. (Of that list, many have spaced-repetition plug-ins not referenced there.) It will be interesting to see how the field matures over the next couple of years.

[1] https://www.notion.so/db13644f08144495ad9877f217a161a1?v=ff6...


Thanks for the links! I wonder if Andy (or anyone else) has addressed the lack of images in these notes. Some ideas are expressed much better visually, but I have yet to figure out a frictionless way of integrating things drawn in a notebook or tablet into notes written mostly on a laptop.


You seem to know a lot about Andy. Do you know which tools he uses to write his notes and keep track of back links?


If you check out the livestream on youtube, you'll see that he does the writing in Bear (Mac/iOS only app). Presumably there's some custom code to export everything and build the HTML with backlinks, but as mentioned in the link posted here, this code is not public anywhere.


He has shared part of his toolkit, namely the exporting and syncing of the Bear notes, and “link-janitor” for the backlinks [1]. Although I wouldn’t recommend it in general - it’s a brittle prototype. And right now there are better tools out there (linked above) many of which appeared in the last few months.

[1] https://www.reddit.com/r/bearapp/comments/enbk65/sharing_a_s...

