For now. The term people use is "centaur", like the half-man, half-horse creature of mythology.
The AI CEOs are pointing out that when chess was "solved", in that Kasparov was famously beaten by Deep Blue, there was a window of time after that event when grandmaster + computer teams were the strongest players. The knowledge/experience of a grandmaster paired with the search/scoring of the engines was an unbeatable pair.
However, that was just a window in time. Eventually engines alone were capable of beating grandmaster + engine pairs. Think about that carefully. It implies something. The human involvement eventually became an impediment.
Whether you believe this will transfer to other domains is up to you to decide.
I don't think this can last, because whatever advantage structured text files in a file tree provide over an API will eventually be captured, and exceeded, by some format that is better than a file system.
That might mean leaning into SQLite or some other open format. My own thought is that a graph-like structure of documents will be ultimately more valuable than either a tree-like structure or a database-like structure.
But if a proprietary implementation of whatever usurps structured text in a file-system becomes popular, that company will have significant leverage.
Two advantages I see to files in folders are the ability to browse and the ability to easily read - by both LLMs and humans.
But I also agree that this has limitations. If I were to challenge you and say that Obsidian solves this, or gets close to solving it, what gaps would you say were left unfulfilled?
I am because I'm working on something in this space and your comment touches on my three main ingredients: text files, SQLite, and graphs.
Very good point and question. I am not intimately familiar with Obsidian because I haven't used it. My understanding is that it indexes directories of .md files to provide a graph-like interface over them, using the contents of the files themselves to store metadata (e.g. in the form of links and YAML frontmatter).
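To make that concrete, an Obsidian-style note keeps its own metadata inline, roughly like this (a hypothetical note; I'm going from descriptions of the convention, not first-hand use):

```markdown
---
title: Graph databases
tags: [storage, notes]
---
Compare with [[SQLite]] and [[RDF]] — the tool turns
wiki-links like these into edges in its graph view.
```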
Let's imagine a .zip file format that combines:
- A directory of .md and/or .xml files (there is more to structured text than markdown)
- A SQLite db in the root for some metadata
- A graph representation (RDF, triples, whatever) of the files
That is directionally what I am talking about, but with a standard spec so that someone receiving a project bundle knows how to parse it all.
You can still have all of the Obsidian stuff over the contents of the directory. But you can also store much more interesting metadata in the db that may not be appropriate to put in any individual file (or in some kind of pure data .md file). It is something that can be sent over the network a bit more conveniently than a directory. And if you standardize the db schema and the graph you can have immediate interoperability between systems.
In fact, given advanced enough LLMs, you could just have a README.md or something in the root that sketches out the directory structure and files, the db schema for the metadata, and then just expect that the agent, knowing how to load and query a SQLite db, will be able to work with it. But I think there is value in having some loose standards around the db schema, some YAML frontmatter conventions, some well known graph representation, etc.
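Here is a minimal sketch of what consuming such a bundle might look like. Every name in it (meta.db, graph.nt, notes/, the `meta` table) is my own invention for illustration, not an existing spec:

```python
# Sketch of a hypothetical "project bundle": a .zip combining markdown
# files, a SQLite metadata db, and a graph file. All file names and the
# db schema here are assumptions, not a real standard.
import os
import sqlite3
import tempfile
import zipfile

def build_demo_bundle(path):
    """Write a tiny example bundle so read_bundle below has input."""
    with tempfile.TemporaryDirectory() as tmp:
        dbpath = os.path.join(tmp, "meta.db")
        db = sqlite3.connect(dbpath)
        db.execute("CREATE TABLE meta (path TEXT PRIMARY KEY, title TEXT)")
        db.execute("INSERT INTO meta VALUES ('notes/hello.md', 'Hello')")
        db.commit()
        db.close()
        with zipfile.ZipFile(path, "w") as z:
            # Structured text: ordinary markdown with YAML frontmatter.
            z.writestr("notes/hello.md", "---\ntitle: Hello\n---\n# Hello\n")
            # Graph: one N-Triples-style edge between two documents.
            z.writestr("graph.nt",
                       "<notes/hello.md> <links-to> <notes/world.md> .\n")
            z.write(dbpath, "meta.db")

def read_bundle(path):
    """Parse the three layers: documents, graph edges, db metadata."""
    with zipfile.ZipFile(path) as z, tempfile.TemporaryDirectory() as tmp:
        docs = sorted(n for n in z.namelist() if n.endswith(".md"))
        triples = [tuple(line.split()[:3])
                   for line in z.read("graph.nt").decode().splitlines()
                   if line.strip()]
        # SQLite needs a real file, so extract the db before opening it.
        z.extract("meta.db", tmp)
        db = sqlite3.connect(os.path.join(tmp, "meta.db"))
        titles = dict(db.execute("SELECT path, title FROM meta"))
        db.close()
    return docs, triples, titles
```

The point is that a receiver needs nothing beyond stdlib tooling: unzip, read the text, query the db, walk the triples. That is the interoperability argument in ~40 lines.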
Awareness would be any form of agency, goal-seeking, or loss minimization.
As Briggs–Rauscher reactions can eventually lead to Belousov–Zhabotinsky reactions, the system can maintain homeostasis with its environment (and continue to oscillate) by varying reactants in a loss-minimizing fashion.
This loss minimization would happen during scarcity to limp toward an abundance phase.
This is the mechanism that hypothetical tidal pool batteries would have exhibited to continue between periods of sunlight/darkness/acidity, and that eventually gets stratified as a resiliency trait.
I'm not sure if you're familiar with the work from the lab of Mike Levin at Tufts but I'm betting you'll find it interesting if not. Here's a taste https://pmc.ncbi.nlm.nih.gov/articles/PMC6923654/
While I disagree with your notion that this is explicitly due to gravity, the rest of your argument seems to align with some of this lab's work. Learning can be demonstrated at scales as low as a few molecules, way below what we would normally call "life".
I'm not sure what your argument is here, except stating an opinion that loss minimization is equivalent to agency. But even if that was accepted, which is a huge stretch, it doesn't stretch all the way to awareness.
It is, in context of its place in the cosmic scale.
Loss minimization applied to a few problems will generalize into abstraction, and a few solutions will develop.
These systems with more generalizable resiliency traits will encounter increasingly varied selective sieves.
Systems that survive these sieves will exhibit increasingly sophisticated, generalizable solutions to prevent loss of needed dependent reactions/resources.
These solutions must exert influence to be effective, influencing the environment for the system's own benefit.
As systems influence their environment, the delineation of "self" and "environment" becomes a fundamental boundary.
The system would prefer itself, or be outcompeted by a similar system that does.
This layer of semi-lifelike material would form between sunlight and the oscillating reaction, and eventually envelop it, minimizing surface tension by means of a spherical, cell-like structure.
Small stuff runs off of loss minimization at the level of forces for its mechanistic effect; from covalent bonds to cellular ion transport, the path of least resistance is the fundamental forces at work.
As systems become more complex, the minimizing is less directly attributable to the fundamental forces and becomes more of a Byzantine dependency/feedback network.
This Byzantine labyrinth of interactions is called biology.
The delineation of self, the ego.
At the highest levels, geopolitics. At the human level, mate suppression.
Lowest level, energy conservation.
I understand the sketch you are making and my claim isn't "you are wrong". My claim is "it isn't sufficient to explain all of the behavior". You are making massive leaps over important details. In order to feel a grasp on the big picture, you are turning a cow into a sphere.
"Awareness" isn't a well defined term and is often just a proxy for consciousness. But in as much as we can define it, it is one or both of experience and knowledge. You may (or may not be) aware of the hum some electronics in your house. At certain points in the day that hum is present in you attention, at other points it is absent from your attention. Sometimes you choose to bring previously unattended objects into your awareness, sometimes they are thrust there despite your will.
What is actually interesting about awareness, and one of the reasons it is a tricky subject, is that it isn't clearly related to agency. There are objects of your awareness that you do not act on, and you act with respect to objects that are provably not in your awareness.
There is also the question of the field within which these oscillations take place. Is it the electromagnetic field? A quantum field? Which field are we talking about? If you are proposing some "principle of least action" in that field, can you describe it?
You seem to claim "loss minimization" and then hand wave the rest. But without descriptions of knowledge and experience it feels like you aren't actually saying anything except stating an opinion that reduces all knowledge and experience to loss minimization. That is an extraordinary claim and requires either extraordinary evidence or extraordinary reasoning.
I've thought this myself. We sure do consume a lot of yeast-based foods. Entire industries created to cultivate the stuff.
I thought it would be an interesting story about how the bakers and beer makers are the actual illuminati, consciously working on behalf of the yeast, which are our actual overlords.
I don't think the entirety of the phenomenon can be explained by fear, and in complex issues such as this a single variable analysis is suspect.
I would encourage you to think of forces other than fear that might be driving observed behavior. It is only in this way that you have any hope of creating an alternative attractive force that satisfies the needs that are currently being served by religion.
One obvious one, is that people crave community and religion provides a ready made and welcoming community.
But this is an exercise that is best performed by you. It is about changing your own attention. Next time you come across some "adult convert", instead of looking for signs that prove your assumptions (that they are looking for comfort), look for some positive sign.
Consider how you are reacting to this very comment chain. Are you thinking "there is no possible way any person could ever be attracted to religion for any other possible reason than fear and need for comfort". When you asked me for suggestions were you actually curious? Bring that same curiosity to your next interaction. Demand of your own attention to see a positive reason.
And consider that if you are not able to notice the positive intentions in the actions of another, that might be a you problem.
People might be afraid for all sorts of reasons and seek religion as a way of allaying their fear, sure. "I want a pre-made community" is just another way of saying "I am afraid of being alone."
There is nothing wrong with wanting a community, but there is something wrong with telling people something is true when it is not just to get one. I get that people benefit from religion in a lot of ways. I just don't think that undermining basic questions of what we do and do not, what we can and can not, know, is worth those benefits.
A person who, in the privacy of their bathroom, looks in the mirror and repeats comforting lies to themselves to make themselves feel better is substantially less offensive to me than someone who publicly professes bullshit to make themselves feel better. If this seems blunt, ask yourself why we live in a world which gives automatic credence to ridiculous beliefs if and only if they happen to align with the ridiculous beliefs of a few sanctioned groups.
> "I want a pre-made community" is just another way of saying "I am afraid of being alone."
Do you think people join other communities out of fear? Someone who decides to play Magic: The Gathering, join a dance class, a book reading group, or an art class?
If people join these kinds of communities sometimes out of fear of being alone, and sometimes for other reasons, why do you not extend the same generous interpretation to people who join religious communities?
Some people see no value in learning art, music, literature or poetry. Some people do. Some people see no value in exploring spiritual topics, some people do.
Consider everything you said and apply it to improv classes. What changes?
> It took humans years to write the tests by hand, and the agents still failed to converge.
I think there is some hazard in assuming that what agents fail at today they will continue to fail on in the future.
What I mean is, if we take the optimistic view that agents will keep improving on their current trajectory for another year or two, then it is worthwhile to consider what tools and infrastructure we will need for them. Companies that start building that now, for the future they assume is coming, are going to be better positioned than people who wake up to a new reality in two years.
I'm just pointing out "we don't need this right now" isn't necessarily an argument against "we don't need this".
There is a saying that isn't perfect but may apply: better to have it and not need it than to need it and not have it.
Here is another way of looking at it. Let's say agents don't meet the hyped up expectations and we build all of this robust tooling for nothing. So we have all of this work towards creating autonomous testing systems but we don't have the autonomous agents. That still seems like a decent outcome.
When we plan around optimistic views of the future, we tend to build generally useful things.
It is interesting that Marc Andreessen was having a bit of an X crash-out over his belief that introspection is bad [1].
I disagree because I tend to seek a middle way. I would agree that too much (excessive) introspection is bad. But I would argue that too little is equally bad.
I think obsessively examining one's own comment history would verge on excessive. I'm wondering how much LLM analysis of my public and private life can remain healthy.
> I think obsessively examining one's own comment history would verge on excessive
If you write a journal, do you not sometimes review the things you've written?
This is no different.
If you read the whole journal every day before writing a new entry, then perhaps it is a bit excessive. But a review once or twice a year is not a bad thing.
I very much take your point, and literally the paragraph before I said "... I tend to seek a middle way". So if you think reviewing once or twice a year is right for you, then go for it.
But to answer your specific question: I do keep journals and I almost never review them. I have a box with 20+ years of handwritten journals in it. Once every couple of years I'll open the box, flip through a few of the books, and read a couple of pages. But that is generally all I do and I have no desire to do any more than that.
I also have maybe 100 journal entries in Google Docs and to the best of my recollection I have never read a single one of them after they were written.
Lately I've been using LLMs as a kind of active journal, where I engage in a dialogue as a means of externalizing my thoughts and refining ideas through critical feedback. I have never gone back and re-read any of those conversations.
I'm not making a value claim here, just answering you honestly. It isn't that I think one should or shouldn't review their journals, it is that it doesn't give me pleasure to do so and I find no personal value in it.
Part of me thinks this is a bad sign for Apple. They have always been a premium brand. I'm not a business major, but it just feels like a bad thing when premium enters low-end markets.
But on the other hand, this is kind of the culmination of them owning their hardware stack. They can avoid the commoditization race to the bottom since they are the exclusive owners of a significant amount of their hardware vertical, from chips to enclosure. Perhaps that will let them retain the margins that were previously driven by a consumer base that prized prestige over price.
While my intuition is that this may be the last big cash grab that Apple squeezes out of their premium image, they did have a massive hit back in the day with the original iMac (the CRT based one). They've defined "cheap and premium" categories before.
Tim Cook, as CEO of a public company, is incentivized to deliver shareholder value.
Entering this market with a good product does just that.
Beyond that, this is an entry point for people to use Apple products. It can be bridge to get this consumer to buy more premium hardware and software later on.
> I'm not a business major, but it just feels like a bad thing when premium enters low-end markets.
Microsoft's malevolent stewardship of Windows has handed the market to them on the proverbial silver platter. It's only reasonable for Cook to take advantage of their generosity.
I have always wanted Deno to succeed. But it just seems to be too full of contradictions.
Their initial baffling stance on package.json was the first bad sign. I almost can't imagine the hubris of expecting devs to abandon such a large ecosystem of packages by not striving for 100% support out of the gate. Of course they had to relent, but honestly the damage was done. They chose ideology over practicality, and that doesn't sit well with devs.
I think they saw Rust and thought that devs were willing to abandon C++ for a language that was more modern and secure. By touting these same benefits perhaps they were hoping for similar sentiment from the JavaScript community.
Deno has some really good ideas (e.g. the KV interface). I agree with a lot (but not all) of Dahl's vision. But the whole thing is just a bit too quirky for me to invest anything critical into an ecosystem that is one funding round away from disappearing completely.
I recall an earlier exchange, posted to HN, between Wolfram and Knuth on the GPT-4 model [1].
Knuth was dismissive in that exchange, concluding "I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same."
I've noticed with the latest models, especially Opus 4.6, some of the resistance to these LLMs is relenting. Kudos for people being willing to change their opinion and update when new evidence comes to light.
I think that's what makes the Bayesian faction of statistics so appealing. Updating their prior beliefs based on new evidence is at the core of the scientific method. Take that, frequentists.
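For anyone who hasn't seen the mechanics, here is a toy illustration of the prior-to-posterior update being described. The coin-flip hypotheses and numbers are mine, invented for illustration:

```python
# Toy Bayesian update: posterior ∝ likelihood × prior.
# We maintain belief over three candidate values of a coin's
# heads-probability and update after observing some flips.

def bayes_update(prior, likelihoods):
    """Return the normalized posterior for a discrete prior."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Three hypotheses for P(heads), with a uniform prior.
prior = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}

# Observe 8 heads in 10 flips; binomial likelihood up to a constant
# (the constant cancels in normalization, so it is omitted).
heads, flips = 8, 10
lik = {h: h**heads * (1 - h) ** (flips - heads) for h in prior}

posterior = bayes_update(prior, lik)
# A heads-heavy sample shifts belief toward the 0.7 hypothesis.
```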
It does not seem fair to say that frequentists do not update their beliefs based on new evidence. That does not accurately capture the difference between Bayesians and frequentists (or anyone else).