If we can't get zero-days discovered and fixed in time to protect our privacy, I certainly hope that one day AI will discover most of them and suggest fixes.
I hope it will be capable enough to be plugged into CI/CD and discover that today's top talent has made another obvious XSS, SQLi, or other trivial mistake that just created a 0-day. Ideally a few of those security models, so they verify each other. I also hope it will be trained on all prior incidents, like the xz or Axios ones, and stay vigilant against that class of attack.
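To illustrate the kind of trivial SQLi such a CI/CD model should flag, here is a minimal Python sketch (the table, column, and function names are made up for illustration):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is interpolated into the SQL string.
    # A payload like "x' OR '1'='1" makes the WHERE clause match every row.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query; the driver treats the value as data, not SQL.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

# Demo on an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- no user has that literal name
```

The unsafe variant is exactly the pattern a scanner should catch in review: string formatting feeding a query API that already supports placeholders.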
> systemd has had optional fields for those for years and nobody complained.
GECOS had them back in the 1960s, and UNIX in the '70s, and nobody threatened to kill their creators.
Having a field in a database is not the same as mandatory data collection. Let me remind you what /etc/passwd allows you to store even on an OS without systemd:
- User's full name (or application name, if the account is for a program)
- Building and room number or contact person
- Office telephone number
- Home telephone number
- Any other contact information (pager number, fax, external e-mail address, etc.)
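All of that lives in the comma-separated GECOS field of a passwd record. A minimal Python sketch of how such a record parses (the example line is fabricated for illustration):

```python
# passwd format: name:passwd:uid:gid:gecos:home:shell
# GECOS subfields, by convention: full name, room, work phone, home phone, other
SUBFIELDS = ["full_name", "room", "work_phone", "home_phone", "other"]

def parse_gecos(passwd_line):
    gecos = passwd_line.split(":")[4]
    parts = gecos.split(",")
    # Every subfield is optional; pad missing trailing ones with empty strings.
    parts += [""] * (len(SUBFIELDS) - len(parts))
    return dict(zip(SUBFIELDS, parts[:len(SUBFIELDS)]))

line = "jdoe:x:1000:1000:John Doe,Bldg 4 Rm 12,555-0100,555-0199,fax 555-0123:/home/jdoe:/bin/sh"
print(parse_gecos(line)["full_name"])  # John Doe
print(parse_gecos("svc:x:999:999::/var/empty:/sbin/nologin"))  # all subfields empty
```

Note that every subfield can simply be left empty, which is the point: having the fields is not the same as filling them.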
100 computers in 100 homes are far harder to find and destroy than an AWS region taken out with a single bomb. They are also much easier to rebuild and get running again than a $1-5 bn data center.
Reading that Iran attacked AWS and intends to attack other major cloud providers, massively distributed compute looks like the only approach resilient enough for our current civilization to survive attacks on its infrastructure.
We would certainly be less performant and less capable, but data would continue to flow and business processes would proceed. If the cloud data centers are destroyed, everything important to sustaining us stops, and we die.
I like it, and I hope it will soon be available in the various Linux distributions, alongside other modern tools such as fd (instead of find), ripgrep (instead of grep), and fzf.
So no, thank you, USA; I'm not going to visit. I have all I need here in our European "socialism," as many Americans like to call it. I'm not presumed to be a criminal, governments aren't building databases of my every step and activity, and I have great healthcare.
> I think OpenAI is going to be bigger than Microsoft in market cap within the next 3 years.
I have yet to see how a one-legged business model, with just a single product (that isn't crude oil) and neither a plan nor money, is going to become sustainable. Oh yeah, maybe they'll finally make money on those autonomous lethal weapons; that sounds easiest.
Sure. I'll give you a basic plan without any insider knowledge on OpenAI.
First, OpenAI and Anthropic are the leaders in model capabilities. Google is a close 3rd but 3rd nonetheless.
Second, ChatGPT likely has about 1 billion active users right now. I think ads on ChatGPT will eventually surpass even Google search ads. There will be a class of users who will never pay for a ChatGPT subscription, and that's OK: Meta and Google are two of the most profitable companies in history, and both rely almost solely on free users for their cash cows. "Ask ChatGPT" is already "google it" for the masses.
Third, there is so much untapped revenue potential in the science and medicine fields that OpenAI, along with Anthropic, can eventually own. Microsoft stands no chance here since it can't build competing models.
Fourth, I can easily see ChatGPT morphing into agents for consumers, and people will pay for them. AI is moving up the value chain fast. I see no reason why consumers will pay for Netflix but not for ChatGPT.
Just some basic ideas based on public knowledge. I'm sure there are plenty more.
I'm not going to bet my house that OpenAI will become bigger than Microsoft in 3 years, but I'll put down a few hundred dollars on this bet.
I don't discount this as a possibility but my impression is that the OpenAI brand isn't very sticky.
Internet Explorer being pre-installed on Windows devices didn't prevent it from being demolished by newcomer Chrome throughout the 2010s. Now we're looking at a product that's even less integrated, and whose value is exposed through universal interfaces (human language, images, etc.).
If OpenAI succeeds, I imagine that remarkably little of it will have come from the brand. Subtracting the first-mover brand advantage, they can either compete on the frontier, which seems difficult and bears potentially diminishing returns (particularly with respect to distillation), or compete as a commodity, which I imagine cannot justify their valuation and spend.
For people who use ChatGPT the way you do, yeah, it isn't. For people in the throes of AI psychosis, who've named their ChatGPT and have a deep relationship with it, even switching to a newer model from OpenAI is an issue, never mind switching to a different model from a different company.
I considered that, but I don't see it being very impactful. It presumes a user who cares enough about "their" ChatGPT that they can't move from a particular model provider, yet somehow doesn't mind that providers themselves have a financial incentive to shoo users onto newer, more efficient models.
The transition from GPT-4 to GPT-5 was not well received by this crowd, and I think this crowd is comparatively small to begin with. I just don't imagine you can build a business on that sliver of a sliver, much less one that justifies OpenAI's spending.
Excellent reading for realizing how the rich, greedy investment monkeys with no plan beyond "let's build a data center" will ultimately drag the market and the economy down. This time it may not pop as abruptly as the dotcom era did; instead it will slowly sink as the US data center boom proves unprofitable. Billions burned for nothing more than a race for money.
All of this should be tuned with an AMD CPU expert involved, with the programmer adjusting the code under their guidance to leverage every CPU feature.
Did AMD engineers or seasoned hardware experts from the server vendor assist with this implementation?
Were the "Nodes Per Socket", "CCX as NUMA", and "Last Level Cache as NUMA" settings tested and optimized? I don't see them mentioned in the article. They can make a LOT of difference for different workloads, and there is no single setting or recommendation that fits all scenarios.
"The locality of cores, memory, and IO hub/devices in a NUMA-based system is an important factor when tuning for performance" - "AMD EPYC 9005 Processor Architecture Overview", page 7
What was the RAM configuration? 12 DIMM modules (optimal) or 24 (suboptimal)?
Was virtualization involved? If so, how was it configured? How does bare-metal performance compare to the virtualized system for this specific code?
So many opportunities to explore that aren't mentioned in the text.
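As a rough first check of what NPS or CCX-as-NUMA layout the firmware actually exposed, one can count the NUMA nodes the kernel sees in sysfs. A sketch, assuming a Linux host (it returns 0 elsewhere or in containers that mask /sys):

```python
import glob
import re

def numa_node_count():
    # Each /sys/devices/system/node/nodeN directory is one NUMA node visible
    # to the kernel; NPS1 vs NPS4, or enabling "CCX as NUMA" / "L3 as NUMA"
    # in firmware, changes this count on EPYC systems.
    nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
    return len([n for n in nodes if re.search(r"node\d+$", n)])

print(numa_node_count())
```

The same information is available from `numactl --hardware`; the point is that the node count is a firmware decision, so a benchmark write-up should state it.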