1)
The probability of that is so minuscule as to be nonexistent. I'd suspect foul play at that point, as in someone just transporting the animal there. You can't really get identical DNA even here on Earth; it's to be expected that it would be even harder if there are no common ancestors.
2)
Yes. In order to make that you need to basically fully understand the organism in question. All of its properties are the same and a product of the same evolutionary history, by virtue of you just looking at the existing creature and making a 1:1 copy.
Funnily enough, I have the opposite view: given the number of stars and Earth-like planets in the universe, probabilistically speaking there is bound to be a case where two instances of life have the same DNA without ever being in contact.
Disclaimer: I'm bad at maths and good at dreaming.
The simplest genomes are viral genomes, about 1000 bases long. These can only operate in environments created by more complex genomes. But even taking them, the theoretical diversity in a genome 1000 bases long, using the 4 canonical nucleotides, is 4^1000. The number of atoms in the entire universe is around 10^80 (https://educationblog.oup.com/secondary/maths/numbers-of-ato... ).
There's a lot of complexity here: other life may use other nucleotides; those atoms and bases are constantly being rearranged; many base changes are relatively silent - they don't materially matter; etcetera.
I don't know how to calculate all of the rest of these factors, but I think the odds are more than astronomical.
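For anyone who wants to sanity-check the exponents, here is a minimal back-of-envelope sketch in Python. Only the 4^1000 and 10^80 figures above go in; the log rules do the rest:

    import math

    # Possible genomes of length 1000 over the 4 canonical nucleotides,
    # expressed as a power of ten: 4^1000 = 10^(1000 * log10(4))
    genome_exp = 1000 * math.log10(4)
    print(f"4^1000 ~= 10^{genome_exp:.0f}")                        # ~10^602

    # Atoms in the observable universe, per the figure cited above
    atoms_exp = 80
    print(f"genomes per atom ~= 10^{genome_exp - atoms_exp:.0f}")  # ~10^522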
>> The number of atoms in the entire universe is around 10^80
This is incorrect. It's an unsolved question in modern physics whether the universe is finite or infinite. Even if it's finite, most theoretical physicists believe the visible universe (94 billion light years across) is only a small piece of a far larger whole, so there are probably far more atoms.
If the universe is infinite, it may be that there are an infinite number of copies of every living organism on Earth out there.
At that point aren't we getting into the idea of a multiverse, and not a universe anymore? We have no non-hypothetical knowledge of what exists beyond the limits of the universe as it developed from what we can still see in the radiation from shortly after the Big Bang. However, from that radiation, can't we get a ballpark figure for the atomic limits of the Big Bang universe?
No, the different multiverse hypotheses are something different. The size of "our" universe is unknown; we can see a sphere with a diameter of 94 billion light years. How much there is beyond that sphere is a research topic. No light from beyond the sphere has reached us during the lifetime of the universe.
The number only seems small because we use notation to make it seem small.
Most combinations of nucleotides aren't functional, or even capable of being synthesized by natural processes. This limits what can be created quite a bit. The true odds are still extremely high, but not nearly as high as these calculations guess.
> You know your logarithm calculation rules, right? Right?!
Eh, at one point I did. I'll take your word for it. It seems familiar.
Now factor in the number of rearrangements since the universe started. Or heck, let's keep it simple to the last 4.5 billion years.
I count about 30 atoms in a nucleotide, so X is a bit more than 10^523.
Let's say 2 rearrangements per day (this will vary dramatically, but I have to pull some number out of a hat). Over 4.5 billion years that's on the order of a bit more than 10^12, so X is around 10^511.
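If I'm reading this right, X is the odds against a repeat: divide the 10^80 atoms by ~30 atoms per nucleotide to get the universe's worth of nucleotide material, compare that against the 4^1000 possible genomes, then knock off the rearrangements. A quick sketch of that arithmetic (the 30 atoms and 2 rearrangements/day are the hat-pulled numbers from above):

    import math

    genome_exp = 1000 * math.log10(4)          # 4^1000 possible 1000-base genomes, ~10^602

    atoms_in_universe = 1e80
    atoms_per_nucleotide = 30                  # rough count from the comment above
    nucleotide_exp = math.log10(atoms_in_universe / atoms_per_nucleotide)   # ~10^78.5

    x_exp = genome_exp - nucleotide_exp
    print(f"X ~= 10^{x_exp:.1f}")              # a bit more than 10^523

    # Fold in rearrangements: 2 per day over 4.5 billion years (~3e12, a bit over 10^12)
    rearrangements = 2 * 365 * 4.5e9
    x_exp -= math.log10(rearrangements)
    print(f"X ~= 10^{x_exp:.1f}")              # around 10^511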
More seriously, people need to stop with the Apple comparisons. Unified memory has been a thing for way longer. Heck, around 2014 AMD had integrated GPUs with not just unified memory but fully unified address spaces with the host. Unified memory in itself happened way before that.
Not to mention that mobile has always been a unified architecture. It's just a design decision.
Ian Buck's 2004 prediction is still 10 years before 2014. I did not say Apple invented unified memory; it just got popular with Apple Silicon, and looking at local LLM inference on M1/M2 and the 192 GB of memory the M2 Ultra allows, it will surely get more important.
The browser doesn't really have limitations in that regard, so it offers nothing unique in that sense that's not already in normal Windows/whatever desktop platform demos.
If some frontend developer made a demo without using WebGL or WebGPU, that would actually get interesting.
WebGL is a very high-level API because OpenGL itself is a very high-level API in this context. People have been coding directly on top of OpenGL since it was first created, including 4k demos. A demoscene demo will itself implement whatever custom "graphics engine" is needed (for a 4k, essentially none). Demos don't commonly use third-party dependencies because the entire idea is to show off your (group's) programming and art savvy.
It's the same mechanism that triggers Game Mode in Windows. You can tag a program in the Xbox Game Bar as a game if it hasn't been recognized as one by default.
Not just x86, but the older x86. Before the Pentium Pro there was not even register renaming, so you were both register-starved (8 general-purpose registers) and the registers you saw were actually the registers you got. Even after that it was still somewhat starved, but renaming alleviated the problem greatly.
Nowadays x86-64 has a decent number of architectural registers (16 GPRs), and the hardware internally has several times more physical registers behind the renamer, so saving one is almost meaningless.
One reason is that they are already here. It's just called the GPU, which happens to be way more parallel than a puny 128 cores. That's a major reason why a bunch of low-power CPUs isn't really a thing: they get trashed by GPUs.
What we are seeing is a hierarchy of processing cores. It's already visible even in the desktop market. Intel has their efficiency cores. Apple has performance and efficiency cores. And the GPU on top. So we already have 3 different stages here: at the top the fewest cores but the best single-threaded performance, going all the way down to (let me check how many threads) almost 10000 threads running simultaneously on a flagship consumer GPU right now.
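For the "almost 10000 threads" figure, the napkin math is just compute units times SIMD lanes. A rough sketch, assuming an Ampere-class flagship such as the RTX 3080 (68 SMs with 128 FP32 lanes each; pick a different card and the number shifts, but it stays in that ballpark):

    # Lanes executing in lockstep on a flagship consumer GPU.
    # Assumption: Ampere-class card, e.g. RTX 3080 with 68 SMs and 128 FP32 lanes per SM.
    sms = 68
    lanes_per_sm = 128
    print(sms * lanes_per_sm)   # 8704 -- roughly the "almost 10000 threads" in question

The count of threads merely resident on the chip (parked waiting on memory across warps) is higher still; this only counts lanes that can issue work in a given cycle.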
Except GPU cores are nothing like CPU cores, GPGPU was already around 10 years ago, and whatever Scala language/framework features he was shilling are probably useless for that kind of programming - so that's a failed prediction.
On that I fully agree. And the reason the Scala approach doesn't work is that it's horrendously inefficient. What actually does work at massive scale is the GPU-style workload.
Intel tried with Larrabee to see what happens if you just toss in a ton of traditional low-performance CPU cores. It failed to perform. It's really hard to beat modern SIMT-style GPUs when it comes to massively parallel computation.
AMD has a really weird internal view of how the programming world works. Some years ago they had some ARM server chips. Did they allow people access? No. They only allowed it for "serious customers", and the whole thing failed as no software was available for it.
The same applies to their compute stuff. They really, really want you to purchase their Pro cards and whatnot, and if you have a consumer card, well, too bad - go back to playing games. Whereas with my Nvidia card CUDA runs flawlessly, which leads me to doing test projects at home and fixing a bug here and there.
AMD doesn't understand how much development happens on home computers or in university labs with small budgets, which means consumer cards. They seem to think that everything happens with big teams and big money, even though in reality it's the small developers doing stuff that then drives the purchase of the very lucrative large clusters.
For all of its downsides, Nvidia actually understands the grassroots approach, which is why their stuff just works on consumer hardware too.
My impression is that they largely seem to focus on Big Money HPC contracts and other very large deals where bespoke engineering on the software stack is worth their time and can be paid off. So if you're paying them a shitload of money in a massive supercomputing-style deal, they'll get you software for their otherwise-unobtainium accelerators. But normal "day to day" stuff is literally not on their radar. At best it's an afterthought.
I think the timing of when they entered the field is also relevant. When Nvidia first came on the scene swinging, they really needed to see what would stick, so you have to try fishing in every available pond. You don't know what will necessarily pan out or catch on. So you need the tech to be available at every level: consumer, enterprise, HPC, everything. And it turns out they found that there are customers at every single one of those levels, and providing for all of them gives both a good on-ramp and a path to bigger things.
In contrast, AMD is so late to the game, there's so much money sloshing around in the field right now, and the trajectories of what big buyers want are so much clearer -- that they can afford to mostly ignore the lower end. They know where the money is. The field and buyers and use cases are just much better understood from the business "make money in the easiest way" POV.
As for Intel, they seem to actually be dedicating tons of money to the software stack for oneAPI, to work everywhere, which is not surprising IMO. They have the money to dump into this (the most money intensive) task and know it's critical for their products to reach anything other than pork-belly HPC contracts.
It's still peanuts compared to what semiconductor research and engineering costs. They can afford it to make the most of their market share. AMD has to be rational and prioritize in order to repay investors for their trust. Time will tell whether that strategy works out.
I feel there's a large disconnect at AMD between the CPU division and RTG (Radeon Technologies Group), which feels more insular than the rest of the company. I feel Lisa Su had her hand deeper in propping up the ailing CPU division and kind of forgot about RTG, which has been losing market share to Nvidia for the last 10 years, not just in gaming but in ML. Maybe she should take a break from Ryzen and look into leading RTG.
Also, AMD seems to underestimate how many people may just be curious to use compute functionality with their consumer card. I was ultra excited to get an RDNA2 card to build a dedicated Linux system with a friendly GPU. Gaming works, but DaVinci Resolve doesn't, and I can't use AMF for screen recording or streaming. I have a 3060 Ti back in a box and parts for a "work computer", and I thought I would work a bit with my system with the 6700 XT, but I can really only game, screen record, and edit video with Kdenlive using my Intel CPU.
I watched a video over the weekend about how a new AMD APU using RDNA3 will make entry-level/cheap graphics cards useless, but that's not really going to be the case if you want to dip your toes into utilizing an entry-level card for workloads. Gamers turn their noses up at dinky GTX 1650s, but at least you can use one to edit video. No wonder Intel doesn't see AMD as real competition in the space.