Hacker News | new | past | comments | ask | show | jobs | submit | maxov's comments

Yes, I also found the description a little weird because of the emphasis on linear-time parsing. It is cool theoretically, and it could be understandable from a perspective of "make the compiler fast", but parsing is never the bottleneck in modern compilers. For a systems programming language this seems to be the wrong emphasis.


Yes, I would go the opposite direction in terms of simplicity.

Parse once for syntax and symbol definition, 2nd pass over parsed structure to link symbol references to their definitions. Two uncomplicated passes.

That handles a general code graph - so the language can go anywhere, and never be fundamentally held up by limitations of early syntax/parsing decisions.
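The two passes described above can be sketched in a few lines. This is a toy illustration with hypothetical AST node shapes (`Def`, `Ref`), not any real compiler's API:

```python
from dataclasses import dataclass

@dataclass
class Def:
    name: str

@dataclass
class Ref:
    name: str
    target: Def = None  # filled in by the second pass

def pass1_collect(nodes):
    """First pass: parse output in hand, record every symbol definition."""
    return {n.name: n for n in nodes if isinstance(n, Def)}

def pass2_link(nodes, symbols):
    """Second pass: point each reference at its definition."""
    for n in nodes:
        if isinstance(n, Ref):
            n.target = symbols[n.name]  # or report "undefined symbol"

ast = [Def("main"), Ref("helper"), Def("helper")]
symbols = pass1_collect(ast)
pass2_link(ast, symbols)
# Ref("helper") now points at Def("helper"), even though the
# definition appears later in the file -- no forward-declaration problem.
```

Because definitions are collected before any references are resolved, declaration order never matters, which is the "code can go anywhere" property.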


Looking at the actual article (https://www.nature.com/articles/s41467-024-54178-1), their procedure does actually use deep learning in the process of synthesizing candidate chip designs, and the use of deep learning is key to their work. In particular, it looks like the final process is a genetic algorithm that interacts with a deep learning model that predicts the performance of candidate chips. It seems like trying to simulate the chip analytically to predict performance was far too inefficient, and replacing that part with a deep learning model made this entire procedure possible. So in summary, nothing in this article is called AI that has not been called AI before. Most importantly it produces novel designs "globally" without a human in the loop whereas one was required before. I think calling that AI-designed is pretty reasonable.

On a very high level, the role of deep learning here seems similar to AlphaGo (which is also the combination of a less novel generic optimization algorithm, Monte Carlo tree search, with deep learning-provided predictions). I don't think anyone would debate that AlphaGo is fundamentally an AI system. Maybe if we are to be really precise, both of these systems are optimization guided by heuristics provided by deep learning.
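The general pattern described above, a generic optimizer whose fitness function is a learned predictor instead of an expensive simulation, can be sketched as follows. Everything here is a stand-in: `surrogate_score` is a toy objective where a trained deep-learning model would sit, and the designs are bit-strings rather than chip layouts:

```python
import random
random.seed(1)

def surrogate_score(design):
    """Stand-in for a trained model that predicts a design's performance
    cheaply, replacing a slow analytic simulation."""
    return sum(design)  # toy objective: maximize the number of 1-bits

def mutate(design, rate=0.05):
    return [b ^ (random.random() < rate) for b in design]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def genetic_search(n_bits=32, pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection is driven entirely by the surrogate's predictions.
        pop.sort(key=surrogate_score, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=surrogate_score)

best = genetic_search()
```

The point of the sketch: the genetic algorithm itself is generic; what made the chip work (and AlphaGo work) is that the expensive evaluation step was replaced by fast learned predictions.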



Also, see the Feynman algorithm:

https://proftomcrick.com/2011/04/26/feynman-problem-solving-...

> 1. Write down the problem.

> 2. Think very hard.

> 3. Write down the answer.

The first step is crucial.


What a strange recommendation. I do research in CS theory and machine learning, and I try to find arXiv preprints when I can; they are usually more complete than the conference versions of papers. If you stick to papers from authors you know or from well-known conferences, arXiv is often the best source.


In case you do not know, I find Google Scholar to be pretty useful for finding dates and other information about the paper - e.g. in this case https://scholar.google.com/scholar?hl=en&as_sdt=0%2C14&q=New...

I work with papers in CS/math and have been able to find dates and metadata like DOIs pretty quickly. There are complications when there really are multiple versions of a paper, like one in a conference that is only an extended abstract and one in a journal, but you would need to figure out which version you want to read in that situation anyway. I agree it's annoying, but coupled with reference-manager software like Zotero it's hardly an issue for me.


Google Scholar is a great tool but I fear its days are numbered. It's already frustrating that they removed the feature that hot-linked to an article's Scholar page when you entered the title in Google search.


If Google Scholar is sunset, that would be a great loss for the academic community. When was that feature removed? I have felt reasonably confident in Google Scholar's longevity so far because it is already 18 or so years old.


SemanticDB (https://scalameta.org/docs/semanticdb/guide.html) is a protobuf-based file format that does almost exactly this for JVM languages, primarily Scala (I was a contributor a while back). It is used to build an intelligent online code browser, as the backend for a language server, and to do intelligent refactorings.

I think a language-agnostic semantic metadata format is a good idea, but requires a lot of compromise. ctags partially does this, but only to a very coarse level (mostly definitions and references). I think some ctags implementations also define 'extension fields' that could be used to give type information, but I don't know how/if these are used in practice. SemanticDB is extremely fine-grained, but highly specialized to JVM languages and type systems that are designed to work with the JVM. Finding a common set of semantic features that can be used across languages and type systems that is fine-grained enough to be more useful than ctags sounds very difficult to me.


I think simple things like "go to reference" or "show type" would be sufficient for 95% of usecases. But if you split languages up into a few different categories (maybe along the lines of Algol-like vs Lisp-like), and were flexible with extensions, I'd imagine we'd see some common patterns emerging, and clients would take advantage of that. Best effort is probably good enough to greatly improve the ergonomics of search.


I love this! But after trying it a few times I got this result :). So fascinating.

https://thismoviedoesnotexist.org/movie/the-terminator

Brings up the age-old question of how much the learning in these models is just memorization. Though in cases like these it’s hard to tell.


Yep, that's because GPT-3 was trained on real existing data, and it's quite a challenge to make sure the story plot is 100% fake. When it's too close to an existing film, it sometimes just gives it the same film title. I have in-between GPT-3 prompts to avoid that as much as possible, but sometimes real movie titles slip through the cracks. Something I hope to improve shortly.


What a great project, you absolute legend!


Given Hollywood's proclivity to remake everything on a 20-year cycle, it seems completely appropriate to get a 2023 Terminator reboot in a 1920's style.


It also seems completely appropriate for The Terminator to be written, directed, and acted by an AI.


Here's "The Terminator by F.W. Murnau" from Stable Diffusion:

https://ibb.co/RN5bxJb


There are only so many stories to tell. The Terminator is a rehash of so many other previous stories. The real art is in putting it together so that it seems new and fresh and gets people excited about it. The Terminator 1920s style looks interesting.


If a story is a rehash of "many" stories, then it's actually a new story. Similar to how an "Airbnb but for dog walkers" isn't actually a ripoff of Airbnb, but is in fact an original idea.


> The Terminator is a rehash of so many other previous stories.

I know what you mean, but I also laughed at that.

"I bet one legend that keeps recurring throughout history, in every culture, is the story of Popeye." - Jack Handey


Yeah, just got: https://thismoviedoesnotexist.org/movie/the-legend-of-zelda-...

It's crazy that it just made up those names...


Since the generated output is so close to the training data, the model is probably overfitted and trained on too little data...


Not to mention "In the Land of Oz: The Search for the Wizard":

https://thismoviedoesnotexist.org/movie/in-the-land-of-oz-th...

Which reads like a bad translation of a bad translation. Like the old joke about the AI program which was supposed to translate "The spirit is willing but the flesh is weak" from English to Russian to English, and after the roundtrip came up with "The vodka is good but the meat is rotten."


Not sure how I feel about an AI generating a movie concept that involves a "rise of the machines".


I feel like "generate" is kind of a strong word in this case though. At this rate if the machines rise up, they will do so just to parrot all the "machines rise up" plot synopses in their training corpus.



Come on, this is genius. The plot reads: in 2154 a soldier from the future is sent back in time to the present.

Why send a soldier from the present to the past when you can have a soldier from the future sent to the past!



The same could technically be said about Disney which is just remaking their entire classical collection but with CGI.


~20bn sounds about right. 9! * 4^9 / 4 ~ 23.8bn. Divide by 4 due to rotational symmetry.

Absolutely mind-boggling to me that such a small, simple puzzle has such an extreme search space! Great example of P vs NP; time to build a Where's Waldo matching-game cryptosystem.
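For anyone who wants to check the arithmetic, the count works out like this (Python):

```python
from math import factorial

placements = factorial(9) * 4**9   # 9 tile orderings x 4 rotations per tile
distinct = placements // 4         # whole-board rotations collapse 4-to-1
print(distinct)                    # -> 23781703680, i.e. ~23.8bn
```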


I think a naive tree search (start from one tile and try adding others with matching edges) might find a solution pretty fast, because most of the search space gets discarded early.


Good point! I think this heuristic works well if the edges between tiles in the completed picture are uniquely identifiable, or close to it. However, from the images there seem to be only 4 "colors" of edges in the puzzle (8 if you count two orientations for each character), so I think this will prune the search space to "only" perhaps around 100k-500k. Still really good: this makes the search space 50k-250k times smaller.
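A rough sketch of the pruned tree search on a synthetic 3x3 edge-matching puzzle (simplified: tiles are generated from a solved grid with 4 edge colors, and rotations are omitted for brevity, so this understates the real puzzle's branching):

```python
import random
random.seed(0)

N, COLORS = 3, 4
# Build a solved grid of edge colors, then derive the tiles from it.
h = [[random.randrange(COLORS) for _ in range(N)] for _ in range(N + 1)]  # horizontal edges
v = [[random.randrange(COLORS) for _ in range(N + 1)] for _ in range(N)]  # vertical edges
# Tile = (top, right, bottom, left) edge colors.
tiles = [(h[r][c], v[r][c + 1], h[r + 1][c], v[r][c])
         for r in range(N) for c in range(N)]
random.shuffle(tiles)

nodes = 0  # how many tile placements we actually try

def solve(placed, remaining):
    """Fill positions in row-major order, pruning on edge mismatches."""
    global nodes
    if len(placed) == N * N:
        return placed
    r, c = divmod(len(placed), N)
    for i, t in enumerate(remaining):
        nodes += 1
        top, right, bottom, left = t
        if c > 0 and placed[-1][1] != left:            # must match left neighbor's right edge
            continue
        if r > 0 and placed[len(placed) - N][2] != top:  # must match upper neighbor's bottom edge
            continue
        res = solve(placed + [t], remaining[:i] + remaining[i + 1:])
        if res:
            return res
    return None

solution = solve([], tiles)
```

With only 4 edge colors, most branches die after one or two placements, so the number of nodes visited stays tiny compared to the raw 9! orderings, which is exactly the pruning effect described above.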


I haven't been a parent so I can't be positive, but I doubt time off for maternity or paternity acts as a "positive" incentive for having children, meaning people who didn't want children before would now want them. That time off is filled pretty well caring for the baby, which is extremely tiring and time-consuming; it doesn't really function as a break. I also don't know how many people would trade a several month absence from work for the at least 18-year (more like lifetime, really) commitment of raising a child; they have children for other reasons.

That said, I could see how having maternity/paternity leave would help people who want to put in the effort of raising children, but cannot/do not want to quit their job or keep working with a baby for the most demanding first few months. There is likely negative pressure on having children from not being able to take paid leave from work, and in aggregate people would probably have more children as a result if they do get paid leave.

Although this is an entirely different topic, such a policy seems like a good idea to combat the lack of births many developed countries are facing, which in the long term will cause an asymmetric demography. This could be a big problem, as countries with births under replacement and little net inward migration need to support an aging population with fewer active workers (see e.g. Japan). I imagine some would argue that we should move toward smaller and more sustainable societies in order to preserve the planet's resources and reduce our total consumption, in which case they would advocate for alternate solutions to deal with societal aging.


Another way to think about it is counting the probability of getting k boys out of 2 children.

  0 boys - 1/4
  1 boy - 1/2
  2 boys - 1/4
There's a 1/2 chance of getting exactly one boy, and one way to calculate this is by noticing there are two different ways to get one boy if we take order into account. You are right that the orderings don't matter in this case, so we could also e.g. model this with a binomial distribution. Once you know there is >= 1 boy, the chance you have two is 0.25/(0.25+0.5) = 1/3.
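The same conditional probability falls out of brute-force enumeration over the four equally likely two-child families:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("BG", repeat=2))          # BB, BG, GB, GG -- equally likely
at_least_one_boy = [o for o in outcomes if "B" in o]
both_boys = [o for o in at_least_one_boy if o.count("B") == 2]

p = Fraction(len(both_boys), len(at_least_one_boy))
print(p)  # -> 1/3
```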

