Hacker News | new | past | comments | ask | show | jobs | submit | sim04ful's comments

GraphQL introspection queries would be a really neat application for LLM calls.


MCP server to give AI agents design taste - https://fontofweb.com/mcp

Agents can search for design inspiration from production websites using semantic search. Since the inspiration comes from live websites, their design tokens (colors, typography usage, layout data) are also available.


This seems like it would have adversarial consequences. Wouldn't this list of tropes get longer over the years?


I've noticed a key quality signal with LLM coding is an LOC growth rate that tapers off or even turns negative.


Nothing on symbolic reasoning?


I believe that would be part of what's now "classical AI".


It's called GOFAI, or not AI at all. It's basically all machine learning nowadays.


good old-fashioned artificial intelligence

https://en.wikipedia.org/wiki/GOFAI


that would be the exact opposite of modern


No. That will be covered by the Post-modern AI course in the fall semester.


That's not AI.


Why not? It was called AI at the time.


Looks pretty interesting. How could I use this with other MCP clients, e.g. OpenCode?


Hey! Thank you for your comment! You should be able to use it as an MCP server on that basis, but I haven't tested it yet. I'll look into it as soon as possible. Your feedback is valuable.


nice, I'd love to see it for Codex and opencode


Thanks! Context Mode is a standard MCP server, so it works with any client that supports MCP — including Codex and opencode.

Codex CLI:

  codex mcp add context-mode -- npx -y context-mode

Or in ~/.codex/config.toml:

  [mcp_servers.context-mode]
  command = "npx"
  args = ["-y", "context-mode"]

opencode:

In opencode.json:

  {
    "mcp": {
      "context-mode": {
        "type": "local",
        "command": ["npx", "-y", "context-mode"],
        "enabled": true
      }
    }
  }

We haven't tested it yet — would love to hear if anyone tries it!


I daresay we're going to see a growing number of situations where the software (code) is open-sourced under a permissive or copyleft license, while the associated data, content, or assets (e.g., datasets, models, or databases) are handled under separate, often more restrictive licenses.


I'm also working on a Chinese learning app (heyzima.com) and my "solution" to this was to use the TTS token/word log probabilities.


The prevalence of this "personal vibecoded app" spirit makes me start to wonder if an "App" is the right level of abstraction for packaging capabilities. Perhaps we need something more "granular".


Personally I hope we land on "widget" although I'd settle for "thingamabob"


I'm very green to this, so forgive me if this question sounds silly:

Instead of the RL step, would constrained decoding (say, via something like xgrammar) fix the syntax generation issue?


> Instead of the RL step, would constrained decoding (say, via something like xgrammar) fix the syntax generation issue?

It can, but you have to consider two things here:

a) constrained decoding ensures adherence to syntax, not semantics. Say you're adding a variant to an enum in Rust. You can write syntactically valid Rust that doesn't handle the new variant further down in the code (say, in a `match`). The code parses fine, but the compiler will scream at you. RL works on both.

b) if your goal is to further train the model so it works on many tasks, RL helps with exploring new paths and training the model further. Constrained grammars help at inference time, but the model doesn't "learn" anything. With RL you can also have many reward functions at the same time: say, one that rewards good syntax, one that rewards "closing" all the functions so tree-sitter doesn't complain, and one that rewards zero errors from the compiler. The model gets to train on all three at the same time.


^ these were pretty much the main reasons.

The other one is that constrained decoding only works on context-free grammars (and simpler ones, like JSON schemas), since only these can be compiled into the automata used to constrain decoding. Languages like Python and C++ aren't context-free, so it doesn't work for them.

Also constrained decoding generally worsens model quality since the model would be generating off-policy. So RL helps push corrected syntax back on-policy.

